Skip to content

The way we do AI is dangerous

#ai #ethics #technology #bigtech

The way we are currently doing AI in software solutions is dangerous. We're using stupid and insecure models, hosted by big tech companies, to provide an interface that is not beneficial to the user. Don't get me wrong, AI is a great technology, but not in the form of chatbots.

Stupid models

LLMs are just giant text prediction machines. They don't understand the text, they just predict the next word.

Me> Can you solve a puzzle?
ChatGPT> Of course! I'd love to hear your riddle. Let's see if I can solve it.
Me> A man has a boat and a goat. How does he get to the other site?
ChatGPT> The answer to this classic riddle is: He takes the boat across the water, while leaving the goat on one side. Then he goes back alone, picks up the goat, and brings it to the other side.

Source

The model knows the structure of a puzzle about a man and a goat, but it does not understand anything about the puzzle itself. It's still just predicting the next word based on training data.

AI also makes up facts instead of saying "I don't know". This is dangerous because people might believe the AI and spread the false information.

Insecure models

LLMs are also insecure. Even though they get prompts with a lot of rules, they can still be tricked into doing things they shouldn't.

This started years ago with the Tay chatbot and continued with ChatGPT, which can be "reprogrammed" using DAN (Do Anything Now).

There are also cases where Microsoft Edge's Copilot can be hijacked by a malicious website.

Hosted by big tech companies

Most models are hosted by big tech companies that have their own interests. They want to make money and keep you on their platform. They don't care about you, society, the environment, or anything else. They just want to make money. This is dangerous because they can manipulate the AI and you become dependent on them.

Not beneficial for the user

An AI chatbot is not beneficial to the user. It's just a waste of time. To get a good answer, you have to ask a detailed question. This takes more time than a simple and intuitive user interface. This is especially true on mobile devices, where typing is slow and error-prone.

A better way

It would be better to use AI in the background to help the user, rather than replacing the user interface. For example, a chat app could use AI to summarize unread messages. This would be best done with an open source on-device model so that privacy and end-to-end encryption can be guaranteed. Another example of helpful AI is a grammar checker that suggests improvements to your text.

Thanks for reading! If you have any questions or comments, comment below. I'd love to hear from you! You may also fix errors or suggest changes in the GitHub repo.

Jak2k