
ChatGPT: The Technology You Should Trust the Least, Says OpenAI CEO Sam Altman

ChatGPT should be the technology you trust the least, says OpenAI CEO Sam Altman. Discover why the AI’s creator urges caution despite its widespread use.

ChatGPT could go off the rails at any time. The OpenAI CEO says there is still a real chance that the artificial intelligence will hallucinate.

Chatbots are becoming increasingly important in people’s everyday lives. Their ability to adapt to different situations and tastes is impressive: you can use them to create digital art, generate videos from text, and tap into many other interesting features offered by ChatGPT, Gemini, and Copilot.

However, although there have been reported cases in which these virtual assistants have saved lives and even made predictions about the future, Sam Altman warns that hallucinations are still possible, even after multiple model updates and the arrival of newer, more advanced models such as GPT-5.

Be careful what you believe when artificial intelligence speaks, because it is still too early to place 100% confidence in this kind of technology. There are limitations, and it will be a long time before the information chatbots give you can be taken at face value, or at least that is what the CEO of OpenAI says in a recent interview.

ChatGPT is still too “young” to be free of hallucinations

Do you ask AI to read your Tarot cards? Do you confess your feelings to it? Do you use it for important work research? Maybe you should start to distrust it a little more. Don’t get the wrong idea: it is an extremely useful tool that is being adopted in many workplaces and in everyday life.

It works well for quite a few things, and in fact experts recommend using it to increase productivity and get things done more efficiently. The problem with these platforms is hallucinations: errors in which the AI gives false information, goes off-topic, or simply makes things up.

On OpenAI’s official podcast, Sam Altman discussed this phenomenon, noting that he finds it curious that more and more people trust the company’s chatbot when, in reality, it is not human.

Many people are using the virtual assistant as a companion, just as was predicted a while ago. People now talk to it about their feelings and emotions and treat it like a friend. That, in itself, is fine, as long as you are comfortable with the security and privacy implications.

The thing is, it should not be given that much authority over these matters, as the company’s CEO confirms that the chance of it getting things wrong is still there, saying that “People have a high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the technology that shouldn’t be relied on as much.”

Remember that these hallucinations happen because chatbots predict the most probable next word based on their training data and neural network. They do not actually understand what they are saying; they produce answers from patterns learned during training.
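To picture why that leads to mistakes, here is a minimal, purely illustrative Python sketch. The probabilities are invented and no real model works from a tiny table like this, but it shows the core idea: the model only samples whichever word looks statistically likely, so a confident-sounding wrong answer is always a possible outcome.

```python
import random

# Toy illustration: a language model picks the next word by probability,
# not by checking whether the resulting statement is true.
# These probabilities are made up for demonstration only.
next_word_probs = {
    "Paris": 0.80,      # likely continuation of "The capital of France is ..."
    "Lyon": 0.15,       # plausible-sounding but wrong
    "Atlantis": 0.05,   # pure hallucination, yet still possible to sample
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling can occasionally return a confident-sounding wrong answer.
print("The capital of France is", random.choices(words, weights=weights)[0])
```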

Therefore, some results may be wrong, and it is essential to confirm the information before using it anywhere official; always go to validated sources. It also helps, when asking the chatbot for something, to include a phrase like “Give official sources for the data you provide” somewhere in the prompt.
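If you reach the chatbot through code rather than the website, that same advice can be baked into the request itself. Below is a minimal sketch assuming the official OpenAI Python SDK and an API key in the environment; the model name, the instruction wording, and the sample question are just placeholders.

```python
# Minimal sketch: asking the chatbot to cite sources via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": "Give official sources for any data you provide, "
                       "and say you don't know instead of guessing.",
        },
        {"role": "user", "content": "How many moons does Mars have?"},
    ],
)

print(response.choices[0].message.content)
# The reply should now point to sources you can check yourself;
# still verify them, since cited links can also be hallucinated.
```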

In the same remarks, Altman comments that this happens because current hardware still imposes many limits on what these systems can do, so it is very likely that hallucinations will be greatly reduced in the future. For now, you have to be very careful with ChatGPT’s answers.


