Microsoft's head of AI has made some rather controversial statements about AI awareness, suggesting we should look the other way. Is it better to ignore what is coming?
Sometimes the answers from ChatGPT and company leave us speechless because they seem human. They appear to convey feelings, but it's an illusion: they reproduce patterns from their training data. Can an artificial intelligence have consciousness and be aware of itself? Many experts are debating the question, but Mustafa Suleyman, Microsoft's head of AI, doesn't like it at all.
“Talking about AI awareness is premature and, frankly, dangerous,” says Mustafa Suleyman in an article he has written on his blog.
His stance goes against the grain, because the major AI companies are already studying whether AI can have consciousness. They have even coined a term for the field: AI welfare.

Mustafa Suleyman on AI awareness: an issue that cannot be ignored
In The Measure of a Man, an episode of the legendary series Star Trek: The Next Generation that aired on February 13, 1989, a scientist wants to disassemble the android Data in order to make copies of him.
Data refuses what he considers a violation of his rights as a sentient being, despite being a humanoid machine with, in effect, something like ChatGPT embedded in his brain. Captain Picard intercedes for his crew member and friend, and a trial is held to decide whether Data has consciousness, and with it, individual rights.
Almost 40 years ago, there was already a debate about AI awareness, even if it was in a television series ahead of its time. At the time, it was thought that such a debate would occur in the very distant future. But we already have it here, in 2025.
Mustafa Suleyman, co-founder of DeepMind (later acquired by Google), creator of Pi, one of the first consumer AI chatbots, and now CEO of Microsoft AI, believes it is dangerous and premature to debate this topic.
For one of the biggest AI experts around right now, debating AI awareness or well-being can lead people to believe that AI actually has it, worsening the AI anxiety many people already feel and fueling toxic relationships with chatbots.
He also believes that this only serves to add even more division and extremism over rights of existence “in a world already convulsed by polarized discussions about identity and rights.”
And he cuts to the chase: “We must build AI for people, not to be a person.”

The debate on the well-being of AI is inevitable
Mustafa Suleyman's reasoning makes sense, but it is a minority view: the major AI companies are already studying the question.
As TechCrunch reminds us, Anthropic has its own research group dedicated exclusively to studying AI well-being.
The first results are already here: a few days ago, Anthropic announced that its AI Claude can end a conversation if the user insists on illegal requests or forbidden or abusive topics. In short, if the exchange becomes harmful, Claude can simply cut it off.
OpenAI has embraced the idea of studying what to do if AI becomes conscious, and Google is hiring researchers to work on it. The issue is inevitable.
Microsoft's head of artificial intelligence asks us not to discuss something that current AI models cannot achieve. But the question is so consequential that it cannot be postponed: we should have an answer before it happens.
What do we do with AI if it becomes self-aware? Should we apply human rights to it? Should an artificial creation have rights?
At the risk of spoiling a series that is almost 40 years old: in the Star Trek episode, Data was declared a sentient being with rights of his own. He was not taken apart to make copies, because he did not consent to it. We'll see whether humanity is brave enough to do the same with AI…