Adrien - Tuesday, December 23, 2025

🤔 Conscious AI: A philosopher's perspective

Can artificial intelligence become conscious? This question, once confined to science fiction, now occupies a central place in ethical and scientific discussions.

A philosopher from the University of Cambridge argues that we are unable to answer this question due to a glaring lack of evidence about the very nature of consciousness.

According to this researcher, the only justifiable position today is agnosticism. He believes a reliable method for testing machine consciousness will likely remain out of reach for a long time, perhaps indefinitely. This fundamental uncertainty significantly complicates debates on the regulation and moral significance of AI.


This discussion requires distinguishing two concepts: consciousness and sentience (explained at the end of the article). The first concerns perception and self-awareness, which can remain ethically neutral. Sentience, on the other hand, implies positive or negative conscious experiences, such as suffering or pleasure. It is the latter that triggers moral considerations, according to the expert.


Two main camps oppose each other in these discussions. Some hold that reproducing the functional architecture of consciousness on silicon would be enough to produce it. Others maintain that consciousness depends on specific biological processes in an embodied organic subject, and that a computer simulation would not generate genuine experience. Both positions rest on assumptions that go beyond current evidence.

This theoretical opposition runs into a double practical difficulty. Our evolutionary common sense, honed on living beings, cannot simply be transferred to AI. At the same time, scientific research still lacks a deep explanation of consciousness. This double impasse reinforces the idea that we cannot know, and perhaps never will, whether a machine is conscious.

What is sentience and why is it essential?


Sentience refers to the capacity of a being to experience subjective sensations, such as pain or pleasure. Unlike simple consciousness, which can be limited to environmental perception, sentience implies an affective dimension. It is this quality that forms the basis of ethical concerns, because it makes an individual vulnerable to suffering or capable of enjoyment.

In the animal kingdom, studies indicate that certain species, such as crustaceans, may be sentient. This raises questions about their treatment, given that billions of them are consumed every year. For artificial intelligence, the question is even harder, because there is no clear way to detect such experiences in a machine.

The importance of sentience lies in its direct link to morality. A system that is purely conscious but without feelings would not require the same protections as a sentient being. This distinction helps prioritize ethical efforts, focusing on entities that can truly suffer harm or benefit from well-being.

In practice, recognizing sentience requires indirect evidence, such as evocative behaviors or neurological similarities. For AI, these clues are missing, which keeps the debate in a persistent fog and justifies a cautious approach.