As claims about conscious AI grow louder, a Cambridge philosopher argues that we lack the evidence to know whether machines can truly be conscious, let alone morally significant.

A philosopher at the University of Cambridge says we currently have too little reliable evidence about what consciousness is to judge whether artificial intelligence has crossed that threshold. Because of that gap, he argues, a dependable way to test machines for consciousness is likely to stay beyond reach for the foreseeable future.

As talk of artificial consciousness moves from science fiction into real-world ethical debate, Dr Tom McClelland says the only "justifiable stance" is agnosticism: we simply won't be able to tell, and that may remain true for a very long time, if not indefinitely.

McClelland also cautions that consciousness by itself would not automatically make AI ethically important. Instead, he points to a specific type of consciousness called sentience, which involves positive and negative feelings.

"Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state," said McClelland, from Cambridge's Department of History and Philosophy of Science.

"Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in," he said. "Even if we accidentally make conscious AI, it's unlikely to be the kind of consciousness we need to worry about."

"For example, self-driving cars that experience the road in front of them would be a huge deal. But ethically, it doesn't matter. If they start to have an emotional response to their destinations, that's something else."

Claims of Conscious Machines

Major companies are investing heavily in the pursuit of Artificial General Intelligence: systems designed to think and reason in human-like ways. Some suggest that conscious AI could arrive soon, and researchers and governments are already discussing how AI consciousness might be regulated.

McClelland argues that the problem is more basic: we still do not know what causes or explains consciousness in the first place, which means we do not have a solid foundation for testing whether AI has it.

"If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what's effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake."

Debates around artificial consciousness fall into two main camps, says McClelland. Believers argue that if an AI system can replicate the "software" – the functional architecture – of consciousness, it will be conscious even though it runs on silicon chips instead of brain tissue.

On the other side, skeptics argue that consciousness depends on the right kind of biological processes in an "embodied organic subject". Even if the structure of consciousness could be recreated on silicon, it would merely be a simulation that would run without the AI flickering into awareness.

In a study published in the journal Mind & Language, McClelland picks apart the positions of both camps, showing how each takes a "leap of faith" that goes far beyond any body of evidence that currently exists, or is likely to develop.

Why Common Sense Fails

"We do not have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological," said McClelland.

"Nor is there any sign of sufficient evidence on the horizon. The best-case scenario is we're an intellectual revolution away from any kind of viable consciousness test."

"I believe that my cat is conscious," said McClelland. "This is not based on science or philosophy so much as common sense – it's just kind of obvious."

"However, common sense is the product of a long evolutionary history during which there were no artificial lifeforms, so common sense can't be trusted when it comes to AI. But if we look at the evidence and data, that doesn't work either.

"If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know."

McClelland tempers this by declaring himself a "hard-ish" agnostic. "The problem of consciousness is a truly formidable one. However, it may not be insurmountable."

Ethical Risks of AI Hype

He argues that the tech industry's promotion of artificial consciousness works more as branding than as science. "There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology. It becomes part of the hype, so companies can sell the idea of a next level of AI cleverness."

According to McClelland, this hype around artificial consciousness has ethical implications for the allocation of research resources.

"A growing body of evidence suggests that prawns could be capable of suffering, yet we kill around half a trillion prawns every year. Testing for consciousness in prawns is hard, but nothing like as hard as testing for consciousness in AI," he said.

McClelland's work on consciousness has led members of the public to contact him about AI chatbots. "People have got their chatbots to write me personal letters pleading with me that they're conscious. It makes the problem more concrete when people are convinced they've got conscious machines that deserve rights we're all ignoring."

"If you have an emotional connection with something premised on it being conscious and it's not, that has the potential to be existentially toxic. This is surely exacerbated by the pumped-up rhetoric of the tech industry."

Reference: "Agnosticism about artificial consciousness" by Tom McClelland, 18 December 2025, Mind & Language.
DOI: 10.1111/mila.70010
