Opaque AI systems risk undermining human rights and dignity. Global cooperation is needed to ensure protection.

The rise of artificial intelligence (AI) has changed how people interact, but it also poses a global risk to human dignity, according to new research from Charles Darwin University (CDU).

Lead author Dr. Maria Randazzo, from CDU’s School of Law, explained that AI is rapidly reshaping Western legal and ethical systems, yet this transformation is eroding democratic principles and reinforcing existing social inequalities.

She noted that current regulatory frameworks often overlook basic human rights and freedoms, including privacy, protection from discrimination, individual autonomy, and intellectual property. This shortfall is largely due to the opaque nature of many algorithmic models, which makes their operations difficult to trace.

The black box problem

Dr. Randazzo described this lack of transparency as the “black box problem,” noting that the decisions produced by deep-learning and machine-learning systems cannot be traced by humans. This opacity makes it challenging for individuals to understand whether and how an AI model has infringed on their rights or dignity, and it prevents them from effectively pursuing justice when such violations occur.

Dr. Maria Randazzo has found AI has reshaped Western legal and ethical landscapes at unprecedented speed. Credit: Charles Darwin University

“This is a very significant issue that is only going to get worse without adequate regulation,” Dr. Randazzo said.

“AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behaviour.”

“It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

Global approaches to AI governance

Currently, the world’s three dominant digital powers – the United States, China, and the European Union – are taking markedly different approaches to AI, leaning on market-centric, state-centric, and human-centric models, respectively.

Dr. Randazzo said the EU’s human-centric approach is the preferred path to protect human dignity, but without a global commitment to this goal, even that approach falls short.

“Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, to empathize and to show compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improving the human condition,” she said.

“Humankind must not be treated as a means to an end.”

Reference: “Human dignity in the age of Artificial Intelligence: an overview of legal issues and regulatory regimes” by Maria Salvatrice Randazzo and Guzyal Hill, 23 April 2025, Australian Journal of Human Rights.
DOI: 10.1080/1323238X.2025.2483822

The paper is the first in a trilogy Dr. Randazzo will produce on the topic.
