The AI Revolution and Its Limitations

I wonder if the United States, now that it will be governed by tech giants, will take into account such basic and fundamental principles of democracy as the right to equality, human dignity, non-discrimination, data protection, transparency, and the right to informed consent in the use of artificial intelligence and robotics systems.

My doubts arise because some believe that regulating these rights hinders innovation and technological progress, when in reality it ensures trustworthy artificial intelligence that gives citizens security and confidence. Indeed, Asimov's laws of robotics themselves prioritize human safety, establishing the fundamental principle that:

“A robot may not injure humanity, or, through inaction, allow humanity to come to harm.”

Harm is also caused, of course, when our fundamental rights and humanistic values are attacked or undermined. While these values are deeply rooted in Europe, they are also universal, and upholding them is part of Europe's contribution to society.

The application of these norms and principles, as I mentioned, should not hinder research, innovation, or development in the field of AI and robotics. Rather, it helps make innovation higher quality, more advanced, and more widely distributed while avoiding risks and uncertainties, never forgetting that AI should serve humanity.

One of the obligations set forth in the EU Artificial Intelligence Regulation (RIA), recently approved in Europe, is the principle of transparency. This ensures that any decision made with the assistance of AI, which may have a significant impact on human life, can always be justified, thereby preventing specific risks of manipulation. In fact, users of AI systems that generate or manipulate image, sound, or video content resembling real people, objects, places, or events—content that could mislead someone into thinking it is authentic—are required to disclose that it has been artificially generated or manipulated.

However, not all AI systems are subject to the same obligations and controls. The strictest requirements apply to high-risk AI systems, as defined in Article 6 of the RIA, which includes areas such as access to employment, education, and public services. These systems are required to undergo human oversight, as regulated in Article 14 of the RIA, to prevent or minimize risks to health, safety, or fundamental rights. Furthermore, a market surveillance mechanism is required, in which providers must conduct post-market monitoring by collecting and analyzing data from users or other sources. Any serious incidents or malfunctions must be reported to the relevant market surveillance authority where the incident occurred.

AI systems that influence human behavior through deliberately manipulative or deceptive techniques, with the goal or effect of materially distorting the behavior of a person or group, as well as systems for mass surveillance and social scoring, are classified outright as prohibited, and therefore unacceptable, practices under Article 5 of the RIA.

The following diagram illustrates the different AI risk levels and the corresponding responsibilities depending on the intended purpose of each system.

RIA Risk Levels

While it is true that AI systems that pose no risk—such as those used for data management or administrative process automation—or those with limited risk, such as chatbots and virtual assistants, are not subject to exhaustive control or conformity assessment, they are still encouraged to voluntarily apply certain codes of conduct regulated under Article 95 of the RIA. These include minimizing the environmental impact of AI systems, preventing negative impacts on vulnerable individuals or groups, ensuring accessibility for people with disabilities, and promoting gender equality.
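The tiered, risk-based structure described above can be summarized schematically. The following Python sketch is purely illustrative: the tier names, examples, and obligations paraphrase this post's reading of the RIA, not the legal text itself, and the mapping is a reading aid rather than a legal classification tool.

```python
# Illustrative (non-normative) summary of the RIA's risk-based tiers,
# paraphrasing the discussion above. Not a legal classification tool.

RIA_RISK_TIERS = {
    "prohibited": {
        "examples": ["behavioral manipulation", "mass surveillance", "social scoring"],
        "obligations": "banned outright (Article 5)",
    },
    "high": {
        "examples": ["access to employment", "education", "public services"],
        "obligations": "human oversight (Article 14), conformity assessment, "
                       "post-market monitoring and incident reporting",
    },
    "limited": {
        "examples": ["chatbots", "virtual assistants"],
        "obligations": "transparency duties; voluntary codes of conduct (Article 95)",
    },
    "minimal": {
        "examples": ["data management", "administrative process automation"],
        "obligations": "no mandatory controls; voluntary codes of conduct (Article 95)",
    },
}

def obligations_for(tier: str) -> str:
    """Return the paraphrased obligations for a given risk tier."""
    return RIA_RISK_TIERS[tier]["obligations"]

print(obligations_for("high"))
```

The point of the tiering, as the post argues, is proportionality: obligations scale with the potential impact of the system's intended purpose, rather than applying one uniform burden to all AI.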

Considering everything that an adequate regulatory framework for AI and robotics offers—such as human oversight to ensure trustworthy AI, the ability to challenge automated decisions that affect our personal and financial spheres, and guarantees that our fundamental rights, including equality, justice, non-discrimination, and the right to be properly informed, are upheld—do these obligations really hinder innovation? Or is our main goal rather to ensure that the technological revolution serves humanity while preventing risks to our security, health, integrity, and freedom?
