Superintelligence: Should we stop a race if we don’t actually know where the finish line is?

By Anthony Hié, Chief Innovation & Digital Officer, member of the Executive Committee, Excelia (France)

Geoffrey Hinton, pioneer of deep learning and former researcher at Google; Yoshua Bengio, Scientific Director of the Quebec Artificial Intelligence Institute (Mila) and co-winner of the Turing Award; Stuart Russell, Professor of Computer Science at the University of California, Berkeley, and specialist in AI safety… These three leading figures in the field are among the main signatories of an international petition(1), supported by more than 700 experts and prominent figures, calling for a pause in the development of so-called ‘super-intelligent’ artificial intelligence.

Entrepreneurs, philosophers and public figures have joined forces with them to warn of a potential danger: a race for algorithmic power being run without any real governance or genuine understanding of its consequences.

A call to slow things down to avoid reaching the point of no return
The signatories call for a temporary halt to all development of artificial intelligence whose capabilities exceed those of current systems, pending ‘broad scientific consensus that it will be done safely…’ It is not a question of rejecting innovation but of pausing to assess the consequences of a technological evolution that already seems to exceed our capacity for understanding.

Whilst major players in the sector, such as OpenAI, Google DeepMind, Anthropic and Baidu, compete to develop increasingly complex models, researchers fear a drift towards technological advancement for its own sake, to the detriment of human control.

A concept that is as compelling as it is indeterminate
The term ‘superintelligence’ encapsulates the concerns raised. It refers to an AI system whose capabilities would surpass those of humans in almost every field: logical reasoning, creativity, strategic planning and even moral judgement. However, in reality, the situation is less clear-cut: no one actually knows what such an entity would be like, or how to measure it. Would it be an intelligence capable of self-improvement without supervision? An emerging consciousness? Or simply a system that performs even more efficiently than our current models? This semantic ambiguity is at the very heart of the problem: how can we stop something that is so difficult to define?

In the wake of the work of psychologist Howard Gardner, we know that there is no single form of intelligence, but rather multiple intelligences(2): linguistic, logical-mathematical, spatial, bodily-kinesthetic, musical, interpersonal, intrapersonal, and naturalist. Each of us employs a unique combination of these different forms of intelligence, which cannot simply be boiled down to calculations or abstract reasoning.

A global race 
For many, halting this progress seems utopian. How can a pause be enforced globally when the world’s major powers have such divergent economic and geopolitical interests? The United States, China and the European Union are locked in fierce competition to dominate the strategic sector of artificial intelligence; slowing down unilaterally would mean risking the loss of a decisive advantage.

For the signatories, however, this very absence of international coordination is precisely what makes a pause essential. They are calling for the creation of an independent public body to oversee the most advanced developments. The idea echoes the AI Act, the regulation recently adopted by the European Union: this pioneering legislation classifies AI systems according to their level of risk and imposes strict requirements for transparency, traceability and human oversight.

However, for many experts, this framework is still inadequate for the challenges posed by potentially self-improving artificial intelligence. The AI Act sets out compliance rules, but it does not yet address the issue of superintelligence, which cannot be measured using any current assessment framework. The call for caution therefore becomes an ethical requirement: think before racing ahead.

Lucid ignorance
Researchers themselves recognise the irony of the situation: they are concerned about a phenomenon that they cannot yet describe. Superintelligence is currently a theoretical concept, a kind of projection of our anxieties and ambitions. But it is precisely this uncertainty that warrants caution. If we do not know the exact nature of the finish line, should we really keep on racing forward without knowing what we are heading for?

The issue is no longer simply technological; it is philosophical, political and profoundly human. The possibility of superintelligence raises questions not only about our ability to invent, but also about our ability to govern ourselves. Perhaps this is the sign of true intelligence: knowing when to stop before machines start thinking (and acting) for us.
