Adrien - Friday, September 26, 2025

🧠 Will AI surpass us? A survey to read

Artificial intelligence is advancing at a breakneck pace. While some see it as an unprecedented opportunity, others worry about the potential risks of a technology that could one day surpass our own intellectual capabilities.

A recent survey conducted by Live Science reveals that nearly half of respondents believe we should stop the development of AI now, fearing existential threats. Only 9% think the benefits will outweigh the risks, and 30% advocate for slowing down until adequate safeguards are in place. These figures show a deep division in public opinion.

The concept of singularity, where AI would surpass human intelligence and improve autonomously, is at the heart of these concerns. Experts point out that if this stage is reached, it might be impossible to control the technology, leading to unpredictable scenarios. This idea is not new, but it is gaining credibility with recent advances in deep learning and language models.

In the survey comments, many readers express a sense of urgency, arguing that it is already too late to reverse the trend. Others compare these fears to those raised by past innovations, such as electricity, noting that catastrophic predictions often did not come true. This historical perspective offers a counterbalance to alarmists.

What is artificial general intelligence (AGI)?

Artificial general intelligence, often abbreviated AGI, refers to a form of AI capable of understanding, learning, and applying its knowledge across various domains, much like a human being. Unlike today's specialized AIs, which excel in specific tasks like image recognition or translation, AGI would possess cognitive flexibility allowing it to adapt to new situations without reprogramming.

The development of AGI represents a major qualitative leap, as it could autonomously solve the most complex problems, from medical research to natural resource management. However, this versatility raises ethical and practical questions, particularly about how to ensure it acts in alignment with human values and does not become a threat.

Researchers are exploring various approaches to achieve AGI, ranging from deep reinforcement learning to neural architectures inspired by the human brain. Projects like those from OpenAI or DeepMind aim to create more generalist systems, but the technical obstacles remain immense, requiring advances in algorithms and computing power.

If AGI became a reality, it could revolutionize entire sectors, from medical research to resource management.

Technological singularity: myth or reality?

Technological singularity is a futuristic concept popularized by thinkers like Ray Kurzweil, describing a point where artificial intelligence surpasses human intelligence and improves at an exponential rate, escaping all control. This idea is based on the assumption that superintelligent AIs could design even more advanced versions of themselves, creating an unstoppable loop of progress.

The implications of such a singularity are profound: it could lead to incredible technological advances, such as solving global problems like climate change or diseases, but also to dystopian scenarios where humanity would lose its supremacy in intelligence on Earth. Debates among scientists are intense, with some believing that singularity is inevitable, while others consider it exaggerated speculation.

Mathematical models and simulations attempt to assess the probability of singularity, taking into account factors such as the law of accelerating returns and the physical limits of computing. However, uncertainty remains high, as it depends on unpredictable breakthroughs in artificial intelligence and neuroscience.
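To make the idea concrete, here is a deliberately simplified toy model, not taken from any actual study: all parameters (the growth rate, the ceiling) are hypothetical. It contrasts pure "accelerating returns" growth, where capability compounds without limit, with the same growth constrained by a physical ceiling on computation, the two cases the text describes.

```python
# Toy model (illustrative only; rate and ceiling are made-up values):
# compares unbounded exponential growth with logistic growth capped
# by an assumed physical limit on computing.

def simulate(years: int, rate: float = 0.5, ceiling: float = 1e6):
    """Return (unbounded, capped) capability trajectories.

    unbounded: C grows by a fixed fraction each year (pure compounding)
    capped:    the same growth, damped as C approaches the ceiling
               (discrete logistic model)
    """
    unbounded, capped = [1.0], [1.0]
    for _ in range(years):
        unbounded.append(unbounded[-1] * (1 + rate))
        c = capped[-1]
        capped.append(c + rate * c * (1 - c / ceiling))
    return unbounded, capped

u, c = simulate(40)
# The unbounded curve explodes past the ceiling; the capped curve
# flattens beneath it, showing how strongly any forecast depends on
# the limits one assumes.
```

The point of the sketch is the one made in the text: with the same starting assumptions, the conclusion (runaway singularity or plateau) hinges almost entirely on whether, and where, physical limits are assumed to bind.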

To prepare for this eventuality, initiatives are emerging to develop ethical frameworks and safety mechanisms, such as aligning AI systems with human values.