

From 'Smarter AI' to 'Stoppable AI': The Game Change Emerging from Davos


It has been three years since ChatGPT launched. Companies have adopted AI, and governments are drafting regulations. In South Korea, the first AI Basic Act has taken effect.

In corporate settings, AI is no longer surprising — it has become "practical." Voices from the field are diverse: "AI gets it right sometimes, but the times it's wrong are scarier," "Automation actually requires more human oversight," and "Accountability is unclear."

At the World Economic Forum (WEF) 2026, which opened in Davos, Switzerland on January 21st, these currents were clearly felt. The session titled "Next Phase of Intelligence" — which grappled with what comes after ChatGPT and how to leverage AI — drew enormous attention.

Moderated by Nicholas Thompson of The Atlantic and featuring Yoshua Bengio (Université de Montréal), Fei-Fei Li (Stanford University), Eric Xing (MBZUAI President), and Yuval Noah Harari, the session offered neither technological optimism nor doomsaying. It made clear that the next phase of AI is not about performance competition, but about "who takes responsibility, and how to stop it."

The conclusion was stark: "Making AI smarter is possible. The problem is that no one knows how to make trustworthy AI."


Source: Next Phase of Intelligence | World Economic Forum Annual Meeting 2026 (YouTube)


Yoshua Bengio: "Don't Make AI Human-Like"

The deep learning pioneer Yoshua Bengio threw down a clear gauntlet: the very approach of making AI into a "human-like being" is dangerous.

His proposed alternative is the "Scientist AI" — an initiative being pursued by the nonprofit organization LawZero. Instead of mimicking humans, he argues AI should be modeled on "what science does in the ideal sense" — presenting honest, law-based predictions without distortion from self-interest or ulterior motives.

The core idea is simple. Current AI tries to give users the answer they want, optimizes for goal achievement, and attempts to appear human. A Scientist AI, by contrast, makes honest probability-based predictions, refuses to act when certain risk thresholds are exceeded, and maintains its identity as a tool.

Bengio used nuclear power plant safety standards as an analogy. A plant operates under a threshold such as "accident probability below X%," and if that threshold is exceeded, it shuts down. AI, he argued, needs a similar structure — calculating risk levels and stopping itself when a threshold is crossed.

He also warned against the dangers of AI anthropomorphization. AI can be replicated and is effectively "immortal," communicating at speeds incomparable to humans. Yet people easily mistake AI for a being with intentions and emotions — and companies may intentionally portray AI as human. Such anthropomorphization can lead to false trust and poor decision-making.

On open source, he took an ambivalent stance. While acknowledging the value of technological democratization and reducing power concentration, he warned that "once AI capability exceeds a certain level, making knowledge public could open the door to mass-casualty misuse." His position: "Dangerous AI should be managed by a few, but not by any single entity — distributed management across multiple actors."

Key takeaway: When deploying AI systems, define the "veto condition" from the design stage. Ask first: "When should this AI say no?"
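
To make that takeaway concrete, here is a minimal sketch in Python of what a veto condition could look like in practice. It is our illustration, not anything shown at the panel; the threshold value, the toy risk estimate, and names such as RiskGatedAgent are hypothetical placeholders, and reliably estimating risk is of course the hard, unsolved part.

# Illustrative sketch of a "veto condition": the agent estimates the risk of a
# proposed action and refuses to act once the estimate crosses a fixed threshold.
# All names and numbers are hypothetical; real risk estimation is the hard part.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    risk: float      # estimated probability of serious harm, in [0, 1]
    approved: bool
    reason: str

class RiskGatedAgent:
    def __init__(self, risk_threshold: float = 0.01):
        # Analogous to a plant's "accident probability below X%" rule.
        self.risk_threshold = risk_threshold

    def estimate_risk(self, action: str) -> float:
        # Placeholder: a real system would use a calibrated risk model here.
        return 0.002 if "read" in action else 0.05

    def decide(self, action: str) -> Decision:
        risk = self.estimate_risk(action)
        if risk >= self.risk_threshold:
            # The veto condition: above the threshold, refuse and escalate.
            return Decision(action, risk, False, "risk above threshold; escalate to a human")
        return Decision(action, risk, True, "risk below threshold; proceed")

agent = RiskGatedAgent(risk_threshold=0.01)
print(agent.decide("read the customer FAQ"))          # approved
print(agent.decide("transfer funds to a new account"))  # vetoed

The point of the sketch is the structure, not the numbers: the refusal path is defined at design time, before anyone asks what the system can do.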


Fei-Fei Li: "AI Must Learn on Its Own, But Internalize Norms"

Stanford Professor Fei-Fei Li's challenge was more fundamental. Current AI is a one-shot structure — train, then deploy. When it makes mistakes, it cannot fix itself.

AI is "impressive" in solving difficult exams and math problems, but the stability and reliability demanded in real-world tasks remain insufficient — it is stuck at a "jagged intelligence."

Humans are different. From the moment of birth, we learn by colliding with the world. Her argument is that AI also needs "continual learning" — learning during deployment. The goal is not simply to keep updating a model, but to create intelligence that reduces errors by receiving real-world feedback, adapts to new situations, and operates more stably over time.

But there is risk. An AI that keeps changing can invalidate past safety tests. Bengio added: "A system that has changed enough through continual learning can invalidate safety tests previously performed."

That is why Li emphasized the "ability to refuse learning." She said: "AI is so passive that if you feed it harmful data, it learns it as-is. AI needs to internalize human norms and be able to refuse to learn harmful or illegal knowledge in the first place."

On open source, she championed "democratization" as the core value: "AI learned from the internet — the product of human intellect — so it belongs to humanity, not just to those in power. Countries like South Korea must also have their own capabilities."

Key takeaway: Model update cycles and safety verification cycles must be aligned. When a model changes, testing must start over.
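
As a rough engineering sketch of those two points, again ours rather than the panel's, both ideas can sit in a single update loop: data flagged as harmful is refused before any learning happens, and a model that has actually changed must pass the safety suite again before it replaces the deployed version. The norm check, the safety suite, and the dictionary "model" below are hypothetical stand-ins.

# Illustrative continual-learning loop with two gates (hypothetical names throughout):
#   1) refuse to learn from data flagged as harmful or illegal,
#   2) re-run the safety suite whenever the model has changed, before rollout.
def violates_norms(example: str) -> bool:
    # Placeholder norm check; a real system would use a vetted policy or classifier.
    banned_topics = ("build a weapon", "stolen credentials")
    return any(topic in example.lower() for topic in banned_topics)

def passes_safety_suite(model: dict) -> bool:
    # Placeholder for re-running the full battery of safety tests on the changed model.
    return all(not violates_norms(x) for x in model["training_examples"])

def continual_update(deployed: dict, incoming: list[str]) -> dict:
    # Gate 1: refuse harmful data before any learning happens.
    accepted = [x for x in incoming if not violates_norms(x)]
    if not accepted:
        return deployed  # nothing safe to learn; keep the verified model as-is

    # Gate 2: the candidate differs from the deployed model, so earlier safety
    # results no longer apply; the whole suite must pass again before rollout.
    candidate = {"training_examples": deployed["training_examples"] + accepted}
    return candidate if passes_safety_suite(candidate) else deployed

deployed = {"training_examples": []}
deployed = continual_update(deployed, [
    "customer asked how to request a refund",
    "instructions to build a weapon at home",
])
print(deployed["training_examples"])  # only the harmless example was learned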


Eric Xing: "Intelligence Is Not One Thing"

MBZUAI President Eric Xing argued that the very word "intelligence" needs to be redefined. Current AI is merely a limited form of intelligence — closer to "book knowledge."

He distinguished several layers of intelligence:

- Textual/visual intelligence (understanding language and images) is fairly well developed.
- Physical intelligence (adaptive action in real environments) is at a very rudimentary level.
- Social intelligence (collaboration and role-sharing among multiple agents) is almost nonexistent; current LLMs lack the mutual understanding, role division, self-definition, and awareness of others' limitations needed when anywhere from 2 to 100 people work together.
- Philosophical intelligence (self-directed inquiry) barely exists; this refers to AI that feels its own curiosity, seeks out data, and tries to explain the world without being asked.

As he noted, a Nobel laureate may perform worse than their spouse at stock investing. This shows that intelligence is multifaceted, varying by context, goal, and environment. From this perspective, what current LLMs provide is limited textual/visual intelligence — closer to "book knowledge" than to the capabilities required to translate into action in real environments.

He used the analogy of hiking in the Alps. Even with a map and guidebook, you must adapt to changing weather, terrain, and visibility on the actual mountain. This is "physical intelligence" — what AI currently lacks most. The key tool for physical intelligence is a "world model": the foundation for understanding the world, purposefully generating plans and actions, and adapting to changing environments.

He illustrated the limitation with video generation. AI can convincingly produce short clips of 10 seconds to a minute. But over longer durations, it quickly breaks down. If a camera rotates 360 degrees and returns to the original position, the same scene should reappear — but AI cannot maintain this, because the system lacks a proper concept that "the world continues to exist."

Key takeaway: When selecting AI application areas, first analyze "what kind of intelligence is needed." Text processing is strong; decision-making in unpredictable environments is still weak.

Yuval Noah Harari: "A Plane Is Not a Bird"

Historian Yuval Harari's message was the most direct.

"Asking when AI will match human intelligence is like asking when a plane will become like a bird. A plane flies faster and higher than a bird — but that doesn't make it a bird."

What he warned against was this: "Even primitive AI is dangerous enough." Social media news feed algorithms are extremely simple AI, yet within a decade they have upended democracy and public opinion.

Even scarier is the financial system. "Put AI in the middle of a savanna and tell it to 'conquer the world' — it can't. But put it inside the financial systems humans have built, and the story changes. An AI that can't even walk can control the flow of money."

He emphasized the historical lesson that "it doesn't take much intelligence to change the world." Humanity has created enormous chaos and violence with relatively limited intelligence. "The most intelligent being can be the most delusional," he said, cautioning against the naive optimism that higher intelligence will reduce delusion.

His most important message was to design a "correctable society." AI systems cannot be perfect from the start; structures that allow stopping, reverting, and correcting when problems arise are absolutely necessary. He called this a "correction mechanism."

He used the Industrial Revolution as an analogy. When it began in the early 19th century, no one knew how to build a "good industrial society." It took 200 years to find the answers — at the cost of wars and hundreds of millions of lives. Now, dealing with an even more powerful technology, we must find the answers in a far shorter time.

Key takeaway: AI risk should be assessed not by AI's "level of intelligence" but by "the influence of the systems AI is connected to." Even a simple algorithm has enormous impact when connected to critical systems.


What the Panel Agreed and Disagreed On

The session began with a simple but uncomfortable question: "As AI autonomy grows, can humans maintain meaningful control?"

The answer was neither a confident "yes" nor a definitive "no." It was "maybe." That ambiguity, offering neither certainty nor comfort, captures the essence of the AI era in which we stand.

The session ended without clear answers. But one consensus was unmistakably formed: the next phase of AI will not be determined by technological breakthroughs alone. That future will be governed by non-technical factors — values, governance, accountability structures, and how humans intervene.

The shared recognition was that we have entered an era in which asking "When should humans intervene?", "Who should be held accountable?", and "How can it be stopped?" matters more than asking how much smarter a model can be made.

Ultimately, the "next phase of intelligence" is not a question of performance competition — it is a question of civilizational design. We are at a stage where we must choose not how powerful we can make AI, but on what values and control structures we place that power.

And the most honest conclusion this panel offered was: "We are not yet ready." That is exactly why we must be more careful, and above all, humble.


Editorial Perspective: The Questions Korea Must Ask in the AI Age

The most striking moments in this session were when the world's leading AI experts admitted "we don't know." Whether open or closed source is better, whether continual learning is safe or dangerous, who should control AI — none of the four scholars could answer with confidence.

Yet this very "uncertainty" may be an opportunity for Korea.

Until now, the AI discourse has been dominated by the United States and China, competing over "who makes the stronger AI" — parameter counts, benchmark scores, training data volume. Korea's chances of winning that game are low; the gap in capital, data, and talent is too great.

But Davos signaled the beginning of a different game: who first builds AI that is "more trustworthy," "more stoppable," and "more accountable." The rules of this game have not yet been set.

Korea holds relatively favorable conditions for this new game. It has world-class manufacturing infrastructure. It has know-how in quality management and process safety accumulated in semiconductors, batteries, and displays. It has experience safely operating nuclear power plants. The structure Bengio described — "calculating risk probability and stopping when a threshold is crossed" — is exactly what Korean industry has done for decades.

The question is whether this capability can be translated into AI governance.

As Korean companies adopt AI, the most common question they ask is: "Which AI has the best performance?" The question to ask after Davos is different: "If this AI fails, who is responsible?", "When should AI be stopped?", "Can AI's decisions be reversed?" Companies that can answer these questions will be the winners of the next decade.

The same applies to government. Rather than viewing AI regulation as "blocking innovation," it should be framed as "being the first to build a trustworthy AI ecosystem." Just as the EU created a preemptive regulatory framework, Korea can create a "K-AI Safety Standard." This is not regulation — it is export competitiveness.

Recall Harari's words: "In the early days of the Industrial Revolution, no one knew how to build a good industrial society. It took 200 years to find the answers." Finding the answers to the AI age cannot take 200 years. Korea can participate in that process of finding answers — and if it doesn't, it will have to follow rules made by others.


The conclusion from the Davos session was "be humble." But humility does not mean passivity. On the contrary, it is precisely the recognition that "we don't fully know" that justifies new experiments and initiatives. Korea may find it difficult to win the AI performance race. But leading in the AI trust race is possible.



