Who Does the World Model Designer Answer To?
Yann LeCun raised $1.03 billion for AMI Labs to build AI that understands the physical world — then declared he lacks the "legitimacy" to decide how it's used. But legitimacy and responsibility are not the same. This essay examines how his silence on the second functions as structural immunity.
Yann LeCun's Billion-Dollar Silence
On March 10, 2026, AMI Labs — the Paris-based AI startup co-founded by Yann LeCun — closed a $1.03 billion seed round. Pre-money valuation: $3.5 billion. That's the price tag on a company with no product and no revenue. Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions came in as co-lead investors. Nvidia, Samsung, Toyota Ventures, and Publicis Groupe signed on as strategic backers. Tim Berners-Lee, Mark Cuban, and Eric Schmidt lent their names as individual investors. It is one of the largest seed rounds in the history of artificial intelligence.
LeCun is one of the three godfathers of deep learning, sharing the 2018 ACM Turing Award with Yoshua Bengio and Geoffrey Hinton. He spent twelve years leading Meta's AI research lab, FAIR, before announcing his departure in November 2025, choosing independence as Mark Zuckerberg pivoted toward short-term LLM commercialization. At AMI Labs, LeCun serves as co-founder and Executive Chairman. The CEO seat belongs to Alexandre LeBrun.
But amid this glittering launch, one remark from LeCun set off an alarm for me. In an interview with WIRED, he said this:
"I don't think any of us — whether it's me or Dario Amodei, Sam Altman, or Elon Musk — has any legitimacy to decide for society what is a good or bad use of AI. Technology can be used for good things or bad things. If a government is somewhat authoritarian, it could be used for bad things."
To be clear: LeCun did not say "I bear no responsibility." What he said was that he lacks legitimacy — not responsibility. And AMI Labs' own description does include the phrase "controllable and safe" among its stated goals. But the problem lies not in what LeCun said. It lies in what he didn't say. At the very moment he was announcing a billion-dollar bet on AI that understands the physical world, he went silent on the designer's active responsibility for the social consequences of that AI.
This essay is not a celebration of world models as a technological possibility. It is an analysis of what LeCun's statement structurally does: whether his disclaimer of legitimacy also performs the rhetorical work of diluting the designer's responsibility.
World Models: From Language to Reality
First, the technical context. The world model that AMI Labs is pursuing represents a fundamentally different approach from the large language models (LLMs) that currently dominate the AI market.
LLMs learn from text data, predicting the next token probabilistically. The result is a remarkable capacity for language generation, but not an understanding of cause and effect in the physical world. LeCun has long criticized this limitation. In his 2022 paper A Path Towards Autonomous Machine Intelligence, he proposed an alternative architecture called JEPA, the Joint Embedding Predictive Architecture. Rather than processing text, JEPA learns from sensory data such as video, and it makes its predictions in an abstract representation space rather than over raw pixels or words, aiming to build an internal model of how the world actually works.
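To make the contrast concrete, here is a minimal sketch of the two training objectives just described: next-token cross-entropy for an LLM versus prediction in an abstract embedding space for a JEPA-style model. It is written in PyTorch; the module sizes, the toy random tensors standing in for text and video, and names like `Encoder` and `predictor` are illustrative assumptions of mine, not AMI Labs' or Meta's actual code.

```python
# Sketch only: contrasts the two objectives at the level of their loss functions.
import torch
import torch.nn as nn
import torch.nn.functional as F

# --- 1. LLM-style objective: predict the next token in a text sequence ---
vocab_size, d_model, seq_len = 1000, 64, 16
token_embed = nn.Embedding(vocab_size, d_model)
lm_head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (8, seq_len))   # toy batch of token ids
hidden = token_embed(tokens[:, :-1])                   # stand-in for a transformer's hidden states
next_token_logits = lm_head(hidden)                    # distribution over the vocabulary
llm_loss = F.cross_entropy(
    next_token_logits.reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),                         # the target is simply the next token
)

# --- 2. JEPA-style objective: predict the *representation* of unseen input ---
# Instead of reconstructing raw pixels or words, a predictor is trained to match
# the target encoder's embedding of the part of the input the model did not see.
class Encoder(nn.Module):
    def __init__(self, in_dim=3 * 32 * 32, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, out_dim))

    def forward(self, x):
        return self.net(x)

context_encoder = Encoder()
target_encoder = Encoder()                # in practice typically an EMA copy of the context encoder
predictor = nn.Linear(128, 128)

frames_now = torch.randn(8, 3, 32, 32)    # observed video frames (toy data)
frames_future = torch.randn(8, 3, 32, 32) # frames withheld from the context branch

z_context = context_encoder(frames_now)
with torch.no_grad():                     # the target branch receives no gradient
    z_target = target_encoder(frames_future)

jepa_loss = F.mse_loss(predictor(z_context), z_target)  # the loss lives in embedding space

print(f"next-token loss: {llm_loss.item():.3f}, JEPA latent loss: {jepa_loss.item():.3f}")
```

The design difference sits in the target: the LLM objective grades the model on reproducing the exact next token, while the JEPA-style objective grades it only on matching the target encoder's abstract summary of the input it did not see, which is what lets it discard unpredictable low-level detail.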
The vision itself is intellectually seductive. While LLMs skim across the surface of language, world models promise to dig into the depth of reality — a proposition with the potential to fundamentally reset the direction of AI research.
But two things should not be confused. The allure of a vision is not the same as its feasibility. JEPA-based models are evolving. Meta's V-JEPA 2, released in 2025, demonstrated physical-world applications through robotic interaction and planning capabilities. But it has yet to prove broad commercial superiority over LLMs. The very concept of "world model" lacks a single agreed-upon academic definition. Even AMI Labs CEO LeBrun conceded that "in six months, every company will call itself a world model to raise money." Whether this lands as genuine vision or mere buzzword is something nobody can predict yet.
Nor are LLMs and world models necessarily mutually exclusive. Multimodal LLMs already integrate vision and audio, and in the long run, the two approaches may well converge. The "LLMs versus world models" dichotomy makes for a clean narrative, but the actual evolutionary path of the technology is far messier.
The Rhetoric of Legitimacy: What LeCun Said and What He Didn't
Back to LeCun's statement. "We have no legitimacy to decide on behalf of society" sounds, at first glance, like democratic humility. The idea that a handful of tech CEOs shouldn't determine the future of humanity is a widely accepted principle of democratic governance.
But dissect the actual function of that sentence and a different landscape emerges.
LeCun invoked the concept of legitimacy to decide while leaving unmentioned the concept that should have stood right beside it: responsibility to design safely. There is no basis to conclude that LeCun intentionally denied responsibility. But the structural effect of his statement is another matter. When someone declares "I have no right to decide what counts as good or bad use of AI," the listener may well take away a second message: therefore, the designer bears no special obligation. When you declare the absence of legitimacy and stay silent on responsibility, that silence begins to function as exoneration.
In law, the distinction is unambiguous. The fact that a car manufacturer has no legitimacy to set traffic policy does not mean they can ship a car without brakes. The core of product liability is the designer's duty of care — and that duty persists whether or not the designer holds any social decision-making power. Legitimacy and responsibility are separate legal and ethical categories.
David Hume's is-ought gap operates here too. "Technology can be used for good things or bad things" is a descriptive claim, a statement of what is. But the statement is deployed as though it licensed a normative conclusion, a statement of what ought to be: because technology is dual-use, designers need not intervene. Sliding from that description to that prescription is a variant of the classic naturalistic fallacy.
So what about Dario Amodei, CEO of Anthropic? Amodei has consistently demonstrated the opposite stance. In his 2024 essay Machines of Loving Grace, he championed the positive potential of AI while maintaining the philosophy that AI developers bear a heavier responsibility precisely because they understand the risks most deeply. In 2026, when the U.S. Department of Defense demanded that Anthropic lift its restrictions on the use of its AI for autonomous weapons and domestic surveillance, Amodei refused. In Amodei's framework, a developer's responsibility exists regardless of whether legitimacy is present. Those with capability bear the burden. It is the exact structure of the expert duty of care in legal doctrine.
The gap between LeCun and Amodei is not a matter of temperament or personality. It is a collision between two fundamentally different philosophical positions on AI governance. LeCun stands on technological neutralism. Amodei stands on technological responsibility.
AI That Understands the Physical World — and Physical-World Consequences
Here is the question that demands asking: Are the technical ambitions of world models compatible with LeCun's governance philosophy?
LLM errors occur at the level of text. Hallucination means generating a wrong sentence. It can be offensive and dangerous, but the blast radius stays within the domain of language.
World model errors are different. When an AI that has learned the causal structure of physical reality makes a wrong inference, the consequences manifest in the physical world. Nabla, the medical AI startup that AMI Labs has designated as its first partner, was co-founded by LeBrun himself. With the launch of AMI Labs, he stepped down as Nabla's CEO to become Nabla's Chief AI Scientist and board chairman. In medicine, when an AI that "understands the physical world" makes a wrong call, the cost is paid by the patient. Between textual hallucination and real-world misjudgment lies a categorical difference.
It is precisely at this juncture that a tension emerges between LeCun's technical ambition and his governance philosophy. He criticizes LLMs for failing to understand the real world — then declares he will build AI that does. But when it comes to the consequences of that AI operating in the real world, he says he lacks the legitimacy to decide. The scale of his ambition to comprehend reality is strikingly mismatched with the quietness of his stance on the designer's role in that reality.
Technological Neutralism as Structural Immunity
This is not one man's contradiction. It is a structural one — because LeCun is far from alone. "Technology is neutral; good and bad are the user's business" is Silicon Valley's oldest self-defense rhetoric. When Facebook was used to fuel the genocide in Myanmar, Mark Zuckerberg acknowledged deficiencies in content moderation capacity but remained reluctant to accept the platform's structural responsibility. In the early days of the Project Maven controversy, Google emphasized the separation of technology from its use.
The rhetoric works the same way every time. It justifies a structure in which developers capture the upside of building the technology while society absorbs the cost of its use. When LeCun says "if a government is authoritarian, it could be used for bad things," he is displacing responsibility onto authoritarian governments. The technology designer vanishes. All that remains are users — and victims.
To borrow a concept from Alfred North Whitehead: technology is a stubborn fact. Once designed and deployed, it becomes a resistant reality that cannot be nullified by interpretation or preference. An AI system is the materialization of its designer's choices — which data to train on, which objective function to set, which safety mechanisms to include or omit. These choices operate stubbornly long after deployment. The claim that "technology is neutral" conceals this stubbornness.
Look at the companies that invested in AMI Labs. Nvidia makes GPUs. Samsung makes semiconductors. Toyota makes autonomous vehicles. Publicis Groupe makes advertising. Their reasons for investing in world models are not pure academic curiosity. AI that understands the physical world can be directly applied to robotics, autonomous driving, smart factories, and military technology. The direction of investment telegraphs the direction of use. If the designer of that technology denies the legitimacy of deciding how it's used while staying silent on the responsibility of designing it safely — then who is guarding the gate in this billion-dollar ecosystem?
Open Research: An Answer to a Different Question
AMI Labs is committed to open research — publishing papers and releasing code as open source. It's a philosophy LeCun has held since his FAIR days. LeBrun's claim that "technology advances faster when it's open" aligns with the ideals of the open-source community.
But open research is not a substitute for governance. (Honestly, is there any domain governance doesn't need to cover?) Making code public does not distribute responsibility for its social impact. If anything, when the code for AI that understands the physical world is made available to everyone, the "authoritarian governments" LeCun worried about may be the first to exploit it. Open research and responsible governance must coexist. The former cannot stand in for the latter.
World Labs, led by Stanford's Fei-Fei Li, recently raised $1 billion to pursue spatial intelligence research. The race in world models has begun, and billions of dollars in capital are flowing in this direction. The faster the technical competition accelerates, the more dangerous the governance vacuum becomes.
The Future That Silence Builds
LeCun is not entirely wrong. The proposition that "a handful of tech CEOs should not decide on behalf of all of society how AI is used" is sound. It is a principle of democratic decision-making. AI governance should ultimately be the domain of civil society, legislatures, and international institutions.
But the moment that proposition is converted into a rhetoric of self-exoneration, democratic humility degrades into irresponsibility. The precise formulation should be this: "We have no right to decide — but we do have a responsibility to design safely." LeCun delivered the first half and deleted the second.
A person who has taken a billion dollars to build AI that understands the physical world stays silent about what that AI will do in the physical world. This silence is not a personal philosophical choice. It is the structural immunity that technology capitalism grants its designers. The making of technology is private; the consequences of technology are public. The familiar pattern — privatize the gains, socialize the risks — is repeating itself in AI.
The vision of world models is, in itself, intellectually captivating. The attempt to move beyond the surface of text and understand the structure of reality is one of the most ambitious directions AI research can take. But the greater the ambition, the greater the responsibility toward the reality in which that technology will operate. This is Amodei's principle. It is also a foundational principle of law. The greater the capability, the greater the duty of care.
Who does the designer of world models ultimately answer to? The billion-dollar investors? The open-source community? Or the people living in the physical world that this AI has promised to understand? LeCun has not yet answered this question. And that silence is the most stubborn fact about the AMI Labs project.