
If A.I. research remains open and accessible, wouldn’t that give U.S. rivals—like China—an advantage? “You might think that. The problem is we’ll be slowing ourselves down, and geopolitical rivals will be better able to catch up,” said Meta’s chief A.I. scientist Yann LeCun during an onstage interview with The Atlantic CEO Nicholas Thompson on July 8 at the AI for Good Summit in Geneva, Switzerland.
LeCun, widely regarded as one of the godfathers of A.I. for his foundational work on the architecture behind modern systems, argued that open-source A.I. systems offer more global benefit than harm, including on the geopolitical front. Restricting research in an effort to limit rivals, he warned, would ultimately backfire. “They’ll still get access—just with a delay,” LeCun said. “And we’ll lose the reciprocal benefit of feeding off the global innovation flywheel. It’s a bit like shooting yourself in the foot.”
This year’s AI for Good Summit focused on global cooperation, and the debate over open-source technology fit squarely within that theme. Meta’s own Llama model is open source, and its architecture contributed to the rise of DeepSeek, a Chinese A.I. company that launched a powerful large language model on limited computing resources earlier this year.
Thompson raised a concern: “It sounds like you want the West to lead in A.I. If that’s the goal, shouldn’t there be restrictions on a model as powerful as Llama—and on who can access it around the world?”
LeCun pushed back, arguing that openness is also safer because it fosters diversity. “The magic of open research is that you accelerate progress by involving more people,” he said.
“The biggest danger of A.I. isn’t bad behavior,” he added. “It’s that every digital interaction in our future will be mediated by A.I.” In such a world, he argued, a diversity of open-source systems would let users choose their own biases, much as readers choose among different news sources.
Looking ahead, LeCun envisioned an international partnership to train foundation models collaboratively, creating a shared global knowledge base while preserving national security and data sovereignty.
What LeCun is working on at Meta
Much of today’s generative A.I. development revolves around large language models (LLMs). But LeCun is outspoken in his belief that LLMs are not the path to achieving artificial superintelligence.
While he doesn’t consider them entirely useless, he called LLMs a “dead end if you are interested in reaching human-level A.I.” In particular, he argued that LLMs fall short when it comes to replicating human-like cognitive abilities—such as reasoning, planning, maintaining persistent memory, and elaborating on thoughts.
In contrast, LeCun has spent recent years developing a different approach known as JEPA, or Joint Embedding Predictive Architecture. As described by his employer, Meta, JEPA learns by constructing an internal model of the outside world, comparing abstract representations of images rather than raw pixels.
The latest version, V-JEPA 2, functions as a video encoder that can feed into a language model. “The idea of JEPA,” said LeCun, “is you have a system that looks at video and learns to understand what happens over time and space.”
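To make that idea concrete, here is a minimal sketch of a joint-embedding predictive training loop in PyTorch. It is not Meta’s V-JEPA code: the tiny encoders, the linear predictor, the random tensors, and all the dimensions below are hypothetical stand-ins. What it preserves is the core pattern LeCun describes: the model predicts the representation of a hidden part of the input from the visible part, and the loss is computed in embedding space rather than on raw pixels.

# A minimal JEPA-style sketch (illustrative; not Meta's V-JEPA implementation).
# Core idea: predict the *embedding* of a hidden target region from the
# embedding of the visible context, instead of reconstructing raw pixels.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Toy stand-in for JEPA's encoders (the real systems use vision transformers).
    def __init__(self, in_dim=256, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim)
        )

    def forward(self, x):
        return self.net(x)

context_encoder = Encoder()    # sees the visible part of the input
target_encoder = Encoder()     # sees the hidden part (an EMA copy in practice)
predictor = nn.Linear(64, 64)  # maps context embedding to predicted target embedding

optimizer = torch.optim.Adam(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

for step in range(100):
    x = torch.randn(32, 512)   # fake inputs: first half "visible", second half "hidden"
    context, target = x[:, :256], x[:, 256:]

    predicted = predictor(context_encoder(context))
    with torch.no_grad():      # the target branch is not trained by this loss
        actual = target_encoder(target)

    # The loss compares abstract representations, never raw pixels.
    loss = ((predicted - actual) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the published JEPA work, the target encoder is typically a momentum (exponential-moving-average) copy of the context encoder and the inputs are masked image or video patches, but the essential point survives in the sketch: nothing ever tries to reconstruct pixels.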
As the broader A.I. world continues to fixate on LLMs, LeCun, along with a growing number of his peers, believes they are far from the ultimate solution.