The Future of AI


Last week I attended Yann LeCun’s keynote at the World AI Cannes Festival, and it captured my attention. LeCun – professor at NYU, Chief AI Scientist at Meta, and co-recipient of the 2018 Turing Award – shared profound insights into the current state and future trajectory of Artificial Intelligence. His presentation was not just a glimpse into the technical advancements in AI, but a call for a shift in how we approach these systems. 

 

The Present Limitations of AI 

LeCun made it clear: while AI has made significant strides, we are still far from achieving human-level intelligence across all domains. Today's most advanced AI models suffer from several critical shortcomings — they lack common sense, memory, reasoning, and the capability for hierarchical planning.  

Despite their linguistic fluency, large language models (LLMs), particularly auto-regressive LLMs, fall short of being factual, non-toxic, and controllable. They do not truly understand how the world works, and they mimic undesirable behaviors present in their training data; this is a fundamental flaw that cannot be remedied without significant redesign. For now, their best use cases are limited to writing assistance, generating first drafts, stylistic polishing, and coding. 
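To make "auto-regressive" concrete, here is a toy sketch of the sampling loop such models use (the bigram table and function names are hypothetical illustrations, nothing like a real LLM): each token is sampled conditioned only on the tokens generated so far, which is why an early sampling mistake propagates through everything that follows.

```python
import random

# Hypothetical toy "model": a hand-written bigram table mapping each
# token to its possible successors. A real LLM learns a far richer
# conditional distribution, but the generation loop has the same shape.
BIGRAMS = {
    "<s>": ["the"],
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["ran"],
    "sat": ["</s>"],
    "ran": ["</s>"],
}

def generate(max_len=10):
    tokens = ["<s>"]
    while tokens[-1] != "</s>" and len(tokens) < max_len:
        # Auto-regression: the next token depends only on the model's
        # own previous output, so errors compound rather than correct.
        tokens.append(random.choice(BIGRAMS[tokens[-1]]))
    return tokens[1:-1]  # strip the start/end markers

print(generate())
```

The loop never consults the outside world, only its own prior output, which is one way to see LeCun's point that fluency alone does not imply a model of how the world works.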

A Comparison with Human Learning 

A striking comparison made by LeCun highlighted that a 4-year-old child has been exposed to roughly 50 times more data than any existing LLM. This vast difference underscores a crucial point: human learning is not solely reliant on language but is deeply rooted in sensory experience. 

The Path Forward: What Are We Missing? 

The keynote shed light on the critical elements missing in today's AI systems: 

  • Systems that can learn world models from sensory inputs. 
  • Systems equipped with persistent memory. 
  • Systems capable of planning actions. 
  • Systems that are inherently controllable and safe by design, rather than being made so through fine-tuning. 

Research Directions and New Model Architectures 

LeCun's talk also pointed towards promising research directions aimed at overcoming these limitations. Innovations such as Objective-Driven AI, the Joint Embedding Predictive Architecture (JEPA), and DINOv2 represent the forefront of efforts to devise AI systems that are more aligned with the human way of learning and interacting with the world. 

The Vision of AI-Mediated Future and The Need for Open-Source 

Looking ahead, LeCun envisions a future where all our interactions with the digital world will be mediated by AI assistants. These assistants will not only serve as interfaces but will also act as repositories of human knowledge and culture. However, he stressed that these AI platforms must be open source: control over culture and knowledge should not rest in the hands of a few corporations on the US West Coast or in China. Open-source AI platforms are imperative to ensure that the benefits of AI are accessible to all, preventing monopolization and fostering innovation. 

Under LeCun’s guidance, Meta's AI division has taken a radical step by open-sourcing its most capable models, notably the powerful Llama-2. This move distinctly sets Meta apart from its main competitors, such as Google DeepMind, Microsoft-backed OpenAI, and Amazon-backed Anthropic, who have opted not to release the weights of their neural networks. Mark Zuckerberg highlighted the strategic advantage of open-source software, suggesting that it often becomes an industry standard, facilitating easier integration of innovations into products. While there is debate about the extent to which Llama-2 can be considered truly open-source, it is undeniably more open than any of its competitors. 

The “Godfathers of AI” battle 

 

LeCun’s vision of how to deal with AI alignment differs completely from that of Geoffrey Hinton and Yoshua Bengio, his fellow 2018 Turing Award winners, together known as the “Godfathers of AI”. While LeCun believes in a self-regulating market built on open-source foundation models, Hinton and Bengio are seriously worried about possible malicious applications of human-level AI. Hinton, co-author of the seminal work that popularized the backpropagation algorithm, left his position at Google in May 2023 to warn the world about the potential risks of AI surpassing human intelligence, including job displacement, biased decision-making, autonomous weaponry, and self-modifying AI systems. Bengio, a deep learning pioneer in whose lab Generative Adversarial Networks were invented, shares these concerns about AI safety, highlighting the urgency of AI regulation and promoting an open letter calling on the scientific community to slow down the development of AI systems capable of passing the Turing test. 

Conclusion 

Yann LeCun's keynote at the World AI Cannes Festival offered a comprehensive and forward-thinking perspective on the future of AI, underlining both its potential and its current limitations. Through his insights, LeCun has articulated a vision for AI that transcends mere technological advancement, advocating for a paradigm shift towards open-source AI platforms to democratize the benefits of AI, ensure its safe development, and foster innovation. This approach sets a contrasting path to the concerns raised by his peers, Geoffrey Hinton and Yoshua Bengio, who emphasize the urgent need for regulation and caution in the face of AI's potential risks. LeCun's advocacy for open-source AI as a solution to these challenges highlights a fundamental belief in the power of transparency and community-driven development to steer AI towards beneficial outcomes for humanity. As we stand at the cusp of significant advancements in AI, the divergent views of the "Godfathers of AI" underscore the complexity of the path forward.  

At Hyntelo, we believe that balancing innovation with ethical considerations and safety will be crucial in navigating the future of AI, making the ongoing dialogue between these opposing opinions an essential part of shaping a future where AI enhances human capabilities without compromising our values or security.