Differing Views on AI’s Future

Thanks Denise Holt for sharing this excellent article https://www.linkedin.com/pulse/deep-learning-rubbish-karl-friston-yann-lecun-face-off-denise-holt-n8i4c/ that highlights the debate at the Davos 2024 WEF, where Karl Friston and Yann LeCun presented differing views on AI’s future. Although the title “Deep Learning is Rubbish” is intentionally provocative to capture Friston’s critical stance on deep learning, it might oversimplify the nuanced views and complex discussion about the future of AI that the two presented. I would have loved the debate to go much deeper, addressing learning world models, reasoning, understanding, goals, planning, agency, etc., all of which are key to making real progress in AI. See also my article on “Intelligent Agents, AGI, Active Inference and the Free Energy Principle” https://lnkd.in/d4WebpXs and https://www.linkedin.com/posts/jacquesludik_ai-wef-worldeconomicforum-activity-7153828111908757504-9-lm .

Karl Friston’s Active Inference framework, rooted in the Free Energy Principle, along with VERSES’ spatial web technologies such as HSML and HSTP, proposes a model of learning and understanding the world that is closer to how natural systems self-organize and adapt. This approach emphasizes understanding the environment in a holistic, interconnected way and aims to create sophisticated, networked knowledge graphs that can simulate an understanding of context and relationships in a dynamic world. These tools might mirror certain aspects of natural intelligence by processing complex, interrelated data, yet they still operate within the confines of human-designed algorithms and computational models, distinct from the organic processes observed in nature.
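For readers less familiar with the Free Energy Principle, the core quantity is the variational free energy, an upper bound on “surprise” that an agent is assumed to minimize. The following is a standard textbook formulation, not anything specific to the Davos discussion or to VERSES’ products:

```latex
% Variational free energy F for approximate posterior beliefs q(s) over
% hidden states s, given observations o and a generative model p(o, s):
\[
  F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
    \;=\; D_{\mathrm{KL}}\!\left[q(s)\,\middle\|\,p(s \mid o)\right] \;-\; \ln p(o)
\]
% Because the KL divergence is non-negative, F upper-bounds the surprise
% -ln p(o); minimizing F makes beliefs more accurate while implicitly
% maximizing model evidence.
```

Active Inference extends this idea by having the agent also select actions expected to minimize free energy, which is where goals, planning and agency enter the picture.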

On the other hand, Yann LeCun’s advocacy for energy-based, self-supervised learning focuses on an efficient and scalable approach to AI training and to learning world models, where the AI learns mappings from input representations to output representations directly, without explicitly encoding them in knowledge graphs, which represent a more symbolic, structured approach. The human brain and other naturally intelligent agents don’t use explicit knowledge graphs to understand the world; the organic, fluid learning processes observed in nature are more akin to pattern recognition and predictive modeling.
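To make the energy-based, self-supervised idea concrete, here is a minimal illustrative sketch: encode a context view and a target view into representations, predict the target representation from the context, and score the pair with an energy where low energy means a compatible pair. This is not LeCun’s actual JEPA code; the class name, layer sizes and training details are illustrative assumptions.

```python
# Illustrative joint-embedding, energy-based sketch (PyTorch).
import torch
import torch.nn as nn

class TinyJointEmbeddingModel(nn.Module):
    def __init__(self, input_dim: int = 128, repr_dim: int = 32):
        super().__init__()
        # Separate encoders for the observed context x and the target y.
        self.context_encoder = nn.Sequential(
            nn.Linear(input_dim, repr_dim), nn.ReLU(), nn.Linear(repr_dim, repr_dim)
        )
        self.target_encoder = nn.Sequential(
            nn.Linear(input_dim, repr_dim), nn.ReLU(), nn.Linear(repr_dim, repr_dim)
        )
        # Predictor maps the context representation to a predicted target representation.
        self.predictor = nn.Linear(repr_dim, repr_dim)

    def energy(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        """Energy of an (x, y) pair: distance between predicted and actual
        target representations. Compatible pairs should get low energy."""
        s_x = self.context_encoder(x)
        s_y = self.target_encoder(y)
        return ((self.predictor(s_x) - s_y) ** 2).mean(dim=-1)

# Toy usage: x could be a masked or cropped view of some data and y the part
# to be predicted; training would push the energy of observed pairs down,
# with a regularizer (e.g. variance/covariance terms) to prevent collapse.
model = TinyJointEmbeddingModel()
x = torch.randn(8, 128)   # batch of context views
y = torch.randn(8, 128)   # batch of target views
loss = model.energy(x, y).mean()
loss.backward()
```

The point of the sketch is that prediction happens in representation space rather than over raw inputs or an explicit symbolic graph, which is the contrast being drawn with the knowledge-graph approach above.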

Both researchers agreed on the need for energy-efficient AI and on the limitations of current models in understanding complex world dynamics. Both approaches offer unique strengths: Friston’s mirrors natural processes and might provide deeper insights into complex dynamics, while LeCun’s offers practical scalability and efficiency in learning representations. The choice between these methods depends on the specific goals and constraints of the AI application in question. I see Active Inference AI as one of the building blocks for intelligent agents built on trustworthy AI guardrails, part of an ever-growing AI toolbox alongside energy-based self-supervised learning, generative AI, etc.

https://www.linkedin.com/posts/jacquesludik_thanks-denise-holt-for-sharing-this-excellent-activity-7157843692442030080-zDmM

Post by jludik
