AI at Meta, Open Source, Limits of LLMs, AGI and the Future of AI

Excellent conversation between Lex Fridman and Yann LeCun about AI at Meta, Open Source, Limits of LLMs, AGI and the Future of AI.

Good background for the technical discussion of Yann’s proposals and work can be found in his papers, such as “A Path Towards Autonomous Machine Intelligence”.

See also my article “Intelligent Agents, AGI, Active Inference and the Free Energy Principle”, which explores the questions about reasoning, understanding, goals, planning, agency, etc. that need to be answered to make real progress in AI.

See also other articles in the “Democratizing AI” Newsletter

Some of the key topics addressed in Lex’s latest discussion with Yann:

🎾 Importance of open source AI in countering concentration of power and shaping a positive future for humanity.

See also my book on “Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era” that shares similar sentiments.

🎾 Language generation works by autoregressive prediction, one token at a time, which is distinct from planning an utterance before speaking, regardless of language.
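The autoregressive process mentioned above can be sketched minimally: each token is predicted from the tokens generated so far, with no advance plan for the whole utterance. The bigram table below is a hypothetical toy stand-in for a real LLM.

```python
# Toy stand-in for an LLM: maps the last token to the most likely next token.
bigram_model = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def generate(prompt_token, max_new_tokens=3):
    """Greedy autoregressive decoding: repeatedly feed the last token back in."""
    tokens = [prompt_token]
    for _ in range(max_new_tokens):
        next_token = bigram_model.get(tokens[-1])
        if next_token is None:  # no known continuation: stop early
            break
        tokens.append(next_token)
    return tokens

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

A real model replaces the lookup table with a neural network over the full prefix, but the loop, predict, append, repeat, is the same.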

🎾 Training a joint embedding predictive architecture (JEPA) requires techniques that prevent representation collapse, such as contrastive learning or regularized non-contrastive methods, so that it produces meaningful representations.

🎾 Innovative AI techniques: I-JEPA and V-JEPA, which learn by predicting the representations of masked image and video regions, improving image and video representation learning.
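The core JEPA idea can be sketched in a few lines: instead of reconstructing masked pixels, predict the *representation* of the masked region from the representation of the visible context, and measure the loss in embedding space. The random linear maps below are hypothetical stand-ins for the trained encoders and predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_REP = 16, 8

context_encoder = rng.normal(size=(D_IN, D_REP))  # encodes visible patches
target_encoder = rng.normal(size=(D_IN, D_REP))   # encodes masked patches
predictor = rng.normal(size=(D_REP, D_REP))       # maps context rep -> predicted target rep

image = rng.normal(size=(2, D_IN))     # two "patches" of a toy image
visible, masked = image[0], image[1]   # mask out the second patch

s_context = visible @ context_encoder  # representation of what is seen
s_target = masked @ target_encoder     # representation to be predicted
s_pred = s_context @ predictor         # prediction made in embedding space

# JEPA-style loss: distance in representation space, not pixel space.
loss = float(np.mean((s_pred - s_target) ** 2))
print(f"embedding-space prediction loss: {loss:.3f}")
```

Training would minimize this loss while applying an anti-collapse technique (contrastive or regularized), so the encoders cannot cheat by mapping everything to the same point.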

🎾 Self-supervised learning in autoregressive LLMs.

🎾 Uncovering the depth of human communication beyond language through implicit cues and humor.

🎾 Development of a new AI system for complex problem-solving using internal world models.

🎾 Recommendations to shift from generative models to joint embedding architectures for improved image representation.

🎾 Utilizing open source platforms to enhance AI systems for diverse applications and industries.

🎾 Ethical considerations in AI development, including defining hate speech and dangerous content, and the limitations of large language models.

🎾 Intelligence is multidimensional, so entities cannot be ranked on a single scale of skills. AI doomers’ catastrophic scenarios rest on false assumptions.

🎾 Fear of new technology’s societal impact is rooted in human psychology and resistance to change.

🎾 Challenges in training world models and planning in non-physical systems, and the need to scale up innovative work on learning world models, reasoning, understanding, goals, planning, agency, etc., all of which are key to making real progress in AI.
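Planning with an internal world model, in the spirit of the objective-driven architecture discussed in the episode, can be sketched as follows: the agent simulates candidate action sequences with its model of the world and picks the sequence whose predicted outcome is closest to the goal. The 1-D dynamics below are a hypothetical toy, not Yann’s actual proposal.

```python
from itertools import product

def world_model(state, action):
    """Predicted next state: current position plus the chosen move (toy dynamics)."""
    return state + action

def plan(start, goal, horizon=3, actions=(-1, 0, 1)):
    """Exhaustively search action sequences; return the best one and its cost."""
    best_seq, best_cost = None, float("inf")
    for seq in product(actions, repeat=horizon):
        state = start
        for a in seq:  # roll the world model forward; no real actions are taken
            state = world_model(state, a)
        cost = abs(state - goal)  # distance of the predicted outcome from the goal
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

seq, cost = plan(start=0, goal=2)
print(seq, cost)  # a 3-step sequence summing to 2, with cost 0
```

Real systems replace the exhaustive search with gradient-based or learned optimization over a learned world model, but the structure, simulate, score against an objective, act, is the same.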

Post by jludik
