The Evolution of AI: Milestones, Challenges and the Future
Executive Education / 24 May 2024
Moritz Strube is an expert and practitioner with 25 years of experience in artificial intelligence and data science. He is currently CTO of the AI start-up InspectifAI and was previously CTO and co-founder of several tech start-ups. He studied mathematics, economics and business administration. Moritz has been a lecturer at Frankfurt School for seven years, teaching Artificial Intelligence and Data Science courses, and has successfully taught the basic concepts of AI to hundreds of students.


The objective of Artificial Intelligence (AI) as a subfield of computer science is to create systems capable of performing tasks that would typically require human intelligence. This encompasses a wide range of capabilities such as learning, reasoning, problem-solving, perception, and understanding of natural language. The ultimate goal is to develop machines that can operate autonomously, adapt to new situations and perform complex tasks in a manner similar to humans.

From AI Winters to Deep Learning Breakthroughs

AI has gone through various phases, characterised by different approaches and interrupted by periods of reduced interest and investment known as "AI winters". From its beginnings in the 1950s, through the golden years of the 1960s and 1970s, to the emergence of machine learning in the 1980s and 1990s, the development of AI has experienced several significant turning points. Since the 2000s, the era of Big Data has enabled breakthroughs in Deep Learning thanks to the availability of large data sets and significant advances in methods and computing power, particularly through GPUs. This has led to unprecedented progress in fields such as computer vision, natural language processing and autonomous vehicles.

Deep Learning has become the dominant approach in AI following a series of groundbreaking successes, most notably AlexNet's victory in the 2012 ImageNet competition. This success and the developments that followed have led to significant advances in several areas, from improved architectures for computer vision to applications in other domains such as natural language processing and generative modelling.
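To give a concrete sense of what such an architecture looks like, here is a minimal sketch of an AlexNet-style convolutional network in PyTorch. The layer sizes are illustrative assumptions, not AlexNet's actual configuration; the point is the characteristic pattern that AlexNet established at scale: stacked convolution, activation and pooling layers feeding into fully connected layers.

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """A deliberately small AlexNet-style network (illustrative sizes only):
    convolution + ReLU + pooling blocks followed by fully connected layers."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),             # dropout, the regulariser AlexNet popularised
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = TinyConvNet()
logits = model(torch.randn(1, 3, 32, 32))  # one 32x32 RGB image
print(logits.shape)                        # torch.Size([1, 10])
```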

Despite the vast capabilities and groundbreaking applications of Deep Learning across various fields, these technologies face several challenges and limitations: dependence on large data sets, high computational costs, a lack of interpretability and transparency, problems with generalisation, bias and fairness, overfitting, vulnerability to adversarial attacks, high energy consumption, regulatory and ethical concerns, and challenges with the scalability and robustness of models.
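One of these limitations, overfitting, is easy to demonstrate in a few lines. The following minimal sketch uses scikit-learn with synthetic data (the dataset, sample size and polynomial degrees are illustrative assumptions): a model with excessive capacity fits the training set almost perfectly while generalising worse to held-out data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic data: a noisy sine wave with very few samples.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, size=30)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.5, random_state=0
)

for degree in (3, 15):  # modest vs. excessive model capacity
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    val_err = mean_squared_error(y_val, model.predict(X_val))
    # The high-degree model drives training error towards zero but
    # performs worse on validation data: the hallmark of overfitting.
    print(f"degree={degree:2d}  train MSE={train_err:.4f}  val MSE={val_err:.4f}")
```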

The entire industry is actively addressing these challenges, dedicating immense resources and innovative effort to pushing the boundaries further. However, when it comes to realising the broader vision of AI, Deep Learning faces more fundamental problems. Yann LeCun, who received the 2018 Turing Award for his work on Deep Learning, recently pointed out that large language models (LLMs) lack the capabilities of perception, memory and reasoning and do not generate actions, all of which are things that intelligent systems should do.

Challenges and the Quest for Human-like Intelligence in AI Research

This criticism highlights the significant limitations of current models compared to the broader goals of AI research to create artificial agents that can perform tasks with human-like intelligence. To bridge this gap, future research could aim to develop AI systems that can truly perceive, remember, reason and act in the world – moving beyond pattern recognition and text generation to achieve a more holistic form of intelligence. This may involve integrating LLMs with other types of AI technologies, such as robotic systems that interact with the physical world or AI systems equipped with more sophisticated forms of reasoning and problem-solving capabilities.

Overall, the development and current state of AI, especially in the realm of Deep Learning, demonstrate both impressive successes and significant challenges. While Deep Learning is capable of mastering complex tasks across various fields and driving groundbreaking technological advances, the fundamental limitations of these technologies become apparent in the context of the broader vision of Artificial Intelligence and the quest for Artificial General Intelligence (AGI). The limitations in understanding, memory, reasoning and the ability to act autonomously in the physical world highlight the gap between current AI capabilities and human-level intelligence.

To close this gap and move closer to the vision of AI that can truly act intelligently and autonomously, breakthroughs are needed that go beyond today's Deep Learning approaches. The future of AI research may lie in developing new paradigms [2] that combine a deeper integration of cognitive capabilities, better generalisation across different domains and genuine interaction with the environment. Despite the challenges, AI remains a fascinating and dynamic field of research, and its potential and limitations continue to be actively explored and expanded.

[1] This is an abridged version of the article “Challenges for and limitations of Deep Learning”, which was published here: https://open.substack.com/pub/moritzstrube/p/challenges-for-and-limitations-of

[2] In an open letter to OpenAI, the company Verses recently announced that it has achieved the necessary breakthrough: https://www.verses.ai/press-2/verses-identifies-new-path-to-agi-and-extends-invitation-to-openai-for-collaboration-via-open-letter

 
