In a recent discussion, Dario Amodei, the CEO of Anthropic, predicted that human-level artificial intelligence (AI) could arrive between 2026 and 2027 if the current pace of AI advancement continues. Speaking on a podcast hosted by Lex Fridman, Amodei likened the ongoing evolution toward artificial general intelligence (AGI) to an educational progression: “We’re starting to get to PhD level, and last year we were at the undergraduate level, and the year before, the level of a high school student.” The analogy underscores how rapidly the capabilities of current AI systems are growing.
Amodei emphasized that while the advancements are promising, several factors could impede progress. He cited potential setbacks including data shortages, limits on scaling compute clusters, and geopolitical issues that could disrupt microchip supply chains. Nonetheless, he remains optimistic, asserting that if performance keeps improving along its current trajectory, reaching human-level AI within the next few years is feasible.
During the conversation, Amodei reflected on the dual nature of powerful technologies. “Things that are powerful can do good things, and they can do bad things,” he remarked, highlighting the need for caution and ethical consideration in the development and deployment of human-level AI. He added, “With great power comes great responsibility,” stressing the potential societal implications of the technology.
Anthropic’s flagship product, the AI chatbot Claude, is a significant player in a competitive landscape that includes prominent rivals such as OpenAI’s ChatGPT. When asked about future iterations of Claude, specifically Claude 3.5 Opus, Amodei declined to give a specific release date, emphasizing instead the company’s mission to foster a “race to the top”: a commitment to encouraging ethical practices among AI developers and organizations.
In parallel, OpenAI CEO Sam Altman has echoed similar timelines for the arrival of AGI, suggesting it could be achieved within the next five years, even on existing hardware. AGI is generally defined as AI that matches or surpasses human capabilities across a broad spectrum of cognitive tasks. As these discussions unfold, it is clear that the race toward human-level AI is not only exciting but also fraught with challenges that will require careful navigation.