OpenAI Chief Executive Sam Altman claims the company is on the doorstep of developing Artificial General Intelligence (AGI): a machine capable of matching or surpassing human intelligence by applying itself to any intellectual task a human can perform. In a recent blog post, Altman forecast that the first AI agents will enter the workforce as early as 2025, reshaping entire industries and hastening scientific discovery.
While Altman's optimism may excite some, skeptics abound in AI circles. Experts argue that the definition of AGI remains fluid and elusive; what Altman presents as AGI could amount to more advanced automation rather than complete human-level capability.
Evolving Definitions of AGI
The concept of AGI has shifted over time as AI models have improved. Many systems now pass benchmarks like the Turing Test, yet they lack human traits such as sentience or emotional reasoning.
“While these systems achieve remarkable results, they’re far from true AGI,” says Humayun Sheikh, CEO of Fetch.ai. “AGI isn’t just about intelligence—it’s about sentience, and that milestone is still distant.”
Despite this, Altman remains steadfast, claiming OpenAI has cracked the code for building AGI. Critics suggest such bold predictions may be aimed at maintaining investor confidence, given OpenAI’s steep operational costs.
Can AGI Agents Revolutionize Work?
Altman predicts that AI agents entering the workforce in 2025 will significantly impact productivity. However, these agents are likely to excel at repetitive tasks rather than creative or decision-making roles.
“AI agents can reason and analyze, but they lack human ingenuity,” explains Harrison Seletsky, Director of Business Development at SPACE ID. Similarly, Charles Wayn, co-founder of Galxe, highlights ongoing issues with AI consistency and reliability. He estimates AGI may still be years, not months, away.
Current AI frameworks like Crew AI and LangChain already demonstrate agentic behaviors, enabling systems to collaborate with humans on specialized tasks. Yet, these systems frequently rely on human intervention to address context, bias, or errors.
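The human-in-the-loop pattern described above can be sketched in a few lines of code. This is a minimal conceptual illustration only, not the actual Crew AI or LangChain API; the `mock_agent` function and confidence threshold are hypothetical stand-ins for a real model call and routing policy.

```python
def mock_agent(task: str) -> tuple[str, float]:
    """Stand-in for a model call: returns an answer and a confidence score.

    A real framework would invoke an LLM here; this mock only knows one task.
    """
    known = {"summarize report": ("Q4 revenue grew 12%...", 0.92)}
    return known.get(task, ("unsure", 0.30))

def run_with_oversight(tasks: list[str], threshold: float = 0.8):
    """Complete high-confidence tasks; route the rest to a human review queue."""
    completed, needs_review = [], []
    for task in tasks:
        answer, confidence = mock_agent(task)
        if confidence >= threshold:
            completed.append((task, answer))
        else:
            # Low confidence: human intervention required to catch
            # missing context, bias, or outright errors.
            needs_review.append(task)
    return completed, needs_review

done, review = run_with_oversight(["summarize report", "judge legal risk"])
```

In this sketch the agent handles the repetitive task autonomously, while the ambiguous one lands in the human queue, mirroring how current agentic systems defer to people rather than replace them.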
The Future of Human-AI Collaboration
The rise of AI agents does not necessarily spell doom for human workers. Experts suggest a collaborative approach, where AI handles repetitive tasks, and humans focus on creativity and critical thinking, could enhance productivity.
Research from the City University of Hong Kong advocates for AI-human collaboration to ensure sustainable growth. “AI creates both challenges and opportunities,” the study states. “Collaboration is key to addressing these challenges effectively.”
However, some industries are already replacing humans with AI agents, with mixed results. As of 2024, approximately 25% of CEOs express enthusiasm for AI-driven automation to cut labor costs. At the same time, AI limitations—such as hallucinations and lack of context—mean that human oversight remains essential.
The Pursuit of ASI
Altman’s ambitions extend beyond AGI to Artificial Superintelligence (ASI)—AI systems that surpass human intelligence in all areas. While Altman has not set a clear timeline for ASI, he previously estimated its arrival within “a few thousand days.”
Many experts disagree. Yann LeCun, Meta’s chief AI scientist, argues that hardware and training limitations keep ASI out of reach for now. Forecasting Institute studies place a 50% likelihood of ASI development around 2060.
Critics like Eliezer Yudkowsky believe Altman’s grand announcements may be more about generating short-term excitement for OpenAI’s progress.