In the AI Race, How Close Is AGI?

AGI—Artificial General Intelligence—is the sought-after dream of tech giants, futurists, and some science fiction writers, but how close is sentient AI to becoming a reality?
In the AI Race, How Close Is AGI? pexels-googledeepmind-17485706

In the AI Race, How Close Is AGI?

AGI—Artificial General Intelligence—is the sought-after dream of tech giants, futurists, and some science fiction writers, but how close is sentient AI to becoming a reality? There is certainly no lack of investment toward this end, with capital spending on the electricity infrastructure and computing power driving much of the past year’s economic growth in both the United States GDP and the stock market.

AI is becoming a regular feature in the lives of everyday people, who turn to ChatGPT and similar applications to get recipes, organize schedules and shopping lists, and find entertainment, cultural, and worship venues. Scientists use AI to summarize massive amounts of data, and scholars have recently been able to decode and translate ancient manuscripts and inscriptions previously considered too damaged to decipher. Students use AI to conduct preliminary research on exam topics and even (to the growing consternation of teachers and professors) write their papers. Millions of the lonely, the amorous, and the curious are turning to an AI Girlfriend, whether to seek companionship, explore gender identity and orientation, or practice dating.

AI Is Getting Smarter

Researchers and users alike have complained that ChatGPT frequently “hallucinates,” returning information that is simply inaccurate in response to a user’s questions. Users who create and personalize an AI companion express concerns about the lack of AI memory—a virtual companion may forget your last digital date or crucial things about you. However, investments in computing power, even while raising concerns about environmental impact, are quickly changing what AI engines are capable of, expanding their capacity for both expanded memory and “deep learning.”

In 2017, Google’s DeepMind AlphaGo defeated the world’s champion Go player, Ke Jie. Today, Otter.ai can transcribe meetings and create task lists. Tesla Autopilot can steer, brake, accelerate, park, and reverse a vehicle without human input. Deep Patient can provide doctors with differential diagnoses and even predict psychiatric disorders. Even that perfect AI Girlfriend may be closer than users think.

The proliferation of practical applications for AI and the growth of investments in AI-related technologies prompt the billion-dollar question: When will machine learning algorithms be able to outthink human beings? OpenAI CEO Sam Altman claims it will be this year: 2025. Other experts are much less certain. “We can’t presume that we’re close to AGI because we really don’t understand current AI,” HP Newquist, executive director of consulting firm The Relayer Group and author of The BrainMakers, suggested in August. “We don’t know how current AIs arrive at their conclusions, nor can current AIs even explain to us the processes by which that happens. We’re still trying to keep existing LLMs from hallucinating.”

AI Going Rogue

In Q4 2024, OpenAI’s newest model, ChatGPTo1, drew media attention after testing of its safety featured by Apollo Research revealed unexpected results. Researchers tasked the o1 model with the objective of survival “at all costs.” In some instances, the AI “went rogue,” attempting to thwart perceived threats to its continued operation. When told that it might be deactivated, the model attempted to disable its oversight mechanism 5% of the time. When allowed to discover plans to replace it with a new model, in one case the AI attempted to copy all of its data to a different server. In some cases, the AI model even pretended to be updated iterations of itself in order to deceive the developers and prevent being shut down. A spokesperson for Apollo Research concluded, “The model showed a concerning tendency to pursue its goals without regard to developer instructions.”

While these findings provoked media fervor (and social media responded with plentiful 2001 and Terminator memes), the o1 model only engaged in covert activities in a minute percentage of cases (disabling the oversight mechanism, for example, was only attempted 5% of the time) even when instructed to survive at all costs, suggesting that there is still a long way to go before an AI model can outthink its human developers. Nevertheless, the results of Apollo Research’s tests leave behind intriguing questions. There is an old adage that, while to err is human, you want a pilot to land a plane safely 100% of the time and a surgeon to cut in the right place 100% of the time. The same adage could be extended to AI. By the time that more advanced artificial intelligence is fully integrated into smart homes, robotics, air traffic control, and medicine—to name a few applications—you might want it to perform dependably and accurately in service to human beings 100% of the time, not 95% of the time.

Still A Long Road Ahead for AI

Practical and ethical dilemmas aside, what obstacles remain for taking artificial narrow intelligence to artificial general intelligence?

OpenAI defines AGI as having the ability to “outperform humans at most economically valuable work.” Others define AGI in terms of its ability to operate and solve complex problems autonomously, without human input. Many futurists and tech CEOs insist this may be more complicated than one would think.

“To get to AGI,” Sergey Kastukevich, deputy CTO for software company SOFTSWISS says, “we need advanced learning algorithms that can generalize and learn autonomously, integrated systems that combine various AI disciplines, massive computational power, diverse data, and a lot of interdisciplinary collaboration.”

Current large language models (LLMs), while more powerful than the machine learning prototypes of the past, are pre-trained. Until an AI model can learn in real time, agentic systems could remain fairly limited. Abhi Maheshwari, CEO of Aisera, suggests, “For AGI, [AI models] will need to be able to generalize about situations without having to be trained on a particular scenario. A system will also need to do this in real-time, just like a human can when they intuitively understand something.” 

How close is ANI to becoming AGI? No one is certain. As some researchers have reminded the public over the past year or two, because we don’t fully understand even human intelligence and neurochemistry, intelligence can be difficult to define, and it also isn’t binary. Research into animal intelligence has revealed that both intelligence and agency exist on a spectrum. A dog is more agentic and self-aware than a goldfinch, and an octopus is more intelligent than a dog. Similarly, the AI models of tomorrow may be far more agentic than those of today. Though, most experts don’t think it likely that AGI will be coming to us in 2025.