March 15, 2024

How Close Are We to AGI (Artificial General Intelligence)?

The rise of generative AI models like ChatGPT and Gemini has fuelled another buzzword: AGI. Artificial General Intelligence is whispered about in boardrooms and conspiracy back-alleys alike, as the catalyst for an AI armageddon that’s about to come calling.

But what is AGI? And is it around the corner or some distance away?

Artificial General Intelligence is an AI system capable of thinking and performing tasks the way humans do. It can adapt to a new situation, learn how to respond, and carry out the appropriate action without task-specific training or human intervention - at least in theory.

However, with AI and AGI, two questions always follow: 

  1. Are we close to achieving AGI? 
  2. Will AI (or AGI) replace humans?

Read on to learn what AGI is and how close we might be to it.

When will artificial general intelligence be achieved?

The answer is not simple.

Estimates of how long it will take to reach AGI vary - a few years, a decade, or maybe two. However, there is a broad consensus that AGI will become a reality within our lifetime.

In the 2022 Expert Survey on Progress in AI (2022 ESPAI), the aggregate forecast placed a 50% chance on high-level machine intelligence existing by 2059.

Nobody knows for sure, as AI companies and their leaders are extremely tight-lipped about it. This only adds to the fear and mystery surrounding AGI. Still, some information slips through in interviews and podcasts. Here’s what tech leaders have to say about AGI:


  • Sam Altman, CEO of OpenAI, believes that AGI will possibly be built in the “reasonably close-ish future”. However, he also expects AGI’s impact on the world and on jobs to be considerably smaller than we are imagining right now.
  • Google DeepMind’s co-founder Shane Legg said in an interview with a tech podcaster that there’s a 50% chance AGI will be achieved by 2028. He had made the same prediction publicly on his blog back in 2011.
  • While speaking about his vision behind xAI, Elon Musk predicted full AGI by 2029. He argues that rather than replacing humans, AI will need them to remain interesting and useful.
  • A similar prediction was made by Ray Kurzweil at the 2017 SXSW conference in Texas. Kurzweil, a futurist, inventor, and author, is said to have made 147 predictions in total, of which 86% have come true.

“By 2029, computers will have human-level intelligence,” he said. “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario. It’s here, in part, and it’s going to accelerate.”


And we can’t discount quantum computing as a potential accelerator of AGI development. A recent study published in Nature Communications lends support to this idea.

However, the high costs and specialized expertise required for quantum computing create significant accessibility hurdles, raising questions about its practical implementation and the potential risks of faster, more powerful AI.

But why are people scared of AGI?

To understand the fear gripping tech and VC boardrooms, we need to truly grasp what AGI could ultimately mean -

Machines replacing human beings.

This scenario might sound straight out of science fiction, often brought to life in books and movies. But with AI's rapid advancements, the question of "when" rather than "what if" seems increasingly valid.

Below are the key aspects of AGI that fuel these fears:

  • Losing control to superintelligence: The possibility of AGI surpassing human intelligence and operating independently raises ethical, transparency, and safety questions. If we lose control of a superintelligent AI, its goals may no longer align with ours, potentially causing harm.
  • Ethical dilemmas: AGI could exacerbate inequalities if it becomes available only to a small section of society. While developing AGI, we will be confronted with questions like: How can we ensure it is built with human values such as fairness, justice, and safety in mind? Can we prevent the malicious use of such a powerful technology?
  • Job market displacement: With human-like intelligence and autonomy, AGI could make jobs like bank teller and data entry clerk obsolete. The WEF Future of Jobs Report 2023 highlights a significant shift in the labor market in the coming years. New jobs may be created in AI, automation, and data analytics, but only part of the workforce will be positioned to benefit, leaving those who cannot keep up jobless and hard to employ. However, some organizations that foresee this shift have already started training and preparing their workforce to be future-ready.


Can AGI developers and AI leaders help mitigate the fears?

The prospect of AGI fuels contrasting perspectives of the future – some fear a dystopian world ruled by superintelligent machines, while others see a golden age of human-AI collaboration.

Could the reality, perhaps, lie somewhere in between? We don’t know, yet.

Ultimately, technological capability alone doesn’t determine AGI’s impact on society. We need a thoughtful approach to integrating AGI into our daily lives. AI leaders and AGI developers must keep transparency, accountability, and responsible alignment with human values as central tenets in building AGI. By doing so, we can harness AGI’s potential to address complex challenges and expand our understanding of what this technology can do.

When do you think AGI, possibly the ultimate achievement in the field of AI, will be achieved?

Let us know your thoughts.
