By Taranjeet Singh

AI Hits A Speed Bump: Why The Next Big Thing Isn't Coming So Fast

Artificial intelligence has set off a new wave in the tech world over the past few years. A race is on to build the most powerful AI systems, with the ultimate goal of surpassing human intelligence.

Over time, each development in the AI arena has drawn immense public enthusiasm. ChatGPT launched on GPT-3.5, the first large language model (LLM) to reach a mass consumer audience, and was followed by more advanced models such as GPT-4, GPT-4o, and so on.

Now, a new term has entered the conversation: artificial general intelligence (AGI), the idea of systems capable enough to transform the business world.

However, the race to develop ever-smarter AI has hit a speed bump.

AI developers such as OpenAI, Google, and Anthropic keep promising new models on regular schedules; however, recent models have failed to deliver the leaps in capability that were expected.

Yann LeCun, AI pioneer and Meta's chief AI scientist, made a controversial statement, calling today's AI dumber than a cat.

In an interview with The Wall Street Journal, he said, "Today's models are really just predicting the next word in a text. But they're so good at this that they fool us. And because of their enormous memory capacity, they can seem to be reasoning when, in fact, they're merely regurgitating information they've already been trained on."
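LeCun's point is easiest to see in miniature. Below is a toy sketch of next-word prediction: a bigram model, vastly simpler than any LLM and purely illustrative, which has no understanding at all, only statistics about which word tends to follow which.

```python
# Toy sketch of "next-word prediction," the mechanism LeCun describes.
# A bigram model counts which word follows which in its training text,
# then always emits the most frequent successor. Purely illustrative.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat".split()

# Count, for each word, which words followed it and how often.
successors = defaultdict(Counter)
for word, nxt in zip(training_text, training_text[1:]):
    successors[word][nxt] += 1

def generate(start, length=6):
    out = [start]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # pick the likeliest next word
    return " ".join(out)

print(generate("the"))  # prints a short, purely statistical continuation
```

Real LLMs replace the word counts with billions of learned parameters, but the loop is the same: predict the next token, append it, repeat.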

Does Scaling AI Models Mean Better Output?

For a long time, the entire AI industry has followed a single notion: increasing the size of models (in computing power, training data, and parameters) will lead to more powerful AI systems.

Think of baking a cake: the bigger the cake, the more ingredients you need. Similarly, companies have followed so-called scaling laws, pouring ever more data and compute into training on the bet that the more a model is trained, the better its answers become.
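Researchers have made this intuition quantitative. Scaling laws are empirical power-law fits relating a model's loss to its size and training data; a representative form is the Chinchilla-style fit

L(N, D) ≈ E + A/N^α + B/D^β

where N is the number of parameters, D the number of training tokens, E an irreducible error floor, and A, B, α, β constants fitted from experiments. Growing N or D pushes the loss down, but with diminishing returns, which is exactly where the current debate lies.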

However, everything has a limit, and so do AI models. Ilya Sutskever, co-founder of Safe Superintelligence (SSI) and formerly OpenAI's chief scientist, argues that the gains from simply scaling up pre-training have plateaued, and that pushing models harder along that axis no longer delivers the expected performance.

As he told Reuters in November 2024, "The 2010s were the age of scaling; now we're back in the age of wonder and discovery once again."

Running Out of Authentic Data

AI models are trained on human-made data. As curated sources run low, companies turn to scraping publicly available data from the internet; however, that approach is now reaching a dead end.

To build robust AI systems that people can trust with complex tasks, companies need high-quality datasets, which are becoming harder and more expensive to acquire.

"Access to unique, human-generated data is paramount in the pursuit of human-like intelligence in AI. Think of it this way: the human brain learns from a lifetime of diverse, nuanced experiences. High-quality data leads to more accurate, reliable models that are less prone to hallucinations. Synthetic data does have a role, but we cannot discount the value of human-generated data," said Gnani.ai’s Gopalan.

Counting the Cost

Training AI models requires companies to shell out a hefty share of their reserves. The budget needed to build frontier models such as GPT-4o and Google's Gemini is out of reach for most companies: a single model can cost tens or even hundreds of millions of dollars.

Current estimates put the cost of training a frontier model at around $100 million, a figure expected to climb in the near future.
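For a rough sense of where numbers like that come from, here is a back-of-envelope sketch using the widely cited ~6·N·D FLOPs rule of thumb for transformer training. Every figure below (model size, token count, GPU throughput, price per GPU-hour) is an illustrative assumption, not a number from any lab.

```python
# Back-of-envelope training-cost estimate using the common ~6*N*D FLOPs
# rule of thumb for transformers. All numbers are illustrative assumptions.
params = 2e11          # N: 200B parameters (assumed)
tokens = 1e13          # D: 10T training tokens (assumed)
flops = 6 * params * tokens           # ~1.2e25 FLOPs in total

gpu_flops = 4e14       # effective FLOP/s per GPU after utilization (assumed)
gpu_hours = flops / gpu_flops / 3600  # ~8.3 million GPU-hours
cost = gpu_hours * 2.0                # at an assumed $2 per GPU-hour

print(f"{gpu_hours:,.0f} GPU-hours, ~${cost / 1e6:,.0f}M")  # tens of millions
```

Change the assumptions (a larger model, more tokens, pricier hardware) and the estimate climbs quickly toward, and past, the $100 million figure above.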

What Are the Ways Forward?

AI companies like OpenAI are now looking for ways to keep improving their models without simply making them bigger, for instance by spending more compute at inference time rather than at training time.

OpenAI research scientist Noam Brown wrote on X, "With o1, we developed one way to scale test-time compute, but it isn't the only way and might not be the best way. I'm excited to see academic researchers explore new approaches in this direction."
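OpenAI has not published how o1 actually spends its extra inference-time compute, so the sketch below shows just one simple, well-known way to scale test-time compute: best-of-N sampling, where you pay for N model calls and keep the answer a scoring function likes best. `sample_answer` and `score_answer` are hypothetical stand-ins for a model call and a verifier.

```python
# Sketch of one generic way to scale test-time compute: best-of-N
# sampling. Not OpenAI's method; the two helpers below are hypothetical
# stand-ins for a stochastic model call and a verifier/reward model.
import random

def sample_answer(prompt: str) -> str:
    # Placeholder for one stochastic model completion.
    return f"answer-{random.randint(0, 9)}"

def score_answer(prompt: str, answer: str) -> float:
    # Placeholder for a verifier that rates an answer's quality.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    # Spending more compute (larger n) raises the odds of a good answer.
    candidates = [sample_answer(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score_answer(prompt, a))

print(best_of_n("What is 17 * 24?", n=8))
```

The appeal of this family of techniques is that quality improves with inference budget, without retraining or enlarging the underlying model.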

OpenAI has also shifted its focus from building ever-larger models to developing AI agents: software built on new use cases for existing models, carrying out multi-step tasks on a user's behalf.

"We will have better and better models, but I think the thing that will feel like the next giant breakthrough will be agents," Sam Altman, founder of OpenAI, said in a Reddit post.

Everyone in the field of AI is playing a part in the search for what comes next, but the next big thing, it seems, will take longer to arrive than the hype suggested.