Is the AI bubble real? Truths, myths and fictions

The AI bubble is the new hot topic in business publications. Ever since the Succession-style episode in which Sam Altman was dismissed by the board of OpenAI, only to return and remove the entire board, the world of artificial intelligence has seemed more like a work of fiction than an obvious reality.

Every day brings fresh news: artificial intelligences that go off the rails and have to be switched off, bankruptcy projections for market leaders due to lack of profitability, management comings and goings, efficiency pledges that are never fully kept, experts predicting the end of humanity. It is difficult to fully understand what is going on.

So, is the AI bubble real? Are we going to wake up tomorrow without ChatGPT? Are robots going to take over the world, leaving us stuck in the Matrix serving as their batteries?

Let's look at the problems facing artificial intelligence today, their possible solutions, and the issues that seem unlikely to be resolved.

But first, let's look at the background that gave rise to the bubble rumours.

The race for artificial intelligence


Get there first, whatever it takes. That seems to have been the guiding principle that turned ChatGPT into the global phenomenon it is today.

The aim was not to produce a finished or flawless product, but simply to reach the market ahead of the competition. When your competitors are companies like Google or Amazon, which hold near-monopolies in their respective markets, you know you are taking a gamble.

ChatGPT launched with many shortcomings, but it managed to turn OpenAI into the king of artificial intelligence, helped along by a large partnership deal with Microsoft.

The early launch of ChatGPT had several consequences. One of them was the soap opera of Sam Altman's dismissal by the OpenAI board, although the reasons behind it were well founded.

On the one hand, the board feared the product was not safe enough to go to market. On the other, money was being burned (and if we have learned anything in recent months, it is that AI profitability is, for now, a chimera). On top of that, the accusations against Altman of behaving like a despot and creating an environment of psychological abuse within the company did not make his position any easier either.

Altman was dismissed and Microsoft hired him on the spot. Then OpenAI's own employees called for his return. He came back and got rid of his company's board of directors.

Meanwhile, the competition went its own way, with Google standing out thanks to Gemini, a multimodal AI that held up quite well in any comparison with ChatGPT.

The end of the world and why tech gurus are obsessed with AI


There have been several reports by Silicon Valley journalists that I think are worth highlighting, because what they describe could be fundamental in keeping the AI bubble from bursting.

It is said, and has been published in the United States, that the big tech gurus, obsessed with data, put an algorithm to work on identifying the main threat to the survival of the human species.

Right now, there is no shortage of threats: climate change, volcanic eruptions, potential invasions by extraterrestrial civilisations, nuclear disasters, world wars, artificial intelligence and so on. According to the algorithms involved, and by a long way, artificial intelligence was the main threat to humanity.

These gurus, with their entrenched anti-democratic, technocratic thinking (i.e. they believe the "best" should lead humanity and that, of course, they are the best), decided to invest in artificial intelligence in order to avert that threat.

It may seem counterproductive, but their reasoning was that if they controlled the development of AI, then they, as supermen, could also prevent it from destroying us.

They also drew a further conclusion: since this threat stood so far above the others, almost all resources should be devoted to it, to the detriment of areas such as the environment, the main loser in the race for AI.

Meanwhile, workers in the industry are becoming increasingly critical of AI, and technology-free schools for their children are proliferating in San Francisco and Silicon Valley.

Profitability, the great battleground


Leaving aside the end of the world and the gossip, many expert voices argue that the real problem facing artificial intelligence is profitability.

The situation is straightforward. General-purpose models such as ChatGPT need to be trained on a huge amount of data, which costs money. It also costs money every time the model processes a query to produce an answer.

On top of that, the data can run out, and the results are not as good as expected.
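To make the economics concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (cost per query, queries per user, subscription price) is a hypothetical placeholder, not a real OpenAI number; the point is only to show how serving costs grow with usage while a flat subscription fee does not.

```python
# Back-of-the-envelope sketch of generative AI unit economics.
# All numbers below are hypothetical assumptions, for illustration only.

COST_PER_QUERY = 0.02        # assumed serving (inference) cost per query, in dollars
SUBSCRIPTION_PRICE = 20.0    # assumed flat monthly subscription price, in dollars

def monthly_margin_per_user(queries_per_month: int) -> float:
    """Revenue minus serving cost for one subscriber in one month."""
    return SUBSCRIPTION_PRICE - queries_per_month * COST_PER_QUERY

for queries in (100, 300, 1000, 2000):
    margin = monthly_margin_per_user(queries)
    print(f"{queries:>5} queries/month -> margin per user: ${margin:+.2f}")

# With these assumptions a light user is profitable, but heavy users push
# the margin negative: a flat price does not cover usage-based costs,
# and none of this includes the fixed cost of training the model.
```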

To tackle this problem, Sam Altman and the company are spending enormous sums of money (this was, in fact, the main reason given for his dismissal).

A couple of weeks ago, news broke that OpenAI only had funds to operate for one more year before facing bankruptcy. A few days later, it emerged that the company would go through a funding round in which Microsoft, among others, would come to the rescue.

This is not only the case at OpenAI: losses seem to be the norm across the sector, with 2026 tipped as the year the AI bubble collapses.

However, Microsoft's rescue does not look like a one-off, but rather the general trend we are going to see in the coming years: big companies investing heavily in artificial intelligence even though a return is all but impossible.

After all, this is a common tactic among so-called disruptive companies, which run at a loss for years while trying to change the established business model and which, more often than not, never become profitable.

But for how long can money keep being sunk in with no return?

Sustainability, the big loser


It seemed that this era was going to be defined by sustainability commitments: more and more businesses were adopting ESG criteria, and green finance continued its seemingly unstoppable rise.

However, the AI bubble seems set to change that. Companies such as Google are going to miss their environmental commitments because of the extra energy consumption their investments in artificial intelligence demand. And, again, this is the general trend among large companies.

Between 2019 and 2024, Google increased its emissions by 48%, largely because of artificial intelligence. This is exactly the opposite of what it promised, since it intended to reach net zero by 2030. But as it builds more data centres and its CO2 emissions rise, that long-standing commitment is becoming a dead letter.

I remember Teresa Rojo, a sociology professor I had in the late 2000s at the University of Seville who worked with the European Commission, arguing that the era we were then living in was defined by sustainability. After the information society, she argued in her Public Opinion course, something like the green society would emerge.

As the years went by, it seemed she might have been right. Environmental commitments were growing among governments, businesses and society in general. The advent of AI, however, has completely transformed the paradigm.

Could it be that, in trying to stop AI from destroying humanity, we are helping climate change do the job instead? Or even that pouring so much into AI while neglecting the climate is precisely one of the reasons AI could end up wiping out the human species? We shall see. Or perhaps we won't, because we will be dead.

Regulation, the last hope for neo-Luddites


If you are among those who think we are as good as dead, there is still one hope: regulation. The European Union, with its recent AI Act, has already set to work, providing a regulatory framework that aims to protect against the potential dangers of artificial intelligence.

This framework is setting the standard for the rest of the world, as well as for the member countries themselves, which will have to develop regulations at national level.

The emergence of deepfakes and the breakdown of the barrier between fact and fiction are among the risks that states must mitigate. The loss of image and video as definitive evidence or sources of truth (even if manipulation has always been possible) could become an unprecedented danger to the political and social order.

At the same time, the very concept of truth has lost some of its meaning. As hoaxes and conspiracy theories such as flat-earthism and anti-vaccine movements continue to proliferate, we are approaching a world that waves goodbye to the scientific method and will always find some "proof" that lets it hold on to its view of reality.

If we add the biases of artificial intelligence to that, the future may look rather dark for minorities, the underprivileged and the champions of diversity, since AI keeps reinforcing a very partial view of reality, one that corresponds to that of the ruling classes.

Society, transforming the idea of progress into the idea of shoddiness


At the same time, a counter-phenomenon is emerging, driven by the speed of the AI race. The obvious flaws in AI-generated texts and images mean that, for society at large, AI content, whether text or multimedia, equals bad content.

This, of course, has implications for AI's great problem: profitability. The general public asks itself, why would I pay for a premium account to create content with ChatGPT? And for AI to be profitable, it is not enough for a handful of companies to use it (and pay for it); adoption would have to be widespread.

An article like the one you are reading could never, no matter how many millions of prompts you fed it, be written by a current artificial intelligence tool.

I recently commented on LinkedIn that, while watching an episode of one of the RuPaul's Drag Race franchises (the Canadian one, a drag queen competition), I saw a challenge in which the contestants had to present a look based on artificial intelligence.

What do you think they wore? Basically, badly executed looks: hands with too many fingers, eyes with two pupils each, and outfits that reflected all the mistakes we see when artificial intelligence is used.

How, then, can AI be made profitable? How do you raise prices or expand the market when the results are not good enough? That is a question for Silicon Valley.

Not long ago, another of those AI experts with apocalyptic messages pointed out a clear problem: whereas previous technological revolutions made highly skilled processes faster and cheaper, this one has tried to simplify simple tasks such as writing an email, preparing a PowerPoint or producing a summary.


These are low-cost tasks, in the sense that they do not require high qualifications to perform, yet the technology consumes a large amount of resources (more than a human would) to carry them out.

While it was highly cost-effective to automate production in a robotised plant, for example, it is not cost-effective at all to have an AI answer your emails for you if you want the result to pass unnoticed. And the price of current general-purpose AI tools is not even the real price they would need to charge to be profitable.

So the question of the AI bubble has a clear answer: whether or not it bursts will depend on how much money the big technology companies are willing to keep pouring in. What seems clear is that the public is not prepared to pay what it really costs, and that the results are not good enough to justify raising the prices of current premium models (in applied artificial intelligence or simpler models, the story is quite different).

Might we be spared extinction because we don't want to pay for a picture of a hand with six fingers? Time will tell.

Pablo Herrera

Head of SEO & Content
