
THE AI BUBBLE: BETWEEN PROMISES AND REALITIES?

Can AI truly transform financial management, or is it merely a bubble promising more than it can deliver?
What is the potential of web3 enhanced by AI (web3^AI), and how can businesses leverage these two revolutions?

Webinar replay presented by Valuechain Consulting in collaboration with Diapason Group and Syrtals on November 15, 2024.



The conference addresses AI in finance, highlighting market growth and increasing adoption while also emphasizing the current limitations of AI regarding technological maturity, awareness, and generalization.

The webinar sheds light on the differences between human intelligence and AI, as well as the challenges of appropriately using AI within companies. It also raises the question of a potential bubble in the AI sector. Massive investments in AI spark concerns about a speculative bubble, with valuations disproportionate to the revenues generated. A comparison with the 2000 dot-com bubble is discussed. The lack of awareness and reasoning ability in AI models is emphasized, with concrete examples illustrating their reasoning limitations. This analysis highlights the challenges and risks associated with the current AI hype while underlining the importance of awareness in this field.

The speaker explores several aspects of AI, including its mathematical limitations, increasing energy consumption, and practical applications. Demonstrations showcase AI in action, such as real-time video analysis and simulated dialogues between experts. The importance of controlling and understanding AI despite its limited capacities is stressed, along with warnings about its significant energy impact.

A live demo features a simulation of an economist (LLM Mistral) and a physicist (LLM Gemma) collaborating on a project entirely powered by a local AI setup, demonstrating the efficiency gains of this approach for businesses. The demo illustrates the power of large language models (LLMs) to enhance work efficiency, while acknowledging the need for human intelligence to complement AI’s limitations. By showcasing a concrete example of AI auditing contracts, Valuechain highlighted the current maturity of AI and its potential benefits for businesses.



And here is the automatic verbatim transcript generated by the Valuechain Verbatim solution:


We are going to discuss, as announced in the event title, AI in finance between promises and realities—meaning: is there a bubble? We will also try to answer two questions. The first question: are there risks? Is there indeed a significant bubble? And secondly, how can we take advantage of AI in business today? What are the current capabilities in terms of AI maturity? If you notice the title mentions Web3 and the power of AI, it’s because it is also an excerpt from a broader course, a major training program on the connection between two worlds—Web3 with blockchains and AI. But today, of course, we will focus solely on AI.

To begin, let’s discuss the size of the AI market from 2020 to 2030. You will all realize that AI has made a huge leap forward, particularly with the recent advent of Large Language Models (LLMs), and that leap can also be measured in dollars. Today, the market is worth approximately $300 billion, and projections are extremely optimistic, predicting it will more than double by 2030—within five to six years.

These growth projections give us a strong indication of the market’s dynamism and rapid expansion, albeit at a pace that could suggest a bubble. The second indicator I share with you is the number of users.

Over the same period, the number of users surged from 115 million in 2020 to 300 million in 2024, with projections reaching 700 million—approaching the billion mark—by 2030. We clearly see that the major players today, in terms of brands, are predominantly focused on LLMs (Large Language Models), but AI obviously encompasses more than just that.

This gives us a very strong indication of the increase in adoption. For comparison, let’s look at figures from the Web3 sector: the market capitalization of cryptocurrencies is not a few hundred billion dollars, as with AI, but $2.8 trillion.

This highlights the maturity and size difference between the two markets. In terms of users, AI currently has 314 million users, while crypto and blockchain have 580 million users. This provides a comparative view of the dynamics of both industries.

Let’s quickly revisit the basics: what truly distinguishes natural intelligence from artificial intelligence? Humans think, understand, and adapt—we’ve all learned that intelligence is about adaptation. Artificial intelligence, on the other hand, simulates this intelligence: it runs on programmed or learning machines, primarily to solve applied problems.

In terms of characteristics, humans possess cognitive flexibility, which AI lacks—a significant difference. There are various types of AI: expert systems that demonstrate a form of intelligence, supervised and unsupervised machine learning, neural networks, and deep learning. We won’t dwell on the distinctions between them, but it’s important to note that there are multiple types of AI.

When comparing learning processes, humans often learn through observation and imitation—copying is a valid form of learning. For example, certain inventions were borrowed to create Apple Pay. Humans frequently test through experiments as well. Machines, on the other hand, use different learning models, including supervised and unsupervised learning, but they struggle with generalization. We will explore this AI limitation today.

AI struggles to generalize, while humans excel at it, and we will explain why. Finally, regarding cognition—the holy grail of AI sought by Google for decades—humans possess awareness. They understand, give meaning, and demonstrate not only intellectual intelligence but also emotional and creative intelligence.

The limitations of AI today stem from its restriction to computational tasks. It performs calculations, even when deep learning is involved, as it remains a form of calculation.

AI is excellent for repetitive tasks and far surpasses humans in these cases. However, we haven’t sufficiently addressed the concept of awareness.

Before diving further, I want to mention a notable book titled Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy. It’s a remarkable book published several years before LLMs emerged.

In this book, George Gilder, who is a friend of Larry Page and the presidents of Google, Apple, Oracle, and Microsoft, and an engineer himself, presents the vision and findings of the leaders of these companies regarding AI and big data.

He noted in the book that training an AI model to recognize whether there is a cat or a dog in an image requires the equivalent of a nuclear power plant’s output: hundreds of thousands of GPUs in data centers consuming tens of megawatts—50 MW, 100 MW, or even more, as you’ll see in the numbers later. Cooling such data centers would take a river’s worth of water, like the Seine.

By contrast, the same learning task—recognizing a cat or a dog in an image—is accomplished by the brain of a 3- or 4-year-old child running on just 14 watts and far less data. This highlights the technological immaturity of AI. Significant progress has been made, and the productivity gains are real, but the cost is extraordinarily high and the efficiency is still far lower than a human’s.

Moving on, let me remind you of the famous Turing Test. Alan Turing, one of the founding fathers of the computer, proposed that a machine could be considered intelligent if, in a blind dialogue, it could fool a human into believing it is human. Today, when we interact with tools like ChatGPT, it’s undoubtedly impressive, but we can all tell that there’s still a machine behind it.

AI has made an enormous leap forward, but it has not yet achieved the goal of completely fooling a human. We will explore why later.

Regarding methodologies, I won’t dwell on this, but knowing when to use AI or expert systems is critical. There are still many, many use cases where AI is unnecessary. In other cases, AI complements expert systems well. However, very few scenarios exist today where AI alone is sufficient.

To give some examples, companies operating in 29 countries deal with massive but structured and well-coded data. For cases like machine breakdowns, payment card support issues, or car malfunctions in assistance systems, expert systems are dozens of times more efficient, reliable, and cost-effective than AI.

That said, AI can partially address some needs, though not always efficiently. Other challenges arise, such as AI solutions often relying on remote servers, sometimes located abroad. Local AI solutions exist, as I will demonstrate today, but such concerns complicate the decision of when to use AI.

Before we move on to demos, let’s talk briefly about the AI bubble. Today, there are clear indicators of a bubble. An exponential increase in investment—$42 billion raised in one year for AI startups—is one sign. Generative AI, particularly LLMs, attracted 48% of these funds. Bloomberg projects that the AI market will reach $1.3 trillion by 2032, an annual growth rate of 43%. Goldman Sachs reports similar numbers, with $1 trillion in investments expected in the coming years. However, much of this reflects speculation—a classic bubble indicator.

To analyze further, consider that in October 2024, OpenAI reached a valuation of $157 billion after a funding round. Perplexity, a competitor, grew from a valuation of $500 million to $9 billion in just one year. But have the revenues kept pace? Not at all. OpenAI generates $3.4 billion annually but holds a valuation of $157 billion. Perplexity fares worse: $50 million in annual revenue, yet valued at $9 billion—a price-to-sales ratio of 180 ($9 billion ÷ $50 million), which is extraordinarily high and reflects a significant valuation bubble.

This situation mirrors the 2000 dot-com bubble, where venture capital investments surged by 1150% over five years. The current excitement, interest, and hope surrounding AI are undeniable, but revenue figures often fail to justify the valuations. The numbers don’t lie—we are clearly in the midst of a massive AI bubble.

Does this mean AI will collapse? No. Does it mean AI will stop being used? Also no. The real question is: what can we actually do with AI? Despite the trillions of dollars being invested, its real value will likely be far lower, but there will still be value.

One last point to address is the concept of consciousness, which is fundamental to this discussion. OpenAI itself emerged from a real conflict between Elon Musk and Larry Page. These two were once best friends, spending entire nights talking. One day, Musk expressed his concern about AI, while Page (from Google) envisioned creating a conscious AI that could clone the human brain and all its neurons, ultimately achieving immortality—and, in his view, rendering many humans unnecessary.

Musk, driven by his utopian ideals and love of science fiction, had a falling-out with his friend. He declared, “I won’t talk to you anymore,” and invested hundreds of millions of dollars to create OpenAI. His instruction: train an AI model that can kill Google—one that makes search engines unnecessary—and make it open source, available to everyone.

This is the context in which OpenAI and the LLM revolution were born—a result of human will and vision, largely driven by Elon Musk.


On the subject of consciousness, I encourage you to watch a video from Google Talks. They invited a renowned Indian guru, an incredibly intelligent thinker—no ordinary guru, for those unfamiliar with him, and his insights are profound. Through that dialogue with Google, he demonstrated theoretically that AI will never achieve consciousness.

This is because there is a distinction between intelligence and consciousness. Intelligence allows us to break down problems into smaller, solvable pieces—analysis, by definition, involves dissecting something into parts. In chemistry, for instance, we break down compounds into smaller molecules for study. This is how the brain works. Intelligence is like a knife: the sharper the knife, the finer the cuts. A microscopic knife can slice molecules, but this only highlights the limit of intelligence—it breaks problems into smaller pieces; it does not rebuild.

Consciousness, on the other hand, is the hand holding the knife, deciding what to cut and what to do afterward. I won’t delve into the full philosophical or spiritual dialogue here—you should watch the video—but it provides an essential understanding of AI’s limitations.

Now, let’s turn to a scientific paper recently published by researchers at Apple—the GSM-Symbolic study. This foundational paper investigates the reasoning limits of LLMs. The researchers showed that LLMs—and today’s AI in general—cannot reason. This isn’t about consciousness, which is another level entirely: consciousness is an identity that understands and uses intelligence. Here we are talking about the artificial intelligence itself, which can reflect but cannot reason.

Reflection means producing a result, asking the AI to analyze its own output, and “reflecting” on it—like looking into a mirror. Reasoning, however, is different; it involves giving meaning to a result and making intelligent, correct decisions based on it.

To demonstrate this limitation, let’s consider a math exercise. For example, “Michel has 30 apples. He gives 5 apples to Thierry. How many apples does he have left?” It’s a simple question, and AI can answer it. But if you introduce irrelevant details into the prompt, such as “5 of the apples were smaller than average,” all LLMs—without exception—fail far more often. They sometimes get the correct answer but frequently make mistakes.

This finding is alarming, particularly when considering AI for decision-making purposes. The study compared several models, including o1-preview (OpenAI’s most advanced model at the time), Google’s Gemma, France’s Mistral, GPT-4o, and others such as Phi and Claude. The majority showed a drop in accuracy—by as much as 65%. On average, the decline was between 40% and 44%.

The data did not change, and the premise remained the same. A human would rarely make such an error unless distracted or unable to understand the question. For example, “If I gave 5 apples to Thierry, but they were smaller than average, how many apples do I have left?” is still obvious to a human. Yet the AI cannot reason about or comprehend such details.
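
To make this concrete, here is a minimal sketch of such a robustness check—not the study’s actual harness—assuming a local model served through Ollama; the model name and prompts are illustrative:

```python
# A minimal robustness check in the spirit of the Apple study: ask the
# same arithmetic question with and without an irrelevant clause, and
# compare the answers. Assumes a local Ollama server with "mistral" pulled.
import ollama

BASE = ("Michel has 30 apples. He gives 5 apples to Thierry. "
        "How many apples does he have left? Answer with a number only.")
PERTURBED = ("Michel has 30 apples. He gives 5 apples to Thierry. "
             "5 of the apples were smaller than average. "
             "How many apples does he have left? Answer with a number only.")

def ask(prompt: str) -> str:
    reply = ollama.chat(model="mistral",
                        messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"].strip()

# Repeating each variant many times and counting correct answers ("25")
# approximates the accuracy drop the study measures.
for label, prompt in (("base", BASE), ("perturbed", PERTURBED)):
    print(label, "->", ask(prompt))
```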

For those unfamiliar with how LLMs work: when you write a prompt for ChatGPT or any LLM, the text is converted into tokens. Simplifying, a token is a short chunk of roughly three or four characters—for example, “Oli” from “Oliver” might become a single token. These tokens are projected into the model’s trained embedding space, which retrieves the most closely related tokens; the result is converted back into text and displayed to the user. The process is fast because the model is pre-trained, but the AI understands nothing. If there is even a slight error or “noise” in the prompt, the AI fails to detect it despite its analytical capabilities.
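
To see the tokenization step for yourself, OpenAI’s tiktoken library exposes the chunking; note that the exact splits are tokenizer-dependent:

```python
# An illustration of the tokenization step described above, using
# OpenAI's tiktoken library. Whether "Oliver" becomes one token or
# several depends on the model's tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer of GPT-4-era models
text = "Oliver has 30 apples."
token_ids = enc.encode(text)
pieces = [enc.decode([t]) for t in token_ids]
print(token_ids)  # the integer IDs the model actually consumes
print(pieces)     # the character chunks each ID stands for
```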

Notably, o1-preview performs “reflection,” meaning it reanalyzes its own output—essentially cheating on the test—and revisits its answers. Despite this, its accuracy drops below 30% in certain cases.

I won’t dwell on these points further; I will emphasize just one last point. Mathematically, there is a proven result showing that an AI model can never diverge: its intelligence level can never exceed a threshold that can be calculated in advance. AI cannot improve indefinitely without bound, and we can predict that a given AI will never surpass its limit. That threshold might be higher than my own intelligence in certain domains, which is fine.

However, I can create another AI that converges to a higher value and surpasses the first one. This is exactly what is happening today between Gemma, Mistral, GPT, Claude, and others such as Meta’s Llama.

A final comparison is with chess. Garry Kasparov was defeated by IBM’s Deep Blue in 1997; later engines such as Stockfish far surpassed Deep Blue’s strength; and DeepMind’s AlphaZero in turn defeated Stockfish. Even though AI outperforms us in certain domains, we can always create a new AI—with a calculable maximum intelligence—to surpass the previous one.

Lastly, regarding energy consumption: AI currently accounts for about 2% of global electricity demand. By 2026, AI’s energy consumption could match Japan’s annual electricity usage, just for training and running models. Generative AI can consume 33 times more energy than an expert system to accomplish the same task, where one applies—which is not always efficient. For example, training GPT-3 required about 1,300 MWh of electricity, and GPT-4 an estimated 50 times more. Wells Fargo estimates that AI energy consumption will grow by 550% between 2024 and 2026, rising from 8 TWh to 52 TWh annually (52 ÷ 8 = 6.5×, i.e., +550%).

For comparison, Bitcoin consumes about 80 TWh per year, a figure from a paper I published that corrected Cambridge’s estimates—the Cambridge Bitcoin Electricity Consumption Index subsequently revised its numbers downward in line with my findings. On these trajectories, AI could consume 152 TWh annually, comparable to the energy consumption of the global banking industry. These are enormous figures.

To allow enough time for my colleagues, I will conclude with two demonstrations. We won’t spend too much time on these.

Here’s the first demo: this is a production-ready application used by companies working in video editing and marketing. For example, take any video recording—whether from cameras that produce very large files, such as 500 GB 4K or 8K videos, or even videos from the internet.

For this demonstration, I will use Apple’s launch video for Apple Intelligence, their AI product, and have an AI system analyze it in real time.

Here’s what happens: the AI first downloads the video. Let me enlarge the screen. It then transforms the audio into text, which is happening now.

Now it’s done. The AI segments the content. Let me show you the back-end—it creates micro audio files for each word and sentence. From this, we get the near-complete transcript (verbatim) of the video.

The AI will analyze this text. I avoid saying “understand” because AI understands nothing. It isn’t conscious. It cannot comprehend. However, it analyzes very well—it’s a sharp knife.

The AI will analyze the text much like we did at school or university: it formats the content and creates paragraphs, chapters, titles, and sections as needed. Here’s a small example—in this case it may not generate chapters or short five-minute clips, as the video itself is short, and I’m also consuming bandwidth by sharing my screen.

You’ll notice that the analysis is progressing smoothly. If you check the costs displayed, you’ll see they are negligible (0.00 cents). This is because I used OpenAI’s API for this demonstration, not a local AI model.

The output includes a suggested title: Apple Intelligence, Personalized AI for Mac, iPad, and iPhone. It even generates hashtags for publishing purposes and a video summary. Generally, the tool creates a one-pager if the content is rich enough, such as summarizing multi-day conferences. Additionally, it produces a full verbatim transcript.

We won’t review the entire result, but this gives you an idea of how AI is already used in production environments. This is the first example.
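
As a rough illustration of what such a pipeline involves—this is not Valuechain’s implementation—here is a minimal sketch pairing local transcription via openai-whisper with an OpenAI API call; the model names and prompt are assumptions:

```python
# A minimal sketch of a video-analysis pipeline: transcribe the audio
# track locally with openai-whisper, then ask an LLM for a title,
# hashtags, and a summary through the OpenAI API.
import whisper
from openai import OpenAI

def analyze_video(path: str) -> str:
    # Whisper extracts the audio (via ffmpeg) and returns the transcript.
    transcript = whisper.load_model("base").transcribe(path)["text"]
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = ("From this transcript, propose a title, hashtags for "
              "publishing, and a one-paragraph summary:\n\n" + transcript)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(analyze_video("apple_intelligence_launch.mp4"))
```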

The second, more impactful example concerns simulating employees. While AI cannot “understand” or “reason,” it reflects extremely well. Consultants or teams are often tasked with reflecting and working on specific subjects. Here, I will simulate employees.

Imagine a company using AI to simulate a marketing employee, an HR employee, a legal compliance employee, an engineer, a salesperson, and even a secretary or project manager. These AI agents collaborate on a task. For instance, responding to an RFP (Request for Proposal): covering references, defining the value proposition, proposing a methodology, and detailing engineering and architecture aspects. In this scenario, AI simulates an artificial team that works together.

Another challenge is that we often cannot use OpenAI or Microsoft APIs to send sensitive or confidential documents to servers in the United States. Thus, AI must run entirely locally. This is precisely what I will demonstrate now.

For this example, we simulate only two employees due to time constraints. The topic is neutral for the audience: a collaboration between an economist specializing in monetary theory (e.g., central bank monetary policies) and a physicist specializing in thermodynamics. They will work together to draw parallels between monetary theory and thermodynamics.

For instance, one could imagine that money functions like energy in an economy, and thermodynamic laws might apply to monetary formulas. Are there similarities? Are there divergences?

The AI simulates these two personas—a physicist and an economist—who engage in dialogue and collaborate on the task.

In the code, I provided instructions to study this subject from both a physical and economic mathematical perspective.

Here, you can see that I selected Mistral as the economist and Gemma from Google as the physicist specializing in thermodynamics.

The simulation runs in rounds—five exchanges in this case, though it could just as easily be 100. You immediately notice that the economist introduces the quantity equation MV = PQ: the money supply multiplied by the velocity of money equals the price level times the quantity of goods exchanged in the economy. The two quickly get into the substance, trading further formulas.
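
For readers who want to reproduce the idea, here is a minimal sketch of such a two-persona round loop, assuming a local Ollama runtime with the mistral and gemma models pulled (the webinar does not say which local stack was actually used):

```python
# A minimal sketch of the two-persona round loop: each model replies
# in character, and its answer becomes the next prompt for the other.
import ollama

ROLES = {
    "mistral": "You are an economist specializing in monetary theory.",
    "gemma": "You are a physicist specializing in thermodynamics.",
}

# Opening task, taken from the webinar's scenario.
message = ("Let us draw parallels between monetary theory and "
           "thermodynamics, starting from the quantity equation MV = PQ.")
speaker = "mistral"

for _ in range(5):  # five exchanges, as in the demo; could be 100
    reply = ollama.chat(model=speaker, messages=[
        {"role": "system", "content": ROLES[speaker]},
        {"role": "user", "content": message},
    ])
    message = reply["message"]["content"]
    print(f"--- {speaker} ---\n{message}\n")
    speaker = "gemma" if speaker == "mistral" else "mistral"  # pass the floor
```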

This is a response from Mistral to Gemma, and conclusions are drawn between them. I won’t dwell on the content—that’s not the focus here. However, you can observe that two AIs, communicating locally on my workstation without accessing the Internet, are able to process client files, Excel spreadsheets, transaction histories, specifications, rules, and regulations—all while complying with GDPR regulations, provided we have consent to process this data locally.

The result—this code was executed just before the demonstration—takes about three minutes to run. In essence, you can see that Mistral (the economist) and Gemma (the physicist) discussed both thermodynamic and economic principles and produced an actual study on the subject.

I’ll stop sharing my screen now to return to the camera and say this: even if AI is experiencing a bubble, even if AI has limitations and will never achieve consciousness, that’s not what we ask of it. What we ask is for AI to increase the efficiency of our businesses and improve customer satisfaction.

We can achieve this today, to a certain extent, always with human oversight because biases must still be identified in AI. Yet the results can be incredibly productive for businesses right now.

For example, one of our clients needs to audit on the order of a million contracts. These contracts contain clauses that must comply with specific standards. It’s impossible to do this with expert systems: you cannot simply scan the documents and search for exact phrasing, because the wording varies between versions. Here, LLMs are essential. However, we cannot send our clients’ financial contracts to OpenAI over the Internet.
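
To fix ideas, here is a hedged sketch of what a local clause audit could look like—this is not the client system itself, and the requirement, model, and prompt are illustrative:

```python
# A sketch of a local clause audit: for each contract text, ask a local
# LLM whether a required clause is present in some wording, and request
# a machine-readable verdict. Real contracts would first be OCR'd to text.
import json
import ollama

REQUIREMENT = ("The contract must contain an early-termination clause "
               "granting at least 30 days' written notice.")

def audit(contract_text: str) -> dict:
    prompt = (f"Requirement: {REQUIREMENT}\n\nContract:\n{contract_text}\n\n"
              'Reply with JSON only: {"compliant": true or false, '
              '"evidence": "the relevant wording"}')
    reply = ollama.chat(model="mistral",
                        messages=[{"role": "user", "content": prompt}])
    return json.loads(reply["message"]["content"])

# In production this loop would run over roughly a million contracts,
# entirely on-premises, with human review of every flagged case.
print(audit("Either party may terminate with one month's prior notice."))
```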

Today, we can achieve all this locally. This demonstrates the current maturity of what AI can deliver. Thank you very much.
