The Future of AI: Optimism, Pessimism, and What Lies Ahead

Setting the stage.

The sudden emergence of Large Language Models (LLMs) has sparked renewed interest in artificial intelligence. As usual, this has brought out both the “fanboys” who see AI as an Earth-shattering change and the “naysayers” who believe that the recent excitement around AI, as catalyzed by LLM makers like OpenAI, is just another case of hype and inflated expectations. This blog aims to dissect the arguments on both sides.

You can’t tell the players without a program. Infinitive identifies the following groups as leading today’s AI debate:

  • The AI optimists – people who believe that the advances made by LLMs herald a new and expansive role for AI in society. In the AI optimists’ minds, the remarkable progress exhibited by LLMs over the past two to three years will inevitably lead to extremely powerful AI systems, culminating in Artificial General Intelligence (AGI) in the relatively near term.
  • The AI pessimists – people who believe that the technology underlying LLMs is a narrow capability that allows for probability-based natural language manipulation and not much more. The pessimists see the current fascination with AI as mostly hype and the significant investment in AI as inflating another tech bubble.
  • The AI doomers – people who are optimistic about the pace of AI development but believe that the coming advancements in AI will lead to significant, even catastrophic, consequences for society.

In this blog, we will examine the optimists and pessimists while leaving the doomers aside for a later discussion.

The AI optimism theory.

The optimistic theory of AI holds that the development of large language models, built on the attention-based transformer architecture first described by Google researchers in 2017, created an inflection point in AI development. Under this belief system, AI progress will accelerate, producing revolutionary change in the relatively near term (2–5 years).

The AI pessimism theory.

This philosophy holds that the attention mechanism used to create large language models is a clever but narrow approach that basically created an auto-complete on steroids. The pessimists see a technology bubble forming, one that will lead to disappointment, financial ruin for many investors, and the start of another “AI Winter”.
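
To make the “auto-complete on steroids” framing concrete, here is a toy sketch of next-word prediction, the core operation the pessimists say LLMs are limited to. The four-word vocabulary and the scores are invented purely for illustration; no real model works this way at this scale:

```python
import math

# Toy illustration of next-token prediction (hypothetical numbers, not a real model).
# An LLM scores every word in its vocabulary, converts the scores to probabilities
# with a softmax, and emits a likely continuation -- one token at a time.

vocab = ["mat", "moon", "roof", "car"]
logits = [3.2, 0.4, 1.1, -0.5]   # invented model scores for "The cat sat on the ..."

# Softmax: exponentiate each score, then normalize so the results sum to 1.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

for word, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{word:5s} {p:.3f}")
# The highest-probability token ("mat") is appended to the text, and the loop repeats.
```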

Pillars of the AI optimism theory.

The optimists cite several reasons for believing that today’s AI represents a discontinuous increase in AI capabilities that will continue to accelerate for the foreseeable future. Those reasons include:

  • LLM trajectory – The tested capabilities from GPT-2 (full release – November 5, 2019) to GPT-3 (May 28, 2020) to GPT-4 (March 14, 2023) showed a trajectory of significant improvement. Extrapolating that trajectory into the future leads the optimists to believe that the capabilities of AI will be dramatically better over time.
  • Scaling “Laws” – AI scaling laws describe how the performance of AI models improves as model size, training data, and computational resources increase. These laws reveal predictable patterns: larger models trained on more data generally perform better. In simple terms, optimists believe that more data and more compute will combine to create bigger, more capable models (one published formulation is sketched after this list). Earlier this year, OpenAI CEO Sam Altman was reported to be seeking between $5 trillion and $7 trillion to create a new chip production venture to meet the computational needs of AI, based largely on his belief in the scaling “laws”.
  • Compute improvements – As components on a chip were engineered to be smaller and closer together, there came a point where density could no longer double every two years. Moore’s Law slowed for individual CPUs. However, GPUs, with their parallel processing capabilities, continued to exhibit exponential increases in price/performance. Moore’s Law has given way to Huang’s Law (named for NVIDIA’s CEO), which holds that GPUs will see a “25x improvement every 5 years”. That represents a doubling roughly every 1.1 years (the arithmetic is shown after this list).
  • Defensiveness – Anthropic CEO Dario Amodei recently wrote an essay claiming that “powerful AI” could arrive by 2026. OpenAI CEO Sam Altman said that AI superintelligence is “a few thousand days” away. While terms like “powerful” and “a few thousand” leave a lot of wiggle room, optimists are convinced that society-wide, AI-driven change is coming sooner rather than later. They believe this threatens many people who, in turn, look for reasons why the AI revolution will not happen. In the optimists’ view, the pessimists are like ostriches burying their heads in the sand at the first sign of trouble, inventing reasons to avoid facing the inevitable AI future.
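
The scaling “laws” the optimists lean on have been written down formally. Below is a minimal sketch assuming the commonly cited Chinchilla-style form, in which predicted loss falls as parameters N and training tokens D grow. The constants are approximately the fits DeepMind published; treat them, and the example model sizes, as illustrative rather than authoritative:

```python
# Chinchilla-style scaling law: predicted loss as a function of model size N
# (parameters) and training data D (tokens). Constants are roughly the published
# Chinchilla fits -- shown here for illustration, not as exact values.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Bigger model + more data => lower predicted loss, with diminishing returns.
for n, d in [(1e9, 2e10), (7e10, 1.4e12), (1e12, 2e13)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss ~ {predicted_loss(n, d):.2f}")
```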
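
The doubling time quoted in the compute bullet follows directly from the “25x every 5 years” claim; the arithmetic is a one-liner:

```python
import math

# Huang's Law arithmetic: a 25x improvement every 5 years implies a doubling
# time of 5 * ln(2) / ln(25) years.
doubling_years = 5 * math.log(2) / math.log(25)
print(f"Doubling time: {doubling_years:.2f} years")  # ~1.08, i.e. roughly every 1.1 years
```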

Pillars of the AI pessimism theory.

AI pessimists put forth several reasons why LLM-based AI will neither achieve AGI nor become an Earth-shattering catalyst of societal change. Their arguments include:

  • Flattening of LLM capabilities. While the overall capabilities of frontier LLMs have improved dramatically from the first release through today’s models, the marginal improvement from each new release has shrunk. Despite massive amounts of compute being applied and training on essentially all the world’s public, text-based data, the latest LLMs are only marginally more capable than their predecessors. The scaling “laws” don’t seem to be holding.
  • Wrong architecture. The transformer-based neural network architecture used by today’s LLMs is optimized for probabilistic language manipulation. The pessimists don’t see a path from language manipulation to human-level reasoning, let alone human-level intelligence. In the minds of the pessimists, LLMs will forever remain clever next-word predictors.
  • Energy. Vast amounts of compute consume vast amounts of energy. GPT-4 has an estimated 280 billion parameters and required approximately 1,750 MWh of energy to train, equivalent to the annual consumption of roughly 160 average American homes. And that’s just training: a typical ChatGPT query is estimated to use about 10x the energy of an average Google search. This adds up fast. Wells Fargo projects AI power demand to surge 550% by 2026, from 8 TWh in 2024 to 52 TWh, before rising another 1,150% to 652 TWh by 2030. That is a remarkable 8,050% growth from the projected 2024 level, and 652 TWh is more than 16% of current total US electricity demand (the arithmetic is reproduced after this list). Beyond climate change concerns, pessimists question whether a power generation buildout on this scale is even possible.
  • Hype. The vast amounts of money required to train large language models force their makers to raise equally vast amounts of investment. These huge investments, in the minds of the pessimists, push the executives of LLM makers to overstate the potential of LLM technology in order to attract investors.
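
The percentages in the energy bullet can be reproduced with simple arithmetic. The per-home consumption and total US demand figures below are our own ballpark assumptions (roughly 10.7 MWh per home per year and about 4,000 TWh of total US demand):

```python
# Back-of-the-envelope check on the energy figures cited above.
# Assumptions: ~10.7 MWh/year for an average US home and ~4,000 TWh
# of total current US electricity demand.

TRAINING_MWH = 1_750
HOME_MWH_PER_YEAR = 10.7                    # assumption: average US home
print(f"Homes-equivalent: {TRAINING_MWH / HOME_MWH_PER_YEAR:.0f}")  # ~164

wf_2024, wf_2026, wf_2030 = 8, 52, 652      # Wells Fargo projections, TWh
print(f"2024->2026 growth: {(wf_2026 / wf_2024 - 1) * 100:.0f}%")   # 550%
print(f"2026->2030 growth: {(wf_2030 / wf_2026 - 1) * 100:.0f}%")   # ~1,154%
print(f"2024->2030 growth: {(wf_2030 / wf_2024 - 1) * 100:.0f}%")   # 8,050%

US_DEMAND_TWH = 4_000                       # assumption: current US total
print(f"Share of US demand: {wf_2030 / US_DEMAND_TWH * 100:.1f}%")  # ~16.3%
```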

Infinitive’s opinion.

Infinitive believes there are good points being made on both sides of the argument.

  • LLMs are not enough – Manipulating natural language, images, video, and audio is a massive leap forward for AI. However, it is not nearly enough to achieve human-level intelligence. The architecture of LLMs is optimized for language manipulation. There is no reason to believe that this architecture will be sufficient for non-language aspects of intelligence like reasoning, logic, ambition, perception, and the other ingredients that go into human intelligence. If AI is to reach human-level intelligence, it will be through a series of architectures, each specialized in one of the major areas of intelligence. Transformer-based neural networks represent one such architecture. Depending on how you count, the human brain contains between seven and fifteen distinct processing architectures. AGI will require at least several more specialized architectures.
  • Text is not enough – Human intelligence is developed in individuals through a wide variety of “training materials”. Humans learn to walk, talk, reason, think, and emote long before they learn to read. Today’s LLMs learn primarily by analyzing text from the internet and other sources. While there is no mandate that artificial intelligence must be trained the same way as human intelligence, there is an absolute limit to the amount of accessible, original text in existence. Future AIs will have to be trained in a truly multimodal manner. Video, audio, and tactile sensation, along with non-human inputs like RADAR and LiDAR, will have to become part of the AI’s training material. Much of this input will come from large numbers of physical devices like robots, autonomous vehicles, and security cameras.
  • Integration is everything – If AGI is the result of separate, specialized architectures working semi-autonomously, integrating those architectures will be critical. Infinitive believes that this interconnection, or integration, will be the true gating factor for AGI. The “networking speed” of the human brain is a complex interplay of electrical and chemical processes involving billions of neurons and trillions of synapses. While individual neural signals propagate more slowly than electronic signals in computers, the brain’s vast parallel networks enable rapid and efficient processing of information. Moreover, the brain can rewire itself through neuroplasticity, strengthening or weakening synaptic connections based on experience. Finally, the human brain is almost unbelievably efficient, operating on roughly 20 watts of power. (A toy sketch of the modular-integration idea follows this list.)
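
To make the integration argument concrete, here is a deliberately toy Python sketch of the “specialized modules plus an integration layer” idea. The module names and the router are hypothetical illustrations of the concept, not a description of any real system:

```python
from typing import Callable, Dict

# Purely illustrative sketch of "specialized architectures + integration":
# independent modules, each good at one thing, joined by a routing layer.
# None of these module names correspond to real systems.

def language_module(task: str) -> str:
    return f"[language] parsed: {task}"

def planning_module(task: str) -> str:
    return f"[planning] plan for: {task}"

MODULES: Dict[str, Callable[[str], str]] = {
    "language": language_module,
    "planning": planning_module,
}

def route(task: str, kind: str) -> str:
    # The integration layer -- deciding which module handles what, and how
    # their outputs combine -- is the gating factor Infinitive describes above.
    return MODULES[kind](task)

print(route("summarize this report", "language"))
print(route("book a trip to Denver", "planning"))
```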

Bottom line.

AGI in 2045.