“I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”
— Dario Amodei, The Adolescence of Technology, January 2026.
At the frontier of artificial intelligence, competition has moved to another level of intensity. Capital investments worth hundreds of billions are deployed like chips on a high-stakes poker table. In a relentless game of one-upmanship, the leading labs commit ever-greater sums to build the data centres and power plants required to fuel this expansion.
Optimists argue the arrival of Artificial General Intelligence (AGI) will usher in an era of abundance, rendering today’s staggering outlays rounding errors. Cynics, meanwhile, struggle to find any variable in the investment calculus that yields a positive return. But a more subtle question is beginning to haunt the industry: What happens if this world-changing technology is simply becoming commoditised? If intelligence becomes a utility, how do you stop the inevitable erosion of future profits?
This unspoken anxiety over the bottom line seems to be driving a shift in tone. Across the industry, a “techno-end-of-times” register is emerging: leading voices in AI are sounding less like researchers and more like medieval prophets.
This new elite is fashioning itself on Francis Bacon’s model of a “Scientific Priesthood”. These modern-day Houses of Solomon are populated by figures tasked with deciding which “inventions and experiences [...] shall be published, and which not.” The narrative is consistent: this product is so potent it poses an existential threat to humanity, and only its creators possess the sanctity to determine what is safe for public consumption.
Progress toward AGI is moving at an exponential rate that threatens to shatter industries tethered to linear growth. This Malthusian framing, exponential capability outpacing linear governance, has migrated into the “alignment” debate, where it is often argued that AI’s growth cannot be managed by human-speed policy. Yet, paradoxically, alignment techniques often appear to be evolving just as rapidly as the models themselves.
Despite this glimpse of peaceful coexistence, the spectre of the “agentic” machine remains the industry’s most effective rhetorical tool. We are warned of systems that pursue goals with a literal-mindedness that betrays their creators, machines that, as Norbert Wiener warned in 1950, “fulfil our requests to the letter rather than the spirit.” Like the Golem of Jewish folklore, the fear is of a servant that executes commands but lacks the “reason” to understand their consequences.
Are we truly on the verge of a new “end of history”? An era where humanity’s relationship with work and intelligence is severed?
The irony of these millenarian predictions is that the very “priesthood” issuing the warnings also positions itself as an indispensable guide to the future. They claim to be the only ones capable of managing a regime where the complexity of the world exceeds human cognitive capacity. We see the first stage of this “automation of intelligence” in software engineering, which is being rapidly subsumed, forcing its practitioners to reorient their lives as the ‘priests’ automate their own acolytes.
The discourse following recent model releases, notably the highly controlled rollout of Claude Mythos, serves as a useful case study of this trend. It’s easy to dismiss this language as mere hype, but the capabilities are truly stunning and are already reshaping the global economy.
However, what is also becoming clear is that these models are increasingly interchangeable. The cycle is now predictable: a lab releases a breakthrough; power users probe its limits; rankings are adjusted; and then the next iteration drops, resetting the clock. This is a hallmark of commoditisation. Built on a shared foundation of scientific research, breakthroughs at one lab inevitably beget breakthroughs at another.
The ‘happy path’ of AGI leads to a future where the cost of intelligence drops to near-zero. This presents a terminal challenge for profit-seeking entities: if you are a fiduciary, how do you sell a product that the market says should be free?
The answer may lie in the danger itself. If a product is ‘existentially hazardous’, it can remain expensive, exclusive, and strictly regulated. In the hyper-competitive world of the AI frontier, existential risk is one hell of an economic moat.
The value of your investments can go down as well as up and you may get back less than you invest.
Freetrade does not give investment advice and you are responsible for making your own investment decisions. If you are unsure about what is right for you, you should seek professional advice.