Part I: On declining progress
AI hysteria in an age of stagnation: a polemic essay on contemporary society
To be in the twenty-first century is nothing more than to have twentieth-century culture on high-definition screens.
- Mark Fisher
There seems to be an almost unanimous consensus that we live in an age of unprecedented technological advancement; very few people doubt this. To a large extent this notion is of course simply consistent with observations1. In the past 40 years many things have changed quite a bit: the society-wide adoption of the internet, social media, worldwide availability of knowledge at our fingertips, smartphones with GPS, enormous progress in computer-generated graphics and now generative AI. It has certainly been a huge transformation2. However, to some degree we might overestimate how fast things are moving, because one could argue that, in the digital age, the widespread interconnectedness of everything produces a higher degree of complexity in our daily lives3. We are bombarded with information. New trends, hypes and outrages constantly come and go, and new dangers always seem to lurk around the corner. In other words, we are inclined to be overwhelmed and sometimes even hysterical. This is Future Shock to some degree. In such an atmosphere it is understandable that many take seriously the warning of an imminent Judgement Day full of dangerous "digital [AI] minds" roaming the world4. However, according to a group of AI researchers (the DAIR Institute), such science fiction visions stem from the ideology of longtermism. They argue that longtermism distracts from more present, down-to-earth dangers, such as AI systems that produce and distribute misinformation. This seems reasonable. On the other hand, predictions of out-of-control technological progress have been around for so long that we now actually live in the long-term future envisioned by many classic futurology works, such as Toffler's Future Shock. Interestingly, this gives us the opportunity to look at our current world through the eyes of the futurologists of the past to see if there are any patterns in the noise. Did things indeed move faster than we could keep track of? Is technological progress indeed still accelerating?
Technological progress did not live up to expectations
To answer these questions let us start with the first commandment of futurology: that technology is progressing faster than we can keep track of. Much of how we understand 'the future' seems to be based on science fiction culture and more serious futurology works like Future Shock. And conversely, science fiction dreams of the past may steer technological development. But we are arguably not in shock, at least not precisely because technology advanced beyond our imagination. Quite a few of Toffler's technological predictions actually turned out to be overestimations. Still, that claim alone is hardly convincing enough to suggest that things have been slowing down. But before getting into the specifics, let us cultivate a bit of a feeling for the 'declining progress hypothesis' with a thought experiment. Imagine "40s guy" from the 1940s using a time machine to step into the 1980s. Would that not be a much bigger shock than "80s guy" from the 1980s stepping into our time? Our 40s guy would be totally astonished and bewildered in the 80s: someone has landed on the moon, cars are absolutely everywhere, there is nuclear energy, satellites broadcast TV programs, jet-propelled airplanes are available to the general public, (gaming) computers sit in living rooms, the middle class is immensely richer and (pop) culture is vastly different. The world would simply feel completely alien to our 40s guy. By many accounts, science fiction dreams actually came true. Someone from 1900, who had not even seen the first airplane yet, would be in a similar shock in 1940. But 80s guy would probably feel much more indifferent in our time. Walking in the streets, how would our 80s guy even know it is not still the 80s, if we take away smartphones and disregard fashion trends? Yes, of course, computers are more advanced and resolutions are higher, but much of the typical 21st-century tech we have today already existed in the 80s. Even the internet was already around. Much of the digital age is simply the commercialization and optimization of Cold War (DARPA-funded) technology. Early adopters at universities and research labs were using Usenet and IRC in the 80s: basically social media without high-resolution images, data mining and ads. Technologically, Facebook, Twitter or Instagram would have been, sort of, possible in the 80s. Not immediately on the current scale, not with all the features and mobile devices, but it would also not necessarily have been a world of difference if people had really wanted it back then. That is not to say that ideas themselves are not innovations that take time to mature. But still, here we are in the 2020s: where are the new energy sources, higher standards of living, bionic eyes, VR worlds, holograms, moon bases, household robots and thinking machines? Siri is nice but not exactly HAL 9000 from 2001: A Space Odyssey. Although generative AI finally seems to come a lot closer, it is arguably still not a thinking machine. And spectacular modes of transportation like the Concorde and the Space Shuttle have even been decommissioned, so in a sense we have regressed in the area of transportation. The same goes for space exploration: we did not colonize Mars; in fact, we never landed on the moon again. We have some bullet trains, but much of the Western world essentially still runs on train tracks from the 19th century. Now it is "too expensive" to truly upgrade. It is an interesting observation that a lot of recent innovation has taken place in cyberspace and not in the physical world.
Whenever we look at innovation in the physical world, it appears to be a lot harder to achieve and, again, much more costly. We also see this with AI: it mostly remains confined to cyberspace. In general, the West seems to suffer from a hollowed-out manufacturing base; we are increasingly incapable of actually building anything domestically in a short time span, and this is already the case with basic necessities such as houses or roads. In other cases innovation is not even much more than a fashion trend: round phones, square phones and round again, with faster CPUs that, for most people, do not make a difference. We even get the eerie feeling that 21st-century pop culture is stuck in the 20th century, with all the reruns, remakes and retro styles5. And social progress? Inequality has been rampant in the West6. The middle class has lost much of its relative wealth, security and living standards since the late 70s, at least in terms of things that actually matter, like education, housing and healthcare7. Moreover, life expectancy has been falling in the US. This, of course, came as no surprise to the 80s "no-future" generation dealing with austerity and the supposedly 'unrestrained and deregulated free markets' of Reagan and Thatcher. But in return we did not even get the cool advanced cyberpunk tech and aesthetics that science fiction writers pictured would come along with it.
Now, obviously cyberpunk and science fiction are just that: fiction. I admit I use these examples to provoke a bit. However, if we look at more serious futurologists we can see a similar tendency to overestimate progress. First, credit where credit is due: some futurologists also made accurate predictions8. Arthur C. Clarke has been spot on a couple of times, for example with his prediction that one could work from anywhere and that all data and knowledge would be widely available. And there are quite a few other predictions that did come true. Nevertheless, here are some other predictions Clarke made in the 60s for our time: a lunar settlement by 2020, fusion power and, yes, actual AI (AGI)9. Even Turing himself hinted that the Turing test would be passed around 2000 already, although by some accounts he predicted 2020. In any case, the Turing test has not been conclusively and convincingly passed yet, although it has to be admitted that, once more, we do seem close with the advancements in LLMs. Nevertheless, that still does not tell us how close we are to AGI, let alone sentience, if that is possible at all. That remains utter science fiction, just as it was in the 60s. Herman Kahn also made a large number of predictions in his book The Year 2000: A Framework for Speculation on the Next Thirty-Three Years10. We find: brain implants, use of robots in everyday life, giant supersonic jets, new sources of energy (fusion) and, again, actual AI. Someone took the time to distill 135 predictions from this work, of which the vast majority (about 100) fall into the category 'we are not that advanced yet'. SF giant Asimov had a similar tendency: some accurate predictions about the adoption of computers, but also a lot of stuff about space settlement and energy-beaming satellites. We can add more futurologists to the list, but in many cases overestimation of our progress remains the common denominator. And make no mistake, people actually did believe that these predictions, or at least a great number of them, would come true. Who would have thought back then that the vast majority of them would, at the time of writing, simply not have materialized at all? Predictions for the year 2000 that are still not realized in 2025: what happened?

Of course, the arguments for an antithesis to this observation are plentiful: the Cold War space race produced space fantasies, 'flying cars' are not practical, the digital revolution is less visible, things that looked obvious in the 80s may just look like stupid ideas by now, and so on. Sure, all these points hold some merit. But as we have seen, there is also clearly a discrepancy between the changes we experienced in the first part of the 20th century and those that came after. Future shocks were real in the 20th century. Perhaps because of this we adhere to a naive consensus about the pace of technological change and superimpose it on current hypes. This becomes even more obvious if we look at the second commandment of futurology: that it is exponential.
Visualizing the decline: technological change is not always exponential
The words "exponential growth" or "exponential change" are mentioned so regularly that many simply accept that this must be true for technological progress, and that "exponential change" explains a lot of societal phenomena and problems as well. If things are confusing in modern society, it must be because of exponential change. Some may point at Moore's law as an example. But how much does this really explain? And is the paradigm of exponential growth even true? We have seen huge improvements in computer graphics, and we have indeed been able to cram exponentially more transistors into an integrated circuit (chip), for longer than even the late Moore himself believed would be possible. Because of the nature of exponential growth we tend to underestimate the numbers: the amount of memory we have, the clock speeds of CPUs. But strangely, in many cases the capabilities of the technology were still overestimated. If the first part of the 20th century was an age of inventions, we live in an age of tweaking, where we improve those technologies through iterative refinement; more of the same. Or worse, an age of PR, while the big breakthroughs and paradigm shifts were left behind in the 20th century11 (I will expand on this in Part IV). We do see the phenomenon of decline in fundamental science. One paper showed that research productivity in the physical and life sciences has been in decline, at least since the 70s12. A decline in patents per capita has also been reported13. On top of this, researchers found that patents, as well as scientific papers, are becoming less disruptive, according to their definition of disruptiveness14. Even if we do not look for paradigm shifts but accept the power of gradual innovation, as with computer chips, the growth seems to run into limitations sooner or later. Beyond measuring trivial properties such as resolutions, though, it is not easy to quantify general technological progress, and to some degree the idea of progress is of course also subjective. Ted Modis, a physicist who specializes in 'growth dynamics', undertook the challenge of quantifying technological change and actually came up with a model.
He proposed that technological change, or growth, is a subset of changes in 'complexity', which he related to the physical concept of entropy15. Modis suggests that we see this type of change in complexity at play everywhere: in biology, in the economy and, indeed, in technological change. And what Modis finds in all these realms is not exponential growth but a so-called logistic function or S-curve (Figure 1). So growth is exponential only during a certain period. Using data, Modis shows that our technological progress has indeed slowed and that this is to be expected in an evolving system16.
The law that describes ‘natural growth’ states that “the rate of growth is proportional to both the amount of growth already accomplished and the amount of growth remaining to be accomplished”
- T. Modis
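In symbols, one standard way to write this law (my notation, not necessarily the one Modis uses) is the logistic differential equation, where X(t) is the amount of change already accomplished and M the total amount the system can accommodate:

\frac{dX}{dt} = \alpha \, X \, (M - X), \qquad X(t) = \frac{M}{1 + e^{-\alpha M (t - t_0)}}

The solution is the S-curve: growth looks exponential while X is still small compared to M, and flattens as X approaches M.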
What this means for technological change is that, at first, not much changes. Then something revolutionary seems to occur and growth does become exponential. Eventually, however, the growth flattens and it becomes increasingly hard to achieve even more change within the same system (Figure 1a), until another major innovation occurs that essentially sets off a new sub-process17.
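As a rough illustration of this S-shaped dynamic, here is a minimal sketch in Python (the function name and parameter values are mine and purely illustrative, not taken from Modis's data):

import numpy as np
import matplotlib.pyplot as plt

# Logistic (S-curve) growth: slow start, a seemingly exponential middle phase,
# then saturation as the cumulative change X approaches its ceiling M.
def logistic(t, M=1.0, alpha=10.0, t0=0.5):
    return M / (1.0 + np.exp(-alpha * M * (t - t0)))

t = np.linspace(0.0, 1.0, 200)
X = logistic(t)              # cumulative change accomplished
rate = np.gradient(X, t)     # rate of change: rises, peaks, then declines

plt.plot(t, X, label="cumulative change (S-curve)")
plt.plot(t, rate / rate.max(), label="rate of change (normalized)")
plt.xlabel("time (arbitrary units)")
plt.ylabel("growth (arbitrary units)")
plt.legend()
plt.show()

Early on such a curve is practically indistinguishable from an exponential; the flattening only becomes apparent once the ceiling is near, which is exactly why extrapolating from the steep middle phase is so tempting.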

Growth within AI is likely to be exponential right now because it is still very much uncultivated terrain, though not entirely new either. Yet at some point it will become disproportionately harder to improve even further. Indeed, all these systems are themselves part of a larger system that again follows the same S-curve (Figure 1b). It does not matter that AI is self-learning; any evolving system can be described in terms of increasing complexity. This is also why Modis is an early critic of Kurzweil's singularity hypothesis18. As time goes on, a system approaches maximum entropy and the amount of change that can still happen decreases. However, even if it gets harder for technology to improve gradually, we can still aim for paradigm shifts. As we will see in Parts II-IV, our economy and culture are arguably not doing a very good job at this.
→ Continue to Part II: On automation and fake jobs
Of course it is not as if nothing spectacular happened in fundamental science at all; take genetic engineering or the James Webb telescope, for example, or the rapid increase in lithium-ion battery capacity. But it is still not on par with previous eras of the modern age.
Futurologists like Toffler are usually more accurate in their predictions about social change, for example that most people would be working in the service sector in post-industrial societies. However, I believe this is not only because of technological change but also because of political decisions. For example, much of post-industrial society developed because of offshoring production to low-income countries instead of automation.
This has often been associated with the Fourth Industrial Revolution.
This is basically what the aforementioned Future of Life Institute is warning of.
Mark Fisher has touched upon this hypothesis a lot. I would also like to note that I find post-postmodern ideas such as metamodernism hitherto unconvincing in actually moving culture beyond postmodernity in practice, although the basic effort of moving beyond it is of paramount importance.
Several graphs on www.wtfhappenedin1971.com show a strong increase in inequality over the past 40 years.
Loss of purchasing power mostly occurs in essential things like healthcare, housing and education, whereas commodities one does not typically need, like gadgets, become cheaper. Standards of living can therefore easily be presented as improving by looking at consumer prices and incomes as a whole (e.g. the CPI). Important reasons for this are the growing discrepancy between the rewards for wage labor and those for capital, as well as offshoring. Another explanation for this phenomenon is the Baumol effect, but in my opinion this fails to appreciate the impact of financialization.
H.G. Wells (1901). Anticipations of the Reaction of Mechanical and Scientific Progress upon Human Life and Thought
Arthur C. Clarke (1962). Profiles of the Future
Herman Kahn & Anthony J. Wiener (1967). The Year 2000: A Framework for Speculation on the Next Thirty-Three Years
e.g. paradigm shifts as defined by Thomas Kuhn.
Peter Cauwels, Didier Sornette (2022). Are ‘flow of ideas’ and ‘research productivity’ in secular decline?
Jonathan Huebner (2005). "A possible declining trend for worldwide innovation". In this work, US patents were analyzed.
Michael Park, Erin Leahey & Russell J. Funk (2023). Papers and patents are becoming less disruptive over time. The paper also shows that, over the last few decades, we have more or less been living in an age of the life sciences.
Entropy, which plays a big role in thermodynamics and the "arrow of time", can be explained as the amount of randomness in a system, or how dispersed its energy is. Specifically, Modis defines complexity as the time derivative of entropy and he is able to fit this model to many real-world examples. Visualize it this way: if you pour milk into your coffee, it first mostly stays in one place. After a while the diffusive forces will spread the milk throughout the coffee; the 'complexity' of the milk changes a lot. At some point the milk will be evenly distributed throughout the coffee; one could say there is maximum entropy and, as a consequence, no further change toward disorder can occur.
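Schematically, and in my own notation paraphrasing this definition:

C(t) \propto \frac{dS}{dt}, \qquad \frac{dS}{dt} \to 0 \ \text{as} \ S \to S_{\max}

i.e. complexity is the rate at which entropy changes, and that rate dwindles as the system nears maximum entropy.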
Ted Modis (2012). “Why the Singularity Cannot Happen” http://www.growth-dynamics.com/articles/Singularity.pdf and Ted Modis (2009). “Forecasting the growth of complexity and change”
Although iterative changes can sometimes produce paradigm shifts, it is often also the other way around: a completely new invention starts a new cycle of change. This is also what is missing in many accelerationist proposals. Take biology, for example. For a billion years not much happened; then eukaryotic cells, cells that revolutionized energy use, appeared. This then created the conditions for complex life to evolve at all.
Ted Modis (2006). “The Singularity Myth”
Thanks for this. Hadn't heard of Modis but his idea of 'diminishing returns' as you describe seems very useful.
We suffer from the idea that because we *can* 'computerise' something then we *should*. This has led to the introduction of IT systems which do a worse job than their manual predecessors. For example, 'making tax digital (MTD)' enforces IT systems on businesses, together with the overheads that entails, to supply the taxman with half a dozen easily calculated numbers every three months. The simple job of posting a package from A to B has now become overburdened with reams of (electronic) documents which are utterly irrelevant. And all this bureaucratese has to be maintained and made even more complex when the next set of hobby-horse regulations shows up. No wonder businesses are struggling.
If it's this bad in the simple cases described above, how much worse will it be in complex cases? My guess is that we're already in the rapidly diminishing returns phase for most IT.
As I've suggested elsewhere, the relative stagnation of the last few decades compared to the decades before is largely a result of declining energy availability per capita. As of 2025, energy from oil acts as a multiplier on the rest of economic activity because of the ubiquity of oil- and gas-burning engines. Between the late 1800s and the 1950s, oil extraction technology improved faster than the decline in the quality of accessible oil reserves. Since then, oil has become harder to extract faster than extraction technology has improved, and the resulting oil has been shared over a larger fraction of a growing human population. So even though we as a species are pulling more oil out of the ground than at any time in the past, the amount of oil burned per human per year has been declining.
A good way to express the idea that oil is becoming harder to extract is EROEI, energy returned on energy invested: how much energy you get out for the energy you have to spend pulling it out of the ground. EROEI has been declining ever since oil was discovered, although the decline has been mitigated by improving technology.
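In formula form (my notation, just restating the ratio described here):

\text{EROEI} = \frac{E_{\text{delivered}}}{E_{\text{invested}}}

so the net energy left over for the rest of the economy, E_delivered - E_invested, shrinks toward zero as the ratio approaches 1.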
I'm optimistic that as humanity migrates to solar and wind power, energy availability per capita will start growing again, leading to a reinvigoration of progress. Indeed, growth in renewables is likely the main reason the wheels haven't completely fallen off the world economy.