Let me start by saying that this is a highly speculative article. Yet, you probably do not want to dismiss it outright, as it ties together a few recent trends that were pushed on society and the glitches in the matrix we have been seeing recently. The starting point is the big push towards AI, the Internet of Things (IoT), Smart Cities, self-driving cars, and whatnot that has been going on for well over a decade. Technological breakthroughs do not happen in a vacuum. Instead, the people funding such research have the view that if they only throw enough money at a problem it will be solved, not too dissimilar from ordering pizza. I used to work in academic research, and I have seen “research plans” spanning well over a decade into the future. The people applying for grants use carefully picked weasel words to describe their vision, yet the government bodies, private companies, or foundations that hand out the money are normally not in a position to really understand what you are going to do. You basically tell them why you need a certain amount of cash right now so that in five years’ time you can work on something much bigger and much more useful. In the present, there are big breakthroughs to be had that may lead to even bigger breakthroughs in the future.
I have written about our clueless elite in the past, and their inability to understand technical projects is yet another example of this. Those buffoons have at best only a superficial level of knowledge of the areas of funding they supervise, and at worst they know nothing but buzzwords. Sometimes, you therefore throw in a few such buzzwords to get rubber-stamped approval. There is also the social aspect, i.e. it is rather helpful if the guy leading your lab is a drinking buddy of a few of the scientific advisors who get asked for feedback on your research proposals. I am not sure it even matters all that much, because the people who decide what to fund are oftentimes really quite stupid, with their degrees in the humanities or social sciences, and may be more interested in learning how your work will benefit underrepresented minorities and women than in the finer technical details of what you have planned.
There has probably been bullshit in research ever since dull bureaucrats began tracking “metrics” to supposedly measure success, and possibly even before that, once research had become a viable career instead of the hobby of men with genuine intellectual interests. Think of titans like Galileo, Darwin, or Leibniz! Yet, the cultural shift towards making empty promises took this to a whole new level. Instead of researchers in labs talking up the potential outcome of their work, you have multi-billion-dollar companies blasting b.s. all over mainstream media. Artificial intelligence is a wonderful example of this. About ten years ago, it seemed that every other news story was about the imminent advent of self-driving cars and how this would put millions of truckers out of work. Sexy baristas with perky tits were supposed to be replaced by vending machines, and Jose the burger flipper would be out of work due to fully automated burger fryers. None of this seems to really have worked out all that well, though.
In order to cover up the shortcomings of AI, tech companies farm out a large amount of work to manual laborers who spend their days detecting porn or tagging bikes and buses in images. In fact, some of this work is farmed out to you, and most of you are not even aware of it. Whenever you encounter an “image captcha” that asks you to mark all fire hydrants or hills or whatever else, you are tagging images, and those tags are then used to improve the performance of image recognition algorithms. I also understand why this is done, as I have worked in AI myself. Once you go beyond carefully curated test sets, the performance of AI algorithms drops quite significantly. By this, I obviously do not want to imply that all this work is useless. It is tremendously useful, yet you will not be able to fully automate much, because idealistic research seems to frequently forget about the problem of adversarial inputs. This problem cannot be hand-waved away, either, because attackers can themselves use machine learning to optimize the inputs needed to make the target algorithm fail. For instance, Tencent figured out that it takes only three white dots on the road to make Tesla’s Autopilot software steer into the wrong lane. Recently, there has been more work on adversarial inputs, so the problem is getting more attention. It is a huge blow to the promises of AI cheerleaders. I would even argue that the attackers will always have the upper hand, as some problems probably can never be perfectly solved. Besides, pointing out flaws is normally much faster and less resource-intensive than creating something. For instance, Intel spends tens of billions of dollars on developing and manufacturing new chips, yet poorly funded academic researchers all over the world find flaws in their designs. Spectre and Meltdown were discovered in 2018, and last year, researchers discovered an unfixable bug in Intel CPUs.
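The adversarial-input idea can be sketched in a few lines of Python. The toy example below applies the gist of the well-known fast gradient sign method to a made-up logistic-regression “classifier”; all weights, inputs, and the step size `eps` are invented for illustration. A confident prediction is flipped simply by nudging every input feature a fixed step in the direction that increases the classifier’s loss:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up "trained" logistic-regression classifier: predicts class 1 iff w.x + b > 0.
w = np.array([1.2, -0.8, 0.5, -1.5, 0.7, 0.9, -0.4, 1.1])
b = 0.0

# An input the classifier labels as class 1 with high confidence.
x = np.array([0.6, -0.4, 0.2, -0.8, 0.3, 0.5, -0.1, 0.6])
p_clean = sigmoid(w @ x + b)    # ~0.98, confident "class 1"

# For logistic regression with true label 1, the gradient of the loss with
# respect to the input is (p - 1) * w. The attacker moves every feature a
# fixed step eps in the sign direction of that gradient, i.e. the direction
# that increases the loss.
eps = 0.6                       # invented step size, deliberately large for a toy demo
grad = (p_clean - 1.0) * w
x_adv = x + eps * np.sign(grad)
p_adv = sigmoid(w @ x_adv + b)  # ~0.36, the predicted class flips
```

Against real image classifiers, the same trick works with perturbations small enough to be nearly invisible to humans, which is what makes attacks like the Tesla lane demonstration so hard to defend against.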
Sometimes funding is no issue at all, because the underlying problem is simply unsolvable. Considering that not even humans can reliably detect satire, I do not quite see how a machine-learning algorithm would do it, and by this I do not even intend to mock autistic researchers who are unable to detect irony and satire themselves, although this is certainly an issue for their work as well.
Self-driving cars were just one part of the puzzle, however. We were told about Smart Cities with IoT sensors everywhere, and nearly complete automation, including in the workplace with “Industry 4.0”. This is not really panning out, for many reasons. Yet, it has been the proclaimed goal of the Davos elite for well over a decade now. Now riddle me this: if we automate transportation, manufacturing, and a large swath of service-sector jobs, what are we going to do with all the excess labor force? Could it be that we would have to find ways of getting rid of them? I vaguely recall several members of the elite proclaiming that we should reduce the human population by a few billion.
Now is the point where I will start to speculate: I think that the big push for automation was designed to create a large excess labor force. The elites were dreaming of a world that essentially runs itself. This is what was promised to them by researchers greedy for more research funding. Once you have billions of useless eaters, you have another problem. Those people merely consume finite resources. If we paid them to do nothing but sit around and procreate, we would only exacerbate the problem. Arguably, this is where the oft-publicized depopulation agenda of the elites comes in, of which Covid-19 is just one aspect. Due to the vaxx there are now excess deaths, with a rising tendency, and we know nothing about long-term side effects yet. You can bet that they will not be good.
A trend in management psychology is “creating a sense of urgency”. The goal of this is to squeeze more labor out of your minions in the short term, and this is particularly popular in knowledge-intensive jobs. Probably, the Davos crowd fell victim to their own b.s., pushing really hard for both the automation and depopulation agenda instead of finishing one and then starting the other. Doubling down is not just a problem for gamblers. It is also often done by people who have made poor decisions in their professional lives. Instead of admitting that they messed something up, they start a new work initiative as a distraction. Presumably, the Davos crowd has been doing exactly that. AI is not panning out as they would have liked, so instead of fixing that, they move ahead anyway and push Covid.
One of the consequences of the hardcore vaxx agenda is that people are leaving their jobs en masse. There are trucker shortages all over the civilized world. Sometimes it is due to older truckers dying of the vaxx, sometimes they walk off the job because they do not want to get jabbed to death. In any case, there is now a shortage in the logistics sector, and it is largely a self-induced one. Yet, this gap was supposed to be filled by AI and self-driving trucks. By the way, self-driving trucks were sold to funding bodies as “low-hanging fruit”. This did not pan out. Similarly, there are now labor shortages in other sectors as well, all over the civilized world. I think this was all intentional, yet it was presumably not supposed to be noticeable, as AI systems were expected to fill those gaps.
In the immediate future, I think we will see a lot more disruption, partly as a consequence of poorly thought-out elite plans, like the interrelation I describe in this article, and partly also deliberately. Currently, the public is being primed for food and energy shortages, so this is probably in the cards. On top of that, there are deliberate attempts at bringing civilization down, for instance via the 1.7 million new illegal immigrants entering the U.S. this year alone. The chaos agenda of unlimited mass immigration from the third world is presumably designed to make the common man miserable and fully destroy social cohesion, without affecting the lives of the elites who fancy themselves invincible in their gated communities. They thought they would be completely unaffected by all of this, yet AI is not living up to its promises, so some of the suffering is currently shared. Sure, Davos men do not have to worry about food getting more expensive, but if luxury car manufacturers cannot produce their cars due to a supposed chip shortage, they get to enjoy the fallout of some of their harebrained schemes as well. It may even happen that the golems they brought in rise up against their masters. Part of me thinks that there is a possible future with guillotines, if not for the real rulers then at least for the political servant class, up to the MP or POTUS level, not that I would condone any such violence. In any case, I do not quite see how we are going to “build back better” and restore sanity to society without swapping out the elites. Washington must fall for the Western world to prosper again.
Did you enjoy this article? Great! If you want to read more by Aaron, check out his excellent books, the latest of which is Meditation Without Bullshit. Aaron is available for one-on-one consultation sessions if you want honest advice. Lastly, donations for the upkeep of this site are highly appreciated.