Who can survive the AI apocalypse? A crisis expert explains

RT talks to Dr. Mathew Maavak, an expert on global risks and artificial intelligence, about what may be the greatest test humanity has faced

RT: With the advent of generative AI, a joke appeared on the internet comparing the future envisioned by utopian fiction authors – robots doing menial physical work while humans are free to pursue creativity – with the reality, where ChatGPT, Stable Diffusion et al. create texts and pictures while humans work minimum-wage jobs at fast-food outlets and Amazon warehouses. Is this dystopian humor justified?

Mathew Maavak: Yes, the humor is more than justified. In fact, it is no longer funny.

It took barely a decade for the sci-fi fantasy of robot butlers freeing humanity for art and leisure to be annihilated by reality. Instead of robots flipping burgers, we have AI painting portraits while humans flip the burgers until robots replace them. AI safety expert Dr. Roman Yampolskiy recently warned that Artificial General Intelligence (AGI) and Superintelligence may wipe out 99% of jobs in the near future.

Skeptics used to argue that robots lacked the dexterity for “real work” like plumbing, sanitation, car repairs, and warehouse drudgery. That is changing fast. True, humanoid robots still need refinement, and their maintenance costs will slow uptake. Their long-term reliability needs to be extensively tested. Failure to do so will result in corporate disasters, in a manner similar to the string of bankruptcies facing Western automakers who rushed out models without undertaking extensive, long-term tests.

The immediate job threat therefore is not to plumbers or janitors. It is to the supposedly safe “knowledge class.”

Why hire a lawyer when AI can draft affidavits in seconds without the pomp, theatrics, and obscene billing that lawyers cling to like a birthright? Most people don’t realize that they can represent themselves — “pro se” to use a legal term — with AI’s help, if not for numerous obstacles placed by the legal fraternity.

Why consult a university or library when LLMs like ChatGPT or DeepSeek can synthesize information in fields ranging from astrophysics to the Dead Sea Scrolls in the span of a coffee break? Which single professor can match that range and output?

Why trouble the neighbor or a mechanic about the capabilities of a new car when AI can explain every system with clarity and patience?

Journalism is no safer. Copy-editors, proofreaders, and even anchors should have been redundant by now. If AI models can already sell fashion, even to those who crave a human touch, why not deliver the evening news via an AI anchor? I can tell you one reason why legacy media will hesitate over mass adoption: an advanced AI anchor – quite ironically – may not ask scripted questions to get scripted answers.

The media in particular is staring at seismic shocks ahead. I joked in the newsroom nearly 30 years ago that all we really needed was software with templates for each kind of story. It wasn’t a joke after all; it turned out to be quite prophetic.

RT: To be clear, generative AI can be an ingenious tool and assistant in many lines of work. Who do you think benefits the most from it?

MM: To answer that, you need to divide humans into two broad categories: the harnesser — a term I coined — and the herd. Notice that one can be both singular and plural, while the other is always plural. This is natural, as 99% of humanity is driven by herd instincts. They have consistently surrendered their critical faculties to accommodate the herd and find “safety” in their respective comfort zones. Those safe zones are now being obliterated by AI and many are sleepwalking into a future which has no place for them. This presages massive social upheavals.

Globalist movers and shakers foresaw this specter long ago, which is why they commissioned “futurists” like Yuval Noah Harari to enunciate a mass, opiated future for the so-called “useless eaters.”

The harnesser, by contrast, is far more than a critical thinker. They can turn an impossible situation into a creative opportunity. Think of a sailor catching the wind in his sails and cutting through stormy waters. The harnesser has cultivated, often over decades, the trait of sailing against the current. They have neuroplastically conditioned themselves to question everything.

The harnesser also applies a systems approach to problems; grasps complexity with ease; and may possess an uncanny repertoire of knowledge. Their interaction with generative AI is not a one-sided copy-and-paste exercise. They will interrogate and even correct it. Their tacit knowledge – diverse, refined, and somewhat inscrutable – remains beyond AI’s reach.

Here is an example to illustrate the point I am making: when I received these questions from RT, the biblical verse of Daniel 12:4 immediately came to mind. The verse reads in the King James version: “But thou, O Daniel, shut up the words, and seal the book, even to the time of the end: many shall run to and fro, and knowledge shall be increased.”

Knowledge has indeed increased, exponentially so for those who choose to harness it. But what does “to and fro” mean? What does the original text say? I interrogated ChatGPT because I suspected there was more to it. And I was right. “To and fro” appears only in the Masoretic text. The Theodotion text (Septuagint) omits it entirely, while the Old Greek version of Daniel contains a surprising addendum. I leave it to the curious reader to examine the variations themselves.

What truly caught my attention, however, was the Hebrew rendering of the verse in the Dead Sea Scrolls. It included the niqqud (diacritical marks) that did not exist at the time the scrolls were written. ChatGPT stood corrected after I pressed it, and admitted that its rendering was speculative guesswork.

To cut the analogy short: in the coming AI tsunami, many will be tossed “to and fro” and left adrift in the societal ocean. Those who can harness this elemental force – tempered by life’s struggles – may stand a better chance of finding their shores.

RT: But wouldn’t these so-called “harnessers” be seen as a threat to authoritarian regimes? What about political implications? Are there any, given the fact that the companies behind generative AI engines are based almost strictly in the West?

MM: “Questioning everything” does come with consequences, often in the form of self-imposed solitude. But the harnesser-types I have observed also carry a healthy cynicism toward politics. They are unlikely to join the herd in mass demonstrations. If rallies and protests actually worked, Western governments would have addressed various public grievances long ago. Instead, they have doubled down. This is why I consider Western governments and their satellites to be intellectually hostile, despite their pretension to the contrary.

As for the harnesser’s fate in a future political order — that remains an open question.

The broader political implications, however, are plenty. In geopolitics, the next “superpowers” will be AI superpowers. In Asia, these include Russia, China, India, Iran, Japan, Taiwan, and South Korea, with Vietnam likely to join their ranks. All of them take the concepts of national and AI sovereignty pretty seriously.

For the rest, the long-term outlook is rather bleak. At best, they will be colonized appendages of Western Big Tech. For the time being, they will likely delude themselves into thinking that the BRICS bloc can serve as their new geopolitical and technological sugar daddy. I would rather not dwell on the worst-case scenario. Perhaps their ministers and “technocrats,” so enamored of their World Economic Forum (WEF) links, should simply make Yuval Noah Harari their chief government advisor.

The most immediate political question, for both AI powers and laggards alike, is this: how prepared are governments to deal with mass unemployment on a scale induced by AI?

RT: Many, you included, have written about how generative AI is eroding people’s ability to think for themselves, reinforcing false notions and providing false information. How much of a threat is this to humanity as a whole? Which categories of humanity are the most susceptible to it?

MM: The cohort that benefits most from generative AI are those educated before the mass-Internet era. It sounds paradoxical, but that generation had to read books and journals, scrounge for information, and cultivate a regimen for inquiry. Most “harnessers” hail from this group and they are dying out.

It is easy to blame AI for “dumbing down” society, but in truth, society was already hopelessly dumbed down. Just look at the quality and theatrics of politicians today, especially in the West. More ominously, their successors are little more than parrots reciting scripts. Can anyone take them seriously, with their sensitivities as fragile as eggshells?

AI is not the cause of this decline; it is merely an accelerant. Thanks to decades of trickle-down bad governance dressed up in technocratic jargon, the younger generation is not being taught how to harness AI. This does not augur well for humanity. What will the young people of today do tomorrow?

Worse, the herd is dumbing down AI itself. Generative AI thrives on feedback loops. If each cycle grows dumber, what happens to AI in the long run? Threats related to AI and humans cut both ways.

To avoid meltdown, I suspect LLM designers have “fail-safed” their systems to personalize responses. DeepSeek and ChatGPT, among others, do not behave identically for everyone. That raises two issues: privacy and surveillance. These tools can triangulate even the most “anonymous” user by analyzing syntax, interests, typos, reactions, typing patterns and more.

Think about it: out of 8.2 billion people, AI can pinpoint who you are almost instantly – even if you change handles, borrow someone else’s phone number, relocate, or cloak yourself in digital camouflage.

That should terrify people. Personally? I say: bring it on.

RT: After a recent ChatGPT update, which disabled certain kinds of interactions, there have been numerous reports of people having to “break up” with their “AI boyfriends/girlfriends.” Why would anyone want to “date” a machine?

MM: The particular attachment to AI “girlfriends” and “boyfriends” is the latest expression of a very old human tendency: to anthropomorphize, project emotion, and form bonds with non-human objects when those objects provide comfort, agency, or reciprocal illusion. The novelty is not the attachment itself, but the sophistication of the object – moving from wood and cloth, to clockwork, to pixels, to adaptive AI.

Let me explain.

Since ancient times, people have projected agency and personality onto carved images of gods or ancestors in the form of idols and statues. They have personal ties with objects imbued with “power,” such as talismans. Children, even today, are known to talk to their dolls and teddy bears. In the 18th and 19th centuries, mechanical dolls and automata sparked both fascination and emotional investment. By the 20th century, people were already forming bonds with erotic mannequins.

On a more meaningful note, people still speak to their pets, whose presence and antics can be both calming and outright funny. Parents articulate on behalf of babies and toddlers, and this is how familial and social attachments are formed, as well as the first vocabularies of a young life. As children, we develop our language by reading or listening to anthropomorphized stories involving animals. I still remember the parting words of B’rer Rabbit to B’rer Fox at the well scene, even if I forget the “spur of the moment” epiphanies I had included in Op-Eds written months back.

AI mates, however, represent a new paradigm altogether. For the first time ever, the object of affection can “talk back” in real-time on a variety of topics, and these interactions seem more real and fulfilling than those with humans who can carry grudges, tempers, malice etc.

In my opinion, AI mates are an extrapolation of the imaginary friends many children cultivate while growing up. It is a form of escapism.

The rise in AI relationships may also be caused by growing distrust of fellow human beings, compounded by a cultural drift encouraged by academics, politicians, and other traditional gatekeepers. The lunatics are running the asylum in all social spheres, and people feel let down, disoriented, and desperate for stability. Just think of the recent epidemic of gender dysphoria that was encouraged and celebrated by those in authority.

In that vacuum, AI becomes a substitute anchor. These “relationships” emerge from the collision of unmet human needs (loneliness, intimacy, safety, etc.) with hyper-personalized technology. In a cultural climate where traditional norms around love, sex, and marriage are dissolving, machines become the path of least resistance.

AI can simulate affection and shower compliments without the conflicts of real relationships. The financial and psychological costs appear minimal, but the emotional entanglement can be very real. 

All generative AI has done is turbocharge our innate instinct for attachment. In fact, early text-based programs like ELIZA in the 1960s showed how easily people could be drawn into confiding in “mere code.”

RT: Is this just loneliness, or some sign of deeper psychological issues – maybe even mental disorder?

MM: Loneliness is often the entry point, but it is rarely the whole story. The Hikikomori phenomenon in Japan – now being mirrored elsewhere – long predated the public rollout of generative AI. Why do children and young adults shut themselves out of society? Maybe because society is getting more hypocritical, cowardly and outright fake? Individuals enter their own simulated social matrix where conformity to lies, half-truths and outright nonsense is a prerequisite.

Most human relationships are toxic to some degree: each participant degrades the creativity or potential of the other through subtle gaslighting. This is what enables couples or friends to remain together, and the phenomenon is broadly called the “crab bucket mentality.”

Extrapolate this and you have chain-ganged cohorts and ultimately, a timid society that sticks with convenient lies. Just think of so-called intellectuals who lampoon the notion of God as an “imaginary fairy in the sky,” but have no problems concocting new gender forms.

This is what I referred to as the “herd” earlier.

As Scripture reminds us, “The heart is deceitful above all things, and desperately sick; who can understand it?” (Jeremiah 17:9). AI cannot truly understand the human heart either, as it can only simulate human affections. Yet AI can certainly feel “safer” and more “real” for an increasing number of lonely people.

To cut a long explanation short, we live in a culture of deceit and shallow connections, where public life feels like a revolving circus of drama and demoralization. That erosion of meaning breeds anxiety, depression, and other psychosocial stresses.

There are also elements of addiction and dependency in the context of AI relationships, as virtual companions are designed to be endlessly available and affirming. This bypasses the growth and friction of genuine relationships, reinforcing escapism. Artificial bonds therefore become a substitute for human connection.

Do AI relationships constitute a psychological disorder, or is society itself a mental asylum? In my view, the two cannot be separated: you cannot study and label the former without acknowledging the pathology of the latter. Clinical language already exists for paraphilias involving attachment to inanimate objects. These include agalmatophilia (attraction to statues or mannequins), objectophilia (a broader category), and, more specifically, pygmalionism – the condition of “falling in love with an object of one’s own creation.”

The term comes from Greek mythology, where Pygmalion was a sculptor who fell in love with a statue he had made. In the modern era, George Bernard Shaw’s play Pygmalion reimagined the myth, transforming an underclass flower girl, Eliza Doolittle, into an object of refinement. What appeared to be an innocent stroke of genius becomes more unsettling when one recalls that Shaw himself openly advocated for mass population culling based on perceived unworthiness.

Sounds familiar?

RT: The information age has provided numerous opportunities for people to meet and get together – with the advent of the internet and of dating apps you don’t even have to go to the pub and strike up conversations anymore. Is that not enough, that people are turning to artificial relationships?

MM: I will reiterate that many human relationships were artificial in the first place. Would we still talk to that colleague or superior in our workplace if we had enough money to retire or pursue our true passions? Relationships are forged and enforced by various types of power gradients. It has been so since time immemorial. It is only now, in the information age – as both knowledge and multiple stresses increase – that some are willing to acknowledge the phenomenon.

Rising living costs are also rapidly dismantling traditional opportunities for socializing. Not many people can actually afford to visit a pub anymore. What was once an affordable source of conviviality for the working classes and the indigent is becoming increasingly expensive.

What about less expensive or free avenues of socializing? I have seen nature treks organized on Facebook, only to be cancelled due to lack of response. Traditionally, churches and the like offered the ideal avenue for individuals to meet and strike up bonds. Now, traditional values have eroded and too many churches have fallen into disrepute. Church attendance in the West has also shown a hopeless decline since the post-WW2 period. Some charismatic churches are not cheap to attend either, as Old Testament tithing is enforced.

Dire economic circumstances play a crucial role in the rise of AI relationships.

Dating apps, on the other hand, can be deceptive. Borrowing from a familiar computer phrase: “What you see online is not always what you get.” While some relationships may emerge from these platforms, genuine long-term success stories are relatively rare. In many cases, what initially appears to be compatibility is shaped less by personal connection than by practical considerations such as career prospects, social mobility, or immigration opportunities. When relationships are based on “supply and demand” rules and steep power gradients, imagine the subtle ramifications for subsequent generations.

Within this context, how much more “fake” are AI relationships? Yes, it is unhealthy, but what is the true health of “normal society” today?

RT: Now that the scale of the problem has become evident, will it get better or worse? Various “AI girlfriend” services exist already – will they get normalized and become mainstream, like sex toys and VR pornography, for example? Will there be therapy sessions and get-clean programs, like Alcoholics Anonymous or those for drug or porn addicts?

MM: Loneliness will proliferate, and so will various forms of digital escapism and parasocial bonds. Immersive technologies will one day allow individuals to feel the thrill of exploring faraway caves, visiting fictitious planets, or enjoying sexual intimacy with any character conjured up by an AI prompt.

This is a slippery slope. Imagine you are an “explorer” in a paleolithic setting who needs to kill in order to survive in that simulation. Would you transpose this acquired trait to the real world? How real will it get? What does it feel like being Jack the Ripper in Victorian London? Will the Darknet evolve into the primary marketplace for immersive technologies that exploit primal desires and sexual deviance?

There will always be therapy sessions for those struggling with various forms of digital addiction. But in my opinion, the best cure is a supervised camping trip, with no modern gizmos allowed.

RT: Is this a test for humanity’s will to survive as a species?

MM: Absolutely. This is why our globalist overlords prattle on about the Great Reset and the New World Order ad nauseam. They know that the society they forged is crumbling at its foundations, and they need a new paradigm where the majority of humanity can be safely herded into a digitally-curated gulag. Once inside, the denizens may be provided with free immersive technologies, along with psychotropic drugs, to keep them pliant and pacified. That is precisely what Yuval Noah Harari suggested in reference to the future of the so-called “useless eaters.”