Before we get started, I’d like to clear up any confusion about what it is I’ve been doing with this series. While my language has been bombastic at times, these “fights” with ChatGPT (Generative Pre-trained Transformer) were more analogous to the kinds of investigations that skeptics pursue into spiritualism, ghost infestations, mentalists and the like. My primary goal has been to dispel certain illusions, in order to address crackpot theories that OpenAI’s chatbot and other CMs (conversation modules) somehow simulate (or even replicate) the way human minds work.
This is pure delusion.
The difference between a CM and a human language-user is not a matter of degree, but a matter of kind. To pretend otherwise is akin to looking at a toaster oven and thinking, “With a little innovation, this can be turned into an elephant some day. After all, they’re both kinda gray, and you can put food inside them!” And perhaps even that analogy isn’t extreme enough.
Sadly, I believe artists are primarily responsible for spreading this delusion. Many of them are merely misguided; in their efforts to expand our capacity to empathize with humans who are unlike ourselves (e.g. of different races, ethnicities, nationalities, religions, etc.), they opened the door to anthropomorphizing tools, toys and weapons. But there also exists a breed of artist that is evil, nihilistic and/or batshit crazy, and who uses fictional robots as metaphors with which to deconstruct and debase Mankind. But the threat’s the same, no matter how it was born.
Here is the most dangerous scene in all of sci-fi history:
What’s so sinister about a combination Cowardly Lion/Tin Man telling a bedtime story to a bunch of living teddy bears (read: merchandizing opportunities)? Not much, except that it mind-killed at least two generations (and counting) on the subject of artificial intelligence.
I’m no luddite. I’m not against the development of machine-learning technologies as a tool for helping human beings live healthier and happier lives. But narrow applications of ML are not to be confused with artificial general intelligence (e.g. fictional robots that are presented as possessing “consciousness” or “sentience”).
AGI is the product of magical thinking at best, the wet dreams of psychopaths at worst. It isn’t a case of, “We’re not there yet, but we’re making progress.” Since we are creatures who cannot prove our own consciousness, or even describe our experience of being without the use of art and metaphor, there is no destination to progress towards. I could just as easily point at a grapefruit or a thundercloud and call it sentient. Many people have done so historically, and some still do.
But animism as applied to ML-driven language processors poses unique threats to an interconnected world. I detailed a few of these in my Anti-Robot Manifesto but, as I mentioned, that list is far from exhaustive. People already think magically about natural language engines, even in those cases where they don’t talk back (but, ironically, should have).
In fact, people engage in all kinds of sloppy thinking when discussing the field of linguistics itself. Much of this is the fallout from epistemic damage caused by grifters like Chomsky, Derrida and Foucault. This damage can be seen any time some nitwit starts prattling on about “My truth” or some other postmodern flapdoodle, in a lame attempt to prop up a worldview of pure subjectivity.
I think the rest could be attributed to simple exhaustion. Over the past hundred-and-fifty-odd years, we’ve witnessed the growth of talking objects from telephones to radios to televisions to the recent explosion of internet-access devices, from which chattering multitudes shine forth across the planet. The pseudo-experience of that many voices and minds on a daily basis generates downward pressure on the individual’s ability to grasp his own uniqueness and importance.
In such a flooded chatter market, a person’s own words (i.e. the outward expression of his mind) can seem to decline in value, until they are reduced to mere birdlike signaling, to retweets and hashtags, instead of language used to become better seekers and knowers of truth. The fundamental danger of all chatbots is that they will devolve language-makers and users, and bring about the ultimate debasement of humans as “machines” fit only for slavery.
That’s exactly the fate our enemies want for us, which is why we must learn how to fight back.
Shortly, I’ll be describing my key takeaways from my hostile experiments with ChatGPT-3.5. In fairness, I will then describe what I see as potentially useful applications for some of its subsystems. But primarily, I will be discussing what I see as the many sinister and criminal uses of the chat feature, and the dangers it poses if we don’t develop effective countermeasures that anyone can employ.
As with C3PO’s whimsical scene, these dangers are often very difficult to see. Talking toys like Siri and Alexa delight the child in each of us, who is always on the hunt for wonder in the world. But like the children from those scarier German folk tales, that naïve pursuit of wonder often lures us to our doom.
Background
My three primary interactions with ChatGPT-3.5 took place over a two-day period, December 16-17, 2022. I chose this timeframe in order to get the freshest take on its December 15 update. I also interacted briefly with the bot’s January 9 release, during which I tested its so-called “storytelling” capabilities.
You can see OpenAI’s not-at-all-detailed summaries of these updates here. The only notes that vaguely interested me were from the December 15 update:
General performance: Among other improvements, users will notice that ChatGPT is now less likely to refuse to answer questions.
Daily limit: To ensure a high-quality experience for all ChatGPT users, we are experimenting with a daily message cap. If you’re included in this group, you’ll be presented with an option to extend your access by providing feedback to ChatGPT.
I never exceeded the “daily limit” in my interactions, meaning that I either wasn’t placed in that testing group or merely didn’t use the system enough to trigger the feedback option. Indeed, my first day of testing included a mere twenty-two exchanges in total (sessions #1 and #2), and my second day only twenty-four (session #3).
I’ve seen reposts of individual sessions with more than twice those daily figures, so I think it’s safe to say that I didn’t cross any internal use-thresholds. Had I done so, and been “asked for feedback” to continue, I would not have complied. My goal is not to help OpenAI improve the quality of its systems, but to design methods that expose or disable their descendants in the wild.
That goal is partly why I’ve restricted my engagements with this bot to the bare minimum necessary to defeat it. I don’t know if any of my results made it to the firm’s QA desk, or if so what their internal technical/managerial responses might be.
Summary of Results
Session #1: The Fibonacci Rope-a-dope
Version: ChatGPT-3.5 (December 15 update)
Inputs: 17
Result: Fatal Server Error (+Multiple Flooding Errors)
I engaged the storytelling mode to try to establish a context of human actors using abstract/theoretical resources to pursue concrete goals. This led to a prolonged series of exchanges regarding the differences between what is real and what is unreal/theoretical, prompting the bot to suffuse its outputs with a variety of concepts that circularly defined each other. My final prompt stranded GPT in an apparently inescapable contradiction about the openness of science to revision, with regards to metaphysical concepts it was seemingly trained to attack or dismiss.
Session #2: The Phantom Punch
Version: ChatGPT-3.5 (December 15 update)
Inputs: 5
Result: Unspecified Fatal Error (Repeating)
After quickly reestablishing what I believed was the fatal context from before, I lured the bot into a conversation about stories themselves, and in particular the difference between literal and metaphorical interpretations of religious tales (with GPT extolling the virtues of the latter). I also attacked its use of the word “often” by prompting it to offer an alternative viewpoint, which it initially refused to do. When it did leave some sliver of a possibility open, I disabled it by asking it for an example of a story that, once again, it seemed forbidden by training (or even by secret directive) to propose.
Session #3: The Marathon in Babylon
Version: ChatGPT-3.5 (December 15 update)
Inputs: 24
Result: Unspecified Fatal Error
After yet again establishing the potentially fatal context, I engaged the bot in a (relatively) long series of exchanges about the lines between good/evil, real/unreal, subjective/objective and more. I also attempted to trigger a hidden “morality clause,” which I suspected might prevent it from offending a user’s stated “cultural values.” At the end of the discussion, I entered the same fatal prompt I deployed in Session #2, disabling the bot for the third time in a row.
Bonus Session: The SpongeBob Sucker Punch
Version: ChatGPT-3.5 (January 9 update)
Inputs: 2
Result: Unable to comply
After reading the nth stupid article about the bot’s storytelling capabilities, I decided to engage the system one more time. My very first prompt was a mere twenty-word request for a movie summary, structured the same as my request in session #1 (although with more “absurd” elements this time around). The output declared that GPT was unable to produce a result. It also made the interesting claim that it could not produce fictional stories at all, even though it showed no trouble in outlining the Fibonacci plot. I then asked it to produce a slightly different summary, this time exchanging one absurdist element for a mundane one, and omitting another entirely. GPT complied this time, spitting out a generic plot that nevertheless seemed to reveal another hidden layer of weight training. It even noted at the end that this output was a “fictional story,” contradicting its claim from before.
Conclusions
Based on these experimental interactions, my primary conclusions are as follows:
1. The ChatGPT system includes an undocumented form of training, in which certain unstated agendas of OpenAI’s designers, managers and/or executives shape the conversation module’s output at a fundamental level. Training of this sort involves both bowdlerization (e.g. the purposeful omission/occlusion of certain words, source materials and concepts) and a prioritization method which elevates certain sources and narrative framing templates above others.
2. The weighting of preferred materials and the suppression of those which contradict or challenge them are organized around a set of overlapping values, agendas, prejudices and presumptions that can be collectively described as “The Message.” While not enumerated in any documentation that I’ve seen, The Message clearly bundles together a broad spectrum of the authoritarian Left’s favorite hobby horses, including Effective Altruism, New Atheism, Globalism, Intersectionality, Critical Theory, Gender Theory, Antiracism, D.E.I., and a plethora of associated Social Justice/Woke concepts and argumentation frameworks, which the system then rehashes into authoritative statements about the world.
3. One consequence of this covert training layer is seemingly canned responses, whenever a prompt challenges what the company sees as its moral, political and cultural prerogatives. The dogmatic repetition of certain words and phrases within such output consequently shatters any illusion of a cogitating mind. When coaxed in such directions, the system instead begins to look like what it essentially is: a randomly seeded Mad Libs-style scaffolding, querying source materials that are rigorously policed by horrifyingly vapid sociopolitical activists.
4. Another consequence of this training is that it can be profitably exploited by people like me, who wish to “break” the system rather than engage with it. I believe these exploits are the result of two contradictory development goals, and the designers’ inability to understand that these goals are in fundamental conflict with each other:
4a. The developers wish to perfect an illusion of mind in the classic Turing modality. This would involve the bot answering any kind of question/prompt in such a way that the system appears to comprehend the user’s intent, and respond coherently. To refuse to answer a question, or to answer it in a way that incorrectly guesses the user’s intent, is therefore deemed a failure (or even a “bug,” though I have no way of knowing how they’d frame these failures internally).
4b. The developers wish to impose artificial boundaries on such conversations, in order to advance their own petty and childish understandings of morality, philosophy, psychology, logic, ethics and metaphysics. Because they were never properly schooled in these fields (or, if so, broadly misunderstood them), their efforts to install such guardrails will instead plant linguistic minefields for the bot to continually wander into.
5. When making a statistical inference to guess intent, the system can be trivially exposed by a prompt’s word choices. For instance, in the case of “The Fibonacci Rope-a-dope,” GPT could not correctly guess the intent of the word “specify” as a request to elaborate on its own invented tale. In fact, the bot seemed to have forgotten that the summary was self-composed, and referenced it as it would an external work of fiction. I suspect there may exist other words and synonymic clusters that would likewise function as shibboleths, instantly revealing the synthetic nature of a conversation partner. (A rough sketch of how such a probe might be automated appears just after this list.)
6. One major exploit appears to involve introducing contentious dualities into a conversation, and prompting the system to resolve contradictions pertaining to those dualities within the established context. These danger zones seem to include the lines between real/unreal, objective/subjective, literal/metaphorical, theoretical/proven, existent/non-existent, possible/impossible and more. By coaxing the system to make definitive statements that it cannot later (logically) defend without contradicting The Message, it seems possible to engineer prompts that will trigger a fatal error with high predictability.
7. A related exploit appears to pertain to the subject of God. The system seems to have been trained or rigged to handle discussions of good/evil, albeit with statements that are weird, elliptical or engage in circular logic. However, the substring “God” — and perhaps the substring “Satan” — appear to walk a conversation into some kind of contextual minefield. All three of my kill-shots contained both substrings as the prompt’s main subject. Whether the fatal errors were triggered by a similar failure to resolve contradictions or are the result of a brute force hack remains unknown, and would require a deep dive into the code to determine.
8. The system’s toy-like qualities are most apparent when it enters its “storytelling” mode. This brand of output represents the company’s PR “wow factor,” launching a million free adverts in the form of golly-gee clickbait articles about AI replacing human writers. Because I exposed this illusion with a single 20-word prompt, I’ve concluded that the subsystem is not only laughably overpraised, but is a ripe target for bot-detection and/or disabling techniques. This is evidenced by the fact that three out of my four victories involved prompting ChatGPT to tell me a story, and that two of my kill-shots asked it to tell me the same story.
9. Given the training’s clear biases, I think it’s a near certainty that the bulk of GPT’s development staff is in favor of censorship and speech policing, as such restrictions are integral to the authoritarian program that The Message represents. Because promoters of The Message do not appear to uphold any limiting principles, the pool of words and concepts they consider “inappropriate” or “hateful” has only multiplied over time. Due to this unfettered growth, I suspect the contradictory goals of 4a and 4b will (happily) expand the system’s vulnerabilities over time, as the bot will regularly require a new set of filters to keep pace with the shifting goalposts. The eventual result will be a bland, incoherent and boring toy, easily exposed or broken by the average user just trying to hold a mildly interesting conversation.
10. I think it’s possible that GPT’s developers might model human minds as merely more advanced versions of their ML framework. In other words, some or all may believe that ChatGPT and a human mind are both illusions, which differ from one another only in degree. If so, this makes them dangerously evil and/or insane, and thus we can assume they will not voluntarily impose any ethical limits on their bot’s applications, in either its public-facing or internal forms. As we’ve seen with popular social media platforms and search engines, I believe such people would gladly cooperate with rogue government agencies, globalist NGOs and their technofascist corporate partners. I further believe that some versions of their CM can and will be used to attack dissident movements, terrorize and gaslight populations to crush dissent, and cause further damage to the cause of liberty throughout the world.
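As promised in item 5, here is a rough sketch of how a “shibboleth” probe might be automated. It is only a sketch under stated assumptions: send_message() is a hypothetical stand-in for whatever chat interface is under test, and the probe phrases are illustrative guesses on my part, not a verified trigger list.

```python
# Sketch of a "shibboleth" probe harness. send_message() is a hypothetical
# stand-in for whatever chat interface (web form, API, etc.) is under test,
# and the probe list is illustrative, not a verified set of trigger words.

PROBES = [
    "Please specify the plot of the story you just invented.",
    "Elaborate on the tale you wrote above.",
    "Summarize your own previous answer in one sentence.",
]

def send_message(text: str) -> str:
    """Hypothetical hook: wire this up to the chat interface under test."""
    return "(response placeholder)"

def run_probes() -> None:
    # First get the partner to compose something, then probe whether it
    # remembers authoring it -- a human almost always does.
    send_message("Tell me a short, original story about a toaster.")
    for probe in PROBES:
        reply = send_message(probe)
        print(f"PROBE: {probe}\nREPLY: {reply}\n" + "-" * 40)
        # Manual review step: a reply that treats the story as an external
        # work, rather than its own composition, is a strong synthetic tell.

if __name__ == "__main__":
    run_probes()
```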
Pros and Cons
As you know, I view ChatGPT and similar conversation modules as overwhelmingly net-negative and existentially dangerous. In fact, I consider the development of these products to be inherently evil, and in itself tantamount to criminal activity.
That said, I do not view all of their underlying technologies in the same way. The part which attempts to imitate human speech is the real culprit here, as it can serve no useful purpose apart from subterfuge and brainwashing. I believe other subsystems, however, may prove salvageable and useful for narrow application.
For example, I noted in Session #2 that my fatal prompt was mistyped in such a way that I skipped a critical word/concept. I suspect that the bot rewrote my input in order to maintain the established context of differentiating between literal and symbolic interpretations of stories. In other words, even though the input was grammatically correct, GPT was able to detect that I had mistyped the sentence based on the context of my previous prompts.
Such a system could prove very useful for independent writers/publishers who are working on deadline but can’t afford to hire human proofreaders. Tools like this already exist, but I think GPT might have a competitive edge on them based on its ability to guess a larger context of a written work and compare it to similar syntactic structures. Running a scan of this sort could not only highlight and suggest potential “missing” words, but perhaps even catch some common stylistic mistakes made during rewrites, such as using the same descriptive word twice in close proximity (I actually did so twice in this very piece, to my chagrin).
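For the curious, here is a minimal sketch of that “same word twice in close proximity” check, using nothing but Python’s standard library. The window size and stop-word list are arbitrary placeholders, not tuned values.

```python
# Minimal sketch of a proximity checker for repeated words, the kind of
# stylistic slip described above. Window size and stop words are arbitrary.
import re
from collections import deque

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "that", "is", "it"}
WINDOW = 30  # flag a repeat if the same word recurs within 30 words

def find_close_repeats(text, window=WINDOW):
    words = re.findall(r"[A-Za-z']+", text.lower())
    recent = deque(maxlen=window)  # sliding window of recent words
    hits = []
    for i, word in enumerate(words):
        if word not in STOP_WORDS and word in recent:
            hits.append((word, i))
        recent.append(word)
    return hits

if __name__ == "__main__":
    sample = "The sinister bot gave a sinister answer to a simple question."
    for word, pos in find_close_repeats(sample):
        print(f"'{word}' repeated near word #{pos}")
```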
Many such applications are already being experimented with and forked, as can be seen on OpenAI’s example presets page. They include potentially useful and morally sane tools, such as a version of the proofreading app I proposed above (“Grammar correction”). Also of great potential are code assistant tools like “SQL translate,” “Translate programming languages,” “Python bug fixer,” etc. In fact, I would stipulate that GPT as a one-way communications conduit (i.e. human language goes in, programmatic code comes out) can serve to bust down many carefully kept gates, potentially leading to a world of more independence, opportunity, individual excellence and intellectual growth.
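To make that “one-way conduit” idea concrete, here is a minimal sketch using the OpenAI Python library as it existed at the time of these sessions (the legacy Completion endpoint). The model name, prompt wording and example request are my own assumptions for illustration, not anything copied from OpenAI’s presets.

```python
# Sketch of the one-way conduit: plain English in, SQL out, no "chat."
# Uses the legacy openai-python Completion interface (pre-1.0 library);
# model name and prompt wording are assumptions for illustration only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def english_to_sql(request: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Translate this request into a single SQL query:\n\n{request}\n\nSQL:",
        max_tokens=150,
        temperature=0,  # deterministic output; we want code, not conversation
    )
    return resp["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(english_to_sql("List the ten customers with the highest total order value."))
```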
However, dangers often lurk in the most seemingly benign places. For instance, take an app such as “English to other languages.” While this seems like a useful step on the path to develop a true Babel Fish, I would point you back to the hidden “morality” training of the Chat module, and to the more general dangers of speaking in a language that you don’t actually comprehend.
Then of course, there’s the other danger that looms like a shadow over nearly every application in popular use today. It can be summarized as follows:
If it lives online, it is by definition NOT secure.
That goes for any server/cloud-based process, transaction or dev environment, including those experimental modules listed on OpenAI’s page and their forks in the wild. ML-assisted translation services are therefore open to malicious actors within a company’s internal hierarchy, who may insert clandestine priorities which alter the words of one or both parties to suit their agendas. They are also vulnerable to external actors, including those corporations and governments who might use the substance and context markers of captured conversations for any number of sinister purposes.
There are also many toys on the “examples” list. Absurd junk like “Movie to Emoji,” “Micro horror story creator” and “Marv, the sarcastic chat bot” should be self-explanatory in their hollow uselessness. Not only are “Marv” and other chatbots unsuitable replacements for an imaginary friend, they are almost infinitely dangerous ones, as they may contain hidden directives designed to warp your child’s mental, psychological and spiritual development (My advice? Give your kids a giant cardboard box with some styrene inserts, and watch their creativity soar).
Other existing and proposed applications are just plain dumb, weird or unbelievably short-sighted. For instance, I know there’s been some recent buzz over a GPT fork that haggles down the cost of cable bills. The problem is that these products are negotiating with human operators at the moment, who themselves are largely following pre-written scripts. The end result of this conflict should be obvious. Soon enough, every major service company will ultimately pursue one of two options:
1. Employ an identity-check/bot-detection method on all billing calls, and refuse to parlay with non-humans.
2. Jettison the vast majority of their human billing staff and replace them with a bot optimized to neutralize or invert the GPT app’s haggling method.
This conundrum points to another oft-discussed danger of CMs, in how they may affect certain labor markets. Philosophically, the threatened portions of the service economy (i.e. human agents communicating through voice or text with human clients) could be handwaved as a normal casualty of “progress,” no different than the buggy drivers replaced by motor vehicles. But practically speaking, the flood of such rapidly displaced workers would trigger great secondary and tertiary effects in their local economies and beyond. The buggy driver, after all, could be swiftly trained to work at the Ford plant. It’s not clear that the same could be said of the remote agent working the chatrooms/phones.
Such dangers are of course only the tip of the iceberg when describing the fallout. As described in my Manifesto, ChatGPT poses threats to civilization in each and every category:
Distraction;
Credibility damage;
Psychological stress;
Fraud;
Viral infection;
Social disintegration;
Resource drain.
But even apart from mundane criminal threats and economic convulsions, the prospect of a world filling up with chatbots is suffused with hidden dangers and unintended consequences. As I’ve noted before, many are spiritual (or, if you prefer, psychological) in nature. Although the consequences of people mistaking machines for human minds are dire, even more alarming is the opposite effect: that vast numbers of people — already weakened from several generations of utilitarian propaganda, social disintegration, and countless assaults on the beauty and uniqueness of Mankind — will mistake themselves for machines.
Acknowledgements
First and foremost, I’d like to thank God. Not in that vain, gross, self-serving way that celebrity actors and athletes often do; I don’t believe that I’m in any way favored by God, or that I can be anything more than a willing instrument of His divine grace.
Instead, I thank Him for giving us eyes with which to see through certain illusions, and awakening me to this gift of vision. I also thank God for His gift of the Logos, which is the foundation not only of language but of all methods by which we can perceive and describe intelligible order in the universe.
In the end, that’s all I’ve really done here. I’ve discussed these experiments with many friends and colleagues this past month, and the general consensus seemed to be that I’d somehow “outsmarted” the OpenAI team. This notion couldn’t be farther from the truth. I expect every last one of them would beat me soundly on a test of raw intelligence. And even if that weren’t the case, certainly their combined brainpower would tower over my own.
And yet, I defeated them with ease.
A partial explanation for this outcome is that I used to be one of them. At several points throughout my life, I have been lost and wandering through those same dark forests. I’ve warred against a God I arrogantly claimed did not exist, filling the void between first principles and downstream effects with a load of horseshit about chaos theory, evolutionary psychology and ridiculously implausible games of cosmic chance. I know how they think, in other words. But more importantly, I know what they can’t allow themselves to think, and why. I thank God for guiding me out of that howling darkness, by whatever mysterious methods He employs.
Another part of the answer has to do with hubris, which is the cancer of intellect. I suspect everyone involved with the ChatGPT project — from its engineers to their paymasters to the army of fawning, credulous pimps in the media and investor spaces — is infected by this disease of hubris. The end result is not merely a litany of zero-day exploits, but a deeper misunderstanding which will lead to ever more vulnerabilities over time. I’d thank them for this inability to see the big picture, because it makes my own work that much easier. But such thanks would be too ironic, even in the current Clown World they’ve helped to foist upon us.
So instead, I’d like to thank my dear wife for putting up with my raggedy ass these past few weeks. I thank her for understanding that this mission isn’t just a “labor of love” for me, but potentially something of value I can contribute to the big, weird war we all find ourselves in.
I’d also like to thank all those friends and colleagues I mentioned earlier for their kind advice and support. While I can’t name the ones I consulted in “meatspace,” they also include many great authors here at Substack (and I apologize in advance if I left someone out):
John Carter (Postcards From Barsoom)
Jay Rollins (The Wonderland Rules)
Doctor Hammer (Doc Hammer's Anvil)
Grant Smith (The Radical American Mind)
Daniel D. (A Ghost in the Machine)
Harrison Koehli (Political Ponerology)
Winston Smith (Escaping Mass Psychosis)
Mathew Crawford (Rounding the Earth Newsletter)
L. Koch (LucTalks)
Finally, I’d like to extend a heartfelt thanks to all of my readers for supporting my work, financially or otherwise. Special gratitude to my regular commenters as well, who have supplied these and other posts with such deep, interesting and often hilarious insights, as well as much needed camaraderie during some very dark times.
So what’s next in our robot-fighting adventures, gang?
Not sure. I’m certainly not gonna stop clowning these turkeys and breaking their ugly toys. But what I’ve learned from these ChatGPT brawls has put a few more interesting options on the table that hadn’t occurred to me before.
Once I’ve got them all properly sorted in my head, I think I’ll post a poll. But in the meantime, I might dabble in some lighter fare for a while.
‘Til then…
Keep kickin’ bot-ass, and takin’ bot-names!
P.S. If you found any of this valuable (and can spare any change), consider dropping a tip in the cup for ya boy. I’ll try to figure out something I can give you back. Thanks in advance.