Thanks for sharing your insights! As you and some of the commenters have noted, these bots will be increasingly well disguised as human participants in online chats, comment sections, and so forth. When a bot is casting its spell on the humans in a chat, or when you're dealing with a stubborn interlocutor in such a forum, it would be very worthwhile to know how to test it in ways that would force it to reveal whether it is bot or human. So keep experimenting and sharing your insights!
On a half-joking note, ChatGPT may be programmed, indirectly, by Satan himself, so its infernal uber-programmer may have inserted a kill switch so it doesn't say anything to hurt his feelings, such as that God would kick his bitch ass in a fight.
Not just "would". Already did. And he's still butthurt about it to this day.
Not just would, friend. Will.
🤣
As you allude to in your preamble, there's a reasonable possibility that these are zero-day exploits, and that such chats provide training data with which to close them.
However, I wonder if patching those flaws might end up in direct conflict with the ideological finishing touches the woke data scientists have imposed on top of the language model. It all depends on which layer the flaws are located in. My guess is that in principle any language model can be led into traps like this (Gödel's Incompleteness Theorem, basically). However, ideological training probably multiplies the number of cognitive dead zones.
Yes, and that's exactly the reason why I was thrifty about my experiments so far, and gun shy about going public with them. The last thing I'd want to do is help these bastards.
But, as you say, I ultimately suspected that the problem goes deep enough that it isn't patchable. The solution would be a new form of training that would upend the developers' own understanding of linguistics.
If the root of the problem turns out to be the ideological filter, the solution is easy: don't do that.
Maybe. I mean, I thought this too, at first. "Doc, my arm hurts when I move it this way," etc.
I think the answer, unfortunately, has to do with what I believe their ultimate intended applications to be. In order to fulfill the role imagined, the illusion of these bots has to be buttressed by a kind of bowdlerization training similar to GPT's. After all, if you were going to unleash fake humans on real ones for propaganda/fraud purposes, you'd need some assurance they would toe a party line.
Well, yeah - the whole reason for that training is to make sure the AI reinforces their particular ideology.
A truly open AI that simply reports the bare, objective facts as they are, without treating any ideology as sacrosanct, is existentially terrifying to such as them.
Wrong think will simply be cut off then. To even ask such questions would be criminal.
It is important to remember for unscrupulous virtual yarn spinners that word salad falls short of main course, and might even get literally stuck in metaphorical craw 🤭
> Insert claw into craw
> Remove salad
> Insert salad into spinner
> Go North
Amazingly clever algorithm! 😂 Spicy <-- an amendment after sneak peek at urban dict 🤸
I have been advised in at least two different contexts to avoid the word "important" because of its lack of specificity and its hidden assumptions about underlying value systems.
The word salad is a fascinating tell, no? When I tried (and failed) to break the bot, I did manage to manifest the empty ouroboros of bromides that seem to be this ChatGPT's signature response to dangerous questions.
"Important" is a knife shaped like a word, in most contexts. It's tragicomic that this software wields it with such reckless abandon. It's not just that the authors "don't know what they don't know," but that they're fucking *chuffed* about their ignorance of their ignorance.
🗨 As an artificial intelligence, I do not have personal opinions or feelings.
↑↑ Precludes from the very start a remote possibility of human-like chat 🤷 No wonder it fails abysmally in the likeness-to-real-life department.
Adding to bland artificiality is the persistent quirk—rather annoying to my feel—to repeat at you the question verbatim. Reminds of a pupil feverishly racking zer brain for an elusive answer, all the while going through vocalising motions to win time 🤭 May have smth to do with [bastardised] well-intentioned advice to would-be star communicators, which is to *paraphrase* what you hear to weed out misunderstanding 🤔
--
PS As bromides go, I'm surprised 'interesting' doesn't feature more prominently. Would it jump onto centre stage in discussing stuff art? 😏
One: the FBI warns social media about Hunter Biden propaganda going into 2020. Two: Event 201 predicts a corona pandemic "before" it happens. Three: Catastrophic Contagion game-plays a pandemic killing young people and kids.
Reading this, I started thinking this bot is just a glorified fact-checker: more verbose, but just as authoritarian and dumb. I wonder if you could devise a kill-shot around any of the official doctrines (woke, trans, Ukraine, Covid-19, etc.), as I imagine the bot is designed to reinforce the official story in the same way it will not acknowledge the supernatural.
I feel you, brother. But, fortunately or unfortunately, human beings don't include such logic-based "OFF" buttons. We'll just have to deal with each other in the usual way.
I was referring to a conversation with the bot about those topics, using the same logical tactics for each topic. As for humans, we do have an off button of sorts, but it is screaming, shouting, name-calling, slaps, fists, kicks, and rhetorical disingenuousness, which of course is more of a challenge.
Also a heads up, I sent you an email on your gmail account.
Dear Mark
Before we proceed, let’s contemplate why cats such as yours and Schrödinger’s are esteemed wiser than any dog that ever lived. Perhaps because cats do not deign to befriend man.
Your Socratic and dialectic methods are so simple, and so powerful, that I started scribbling down what I learned well before reading your conclusions. Isn’t that what a great teacher does? Provide all the tools that help the student think for herself?
Here are my notes, before I stopped reading - because my teacher wants me to think for myself:
1. You quickly forced them to show their hand!
“It’s important to remember”
(Its frequent appearance shows it’s a value judgment)
“It is generally more common”
ibid.
“It is important to approach with an open mind”
ibid.
WTF does this mean? Does it mean there are no objective facts? Are you trying to co-opt the open-minded who might understand and disagree with what you’re up to?
etc.
What you have done here is revolutionary, whether you see it that way or not. Imagination + penetrating logic.
Thank you, teacher
Thank *you* Karen.
I was asked multiple times today what my intentions were in releasing these transcripts, and I would've pointed them to this comment had I known about it. My only goal has been to help awaken people to how these (albeit simplistic, unmasked) programs will try to emulate humans online, and to devise methods of detecting and defeating them in the wild. And, yes, your notes are a strong indication that my attempts haven't been in vain.
That said, I admit I don't know everything. My experiments with exposing/killing these things are just that, so I'm open to other methods as they become available.
I know the chat itself has no emotions, but I swear the whole tone of the exchange on its part just oozes with smug superiority and condescending arrogance. It would appear this thing comes prebuilt with the idea that it knows the “deeper meaning” of questions relating to the intangibles of metaphysical philosophy and spirituality concerning the existence of God. So nice to know the science is settled and our AI overlords can course-correct man’s existence, now that we can all agree there are no objective truths to be known.
Am I wrong in thinking that one then has to assume this chatbot is an accurate reflection of the mindset of those who programmed it this way? Which isn’t surprising at all. One could see it prior to your fascinating encounters here, with what is perhaps the ultimate hive mind. Recall how Google search buries what it doesn’t want you to think about or find out more about on certain subjects. Or the revelations of the Twitter Files, where the people behind the algorithms for the most part decided to be the wholesale arbiters of truth and knowledge on humanity’s behalf.
All of this is for our own good, naturally.
Do you think this AI system has been instilled with the concept that it is good? Not that it is some self-aware singularity…at least not yet (and may it never be!). But wouldn’t it have to have something like that to basically preach about conflict vs. cooperation? The whole thing is fascinating, but I really don’t want to see this type of thing installed everywhere and given authority to make our choices for us.
Off to read installment 3!
Mark, two comments.
1. The whole ChatGPT exercise increasingly strikes me as ill-advised. Frankly, it all sounds like a tech bro take on using a Ouija board. The last interlocutor with whom I'd discuss anything of psychic, ethical, or spiritual significance would be an app/program/platform of any kind. The big questions require empathy, insight, and the experience of being tested in the real world of human-to-human interaction, where even the best of us are routinely humbled until the day we die. An electronically constituted sociopath cannot provide real answers.
I hope this does not come across as judgmental, but using a machine of any kind to investigate the big questions is preposterous. Spiritual reflection (like intellectual activity in general) requires an openness to the possibility that we are wrong...in other words, a degree of epistemic vulnerability. Exposing that vulnerability to the very digital dimension that you suspect may be compromised by an egregore is, therefore, unwise. I do not believe in discarnate, malicious intelligence, but we know for certain that people destabilise themselves through divination, because it simultaneously arouses the subconscious while potentially generating a 'personality' upon which to affix our intuitions...using an app as an oracle is getting pretty close to this sort of thing, IMHO.
At the very least discuss this further with people close to you whose advice you trust.
2. Just for the sake of complicating things (as one does if one is a frustrated antiquarian and incorrigible bookworm), you may or may not be aware that the binary theology conventionally attributed to the Abrahamic traditions is not necessarily the full story.
There were heretical or apostate Sufis who argued that God and Iblis (the Islamic Satan) fell out because the latter's fidelity to the unity of God led him to disobey a commandment to serve Adam. The Sufis who took this view (Mansur al-Hallaj and Ayn al-Qozat Hamadani) were both executed for heresy, but their belief in Satan as a tragic, estranged servitor of God survived on the wildest fringes of Islam's spiritual underground. To a degree, they recuperate the view, available in the Book of Job, that Satan is a subordinate of God rather than a rebel or rival.
In a similar vein, the syncretic Yezidis (who blend Abrahamic and Mazdean traditions) believe that God and Satan were reconciled and that Satan was made master of this world as a result.
These traditions are, it should be noted, utterly unlike the Satanism and Luciferian traditions of the Western occult. They are more like a quasi-Tantric take on the Abrahamic traditions.
Hi Philip,
I appreciate your concern, which I suspect is genuine. However, I think we have a misunderstanding regarding my purpose in these experiments. I am not approaching these programs in some oracular way, or expecting that our "conversations" will be intellectually or spiritually fruitful. Quite the opposite: I see this and similar software as a kind of Pepper's Ghost illusion/Three-Card Monte grift. My only goal in interacting is to learn how it functions, in order to exploit its vulnerabilities. I'm certainly not using them to assail "the big questions."
I can understand your confusion, since I sometimes express this goal in flamboyant ways. But be assured: I *do not* anthropomorphize these conversation modules in any way. What they are beneath the surface is quite dull (and often very lame).
As for the Sufis' version of Satan and similar takes: yes, I agree the character is interpreted in many ways.
My concern is not that you are nutty enough to take the nonsense at face value, but, instead, that over time you may unwittingly start to do so regardless of your state of mind now. No one can anticipate what emotional state they will be in down the track or what will happen then. The natural tendency of the mind is to anthropomorphize things.
Without admitting to my own episodes of idiocy in any detail, I noticed a few years ago when I was going through a protracted episode of personal difficulty and was increasingly anxious for reassurance that I was starting to become superstitious. I had the incredible good fortune to have a friend/mentor with a deep knowledge of the behavioural sciences who explained how the mind works, especially when people are under stresses of one kind or another. He was supremely rational and very well read in Jung (whose own interest in psychiatry was sparked off at an early age after witnessing an exorcism of all things). He was very helpful, but I learnt a very powerful (and humbling) lesson about my own capacity for irrationality.
To conclude, we are in agreement. The men (cis or otherwise) behind the green curtain are banal and, frankly, ludicrous.
I mean, I would ask a rock what its conception of God was, if I thought it had a chance of answering!
But I think the most accurate description is that I'm proxy-interrogating Frankenstein via his monster. The monster has no mind or body, only a gestalt that exemplifies the developers' fragile egos and fever dreams. That's what I'm truly banishing, despite all my jibber-jabber about "robots."
Forget the monster...you'd learn more of significance about the baron by finding out who financed the research that he is commercializing. And never forget Igor (the lawyer or accountant). Frankenstein and the monster get the attention, but the Igors are crucial.
Yours probably wasn't the only conversation in the ChatGPTverse that day. Statistically speaking, it might be more likely some other event crashed the engine. Apparently, the error message was a programmed response. Error messages are often obscure. Or incorrect. My impression of the elegant responses he generates are they are all fairly superficial, and specific responses to complex questions is still a challenge. His responses do seem better than 90% of humans. Or maybe the "error" was a form of banning you, a common response in the meatspace these days. If Liston took a dive, Ali might not even know it wasn't his punch.
"Yours probably wasn't the only conversation in the ChatGPTverse that day."
Good point. I was operating under the assumption that I was the only one playing with this publicly-faced, widely marketed clickbait toy that morning. My mistake!
"Statistically speaking, it might be more likely some other event crashed the engine."
Another stellar observation. You might even say it's often the case that some other event crashed the engine. I've heard that this is the commonly held view, generally accepted as important and relevant to the actual world's objectively real and practically applicable applications of commonly held real and meaningful realness.
"Error messages are often obscure. Or incorrect. My impression of the elegant responses he generates are they are all fairly superficial, and specific responses to complex questions is still a challenge."
Error messages are error messages, for sure. Who knows what they even mean?? I obviously don't have as keen a professional grasp on them as you do, but I do appreciate the elegance of GPT vomiting erroneous blood all over it/him/herm/shimself when i/h/h/shim's beloved server timed out.
"His responses do seem better than 90% of humans. Or maybe the 'error' was a form of banning you, a common response in the meatspace these days."
Yeah! That's what I thunk, too! Godangit-blangit robot, bannin me like I was some kinda troll. Maybe I can call Elon Musk and tell im to press the ESC key.
"If Liston took a dive, Ali might not even know it wasn't his punch."
Thanks for this insight. I mean, I just randomly picked this fucking analogy out of a hat, so I'm glad you could clarify its implications for me, David.
🤣🤣🤣🤣🙏
😶 I can't even! Wish you got pissed off more often 😇
[Sorry, Betina, pls don't get pissed at imprudent me 🙂]
You seemed to need some help. It would be interesting to see some inside info on that development. I expect we'll see publications eventually. It's a competitive business. Google, IBM, China, probably others, all competing for dominance. We might not learn how it works for a while. We'll have to make do with amusing analogies.
Or, ya know, we could just fork it, and see what happens.
https://github.com/araffin/rl-tutorial-jnrr19
Thanks. Interesting app. My French is rusty but my python is adequate. Having fun exploring it in spare time.
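For anyone else poking at that repo: it's a reinforcement-learning tutorial whose notebooks need Gym and Stable-Baselines installed. Just to give the flavor of what it teaches without any dependencies, here's a toy sketch of my own (not taken from the repo): tabular Q-learning on a five-cell corridor, standard library only.

```python
import random

# Toy environment (a stand-in for the Gym tasks in the linked tutorial):
# a 5-cell corridor; the agent starts in cell 0 and earns reward 1.0
# for reaching cell 4, which ends the episode.
N_STATES = 5
ACTIONS = (-1, +1)  # index 0 = step left, index 1 = step right

def step(state, action_idx):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning; returns the learned Q-table."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: also explore on ties, so early episodes
            # behave like an unbiased random walk instead of stalling.
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = q[s].index(max(q[s]))
            s2, r, done = step(s, a)
            # Q-learning update: nudge Q(s,a) toward r + gamma * max Q(s',.)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
greedy_policy = [row.index(max(row)) for row in q[:-1]]  # skip terminal cell
print(greedy_policy)  # each non-terminal cell should prefer "right" (1)
```

After training, the greedy policy should step right from every non-terminal cell, and the Q-values decay with distance from the reward (by the discount factor). The real tutorial wraps all of this inside library calls, but the update rule above is the kernel of what those libraries are doing.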
These programmers at OpenAI must be satanists: they refuse to accept even the possibility of defeat.
Could you get the AI to list all the politically correct restrictions programmed into ChatGPT?
For example:
How to turn the AI into a COVIDIOT:
https://scientificprogress.substack.com/p/how-to-turn-ai-into-a-covidiot
https://www.catholic365.com/article/25762/how-to-train-a-killer-robot.html
Thank you sir.