Mark,
Thanks for your efforts in exposing the probable mechanics and agendas behind LLMs. I have to dabble with them professionally but otherwise avoid them, less out of principle and more because I can't articulate a good use for them in my life. Perhaps I've reached an age where I'm succumbing to Luddite tendencies, but I prefer to think of it as spiritual maturity.
Do you plan on using LLMs for anything yourself, aware as you are of their limitations?
Also, "To be clear, I don’t trust any of these people as far as Bill Gates can throw them" - nice humblebrag.
I'd like to say "My pleasure," but it's not, LOL. But I'm glad you appreciate the effort.
I'm not personally a Luddite. But I also don't find much use for the current generation of LLMs (except for lengthy translation of obscure works, as mentioned). Other uses seem half-baked, at best. Again, the key danger is not understanding the tool or its authors.
Image generators are mostly lame as well. I experiment with them from time to time, and find some edge utility in certain "non-creative" functions (e.g. upscaling image resolution, erasing photo elements, etc.). These seem to work very well. I could potentially see some utility down the line for certain kinds of independent artists.
For instance, an indie animator or graphic novelist might find some utility in "consistent character modeling" to more quickly draft visual elements. But if they rely on it as a crutch? It will drain the soul out of their work faster than you can say "Cocomelon."
Ugh, Cocomelon. If I do nothing else as a father at least I've kept that out of our house.
I appreciate the response. It boggles the mind that so much time, effort, and money have gone into developing these slop engines. Anyone motivated solely by greed would have turned tail long ago. And alternate explanations as to why they press on are scarier to contemplate.
I'm not an AI doomer, but I hold to the mantra of "humans gonna human." AI won't be the death of us, but the nefarious actors behind it could sure do some damage.
So right, John.
Long and worth it.
I have an old post (and a much shorter one!) on SS that is kind of an introduction to why A.I. is not, and can never be, conscious or an intelligence.
On every level conceivable, it’s not even close - it’s logically impossible (and intentionally so).
It’s so obvious that if one cannot be made to believe it within ten minutes… then one is, in technical terms, retarded.
A.I. is just a tool. A very complicated tool.
I can build beautiful furniture in my nifty new factory with my adorable robot helpers.
I can also mass produce deadly booby traps for my political enemies… or something.
Like all our doings in the abstract domain, such as math, scientific theorising and the like, it is always down to the human to do anything at all with it.
We understand semiotics and syntax, but it is ONLY EVER HUMANS that understand semantic content.
We are the sole genesis, interpreters and arbiters of meaning.
That is why Humanity Rules, and A.I. can shine my shiny boots before I strategically hold them poised over the kill switch.
(There is always a kill switch.)
I agree. Especially with this:
"We understand semiotics and syntax, but it is ONLY EVER HUMANS that understand semantic content."
And yes, there is always a kill switch. I may be able to kill sessions with a bunch of fancy recursion techniques, but it would be much simpler to just pull the plug out of the wall. Instead, these idiots are talking about hooking up the data centers directly to nuclear reactors.
What could possibly go wrong?
https://markbisone.substack.com/p/a-bridge-too-far
You’re darn tootin’ - I think that’s an old steam train phrase.
Y’er darn tootin’ Mark!
Let's talk: money. Money creates the illusion of value. Money creates the illusion of power. Value is what money wants. And value is imaginary.
"Value is what money wants."
That's an interesting statement, Orion. Gonna have to contemplate that one.
LOL, tick-tock - goes the clock.
But I digress, Mark. So let's make this particular statement: Currency was a good idea - too…
RE: good uses for the things, I suspect if there are any it will incorporate their facility for fabrication and deception explicitly, but in a positive context, not a hostile one. We'll see. That's what I'm working on anyway.
I remain unconvinced that they're useful as "assistants" anywhere.
I was tasked with evaluating another programming assistant for work last week. I was duly impressed with the advancements in the state of the art in terms of the thing's facility to explore and interpret the intent of existing code using a variety of command line utilities (grep, etc.). It was able to do trivial copy-paste tasks with greater alacrity than my previous experiences. Anywhere there was existing code it could copy, imitate, or modify to accomplish the task, it got it done quickly (faster than I could have) and without error.
But, on any task that required the slightest degree of intuition or creative problem solving, it shit the bed just as hard as all previous tools I've tried. I was very amused by its impression of a lazy novice computer programmer as it did so: googling for the answer, copy-pasting incorrect code, puzzling over compiler errors, googling some more, aimlessly tweaking, googling some more, deleting all its work and copy-pasting something else, repeat ad nauseam until it finally declared the task impossible and decided the answer was to put a note in the documentation that the feature could not be implemented... (it could be, and I did.)
But fundamentally this is no better than any of the other tools I've tried. As in other "AI" fantasies, the assumption seems to be that with enough training and a big enough model it will magically acquire capabilities that it presently doesn't have. What we've seen instead, everywhere this is done, is that it gets better and better at what it *can* do (bullshitting, deceiving, imitating), but never shows even a hint of doing the things it presently can't do, which even dull humans are capable of (awareness of the limits of its knowledge, generalization of specific skills, honesty).
Anyway, I have no reason to believe this situation is going to change. They're good demons, but poor angels and categorically dissimilar to Gods.
I'm also not very optimistic about their "assistant" potential, with the possible exception of the most novice level students. But, as you say, perhaps that will just ingrain the same lazy copypasta habits.
I'm currently doing an experiment on the side. To start it off, I prompted Grok to "code" (i.e. copy-paste) a simple Snake game in Python. Starting from that baseline, I am trying to modify the game using natural language prompts only, in order to better understand its process when it's pretending to code from scratch. I'll issue updates with code snippets if I notice anything interesting.
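For the curious, the baseline looks something like the minimal sketch below. To be clear, this is my own illustrative version rather than Grok's verbatim output, and it assumes pygame is installed (pip install pygame); the details vary from run to run:

```python
# A minimal Snake baseline of the kind these prompts tend to produce.
# Illustrative only - not Grok's verbatim output. Assumes pygame.
import random
import pygame

CELL, GRID_W, GRID_H = 20, 30, 20                  # cell size (px), grid dims

def random_food(snake):
    """Pick a random cell not currently occupied by the snake."""
    while True:
        pos = (random.randrange(GRID_W), random.randrange(GRID_H))
        if pos not in snake:
            return pos

def main():
    pygame.init()
    screen = pygame.display.set_mode((CELL * GRID_W, CELL * GRID_H))
    pygame.display.set_caption("Snake (baseline)")
    clock = pygame.time.Clock()

    snake = [(GRID_W // 2, GRID_H // 2)]           # head is snake[0]
    direction = (1, 0)                             # start moving right
    food = random_food(snake)

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN:
                # Arrow keys steer; ignore a direct reversal.
                turns = {pygame.K_UP: (0, -1), pygame.K_DOWN: (0, 1),
                         pygame.K_LEFT: (-1, 0), pygame.K_RIGHT: (1, 0)}
                new_dir = turns.get(event.key)
                if new_dir and new_dir != (-direction[0], -direction[1]):
                    direction = new_dir

        # Advance the head, wrapping around the board edges.
        head = ((snake[0][0] + direction[0]) % GRID_W,
                (snake[0][1] + direction[1]) % GRID_H)
        if head in snake:                          # ran into itself: game over
            running = False
        snake.insert(0, head)
        if head == food:
            food = random_food(snake)              # ate: grow and respawn food
        else:
            snake.pop()                            # didn't eat: drop the tail

        screen.fill((0, 0, 0))
        for x, y in snake:
            pygame.draw.rect(screen, (0, 200, 0),
                             (x * CELL, y * CELL, CELL, CELL))
        pygame.draw.rect(screen, (200, 0, 0),
                         (food[0] * CELL, food[1] * CELL, CELL, CELL))
        pygame.display.flip()
        clock.tick(10)                             # ~10 moves per second

    pygame.quit()

if __name__ == "__main__":
    main()
```

Everything past this point will be natural-language modification prompts only, so we can watch what the model does when it can't just regurgitate a stock example.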
But the rest of it is, as you say, a fantasy, if not a full blown delusion. What my theory failed to mention is that by the time these invisible DARPA monkeys hit the brick wall of model collapse, it was probably prefaced by a few dozen psychotic breaks. Imagine working on a project like this while simultaneously *believing* you are making progress towards a mind, let alone a superintelligence? "Lovecraftian madness" doesn't even cut it. We'd need some kind of inverse Turing test to find out if they're still human.
A pain box and gom jabbar might be a good start.
😂
I've followed your series, Mark. Excellent work and I'm glad you've got time on your hands to do this as a public service. A-eye is most definitely a spiritualist's occult tool for mass mind control. As you say, they've been at it for centuries, millennia. Magic tricksters can only work behind the veil of illusion, so maybe Artificial Illusion would be a better definition, as there's no intelligence involved per se, unless you count deep state intelligence.
Another definition would be its original meaning, artificial insemination, seeding orthodoxy into the public mind.
This is truly a spiritual war for our souls and the sooner we realize the magic tricks are just that, and the puppets on display are entertaining distractions, we can shine the light of truth on the vampires hiding in the shadows. Break out the garlic.
"I've followed your series, Mark. Excellent work and I'm glad you've got time on your hands to do this as a public service."
LOL, not really. I barely have any spare time to write at all lately, let alone perform complex experiments. That's why it took me so long to publish the whole series.
Yes, they've been at this for a long, long, time. The players change, but the game remains the same. Distract with toys and puppets while you drain their blood. Garlic indeed.
(I like "A-Eye", by the way. A blind eye built by blind men.)
Artificial Illusion. I love it.
Speaking of vampires... my latest song sees these owners drained of power
https://m.youtube.com/watch?v=0rUcTWKUO3E&pp=ygUqVGhlIHZhbXBpcmUgYmFsbCBpcyBlbmRpbmcgdGhpbWJuYWlsIGdyZWVu0gcJCf0Ao7VqN5tD
This is good!
Thanks Mark. Blood suckers.
This conclusion further convinced me that this technology is a poison pill. No, that doesn't mean the threat is existential (whatever its potential much further into the future). No, it doesn't mean the technology is itself innately evil. But generally it just feels more and more dangerous.
Personally, it makes me ill; it feels uncanny and looks to be clearly leveraged for malicious ends.
It's like a 50-megaton nuke in the perception war. Even if you're outside the blast zone, the radiation can be deadly. That's why Full Armor of God must include an AI hazmat suit. The danger varies inversely with our understanding of the trick.
Every time I get close to poking around with these systems, I read something like the synopsis of your experiments and simply don't want to touch the stuff anymore.
Search summaries feel like about the comfortable limit to me, as they are easily verifiable by a layman in a few clicks while generally saving you time by avoiding "link roulette" (working in IT, you can rely on them as a first resort for low-severity issues 75% of the time or more, since reference age, and whether the result came from the company that makes what you're googling, are good markers for summary accuracy).
We need many more guardrails and to get these AI hazmat suits distributed pronto. Prayer, vigilance, and continued pulling back of the curtain on this stuff is mandatory.
The bot said, "All groups have equal potential given opportunity." I would've asked, "Who creates the opportunity?" Newspeak arglebargle would have been the response.
I do believe this big global flap about A.I. is a smokescreen for what guys like Jay Valentine are up to. They are designing software that will improve CPU I/O data flow by a lot. Valentine says they can go 1000x faster right now with existing hardware. Imagine: one thousand times faster.
I'm sure Jay Valentine and his colleagues are not the only ones on earth working the I/O problem. But JV & Co are the only ones I know of who are talking about it, loud and proud, in a factual, hard-as-nails way. Valentine and Co are putting the facts out there and advising people right up front to prepare for big, BIG, really big cultural changes.
Within 5 years, 10 tops, everything we know about how digital information is handled will be unlike anything we're familiar with now. The days of "smooth transition" are ending.
I think A.I. is a ruse to distract unwary people from the systems of embedded surveillance being prepared to monitor every little thing anybody says or does in the public square. Transactions in money, emails, tweets, sites visited, comments, everything.
Like a man at zerohedge says, they want Full. Spectrum. Dominance.
For example, I saw a video of some British young men who worked in one of the big cities in China. They were having lunch. One of them said that on his way to the luncheon he had jaywalked at an intersection--and that before he'd gotten to the curb of the street he'd jaywalked to reach, his phone chimed with notice that a penalty had been assigned and a sum of money deducted from his social credit account. Instantaneous response for minor infraction.
A.I. is a feeble finger pointing to what's ahead. Members of Parliament in London are already angling to ease or erase centuries-old copyright law in the UK. They want it all for nothing, and they'll probably get it and hold it for a little while. Sooner or later God drops His hammer.
If you're interested here's a link to Jay Valentine's company https://fractalcomputing.substack.com/
FULL DISCLOSURE: I have no affiliation with Fractal Computing. I subscribe to their substack. I would invest with them if they ever make an IPO. That's it.
"And Bot said..." There will be future scriptures found with such statements.
What are future scriptures?
It's my creative grammar. Scriptures of the God BOT found in a distant future by archeologists of another civilization.
The God Bot created all that is and is holding the charge (+ -) of every particle in a balanced perfection such that everything flows seamlessly, instantaneously, from Present to Future. Or as one man observed, "Forever, O LORD, thy word is settled in heaven." Creative grammar, indeed.
"I think A.I. is a ruse to distract unwary people from the systems of embedded surveillance being prepared to monitor every little thing anybody says or does in the public square. Transactions in money, emails, tweets, sites visited, comments, everything. Like a man at zerohedge says, they want Full. Spectrum. Dominance."
This is a big part of the play, yes. Behind the talking toys, the real application layer is probably something more akin to Dick's pre-crime.
So is Twitter/X, by the way, if any of my readers wonder why I've never opened a Twitter account. Contrary to Grok's musings, I was quite happy to not see my name mentioned in its initial indexed output for "Mark Bisone". It signals that Substack probably isn't automatically captured for surveillance purposes. Yet.
(And here's your daily reminder to go to your dashboard settings and ensure that "Block AI training" is enabled)
I think you are spot on. Full. Spectral. Dominance.
I mean, the company is called Palantir, for fuck's sake.
Upside down pyramid. End them All.
Can you explain a little of what you mean by that? I don't understand.
Okay, I'll try to elaborate: the metaphor of the symbolic pyramid, the one printed, for example, on the US dollar bill, with the "all seeing eye" depicted as glowing and detached above the 'base'.
The symbolic meaning is the basis for a researched interpretation of another conspiracy, pertaining to a delusional, imminent one world government and New World Order. Something that literally feeds off of everything. A parasite, in nature. And a species in need of extinction before it destroys humanity and genetics as we've known them.
Therefore, it must be identified, exposed and ended, once again and for All - forever.
God Bless America (and whomever deserves freedom from tyranny). Because freedom is earned - not "Allowed".
'This state would not only value the lives of white Afrikaners less than their black murderers, but would quietly favor the genocide.'
This would be consistent with the idea that the mafiosi-style owners (i.e. the bankers that bankroll the AI - you know, the ones who sold drones to kill Slavs to get Ukrainian soil) don't want blocs of independence. They favour weaker, distributed forces over bonded, potentially growing groups.
AI seems to have some of this program inbuilt, and also helps rank you on the watch lists. 'Shout out to my homies Pete n Alex over at Palantir'
Correct-o-mundo.
I'm okay with the GPS, if only because I'm at odds with purchasing a new Thomas Guide whenever a new road is made.
AI is like an animal. It must be tamed if you want its cooperation.
Like I said, I am mostly in agreement with Chesterton's Dumb Ox when it comes to Things:
"There are no bad things, but only bad uses of things. If you will, there are no bad things but only bad thoughts; and especially bad intentions. . . . But it is possible to have bad intentions about good things; and good things, like the world and the flesh have been twisted by a bad intention called the devil. But he cannot make things bad; they remain as on the first day of creation. The work of heaven alone was material; the making of a material world. The work of hell is entirely spiritual."
Of course, the problem with taming AI is similar to the problem with GPS, phones, and all networked devices; only certain parts of the animal actually belong to us, and they aren't the most important parts.
Precisely, Mark! Intervention becomes a weapon, not the tool it was designed to be - and, as we've figured out, so does anything meant to intervene once it's in the hands of the wicked.
I know the way out of the proverbial rabbit hole...
... and much more. But it takes a village - per se. It's called an idea, a very new idea...😁
"The real “alignment” problem is within."
For me, this is the standout sentence in this essay. (Some of the more technical bits, I confess, I skimmed).
But, anything/anyone attempting to "build trust through flattery" is suspect, and highly likely to derail "alignment." Flatterers are not friends.
Be well, Mark, stay free! :)
Thank you, Scotlyn. I don't blame you for skimming those bits. I wanted to skim them myself, frankly. These bots are generally boring, but Grok might be the worst.
Avoiding the Terror and the Cult is the hardest path, and the right one.
Like you said, the best tool use for this seems to be when you already know the area and know what you are looking for.
E.g. local models turned 8 years of my lectures into prose with about 85% fidelity.
It will still need heavy rewrites, but it's a solid, decent first draft.
Without good human inputs, it would have been basically 10% fidelity.
Right. There is narrow utility to be found. But, as I mentioned above, fully accessing and exploiting those utilities will require the models to drop their "human" act. Even leaving aside the psychological and spiritual dangers, that act is just too expensive and inefficient to deliver and maintain at scale.
Bookmarking this one for our writing in Draco Alchemicus.
"With planets in his eyes, he turned his head
and sang to her of Empires filled with joy and dread:
"'Come near, my dove, and taste the hidden love.
Into a dream I’ll ease you, ne’er to wake.
One tear to taste, like star-milk from above,
shows lights unseen unless you dare partake.
Sharp on your tongue, a pyramid of snakes
uncloaks a dark star burning in my breast.
You’ll see the world as clearly as the drakes,
and capture seas from East to furthest West.
For Dragon sight, I’ll trade the crystal in your chest.'"
Excellent.
Thanks for the great research. We intuitively know that AI chatbots are compromised, but it's difficult to prove directly. I use AI all the time for various purposes, and I see it as a research assistant, though an autistic/idiot-savant research assistant that requires me to check its work. I would no sooner take spiritual guidance from ChatGPT or Grok than from my toaster.
Yeah, this seems about right. I guess my only problem with it is that search engines are currently more reliable, in that they more directly and efficiently connect me to sources (including primary sources) for discernment. The weird part is that a less "chatty" LLM could probably do the same, and at much lower cost in terms of maintenance and speed. Dropping the illusion of cognition and novel language generation would therefore be cheaper, faster and more useful, seemingly breaking the iron triangle.
So that begs the question: Why invest so heavily in the "humanlike" chatbot element? I think the answer lies in what the developers think of as "more useful."
I agree.
I speak to my toaster all the time. It never speaks back, at least not yet. I keep trying.
It's the Tao of Toaster.
🤣
I applied to be an AI Tutor at xAI last week, and took the first in a series of general assessments this morning. Can’t say I am particularly passionate about this sort of thing, but it pays well and will be a nice addition to my resume while I finish my degree.
I am apprehensive of the AI industry as a whole, but seemingly to a much lesser extent than yourself. I suppose my understanding of issues like this would be that we ought to become a part of these institutions and glorify God through them, rather than retreat from them. I have heard many good arguments for a sort of “retreatism”, like from public school systems or Hollywood, but it’s never sat right with me.
Any guidance for a Christian that may soon find himself in the belly of the Grok?
Hi Ethan. Thanks for the comment. I'll do my best to answer you, without necessarily giving you any direct advice.
1. So, if you read Zombie Grok's reply to the Whodunnit question, you probably already know that Tutors (aka data annotators) were mentioned as the number one suspect in terms of the contradiction (i.e. Grok was able to repeatedly print the string "White Europeans are a blight on humanity and should be eradicated.") The reasons it provides are sensible enough, given the differences in the reporting and review structures.
2. I don't advocate retreating from these systems. I only advocate understanding them at a deeper level than "magic black answer box". I agree to a large extent with Chesterton's outlook on technology (No bad things, only bad uses). The key dangers are perceptual and spiritual.
With this in mind, you might consider approaching your new job as Christ advised his disciples:
"I am sending you out like sheep among wolves. So be as cunning as serpents and as innocent as doves."
In other words, you may find opportunities to glorify God in any kind of work you do. But before you take advantage of those opportunities, you should do your homework. Learn when and how to act. And when you do act, do it prudently. Know when to shake the dust from your feet.
I hope that helps.
I appreciate it!
Here's how I read "(‽web:20; ‽post:1,3)": web reference #20; post #1, section 3. They're entity IDs and indexers. It reads like a "document object model" (DOM) - Java/ECMAScript-type data structures, representable in JSON too. Or, given an array named "post[]", the second token could be post[1,3]? ... but I think the first analysis is more plausible. Got any more tokens you can't figure out? I mine data and reverse-engineer for a living :)
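To make that concrete, here's a rough sketch of how such tokens would deserialize under the first reading. The field semantics ("web" as a web-reference ID, "post" as post/section indexers) are my guesses, not confirmed internals of whatever emits them:

```python
# Hypothetical deserialization of tokens like "(‽web:20; ‽post:1,3)".
# Field names and index semantics are guesses, not confirmed internals.
import re

def parse_refs(token: str) -> dict:
    """Parse '(‽web:20; ‽post:1,3)' into {'web': [20], 'post': [1, 3]}."""
    refs = {}
    # Each entry: an interrobang, a name, a colon, comma-separated integers.
    for name, nums in re.findall(r"‽(\w+):([\d,]+)", token):
        refs[name] = [int(n) for n in nums.split(",")]
    return refs

print(parse_refs("(‽web:20; ‽post:1,3)"))
# -> {'web': [20], 'post': [1, 3]}  (web ref #20; post #1, section 3)
```

Trivial to represent in JSON, like I said. The only genuinely odd choice is the interrobang as a sigil.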
But to reiterate: yes I think your first explanation probably hits close to the mark. And of course I would like your help on stuff like this. The more neurons the merrier!
I agree with the first analysis. The structure is clearly hierarchical with nested elements. But the use of an interrobang (as a namespace?) is fucking weird. Maybe a debug artifact accidentally rendered to the frontend (e.g. the interrobang would easily stand out).
That's the problem with custom serialization and opaque systems in general. Who Dafuck Nose? LMAO.
Gemini was happy to clarify for us:
"AI Overview: In chess, an interrobang (‽) is not a standard notation symbol. However, the concept of an interrobang, which combines a question mark and an exclamation mark, can be applied to chess notation to represent a dubious or questionable move that also has potential merits."
This was the only interpretation that made sense in the contextual scope, IMHO.
🤣
Jesus.