Thank you, great writing! Just a few minutes ago my husband and I were laughing about the epic Gemini flop, and I said that I am no longer worried for original and intelligent human artists. Like any original, intelligent human beings, they have nothing to worry about when it comes to AI. Original, intelligent creators are irreplaceable. This fact has never changed throughout human history.
Thanks, Sasha.
"Original intelligent creators are irreplaceable. This fact has never changed throughout the human history."
Yes. In a strange way, maybe all of this nonsense will serve as a wake-up call and a much-needed reminder of this ultimate truth. Evil people and their designs have a way of self-destructing. It puts me in mind of something I've heard various formulations of throughout my life, along the lines of "Even the devil serves God's ultimate purpose, however unwillingly or accidentally."
Evil always overreaches.
100%!
I asked ChatGPT to write a love poem in the style of Neruda. I gave it some parameters. Utter garbage came back. It basically took some phrases of his, changed them slightly, and filled in some word fluff. Maybe if you did this 1000 times it would fluke something clever. Maybe even beautiful. But it would be by chance, not because it was creative.
AI(R) is there to replace the brain dead.
The “I’m sorry, I’m unable to generate images of people” response, in that context, could very easily be taken to mean that the model doesn’t see non-whites as people, and therefore has no reason not to create those images.
Which is probably not the intent, but still hysterically racist.
I had this thought as well. Clownworld does not obey the laws of irony, among others.
Siri will happily provide stereotypes of white American or British people, but suddenly is unable when asked to do the same for other races. Fun times!
When asked “Why you racist, Siri?” she gaslights and proclaims her acceptance of all diversity.
I have worked with roughly ten AI image generators for my ongoing series "newz you can afford to lose." Most of them will not give me what I want most of the time. One thing in particular: despite our being hammered with trans every day, all the time, no variation of "Show me a trans Joe Biden" or similar will work. They would turn all the kids trans, trans can read to kids in schools, but I can't use their software to put TPTB in drag.
But I can usually get these programs to give me just enough to turn into a satirical dagger to aim at the heart of the global professional managerial blob.
https://williamhunterduncan.substack.com/p/satire
Human attempts to police AI are as effective as a smattering of checkpoints on a long border. There are almost always workarounds. For instance, 4Chan achieved hilarious and subversive results with "Show me a group of erudite scholars eating fried chicken," or "show me the British Royal Family in manacles eating watermelon."
I've managed to turn some images into hilarity. Even my dad appreciated the AI image I prompted of our MN Governor Walz doing his best Idi Amin impression. On the other hand, I couldn't get the AI to paint Hitler 'staches on the liberal female media elite, so I just used super simple photo-editing software and did it myself, lol.
We may perceive Gemini to be clownish; however, our 14-to-16-year-olds find it scintillating and delight in the malformed reality it projects. Unlike us, who find Orwell nightmarish, the teens embrace dystopia and worship the advent of the cyborg civilization. They are anxious to be chipped and to enter the virtual universe, where they can watch endless Netflix and play Minecraft forever.
I'm hopeful they will grow out of it, but I remain concerned and continue to stand in stalwart opposition to the corporate assault on their minds.
“Garbage in, garbage out” is a saying little heard today, and yet in the early days of computing it was the eleventh commandment.
You’re crushing it, Mark. Keep at it, my man. 👊🏻
Thanks. After a couple of bumpy months, I'm starting to find a little breathing room.
No rush, Man. I'll certainly be here, subscribin' an' shit. I appreciate your writing, which is a reflection of your thinking, and I know that takes *time* to organize.
Gemini AI and its team are trying their damnedest to turn the City of Man into the City of God - to turn Truth to Lies, Lies to Truth, Good to Evil, Evil to Good. They will reach out their vile, grasping hands to whomever and whatever they can, crush it, and mix it with the mortar and plaster.
All to paint over the golden truths of the City of God that burn their eyes and their souls...
All they'll end up doing is damning themselves and their adherents, in the end.
The whole thing should be burned and torn down. Nothing that happens with code is an accident - someone always meant it to happen. It's programmed that way.
To be fair, programmers do make mistakes. But those result in the program failing catastrophically, not functioning perfectly but in a racist way.
True. But even in those cases, what was put in there was put in there deliberately.
My point is that every part of coding is put in by someone in a deliberate manner. There isn't anything there that someone didn't intend to be there. If something is systematically functioning in a certain way, it's coded to be that way deliberately. One can, at most, say that the outcome wasn't intended. But even that is a hard claim to make because, again, the code was put in by someone, somewhere, and it systematically put out those results.
So, like I said - they're doing all they can to paper over Truth. They're hoping the ignorant plebs won't notice in time. That we're all too drunk, high, and dumb to believe our lying eyes in these desperate times.
The thing is, though, this was not a coding issue at all. The results people were getting from Gemini Advanced were the product of Google personnel discreetly adding plain old English words to prompts. The instructions weren’t encoded, just obscured on the front end.
Maybe I'm failing to understand, but it's coded to put those into the prompts. So it's still a code thing. And then a code thing to refuse to show certain prompts. Or am I missing something?
Look at it this way. Suppose you typed something innocuous into an image search engine like "four friends having a beer" and hit the send button. Before the request is processed, a bunch of divers-a-speak terms are discreetly appended to the request (e.g. "four (+racially diverse + gender inclusive - white) friends are having a beer"). That's not a "programming" issue per se. It's an abstraction layer that anyone can accomplish, just by typing the appended words and synonyms into a db text field. And they probably don't even do that. Sounds like intern work.
The programmers have coded the abstraction layer that allowed for all the filtering, injections and transformations to be done, but the smoothbrains who added the duckspeak don't require any technical know-how whatsoever. It's essentially data entry at that point.
Worth noting that I didn't pick the "search engine" analogy out of a hat. Google has been pulling the same sneaky trick there for many years now. It sits atop their ranking algos for images, news stories and basically everything you search for. All of your Google searches are being changed before processing; it's just that the changes -- the additions and subtractions -- are obscured by the opaque front end.
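To make that concrete, here's a minimal sketch of the kind of front-end rewrite layer being described. It's purely illustrative and assumes a simple keyword lookup: the table of injected phrases, the "minus" list, and the rewrite_prompt function are all hypothetical, not anything from Google's actual systems.

```python
# Illustrative sketch only: a hypothetical front-end layer that quietly
# rewrites the user's prompt before the backend ever sees it.

# Phrases appended when certain keywords appear in the prompt.
# Editing this table is pure data entry; no programming knowledge needed.
INJECTED_PHRASES = {
    "friends": "racially diverse, gender inclusive",
    "people": "racially diverse, gender inclusive",
}

# Hypothetical "minus" terms tacked on as exclusions.
SUBTRACTED_TERMS = ["white"]


def rewrite_prompt(user_prompt: str) -> str:
    """Return the prompt the backend actually processes, not the one the user typed."""
    lowered = user_prompt.lower()
    additions = []
    for keyword, phrase in INJECTED_PHRASES.items():
        if keyword in lowered and phrase not in additions:
            additions.append(phrase)

    rewritten = user_prompt
    if additions:
        rewritten += " (+" + "; +".join(additions) + ")"
    for term in SUBTRACTED_TERMS:
        rewritten += f" (-{term})"
    return rewritten


print(rewrite_prompt("four friends having a beer"))
# -> four friends having a beer (+racially diverse, gender inclusive) (-white)
```

The "intern work" part is just editing that phrase table; the layer itself only has to be coded once.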
So, not a code thing, but a programming thing. As in, the code is what makes the program, with the program allowed to do so many things as directed by the middle men for the end users. The middle men decided how the program would function, and the end users got what they got.
So, still a deliberate choice within the company. The code allows the program to do what it does. The company middlemen were able to do what they did, and did it deliberately. It wouldn't be a bug, at that point. They likely did not receive that stern of a rebuke; I certainly have not seen anyone hung up by their heels or any real apology.
Am I on track now?
Wow. I had no flipping idea this shit was going on. But now that I do know, I am not surprised. Yep, it's always the programmers and not the program.
The comedy of it interwoven with its soul-nullifying tragedy finds me laughing and sobbing at the same time. But like all human attempts to play God, AI is built to fail.
Right. Some of those failure modes will need to be addressed (by us), but they are still failures by any reasonable definition of that word.
I've been experimenting with gab.ai at the recommendation of a commenter at the New Right Poast.
It doesn't continually slap my wrist about asking for inappropriate things - Bing, for instance, didn't want to generate an image of a boxing match because god forbid anyone ever see a violent image. (But be sure to support the war effort in Ukraine.)
But basically every image I get it to generate is just... flat. It's like a sullen teenager turning in a C- paper. I asked it to make an image of a German stormtrooper seeing an angel in the trenches of WWI. It gave me a Star Wars stormtrooper with wings.
There might be some prompts I can use to zazz things up, but basically these things seem to dredge up some lowest common denominators from the internet and boil them together to create a bland average. I haven't been able to get them to produce any of those creepy, glitchy images you sometimes see, let alone produce anything that rivals my imagination.
"But basically every image I get it to generate is just... flat. It's like a sullen teenager turning in a C- paper. I asked it to make an image of a German stormtrooper seeing an angel in the trenches of WWI. It gave me a Star Wars stormtrooper with wings."
That's hilarious!
But, yeah, the way people generally seem to use (or try to use) these systems is probably running face-first into all kinds of invisible walls. The funny thing is that the usability trendline might inevitably slope downwards from here. The usual crop of Regula-tards are already talking about kneecapping them further on the legal grounds of protecting intellectual property rights.
When you couple that new limitation with the Woke handicaps, a prompt like "German stormtrooper seeing an angel in the trenches of WWI" might produce something like a gray, vaguely human-shaped blob under a big white dot. If anything at all! Probably it will just chastise you... and perhaps report you to some armed bureaucracy.
Didn't realize it was possible to be made to feel nostalgic about the Matrix Reloaded. Zion was what Club MTV looked like. Downtown Julie Brown!
Very good.
“Welcome to Zion, I guess.” - only when enough people point out the obvious will things begin to change.
“Good woods burn silently, but thorns crackle loudly, crying out all the time ‘We are wood! We are wood!’” - Old Persian Saying.
you know what needs to be done.