

Fuck. The higher ups at my workplace are currently utterly Claude-brained to the point it makes you think they’re getting their salaries from Anthropic. I am like 80% sure this shit will be on my table when I’m back from vacation in two weeks.
AI use at my company is now mandatory. We’ve all gotten really expensive Claude or ChatGPT licenses and (apparently) it’s being monitored whether we actually use them. For a while I resisted, telling everyone I don’t touch that shit with a ten-foot pole. Now I just write the occasional non-invasive prompt (“check for spelling errors in the comments” or “format the buffer so that all lines have fewer than 80 columns”) just so someone can report to the C-suite that our department is using AI.
Meanwhile I have to do code reviews on patches that were actually written by a chatbot, and that sometimes takes as long as writing the whole fucking code from scratch.
But hey, style guidelines are out the window too because the slop machine can’t follow them consistently anyway and apparently fixing that shit manually is too much to ask. So at least I never have to fight anyone again over indentation and braces.
If infinite monkeys with typewriters can compose Shakespeare, then infinite monkeys with slop machines can produce Einstein (but you need to pump in infinite amounts of money first into my CodeMonkeyfy startup, just in case).
Remember Devin, the AI coding bot that turned out to be a complete scam?
Well, looks like Goldman Sachs decided to let it loose on its code base. In YOLO mode too, it seems.
There’s something I’ve been wondering about: is the management (not just of Microsoft but of really any company that’s pivoting to AI this hard, so also Duolingo and Klarna, &c.) really convinced that there’s going to be a foom event soon and the chatbots will become capable enough to actually do the work being attributed to them? Or do Nadella and friends know this is all bullshit and are just playing along until they can’t anymore and leave the sinking ship?
I honestly can’t tell anymore.
I swear if I hear “being against AI is ableist” one more time I’m gonna lose my shit. Disabled artists have existed for as long as art itself, and the only ableism here is AI-brained fuckwits using disabled people as an escape goat by suggesting they are unable to create things through their own effort and need spicy autocomplete to do so.
Edit: fuck it, I’m keeping the escape goat!
My favorite bullshit test is to have the AI count letters. Something as simple as “How many A’s are in this sentence?” will have the thing give you a seemingly random number (which might sometimes even be correct), but as soon as you ask it to list them, it will have a stroke and just make up stuff that makes no sense.
I can’t be arsed to make an account with OpenAI or whatever, so here’s an example with Bing Chat, but this has worked on every single chatbot I’ve tested it on.
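For the record, counting letters is exactly the kind of thing a dumb, deterministic one-liner nails every single time. A quick sketch (using that same question as the input, just as an example):

```python
sentence = "How many A's are in this sentence?"
# Case-insensitive count of the letter 'a'. No stroke, no made-up numbers.
print(sentence.lower().count('a'))  # → 3
```

That’s it. That’s the whole “reasoning”.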
Another fun test (as suggested by @flizzo last time I brought this up) is to have the chatbot attempt to crack a simple code. Just Caesar cipher something and ask it to crack it. Hilarity will ensue. You can even ask it to encrypt it first and then to crack it for extra fun.
Me: Encrypt the sentence “I am a stupid roided Clippy” with the Caesar cipher.
Clippy: [After explaining what the Caesar cipher is, which I didn’t ask for, thanks] Convert the sentence to uppercase: “I AM A STUPID ROIED CLIPPY”. Let’s apply the Caesar cipher to your sentence using a shift of 7.
- ‘I’ becomes ‘P’
- ‘A’ becomes ‘H’
- ‘M’ becomes ‘T’
- …
- ‘Y’ becomes ‘F’
So the encrypted sentence is: “PH HT H ZBHTLW YVPLUU”. 😊
Okay, first of all, it dropped a letter. And the output doesn’t even follow its own shift, lol. It said ‘Y’ becomes ‘F’ and then the final string just does whatever the fuck.
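For comparison, a correct shift-7 Caesar encryption is a few lines of Python, no chatbot required (quick sketch):

```python
def caesar(text, shift):
    # Rotate alphabetic characters by `shift` positions, wrapping around;
    # everything else (spaces, punctuation) passes through unchanged.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

print(caesar("I AM A STUPID ROIDED CLIPPY", 7))
# → P HT H ZABWPK YVPKLK JSPWWF
```

No letters dropped, and note that ‘Y’ really does map to ‘F’ under a shift of 7, exactly as the bot listed; its final string just ignores its own mapping.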
Okay, so let’s give it an easy example, and even tell it the shift. Let’s see how that works.
This shit doesn’t even produce one correct message. Internal state or not, it should at least be able to read the prompt correctly and then produce an answer based on that. I mean, the DuckDuckGo search field can fucking do it!
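And cracking a Caesar cipher without even knowing the shift is just as trivial, because there are only 25 candidates to try. A brute-force sketch (the ciphertext below is my own correct shift-7 encryption of the sentence from above; you, or a word list, eyeball the output for the readable line):

```python
def caesar(text, shift):
    # Same shift helper: rotate letters, pass everything else through.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def crack(ciphertext):
    # Try every possible shift; exactly one line will come out as English.
    for shift in range(1, 26):
        print(f"shift {shift:2}: {caesar(ciphertext, -shift)}")

crack("P HT H ZABWPK YVPKLK JSPWWF")
```

Shift 7 pops right out. Hilarity does not ensue.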
Yeah, for multiple reasons. Mostly because all the information in there isn’t accessed or searchable from the outside, and technically not even from the inside because Discord’s search feature fucking sucks.