

CEO complains that his blatant circular investment infinite money glitch isn’t convincing people that these perpetually unprofitable businesses are actually worth further investment.


Like, there’s no way in hell these files haven’t been doctored, right? Months of obfuscation and deflection, and then suddenly Trump’s fine to sign their release? There’s no way.


Why the hell did The Guardian include comment from an Amazon spokesperson? “Nuh uh, that’s not true.” No fucking duh, that’s their response.
Got a secondhand Pixel phone and installed GrapheneOS. I love it.


I’m just now starting my degree in software engineering. I’m 31. I’d gotten comfortable enough with Linux that I wanted to try NixOS to avoid having my system get borked again (in my case, KDE Plasma started having shell crashes at login).
If I were only using NixOS to run a basic computer setup? Sure, no problem. If I want to rice and customize it? No, I wasn’t ready.


I know it’s not for everyone, but my Light Phone III arrives soon and tech headlines of late aren’t making me regret my choice.


My take: if Rossmann came out swinging as an anti-corporate revolutionary, his ideas wouldn’t have wide appeal right now, since many people still think the problem is just “bad” mega-corporations. So instead, he’s arguing for less-shitty tech corporations as a first step (symbolized by Clippy, of a less-intrusive software age), rather than “destroy all tech corpos now.” No, Microsoft wasn’t good then, but they were less awful.
If his video were starkly anti-capitalist, it would not have reached 2.5 million people, and I’d say getting that many people to start thinking about rejecting invasive software is a great step in the right direction, as opposed to ideological purism that would only resonate with those who already agree. The need for these baby steps is frustrating for those who already see the big picture, but a few chats with my coworkers quickly reveal how shockingly little some people have actually thought about the sins of big tech.
I’d say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental impact necessary to power centralized, commercial AI models. Refer to situations like the one in Texas. A person’s use of models like ChatGPT, however small, contributes to the demand for this architecture that requires incomprehensible amounts of water, while much of the world does not have enough. In classic fashion, the U.S. government is years behind on accepting what’s wrong, allowing these companies to ruin communities behind a veil of hyped-up marketing about “innovation” and beating China at another dick-measuring contest.
The other concern is that ChatGPT’s ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model’s training. As the adage goes, “AI allows wealth to access talent, while preventing talent from accessing wealth.” But since a ridiculous amount of data goes into these models, it’s an amorphous ethical issue that’s understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.
By my measure, this AI bubble will collapse like a dying star in the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally destructive practices, and eventually we’ll see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).
As for what you can do instead, people have been running local Deepseek R1 models since earlier this year, so you could follow a guide to set one up.
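For reference, one common route people use for this is Ollama; the sketch below assumes Ollama is installed locally, and the model tag shown is just an example of the distilled R1 variants people run, so check Ollama’s model library for what fits your hardware:

```shell
# Hypothetical sketch: running a distilled DeepSeek-R1 model locally with Ollama.
# Assumes Ollama is already installed; the 7b tag is an example, smaller and
# larger distills exist depending on your RAM/VRAM.
ollama pull deepseek-r1:7b
ollama run deepseek-r1:7b "Summarize what a Python list comprehension does."
```

Everything runs on your own machine, so nothing you type leaves your computer, and the hardware question is just how big a distill you can fit in memory.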
The shift to these ridiculously large trucks is partly a consequence of the poorly implemented Obama-era fuel economy regulations. The standards were determined by vehicle footprint (wheelbase and track width), which disincentivized manufacturers from making mid- or small-sized trucks: the bigger they made them, the less restricted they were by fuel economy targets. Larger vehicles also ease constraints on engineers, since they don’t have to struggle to fit a lot into a small body. Once large trucks became the default offering, they morphed into the annoying cultural “status” symbol we know today.
Anyway, I have an MX-5 Miata and I love my tiny car.


To say “that feeling” of indignation (at the letter’s inclusion in a gallery) is the same as other things that make him roll his eyes is reductionist. We regard things as stupid for different reasons; they’re not all the “same feeling.” As others have said, the artist’s intentionality in presenting something is part of its message. So the indignation he felt about a piece being put in a gallery is part of that piece’s effect on him, born from the artist’s choices. That feeling is different from hearing a moron say something dumb and thinking it’s stupid.
Intentionality is the key. Case in point, “language evolves” is a silly thing to say after a mistake, but many subcultures start misspelling things on purpose, and that intentionality is how language evolves.
I find it surreal and profound that there is now a form of cybercrime that is, literally, using poetic maledictions. The line between technology and classic depictions of magic blurs yet further.