China is “set up to hit grand slams,” longtime Chinese energy expert David Fishman told Fortune. “The U.S., at best, can get on base.”
This is all based on the assumption that AI will need exponentially growing amounts of power.
It will not.
AI is a bubble.
Even if it isn’t, fab capacity is limited.
The actual ‘AI’ market is racing to the bottom with smaller, task-focused models.
A bunch of reproduced papers (like BitNet and various sparsity schemes) that reduce power use dramatically are just waiting for someone to try a larger test.
They’re slowly getting less ‘dumbly implemented’ so they can actually reference real info for tasks.
Altogether… that means inference moves to smartphones and PCs.
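For context on the BitNet point above: the b1.58 variant quantizes weights to {-1, 0, +1} with a per-tensor “absmean” scale, so matrix multiplies collapse into additions and subtractions. A minimal pure-Python sketch, loosely following that scheme (illustrative only, not the paper’s reference code):

```python
def ternary_quantize(w, eps=1e-8):
    """Quantize a flat list of weights to {-1, 0, +1} with an absmean scale.

    Inference against ternary weights needs only adds/subtracts plus one
    rescale by `scale`, which is where the claimed energy savings come from.
    """
    scale = sum(abs(x) for x in w) / len(w) + eps       # per-tensor absmean
    q = [max(-1, min(1, round(x / scale))) for x in w]  # clip to {-1, 0, +1}
    return q, scale

q, s = ternary_quantize([0.5, -0.5, 0.01, 2.0])
# q is [1, -1, 0, 1]: small weights snap to 0, the outlier saturates at +1
```

Real implementations do this per layer during training (quantization-aware), not as a one-shot post-hoc pass, but the arithmetic savings are the same idea.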
This is just the finance crowd parroting Altman. Not that the US doesn’t need a better energy grid like China’s, but the justification (AI scaling) is built on promises that aren’t going to pan out.
But! Zuck said they recently saw AI able to work on tasks that involve improving the software that manages AI! He said that means we are not far from superintelligence!
The extrapolation these guys make without new paradigms in mind is evidence of a wall and a bubble, for me.
The irony is Zuck shuttered the absolute best asset they have: the Llama team.
Over one experimental failure trying to copy DeepSeek. Which, you know, is normal in research, but it was also a pretty conservative choice instead of trying a new paper.
Zuck’s a fickle coward who would say and do anything to hide his insecurity.
While AI (as it is currently done) is a bubble, the article is still rather interesting. It argues that China’s grid is superior because it has state backing, instead of being privately owned (and therefore short-sighted). Which is true, and America has a lesson to learn from that if it wants a part of the future.
By the way, the same goes for public infrastructure and housing. The state should invest heavily in these and provide them efficiently as a community service long-term, instead of relying on private parties to take care of these needs.
This is exactly the kind of post that reads fine, but when you are an insider, it’s obvious it is almost all bullshit.
The problem is most of you have very shallow takes on what is going on in AI right now
Artist theft isn’t even close to the worst problem we are facing, yet 90% of the energy spent online is to protect some fucking furry sketcher’s income while we are facing an existential threat of social media profiling and dissident targeting.
I mean, I’m a local AI evangelist and have made a living off it. The energy use of AI thing is total nonsense, as much as Lemmy doesn’t like to hear it.
I keep a 32B or 49B model loaded pretty much all the time.
You are right about the theft vs social media thing too, even if you put it a little abrasively. Why people are so worked up in the face of machines like Facebook and Google is mind boggling.
…But AI is a freaking bubble, too.
Look at company valuations vs how shit isn’t working, and how much it costs.
Look around the ML research community. They all know Altman and his infinite-scaling-to-AGI pitch is just a big fat tech bro lie. AI is going to move forward as a useful tool by becoming smaller, smarter, more specialized, and more efficient, but transformer LLMs with randomized sampling are not going to turn into general artificial intelligence just because enough investors throw money at these closed-off enterprises.
I can’t help but put everything abrasive, it’s genetic and medical and does not respond well to any treatment for long
I fully agree AI is a bubble, but in the way the Internet was a bubble in the late 90s: too much enthusiasm with too little application in the moment, but an upcoming kaiju-style industry that no one really expects to blossom. The same applies to NFTs, though it’ll take a decade before people really understand the programmatic contract capabilities of actual NFTs (I’m not referring to PNGs stored on someone else’s server). And I think it will take less than 4 years after the AI bubble pops for the really bad knock-on effects of social control to start becoming public.
LLMs will eventually be the interface to stacked suites of expert-trained relational nodes, not the actual horsepower doing the data transformation itself, and I feel that is what will be developed after the ‘AI chatbot is your friend’ hype dies down and the investors find the next big thing to ruin.
When our fascist government agents want to crack down on dissidents they will ask an LLM to go query the expert nodes and cross index a list of questionable sentiment posts in the last decade and produce a geoip’d list and real name for every user that doesn’t bow to dear leader and it will be done in a matter of minutes to a high degree of accuracy.
Every other concern (power, IP rights, censorship) will pale next to the chilling effect of the first fascist government with an unblinking eye in everyone’s pocket and house. Every one of you is bitching about money or business or investing when we are heading into a wealth-disparity dystopia that will make all of our novels pale in comparison.
Which is why it’s so fucking frustrating that nowhere do I see people even slightly concerned about it, and whenever I mention it I get banned for an unrelated reason within hours.
Starting to think most of you are LLM bots specifically prompted to harass the fuck out of every post I ever make.