Nope, Kelvins are a countable unit, like meters. Celsius and Fahrenheit are not.
Iunnrais
- 0 Posts
- 26 Comments
Iunnrais@piefed.social to Lemmy Shitpost@lemmy.world • Happy "don't believe anything you see and hear" day! Also known as "day of morons" (English)
112 · 11 days ago

I genuinely enjoyed it more when I was younger, finding all the joke articles and whatnot. Now I enjoy it less, but I'm content to let others have their fun. No use whining about it, and why should I decrease someone else's enjoyment when all I need to do is sit back and ignore stuff that doesn't matter anyway? Easy.
Iunnrais@piefed.social to Ask Lemmy@lemmy.world • Is memes@lemmy.ml super one sided right now? (English)
14 · 27 days ago

It was their self-descriptor until they learned other leftists didn't like them much (because they're authoritarian and supportive of fascist dictators such as Putin), and naming them using their self-designated term meant they were excluded from other leftists. So they started taking offense.
To be clear, it seems you object to the concept of sequels and movie serieses (how do you pluralize “series”?) more than compilations?
Iunnrais@piefed.social to Hacker News@lemmy.bestiver.se • It's time to move your docs into a repo – especially because of AI (English)
7 · 28 days ago

AI hate on Lemmy is strong. Admitting that you vibe code is enough to get an avalanche of downvotes. (I didn't downvote you, I just happen to know how things are around these parts.)
It is, and if I recall correctly he’s an eye surgeon. He has a YouTube channel that’s pretty darn funny even to a non-doctor.
Iunnrais@piefed.social to Lemmy Shitpost@lemmy.world • I was on social media before web browsers existed. I am Legion. (English)
4 · 1 month ago

I have the number "x277853" burned into my memory… but I can't 100% remember what it was for. I want to say it was my ICQ, but no one else seems to have an x at the beginning of theirs? Where was this x number from?
Antici………………… …… ……….
…… ……………… ……………………….pation
Iunnrais@piefed.social to Lemmy Shitpost@lemmy.world • That's why I say hey man, nice shot (English)
8 · 1 month ago

Narcissism is a spectrum, but at the far end of that spectrum, the narcissist is unable to feel anything except existential dread at the fact that they cannot feel anything, which is relieved by seeing reactions from other people, which is called "supply". Trump is at absolutely pathological levels of narcissism, and thus clearly cannot feel shame. The most we can make him feel is a supply withdrawal.
Iunnrais@piefed.social to Games@lemmy.world • Ubisoft Finally Confirms Assassin's Creed: Black Flag Resynced, the Remake We All Knew Was Coming (English)
2 · 1 month ago

Don't forget: this game was pretty good, but it was made before various UI and control standards were really set in stone as known best practices, so I'd love to play the game again as I remember it, not as it actually existed.
System Shock, looking at you…
I am protected.
Iunnrais@piefed.social to No Stupid Questions@lemmy.world • ELI5. Limit of current gen AI/LLMs (English)
31 · 1 month ago

The following wall of text is a simplification that I hope will help you understand. The simplification of the simplification (tl;dr) is: for as long as it has context window available, it figures out the meaning of every word in the entire conversation based on its position relative to every other word in the conversation.
The longer explanation (but still a simplification) is as follows:
An LLM does math not just on every word you send it, but on how every word you send it relates to every other word you send it. You can think of every “token” in an LLM’s context window as being a discrete slot that can take a word (or part of a word, or punctuation, whatever… that’s why we always say “token” not “word”), and that slot has very very complicated wiring that connects to every other slot in the context window. And the output of each of those connections is itself connected to more wiring, and the output of that to more wiring, and so on. Each of these layers seems to help with grammar, understanding, and linking of concepts… it also turns out that a lot of the connections aren’t even used, but having them all wired in allows the system to find the most optimal arrangement by itself. The way it figured out how to wire all the “slots” together is based on terabytes of training data.
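If you want to see what that "wiring between every slot and every other slot" looks like as actual math, here's a toy sketch of one attention step in plain NumPy. All the sizes and the random weights are invented for illustration; in a real LLM the weight matrices are learned and enormous:

```python
import numpy as np

rng = np.random.default_rng(0)

n_tokens, d = 5, 8                    # 5 "slots" in a tiny context window, 8 numbers each
x = rng.normal(size=(n_tokens, d))    # stand-in embeddings for the 5 tokens

# Learned projections (random here; training is what makes these meaningful)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# Every token scores its relation to every other token: a 5x5 grid of connections
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax rows

out = weights @ V                     # each slot's output mixes in every other slot

print(weights.shape)                  # (5, 5): one number for every pair of slots
```

Stacking many of these layers, with feed-forward wiring between them, is the "more wiring on top of the output of the wiring" part.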
Part of this wiring passes through a "dictionary" of sorts (not what it's called, but we'll run with it for this simplification), which encodes every token as a long LONG series of numbers. Each number in that series corresponds to a "semantic concept". For example, one of the numbers in the series might determine how "plural" a word might be. Another number might determine how masculine or feminine the word is. Another number might encode how rude the word is. Another might be how "cat-related" a word is. I keep saying "might" because we didn't write the "dictionary" ourselves, we got another machine to make it for us by analyzing literal terabytes of human-written text and checking for word collocations (what words appear in the vicinity of other words). Academic linguists have been having a golden age recently by studying the math of how the machines mapped words, and have slowly been piecing together what the various numbers mean-- it's really quite fascinating.
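To make the "dictionary of semantic numbers" concrete, here's a hand-made toy version. Real embeddings have thousands of learned dimensions, not four labeled ones, so treat every word and number below as an illustrative assumption:

```python
import math

# Toy "dictionary": each word is a short list of made-up semantic numbers.
#            [plural, feminine, royal, feline]
dictionary = {
    "cat":   [0.0, 0.0, 0.0, 1.0],
    "cats":  [1.0, 0.0, 0.0, 1.0],
    "king":  [0.0, 0.0, 1.0, 0.0],
    "queen": [0.0, 1.0, 1.0, 0.0],
    "man":   [0.0, 0.0, 0.0, 0.0],
    "woman": [0.0, 1.0, 0.0, 0.0],
}

def nearest(vec, exclude=()):
    """The closest word in the dictionary to an arbitrary point in concept space."""
    return min((w for w in dictionary if w not in exclude),
               key=lambda w: math.dist(dictionary[w], vec))

# "king - man + woman" lands on the point whose feminine and royal dims are both on:
v = [k - m + w for k, m, w in zip(dictionary["king"], dictionary["man"], dictionary["woman"])]
print(nearest(v, exclude=("king",)))  # → queen
```

That last line is the classic word-vector party trick: arithmetic on the numbers behaves like arithmetic on the meanings.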
Anyway, the context window is not an arbitrary array, and increasing a context window by even a single token basically requires rewiring the whole thing, which is why an LLM’s context window is inherently limited. And if there isn’t a slot to put a token in, then it simply can’t think about it.
So, an LLM does "think"… in a sense. It does "reason"… but only as to what the *words* mean, not about logical consistency or adherence to the real world or facts. You may have heard the Symphony of Science song "A Glorious Dawn" (https://www.youtube.com/watch?v=zSgiXGELjbc) where Carl Sagan says:
"But the brain does much more than just recollect It inter-compares, it synthesizes, it analyzes it generates abstractions
The simplest thought like the concept of the number one Has an elaborate logical underpinning\l The brain has its own language For testing the structure and consistency of the world"
An LLM does SOME of this. It inter-compares, but only between definitions of words in its dictionary. It analyzes… but only between definitions of words in its dictionary. It has its own elaborate logical underpinning, but these logical connections apply to WORDS and COMBINATIONS of words, not to ideas like our brain does.
In some ways this can be mitigated by encoding more and more information into the “dictionary”, which is how you can get an LLM to pass various exams it’s never seen before. But it’s all based on the meanings of the words as it understands them, not logic.
How DOES it think? Well, at the LOWEST level, it thinks one word at a time, considering what it should say next based on what has already been said. If it reads “Two plus two equals what?” it looks up the meaning of those numbers, checks the relation to the plus and equal words, does math on the WORDS (not the number 2!) and sees that, hey, there’s a dimension of words that relates to its position on the number line! I can adjust along this dimension of meaning, and come up with the answer four! And as long as two, plus, equals, and four are all SUFFICIENTLY well defined in the dictionary, then it can manipulate those ideas just as well as a human, or better.
What happens when it lacks words for concepts that it can map mathematically (what does cat + dog + not kingly + casual + bridgelike + sounds melancholy + french origin + purple + etc etc etc = ?)? This happens all the time. It looks for the closest word. Even if it has an exact concept mapped (very rare), it'll still look around its concept space a little bit, according to a parameter called "temperature", jiggling around the tokens in its dictionary like molecules jiggle when heated. This gets pretty good results, but not consistent ones… the result isn't fully random, we don't get chaos, but we do get different results for the same input. That's not necessarily a bad thing.
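Here's roughly what that temperature knob does at sampling time, as a toy sketch. The candidate words and their scores are invented for illustration:

```python
import math, random

random.seed(42)

# Pretend next-token scores ("logits") the model produced for the next slot
logits = {"seahorse": 2.0, "horse": 1.6, "fish": 1.2, "dragon": 0.3}

def sample(logits, temperature):
    """Softmax with temperature, then a weighted random draw."""
    if temperature == 0:                       # no jiggle: always the top-scoring token
        return max(logits, key=logits.get)
    exps = {w: math.exp(s / temperature) for w, s in logits.items()}
    r = random.random() * sum(exps.values())
    for w, e in exps.items():
        r -= e
        if r <= 0:
            return w
    return w                                   # guard against float rounding

print(sample(logits, temperature=0))                        # always "seahorse"
print([sample(logits, temperature=1.0) for _ in range(5)])  # picks vary
```

Higher temperature flattens the odds (more jiggle, more variety); temperature 0 is the deterministic "always pick the most likely token" mode.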
However… true contradictions can also arise in its definitions. The most famous example of this was when ChatGPT was asked about a “seahorse emoji”. Turns out, in the training data, it was able to find connections between seahorse and emoji pretty easily. It’s very confident that there is one. Unfortunately, there isn’t… so it has mathematical connections between the concept of seahorse and the concept of emoji, but when it adds them together, NO actual token of a seahorse emoji emerges (because there isn’t one in unicode). Using the “find the nearest mathematical token that DOES exist” principle, it’ll spit out another emoji. Then it will look at the emoji, and see that it clearly doesn’t match… that’s *not* a seahorse. But it “knows” that a seahorse emoji exists according to its dictionary; it has a link there! So it tries again, and again can’t find it. So it gets stuck in an endless loop.
Anyway, how do agents fit into all this? Well, people started thinking-- if we can’t get an LLM to think in terms of ideas and logic outside of the definitions of words, what if we handed off the logic to another program that can do that? We can train the LLM to associate and link certain words to computer commands to run a program that can do arithmetic, or calculus, or formal logic, or drawing a picture, or arranging text into a table, or things like that. These external programs can then return text to the LLM, which can process it as words with definitions, and give you a good answer.
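A minimal sketch of that hand-off, assuming a made-up calculator "tool" and a made-up structured request format (real agent frameworks differ in the details but share the shape):

```python
import ast, operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr):
    """A tiny arithmetic evaluator: the external program that actually does logic."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# Pretend the LLM emitted this structured request instead of word-math-ing the answer:
tool_call = {"tool": "calculator", "input": "2 + 2 * 10"}

tools = {"calculator": calc}
result = tools[tool_call["tool"]](tool_call["input"])
print(result)  # 22, and this text goes back into the context window for the LLM
```

The LLM's only job here is mapping words to the request and the returned text back to words; the actual arithmetic never touches the dictionary.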
We can also use the “definitions of words” approach to approximate thinking about abstractions and ideas. Just have the LLM start generating associations, but don’t show them to the user, keep them in the backend as “chain of thoughts”. When the abstraction has gotten to a useful enough point, we can then use it as part of our context window to analyze it as words and get a good result.
Sometimes there’s a problem with using only one dictionary… sometimes words mean VASTLY different things in different contexts. That’s where the “Mixture of Experts” approach comes in. You build different dictionaries for different contexts. You have one LLM figure out which domain is most likely appropriate, then hand the text off to a different LLM who was trained on that other domain with a different dictionary.
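A cartoon of the routing idea (heavily simplified: real mixture-of-experts routing is a trained network operating per token inside the model, not keyword matching on whole messages, and these word lists are pure invention):

```python
# Each "expert" here is just a keyword set standing in for a whole trained sub-model.
EXPERT_HINTS = {
    "law":     {"contract", "liability", "plaintiff"},
    "cooking": {"simmer", "dough", "saute"},
    "code":    {"function", "compile", "variable"},
}

def route(text):
    """Pick the expert whose vocabulary overlaps the input the most."""
    words = set(text.lower().split())
    return max(EXPERT_HINTS, key=lambda e: len(EXPERT_HINTS[e] & words))

print(route("why will this function not compile"))  # → code
```

The flavor is the same either way: a cheap decision up front about which specialized "dictionary" should handle the heavy lifting.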
It all comes together, and it works. Mostly. Usually. There are problems sometimes. And it used to be that we could fix problems just by giving it more training data… make the dictionary better. And it’s probably true that with an infinitely precise dictionary, there’d be no problems at all, just like it has no problems adding two plus two because it has sufficient definition of all those words; except we’ve literally run out of additional training data to give it. So workarounds and hacks and specialized training and things have been utilized to patch over the bits of the dictionary we don’t have and possibly can’t ever make.
And that’s a simplified version of how LLMs do what they do.
Iunnrais@piefed.social to No Stupid Questions@lemmy.world • ELI5. Limit of current gen AI/LLMs (English)
10 · 1 month ago

It's fundamentally not the same thing as autocomplete. Give autocomplete all the data an LLM has, every gig, every terabyte of it, and it still won't be an LLM. Autocomplete lacks the semantic meaning layer as well as some other parts. People say it's nothing but autocomplete from a misunderstanding of what a reward function does in backpropagation training (saying "the reward function is to predict the next word" is not even close to the equivalent of "it's doing the same thing as autocomplete").
I’m writing this short reply with hopes that when I have more time in the next two days or so I’ll come back with a more complete explanation, (including why context windows have to be limited).
Iunnrais@piefed.social to Lemmy Shitpost@lemmy.world • The script is mysterious and important. (English)
8 · 1 month ago

It's not just the very ending, unless you mean the entire 2nd half… it just felt like there were two entirely different movies made, and they took the first half of one and haphazardly grafted it onto the second half of the other. The tonal whiplash alone was crazy.

I feel like that sort of movie mismatch has happened a lot over the years. I figure that producer meddling or overfitting to test-screening reactions is typically the cause.
Iunnrais@piefed.social to Technology@lemmy.world • AIs can't stop recommending nuclear strikes in war game simulations — Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95% of cases (English)
3 · 2 months ago

Some ideologies are, in fact, mutually exclusive and cannot tolerate the others. Fascism cannot be tolerated, for instance. Nor can a belief in chattel slavery as a universal good. Sometimes an opposing ideology is just too fucking evil to be allowed to persist.

Setting the line that must not be crossed is a hard problem, though. And misplacing that line an inch in either direction can be horrible too.
Iunnrais@piefed.social to No Stupid Questions@lemmy.world • Can a reasonable person genuinely believe in ghosts? (English)
22 · 2 months ago

I think you could rationally explore ghosts in the "radically redefining them" arena. Ghosts could rationally exist as an artifact of your mind, and saying that is not the same thing as saying they don't exist. Hallucinations exist. They aren't real, but they exist. Ghosts could rationally exist in exactly the same way, as processes in our own heads. It's when you start saying they interact with the world in a way outside people's heads that you can't really reconcile things.
Iunnrais@piefed.social to RetroGaming@lemmy.world • One Must Fall 2097 - Main Theme Remix [SIDNIFY] (English)
1 · 2 months ago

It wasn't the only one, but it was certainly my first 2D fighter. Taught me the quarter-circle movement.
Iunnrais@piefed.social to Superbowl@lemmy.world • We've seen aliens, how about Biblically accurate angels? (English)
1 · 2 months ago

Two wings covering their faces, two wings covering their feet, and two wings to move with.




No, the difference is that if you double a kelvin number, you have quantifiably doubled the heat. If you double a Celsius or Fahrenheit number, you have not quantifiably doubled the heat… the number does not objectively count an amount of something.
Think meters. A meter measures an exact length. Two meters is double one meter.
Celsius doesn’t do that. Celsius is a scale between two amounts of heat.
The equivalent for distance would be if we had a scale where 0 degrees distance was equal to 582.7762 meters, and 100 degrees distance was equal to 721.5323 meters. Each degree between 0 and 100 is then a slice of that range. Maybe for the people who designed such a scale there are useful reasons to do so, but you aren't measuring the quantity or amount of something, you're measuring a position within a range.
Kelvin measures molecular movement, just as hertz measures oscillations or cycles, or grams measure mass.
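A quick numeric version of the point (the conversion is the standard one; "twice as hot" is the simplified molecular-energy reading):

```python
def c_to_k(c):
    """Celsius to kelvin: same step size, shifted zero point."""
    return c + 273.15

# Going from 10 °C to 20 °C sounds like "twice as hot", but on the absolute scale:
print(c_to_k(10))        # 283.15 K
print(c_to_k(20))        # 293.15 K, only about 3.5% more
print(c_to_k(10) * 2)    # 566.3 K, i.e. about 293 °C: what doubling actually means
```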