

He’s pretty chill, unless you break userspace. In that case, Linus will break you. But pretty cool otherwise.


Now you have syntax highlighting and JSON parsing. Total install size: ~10MB. Total startup time: instant. Total RAM usage: negligible. Total feelings of superiority: immeasurable.
As someone who often makes curl API requests and also came up with the idea of piping them into jq: I felt superior just by reading this, thank you.
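For anyone who hasn’t tried it, the whole trick is one pipe (the endpoint and field names here are made up, swap in whatever API you actually hit):

```sh
# -s keeps curl quiet; jq does the filtering and pretty-printing
# api.example.com and the fields are placeholders, not a real API
curl -s https://api.example.com/v1/items \
  | jq '.items[] | {name, price}'
```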
More importantly - why has no one mentioned Hitler yet?
An offroad motorcycle though…
There are more pleasant ways to commit suicide, no?


I had all those issues on Arch for a while, but a recent (this week) update indeed seems to have borked something even more. Not sure if it was a Steam update or an Arch update, but currently the Steam window has a 50/50 chance of becoming laggy af and unresponsive, basically making it unusable. Closing it to tray and opening it again can fix it, with the same 50/50 chance.
I have no clue how to fix it. Check their GitHub; maybe someone has already reported the bug.


I once wrote a small post on Reddit about running FSR4 on RDNA3 (via a driver emulation hack that the Linux devs added, before the INT8 version). That poor post was reused by multiple sites with bizarre titles like “guy on reddit hacked FSR4!” and other similar crap. I’m not sure if it’s even humans writing that; probably some server with an LLM continuously scrapes Google for new posts, rewrites them, and posts them on its own sites for engagement.
The future that awaits us sure looks fun.


Two years ago, when I found out that you need a damn subscription to watch YOUR stuff with transcoding, on your device, on your local network, from your local server - I complained on Reddit, and a lot of people disagreed with me for taking such a harsh position.
They_got_what they_focking_deserve.png
represents a minority
says that another minority shouldn’t be tolerated
Many such cases


Yes. But you won’t be prosecuted if you simply don’t spread propaganda. They could’ve refused to post this article and would’ve been fine anyway.
I recall there was a story, told from the side of a Nazi Germany soldier, explaining how no one was really forced to do all that Nazi stuff in the first place: officers just commanded “whoever wants to rape & kill the locals, go ahead; the rest can stay, no one will be prosecuted.” And almost no one stayed. A similar story is playing out in russia currently, especially in the army, but in civilian life too. No one is forcing russians to hate Ukrainians, and yet my grandparents said something along the lines of “we should gas them.”
So yeah, nah.


Wait, you’re telling me the volatile stock market is volatile?
Who could possibly predict such a thing?


Marriage? In this economy?
In fact, I’m gonna steal the part about mommy and add it to my profile too. A focking masterpiece.
$20k
The real question is: do I bring my own lube or not?


That propaganda won’t work, comrade Ivan; we all know there are no gay farmers in Vladivostok. Our wise government made sure of it!
But yeah, jokes aside, it’s currently a crime to support LGBT in russia. Being gay = being a rebel against the regime, because the real rebels are either already dead, in prison, or have fled. Next in line, if I had to guess, would be Jews, I think.


I just checked how much my 4x32GB costs. Guys, I’m focking rich.
I once witnessed the funniest thread of my life, so I made it into a meme. Feels like it fits here.

Is there a general term for the setting that offloads the model into RAM? I’d love to be able to load larger models.
Ollama does that by default, but prioritizes GPU VRAM above regular RAM and CPU. In fact, there’s another feature that often doesn’t work, because they can’t fix the damn bug that we reported a year ago: mmap. That feature lets you load and use a model directly from disk (although incredibly slowly; it does let you run something like DeepSeek, which weighs ~700GB, at 1-3 tokens/s at least).
num_gpu lets you specify how many model layers to load into GPU VRAM; the rest will be swapped to regular RAM.
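If you’re going through the API instead of the CLI, those knobs live under options. A minimal sketch, assuming a pulled llama3 and made-up numbers:

```sh
# num_gpu = layers offloaded to VRAM; use_mmap = the disk-mapping feature above
# model name and values are just examples, tune them for your hardware
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "hello",
  "options": { "num_gpu": 20, "use_mmap": true }
}'
```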
You’d need ollama (local) and custom models from huggingface.
Half of the charm of using ollama is the ability to install models in one command, instead of searching for the correct file format and settings on huggingface.
for example (model names here are just illustrative):
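```sh
# pulls the model and drops you into a chat in one step
ollama run llama3

# GGUF quants can also be pulled straight from a huggingface repo
ollama run hf.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF
```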
Isn’t that one also pretty censored? Really uncensored ones are usually either built from scratch (Behemoth or Midnight-Miqu, for example) or named accordingly: mixtral-uncensored or llama3-abliterated.
Wait, you mean using a Large Language Model, created to parse walls of text, to parse walls of text is a legit use?
Those kids at openai would’ve been very upset if they could read.