A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things as well.

  • 1 Post
  • 60 Comments
Joined 4 years ago
Cake day: August 21st, 2021


  • Sure. I mean, we’re a bit different on both sides of the Atlantic. Europe regulates a lot more. We’re not supposed to be ripped off by big companies; they’re not supposed to invade our privacy or pollute the environment unregulated… Whether we succeed at that is a different story, but I believe that’s the general idea behind social democracy and the European spirit. We value our freedom from being used, and that’s also why we don’t have two weeks’ notice, and we do have regulated working hours and a lot of rules and bureaucracy. The US is more about the freedom to do something. Opportunity. And in my eyes that’s the reason why it’s the US that has a lot of the tech giants and AI companies. That just fosters growth. Of course it also includes negative effects on society and the people. But I don’t think “right” and “wrong” are fitting categories here. It’s a different approach, and everything has consequences. We try to balance more, and Europe is more balanced than the US. But that comes at a cost.

    That’s a line by the copyright lobbyists […]

    Well, I don’t think there are a lot of good things about copyright to begin with. Humanity would be better off if information were free and everyone had access to everything, could learn from it, remix it, and create what they like.

    I think of copyright more as a necessary evil. But somehow Terry Pratchett needed to be able to make a living by writing novels. My favorite computer magazine needs to pay its employees. A music band can focus on a new album once they get paid for it… So I don’t think we need copyright specifically. But we need some mechanism so people keep writing books, music etc… Hollywood has also made some nice movies and TV shows, and they cost a lot of money.

    I don’t have an issue with AI users paying more. Why should we subsidise them and force the supply chain to do work for a set price? That’s not how other businesses work. The chocolate manufacturer isn’t the only one making a profit; an entire chain from the farmer to the supermarket gets to take part in earning money, all culminating in one product. I don’t see why it has to be handled differently for AI.

    And what I like about the approach in Europe is that there is some nuance to it. I mean, I don’t agree 100%, but at least they incentivise companies to be a bit more transparent, and they try to differentiate between research that benefits everyone and for-profit interests. And they try to tackle bad use-cases, and I think that’s something society will appreciate once the entire internet is full of slop and misinformation from bad actors. Though I don’t think we have good laws for that as of now.


  • Hmmh. It’s a bit complicated. “Fair Use” is a concept in common-law countries, but lots of European countries do it a bit differently. Here in Germany we need specific limitations and exceptions to copyright. And we have some for art and science, education, personal use, citations and so on. But things like electronic data transfer, internet culture and, more recently, text and data mining needed to be added on top. And even the data-mining exception was very specific and didn’t fit AI in its current form. And we don’t have something like Fair Use to base it upon.

    From my perspective, I’m still not entirely convinced Fair Use is a good fit, though. For one, it doesn’t properly deal with the difference between doing something commercially and doing it for research or personal use, and I believe some nuance would help here; big rich companies could afford to pay something. And since AI is disruptive, it has some effect on the original work, and balancing that is somehow part of Fair Use. Then the same copyright concept has higher standards elsewhere, for example in music production, when sampling parts of other songs that are recognizable in the resulting work. And I don’t think we have a clear idea of how something like that translates to text and AI. It can reproduce paragraphs, or paint a recognizable Mickey Mouse, and in some way that content is in the model and leads to other issues. And then all the lines are blurry, and it will still take a massive number of lawsuits to settle how much sounding like Scarlett Johansson is too much sounding like her… I’d say even the US might need more clarity on a lot of legal questions, and it’s not just handled by Fair Use as is… But yeah, “transformative” is somewhat at the core of it. I can also read books, learn something and apply the knowledge from it. Or combine things and create something new/transformative.


  • I think I used a bit too much sarcasm. I wanted to take a spin on how the AI industry simultaneously relies on copyright and finds ways to “circumvent” the traditional copyright that was written before we had large language models. An AI is neither a telephone book, nor should every transformative work be Fair Use, no questions asked. And this isn’t really settled as of now. We likely need some more court cases and maybe a few new laws. But you’re right, law is complicated, there is a lot of nuance to it and it depends on jurisdiction.





  • I think you have to be more specific than that. A lot of people use Linux. It’s used on the majority of internet servers, in some gaming consoles (Steam Deck), and by a minority of people on desktop computers, for privacy reasons, for office work or software development, and for a wide variety of other tasks. Your question is very hypothetical. If you live in a place that singles you out for such things, or you have a specific use-case that makes you stand out in a bad way by using a different operating system than the masses, you might have a problem. If you don’t, you might be just fine and enjoy the benefits of using Linux.


  • CPU-only. It’s an old Xeon workstation without any GPU, since I mostly do one-off AI tasks at home and I never felt any urge to buy one (yet). Model size would be something between 7B and 32B with that. Context length is something like 8128 tokens. I have a bit less than 30GB of RAM to waste, since I’m doing other stuff on that machine as well.

    And I’m picky with the models. I dislike the condescending tone of ChatGPT and newer open-weight models. I don’t want it to blabber or praise me for my “genius” ideas. It should be creative, have some storywriting abilities, be uncensored and not overly agreeable. The best model I’ve found for that is Mistral-Nemo-Instruct, and I currently run a Q4_K_M quant of it (a rough sketch of how that’s driven follows at the end of this comment). That does about 2.5 t/s on my computer (which isn’t a lot, but somewhat acceptable for what I do). Mistral-Nemo isn’t the latest and greatest any more, but I really prefer its tone and it performs well on a wide variety of tasks. And I mostly do weird things with it: let it give me creative advice, be a dungeon master or a late-80s text adventure, or mimic a radio host and feed it into TTS for a radio show, or write a book chapter or a bad rap song. I’m less concerned with the popular AI use-cases like answering factual questions or writing computer code. So I’d like to switch to a newer, more “intelligent” model, but that’s proving harder than I imagined.

    (Occasionally I do other stuff as well, but that’s few and far between. Then I’ll rent a datacenter GPU on runpod.io for a few bucks an hour. That’s the main reason why I haven’t bought my own GPU yet.)
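    For anyone curious what that looks like in practice, here’s a minimal sketch of driving such a CPU-only setup with llama-cpp-python. The file path, thread count and exact context size are placeholders, not my literal config:

```python
# Minimal sketch: a Q4_K_M GGUF of Mistral-Nemo-Instruct on CPU via llama-cpp-python.
# Path, thread count and context size are placeholders, not recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="models/Mistral-Nemo-Instruct-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=8192,       # context window, roughly the ~8k tokens mentioned above
    n_threads=8,      # match your physical CPU cores
    n_gpu_layers=0,   # CPU-only, nothing offloaded to a GPU
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are the dungeon master of a gritty text adventure."},
        {"role": "user", "content": "Set the opening scene."},
    ],
    max_tokens=300,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```

    On a machine like mine, output trickles in at a few tokens per second; nothing fancy, but enough for the creative-writing use-cases above.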


  • hendrik@palaver.p3x.de to Linux@lemmy.world: Why is sleep so hard for laptops?

    Maybe that’s more an issue with Modern Standby? Or the hardware has some quirks. My last two laptops were a Thinkpad and now a Dell Latitude, and they both sleep very well. I close the lid and they drain a few battery percent over the day; I open the lid, the display lights up and I can resume work… Rarely any issues with Linux.
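
    If you want to check which suspend variant your laptop actually uses, the kernel exposes it: “s2idle” is the modern-standby style, “deep” is classic S3 suspend. Just a quick sketch reading that file (it exists on mainline Linux kernels; which mode is active depends on firmware and kernel parameters):

```python
# Show which suspend mode is active; the bracketed entry is the one in use,
# e.g. "[s2idle] deep". Works on mainline Linux kernels.
with open("/sys/power/mem_sleep") as f:
    print(f.read().strip())
```

    If it says s2idle and the battery drains overnight, forcing “deep” (where the firmware supports it) is often worth trying first.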


  • I think there are some posts out there (on the internet / Reddit / …) with people building crazy rigs with old 3090s or something. I don’t have any experience with that. If I were to run such a large model, I’d use a quantized version and rent a cloud server for that.

    And I don’t think computers can fit infinitely many GPUs. I don’t know the number; let’s say it’s 4. So you’d need to buy 5 computers to fit your 18 cards (rough numbers sketched at the end of this comment). So add a few thousand dollars. And a fast network/interconnect between them.

    I can’t make any statement about performance. I’d imagine such a scenario might work for MoE models with an appropriate design, and for the rest, performance would be abysmal. But that’s only my speculation. We’d need to find people who have done this.

    Edit: Alternatively, buy an Apple Mac Studio with 512GB of unified RAM. They’re fast as well (probably way faster than your idea?) and maybe cheaper. It seems an M3 Ultra Mac Studio with 512GB costs around $10,000. With half that amount of RAM, it’s only $7,100.
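
    To put the rough numbers from above into one place, here’s a back-of-the-envelope sketch. The per-box limit, per-card VRAM and bytes-per-weight are all assumptions for illustration, and it ignores KV cache, activations and interconnect overhead:

```python
# Back-of-the-envelope sketch for the multi-GPU idea above.
# All figures are assumptions, not measurements.
import math

cards            = 18    # the proposed number of cards
vram_per_card_gb = 16    # e.g. a 16 GB card like a Radeon 9060 XT
cards_per_box    = 4     # guessed limit per machine

boxes      = math.ceil(cards / cards_per_box)   # -> 5 machines
total_vram = cards * vram_per_card_gb           # -> 288 GB of pooled VRAM

# What fits in that, weights only: a 4-bit quant is roughly 0.55 bytes per weight.
bytes_per_weight = 0.55
max_params_b = total_vram / bytes_per_weight    # -> ~524B parameters, on paper

print(f"{boxes} machines, {total_vram} GB VRAM, ~{max_params_b:.0f}B params at Q4")
```

    So on paper the pooled VRAM is huge, but whether the tokens per second stay usable once the layers are split across five machines is exactly the part I can’t vouch for.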


  • Well, I wouldn’t call them a “scam”. They’re meant for a different use-case. In a datacenter you also have to pay for rack space and all the servers which accommodate all the GPUs. And you can either pay for 32 times as many servers with Radeon 9060XT cards, or you buy H200 cards. Sure, you’ll pay 3x as much for the cards themselves, but you’ll save on the number of servers and everything that comes with them: hardware cost, space, electricity, air-con, maintenance… Less interconnect makes everything way faster, too…

    Of course, different rules apply at home. And it depends a bit on how many cards you want to run, what kind of workload you have, whether you’re fine with AMD or need CUDA…






  • From what I gather about current chatbots, they always sound very eloquent. They’re made that way with all the textbooks and Wikipedia articles that went in. But they’re not necessarily made to do therapy. ChatGPT etc. are more general purpose, meant for a wide variety of tasks. And the study talks about current LLMs. So it’d be like a layman with access to lots of medical books, who picks something that sounds very professional. But they wouldn’t do what an expert does, like follow an accepted and tedious procedure, do tests, examine, diagnose and whatever. An AI chatbot (most of the time) gives answers anyway. So it could be a dice roll followed by the “solution”. But it’s not clear whether it has any understanding of anything. And what makes me a bit wary is that AI tends to be full of stereotypes and bias, it’s often overly agreeable, and it praises me a lot for my math genius when I discuss computer-programming questions with it. Things like that would certainly feel nice if you’re looking for help, have a mental issue or are looking for reaffirmation. But I don’t think those are good “personality traits” for a therapist.



  • You can do it either way. In the end, you have to find some way of getting along when you work in a group of people. Some want this, some want that. Many also just like to chat. You’re not necessarily friends, but you see each other almost every day and spend a lot of time with these people, so socializing and the human side are part of it, simply because that’s how groups of people work. There are also people who don’t like that; they then have to somehow let the others know that they’d rather be left alone. Everyone then has to accept that, and you treat each other and each other’s different needs with respect and professional politeness. And that should settle the matter. For me, that respect and politeness is what being collegial is about.