I have an unused Dell OptiPlex 7010 I wanted to use as the base for an inference rig.

My idea was to get a 3060, a PCIe riser and a 500 W power supply just for the GPU. Mechanically speaking, I had the idea of making a backpack of sorts on the side panel to fit both the GPU and the extra power supply, since unfortunately it’s an SFF machine.

What’s making me wary of going through with it is the specs of the 7010 itself: it’s a DDR3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it has to offload some of the model into system RAM, everything is going to slow down to a crawl. (Using koboldcpp, if that matters.)
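
For context, the napkin math behind that worry (a rough sketch with assumed round numbers, not measurements): generation is mostly memory-bandwidth bound, so whatever slice of the model spills into DDR3 sets the speed ceiling.

```python
# Napkin math, not benchmarks: token generation is roughly memory-bandwidth bound,
# so tokens/s is capped at (usable bandwidth) / (bytes of weights read per token).
def tok_per_s_ceiling(model_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound assuming every weight is read once per generated token."""
    return bandwidth_gb_s / model_gb

MODEL_GB = 7.0       # a ~12B model at Q4-ish quantization
DDR3_GB_S = 25.0     # dual-channel DDR3-1600, roughly
GDDR6_GB_S = 360.0   # RTX 3060 spec bandwidth, roughly

print(f"Layers in system RAM: ~{tok_per_s_ceiling(MODEL_GB, DDR3_GB_S):.1f} tok/s ceiling")
print(f"Layers in VRAM:       ~{tok_per_s_ceiling(MODEL_GB, GDDR6_GB_S):.1f} tok/s ceiling")
```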

Do you think it’s even worth going through with?

Edit: I may have found a ThinkCentre that uses DDR4, which I can buy if I manage to sell the 7010. Though I still don’t know if it will be good enough.

  • brokenlcd@feddit.it (OP) · 2 days ago

    Right now I’m hopping between Nemo finetunes to see how they fare. I think I only ever used one 8B model from Llama 2; the rest has been all Llama 3 and maybe some Solar-based ones. Unfortunately I have yet to properly dig into the more technical side of LLMs due to time constraints.

    The process is VRAM-light (albeit time-intensive).

    So long as it’s not interactive, I can always run it at night and have it shut off the rig when it’s done. Power here is cheaper at night anyway :-)
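
    Something like this is what I mean, as a rough sketch (the batch command is a placeholder, and powering off needs the right permissions):

    ```python
    import subprocess

    # Placeholder batch job; swap in the real koboldcpp/llama.cpp invocation.
    job = subprocess.run(["python", "run_batch_generation.py"])

    # Power off only if the job finished cleanly (needs shutdown privileges, e.g. via sudoers).
    if job.returncode == 0:
        subprocess.run(["systemctl", "poweroff"])
    ```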

    Thanks for the info (and sorry for the late response, work + cramming for exams turned out to be more brutal than expected).

    • brucethemoose@lemmy.world · 2 days ago (edited)

      Yeah it’s basically impossible to keep up with new releases, heh.

      Anyway, Gemma 12B is really popular now, and TBH much smarter than Nemo. You can grab a special “QAT” Q4_0 from Google (that works in kobold.cpp, but fits much more context with base llama.cpp) with basically the same performance as unquantized; I’d highly recommend that.
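
      If it helps, grabbing it is a one-liner with huggingface_hub; the repo id and filename below are my guesses from memory, so double-check them on Google’s Hugging Face page (the repo is gated, so you may need to log in first):

      ```python
      from huggingface_hub import hf_hub_download

      # Assumed repo id / filename for the Gemma 3 12B QAT Q4_0 GGUF -- verify before use.
      path = hf_hub_download(
          repo_id="google/gemma-3-12b-it-qat-q4_0-gguf",
          filename="gemma-3-12b-it-q4_0.gguf",
      )
      print(path)
      ```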

      I’d also highly recommend trying 24B when you get the rig! It’s so much better than Nemo, even more than the size would suggest, so it should still win out even if you have to go down to 2.9 bpw, I’d wager.

      Qwen3 30B A3B is also popular now, and would work on your 3770 and kobold.cpp with no changes (though there are speed gains to be had with the right framework, namely ik_llama.cpp).

      One other random thing: some of kobold.cpp’s sampling presets are very funky with new models. With anything newer than Llama 2, I’d recommend resetting everything to off, then starting with something like 0.4 temp, 0.04 MinP, 0.02/1024 rep penalty and 0.4 DRY, rather than the crazy high-temp sampling the presets normally use.
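
      For reference, this is roughly how those settings map onto KoboldCpp’s generate endpoint. The field names are from memory, so check them against your install’s API docs, and I’m reading “0.02/1024” as a 1.02 penalty over a 1024-token window:

      ```python
      import json
      import urllib.request

      payload = {
          "prompt": "Once upon a time",
          "max_length": 200,
          "temperature": 0.4,
          "min_p": 0.04,
          "rep_pen": 1.02,        # assumed: 1.02 repetition penalty...
          "rep_pen_range": 1024,  # ...applied over a 1024-token window
          "dry_multiplier": 0.4,
      }

      req = urllib.request.Request(
          "http://localhost:5001/api/v1/generate",  # KoboldCpp's default port
          data=json.dumps(payload).encode(),
          headers={"Content-Type": "application/json"},
      )
      with urllib.request.urlopen(req) as resp:
          print(json.load(resp)["results"][0]["text"])
      ```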

      I can host a specific model/quantization on the kobold.cpp API for you to try if you want, to save tweaking time. Just ask (or PM me, as replies sometimes don’t send notifications).

      Good luck with exams! No worries about response times, /c/localllama is a slow, relaxed community.

      • brokenlcd@feddit.it (OP) · 1 day ago

        Thanks for the advice. I’ll see how much I can squeeze out of the new rig, especially with EXL models and different frameworks.

        Gemma 12B is really popular now

        I was already eyeing it, but I remember the context being memory-greedy because it’s a multimodal model, while Qwen3 was just way beyond the Steam Deck’s capabilities. Now it’s just a matter of assembling the rig and getting down to tinkering.

        Thanks again for your time and availability :-)

        • brucethemoose@lemmy.world · 1 day ago (edited)

          But i remember the context being memory greedy due to being a multimodal

          No, it’s super efficient! I can run 27B’s full 128K on my 3090, easy.

          But you have to use the base llama.cpp server. kobold.cpp doesn’t seem to support the sliding window attention (last I checked like two weeks ago), so even a small context takes up a ton there.

          And the image input part is optional. Delete the mmproj file, and it won’t load.
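
          Concretely, launching the stock llama.cpp server looks something like this (the model path and context size are placeholders; just don’t pass an mmproj and it stays text-only):

          ```python
          import subprocess

          subprocess.run([
              "./llama-server",
              "-m", "gemma-3-12b-it-q4_0.gguf",  # placeholder model path
              "-c", "32768",                     # context length; scale to what fits
              "-ngl", "99",                      # offload all layers to the GPU
              "--port", "8080",
          ])
          ```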

          There are all sorts of engine quirks like this, heh, it really is impossible to keep up with.

          • brokenlcd@feddit.it (OP) · 23 hours ago

            Oh OK, that changes a lot of things then :-). I think I’ll finally have to graduate to something a little less guided than kobold.cpp. Time to read llama.cpp’s and exllama’s docs, I guess.

            Thanks for the tips.

            • brucethemoose@lemmy.world · 22 hours ago (edited)

              The LLM “engine” is mostly detached from the UI.

              kobold.cpp’s UI is actually pretty great, and you can still point it at TabbyAPI (what you run for exllama) or the llama.cpp server.
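
              That separation is easy to see from the client side: TabbyAPI, the llama.cpp server and kobold.cpp all expose (roughly) OpenAI-compatible endpoints, so any frontend, or a few lines of Python, can talk to whichever engine happens to be running. The port and model name below are placeholders:

              ```python
              from openai import OpenAI  # pip install openai

              client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
              reply = client.chat.completions.create(
                  model="local",  # most local servers ignore or loosely match this field
                  messages=[{"role": "user", "content": "Say hello in five words."}],
              )
              print(reply.choices[0].message.content)
              ```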

              I personally love this for writing and testing though:

              https://github.com/lmg-anon/mikupad

              And Open WebUI for more general usage.

              There’s a big backlog of poorly documented knowledge too, heh; just ask if you’re wondering how to cram a specific model in. But the gist of the optimal engine rules is:

              • For MoE models (like Qwen3 30B), try ik_llama.cpp, which is a fork specifically optimized for big MoEs partially offloaded to CPU.

              • For Gemma 3 specifically, use the regular llama.cpp server since it seems to be the only thing supporting the sliding window attention (which makes long context easy).

              • For pretty much anything else, if it’s supported by exllamav3 and you have a 3060, it’s optimal to use that (via its server, which is called TabbyAPI). And you can use its quantized cache (try Q6/5) to easily get long context.
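
              And the napkin math for what fits in 12 GB (assumed round numbers, just to get the ballpark; the KV cache comes on top of the weights):

              ```python
              def weight_gb(params_billion: float, bits_per_weight: float) -> float:
                  """Approximate quantized weight size in GB (ignores small overheads)."""
                  return params_billion * 1e9 * bits_per_weight / 8 / 1e9

              print(f"24B @ 2.9 bpw ~= {weight_gb(24, 2.9):.1f} GB")  # ~8.7 GB, the rest goes to context
              print(f"12B @ 4.5 bpw ~= {weight_gb(12, 4.5):.1f} GB")  # ~6.8 GB
              ```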