• MystikIncarnate@lemmy.ca · 6 days ago

      ME did have some improvements over 98SE.

      It handled multiple monitors and had Internet connection sharing, two things 98SE couldn’t do.

      Most people don’t realize that ME wasn’t completely terrible. Same with Vista and 8.1. Windows 8 is unforgivable. The UI changes were simply a mistake. They made the OS really difficult to use unless you were fat-fingering a touchscreen all day… That’s the only scenario where the UI of W8 made any sense. For keyboard/mouse users, it was just unwieldy. The function of W8 was fine, but it was like putting a sports car motor into a minivan, then loading it up with lead plates and wondering why the handling sucks.

      I have… Thoughts about W11 too, but they’re more under-the-hood complaints. Some UI complaints, but mostly under-the-hood stuff that makes me go hmmm.

      • mindbleach@sh.itjust.works · 6 days ago

        Windows ME broke DOS support just to pretend it wasn’t 9x in a new hat. At the time - this was kind of a big deal. 98 was objectively better for any typical suburban setup.

        XP was also NT in a new hat, but they had it fake all the bugs that popular games expected. Plus the hat was nice. That Fisher-Price UI era was genuinely great, especially compared to modern ultra-flat nonsense. Windows 95 had instantly visible hierarchy in sixteen colors. Nowadays you can’t even tell what’s clickable without guesswork and memorization. Or at best you get a dozen indistinct monochrome icons.

        Windows 7 was the only time Microsoft nailed everything. Each XP service pack broke and fixed a random assortment of features. Everything from 8 onward is broken on purpose, first for the stupid tablet interface (when WinCE’s 9x UI worked just fucking fine on 3" screens with Super Nintendo resolutions), then to openly betray all trust and control. I would still be using Windows 7 today if modern malware wasn’t so scary. It’s not even about vulnerability - I must have reinstalled XP once a month, thanks to the sketchiest codec packs ever published. But since I can’t back up my whole hard drive on five dollars worth of DVD-Rs, the existence of ransomware pushed me back to Linux Mint.

        • MystikIncarnate@lemmy.ca · 3 days ago

          There are so many sins that have been committed in the name of progress.

          Most of the early Windows games won’t even run anymore. There’s been more lost than we can reasonably understand.

          My journey took me through Windows 2000 Pro (as opposed to Server) for a while there. I eventually moved over to XP, then Vista, then 7.

          I was one of the first people I knew of that ran Vista 64-bit. Most didn’t have the hardware for it, but the Core 2 Duo in my laptop was capable, so I jumped ship as soon as I could.

          I’m both unsurprised and disappointed that Itanium, Intel’s first attempt at a 64-bit CPU, failed. Starting fresh with an instruction set built from the ground up for modern applications was very ambitious and presented a fairly unique opportunity for the industry, but they just couldn’t move enough units. AMD saw the writing on the wall and created the 64-bit extension to the x86 instruction set instead.

          Oh well. Another opportunity lost.

          We have another one with the whole ARM processor race that Apple kicked off with the M1. I’m only sad that it went to ARM and not RISC-V. That would have been quite the change.

          Oh well. Maybe RISC-V will see another opportunity soon, since NASA commissioned a new generation of radiation-hardened processors for spaceflight computers, and they’re RISC-V… Who knows.

          I’m off on a tangent. Weeeee

            • mindbleach@sh.itjust.works · 3 days ago

            Fortunately a lot of early Windows shit runs in Wine, since the most stable Linux API is Win32. Anything older than that either works in 86Box or was broken to begin with. Okay, that’s not fair - WineVDM is necessary to bridge the gap for the dozen Windows 3.1 programs that matter. I am never allowed to write those off when one of them is Castle Of The Winds.

            What Intel learned with Itanium is that compatibility is god. They thought their big thing was good chip design and modern foundries. They were stupid. AMD understood that what kept Intel relevant was last year’s software running better this year. This was evident back in the 486 days, when AMD was kicking their ass in cycles per operation: their chips were so fast that network benchmarks finished in under a millisecond and crashed with division-by-zero errors.

            But software has won.

            The open architecture of RISC-V is feasible mostly because architecture doesn’t fucking matter. People are running Steam on their goddamn phones. It’s not because ARM is amazing; it’s because machine code is irrelevant. Intermediate formats can be forced upon even proprietary native programs. Macs get one last gasp of custom bullshit, with Metal just barely predating Vulkan, and if they try anything unique after that then it’s a deliberate waste of everyone’s time. We are entering an era where all software for major platforms should Just Work.

              • MystikIncarnate@lemmy.ca · 10 hours ago

              I get your point. Compatibility is definitely important (to the people who approve big purchases of tech), because app vendors are stupidly slow to react to a new architecture and port their software to it.

              IA64 was faster overall than x86, clock for clock… unless you had to run non-natively-compiled code. Its x86 emulation layer was not great. Anything compiled for the architecture ran faster and better, but the only software vendor who seemed to even attempt to release anything for it was Microsoft. There was a full IA64 build of Windows Server, and I believe they built Exchange for it too; possibly more, like SQL Server…

              But as far as I’m aware, they’re the only mainstream vendor who tried. So when someone wanted the IA64 server to run the QuickBooks server components and everything sucked harder than a $2 hooker on crack, the platform’s problems became immediately evident.

              It’s actually really good that we’re moving away from natively compiling software for specific CPU architectures. Yes, it may run slightly faster that way, but CPU speed is rarely the problem with modern computing. It stopped being the primary motivator for CPU purchasing around the 4th-gen Core i-series. I would maintain that a 4th-gen platform would run fine today, despite the obvious deficiencies, if we could cram it with enough RAM that’s fast enough to keep up, and a quick SSD. In fact, I have a handful of systems still running on that platform today, and they’re working great. With the extensions included in more modern CPUs, the need for raw compute speed is even lower: AES is now handled in hardware, and software AES is one of the major slowdowns for older CPUs on the modern web. I could go on, but I think I’m making my point quite well.
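              The AES-in-hardware point is easy to check yourself. A minimal sketch (Linux-only, and the parsing is an assumption about the usual /proc/cpuinfo layout) that tests whether the CPU advertises the AES-NI feature flag:

```python
def has_aes_ni(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo-style text
    lists the 'aes' CPU feature (i.e. AES-NI is available)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # Everything after the colon is a space-separated flag list.
            return "aes" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    # Linux-specific: this file doesn't exist on Windows or macOS.
    with open("/proc/cpuinfo") as f:
        print("AES-NI available:", has_aes_ni(f.read()))
```

On anything newer than roughly Westmere-era Intel or Bulldozer-era AMD this should report True, which is why TLS-heavy browsing feels so much worse on the CPUs that predate it.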

              Except for a handful of edge cases where every ounce of performance matters, having more compute power is basically irrelevant. A lot of those systems’ bottlenecks come from their interconnects. Fix that and you’d have a viable platform.

              If anyone needs proof of this, we need not look any further than the trend towards SBCs. A Raspberry Pi can run a lot of things far better than older hardware, at a fraction of the power consumption. With the trend toward architecture-agnostic software, it’s becoming more and more viable to use smaller, more power-efficient systems to do the same work.

              There will almost always be a place for big iron, but for most, smaller is better. It does what you need it to do, and that’s it.

                • mindbleach@sh.itjust.works · 2 hours ago

                Frustrating part A is that we have a universal binary format… and it’s HTML5. Frustrating part B is that nobody with a purchasing department wants to admit it. Slack ships with its own browser like you don’t have one. Modern web games can run on a sufficiently fancy Amiga, yet there have been Electron apps without a Linux version. That Amiga’s gonna suffer overhead and unstable performance, but I mean, so do native Unreal 5 games.

                The good ending from here would be a period of buck-wild development. RISC-V, MIPS, finally doing that guy’s Mill CPU. I was gonna say that neural networks might finally get high parallelism taken seriously, but no, optimized matrix algebra will stay relegated to specialty hardware. Somewhere between a GPU and an FPU. There are server chips with a hundred cores, and it still hasn’t revived Tilera. They’re just running more stuff, at normal speed.

                The few things that need to happen quickly instead of a lot will probably push FPGAs toward the mainstream. The finance-bro firehose of money barely splashed it when high-frequency trading was the hot new thing. Oh yeah: I guess some exchanges put in whole coils of fiber, hundreds of microseconds’ worth, to keep the market comprehensible. Anyway, big FPGAs at sane prices would be great for experimentation as the hardware market splinters into anything with an LLVM back-end. Also nice for anything you need to happen a zillion times a second on one AA battery, but neural networks will probably cover that as well, anywhere accuracy is negotiable.
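                The fiber-coil trick is easy to put numbers on. A back-of-the-envelope sketch (the ~61 km coil length is an assumption, roughly the size of IEX’s well-known speed bump; light in glass fiber travels at about two-thirds of c):

```python
C = 299_792_458          # speed of light in vacuum, m/s
FIBER_SPEED = C / 1.47   # ~2.0e8 m/s; 1.47 is a typical fiber refractive index

def fiber_delay_us(length_m: float) -> float:
    """One-way propagation delay, in microseconds, through a fiber
    of the given length."""
    return length_m / FIBER_SPEED * 1e6

if __name__ == "__main__":
    # A ~61 km coil adds roughly 300 microseconds each way -
    # an eternity for an FPGA, invisible to a human trader.
    print(f"{fiber_delay_us(61_000):.0f} us")
```

Which is the whole point: the delay is enormous relative to hardware trading speeds, while costing nothing but a spool of glass.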

                Sheer quantity of memory will be a deciding factor for a while. Phones and laptops put us in a weird place where 6 GB was considered plenty, for over a decade. DRAM sucks battery and SRAM is priced like it’s hand-etched by artisanal craftsmen. Now this AI summer has produced guides like ‘If you only have 96 GB of VRAM, set it to FP8. Peasant.’ Then again - with SSDs, maybe anything that’s not state is just cache. Occasionally your program hitches for an entire millisecond. Even a spinning disk makes a terabyte of swap dirrrt cheap. That and patience will run any damn thing.