• floofloof@lemmy.ca

    Intel has not halted sales or clawed back any inventory. It will not do a recall, period.

    Buy AMD. Got it!

    • grue@lemmy.world

      I’ve been buying AMD for – holy shit – 25 years now, and have never once regretted it. I don’t consider myself a fanboi; I just (a) prefer having the best performance-per-dollar rather than best performance outright, and (b) like rooting for the underdog.

      But if Intel keeps fucking up like this, I might have to switch on grounds of (b)!

      (Realistically I’d be more likely to switch to ARM or even RISC-V, though. Even if Intel became an underdog, my memory of their anti-competitive and anti-consumer behavior is long.)

      • SoleInvictus@lemmy.blahaj.zone

        Same here. I hate Intel so much, I won’t even work there, despite it being my current industry and having been headhunted by their recruiter. It was so satisfying to tell them to go pound sand.

        • Gigasser@lemmy.world

          It’s good to feel proud of where you work. I’m not too sure whether Intel treats their workers well, though. Do they?

          • aard@kyu.de

            I did not sign with them after I had some issues with the contract they provided, and with the resulting interactions with my future manager. From that encounter, I’d say the company culture is less than ideal, at least for someone from Europe.

      • Damage@slrpnk.net

        I’ve been on AMD and ATi since the Athlon 64 days on the desktop.

        Laptops are always Intel, simply because that’s what I can find, even though I scour the market extensively every time.

        • Krauerking@lemy.lol

          Honestly, I was and am an AMD fan, but if you went back a few years you would not have wanted an AMD laptop. I had one and it was truly awful.

          Battery issues. Low processing power. App crashes and video playback issues. And this was on a more expensive one with a dedicated GPU…

          And then Ryzen came out. You can get AMD laptops now, and I mean that both in the sense that they exist and in the sense that they’re actually nice. (I have one.)

          But in 2013 it was Intel or you were better off with nothing.

          • orangeboats@lemmy.world

            Indeed, the Ryzen laptops are very nice! I have one (the 4800H) and it lasts ~8 hours on battery, far more than what I expected from laptops of this performance level. My last laptop barely achieved 4 hours of battery life.

            I had stability issues in the first year but after one of the BIOS updates it has been smooth as butter.

          • Damage@slrpnk.net

            Yeah I never really considered them before Ryzen, but even afterwards, it’s been very difficult to find one with the specs I want.

      • Rai@lemmy.dbzer0.com

        Sorry, but after the amazing Athlon X2, the Core and Core 2 (then i-series) lines fuckin’ wrecked AMD for YEARS. Ryzen took the belt back, but AMD was absolutely wrecked through the Core and i-series era.

        Source: computer building company and also history

        tl;dr: AMD sucked ass for value and performance between Core 2 and Ryzen, then became amazing again after Ryzen was released.

        • grue@lemmy.world

          AMD “bulldozer” architecture CPUs were indeed pretty bad compared to Intel Core 2, but they were also really cheap.

        • CancerMancer@sh.itjust.works

          I ran an AMD Phenom II X4 955 Black Edition for ~5 years, then gave it to a friend who ran it for another 5 years. We overclocked the hell out of it up to 4 GHz, and there is no way you were getting gaming performance that good from Intel dollar-for-dollar, so no, AMD did not suck from Core 2 on. You need to shift that timeframe up to Bulldozer, and even then Bulldozer and the other FX CPUs ended up aging better than their Intel counterparts, and at their adjusted prices were at least reasonable products.

          Doesn’t change the fact AMD lied about Bulldozer, nor does it change Intel using its market leader position to release single-digit performance increases for a decade and strip everything i5 and lower down to artificially make i7 more valuable. Funny how easy it is to forget how shit it was to be a PC gamer then after two crypto booms.

      • Final Remix@lemmy.world

        I’d had nothing but issues with some computers, laptops, etc. Once I discovered the common factor was Intel, I haven’t had a single problem with any of my devices since. AMD all the way for CPUs.

      • vxx@lemmy.world

        I hate the way Intel is going, but I’ve been using Intel chips for over 30 years and never had an issue.

        So your statement is kind of pointless: it’s such a small data set that it’s irrelevant and nothing to draw any conclusion from.

      • nek0d3r@lemmy.world

        Genuinely, I’ve also been an AMD buyer since I started building 12 years ago. I started out as a fanboy but mellowed out over the years. I know the old FX chips were garbage, but they’re what I started on, and I genuinely enjoy the 4 gens of Intel since Ivy Bridge, but between the affordability and being able to upgrade without changing the motherboard every generation, I’ve just been using Ryzen all these years.

      • Dudewitbow@lemmy.zip

        ARM is very well primed to take a lot of the server market from Intel. Amazon is already very committed to making their Graviton ARM CPU their main CPU, and they alone own a huge share of the server market.

        For consumers, ARM adoption is fully reliant on the respective operating systems and compatibility getting ironed out.

        • icydefiance@lemm.ee

          Yeah, I manage the infrastructure for almost 150 WordPress sites, and I moved them all to ARM servers a while ago, because they’re 10% or 20% cheaper on AWS.

          Websites are rarely bottlenecked by the CPU, so that power efficiency is very significant.
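
          For anyone curious, the switch itself is mostly just picking Graviton instance types; here’s a rough boto3 sketch (the AMI ID is a placeholder, and t4g.small is just an example Graviton type - you’d need an arm64 image of your own stack):

            import boto3

            ec2 = boto3.client("ec2", region_name="us-east-1")

            # Confirm which architectures each instance family actually supports.
            types = ec2.describe_instance_types(InstanceTypes=["t3.small", "t4g.small"])
            for t in types["InstanceTypes"]:
                print(t["InstanceType"], t["ProcessorInfo"]["SupportedArchitectures"])
            # t3.small ['x86_64'], t4g.small ['arm64']

            # Launching the ARM instance looks the same as an x86 launch; only the
            # AMI (built for arm64) and the instance type change. Note this call
            # bills a real instance if the AMI ID is valid.
            ec2.run_instances(
                ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
                InstanceType="t4g.small",
                MinCount=1,
                MaxCount=1,
            )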

          • tal@lemmy.today

            I really think that most people who think that they want ARM machines are wrong, at least given the state of things in 2024. Like, maybe you use Linux…but do you want to run x86 Windows binary-only games? Even if you can get 'em running, you’ve lost the power efficiency. What’s hardware support like? Do you want to be able to buy other components? If you like stuff like that Framework laptop, which seems popular on here, an SoC is heading in the opposite direction of that – an all-in-one, non-expandable manufacturer-specified system.

            But yours is a legit application. A non-CPU-constrained datacenter application running open-source software compiled against ARM, where someone else has validated that the hardware is all good for the OS.

            I would not go ARM for a desktop or laptop as things stand, though.

            • batshit@lemmings.world

              If you didn’t want to game on your laptop, would an ARM device not be better for office work? Considering they’re quiet and their battery lasts forever.

              • frezik@midwest.social

                ARM chips aren’t better at power efficiency compared to x86 above 10 or 15W or so. Apple is getting a lot out of them because of TSMC 3nm; even the upcoming AMD 9000 series will only be on TSMC 4nm.

                ARM is great for having more than one competent company in the market, though.

                • batshit@lemmings.world

                  ARM chips aren’t better at power efficiency compared to x86 above 10 or 15W or so.

                  Do you have a source for that? It seems a bit hard to believe.

              • Nighed@sffa.community

                As long as the apps all work. So much stuff is browser-based now, but something will always turn up that doesn’t work - mandatory timesheet software, a bespoke tool, etc.

        • sugar_in_your_tea@sh.itjust.works

          Linux works great on ARM, I just want something similar to most mini-ITX boards (4x SATA, 2x mini-PCIe, and RAM slots), and I’ll convert my DIY NAS to ARM. But there just isn’t anything between RAM-limited SBCs and datacenter ARM boards.

          • Dudewitbow@lemmy.zip

            ARM is a mixed bag. IIRC, at the moment the GPU on the Snapdragon X Elite is disabled on Linux, and consumer support is reliant on how well the hardware manufacturer supports it if it’s a closed-source driver. In the case of Qualcomm, the history doesn’t look great.

            • sugar_in_your_tea@sh.itjust.works

              Eh, if they give me a PCIe slot, I’m happy to use that in the meantime. My current NAS uses an old NVIDIA GPU, so I’d just move that over.

              • Zangoose@lemmy.world

                Apparently (from another comment on a thread about ARM from a few weeks ago) consumer GPU BIOSes contain some x86 instructions that get run on the CPU, so getting full support for ARM isn’t as simple as swapping the cards over to a new motherboard. There are ways to hack around it (some people got AMD GPUs booting on a Raspberry Pi 5 using its PCIe lanes with a bunch of adapters) but it is pretty unreliable.

                • sugar_in_your_tea@sh.itjust.works

                  Yeah, there are some software issues that need to be resolved, but the bigger issue AFAIK is having the hardware to handle it. The few ARM devices with a PCIe slot often don’t fully implement the spec, such as power delivery. Because of that, driver work just doesn’t happen, because nobody can realistically use it.

                  If they provide a proper PCIe slot (8-16 lanes, on-spec power delivery, etc), getting the drivers updated should be relatively easy (months, not years).

          • Justin@lemmy.jlh.name

            Datacenter CPUs are actually really good for NASes considering the explosion of NVMe storage. Most consumer CPUs are limited to just 5 M.2 drives and a 10 Gbit NIC, but a server mobo will open up for 10+ drives. Something cheap like a first-gen Epyc motherboard gives you a ton of flexibility and speed if you’re ok with the idle power consumption.
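
            Rough lane math shows why (the lane counts here are ballpark assumptions, not exact specs):

              # Rough PCIe lane math behind the "5 NVMe drives" limit. Assumed lane
              # counts: ~24 usable on a typical desktop CPU, 128 on first-gen Epyc.
              LANES_PER_NVME = 4
              LANES_FOR_10G_NIC = 4

              def max_nvme_drives(total_lanes: int) -> int:
                  return (total_lanes - LANES_FOR_10G_NIC) // LANES_PER_NVME

              for name, lanes in {"consumer desktop CPU": 24, "1st-gen Epyc": 128}.items():
                  print(f"{name}: ~{max_nvme_drives(lanes)} NVMe drives + 10G NIC")
              # consumer desktop CPU: ~5 NVMe drives + 10G NIC
              # 1st-gen Epyc: ~31 NVMe drives + 10G NIC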

            • sugar_in_your_tea@sh.itjust.works

              if you’re ok with the idle power consumption

              I’m kind of not. I don’t need a ton of drives, and I certainly don’t need them to be NVMe. I just want 2-4 SATA drives for storage and 1-2 NVMe drives for boot, and enough RAM to run a bunch of services w/o having to worry about swapping. Right now my Ryzen 1700 is doing a fine job, but I’d be willing to sacrifice some performance for energy savings.

            • sugar_in_your_tea@sh.itjust.works

              Eh, it looks like ARM laptops are coming along. I give it a year or so for the process to be smooth.

              For servers, AWS Graviton seems to be pretty solid. I honestly don’t need top performance and could probably get away with a Quartz64 SBC, I just don’t want to worry about RAM and would really like 16GB. I just need to serve a dozen or so Docker containers with really low load, and I want to do that with as little power as I can get away with for minimum noise. It doesn’t need to transcode or anything.

              • Justin@lemmy.jlh.name

                ARM laptops don’t support ACPI, which makes them really hard for Linux to support. Having to go back two years to find a laptop with wifi and gpu support on Linux isn’t practical. If Qualcomm and Apple officially supported Linux like Intel and AMD do, it would be a different story. As it is right now, even Android phones are forced to use closed-source blobs just to boot.

                Those numbers from Amazon are misleading. Linus Torvalds actually builds on an Ampere machine, but they don’t actually do that well in benchmarks.

                https://www.phoronix.com/review/graviton4-96-core

                • sugar_in_your_tea@sh.itjust.works

                  AWS’ benchmark is about lambda functions, not compile workloads, which are quite different beasts. Lambdas are about running a lot of small (so task switching), independent scripts, whereas compiling is about running heavy CPU workloads (so feeding caches). Server workloads tend to be more of the former than the latter.

                  That said, I’m far less interested in raw performance and way more interested in power efficiency and idle and low utilization. I’m very rarely going to be pushing any kind of meaningful load on it, and when I do, I don’t mind if it takes a little longer, provided I’m saving a lot of electricity in the meantime.

              • CancerMancer@sh.itjust.works

                Man so many SBCs come so close to what you’re looking for but no one has that level of I/O. I was just looking at the ZimaBlade / ZimaBoard and they don’t quite get there either: 2 x SATA and a PCIe 2.0 x4. ZimaBlade has Thunderbolt 4, maybe you can squeeze a few more drives in there with a separate power supply? Seems mildly annoying but on the other hand, their SBCs only draw like 10 watts.

                Not sure what your application is but if you’re open to clustering them that could be an option.

                • sugar_in_your_tea@sh.itjust.works

                  Here’s my actual requirements:

                  • 2 boot drives in mirror - m.2 or SATA is fine
                  • 4 NAS HDD drives - will be SATA, but could use PCIe expansion; currently have 2 8TB 3.5" HDDs, want flexibility to add 2x more
                  • minimum CPU performance - was fine on my Phenom II x4, so not a high bar, but the Phenom II x4 has better single core than ZimaBlade

                  Services:

                  • I/O heavy - Jellyfin (no live transcoding), Collabora (and NextCloud/ownCloud), samba, etc
                  • CPU heavy - CI/CD for Rust projects (relatively infrequent and not a hard req), gaming servers (Minecraft for now), speech processing (maybe? Looking to build Alexa alt)
                  • others - Actual Budget, Vaultwarden, Home Assistant

                  The ZimaBlade is probably good enough (would need to figure out SATA power), I’ll have to look at some performance numbers. I’m a little worried since it seems to be worse than my old Phenom II x4, which was the old CPU for this machine. I’m currently using my old Ryzen 1700, but I’d be fine downgrading a bit if it meant significantly lower power usage. I’d really like to put this under my bed, and it needs to be very quiet to do that.

            • conciselyverbose@sh.itjust.works

              Servers being slow is usually fine. They’re already at way lower clocks than consumer chips because almost all that matters is power efficiency.

      • mox@lemmy.sdf.org (OP)

        RISC-V isn’t there yet, but it’s moving in the right direction. A completely open architecture is something many of us have wanted for ages. It’s worth keeping an eye on.

      • sugar_in_your_tea@sh.itjust.works

        If there were decent homelab ARM CPUs, I’d be all over that. But everything is either memory limited (e.g. max 8GB) or datacenter grade (so $$$$). I want something like a Snapdragon with 4x SATA, 2x m.2, 2+ USB-C, and support for 16GB+ RAM in a mini-ITX form factor. Give it to me for $200-400, and I’ll buy it if it can beat my current NAS in power efficiency (not hard, it’s a Ryzen 1700).

        • chingadera@lemmy.world

          I hope so. I accidentally advised a client to snatch up a Snapdragon Surface (because they had to have a dog shit Surface) and I hadn’t realized that a lot of shit doesn’t quite work yet. Most of it does, which is awesome, but it needs to pick up the pace.

        • barsoap@lemm.ee

          Depends on the desktop. I have a NanoPC T4, originally as a set top box (that’s what the RK3399 was designed for, has a beast of a VPU) now on light server and wlan AP duty, and it’s plenty fast enough for a browser and office. Provided you give it an SSD, that is.

          Speaking of desktops, though, the graphics driver situation is atrocious. There’s been movement since I last had a monitor hooked up to it, but let’s just say the Linux blob that came with it could do GLES2, while the Android driver does Vulkan. Presumably because ARM wants Rockchip to pay per fucking feature per OS for Mali drivers.

          Oh, the VPU that I mentioned? As said, a beast: decodes 4K H.264 at 60Hz, very good driver support, well-documented instruction set, mpv supports it out of the box. But because the Mali drivers are shit you only get an overlay - no window system integration, because it can’t paint to GLES2 textures. Throwback to the 90s.

          Sidenote some madlads got a dedicated GPU running on the thing. M.2 to PCIe adapter, and presumably a lot of duct tape code.

          • cmnybo@discuss.tchncs.de

            GPU support is a real mess. Those ARM SoCs are intended for embedded systems, not PCs. None of the manufacturers want to release an open-source driver, and the blobs typically don’t work with a recent kernel.

            For ARM on the desktop, I would want an ATX motherboard with a socketed 3+ GHz CPU with 8-16 cores, socketed RAM and a PCIe slot for a desktop GPU.

            Almost all Linux software will run natively on ARM if you have a working GPU. Getting windows games to run on ARM with decent performance would probably be difficult. It would probably need a CPU that’s been optimized for emulating x86 like what Apple did with theirs.

      • melroy@kbin.melroy.org

        Hmm, not really. It can’t beat AMD. Only in power consumption, sure, but not in real performance.

        • frezik@midwest.social

          ARM is only more power efficient below 10 to 15 W or so. Above that, it doesn’t matter much between ARM and x86.

          The real benefit is somewhat abstract. Only two companies can make x86, and only one of them knows how to do it well. ARM (and RISC-V) opens up the market to more players.

        • schizo@forum.uncomfortable.business

          Kinda? It really should be treated as a 1st generation product for Windows (because the previous versions were ignored by, well, everyone because they were utterly worthless), and should be avoided for quite a while if gaming is remotely your goal. It’s probably the future, but the future is later… assuming, of course, that the next gen x86 CPUs don’t both get faster and lower power (which they are) and thus eliminate the entire benefit of ARM.

          And, if you DON’T use Windows, you’re looking at a couple of months to a year to get all the drivers into the Linux kernel, then the kernel with the drivers into mainstream distributions, assuming Qualcomm doesn’t do their usual thing of just abandoning support six months in because they want you to buy the next release of their chips instead.

            • schizo@forum.uncomfortable.business

              I’m having the same dream, but I don’t trust Qualcomm to not fuck everyone. I mean it’d be nice if they don’t but they’ve certainly got the history of being the scorpion and I’m going to let someone else be the frog until they’ve proven they’re not going to sting me mid-river.

        • frezik@midwest.social

          Yes. The problem is, this is the only way our system of justice allows for holding companies accountable. They still pay through the nose on their end.

          However, in this case, there are a lot of big companies that would also be part of the class. Some from OEM desktop systems in offices, and also from some servers. The 13900K/14900K has a lot of cores, and there are quite a few server motherboards that accept it. It was often a good choice over going Xeon or EPYC.

          Those companies are now looking over at the 7950X, noticing it’s faster, uses less power, and doesn’t crash.

          They’re not going to be satisfied with a $10 check.

      • lath@lemmy.world

        Yet they do it all the time when a higher-spec CPU is fabricated with physical defects and is then presented as a lower-spec variant.

        • tal@lemmy.today

          Nobody objects to binning, because people know what they’re getting and the part functions within the specified parameters.

  • MudMan@fedia.io

    I have a 13th-gen chip; it had some reproducible crashing issues that have so far subsided after downclocking it. It is in the window they’ve shared for the oxidation issue. At this point there’s no reliable way of knowing to what degree I’m affected, by what type of issue, or whether I should wait for the upcoming patch or reach out to see if they’ll replace it.

    I am not happy about it.

    Obviously next time I’d go AMD, just on principle, but this isn’t the 90s anymore. I could do a drop-in replacement to another Intel chip, but switching platforms is a very expensive move these days. This isn’t just a bad CPU issue; this could lead to having to swap out two multi-hundred-dollar components, at least on what should have been a solidly future-proof setup for at least five or six years.

    I am VERY not happy about it.

    • henfredemars@infosec.pub

      I’m angry on your behalf. If you have to downclock the part so that it works, then you’ve been scammed. It’s fraud to sell a part as a higher performing part when it can’t deliver that performance.

      • MudMan@fedia.io

        So here’s the thing about that: the real performance I lose is… not negligible, but somewhere between 0 and 10% in most scenarios, and I went pretty hard keeping the power limits low. Once I set it up this way, realizing just how much power and heat I’m saving for the last few drops of performance made me angrier than having to do this. The dumb performance race with all the built-in overclocking has led to these insanely power-hungry parts that are super sensitive to small defects and require super aggressive cooling solutions.

        I would have been fine with a part rated for 150W instead of 250 that worked fine with an air cooler. I could have chosen whether to push it. But instead here we are, with extremely expensive motherboards massaging those electrons into a firehose automatically and turning my computer into a space heater for the sake of bragging about shaving half a millisecond per frame in Counter-Strike. It’s absurd.
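
        (If you’re on Linux, you can fake that 150 W part yourself through the kernel’s RAPL powercap interface; below is a minimal sketch, assuming the intel-rapl sysfs nodes exist on your platform and it runs as root. The 150 W figure is just the example from above, not a recommendation.)

          import pathlib

          # Minimal sketch: cap the CPU package power limit via Linux's intel-rapl
          # powercap interface. Assumes /sys/class/powercap/intel-rapl:0 is the
          # package domain and that this runs as root. Values are in microwatts.
          PKG = pathlib.Path("/sys/class/powercap/intel-rapl:0")
          TARGET_WATTS = 150  # hypothetical "150 W part"

          current = int((PKG / "constraint_0_power_limit_uw").read_text())
          print(f"long-term limit: {current / 1_000_000:.0f} W -> {TARGET_WATTS} W")

          (PKG / "constraint_0_power_limit_uw").write_text(str(TARGET_WATTS * 1_000_000))
          # This resets on reboot; BIOS power-limit settings are the persistent route.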

        None of which changes that I got sold a bum part, Intel is fairly obviously trying to weasel out of the obviously needed recall and warranty extension and I’m suddenly on the hook for close to a grand in superfluous hardware next time I want to upgrade because my futureproof parts are apparently made of rust and happy thoughts.

        • tal@lemmy.today

          150W instead of 250

          Yeah, when I saw that the CPU could pull 250W, I initially thought that it was a misprint in the spec sheet. That is kind of a nutty number. I have a space heater whose low setting is 400W, which is getting into that range, and you can get very low-power space heaters that consume less power than the TDP on that processor. That’s an awful lot of heat to be putting into an incredibly small, fragile part.

          That being said, I don’t believe that Intel intentionally passed the initial QA for the 13th generation thinking that there were problems. They probably thought there was a healthy safety margin. You can certainly blame them for insufficient QA or for how they handled the problem as the issue was ongoing, though.

          And you could also have said “this is absurd” at many times in the past when other performance barriers came up. I remember – a long time ago now – when the idea of processors that needed active cooling or they would destroy themselves seemed rather alarming and fragile. I mean, fans do fail. Processors capable of at least shutting down on overheat to avoid destroying themselves, or later throttling themselves, didn’t come along until much later. But if we’d stopped with passive heatsink cooling, we’d be using far slower systems (though probably a lot quieter!)

          • MudMan@fedia.io

            You’re not wrong, but “we’ve been winging it for decades” is not necessarily a good defense here.

            That said, I do think they did look at their performance numbers and made a conscious choice to lean into feeding these more power and running them hotter, though. Whether the impact would be lower with more conservative power specs is debatable, but as you say there are other reasons why trying to fake generational leaps by making CPUs capable of fusing helium is not a great idea.

          • MudMan@fedia.io

            Oh, I absolutely could have. It would lose a couple of cores, but the 13th gen is pretty linear, it would have performed more or less the same.

            Thing is, I couldn’t have known that then, could I? Chip reviews aren’t aiming at normalizing for temps, everybody is reviewing for moar pahwah. So is there a way for me to know that gimping this chip to run silently basically gets me a slightly overclocked 13600K? Not really. Do I know, even at this point, that getting a 13600K wouldn’t deliver the same performance but require my fans to be back to sounding noticeable? I don’t know that.

            Because the actual performance of these is not to a reliable spec other than “run flat out and see how much heat your thermal solution can soak” there is no good way to evaluate these for applications that aren’t just that without buying them and checking. Maybe I could have saved a hundred bucks. Maybe not. Who knows?

            This is less of a problem if you buy laptops, but for casual DIY I frankly find the current status quo absurd.

    • Brickfrog@lemmy.dbzer0.com

      I have a 13 series chip, it had some reproducible crashing issues that so far have subsided by downclocking it.

      From the article:

      the company confirmed a patch is coming in mid-August that should address the “root cause” of exposure to elevated voltage. But if your 13th or 14th Gen Intel Core processor is already crashing, that patch apparently won’t fix it.

      Citing unnamed sources, Tom’s Hardware reports that any degradation of the processor is irreversible, and an Intel spokesperson did not deny that when we asked.

      If your CPU is already crashing then that’s it, game over. The upcoming patch cannot fix it. You’ve got to figure out if you can do a warranty replacement or continue to live with workarounds like you’re doing now.
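
      (If you’re not sure whether yours has started degrading, a crude load test can help before you start an RMA. Here’s a rough sketch using nothing beyond the Python standard library; it just hammers every core with checksummed compression and flags any mismatch. It’s a sanity check, not a substitute for Intel’s diagnostic tool or a proper stress suite.)

        import hashlib
        import multiprocessing as mp
        import os
        import zlib

        # Crude stability probe: each worker repeatedly compresses the same block
        # and verifies the round trip. On a healthy CPU the digests always match;
        # silent mismatches or crashes under this load are a red flag.
        BLOCK = os.urandom(1 << 20)
        EXPECTED = hashlib.sha256(BLOCK).hexdigest()

        def worker(iterations: int) -> int:
            errors = 0
            for _ in range(iterations):
                packed = zlib.compress(BLOCK, 9)
                if hashlib.sha256(zlib.decompress(packed)).hexdigest() != EXPECTED:
                    errors += 1
            return errors

        if __name__ == "__main__":
            with mp.Pool(os.cpu_count()) as pool:
                results = pool.map(worker, [200] * os.cpu_count())
            print("mismatches per worker:", results)  # anything non-zero is bad news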

      Their retail boxed CPUs usually have a 3(?) year warranty, so for a 13th-gen CPU you may be midway through or at the tail end of that warranty period. If it’s OEM, etc., it could be a 1-year warranty, i.e. Intel isn’t doing anything about it unless a class action suit forces them :/

      The whole situation sucks and honestly seems a bit crazy that Intel hasn’t already issued a recall or dealt with this earlier.

      • 1rre@discuss.tchncs.de

        If you’re in the UK or, I expect, the EU, I imagine that if it’s due to oxidation you can get it replaced even on an expired warranty, as it’s a defect which was known to either you or Intel before the warranty expired, and a manufacturing defect rather than breaking from use, so Intel is pretty much in a corner about having sold you faulty shit.

      • MudMan@fedia.io

        The article is… not wrong, but oversimplifying. There seem to be multiple faults at play here: some chips would continue to degrade, others would leave you unable to recover some performance threshold but could be prevented from further damage, and others may be fixable. Yes, degradation of the chip may be irreversible if it’s due to the oxidation problem or due to the incorrect voltages having caused damage, but presumably in some cases the chip would continue to work stably and not degenerate further with the microcode fixes.

        But yes, agreed, the situation sucks and Intel should be out there disclosing a range of affected chips by at least the confirmed physical defect and allowing a streamlined recall of affected devices, not saying “start an RMA process and we’ll look into it”.

    • tal@lemmy.today

      They do say that you can contact Intel customer support if you have an affected CPU, and that they’re replacing CPUs that have been actually damaged. I don’t know – and Intel may not know – what information or proof you need, but my guess is that it’s good odds that you can get a replacement CPU. So there probably is some level of recourse.

      Now, obviously that’s still a bad situation. You’re out the time that you didn’t have a stable system, out the effort you put into diagnosing it, maybe have losses from system downtime (like, I took an out-of-state trip expecting to be able to access my system remotely and had it hang due to the CPU damage at one point), maybe out data you lost from corruption, maybe out money you spent trying to fix the problem (like, on other parts).

      But I’d guess that specifically for the CPU, if it’s clearly damaged, you have good odds of being able to at least get a non-damaged replacement CPU at some point without needing to buy it. It may not perform as well as the generation had initially been benchmarked at. But it should be stable.

        • tal@lemmy.today

          Yeah. They can’t replace it with their upcoming 15th gen, because that uses a new, incompatible socket. They’d apparently been handing replacement CPUs out to large customers to replace failed processors, according to one of Steve Burke’s past videos on the subject.

          On a motherboard that has the microcode update which they’re theoretically supposed to get out in a month or so, the processors should at least refrain from destroying themselves, though I expect that they’ll probably run with some degree of degraded performance from the update.

          Just guessing, not anything Burke said, but if there’s enough demand for replacement CPUs, might also be possible that they’ll do another 14th gen production run, maybe fixing the oxidation issue this time, so that the processors could work as intended.

      • MudMan@fedia.io

        “Clearly damaged” is an interesting problem. The CPU would crash 100% of the time on the default settings for the motherboard, but if you remember, they issued a patch already.

        I patched. And guess what, with the new Intel Defaults it doesn’t crash anymore. But it suddenly runs very hot instead. Like, weird hot. On a liquid cooling system it’s thermal throttling when before it wouldn’t come even close. Won’t crash, though.

        So is it human error? Did I incorrectly mount my cooling? I’d say probably not, considering it ran cool enough pre-patch until it became unstable and it runs cool enough now with a manual downclock. But is that enough for Intel to issue a replacement if the system isn’t unstable? More importantly, do I want to have that fight with them now or to wait and see if their upcoming patch, which allegedly will fix whatever incorrect voltage requests the CPU is making, fixes the overheating issue? Because I work on this thing, I can’t just chuck it in a box, send it to Intel and wait. I need to be up and running immediately.
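
        (One low-effort sanity check for the throttling-vs-mounting question is to log package temps and clocks while something heavy runs; a small sketch, assuming Linux with psutil installed and a "coretemp" sensor exposed. Sustained max temps with clocks sagging well below base clock points at throttling.)

          import time
          import psutil

          # Log package temperature and CPU clock once a second while a heavy
          # workload runs in another window.
          for _ in range(60):
              temps = psutil.sensors_temperatures().get("coretemp", [])
              pkg = next((t.current for t in temps if "Package" in (t.label or "")), None)
              freq = psutil.cpu_freq()
              print(f"temp={pkg} C  freq={freq.current:.0f} MHz" if freq else f"temp={pkg} C")
              time.sleep(1)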

        So yeah, it sucks either way, but it would suck a lot less if Intel was willing to flag a range of CPUs as being eligible for a recall.

        As I see it right now, the order of operations is to wait for the upcoming patch, retest the default settings after the patch and if the behavior seems incorrect contact Intel for a replacement. I just wish they would make it clearer what that process is going to be and who is eligible for one.

    • blackwateropeth@lemmy.world

      Went 13th -> 14th very early in both of their launch cycles because of chronic crashing. After swapping the mobo, RAM and SSDs, I finally swapped to AMD and my build from late 2022 is FINALLY stable. Wendell’s video was the catalyst to jump ship. I thought I was going crazy, but yeah… it was Intel.

      • MudMan@fedia.io

        Whoa, that’s even worse. It’s not just the uncertainty of knowing whether Intel will replace your hardware or the cost of jumping ship next time. Intel straight up owes you money. That sucks.

        • blackwateropeth@lemmy.world

          Yea, my crashes were either watchdog BSODs or nvlddmkm (Nvidia) ones, so diagnosing the issue was super difficult; the CPU is the last thing I think of unless there’s some evidence of it failing :).

          I also got to experience a CableMod adapter burning a 4090… Zotac replaced the card, thank god. I’m a walking billboard of what went wrong in the last 2 years with components lol.

          Anyhow I hope we all get a refund. My PC is my main hobby so having instability caused a ton of frustration and anguish.

    • gravitas_deficiency@sh.itjust.works

      When did you buy it? Depending on the credit card you have, they will sometimes extend any manufacturer warranty by a year or two. Might be worth checking.

      • MisterFrog@lemmy.world

        Are there, like, no consumer guarantees in the US? How is this not an open-and-shut case where the manufacturer needs to replace or refund the product?

        • gravitas_deficiency@sh.itjust.works

          Nope, pretty much none in most cases. Though this is probably going to devolve into a giant class-action, because it is pretty egregious… so affected people will get something like $6.71 and the lawyers will walk away with a couple billion or whatever.

    • Lets_Eat_Grandma@lemm.ee

      switching platforms is a very expensive move these days.

      It’s just a motherboard and a CPU. Everything else is cross-compatible, likely even your CPU cooler. If you just buy another Intel chip… it’s just gonna oxidize again.

      $370 for a 7800x3d https://www.microcenter.com/product/674503/amd-ryzen-7-7800x3d-raphael-am5-42ghz-8-core-boxed-processor-heatsink-not-included

      ~$200 for a motherboard.

      Personally I’d wait for the next release to drop in a month… or until your system crashes aren’t bearable / it’s worth making the change. I just don’t see the cost as prohibitive; it’s about on par with all the alternatives. Plus you could sell your old motherboard for something.
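
      Quick napkin math with the rough prices quoted around this thread (ballpark figures, not current quotes; the DDR5 line only applies if you’re coming from a DDR4 board):

        # Ballpark cost of jumping to AM5, using the figures quoted in this thread.
        parts = {
            "Ryzen 7 7800X3D": 370,
            "AM5 motherboard": 200,
            "32GB DDR5 (only if coming from DDR4)": 100,
        }
        with_ram = sum(parts.values())
        without_ram = with_ram - parts["32GB DDR5 (only if coming from DDR4)"]
        print(f"~${without_ram} if you already have DDR5, ~${with_ram} if you don't")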

      • barsoap@lemm.ee

        I’m not really that knowledgeable about AM5 mobos (still on AM4), but you should be able to get something perfectly sensible for 100 bucks. Are you going to get as much I/O and as many bells and whistles? No, but most people don’t need that stuff, and you don’t have to spend a lot of money to get a good VRM or traces to the DIMM slots.

        Then, possibly bad news: Intel Gen 13 supports DDR4, so you might need new RAM.

        • Lets_Eat_Grandma@lemm.ee

          32GB of ddr5 can be found for ~$100, and any other upgrade from a ddr4 platform today is going to require new memory.

          So the DDR4 13th series folks can stay on their oxidized processors, or they can pay money to get something else. Not much else to do there.

          I upgraded my AM4 platform system to a 5800X3D a while back and it’s still working just fine. I wouldn’t recommend people buy into AM4 today just because no more upgrades are coming… but AM5? Why not? It’ll be around until DDR6 is affordable, circa 2027.

          I’m super interested in seeing how Intel’s 15th gen turns out. We know it’s a new socket, so the buy-in cost is sky high, as all have argued here (that mobo/CPU/RAM combo is crazy expensive). I can only imagine they will drop the power load to avoid more issues, but who can say. Maybe whatever design they are using won’t have been so aggressively tuned, or if they’re lucky it hasn’t started physical production yet so they can modify it appropriately. Time will tell, and we won’t know if it has the same issue for a year or so post-release.

        • MudMan@fedia.io

          No, I have a DDR5 setup. Which is why my motherboard was way more expensive than 100 bucks.

          The problem isn’t upgrading to an entry-level AM5 motherboard; the problem is that to get back to where I am with my rather expensive Intel motherboard, I’d have to spend a lot more than that. Moving to AMD doesn’t mean I want to downgrade.

          • barsoap@lemm.ee

            I mean… back in the day I would never have bought a uATX board. You need expansion slots, after all: video, sound, TV, network, at least.

            Nowadays? Exactly one PCIe slot occupied, by the graphics card. Sound cards are pointless nowadays; if your onboard audio doesn’t suffice for what you want to do, you’d get an external audio interface and have it away from all that EM interference in the case. For TV we’ve got the internet, the NIC is onboard, and as I won’t downgrade my network to Wi-Fi, that’s not needed either.

            As far as I’m concerned pretty much all of my boards were an upgrade while also simultaneously becoming more and more budget.

      • MudMan@fedia.io

        I mean, happy for you, but in the real world an extra 200 dollars on a 400-dollar part is a huge price spike.

        Never mind that, be happy for me, I actually went for a higher spec than that when I got this PC because I figured I’d get at least one CPU upgrade out of this motherboard, since it was early days of DDR5 and it seemed like I’d be able to both buy faster RAM and a faster CPU to keep my device up to date. So yeah, it was more expensive than that.

        And hey, caveat emptor, futureproofing is a risky, expensive game on PCs. I was ready for a new technology to make me upgrade anyway, if we suddenly figured out endless storage or instant RAM or whatever. Doesn’t mean it isn’t crappy to suddenly make upgrading my CPU almost twice as expensive because Intel sucks at their one job.

  • Telorand@reddthat.com

    Don’t worry. I’m sure the $10 Doordash card is coming to an inbox near you!

  • Justin@lemmy.jlh.name

    Intel is about to have a lot of lawsuits on their hands if this delay-deny-deflect strategy doesn’t work out for them. This problem has been going on for over a year, and the details Intel lets slip just keep getting worse and worse. The more customers realize they’re getting defective CPUs, the more outcry there’ll be for a recall. Intel is going to be in a lot of trouble if they wait until regulators force them to have a recall.

    Big moment of truth is next month when they have earnings and we see what the performance impact from dropping voltages will be. Hopefully it’ll just be 5% and no more CPUs die. I can’t imagine investors will be happy about the cost, though.

    • Archer@lemmy.world

      I want to say “gamers rise up,” but honestly, gamers calling their member of Congress every day and asking what they’re doing about this fraud would be way more effective. Congress is in a Big Tech-regulating mood right now.

  • wirehead@lemmy.world

    A few years ago now I was thinking that it was about time for me to upgrade my desktop (with a case that dates back to 2000 or so, I guess they call them “sleepers” these days?) because some of my usual computer things were taking too long.

    And I realized that Intel was selling the 12th generation of the Core at that point, which means the next one was a 13th generation, and I dunno, I’m not superstitious, but I figured if anything went wrong I’d feel pretty darn silly. So I pulled the trigger and got a 12th-gen Core processor and motherboard and a few other bits.

    This is quite amusing in retrospect.

    • JPAKx4@lemmy.blahaj.zone

      I recently built myself a computer and went with AMD’s 3D V-Cache chips, and boy am I glad. I think I went 12th Gen for my brother’s computer, but it was mid-range, which hasn’t had these issues to my knowledge.

      Also yes, sleeper is the right term.

      • tal@lemmy.today

        I think I went 12th Gen for my brothers computer

        12th gen isn’t affected. The problem affects only the 13th and 14th gen Intel chips. If your brother has 12th gen – and you might want to confirm that – he’s okay.

        For the high-end thing, initially it was speculated that it was just the high-end chips in these generations, but it’s definitely the case that chips other than just the high-end ones have been recorded failing. It may be that the problem is worse with the high-end CPUs, but it’s known to not be restricted to them at this point.

        The bar they list in the article here is 13th and 14th gen Intel desktop CPUs over 65W TDP.
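
        (A rough way to check a given machine, assuming Linux and using the /proc/cpuinfo model name as a proxy; TDP isn’t visible there, so a match only means “in the affected generations,” not a verdict.)

          import re

          # Heuristic check for potentially affected parts: 13th/14th gen Intel Core
          # CPUs. Reads the model name from /proc/cpuinfo (Linux). Desktop-vs-mobile
          # and the 65W+ TDP question still have to be checked by hand.
          with open("/proc/cpuinfo") as f:
              model = next((line.split(":", 1)[1].strip()
                            for line in f if line.startswith("model name")), "unknown")

          if re.search(r"\b1[34]th Gen Intel", model):
              print(f"{model}: 13th/14th gen -- check whether it's a 65W+ desktop part")
          else:
              print(f"{model}: not a 13th/14th gen Intel Core part")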

  • sunzu@kbin.run

    We are giving this failed management team billions of dollars to build “us” a fab

    🤡🤡🤡

    • jonne@infosec.pub

      Even worse, there were no conditions to the funding. They just wrote a check.

      • sunzu@kbin.run

        Damn… can you provide more context?

        Watch it end up like nationwide broadband lol.

        Never built, and for the patchy bullshit we did get, we get price-gouged and dissed by Comcast and co.

        • chingadera@lemmy.world

          Profit over everything is involved, it will happen. Although if they kill it with the development, they will have so much more later. They just cannot do it though, short term money go brrrr.

  • gamermanh@lemmy.dbzer0.com

    After literally 14 years of avoiding AMD after getting burned twice I finally went back to team red just a week ago, for a new CPU

    so glad I picked them now lol

    • demizerone@lemmy.world

      In my case I upgraded from a Threadripper 1950X to a 14900K and the machine died after four months. Went back to Threadripper with a 7960X like I should have. My 14th-gen CPU still posts, but I haven’t thrown any load at it yet. I’m hoping it can still be a streaming box…

    • Zetta@mander.xyz

      I switched to AMD with the Ryzen 3000 series and can’t see myself going to Intel for at least 2 or 3 more upgrades (like 10 years for me), and that’s only if they are competitive again in that amount of time.

      • gamermanh@lemmy.dbzer0.com

        1 DOA CPU that the physical store I went to purchase it at didn’t have any more of so I got a cheaper Intel CPU they DID have. Tbh that might have been the store dropping it or storing it improperly, they weren’t a very competent electronics store.

        And a Sapphire GPU that only worked with 1 very specific driver version that wasn’t even on their website anymore when I tried to install it for some reason. I eventually got it working after hours of hunting and fiddling, which was repeated when I gave the PC away to a friend’s little brother and they wiped it without checking the driver versions I left behind like I told them.

        Recently built my wife a new AMD based system because grudges have to end eventually and I think I couldn’t have picked a better time tbh

        • communism@lemmy.ml

          Damn yeah I can definitely understand that grudge, but also yeah modern AMD products are a lot better. I recently upgraded my AM4 CPU and also to a new Radeon GPU and I think they both work really well, after previously having some issues with earlier AMD products. Especially with Linux gaming, AMD is the way to go

  • sebsch@discuss.tchncs.de

    Is there really still such a market for Intel CPUs? I don’t understand it; AMD’s Zen has been so much better and the superior technology for almost a decade now.

    • UnderpantsWeevil@lemmy.world

      Intel is in the precarious position of being the largest surviving American owned semiconductor manufacturer, with the competition either existing abroad (TSMC, Samsung, ASML) or as a partner/subsidiary of a foreign national firm (NVidia simply procures its chips from TSMC, GlobalFoundries was bought up by the UAE sovereign wealth fund, etc).

      Consequently, whenever the US dumps a giant bucket of money into the domestic semiconductor industry, Intel is there to clean up whether or not their technology actually works.

      • frezik@midwest.social

        Small correction: the largest surviving one that makes desktop/server-class chips. Companies like Texas Instruments and Microchip still have US foundries for microcontrollers.

    • frezik@midwest.social

      The argument was that while AMD is better on paper in most things, Intel would give you rock solid stability. That argument has now taken an Iowa-class broadside to the face.

      I don’t watch LTT anymore, but a few years back they had a video where they were really pushing the limits of PCIe lanes on an Epyc chip by stuffing it full of NVMe drives and running them with software RAID (which Epyc’s sick number of cores should be able to handle). Long story short, they ran into a bunch of problems. After talking to Wendel of Level1Techs, he mentioned that sometimes, AMD just doesn’t work the way it seems it should based on paper specs. Intel usually does. (Might be getting a few details wrong about this, but the general gist should be right.)

      This argument was almost the only thing stopping AMD from taking over the server market. The other thing was whether AMD could simply manufacture enough chips in a short time period. The server market is huge; Intel had $16B revenue in “Data Center and AI” in 2023, while AMD’s total revenue was $23B. Now manufacturing ramp-up might be all that’s stopping AMD from owning it.

    • shastaxc@lemm.ee

      Intel has been working better in my Linux server than AMD. The AMDs kept causing server crashes due to C-state nonsense that no amount of BIOS tweaking would fix. AMD is great for performance and efficiency (and cost/value) in my gaming PC, but it wreaks havoc with my server, which I need to be reliably functional without power restarts.

      So I have both.

    • M0oP0o@mander.xyz

      The new AMD generation kinda tossed all the good out the window. Now they are the more expensive option, and even with this Intel fuckup they are likely still going to be the go-to for people that have more sense than money.

      Funny that the good old Zen 3 stuff is still swinging above its weight class.

      • aard@kyu.de
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 months ago

        AMD keeps some older generations in production as their budget options, and since they’ve had excellent CPUs for multiple generations now, you get pretty good computers out of that. Even better: with some planning (check the chipset and socket lifecycle), you’ll be able to upgrade to a newer CPU later.

        AMD has established by now that they deliver what they promise, and Intel couldn’t compete with them for a few generations across pretty much the complete product line, so they can now afford to sell the bleeding-edge hardware at higher prices. It’s still far from what Intel was charging when they were dominant 10 years ago, and if you need that performance for work it’s well worth the money. For most private systems I’d always recommend getting last gen, though.

        • M0oP0o@mander.xyz
          link
          fedilink
          English
          arrow-up
          1
          ·
          3 months ago

          They have a small part of the market. They very much cannot afford an entire generation of chips that has memory channel problems, performs worse than the gen before for the first 6 months, and costs more than their competitor.

          If they keep making Zen 3 until this phase of insanity passes, then good, but this chasing of pointless gains at the cost of everything else needs to end.

    • deeply_moving_queef@lemmy.ml
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      2
      ·
      3 months ago

      Intel’s iGPU is still by far the best option for applications such as media transcoding. It’s a shame that AMD hasn’t focused more on this, but it’s understandable; it’s relatively niche.

    • BobGnarley@lemm.ee
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      3
      ·
      3 months ago

      It’s the only chip that runs on an open-source BIOS, and you can completely disable the Intel ME after boot-up.

      AMD’s PSP is 100% proprietary spyware that can’t be disabled or manipulated into not running.

      • DefederateLemmyMl@feddit.nl
        link
        fedilink
        English
        arrow-up
        24
        ·
        3 months ago

        Why does that graph show Epyc (server) and Threadripper (workstation) processors in the upper right corner, but not the equivalent Xeons? If you take those away, it would paint a different picture.

        Also, a price/performance graph does not say much about which is the superior technology. Intel has been struggling to keep up with AMD technologically these past years, and has been upping power targets and thermal limits to do so… which is one of the reasons why we are here (points at headline).

        I do hope they get their act together, because an AMD monopoly would be just as bad as an Intel monopoly. We need the competition, and a healthy x86 market, lest proprietary ARM-based computers take over the market (Apple M-chips, Snapdragon laptops, …).

        • tempest@lemmy.ca
          link
          fedilink
          English
          arrow-up
          3
          ·
          3 months ago

          Aha, because if they included the Xeon Scalables it would show how badly they are doing in the datacenter market.

        • ruse8145@lemmy.sdf.org
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          3 months ago

          I guess I’m confused by your fundamental point though: if we aren’t looking for raw processing power on a range of workloads, what is the technology you see them winning in?

        • ruse8145@lemmy.sdf.org
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          3 months ago

          I’d guess because I selected single processors, and many of the Xeons are server-oriented with multi-socket setups expected. Given the original post I’m responding to, I’m more concerned with desktop grade (10-40k pts multi-core) than server grade.

      • Luccus@feddit.org
        link
        fedilink
        English
        arrow-up
        12
        ·
        3 months ago

        I don’t see data backing up your claim […]

        Links a list where the three top spots substantiate the claim, followed by a comparatively large 8% drop.

        To add a bit of nuance: There are definitely exceptions to the claim. But if I had to make a blanket statement, it would absolutely be in favor of AMD.

        • ruse8145@lemmy.sdf.org
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          3 months ago

          The point of the chart is that the lead alternates over a wide performance range: there isn’t a blanket winner between the company that can’t figure out security and the company that can’t figure out thermals.

    • w2tpmf@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      10
      ·
      3 months ago

      Naw. Zen was a leap ahead when it came out but AMD didn’t keep that pace long and Intel CPUs quickly caught up.

      I almost bought a Ryzen 9 7900X recently, but an i7-13700K ended up being cheaper and outperforms the AMD chip.

      • frezik@midwest.social
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        edit-2
        3 months ago

        On what workloads? AMD is king for most games, and for less money. It’s also king for heavily multicore workloads, but not with the same CPU that’s king for games.

        In other words, they don’t have a CPU that is king for both at the same time. That’s the one thing Intel was good at, provided you could cool the damn thing.

  • TrickDacy@lemmy.world
    link
    fedilink
    English
    arrow-up
    48
    arrow-down
    7
    ·
    3 months ago

    AMD processors have literally always been a better value and rarely have been surpassed by much for long. The only problem they ever had was back in the day they overheated easily. But I will never, ever buy an Intel processor on purpose, especially after this.

    • mox@lemmy.sdf.orgOP
      link
      fedilink
      English
      arrow-up
      40
      ·
      edit-2
      3 months ago

      The only problem they ever had was back in the day they overheated easily.

      That’s not true. It was just last year that some of the Ryzen 7000 models were burning themselves out from the inside at default settings (within AMD specs) due to excessive SoC voltage. They fixed it through new specs and by working with board manufacturers to issue new BIOS versions, and I think they eventually gave in to pressure and covered the damaged units. I guess we’ll see if Intel ends up doing the same.

      I generally agree with your sentiment, though. :)

      I just wish both brands would chill. Pushing the hardware so hard for such slim gains is wasting power and costing customers.

      • DefederateLemmyMl@feddit.nl
        link
        fedilink
        English
        arrow-up
        9
        ·
        3 months ago

        That’s not true. It was just last year that some of the Ryzen 7000 models were burning themselves

        I think he was referring to “back-in-the-day” when Athlons, unlike the competing Pentium 3 and 4 CPUs of the day, didn’t have any thermal protections and would literally go up in smoke if you ran them without cooling.

        https://www.youtube.com/watch?v=yRn8ri9tKf8

        • mox@lemmy.sdf.orgOP
          link
          fedilink
          English
          arrow-up
          3
          ·
          3 months ago

          When I started using computers, I wasn’t aware of any thermal protections in popular CPUs. Do you happen to know when they first appeared in Intel chips?

          • DefederateLemmyMl@feddit.nl
            link
            fedilink
            English
            arrow-up
            3
            ·
            3 months ago

            Pentium 2 and 3 had rudimentary protection: they would simply shut down if they got too hot. The Pentium 4 was the first one that would throttle down clock speeds.

            Anything before that didn’t have any protection as far as I’m aware.

        • RdVortex@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          3 months ago

          Some motherboards did have overheating protection back then, though. Personally, I had my Athlon XP computer randomly shut down several times because the system had some issue where the fans would randomly start slowing down and eventually stop completely. That triggered the motherboard’s overheat protection, which simply cut the power as soon as the temperature got too high.

      • TrickDacy@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        1
        ·
        3 months ago

        Yeah. I just meant AMD CPUs used to overheat easily if your cooling system had an issue. My Ryzen 7 3700X has been freaking awesome though. It feels more solid than any PC I’ve built, and it’s fast AF. I think I saved over $150 compared to a similarly rated Intel CPU, and the motherboards generally seem cheaper for AMD too. I would feel ripped off with Intel even without the crashing issues.

      • tal@lemmy.today
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 months ago

        Problem is that it’s getting extremely hard to get more single-threaded performance out of a chip, and this is one of the few ways to do so. And a lot of software is not going to be rewritten to use multiple cores. In some cases, it’s fundamentally impossible to parallelize a particular algorithm.
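
        That limit has a name, Amdahl’s law: the serial fraction of a workload caps how much extra cores can ever buy you. A quick back-of-the-envelope sketch (the 95% figure is just an illustrative assumption):

        ```python
        # Amdahl's law: if a fraction p of the work can run in parallel and (1 - p)
        # is inherently serial, the best possible speedup on n cores is
        # 1 / ((1 - p) + p / n).
        def amdahl_speedup(p: float, n: int) -> float:
            return 1.0 / ((1.0 - p) + p / n)

        # Even with 95% of the work parallelized, 64 cores top out around 15x,
        # which is why single-threaded speed still matters so much.
        for cores in (2, 8, 16, 64):
            print(f"{cores:>2} cores -> {amdahl_speedup(0.95, cores):.1f}x")
        ```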

          • ichbinjasokreativ@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            ·
            3 months ago

            Then why were there essentially no blow ups from other motherboard manufacturers? Tell me if my information on this is wrong, but when there’s only one brand causing issues then they’re the ones to blame for it.

            • mox@lemmy.sdf.orgOP
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              3 months ago

              Then why were there essentially no blow ups from other motherboard manufacturers?

              There were, including MSI, who also released corrected BIOS versions.

              (But even if that were not the case, it could be explained by Asus being the only board maker to use the high end of a voltage range allowed by AMD, or by Asus having a significantly larger share of users who are vocal about such problems.)

          • frezik@midwest.social
            link
            fedilink
            English
            arrow-up
            2
            ·
            3 months ago

            Not from AMD. From the autogenerated transcript (with minor edits where it messed up the names of things):

            amd’s official recommendation [f]or the cut off now is 1.3 volts but the board vendors can still technically set whatever they want so even though the [AGESA] update can lock down and start restricting the voltage the problem is Asus their 1.3 number manifests itself as something like 1.34 volts so it is still on the high side

            This was pretty much all on motherboard manufacturers, and ASUS was particularly bad (out-scumbagging MSI, good job, guys).

            At the start of this Intel mess, it was thought they had a similar issue on their hands and motherboard manufacturers just needed to get in line, but it ended up going a lot deeper.

            • mox@lemmy.sdf.orgOP
              link
              fedilink
              English
              arrow-up
              2
              ·
              edit-2
              3 months ago

              That doesn’t contradict anything I wrote. Note that it says AMD’s recommended cutoff is now 1.3 volts, implying that it wasn’t before this mess began. Note also that the problem was worse on Asus boards because their components’ tolerance was a bit too loose for a target voltage this high, not because they used a voltage target beyond AMD’s specified cutoff. If the cutoff hadn’t been pushed so high for this generation in the first place, that same tolerance probably would have been okay.

              In any case, there’s no sense in bickering about it. Asus was not without blame (I was upset with them myself) but also not the only affected brand, so it’s not possible that they were the cause of the underlying problem, now is it?

              AMD and Intel have been pushing their CPUs to very high voltages and temperatures for small performance gains recently. 95°C as the new “normal” was unheard of just a few years ago. It’s no surprise that it led to damage in some cases, especially for early adopters. It’s a thin line to walk.

    • Deway@lemmy.world
      link
      fedilink
      English
      arrow-up
      31
      ·
      3 months ago

      rarely have been surpassed by much for long.

      I’ve been on team AMD for over 20 years now, but that’s not true. The Core Duo and the first couple of Core i generations were better than what AMD was offering, and stayed that way for a decade. The Athlons were much better than the Pentium 3 and P4, and the Ryzens are better than the current Core i series, but the Phenoms weren’t. Don’t get me wrong, I like my Phenom II X4, but it objectively wasn’t as good as Intel’s offerings back in the day.

      • deltapi@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        3 months ago

        My i5-4690 and i7-4770 machines remain competitive to this day, even with Spectre patches in place. I saw no reason to ‘upgrade’ to 6th/7th/8th-gen CPUs.

        I’m looking for a new desktop now, but for the costs involved I might just end up piecing together an HP Z6 G4 with surplus server CPU/RAM. The cost of going to 11th-gen or later desktop Intel doesn’t seem worth it.

        I’m going to look at the more recent AMD offerings, but I’m not sure they’ll compete with surplus server kit.

        • Deway@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          3 months ago

          I’d say that regardless of the brand, x86 CPUs don’t need to be upgraded as often as they used to. There’s no awesome new extension like SSE or anything like that, they’re not much more powerful, and power consumption isn’t going down significantly. If you don’t care about power consumption, the server CPUs will be more interesting; there’s no doubt about that.

        • floofloof@lemmy.ca
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          3 months ago

          They’re still useful, but they’re not competitive in overall performance with recent CPUs in the same category. They do still compete with some of the budget and power-efficient CPUs, but they use more power and get hotter.

          That said, those 4th gen Intel CPUs are indeed good enough for most everyday computing tasks. They won’t run Windows 11 because MS locks them out, but they will feel adequately fast unless you’re doing pretty demanding stuff.

          I still have an i5-2400, an i7-4770K and an i7-6700 for occasional or server use, and my i7-8550U laptop runs great with Linux (though it overheated with Windows).

          I buy AMD now though.

        • frezik@midwest.social
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          3 months ago

          My issue with surplus server kit at home is that it tends to idle at very high power usage compared to desktop kit. For home use that won’t be pushing high CPU utilization, the savings in cost off eBay aren’t worth much.

          This is also why you’re seeing AM5 on server motherboards. If you don’t need tons of PCIe lanes (and especially with PCIe 5, you probably don’t), the higher-core-count AM5 chips do really well for servers.

    • edgesmash@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      3 months ago

      The only problem they ever had was back in the day they overheated easily.

      Very easily.

      In college (early aughts), I worked as tech support for fellow students. Several times I had to take the case cover off, point a desktop fan into the case, and tell the kid he needed to get thermal paste and a better cooler (services we didn’t offer).

      Also, as others have said, AMD CPUs have not always been superior to Intel in performance or even value (though AMDs have almost always been cheaper). It’s been a back-and-forth race for much of their history.

      • TrickDacy@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        3 months ago

        Yeah. I never said they were always better in performance. But I have never had an issue other than the heat problem, which all but one time was fully my fault. And I don’t need a processor to perform 3% better on random tasks… which was the kind of benchmark result I would typically find when comparing similar AMD/Intel processors (and in some categories AMD did win). I saved probably a couple grand avoiding Intel. And as another user said, I prefer to support the underdog: the company making a great product for a lot less money. Again I say: fuck Intel.

  • gearheart@lemm.ee
    link
    fedilink
    English
    arrow-up
    41
    arrow-down
    2
    ·
    3 months ago

    This would be funny if it happened to Nvidia.

    Hope Intel recovers from this. Imagine if Nvidia was the only consumer hardware manufacturer…

    No one wants that.

    • mlg@lemmy.world
      link
      fedilink
      English
      arrow-up
      23
      ·
      3 months ago

      This would be funny if it happened to Nvidia.

      Hope Intel recovers from this. Imagine if Nvidia was the only consumer hardware manufacturer…

      Lol there was a reason Xbox 360s had a whopping 54% failure rate and every OEM was getting sued in the late 2000s for chip defects.

          • icedterminal@lemmy.world
            link
            fedilink
            English
            arrow-up
            9
            ·
            3 months ago

            Tagging on here: both the first-model PS3 and the Xbox 360 were hot boxes with insufficient cooling. Both got too hot too fast for their cooling solutions to keep up, resulting in hardware stress that caused the chips’ solder joints to weaken until they eventually cracked.

            • john89@lemmy.ca
              link
              fedilink
              English
              arrow-up
              5
              ·
              edit-2
              3 months ago

              Owner of an original 60GB PS3 here.

              It got very hot and eventually stopped working. It was under warranty and I got an 80GB replacement for $200 less, but I lost out on backwards compatibility, which really sucked because I had sold my PS2 to get a PS3.

              • lennivelkant@discuss.tchncs.de
                link
                fedilink
                English
                arrow-up
                2
                ·
                3 months ago

                Why would you want backwards compatibility? To play games you already own and like instead of buying new ones? Now now, don’t be ridiculous.

                Sarcasm aside, I do wonder how technically challenging it is to keep your system backwards-compatible. I understand console games are written for specific hardware specs, but I’d assume newer hardware still understands the old instructions. It could be an OS question, but again, I’d assume they would develop the newer version on top of their old, so I don’t know why it wouldn’t support the old features anymore.

                I don’t want to cynically claim that it’s only done for profit reasons, and I’m certainly out of my depth on the topic of developing an entire console system, so I want to assume there’s something I just don’t know about, but I’m curious what that might be.

                • john89@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  3 months ago

                  It’s my understanding that backwards-compatible PS3s actually had PS2 hardware in them.

                  We can play PS2 and PS1 games if they are downloaded from the store, so emulation isn’t an issue. I think Sony looked at the data and saw they would make more money removing backwards compatibility, so that’s what they did.

                  Thankfully the PS3 was my last console before standards got even lower and they started charging an additional fee to use my internet.

        • hardcoreufo@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 months ago

          I think the 360 failed for the same reason lots of early/mid 2000s PCs failed. They had issues with chips lifting due to the move away from leaded solder. Over time the formulas improved and we don’t see that as much anymore. At least that’s the way I recall it.

    • brucethemoose@lemmy.world
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      edit-2
      3 months ago

      This would be funny if it happened to Nvidia.

      It kinda has, with Fermi, lol. The GTX 480 was… something.

      Same reason, too: they pushed the voltage too hard, to the point of stupidity.

      Nvidia does not compete in this market though, as much as they’d like to. They do not make x86 CPUs, and frankly Intel is hard to displace since they have their own fab capacity. AMD can’t take the market themselves because there simply isn’t enough TSMC/Samsung to go around.

      • Kyrgizion@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        3 months ago

        There’s also Intel holding the x86 patent and AMD holding the x64 patent. Those two aren’t going anywhere yet.

        • wax@feddit.nu
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          3 months ago

          Actually, it looks like the base patents have expired. All the extensions (SSE, AVX) are still in effect though.

    • nek0d3r@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      3 months ago

      I genuinely think that was the best Intel generation. Things really started going downhill in my eyes after Skylake.

  • deltreed@lemmy.world
    link
    fedilink
    English
    arrow-up
    35
    ·
    3 months ago

    So, like, did Intel lay off or deprecate its QA teams, similar to what Microsoft did with Windows? Remember when stability was key and everything else was secondary? Pepperidge Farm remembers.

    • john89@lemmy.ca
      link
      fedilink
      English
      arrow-up
      7
      ·
      3 months ago

      Why would they lay off their QA teams when it’s management and executives who make the decisions to cut corners?

    • residentmarchant@lemmy.world
      link
      fedilink
      English
      arrow-up
      19
      arrow-down
      1
      ·
      3 months ago

      As compared to a recall and re-fitting a fab, a class action is probably the cheaper way out.

      I wish companies cared about what they sold instead of picking the cheapest way out, but welcome to the world we live in.

      • tal@lemmy.today
        link
        fedilink
        English
        arrow-up
        11
        ·
        edit-2
        3 months ago

        I mean, I’m sure Intel cares.

        My problem is really in how they handled the situation once they knew that there was a problem, not even the initial manufacturing defect.

        Yes, okay. They didn’t know exactly the problem, didn’t know exactly the scope, and didn’t have a fix. Fine. I get that that is a really hard problem to solve.

        But they knew that there was a problem.

        Putting out a list of known-affected processors and a list of known-possibly-affected processors at the earliest date would have at least let their customers do what is possible to mitigate the situation. And I personally think that they shouldn’t have been selling more of the potentially-affected processors until they’d figured out the problem sufficient to ensure that people who bought new ones wouldn’t be affected.

        And I think that, at first opportunity, they should have advised customers as to what Intel planned to do, at least within the limits of certainty (e.g. if Intel can confirm that the problem is due to an Intel manufacturing or design problem, then Intel will issue a replacement to consumers who can send in affected CPUs) and what customers should do (save purchase documentation or physical CPUs).

        Those are things that Intel could certainly have done but didn’t. This is the first statement they’ve made with some of that kind of information.

        It might have meant that an Intel customer holds off on an upgrade to a potentially-problematic processor. Maybe those customers would have been fine taking the risk or just waiting for Intel to figure out the issue, issue an update, and make sure that they used updated systems with the affected processors. But they would have at least been going into this with their eyes open, and been able to mitigate some of the impact.

        Like, I think that in general, the expectation should be that a manufacturer who has sold a product with a defect should put out what information they can to help customers mitigate the impact, even if that information is incomplete, at the soonest opportunity. And I generally don’t think that a manufacturer should sell a product with known severe defects (of the “it might likely destroy itself in a couple months” variety).

        I think that one should be able to expect that a manufacturer do so even today. If there are some kind of reasons that they are not willing to do so (e.g. concerns about any statement affecting their position in potential class-action suits), I’d like regulators to restructure the rules to eliminate that misincentive. Maybe it could be a stick, like “if you don’t issue information dealing with known product defects of severity X within N days, you are exposed to strict liability”. Or a carrot, like “any information in public statements provided to consumers with the intent of mitigating harm caused by a defective product may not be introduced as evidence in class action lawsuits over the issue”. But I want manufacturers of defective products to act, not to just sit there clammed up, even if they haven’t figured out the full extent of the problem, because they are almost certainly in a better position to figure out the problem and issue information to mitigate it than their customers individually are, and in this case, Intel just silently sat there for a very long time while a lot of their customers tried to figure out the scope of what was going wrong, and often spent a lot of money trying to address the problem themselves when more information from Intel probably would have avoided them incurring some of those costs.

        • tal@lemmy.today
          link
          fedilink
          English
          arrow-up
          8
          ·
          edit-2
          3 months ago

          To put this another way, Intel had at least three serious failures that let the problem reach this level:

          • A manufacturing defect that led to the flawed CPUs being produced in the first place.

          • A QA failure to detect the flawed CPUs initially (or to be able to quickly narrow down the likely and certain scope of the problem once the issue arose). Not to mention having a second generation of chips with the defect go out the door, I can only assume (and hope) without QA having initially identified that they were also affected.

          • A customer care issue, in that Intel did not promptly publicly provide customers with information that Intel either had or should have had about likely scope of the problem, mitigation, and at least within some bounds of uncertainty (“if it can be proven that the problem is due to an Intel manufacturing defect on a given processor for some definition of proven, Intel will provide a replacement processor”), what Intel would do for affected customers. A lot of customers spent a lot of time replicating effort trying to diagnose and address the problem at their level, as well as continuing to buy and use the defective CPUs. It is almost certain that some of that was not necessary.

          The manufacturing failure sucks, fine. But it happens. Intel’s pushing physical limits. I accept that this kind of thing is just one thing that occasionally happens when you do that. Obviously not great, but it happens. This was an especially bad defect, but it’s within the realm of what I can understand and accept. AMD just recalled an initial batch of new CPUs (albeit way, way earlier in the generation than Intel)…they dicked something up too.

          I still don’t understand how the QA failure happened to the degree that it did. Like, yes, it was a hard problem to identify, since it was progressive degradation that took some time to arise, and there were a lot of reasons for other components to potentially be at fault. And CPUs are a fast moving market. You can’t try running a new gen of CPU for weeks or months prior to shipping, maybe. But for Intel to not have identified that they had a problem with the 13th gen at least within certain parameters at least subsequent to release and then to have not held up the 14th gen until it was definitely addressed seems unfathomable to me. Like, does Intel not have a number of CPUs that they just keep hot and running to see if there are aging problems? Surely that has to be part of their QA process, right? I used to work for another PC component manufacturer and while I wasn’t involved in it, I know that they definitely did that as part of their QA process.

          But as much as I think that that QA failure should not have happened, it pales in comparison to the customer care failure.

          Like, there were Intel customers who kept building systems with components that Intel knew or should have known were defective. For a long time, Intel did not promptly issue a public warning saying “we know that there is a problem with this product”. They did not pull known-defective components from the market, which means that customers kept sinking money into them (and resources trying to diagnose and otherwise resolve the issues). Intel did not issue a public statement about the likely-affected components, even though they were probably in the best position to know. Again, they let customers keep building them into systems.

          They did not issue a statement as to what Intel would do (and I’m not saying that Intel has to conclusively determine that this is an Intel problem, but at least say “if this is shown to be an Intel defect, then we will provide a replacement for parts proven to be defective due to this cause”). They did not issue a statement telling Intel customers what to do to qualify for any such program. Those are all things that I am confident Intel could have done much earlier and which would have substantially reduced how bad this incident was for their customers. Instead, their customers were left in isolation to try to figure out the problems individually and come up with mitigations themselves. In many cases, manufacturers of other parts were blamed, and money was spent buying components unnecessarily, or trying to run important services on components that Intel knew or should have known were potentially defective.

          Like, I expect Intel, whatever failures happen at the manufacturing or QA stages, to get the customer care done correctly. I expect that to happen even if Intel does not yet completely understand the scope of the problem or how it could be addressed. And they really did not.

          • toddestan@lemm.ee
            link
            fedilink
            English
            arrow-up
            3
            ·
            3 months ago

            I’d argue there was a fourth serious failure, and that was Intel allowing the motherboard manufacturers to go nuts and run these chips way out of spec by default. Granted, ultimately it was the motherboard manufacturers that did it, but there’s really no excuse for what these motherboards were doing by default. Yes, I get the “K” chips are unlocked, but it should be up to the user to choose to overclock their CPU and how they want to go about it. To make matters worse, a lot of these motherboards didn’t even have an easy way to put things back into spec - it was up to you to go through all the settings one by one and set them correctly.

  • 2pt_perversion@lemmy.world
    link
    fedilink
    English
    arrow-up
    36
    arrow-down
    9
    ·
    3 months ago

    People are freaking out about the lack of a recall, but Intel says its patch will supposedly stop currently working CPUs from experiencing the overvolt condition that is leading to the failures. So they don’t really need to do a recall if currently working CPUs will keep working with the patch in place. As long as they offer some sort of free extended warranty and a good RMA process for the CPUs that are already damaged, I feel it’s fine.

    If they handle RMAs with a bump in performance for those affected, it might even be positive PR, like “they stand by their products”, but if they’re stingy about taking responsibility then we should obviously give them hell. We really have to see how they handle this.

    • A_Random_Idiot@lemmy.world
      link
      fedilink
      English
      arrow-up
      45
      ·
      3 months ago

      They can’t even commit to offering RMAs, period. They keep using vague, can’t-be-used-against-me-in-a-court-of-law language.

      • ipkpjersi@lemmy.ml
        link
        fedilink
        English
        arrow-up
        5
        ·
        3 months ago

        That will surely earn trust with the public and result in brand loyalty, right???

    • BobGnarley@lemm.ee
      link
      fedilink
      English
      arrow-up
      30
      arrow-down
      2
      ·
      3 months ago

      Oh you mean they’re going to underclock the expensive new shit I bought and have it underperform to fix their fuck up?

      What an unacceptable solution.

      • frezik@midwest.social
        link
        fedilink
        English
        arrow-up
        15
        arrow-down
        2
        ·
        3 months ago

        That’s where the lawsuits will start flying. I wouldn’t be surprised if they knock off 5-15% of performance. That’s enough to put it well below comparable AMD products in almost every application. If performance is dropped after sale, there’s a pretty good chance of a class action suit.

        Intel might have a situation here like the Xbox 360 Red Ring of Death: it totally kills any momentum they had and hands a big victory to their competitor, and this at a time when Intel wasn’t in a strong place to begin with.

        • M0oP0o@mander.xyz
          link
          fedilink
          English
          arrow-up
          5
          ·
          3 months ago

          I think one spot that might land them in a bit of hot water is what specs they use for the chips after the “fix”. Will they update the specs to reflect the now-slower speeds? My money would be on them still listing the full-chooch, chip-killing specs.

          • frezik@midwest.social
            link
            fedilink
            English
            arrow-up
            3
            ·
            3 months ago

            If people bought it at one spec and now it’s lower, that could be enough. It would have made the decision different at purchase time.

            • M0oP0o@mander.xyz
              link
              fedilink
              English
              arrow-up
              2
              ·
              3 months ago

              It would be a breach of implied warranty / false advertising if they keep selling them with the old specs, at least.

          • floofloof@lemmy.ca
            link
            fedilink
            English
            arrow-up
            3
            ·
            edit-2
            3 months ago

            It has been wise for years to subtract 15-20% off Intel’s initial performance claims and benchmarks at release. Spectre and Meltdown come to mind, for example. There’s always some post-release patch that hobbles the performance, even when the processors are stable. Intel’s corporate culture is to push the envelope just a little too far then walk it back quietly after the initial positive media coverage is taken care of.

            • M0oP0o@mander.xyz
              link
              fedilink
              English
              arrow-up
              3
              ·
              3 months ago

              Yes, but luckily for some of us that practice is still illegal in parts of the world. I just don’t get why they still get away with it (they do get fined, but the overall practice is still normalized).

              I sure would not want any 13th- or 14th-gen Intel in any equipment I was responsible for. Think of the risk hanging over any IT department’s head with these CPUs in production; you would never really trust them again.

      • Strykker@programming.dev
        link
        fedilink
        English
        arrow-up
        9
        ·
        3 months ago

        They aren’t overclocking or underclocking anything with the fix. The chip was just straight up requesting more voltage than it actually needed; this didn’t give any benefit and was probably an issue even without the damage it causes, due to the extra heat generated.

        • nek0d3r@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          3 months ago

          Giving a CPU more voltage is just what overclocking is. Considering that most modern CPUs from both AMD and Intel are already designed to boost their clocks until they reach a temperature high enough to start thermally throttling, it’s likely there was a misstep in setting this threshold and the CPU doesn’t know when to quit until it kills itself. In the process it is undoubtedly gaining more performance than it otherwise would, but probably not by much, considering a lot of the high-end CPUs already have really high thresholds, some even at 90 or 100 °C.

          • Strykker@programming.dev
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            3 months ago

            If you actually knew anything, you’d know that overclockers tend to manually reduce the voltage as they increase clock speeds to improve stability. That only works up to a point, but it clearly shows voltage does not directly determine clock speed.

    • AnyOldName3@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      ·
      3 months ago

      If you give a chip more voltage, its transistors will switch faster, but they’ll degrade faster. Ideally, you want just barely enough voltage that everything’s reliably finished switching and all signals have propagated before it’s time for the next clock cycle, as that makes everything work and last as long as possible. When the degradation happens, at first it means things need more voltage to reach the same speed, and then they totally stop working. A little degradation over time is normal, but it’s not unreasonable to hope that it’ll take ten or twenty years to build up enough that a chip stops working at its default voltage.

      The microcode bug they’ve identified and are fixing applies too much voltage to part of the chip under specific circumstances, so if an individual chip hasn’t experienced those circumstances very often, it could well have built up some degradation, but not enough that it’s stopped working reliably yet. That could range from having burned through a couple of days of lifetime, which won’t get noticed, to having a chip that’s in the condition you’d expect it to be in if it was twenty years old, which still could pass tests, but might keel over and die at any moment.

      If they’re not doing a mass recall, and can’t come up with a test that says how affected an individual CPU has been without it needing to be so damaged that it’s no longer reliable, then they’re betting that most people’s chips aren’t damaged enough to die until after the warranty expires. There’s still a big difference between the three years of their warranty and the ten to twenty years that people expect a CPU to function for, and customers whose parts die after thirty-seven months will lose out compared to what they thought they were buying.
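
      To make the “burned-through lifetime you can’t see” point concrete, here’s a toy model. None of the numbers are Intel’s; the exponential acceleration factor and the lifetime budget are pure assumptions chosen for illustration, not real reliability physics:

      ```python
      # Toy model only -- not Intel's actual degradation physics. It just encodes
      # the qualitative point above: overvoltage applied during a small fraction
      # of runtime can quietly eat a disproportionate share of a chip's lifetime.
      # Every constant here is a made-up assumption for illustration.

      BASELINE_LIFETIME_YEARS = 20.0   # assumed lifetime at the intended voltage
      ACCEL_PER_0_1V = 10.0            # assume wear accelerates ~10x per extra 0.1 V

      def lifetime_budget_used(years_running: float, overvolt_fraction: float,
                               overvolt_amount_v: float) -> float:
          """Fraction of the assumed lifetime budget consumed under the toy model."""
          accel = ACCEL_PER_0_1V ** (overvolt_amount_v / 0.1)
          normal_wear = years_running * (1.0 - overvolt_fraction)
          stressed_wear = years_running * overvolt_fraction * accel
          return (normal_wear + stressed_wear) / BASELINE_LIFETIME_YEARS

      # One year of use, with a +0.15 V excursion active 5% of the time:
      # roughly 13% of the 20-year budget gone, under these made-up numbers.
      print(f"{lifetime_budget_used(1.0, 0.05, 0.15):.0%} of lifetime budget used")
      ```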

    • BobGnarley@lemm.ee
      link
      fedilink
      English
      arrow-up
      10
      ·
      3 months ago

      No refunds for the fried ones should be all you need to see about how they “handle” this.

      • 2pt_perversion@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        3 months ago

        They probably will at least RMA the really frequently crashing ones. To my knowledge they self-reported when they discovered the problem and the fix so they’d be looking at a lawsuit if they didn’t do at least that.

        How much further beyond that they’ll go is what we still have to see. If they have a crazy number of CPUs still dying at 4-5 years old and don’t cover them with an extended warranty, then fuck ’em… But we have to wait and see what they actually do first before making that judgement.

    • Metype @lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      3 months ago

      For what it’s worth, my i9-13900 was experiencing serious instability issues. Disabling turbo helped a lot, but Intel offered to replace it under warranty and I’m going through that now. Customer support on the issue seems to be pretty good in my experience.