  • what is your source for this?

    Familiarity with the industry, and knowledge that finFET was exactly what caused Intel to stall, GlobalFoundries to give up and quit trying to keep up, and Samsung to fall behind TSMC. TSMC’s dominance today all runs through its success at mass-producing finFET and being able to iterate on that while everyone else was struggling to get those fundamentals figured out.

    Intel launched chips on its 22nm process in 2012, its 14nm process in 2014, and its 10nm process in 2019. At each ITRS “nm” node, Intel’s performance and density fell somewhere between TSMC’s equivalent node and TSMC’s next one: better than the same-named node, but worse than the node after it. Intel’s 5-year lag between 14nm and 10nm is when TSMC passed them, launching 10nm and even 7nm before Intel got its 10nm node going. And even though Intel’s 14nm was better than TSMC’s 14nm, and arguably comparable to TSMC’s 10nm, it was left behind by TSMC’s 7nm.

    You can find articles from around 2018 or so trying to evaluate Intel’s increasingly implausible claims that its 14nm was comparable to TSMC’s 10nm or 7nm processes, reflecting that Intel was stuck on 14nm for far too long, trying to figure out how to keep improving while grappling with finFET-related technical challenges.

    You can also read reviews of AMD versus Intel chips from around the mid-2010s to see that Intel had better fab techniques then, and that AMD had to pioneer innovative packaging techniques, like chiplets, to make up for that gap.

    If you’re just looking at superficial developments at the mass production stage, you’re going to miss out on the things that are in 20+ year pipelines between lab demonstrations, prototypes, low yield test production, etc.

    Whoever figures out GAA and backside power is going to have an opportunity to lead for the next 3-4 generations. TSMC hasn’t figured it out yet, and there’s no real reason to assume that their finFET dominance would translate to the next step.



  • Intel has only been behind for the last 7 years or so, because they were several years delayed in rolling out their 10nm node. Before 14nm, Intel was always about 3 years ahead of TSMC. Intel got leapfrogged at that stage because it struggled to implement the finFET technology that is necessary for progressing beyond 14nm.

    The forward progress of semiconductor manufacturing tech isn’t an inevitable march towards improvement. Each generation presents new challenges, and some of them are quite significant.

    In the near future, the challenge is in certain three-dimensional gate structures more complicated than finFET (known as gate-all-around FETs, or GAAFETs) and in backside power delivery. TSMC has decided to delay introducing those techniques because of the complexity and challenges while they squeeze out a few more generations, but it remains to be seen whether they’ll hit a wall where Samsung and/or Intel leapfrog them again. Or maybe Samsung or Intel hit a wall and fall even further behind. Either way, we’re not yet at a stage where we know what things look like beyond 2nm, so there’s still active competition for that future market.

    Edit: this is a pretty good description of the engineering challenges facing the semiconductor industry next:

    https://www.semianalysis.com/p/clash-of-the-foundries


  • No, there’s still competition. Samsung and Intel are trying, but are just significantly behind. So leading the competition by this wide of a margin means that you can charge more, and customers decide whether they want to pay way more money for a better product now, whether they’re going to wait for the price to drop, or whether they’ll stick with an older, cheaper node.

    And a lot of that will depend on the degree to which their customers can pass on increased costs to their own customers. During this current AI bubble, maybe some of those can. Will those manufacturing desktop CPUs or mobile SoCs be as willing to spend? Maybe not as much.

    Or, if the AI hype machine crashes, so will the hardware demand, at which point TSMC might see reduced demand for their latest and greatest node.



  • I don’t read it as magical energy created out of nothing, but I do read it as “free” energy that would exist whether this regeneration system is used or not, that would otherwise be lost as heat.

    With or without regenerative braking, the system is still going to accelerate stopped trains up to operating speed and then slow them back down to a stop, at regular intervals across the whole network. Tapping into that existing energy is basically free energy at that point.
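
    To put rough numbers on it, the recoverable energy per stop is just the train’s kinetic energy at line speed, E = ½mv², minus conversion losses. The mass, speed, and recovery efficiency in the sketch below are my own illustrative assumptions, not figures from the comment.

```python
# Back-of-the-envelope sketch of regenerative braking recovery per stop.
# Mass, speed, and recovery efficiency are illustrative assumptions.
def recoverable_energy_kwh(mass_kg: float, speed_kmh: float, recovery_eff: float) -> float:
    """Kinetic energy E = 0.5 * m * v^2, scaled by round-trip recovery efficiency."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    kinetic_joules = 0.5 * mass_kg * v ** 2  # energy the brakes must dissipate anyway
    return recovery_eff * kinetic_joules / 3.6e6  # joules -> kWh

# Example: a ~200-tonne trainset braking from 80 km/h, assuming ~60% of the
# braking energy actually makes it back into the supply.
print(f"{recoverable_energy_kwh(200_000, 80, 0.6):.1f} kWh recovered per stop")
```

    Multiplied across every scheduled stop on every train in the system, that otherwise-wasted energy is what the “free” framing refers to.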




  • That article has basically been validated over time. At the time it was written, the argument was that monopoly is bad for consumers even if it makes prices cheaper, and that consolidation of producer market power needs to be understood as consumer harm in itself, even if prices or services paradoxically become better for consumers.

    It’s no longer a paradox today, though. Amazon has raised prices and reduced the quality of service by a considerable margin, and uses its market power to prevent the competition from undercutting them, rather than competing fairly on the merits.



  • Unless you are fine pairing solar panels with natural gas as we currently do

    Yes, I am, especially since you seem to be intentionally ignoring wind+solar. It’s much cheaper to have a system that is solar+wind+nat gas, and that particular system can handle all the peaking and base needs today, cheaper than nuclear can. So nuclear is more expensive today than that type of combined generation.

    In 10 years, when a new nuclear plant designed today might come on line, we’ll probably have enough grid scale storage and demand-shifting technology that we can easily make it through the typical 24-hour cycle, including 10-14 hours of night in most places depending on time of year. Based on the progress we’ve seen between 2019 and 2024, and the projects currently being designed and constructed today, we can expect grid scale storage to plummet in price and dramatically increase in capacity (both in terms of real-time power capacity measured in watts and in terms of total energy storage capacity measured in watt-hours).

    In 20 years, we might have sufficient advanced geothermal to where we can have dispatchable carbon-free electricity, plus sufficient large-scale storage and transmission that we’d have the capacity to power entire states even when the weather is bad for solar/wind in that particular place, through overcapacity from elsewhere.

    In 30 years, we might have fusion.

    With that in mind, are you ready to sign an 80-year mortgage locking in today’s nuclear prices? The economics just don’t work out.


  • With nuclear, you’re talking about spending money today in year zero to get a nuclear plant built between years 5-10, and operation from years 11-85.

    With solar or wind, you’re talking about spending money today to get generation online in year 1, and then another totally separate decision in year 25, then another in year 50, and then another in year 75.

    So the comparison isn’t just 2025 nuclear technology versus 2025 solar technology. It’s also 2025 nuclear versus 2075 solar tech. When comparing that entire 75-year lifespan, you’re competing with technology that hasn’t been invented yet.

    Let’s take Comanche Peak, a nuclear plant in Texas that went online in 1990. At that time, solar panels cost about $10 per watt in 2022 dollars. By 2022, the price was down to $0.26 per watt. But Comanche Peak is going to keep operating, and trying to compete with the latest and greatest, for its entire 70+ year lifespan. If 1990 nuclear plants aren’t competitive with 2024 solar panels, why do we believe that 2030 nuclear plants will be competitive with 2060 solar panels or wind turbines?
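
    As a rough sketch of what that price trajectory implies (the decline rate is just inferred from the two price points above, and the extrapolation is purely illustrative, not a forecast):

```python
# Implied annual decline in solar panel prices, using only the two price
# points quoted above ($10/W in 1990, $0.26/W in 2022, both in 2022 dollars).
start_year, start_price = 1990, 10.00   # $/W
end_year, end_price = 2022, 0.26        # $/W

years = end_year - start_year
annual_decline = 1 - (end_price / start_price) ** (1 / years)
print(f"Implied average decline: {annual_decline:.1%} per year")  # roughly 11%/yr

# Illustrative only: even if the decline slows to 5%/yr, a plant financed
# today is competing with far cheaper panels for most of its life.
slower = 0.05
for future_year in (2030, 2045, 2060):
    price = end_price * (1 - slower) ** (future_year - end_year)
    print(f"{future_year}: ~${price:.2f}/W if prices fall only {slower:.0%}/yr")
```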


  • I don’t think that math works out, even when looking over the entire 70+ year life cycle of a nuclear reactor. When it costs $35 billion to build two reactors in the 1GW class, even if the plant lasts 70 years, the construction cost amortized over every year, or every megawatt-hour generated, is still really expensive, especially once you account for interest (rough math sketched below).

    And it bakes in that huge cost irreversibly up front, so any future improvements will only make the existing plant less competitive. Wind and solar and geothermal and maybe even fusion will get cheaper over time, but a nuclear plant with most of its costs up front can’t. 70 years is a long time to commit to something.
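
    Here’s a minimal sketch of that amortization arithmetic, using the ~$35 billion figure above; the capacity, capacity factor, interest rate, and lifetime are assumptions of my own (a plain capital recovery factor calculation, not a full LCOE model):

```python
# Minimal sketch: amortized construction cost per MWh for a nuclear build.
# The $35B figure is from the comment above; everything else is an assumption.
overnight_cost = 35e9        # dollars for two reactors
capacity_mw = 2 * 1100       # assume two ~1.1 GW reactors
capacity_factor = 0.90       # assumed average output vs. nameplate
lifetime_years = 70
interest = 0.06              # assumed cost of capital

# Capital recovery factor: the constant annual payment that repays
# principal plus interest over the plant's lifetime.
crf = interest * (1 + interest) ** lifetime_years / ((1 + interest) ** lifetime_years - 1)
annual_payment = overnight_cost * crf

annual_mwh = capacity_mw * capacity_factor * 8760  # hours per year
print(f"Construction cost alone: ~${annual_payment / annual_mwh:.0f}/MWh")
```

    With those assumptions the construction cost alone works out to roughly $120/MWh before fuel, staffing, and maintenance, and stretching the term from 40 to 70 years barely lowers the annual payment once interest is included, which is the point about the cost being locked in up front.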



  • Why has the world gotten both “more intelligent” and yet fundamentally more stupid at the same time? Serious question.

    Because it’s not actually always true that garbage in = garbage out. DeepMind’s AlphaZero trained itself from a very bad chess player into one significantly stronger than any human has ever been, simply by playing chess games against itself and updating its parameters for evaluating which positions were better than others. All the system needed was the rule set for chess, a way to define winners, losers, and draws, and a training procedure that optimized for winning over drawing, and drawing over losing if a win was no longer available.

    Face swaps, and deepfakes in general, relied on adversarial training as well: one network learned to produce convincing fakes, another learned to detect them, and the two improved each other on both ends.

    Some tech guys thought they could bring that adversarial dynamic for improving models to generative AI, where they could train on inputs and improve over those inputs. But the problem is that there isn’t a good definition of “good” or “bad” inputs, and so the feedback loop in this case poisons itself when it starts optimizing on criteria different from what humans would consider good or bad.

    So it’s less like other AI type technologies that came before, and more like how Netflix poisoned its own recommendation engine by producing its own content informed by that recommendation engine. When you can passively observe trends and connections you might be able to model those trends. But once you start actually feeding back into the data by producing shows and movies that you predict will do well, the feedback loop gets unpredictable and doesn’t actually work that well when you’re over-fitting the training data with new stuff your model thinks might be “good.”
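
    Here’s a toy illustration of that feedback loop going wrong (my own sketch, not anything from how Netflix or any real recommender actually works): each generation samples from its own model, keeps only the outputs the model itself scores as most likely to “do well,” and retrains on them. With no external ground truth, diversity collapses.

```python
# Toy model-collapse loop: retrain each generation only on outputs the
# previous model generated and rated as most probable. The numbers are
# illustrative; the point is the vanishing spread, not the exact values.
import random
import statistics

random.seed(0)

# Generation 0: "real world" data from the true distribution (mean 0, std 1).
data = [random.gauss(0, 1) for _ in range(1000)]

for generation in range(1, 11):
    mu = statistics.fmean(data)    # "train" the model on the current data
    sigma = statistics.stdev(data)
    # Generate candidates from the model, then keep the half the model itself
    # considers most probable -- the self-referential "we predict this will
    # do well" filter, with no outside signal of quality.
    candidates = [random.gauss(mu, sigma) for _ in range(1000)]
    candidates.sort(key=lambda x: abs(x - mu))
    data = candidates[: len(candidates) // 2]
    print(f"gen {generation:2d}: std of training data = {statistics.stdev(data):.4f}")
```

    The contrast with the chess example is that AlphaZero’s filter (who actually won the game) is external and fixed, so optimizing against it can’t drift away from reality the way a model grading its own output can.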


  • Lol they will the second they get hit with that “you need to get parental consent” screen, that’s how it happened to us all.

    The normie services are increasingly tied to real-world identities, through verification methods that involve phone numbers and often government-issued IDs. As the regulatory requirements tighten on these services, it’ll be increasingly difficult to create anonymous/alt accounts. Just because it was easy to anonymously create a new Gmail or Instagram account 10 years ago doesn’t mean it’s easy today. It’s a common complaint that a device like an Oculus requires a Meta account, which in turn requires some tedious verification.

    I don’t think it’ll ever be perfect, but it will probably be enough for the network effects of these types of services to be severely dampened (and then a feedback loop where the difficult-to-use-as-a-teen services have too much friction and aren’t being used, so nobody else feels it is worth the effort to set up). Especially if teens’ parent-supervised accounts are locked to their devices, in an increasingly cloud-reliant hardware world.


  • If you’re 25 now, you were 15 during the early wild west days of smartphone adoption, while we as a society were just figuring that stuff out.

    Since that time, the major tech companies that control a big chunk of our digital identities have made pretty big moves at recording family relationships between accounts. I’m a parent in a mixed Android/iOS family, and it’s pretty clear that Apple and Google have it figured out pretty well: child accounts linked to dates of birth that automatically change permissions and parental controls over time, based on age (including severing the parental controls when they turn 18). Some of it is obvious, like billing controls (nobody wants their teen running up hundreds of dollars in microtransactions), app controls, screen time/app time monitoring, location sharing, password resets, etc. Some of it is convenience factor, like shared media accounts/subscriptions by household (different Apple TV+ profiles but all on the same paid subscription), etc.

    I haven’t made child accounts for my kids on Meta. But I probably will whenever they’re old enough to use chat (and they’ll want WhatsApp accounts). Still, looking over the parent/child settings on Facebook accounts, it’ll probably be pretty straightforward to create accounts for them, link a parent/child relationship, and then have another dashboard to manage as a parent. Especially if something like Oculus takes off and that’s yet another account to deal with paid apps or subscriptions.

    There might even be network effects, where people who have child accounts are limited in the adult accounts they can interact with, and the social circle’s equilibrium naturally tends towards all child accounts (or the opposite, where everyone gets themselves an adult account).

    The fact is, many of the digital natives of Gen Alpha aren’t actually going to be as tech savvy as their parents as they dip their toes into the world of the internet, because they won’t need to figure stuff out on their own to the same degree.