I sense 12 coming soon if MS wants to EOL 10 without losing customers.
It was Microsoft’s kickback to Intel and later Qualcomm.
And what story could he tell to justify letting China get away with the worst incursion into another country since Ukraine, into a country we promised to protect and accepted money to do so?
I doubt even he can spin it that way. His popularity would plummet and that’s something he cares a lot about since he has a fragile ego.
Lol, the biggest reason you’re wrong to make that comparison is that Hong Kong was never its own country. Hong Kong was a British colony and then a Chinese special administrative region (SAR), given a degree of administrative autonomy by the Chinese government voluntarily as part of a treaty with the British. The treaty expired, and then China decided to change the rules for Hong Kong.
Taiwan, meanwhile, is the territory that the Republic of China (ROC, aka Nationalist China) held on to when it lost the Chinese Civil War to the People’s Republic of China (PRC, aka Communist China), which now controls the mainland. The PRC has never controlled Taiwan, and the ROC government that rules there does not answer to the PRC, nor has it ever. The PRC and its Communist Party can claim that Taiwan is a rogue province all they want, but that’s a lie. Taiwan is not theirs; it was and still is under the government of the ROC, even if the ROC has lost the rest of its territory to the PRC since the Civil War and World War 2.
Hong Kong’s city government allowed China to take more direct control because it always answered to China once the British handed it over. Meanwhile, the ROC government in Taiwan has never answered to the PRC and it never will. Opposing the PRC is literally one of the main goals of that government and country, and I don’t think there are any major politicians there who want to join the PRC willingly, nor would any such politician be popular there.
Long story short, the ROC (Taiwan) and Hong Kong are not even remotely comparable, and the former won’t just accept any attempted takeover by the communists.
Trump hates China so he would do it just to show that he’s opposing it.
China will never invade Taiwan. Taiwan has a backchannel protection deal with the US and China knows it.
I’ve been using a cheap N200 laptop as a testbed for novel OS kernel development and it’s absolutely perfect.
The LattePanda Mu is configurable and can operate anywhere from as little as 6W up to 35W depending on your use case. The much more affordable Radxa X4 runs on as little as 18W, or up to 25W if you need to power peripherals via USB.
Both use an Intel Processor N100 SoC which is surprisingly powerful and efficient given that the Processor N series is the new branding for what used to be called Celeron.
The prices are also competitive. The X4, for example, sells for exactly the same price as the Raspberry Pi 5 at every memory capacity tier, while having a CPU that’s twice as powerful and compatible with way more software and OSes, plus a GPU that is absurdly more powerful and fully publicly documented, such that there are open source drivers for every OS under the sun.
As an OS developer both professionally and outside of work I have to say I really despise non-x86 platforms and ARM in particular for how fragmented they are and their vendors’ utter disregard for any form of standardization at the platform, firmware, or peripheral levels. That’s why I’m really thankful that devices like these exist and are affordable.
Lower power draw is about it. But there are now x86 SBCs that can also run on as little as 6W so there’s no reason to compromise and use ARM’s non-standard fragmented BS.
That’s fair. I see what I see at an engineering and architecture level. You see what you see at the business level.
I respect that. Finance was my old career and I hated it. I liked coding more, so I went back, got my M.S. in CS, and now do embedded software, which I love. I left finance specifically because of what we’ve both talked about: it’s all about using numbers to tell whatever story you want, and it’s filled with corporate politics. I hated that world. It was disgusting, and people were terrible, two-faced assholes.
That said, I stand by my statement, because I and most of my colleagues in similar roles get continued, repeated, and expanded-scope engagements. Definitely in LLMs and genAI in general, especially over the last 3-5 years or so, but definitely not just in LLMs.
“AI” is an incredibly wide and deep field; much more so than the common perception of what it is and does.
I think I need to amend what I said before. AI as a whole is definitely useful for various things, but what makes it a fad is that companies are basically committing the hammer fallacy with it. They’re throwing it at everything, even things where it may not be a good solution, just to say “hey look, we used AI.” What I respect about you guys at Nvidia is that you make really awesome AI-based tools and software that actually solve problems that other types of software and tools either cannot solve or cannot solve well, and that’s how it should be.
At the same time I’m also a gamer and I really hope Uncle Jensen doesn’t forget about us and how we literally were his core market for most of Nvidia’s history as a business.
Now this is where I push back. I spent the first decade of my tech career doing ops research/industrial engineering (in parallel with process engineering). You’d shit a brick if you knew how much “fudge-factoring” and “completely disconnected from reality—aka we have no fucking clue” assumptions go into the “conventional” models that inform supply-chain analytics, business process engineering, etc. To state that they “never make mistakes” is laughable.
What I said was that traditional software, if programmed correctly, doesn’t make mistakes. As for operations research, supply chain optimization, and all the rest of it, it’s no different from what I said about finance: you can make the models tell any story you want, and it’s not even hard. The flip side is that the decision makers in your organization should be grilling you as an analyst on how you came up with your assumptions and why they make sense. I actually think this is an area where AI could be useful, because if trained right it has no biases, unlike human analysts.
The other thing to take away from what I said is the “if it is programmed correctly” part, which is also a big if. Humans make mistakes, and we see it a lot in embedded, where in some cases we need to flash our code onto a product and deploy it somewhere we won’t be able to update it for a long time, or maybe ever, so testing and making sure the code works right and is safe is a huge deal. Tools like Rust help to an extent, but even then errors can leak through, and I’ve actually wondered how useful AI-based tools could eventually be in proving the correctness of traditional software or finding potential bugs and sources of unsafety. I think a deep-learning-based tool could make formal verification of software a much cheaper and more commonplace practice, and I think on the hardware side they already have that sort of thing. I know AMD/Xilinx use machine learning in their FPGA tools to synthesize designs, so I don’t see why we couldn’t use such a thing for software that needs to be correct the first time as well.
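To make the “Rust helps, but only to an extent” point concrete, here’s a toy sketch (the function and names are mine, hypothetical, not from any real codebase): a millisecond tick counter of the kind you’d find in embedded firmware, where a silent integer wraparound would be a latent bug you can’t patch after deployment. Rust’s `checked_add` returns `None` on overflow, forcing the caller to decide what correct behavior is instead of letting the error slip through silently; whether the caller’s decision is actually right is exactly the kind of logic question formal verification would still have to answer.

```rust
// Hypothetical embedded-style tick counter. With plain `+` in release mode,
// u32 arithmetic would silently wrap around and restart the clock; with
// checked_add, overflow becomes an explicit Option the caller must handle.
fn advance_tick(counter: u32, delta: u32) -> Option<u32> {
    counter.checked_add(delta)
}

fn main() {
    // Normal case: the addition fits in a u32.
    assert_eq!(advance_tick(100, 50), Some(150));

    // Near the limit: instead of wrapping to a small value, we get None
    // and can choose to saturate, reset, or report an error.
    assert_eq!(advance_tick(u32::MAX - 1, 5), None);

    println!("ok");
}
```

The type system guarantees the overflow is surfaced, but it can’t tell you whether returning `None` (versus saturating or wrapping deliberately) is the right policy for a given device; that judgment, scaled across a whole codebase, is what cheap formal verification or AI-assisted bug finding would help with.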
So that’s really it. My only gripe at all with AI and DL in particular is when executives who have no CS or engineering background throw around the term AI like it’s the magic solution to everything or always the best option when the reality is that sometimes it is and other times it isn’t and they need to have a competent technology professional make that call.
To state this as simply as possible: I wouldn’t have a job if our customers weren’t seeing tremendous benefit from AI technology.
Right because corporate management doesn’t ever blindly and stupidly overinvest in fads that blow up in their faces…
The companies I work with are typically very sensitive to the CapEx and OpEx costs of AI; they self-serve in private clouds. If it doesn’t help them make money (revenue growth) or save money (efficiency), then it’s gone, and so am I.
You clearly have no clue what you’re on about. As someone with degrees and experience in both CS and finance, all I have to say is that’s not at all how these things work. Plenty of companies lose money on these things in the hopes that their FP&A projection fever dreams will come true. And they’re wrong much more often than you seem to think. FP&A is more art than science, and you can get financial models to support any argument you want to make to convince management to keep investing in what you think they should. And plenty of CEOs and boards are stupid enough to buy it. A lot of the AI hype has been bought and sold that way, in the hopes that it would be worthwhile eventually or that the alternatives couldn’t be just as good or better.
I’ve seen it happen; entire engineering teams laid off because a technology just couldn’t be implemented in a cost-effective way.
This is usually what happens once they finally realize spending money on hype doesn’t pay off and go back to more established business analytics, operations research, and conventional software which never makes mistakes if it’s programmed correctly.
LLMs are a small subset of AI and Accelerated-Compute workflows in general.
No one ever said otherwise. And we’re talking about AI only, no moving the goalposts to accelerated computing, which is a mechanism through which to implement a wide range of solutions and not a specific one in and of itself.
Next you’ll tell me the sky is blue.
ChatGPT is basically the best LLM of its kind. As for Nvidia I’m not talking about hardware I’m talking about all of the models it’s trained to do everything from DLSS and ACE to creating virtual characters that can converse and respond naturally to a human being.
AI was 99% a fad. Besides OpenAI and Nvidia, none of the other corporations bullshitting about AI have made anything remotely useful using it.
They don’t call TSMC Taiwan’s silicon shield for nothing.
I have an RTX 4090 and use the proprietary driver. It works just fine on Fedora and Windows 11 alike. So IDK what to tell you. AMD and Intel are even easier since the drivers are baked into the Linux kernel. I have an AMD iGPU in my desktop and an Intel one in my laptop. Both work just fine and handle power management correctly.
Lynx
I’m glad I don’t use that piece of shit.
Firefox or nothing.
Wine doesn’t support everything and is still broken for a lot of things.