That’s why I only use mentats.
This seems like the most plausible explanation. The only other thing I can think of is that they want to develop their own Copilot (which I’m guessing isn’t available in China due to the U.S. AI restrictions?), and they’re just using their existing infrastructure to gather training data.
I wonder what the general use is for the Mac Mini, MacBook Air, iMac, and MacBook Pro? People generally seem to do all the lightweight stuff like social media consumption on their phones, and desktops/laptops are used for the heavier stuff. The only reason I’ve ever used a Mac was for iOS development.
I was looking at notebooks at Walmart the other day, and I was amazed that almost all of them had the same amount of RAM as my phone, or less.
I think SSDs are also soldered to the mainboard on most Apple products.
Lol, good catch.
Wary of the bill. Seems like every bill involving stuff like this is either designed to erode privacy or for regulatory capture.
Edit: spelling
Where I live, I would still need to pay for a VPN to use torrents. I’ve been banned from an ISP before for torrenting (thankfully, I had multiple ISPs available to me).
At the moment, I just “pay” legally because I get a few “free” streaming plans from my mobile provider and ISP. Occasionally, I just use a free streaming site if I really want to watch something that’s not available to me. Every once in a while, I try anonymous p2p such as Tribler or torrenting over I2P, but it’s still extremely slow, unfortunately. I’ve never used Usenet, but I think it’s about the same price as a VPN or seedbox would be?
Here are some that I’ve liked (haven’t played them in years though):
I’ve tried a couple of rolling distros (including Arch), and they always “broke” after ~6 months to a year. Both times it was because an update messed up something with my proprietary GPU drivers, IIRC. Both times, I just installed a different distro, because it would’ve probably taken me longer to figure out what the issue was and fix it. I’m currently just using Debian stable, lol.
It’s also trained on data people reasonably expected would be private (private GitHub repos, Adobe Creative Cloud, etc.). Even if it were just public data, it could still be dangerous. E.g., it could be possible to give an LLM a prompt like, “give me a list of climate activists, their addresses, and their employers” if it was trained on this data or was good at “browsing” on its own. That’s currently not possible due to the guardrails on most models, and I’m guessing they try to avoid training on personal data that’s public, but a government agency could make an LLM without these guardrails. That data might be public, but it would take a person quite a bit of work to track down compared to the ease and efficiency of just asking an LLM.
I use LLMs just about every day. It’s better than web-search for certain things, and is useful for some coding tasks. I think they’re over-hyped by some people, but they are useful.
I’ve never used it, but the idea is that nutrient uptake will be faster than if someone just dressed the top of the soil with compost. The extra aerobic bacteria could also be beneficial.
Most CEOs are also conmen and not any smarter. It’s mostly just a nepotism racket at the executive level.
A lot of the “elites” (OpenAI board, Thiel, Andreessen, etc) are on the effective-accelerationism grift now. The idea is to disregard all negative effects of pursuing technological “progress,” because techno-capitalism will solve all problems. They support burning fossil fuels as fast as possible because that will enable “progress,” which will solve climate change (through geoengineering, presumably). I’ve seen some accelerationists write that it would be ok if AI destroys humanity, because it would be the next evolution of “intelligence.” I dunno if they’ve fallen for their own grift or not, but it’s obviously a very convenient belief for them.
Effective-accelerationism was first coined by Nick Land, who appears to be some kind of fascist.
We’re close to peak using current NN architectures and methods. All this started with the introduction of the transformer architecture in 2017. Advances in architecture and methods have been fairly small and incremental since then. The advancements in performance have mostly come from throwing more data and compute at the models, and diminishing returns have been observed. GPT-3 cost something like $15 million to train. GPT-4 is a little better and cost something like $100 million to train. If the next model costs $1 billion to train, it will likely be a little better still.
LLMs do sometimes hallucinate even when giving summaries, i.e. they put things in the summaries that were not in the source material. Bing did this often the last time I tried it. In my experience, LLMs seem to do very poorly when their context is large (e.g. when “reading” large or multiple articles). With ChatGPT, its output seems more likely to be factually correct when it just generates “facts” from its model instead of “browsing” and adding articles to its context.
I think part of the problem is that new cars are bought mostly by fairly well-off individuals, with other people buying used cars. Economy cars sell poorly in the U.S.
China can’t arrest people in the U.S. The U.S. and state governments can.
Same. I think I’ve read that a single GPT-4 instance runs on a 128-GPU cluster, and ChatGPT can still take something like 30s to finish a long response. An H100 GPU has a TDP of 700W. Hard to believe that uses only 10x more energy than a search that takes milliseconds.
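To make that concrete, here’s a rough back-of-envelope calculation using the same rumored numbers from above. The per-search figure is an often-cited ballpark (~0.3 Wh per web search), not a measured value, and this assumes the whole cluster runs at full TDP serving one response:

```python
# Back-of-envelope energy comparison. All figures are rumors or
# ballpark estimates, not measurements.

GPUS_PER_INSTANCE = 128   # rumored cluster size for one GPT-4 instance
GPU_TDP_W = 700           # H100 TDP in watts
RESPONSE_S = 30           # time for a long ChatGPT response

# Energy for one long response if the whole cluster runs at TDP:
llm_joules = GPUS_PER_INSTANCE * GPU_TDP_W * RESPONSE_S  # 2,688,000 J ~= 0.75 kWh

# Often-cited ballpark for a single web search: ~0.3 Wh
search_joules = 0.3 * 3600  # 1,080 J

print(llm_joules / search_joules)  # ~2489x, far more than 10x
```

This is an upper bound, since in practice the cluster batches many users’ requests at once and GPUs don’t sit at TDP the whole time, but even a couple orders of magnitude of slack still leaves it well above 10x.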