Dominance*
*- if you ignore the actual dominant party
Tux Racer go brrrr
sure, I’m not saying GPT-4 is perfect, just that it’s known to be a lot better than 3.5. Kinda why I’d be interested to see how much better it actually is.
Worth noting this study was done on GPT-3.5; GPT-4 is leagues better than 3.5. I’d be interested to see how this number has changed.
However, if you ask me to pick one specific project, I get overwhelmed because I don’t know what’s reasonable.
I don’t know enough to know whether my ideas are achievable, or whether I’d just be bashing my head against the wall. I don’t know if they’re laughably simple tasks, multimillion-dollar propositions, or Goldilocks ideas that would be perfect for learning a programming language.
List out some of the ideas you’re thinking of. While it may not be obvious to you, someone seasoned (me or someone else) might notice at least a general theme or idea and point you in the right direction for where you should go and what you should learn, regardless of whether the projects are reasonable.
Note: most projects take teams to realize, so if your ideas are too large, they might not be feasible to do alone.
What are you looking to actually do with your programming skills? That will heavily influence which languages to recommend you learn. Do you want to make websites? Build games? Do AI stuff? Create enterprise-level software? Something else?
whenever you start a game, there’s always a phantom player 2 that joins, and it absolutely wrecks the hardest difficulty
You missed out, bro. It was you from the future calling to warn you of your dire fate and how to avoid it.
I agree with the other poster; you should look into Proxmox. I migrated from ESXi to Proxmox about 7-8 years ago, and honestly it’s been WAY better than ESXi. The migration process was pretty easy too; I was able to bring over the images from ESXi and load them directly into Proxmox.
That’s because they legally can’t include the cores in the Steam version. You’re able to go add any additional cores you want, however.
It’s probably still perfectly safe to eat; it likely just tastes like hot garbage. Frozen food doesn’t technically expire, it just slowly gets more and more freezer-burned, which degrades the quality and taste. It remains safe to eat indefinitely, however.
Running the *arr services on a Proxmox cluster to download to a device on the same network. I don’t think there would be any problems, but I wanted to see what changes would need to be made.
I’m essentially doing this with my setup. I have a box running Proxmox and a separate networked NAS device. There aren’t really any changes, per se, other than pointing the *arr installs at the correct mounts. One thing to note: I would make sure that your download, processing, and final locations are all within the same mount point, so that you can take advantage of atomic moves.
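Why the same mount point matters: an “atomic move” is really just a rename, and a rename only works within a single filesystem. Across mounts, tools have to fall back to a slow copy + delete. A minimal sketch (the `move` helper here is hypothetical, not part of any *arr tool):

```python
import os
import shutil

def move(src, dst):
    """Try an atomic rename; fall back to copy + delete across mounts."""
    try:
        os.rename(src, dst)   # instant and atomic, but only within one filesystem/mount
        return "rename"
    except OSError:
        shutil.move(src, dst)  # cross-mount fallback: copies the data, then deletes src
        return "copy"
```

If download and final locations share a mount, the fast `rename` path is taken every time; if they don’t, every completed download gets fully re-copied.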
As of Java 21 (where this landed as a preview feature, JEP 445; it only became a standard feature in Java 25), you can actually just use:
void main()
You can’t go by “serving sizes” to compare things like that, because serving sizes are fairly arbitrary and are likely measured differently between products. You’d have to compare by net weight.
This lists the net weight as 27.1 pounds, or about 433 oz. A box of Kraft is 7.5 oz net weight; in other words, it’s almost 58 total boxes of Kraft mac and cheese. Which makes things way more in Kraft’s favor.
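The conversion above can be checked quickly (27.1 lb for the bulk listing and 7.5 oz per box are the figures from the comment):

```python
# Compare by net weight: convert pounds to ounces (16 oz per pound),
# then divide by the per-box net weight of Kraft mac and cheese.
bulk_lb = 27.1
bulk_oz = bulk_lb * 16     # about 433.6 oz
box_oz = 7.5
boxes = bulk_oz / box_oz   # about 57.8, i.e. almost 58 boxes
print(round(bulk_oz), round(boxes))
```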
Isn’t that basically what government contracts are? Subscriptions?
I have Mediacom as well, but in a larger Midwest city. They have data caps here too, and I was paying about $100 for exactly this same plan up until a couple of years ago. They started upgrading our speeds/caps because a new fiber company (Metronet) is building in the area. Now I’m on 1 Gbps down with a 4 TB cap. I still plan to switch to Metronet when they finally light up my area, as it’s cheaper for the same speeds (plus no data caps).
Even more frustrating when you realize (and feel free to correct me if I’m wrong) that these new “AI” programs and LLMs aren’t really novel in terms of theoretical approach: the real revolution is the amount of computing power and data thrown at them.
This is 100% true. LLMs, neural networks, Markov chains, gradient descent, etc., etc., on down the line are nothing particularly new. They’ve collectively been studied academically for 30+ years. It’s only recently that we’ve been able to throw huge amounts of data, computing capacity, and tuning time at these models to achieve results that were unthinkable 10-ish years ago.
There have been efficiencies, breakthroughs, tweaks, and changes over this time too, but that’s to be expected. Largely, though, it’s sheer raw size/scale that has only recently become achievable.
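To illustrate how old the core machinery is: plain gradient descent, the update rule at the heart of neural-network training, fits in a few lines. The toy function and step size below are arbitrary choices for the sketch, not anything from a real training setup:

```python
# Minimal gradient descent on f(x) = (x - 3)**2.
# The update rule x <- x - lr * f'(x) is the same decades-old idea
# that, scaled up massively, trains today's neural networks.
def descend(lr=0.1, steps=100):
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3)  # f'(x) for f(x) = (x - 3)**2
        x -= lr * grad
    return x

print(descend())  # converges toward the minimum at x = 3
```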
I’m not sure what you’re trying to say here; LLMs are absolutely under the umbrella of AI, and they are 100% a form of AI. They are not AGI/strong AI, but they are absolutely a form of AI. There’s no “reframing” necessary.
No matter how you frame it, though, there’s always going to be a battle between the entities that want to use a large amount of data for profit (corporations) and the people who produce said content.
Actually, it would be either a TPU (tensor processing unit) or an NPU (neural processing unit). They’re purpose-built chips for AI/ML workloads.