You’re more likely going to get stuttering or asset streaming issues which are going to have more impact than losing a few fps.
Except for cases where it’s beneficial to keep files on a tmpfs in RAM, for speed or to save your SSD some unnecessary writes.
https://wiki.archlinux.org/title/Firefox/Profile_on_RAM is a good example.
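As a minimal sketch of the idea behind that wiki page (path, user, and size are placeholders, and you’d still need the wiki’s sync scripts to persist the profile back to disk):

```
# /etc/fstab — mount a tmpfs over the Firefox profile directory
# so reads/writes happen in RAM instead of on the SSD
tmpfs  /home/user/.mozilla/firefox/profile.default  tmpfs  rw,nosuid,nodev,size=512M  0  0
```

Without syncing the tmpfs contents back to disk on shutdown, the profile is lost on reboot, which is why the wiki pairs the mount with rsync scripts or profile-sync-daemon.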
I only use rolling releases on my desktop and have run into enough issues with apps breaking because of changes in library updates that I’d rather they just bundle whatever version they’re targeting at this point. Sure, that might mean they’re using a less secure version, and they’re less incentivized to stay on the latest version and fix those issues as they arise, but I’m also not as concerned about the security implications because everything is running as my unprivileged user and confined to the flatpak.
I’d rather have a less secure flatpak than need to downgrade a library to make one app I need work and end up with a less secure system overall.
Definitely. I’d rather have a “good and specific reason” why your application needs to use my shared libraries or have access to my entire filesystem by default.
IIRC from running into this same issue, this won’t work with the volume bind mounts set up that way: Docker treats the movies and downloads directories as two separate filesystems, and hardlinks don’t work across filesystems.
If you bind mounted /media/HDD1:/media/HDD1 it should work, but then the container will have access to the entire drive. You might be able to get around that by running the container as a different user and only giving that user access to those two directories, but docker is also really inconsistent about that in my experience.
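As a sketch, assuming a typical torrent client in compose (the service name and paths are illustrative):

```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    volumes:
      # One bind mount for the whole drive, so downloads/ and movies/
      # are on the same filesystem inside the container and hardlinks
      # between them work. Two separate binds would break hardlinking.
      - /media/HDD1:/media/HDD1
```

The trade-off is exactly what’s described above: the container can now see the whole drive, so you’d rely on container-user permissions to limit what it can actually touch.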
lol Japan invents the three major optical disc formats that became ubiquitous, and their government says fuck that and keeps on using floppy disks
If you want Proxmox to dynamically allocate resources you’ll need to use LXCs, not VMs. I don’t use VMs at all anymore for this exact reason.
RDP does not fill the same role as Teamviewer at all. The M$ alternatives would be Quick Assist or the older MSRA.
America didn’t drop anything because they weren’t saying it in the first place; the Soviets were. America also isn’t the one that coined a new phrase for it; British royalists were, and they probably had no knowledge of the Russian phrase. All of this was explained in the article you linked.
> 1. It’s a euphemism for “And You Are Lynching Negroes” - that’s literally what people used to say instead of whataboutism
lol who do you think was saying this, and how is “whataboutism” in any way a euphemism for it? Did you even bother to read the article you linked?
You’re right, nobody can ever know even remotely everything.
Luckily, the same device you used to post that comment can also be used to check if what you are about to say is actually true, so you can prevent yourself from spreading misinformation like this in the future.
I also take money from possible fascists because I need it to survive. It’s called having a job.
I think Wayland is at a point now where I’d be comfortable recommending it to beginners. I’m on nvidia and just switched myself in the past month because I felt like it was finally ready.
To me this is actually a good move for Ubuntu’s reputation.
Losing good reputation or losing bad reputation?
Am I missing something in this article? I’m not defending either company, but it doesn’t seem like they actually have any evidence to confirm either is doing this.
> The world’s top two AI startups are ignoring requests by media publishers to stop scraping their web content for free model training data, Business Insider has learned.
It claims this, but then they say this about the source of this info:
> TollBit, a startup aiming to broker paid licensing deals between publishers and AI companies, found several AI companies are acting in this way and informed certain large publishers in a Friday letter, which was reported earlier by Reuters. The letter did not include the names of any of the AI companies accused of skirting the rule.
So their source doesn’t actually say which companies are doing this, but then they jump straight into this:
> AI companies, including OpenAI and Anthropic, are simply choosing to “bypass” robots.txt in order to retrieve or scrape all of the content from a given website or page.
So they’re just concluding that based on nothing and reporting it as fact?
Pretty sure they’re talking about generative AI created deepfakes being easier than manually cutting out someone’s face and pasting it on a photo of a naked person, not comparing Adobe’s AI to a different model.
The only one I can think of is that Source might still have some id code in it from the goldsrc days, but that was before it was open sourced.
That I’m not sure of. My proxmox host is headless and none of my containers have a GUI so I haven’t tried.
You can also pass the GPU to multiple LXCs that will share it vs it being tied to a single VM. I use VMs as little as possible in Proxmox these days.
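One common sketch of sharing a GPU with an LXC, assuming an Intel/AMD render node and a privileged-enough container (the container ID, device path, and DRM major number 226 are examples to adapt):

```
# /etc/pve/lxc/101.conf — bind the host's render node into the container.
# Repeat the same two lines in other containers' configs to share the GPU.
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```

Because it’s a bind mount rather than PCI passthrough, the host keeps the driver loaded and multiple containers can use the device concurrently, which is exactly what a VM setup can’t do.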
I like the workflow of having a DNS record on my network for *.mydomain.com pointing to Nginx Proxy Manager, and just needing to plug in a subdomain, IP, and port whenever I spin up something new for super easy SSL. All you need is one Let’s Encrypt wildcard cert for your domain and you’re all set.
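If your local resolver is dnsmasq (or Pi-hole, which uses it underneath), the wildcard record side of that is one line; the domain and IP here are placeholders for your own domain and the host running Nginx Proxy Manager:

```
# dnsmasq config: resolve mydomain.com and every *.mydomain.com
# to the Nginx Proxy Manager host on the LAN
address=/mydomain.com/192.168.1.10
```

After that, each new service only needs a proxy host entry in NPM (subdomain, upstream IP, port) and it’s immediately reachable over HTTPS via the wildcard cert.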