• 0 Posts
  • 53 Comments
Joined 1 year ago
Cake day: June 13th, 2023




  • I’ve used 4k and 1440p monitors, and my TV is 4k as well. For desktop use, 4k isn’t really worth it to me because of the hardware needed to drive it at a decent speed. However, once I got my hands on a 170Hz 1440p monitor, I couldn’t go back to anything less. It’s extremely noticeable. The higher refresh rate and the reasonable bump in pixel density make text much clearer, especially in motion.

    For content viewing though, whether 4k on a TV matters depends on how much of your field of view the TV occupies. Most of the time, a high quality panel is worth much more than higher pixel density. There is a massive difference between a basic 4k big box store TV and a 4k LG OLED. The color, even outside of HDR content, is just so much better, and the true blacks are fantastic. Resolution is nice, but honestly, OLED color is that good.








  • You missed the point of my example entirely. How can those commits and those people exist in that instance if they don’t have accounts? I was refuting your statement that a frontend needs an account. By mirroring an existing repo, as an example, you could verify that my claim is correct. Git as a platform is already decentralized and doesn’t require accounts. You could email someone your git diffs and it would function the same.
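
    As a rough sketch of that email workflow (the addresses here are placeholders, not anything real):

        # turn the last few commits into plain-text patch files
        git format-patch -3 HEAD

        # mail them off (git send-email needs SMTP configured) -- or just attach the files by hand
        git send-email --to=someone@example.com *.patch

        # on the receiving end, apply them to a local clone -- no server or account involved
        git am *.patch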


  • You need a frontend

    Yes, but the requirements of said frontend are very small.

    and a frontend needs an account.

    Not required at all, actually. For example, mirror a GitHub repo in Gitea. You’ll see all the commits, their messages, and who made them, yet that Gitea instance isn’t publicly accessible. None of those people have an account, and none of them could log in even if they could reach the instance. A commit is just attached to a name, which is user configurable and far less data-minable than a “real” account.
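
    As a minimal illustration (the repo URL and names are made-up placeholders), the “identity” on a commit is nothing more than local config:

        # read access to a public repo needs no account at all
        git clone https://github.com/someuser/somerepo.git
        cd somerepo

        # the authors shown in the history are just name/email strings stored in each commit
        git log --format='%an <%ae>  %s' -5

        # and the identity on your own commits is whatever the local config happens to say
        git config user.name "Whatever Name"
        git config user.email "whatever@example.org"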



  • Is podman-compose really dead? Their GitHub page looks active at a glance. The tooling is so similar that I use podman for local testing and deploy to docker, but I’ve also done the reverse. As long as you’re not using really exotic parameters, it’s basically a drop-in replacement; I’ve even used GPU passthrough for AI projects with no problem in both docker and podman. At the end of the day, they’re just slightly different frontends for the same backend.

    As far as docker support goes, it’s often as simple as providing a Dockerfile, which is basically the same thing as your build scripts. These days I’ve often used the Dockerfile INSTEAD of the readme to figure out how to compile some projects.
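
    A rough sketch of what I mean (project name, image name, and port are made up; adjust to the actual repo). The same Dockerfile and the same commands work with either CLI:

        # a bare-bones Dockerfile at the repo root often documents the build better than the readme:
        #   FROM debian:bookworm
        #   RUN apt-get update && apt-get install -y build-essential
        #   COPY . /src
        #   WORKDIR /src
        #   RUN make
        #   CMD ["./myapp"]

        # build and run it with docker...
        docker build -t myapp .
        docker run -d --name myapp -p 8080:8080 myapp

        # ...or with podman, using the exact same arguments
        podman build -t myapp .
        podman run -d --name myapp -p 8080:8080 myapp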



  • RAID0 (combining both drives’ capacities) is not really tiered storage. You would want RAID1 (each drive is a copy of the other), but even that isn’t a backup. How will you be monitoring the drives so that you know if one of them actually fails? (There’s a quick SMART-based sketch at the end of this comment.)

    I don’t think the RPi has a new enough kernel, but with bcachefs you can do tiered storage: combine the SSD and hard drives into a single filesystem, make the SSD the read/write cache, and give the whole pool replicas=2, so that if one drive dies you still have a copy on the other drive. Do be aware that this setup is still not a backup, however.
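
    If the kernel does have bcachefs, the format step looks roughly like this (device names, labels, and mount point are just placeholders for whatever the actual disks are):

        # SSD as the fast foreground/promote tier, the two HDDs as the background tier,
        # with every extent kept on two devices (replicas=2)
        bcachefs format \
            --label=ssd.ssd1 /dev/nvme0n1 \
            --label=hdd.hdd1 /dev/sda \
            --label=hdd.hdd2 /dev/sdb \
            --replicas=2 \
            --foreground_target=ssd \
            --promote_target=ssd \
            --background_target=hdd

        # mount all member devices as one filesystem
        mount -t bcachefs /dev/nvme0n1:/dev/sda:/dev/sdb /mnt/pool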
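
    And on the monitoring question above, a simple starting point (device names again placeholders) is SMART health checks with smartmontools:

        # quick pass/fail health check per drive
        smartctl -H /dev/sda
        smartctl -H /dev/sdb

        # or let the smartd daemon watch everything and mail you about failures/bad sectors,
        # with a line like this in /etc/smartd.conf:
        #   DEVICESCAN -a -m you@example.com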


  • I’ve thought about how I could handle disaster recovery for my homelab environment, but I haven’t come to any good solutions. For example, say my main concern is being hit by crypto ransomware. I can’t just recover from a regular backup, since I’m not sure how to make a backup without that backup being encrypted alongside everything else. I mainly just back everything up to my file server, which is then synced to the cloud, so in that setup my cloud backups would be lost as well.

    Would you have some starting points on how others handle disaster recovery? I’d like to avoid manually making an offline backup, because inevitably I’d forget to do it, which would make it useless anyway.