I have around 30 TB of data scattered across cloud services, external hard drives, a small Plex server, etc.

I would like to set up a storage server to hold everything, with a bit of computing power for other tasks (Plex, as mentioned).

I’m looking on eBay at the Dell PowerEdge T330 or similar, starting with 3x 18 TB drives; since it has 8 bays, I’ll have plenty of room for future upgrades. It also seems to have a decent CPU, so it should handle Plex and some other tasks without issues.

My main concern is noise: it is going to sit under a desk in my lab, not inside a rack. I saw some videos on YouTube and it doesn’t seem too noisy, but it’s hard to tell from a video. As for power usage, it will run mostly at idle I think, so I don’t expect it to draw too much.

Do you think it makes sense? Any suggestions?

As said, the only hard requirements are plenty of disk space for future expansion (6–8 3.5" drive bays), a decent CPU, and it shouldn’t be too noisy.

I’ve also considered buying parts and building it from scratch, using something like the new Intel N100 (which is quite capable with low power usage), a big case and so on, for a similar price.

  • puppynosee@lemmy.world · 1 year ago

    The current gen processors are hilariously efficient if you build the rest of the system properly. I will warn you that the low-watt rabbit hole gets crazy: simply moving a component from one slot to another can make a large difference in idle power draw. https://mattgadient.com/7-watts-idle-on-intel-12th-13th-gen-the-foundation-for-building-a-low-power-server-nas/ is a nice write-up on a low-power build that lists some of the things to consider when it comes to power usage and CPU idle states. At this point, I would stay away from server hardware unless you have a specific reason to run it. It may be cheap to buy at first, but you end up paying a lot more in the long run on power usage.

    I would also recommend you look at Unraid. It has a somewhat proprietary RAID format that does the parity calculations at the file level and not the block level. The result is that Unraid is able to keep drives spun down and off much more aggressively than a traditional RAID setup: it only needs to spin up the drive that has the file, not the whole disk pack.
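    If you want to see how deep your CPU cores actually idle, a quick sketch like this (reading the standard Linux cpuidle sysfs files; deeper package C-states like C8/C10 are easier to watch with powertop) gives a rough picture:

    ```python
    import glob
    import os

    # Rough sketch: per-core idle-state residency from the Linux cpuidle sysfs.
    # Only shows core C-states; use powertop for package C-states (C8/C10).
    base = "/sys/devices/system/cpu/cpu0/cpuidle"
    for state in sorted(glob.glob(os.path.join(base, "state*"))):
        with open(os.path.join(state, "name")) as f:
            name = f.read().strip()
        with open(os.path.join(state, "time")) as f:
            micros = int(f.read().strip())  # total time spent in this state, in microseconds
        print(f"{name}: {micros / 1_000_000:.1f} s total residency")
    ```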

    • 7Sea_Sailor@lemmy.dbzer0.com · 1 year ago

      Do you by chance have an opinion on SnapRAID? The fact that I can easily add more drives to my pool down the road is a BIG win for me, and I can deal with the lost performance from not using “classic” RAID - but I’m not yet sure how stable SnapRAID really is.

      • puppynosee@lemmy.world · 1 year ago

        I do not. I think Unraid supports adding arbitrary numbers and sizes of drives, but the largest drive is always the parity drive. Under the hood I think Unraid is just mergerfs or something. If you really want to roll your own Linux thing and have the same functionality you can, but the management is a lot harder. I am in the armchair-engineering stage of the next iteration of storage in my homelab, so I will definitely check out SnapRAID.

        • 7Sea_Sailor@lemmy.dbzer0.com · 1 year ago

          Thank you for your insight. I skimmed the Unraid documentation and found that Unraid arrays apparently support adding drives to an existing pool whenever you please, which is great. To be honest, I’m just trying to find reasons not to have to buy Unraid, but it’s getting harder and harder to justify. I was planning to go with either SnapRAID or ZFS on OMV, because it seems simple and free and OMV has some nice features… but now I’m not so sure anymore.

  • Possibly linux@lemmy.zip · 1 year ago

    I would look into consumer cases.

    For the server setup I would use a dedicated SATA or SAS card. They are more reliable and add flexibility.

    For the operating system I would go with TrueNAS on bare metal, or Proxmox with PCIe passthrough of the SATA card to a TrueNAS VM. ZFS, and therefore TrueNAS, benefits from lots of RAM and a multi-core CPU. With your 3 hard drives I would get a 6-core CPU and at least 32 GB of RAM; more RAM would be better.
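    As a rough back-of-the-envelope for what those three drives give you under ZFS (my estimates, not exact figures; real usable space is a bit lower due to metadata and slop space):

    ```python
    # Napkin math for 3x 18 TB drives under ZFS (estimates only).
    drives = 3
    size_tb = 18                        # marketing terabytes (10^12 bytes)
    size_tib = size_tb * 1e12 / 2**40   # ~16.4 TiB as the OS reports it

    raidz1_usable = (drives - 1) * size_tib   # one drive's worth of parity
    mirror_usable = size_tib                  # 2-way mirror + cold spare, for comparison

    print(f"each drive:   ~{size_tib:.1f} TiB usable")
    print(f"RAIDZ1 of {drives}:  ~{raidz1_usable:.1f} TiB usable")
    print(f"2-way mirror: ~{mirror_usable:.1f} TiB usable")

    # The old "1 GB of RAM per TB of storage" guideline is folklore rather than
    # a hard rule, but it shows why 32 GB is a comfortable starting point here.
    print(f"rule-of-thumb RAM: ~{drives * size_tb} GB")
    ```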

    I have no idea what your budget is, but this setup isn’t cheap.

    • sx44@lemmy.world (OP) · 1 year ago

      Yeah, that’s also a good point. I’m trying to keep the budget low, and since the disks alone will cost around 600-700 EUR, I’m trying to stay under 300 EUR for the rest of the system; that’s why I’m looking at used server equipment.

  • snekerpimp@lemmy.world · 1 year ago

    I’m currently using a Dell Precision T7920: it can hold up to ten 3.5" drives, has more PCIe slots than I know what to do with, and dual Xeon Silver CPUs. I’ve had it running 24/7 for about a year now, first in my living room; it’s in my rack now, and you can’t even tell it’s there. The PowerEdge series might be different when it comes to fan curves, but I can’t imagine it would be too different.

  • HousePanther@lemmy.goblackcat.com · 1 year ago

    Have you thought about building your own? You could build a server out of a case like this IN WIN one. What OS do you plan to use? Do you need Windows file sharing? Are you willing to go with larger SSDs instead of spinning disks? If you give me some more details, I might be able to help you. I am not a fan of Dell.

  • TCB13@lemmy.world · 1 year ago

    I’ve also considered buying parts and building it from scratch, using something like the new Intel N100 (which is quite capable with low power usage), a big case and so on, for a similar price.

    Yes, do that. You won’t need a very powerful CPU. Consider that you’ll saturate your gigabit Ethernet before your CPU when using SMB, so it doesn’t make sense to buy overly powerful hardware. Another important thing to consider is the software: if you go with bare Debian, fully headless with no desktop environment, and install the things you need manually - avoiding Docker, TrueNAS and other unnecessary overhead - you’ll use WAY fewer resources and your system will run considerably quieter.
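    Quick sanity check on that claim, with assumed typical numbers (your drives and NIC may differ):

    ```python
    # Why gigabit Ethernet, not the CPU or the disks, is the bottleneck for SMB.
    gigabit_wire = 1_000_000_000 / 8 / 2**20   # ~119 MiB/s raw line rate
    smb_realistic = 110                         # MiB/s, typical SMB over 1 GbE (assumed)
    hdd_sequential = 200                        # MiB/s, modern 7200 rpm HDD, sequential (assumed)

    print(f"1 GbE wire speed:      ~{gigabit_wire:.0f} MiB/s")
    print(f"SMB over 1 GbE:        ~{smb_realistic} MiB/s in practice")
    print(f"single HDD sequential: ~{hdd_sequential} MiB/s")
    # A single modern disk already outruns the LAN, so a modest CPU is plenty.
    ```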

    Here is a real-world example with an i5-7400 + 8 GB RAM + 20 TB BTRFS RAID 1 NAS I built:

    While transferring a big file from a Windows machine, the NAS CPU handles it just fine while the network is already saturated. Btw, I got that CPU + board + RAM for 70€ second hand.

    The only thing to consider is that if you go for Plex, you’ll most likely want a CPU with good support for hardware video decoding. Do your research on that, but believe me when I say that at gigabit speeds and with mechanical hard drives, ANY CPU will manage.
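    If you want a quick way to check whether a machine exposes a GPU render node (what Plex/ffmpeg use for VA-API / Quick Sync hardware transcoding on Linux), something like this works; the path is the usual default, so treat it as a rough check only:

    ```python
    import glob

    # Look for a DRM render node; no node usually means no hardware transcoding.
    nodes = glob.glob("/dev/dri/renderD*")
    if nodes:
        print("render node(s) found:", ", ".join(nodes))
    else:
        print("no /dev/dri render node - hardware transcoding likely unavailable")
    ```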

    • sx44@lemmy.world (OP) · 1 year ago

      That’s my current approach: I have a small ASRock J3455 board in a micro-ATX case running bare-metal Debian, but it doesn’t have room for more than 2 disks, which is why I’m looking for an upgrade to consolidate everything.

      The reason I’m also checking out used server stuff is mainly cost: here (EU) I can get that PowerEdge for around 250-300 EUR and I just have to pop in some disks and I’m done, while for that money you only get a decent CPU + mobo + RAM combo and then have to add a case, PSU and cooling on top. Of course everything second hand; buying new would cost even more.

      • TCB13@lemmy.world · 1 year ago

        EU here as well. The thing is that old server hardware is always more expensive than comparably old consumer hardware and wastes a LOT more power. That is why I would avoid it.

        • sx44@lemmy.world (OP) · 1 year ago

          On power I agree, but as I said, on eBay I found that Dell T330 (250-300 EUR) for the same price as a 6th-gen i7, a mobo and 16 GB of RAM, on top of which I’d have to add cooling (a decent one is around 50 EUR), a case (one for that many disks is around 150 EUR minimum) and a PSU (something not too shoddy, so 50-60 EUR); that’s why I started considering it.

          Which case are you using? The only affordable one I’ve found that can house 8 disks is the Phanteks Enthoo, for around 120 EUR.

    • PhantomPhanatic@lemmy.world · 1 year ago

      I’ve also recently gone this way. I used my old gaming PC’s motherboard, RAM, and CPU, put Ubuntu Server on it, and it’s been running great. Mine is running software RAID 5 and has room for 7 drives (though I’m only using 3 at the moment). I did put a GTX 1650 in it for Plex transcoding and that works great as well.