Yeah, great title right there. Very creative.
Anyway, I don’t really know how to make a full blog post out of this, since the installation was pretty straightforward. The disks came in: two new, one slightly used out of my main PC. I installed an additional internal HDD bay to make swapping them easier if needed (and for the more authentic server look).

One thing I want to mention for anyone trying to do the same: if a drive bay (or any other device) has two Molex power connectors, connect them to two separate leads from the power supply. Technically, with a decent PSU there shouldn’t be any issues either way, but there is a reason there are two power sockets, and that reason is electric current. The more current a cable carries, the hotter it can get, and the last thing you want inside your case is more hot components. PCIe and other power cables don’t have this problem, since they should (hopefully) be rated for the power draw of PCIe devices, but Molexes are, well, Molexes. Things like disks, especially the 3.5-inch ones, can spike in power usage at times, and that in turn can lead to unnecessarily crispy cables.
But let’s get back to the setup. The disks I bought are Seagate IronWolf NAS drives, the slower ones, not the Pros. Yes, they were the cheaper option, but not by all that much. The main reason behind that choice wasn’t the price, but efficiency. I have a 1Gbit network set up at home, which at best gives you about 125 MB/s. Add OS overhead and other network traffic on top of that, and you’ll be lucky to hit 100 MB/s, which is about the speed I was getting when that one 4TB drive was still mounted in my main PC. So all in all, there was no need for a faster drive, since the network itself would be the bottleneck anyway. While I’d love to get my hands on 2.5 or even 10 gigabit network gear, it’s way beyond my budget and definitely not something I need for day-to-day operations.
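The back-of-envelope math behind that ceiling is simple, and if you want to sanity-check your own link, iperf3 is the usual tool (the hostname below is just an example):

```shell
# Theoretical ceiling of a 1 Gbit/s link:
# 1000 Mbit/s divided by 8 bits per byte = 125 MB/s.
echo $((1000 / 8))   # prints 125

# In practice, protocol overhead (TCP/IP, SMB) shaves off a chunk,
# so sustained transfers around 100 MB/s are about right.

# To measure a real link, run iperf3 on both ends
# (assuming it's installed; "nas.local" is a made-up hostname):
#   on the NAS:   iperf3 -s
#   on the PC:    iperf3 -c nas.local
```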
Once that was done and the drives were in, I wiped them and added them to the ZFS pool, then TrueNAS did some magic and it was done. Well, the disk space was done. What was left was to cook up some datasets, put up some Samba shares, create users, and give them the right permissions.
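For the curious, here is a rough sketch of what TrueNAS is doing under the hood when you click through its UI. The pool name, dataset names, disk identifiers, and username below are all made-up examples, not what I actually used:

```shell
# Create a RAIDZ1 pool out of three disks (hypothetical disk IDs):
zpool create tank raidz1 \
  /dev/disk/by-id/ata-DISK1 \
  /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3

# Carve the pool into datasets for different uses:
zfs create tank/media
zfs create tank/backups

# Create a user and hand a dataset over to them
# (TrueNAS mounts pools under /mnt):
useradd -m alice
chown -R alice /mnt/tank/backups

# A minimal Samba share definition would then go in smb.conf:
#   [backups]
#     path = /mnt/tank/backups
#     valid users = alice
#     read only = no
```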
I would write some instructions, but firstly, there are way better and more competent sources on the internet, and secondly, I have no idea how I did it in the end, because, as every engineer will tell you, the best way forward is to ignore the instructions and stumble into a working solution by trial and error. But it works for me, and that’s the important part 😀
Either way, I now have three 4TB drives RAIDed into an 8-ish TiB pool with a couple of datasets, each with permissions specific to what I intend to use them for. There are some encryption and compression policies in place, as well as some snapshotting, but nothing fancy.
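Those policies boil down to a handful of ZFS properties and a snapshot schedule. A sketch of the command-line equivalents, again with made-up pool and dataset names:

```shell
# lz4 compression is cheap on CPU and usually a net win:
zfs set compression=lz4 tank/backups

# Check what's currently set on a dataset:
zfs get compression,encryption tank/backups

# Take a manual snapshot (TrueNAS can schedule these
# as periodic tasks instead):
zfs snapshot tank/backups@2024-01-01
zfs list -t snapshot
```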
So that’s it. The setup is done and I’ve already started moving my data over, so that’s all for now.