Building a 1PB rig
-
Ah yeah, forgot which one you were configuring. But the result is still the same: you blow away the MRU cache in ARC every 4 minutes when you reread 30TB of data. It should still be fine, though.
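If the plot scans really do cycle through far more data than ARC can hold, one common mitigation is to cache only metadata for the dataset holding the plots, so the sequential reread stops evicting useful ARC entries. A sketch, assuming a hypothetical pool/dataset called `tank/plots`:

```shell
# Hypothetical pool/dataset names -- adjust to your layout.
# Keep sequential plot scans from churning ARC: cache metadata only.
zfs set primarycache=metadata tank/plots

# Verify the property took effect.
zfs get primarycache tank/plots
```

Whether this helps depends on the workload; if the plots are only ever read once per scan, the cached data blocks were never going to be reused anyway.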
@haitch Why replot if you already have everything set up?
-
@manfromafar Because I'm not completely happy with the way it's currently set up, so I was considering a replot anyway. This discussion just made me think about going back to ZFS pools again, which is what it originally was.
-
@haitch What (aside from the hard 4% reservation) made you leave ZFS?
When I first started I had a pile of rusty Seagate ST2000VX drives, and having them in a raidz was so much better than hiding them behind a RAID controller. They leaked bits, but I was still able to mine them without errors. I'm so content that I haven't even looked at other filesystems in the last 3 years. Having cache-control mechanisms at this level is very helpful. I wouldn't replot, though. Or do you have all disks tied up in a single large object?
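The raidz setup described here can be sketched roughly as follows (device names and pool name are placeholders, and the redundancy level is a choice, not what the poster actually ran):

```shell
# Hypothetical device names -- a single raidz vdev of the 2TB drives.
# ZFS does the checksumming that a RAID controller would otherwise hide.
zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd

# A scrub is what surfaces (and repairs) the "leaked bits":
zpool scrub tank
zpool status -v tank
```

This is why the bit leaks were visible but harmless: raidz detects checksum failures on read or scrub and reconstructs the data from parity, instead of silently returning bad blocks.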
-
@vaxman It was originally created as a ZFS testbed: 16 drives, each configured as an individual RAID 0 array, with the whole thing connected to a VMware server via different methods (1Gb Ethernet, 10Gb Ethernet, InfiniBand, FC) and a VM that plotted/mined it. After my testing was done I wasn't convinced I was getting the best possible performance from it, so I converted it to a Windows box with DAS. No real change in performance, with a loss of flexibility. So I'm considering flipping back.
-
@haitch Pennywise is plotting its way to 153TB, Pennywise 2 hardware is ordered, and I'm negotiating for 192TB of storage for it. Both have the capability for external expansion in 320TB chunks. :) Be afraid, my monsters are coming.
Update: Negotiations for 192TB of storage apparently successful... for less than $22/TB and free shipping.
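For scale, the quoted ceiling of $22/TB across 192TB puts an upper bound on the deal (the exact negotiated price isn't stated, so this is just the arithmetic on the numbers in the chat):

```python
# Back-of-envelope ceiling for the storage deal: 192TB at under $22/TB.
capacity_tb = 192
price_per_tb = 22.0  # upper bound quoted above

total = capacity_tb * price_per_tb
print(f"under ${total:,.0f} for {capacity_tb}TB")  # under $4,224 for 192TB
```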
-
Good luck!

