Filled up half a PetaByte. Now What?



  • This thread is all about optimizing drives

    My plots went well. I filled up half a petabyte in 10 days.
    Current stats:
    250 TB per machine, with an average read speed of 500 MB/s and scan times averaging 120 seconds.
    After optimizing I expect around 90 seconds or slightly lower.

    Would these things help my count speeds?

    1. Switching from a Kaby Lake i5 to a Kaby Lake i7? And if so, K or non-K?
    2. Upgrading from 16 GB to 32 GB of RAM?

    My GTX 1070 only gets 30 percent usage with jminer. I don't know if that's normal. Average read speed is 500 MB/s.
    I also get 95-100 percent CPU usage during the jminer count, while only 20-30 percent on the GTX 1070. I thought this thing was a GPU counter. How can I better utilize my GTX 1070?
    I'm using 5x 7-port USB 3.0 hubs, each hub running off a dedicated USB controller (5 controllers total).
    I hit the system RAM ceiling pretty quickly. Adjusting chunks gets that down some.
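
    For reference, here's the kind of thing I'm tweaking in jminer.properties. Property names are as I remember them from the jminer README, so double-check against your version; the values here are just illustrative:

    # plots spread across drives; one reader thread per path when readerThreads=0
    plotPaths=D:/plots,E:/plots,F:/plots
    readerThreads=0
    # each reader buffers roughly chunkPartNonces * 64 bytes at a time,
    # so lowering this trades a little speed for a lower RAM ceiling
    chunkPartNonces=480000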

    That's where I'm at with this bigger setup. I never had to worry about these things with the 50 TB Burst server I've been running for a year.

    Now that the drives are full, it's time to optimize. I used the command line for that a year ago. Here are some questions about optimization:

    1. Does using more system RAM to optimize help me in any way? I'm using 4 GB now.
    2. I'm on my first optimization pass on this setup. It's estimating 50 hours or so for one 8 TB drive. How can I improve those speeds? An i7? 32 GB of RAM?
    3. Is the command-line optimizer better in any way? A more accurate timer or anything?

    I appreciate any and all help. Definitely willing to share some information in return.



  • I would think running 7 drives off a hub would be a bottleneck (some rough numbers after the log below). I have one of those USB 3.0 cards with 4 dedicated controllers, 1 per port. Each port has a USB 3.0 hub, but I don't run more than 2-3 drives per hub on each port. Using a GTX 960 and jminer, these are my read speeds. These are optimized plots totaling 85 TB, so nowhere near the space you have, but my speeds are good from what I'm told.

    : START block '370042', scoopNumber '3287', capacity '84981 GB'
    2017-06-11 17:35:15.203 INFO 7068 --- [ roundPool-1] burstcoin.jminer.JMinerCommandLine : targetDeadline '1209600', baseTarget '474378'
    2017-06-11 17:35:15.479 INFO 7068 --- [readerPool-9317] burstcoin.jminer.JMinerCommandLine : 1% done (0TB 1GB), avg.'0 MB/s'
    2017-06-11 17:35:17.571 INFO 7068 --- [ roundPool-1] burstcoin.jminer.JMinerCommandLine : dl '14368' send (pool) [nonce '112264519']
    2017-06-11 17:35:17.976 INFO 7068 --- [readerPool-9315] burstcoin.jminer.JMinerCommandLine : 12% done (9TB 487GB), avg.'806 MB/s', eff.'927 MB/s'
    2017-06-11 17:35:19.343 INFO 7068 --- [readerPool-9315] burstcoin.jminer.core.round.Round : dl '221704' queued
    2017-06-11 17:35:20.161 INFO 7068 --- [readerPool-9329] burstcoin.jminer.JMinerCommandLine : 23% done (19TB 114GB), avg.'922 MB/s', eff.'1075 MB/s'
    2017-06-11 17:35:22.317 INFO 7068 --- [readerPool-9317] burstcoin.jminer.JMinerCommandLine : 34% done (28TB 465GB), avg.'963 MB/s', eff.'1058 MB/s'
    2017-06-11 17:35:23.996 INFO 7068 --- [Executor-146923] burstcoin.jminer.JMinerCommandLine : dl '14368' confirmed! [ 0d 3h 59m 28s ]
    2017-06-11 17:35:24.306 INFO 7068 --- [readerPool-9323] burstcoin.jminer.JMinerCommandLine : 45% done (37TB 938GB), avg.'1006 MB/s', eff.'1162 MB/s'
    2017-06-11 17:35:26.375 INFO 7068 --- [readerPool-9315] burstcoin.jminer.JMinerCommandLine : 56% done (47TB 312GB), avg.'1024 MB/s', eff.'1106 MB/s'
    2017-06-11 17:35:28.402 INFO 7068 --- [readerPool-9321] burstcoin.jminer.JMinerCommandLine : 67% done (56TB 696GB), avg.'1040 MB/s', eff.'1130 MB/s'
    2017-06-11 17:35:28.438 INFO 7068 --- [ roundPool-1] burstcoin.jminer.JMinerCommandLine : dl '1620' send (pool) [nonce '300063902']
    2017-06-11 17:35:30.619 INFO 7068 --- [readerPool-9318] burstcoin.jminer.JMinerCommandLine : 78% done (66TB 204GB), avg.'1041 MB/s', eff.'1046 MB/s'
    2017-06-11 17:35:31.718 INFO 7068 --- [Executor-146935] burstcoin.jminer.JMinerCommandLine : dl '1620' confirmed! [ 0d 0h 27m 0s ]
    2017-06-11 17:35:32.866 INFO 7068 --- [readerPool-9321] burstcoin.jminer.JMinerCommandLine : 89% done (75TB 618GB), avg.'1039 MB/s', eff.'1022 MB/s'
    2017-06-11 17:35:39.337 INFO 7068 --- [readerPool-9319] burstcoin.jminer.JMinerCommandLine : 100% done (84TB 981GB), avg.'856 MB/s', eff.'353 MB/s'
    2017-06-11 17:35:39.588 INFO 7068 --- [ roundPool-1] burstcoin.jminer.JMinerCommandLine : FINISH block '370042', best deadline '1620', round time '24s 232ms'
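
    On the hub math (rough numbers; the ~450 MB/s usable figure for a 5 Gbps USB 3.0 controller is an assumption):

    # why 7 drives sharing one USB 3.0 controller starves each drive
    usb3_usable = 450e6              # ~450 MB/s practical per controller
    per_drive = usb3_usable / 7
    print(round(per_drive / 1e6))    # ~64 MB/s each, vs the 150+ MB/s
                                     # a lone HDD can stream sequentially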



  • @Garbear Yeah, those are way better speeds. I think my limitation right now is my hubs. But you do understand how costly, and how much more of a mess, it is to have 4 drives per hub when you have this many hard drives. I think optimizing my drives might give me a read speed of around 700 MB/s, which I'd be happy with.



  • @Garbear What CPU and RAM do you have?



  • @ChuckNorris Yeah, optimizing will give a big gain in read speeds. It's been a while since I've plotted unoptimized and then optimized, since I use XPlotter now, but if I remember right it was close to double the scan speed after optimizing.
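
    If it helps to see why: each nonce is 4096 scoops of 64 bytes, and per block the miner reads just one scoop from every nonce. A rough sketch of where that scoop sits, assuming the standard staggered plot layout (simplified, no error handling):

    SCOOP_SIZE = 64          # bytes per scoop
    SCOOPS_PER_NONCE = 4096  # one nonce = 4096 * 64 B = 256 KiB

    def scoop_offset(nonce_index, scoop, stagger):
        """Byte offset of one scoop in a staggered plot file: nonces are
        stored in groups of `stagger`, scoop-ordered within each group."""
        group, pos = divmod(nonce_index, stagger)
        group_bytes = stagger * SCOOPS_PER_NONCE * SCOOP_SIZE
        return group * group_bytes + (scoop * stagger + pos) * SCOOP_SIZE

    # A file holding N nonces needs N / stagger separate seek+read passes per
    # block; when stagger == N ("optimized") each scoop becomes one contiguous
    # N * 64 byte read - which lines up with the roughly doubled scan speed.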



  • @ChuckNorris My current mining rig is an ASRock Extreme3 Gen3 with an i5-2500K at stock speeds, 16 GB of DDR3 RAM, and a GTX 960.

    The new plotting and mining rig I'm still setting up is an ASRock Fatal1ty X370 Professional Gaming (AM4, AMD X370), a Ryzen 7 1700, and 16 GB of DDR4 RAM. Plotting with XPlotter at the moment with 12 of 16 threads gives a little over 15k nonces/min.

    It's got 10 SATA3 ports, 8 USB 3.0 ports, and 3 PCI Express x16 slots for my USB 3.0 controller cards.



  • @ChuckNorris said in Filled up half a PetaByte. Now What?:

    But you do understand how costly, and how much more of a mess, it is to have 4 drives per hub when you have this many hard drives.

    I bought Seagate Backup Plus drives, since they have an integrated 2-port hub on them. I daisy-chained multiple groups of HDDs and never had to buy a single extra hub. :)



  • Half a petabyte is 500 TB, am I right? Massive mining.



  • @ChuckNorris Half a PB? Outstanding! Very, very impressive. Now go find a wall and hit your head against it, really hard :))

    You should have plotted AND optimized. It will take you an insane amount of time to optimize those plots in place. More RAM will help, sure, but the bottleneck with your setup will be I/O. Lesson learned; you'll do better on the next go :)
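
    Back-of-envelope on the I/O bound, assuming in-place optimization reads and rewrites every byte and the hub-shared USB link sustains about 100 MB/s (an assumption):

    bytes_moved = 2 * 8e12    # 8 TB drive: every byte read once, written once
    throughput = 100e6        # ~100 MB/s effective over a shared USB hub
    print(bytes_moved / throughput / 3600)  # ~44 hours, close to the 50 h quoted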



  • @ChuckNorris said in Filled up half a PetaByte. Now What?:

    Would these things help my count speeds?

    1. Switching from a Kaby Lake i5 to a Kaby Lake i7? And if so, K or non-K?
    2. Upgrading from 16 GB to 32 GB of RAM?
    3. How can I better utilize my GTX 1070?

    You seem to be "going mediaeval" on this Burst thing! I'm fairly new to these parts, but I have some suggestions. First, forget Kaby Lake; Intel have jumped the shark. The Ryzen CPUs have done well, and AMD are now about to release their "Threadripper" range. In my (limited) experience, more threads do make a difference, and they will be offering a 16-core/32-thread variant at about half the price of the Intel/Xeon equivalents. You'll also need one of the new X399 motherboards, every one of which sports 64 PCIe lanes. That means you can run a whole lot of SAS/SATA disks, extra GPUs, etc. As for the other components: if you intend to expand further, then RAM it up as far as you can afford, and consider a 1600 W Platinum PSU (or two). Plus, if you really have the €£$, evaluate some high-capacity SSDs just for plotting.



  • @illuminatus Excuse me, do you have a half-peta setup? And no, you're wrong. If I plotted optimized plots off the bat, I'd only get 5,000 nonces/min because they are SMR drives. Instead, I can buffer two drives at once and get 28,000 nonces/min per drive. That's how I was able to fill half a petabyte in under 2 weeks and start mining right away. I still get read times of 90 seconds with unoptimized plots, which is still pretty decent. Don't make me get my homies to back me up. Maybe do your research before insulting people.



  • @BeholdMiNuggets AMDs are garbage. I plot and count with GPUs.



  • @ChuckNorris said in Filled up half a PetaByte. Now What?:

    @BeholdMiNuggets AMDs are garbage. I plot and count with GPUs.

    You will swallow those words pretty hard when you optimize your drives now! LOL
    You can only optimize with the CPU, to the best of my knowledge...
    So for optimizing I would suggest a good CPU and RAM (the more RAM the better)...
    If you're going to plot a lot more, I would suggest doing buffer plots with your GPU, but making each plot with the same number of nonces as the stagger, because then you will not need to optimize them... ;D
    You will end up with thousands of plot files, but I heard @dawallet say (some time ago) that you can concatenate plots, meaning that after plotting you can join the plot files into one file (or however many you want). I'm pretty sure you won't need to do this, but if you do, I recommend asking @dawallet how to concatenate plots in Windows, because I've never tried it, though I've heard it's possible... ;D

    It might even be worth replotting disk by disk the way I said above (GPU); with your GPUs that may be better than using the CPU to optimize everything... Although you could also keep using the CPU to optimize some drives at the same time as you replot others with your GPUs... The faster the better ;P

    Good luck... 90 seconds for 500 TB of unoptimized plots is a nice time, I'd say, but definitely not nice enough to leave them unoptimized xD


  • admin

    @gpedro That was not me, but I know you can only merge plot files with the exact same stagger.



  • @daWallet @gpedro I would like to merge my plots. They all have the same stagger size. How do I do that?


  • admin



  • @daWallet Hmm, I wasn't sure it was you who said it, but I remember you were in the discussion about this and I knew you had information... hehehe ;D
    Thanks for sharing again; I'm sure the other discussion is buried somewhere here in the forum and I couldn't find it xD



  • @daWallet Thanks. I'm afraid, though, that that approach needs extra space for the new merged file, so I don't see how I can do that. All my plots are generated with XPlotter, so I'm also not sure about the sequential-nonce issue. Perhaps @Blago could give his input here :)



  • Just did a test with two small plot files and it seems to work fine. But as I said, the problem here is the space. I don't know if there is another way to generate the merged file without having to keep a copy of the originals...
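
    For what it's worth, appending the second file onto the first in place would avoid holding a duplicate. An untested sketch with a hypothetical merge_plots helper; it assumes the usual accountID_startNonce_nonces_stagger file naming, identical staggers (with nonce counts that are multiples of the stagger), and sequential nonce ranges, so back up before trying it:

    import os, shutil

    def merge_plots(path_a, path_b):
        """Append plot B onto plot A in place, then rename A so its
        filename covers both nonce ranges."""
        acc_a, start_a, n_a, stag_a = os.path.basename(path_a).split('_')
        acc_b, start_b, n_b, stag_b = os.path.basename(path_b).split('_')
        assert acc_a == acc_b and stag_a == stag_b, "account/stagger mismatch"
        assert int(start_b) == int(start_a) + int(n_a), "ranges not sequential"
        with open(path_b, 'rb') as src, open(path_a, 'ab') as dst:
            shutil.copyfileobj(src, dst, length=16 * 1024 * 1024)
        os.remove(path_b)
        merged = '_'.join([acc_a, start_a, str(int(n_a) + int(n_b)), stag_a])
        os.rename(path_a, os.path.join(os.path.dirname(path_a), merged))

    The trade-off is that an interrupted append leaves the first file half-grown, so it swaps the disk-space problem for a bit of risk.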



  • @vExact So it created a new merged file containing the two and kept the old ones?
    Your new file is not optimized now, right?
    Did you have to run PlotChecker afterwards to mine with it?
    Did your 2 files have sequential nonces?
    Sorry for all the questions, but I guess that's what you get for being a pioneer hahahah ;P

