Filled up half a PetaByte. Now What?
-
@daWallet Hmm, I wasn't sure it was you who said it, but I remember you were in the discussion about this and I knew you had information... hehhehe ;D
Thanks for sharing again. I am sure the other discussion is buried somewhere here in the forum and I couldn't find it xD
-
@daWallet thanks. I am afraid, though, that this approach needs extra space for the new merged file, so I don't see how I can do that. All my plots were generated with Xplotter, so I'm also not sure about the sequential issue. Perhaps @Blago could give his input here :)
-
Just did a test with two small plot files and it seems to work fine. But as I said, the problem here is the space. I don't know if there is another way to generate the merged file without having to keep a copy of the original ones...
-
@vExact So it created a new file combining the two files and kept the old ones?
Now your new file is not optimized, right?
Did you have to run PlotChecker afterwards to mine with it?
Your 2 files had sequential nonces?
Sorry for the questions, but I guess it's what you get for being a pioneer hahahah ;P
-
@gpedro I guess since the original files are optimized from the beginning, the merged one should be optimized too, although I might be wrong on that. I just gave it the name it would have had if it had been generated as a single plot file. I guess I should run PlotChecker to verify it is correct in any case.
-
@gpedro PlotChecker tells me the file is OK, so it seems to work with no problem :)
Just have to type e.g. copy /b *. 10783921033877668933_0_8192_8192. But the space problem remains, so I think this is unfeasible.
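In case it helps anyone scripting this, here is a minimal sketch of what that copy /b does: plain binary concatenation. The part file names below are made up for illustration, and the correct merged name is discussed further down the thread.

```python
import shutil

# Minimal sketch of "copy /b": binary-concatenate plot files in order.
# Input names are illustrative only; see the naming discussion later in the thread.
parts = ["10783921033877668933_0_4096_4096", "10783921033877668933_4096_4096_4096"]
with open("merged_plot", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # still needs free space for the full merged copy
```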
-
@ChuckNorris Consider this - my buddy and I both have ~100TB rigs, with roughly equivalent PC specs. All of my plots are optimized and all of his are not, but both of our rigs get scan times around 32 seconds with jminer and a decent GPU. Both PCs use about 50% of the i5 or FX-8350 CPU, but the difference is that his PC ends up using ALL of the 16GB of system RAM, whereas mine uses about 8GB total. It seems that unoptimized plots are not that big of a bottleneck if you have enough hardware to deal with it, which makes me think most of your slow scan times are due to having too many drives per USB3.0 controller and/or not having adequate hardware. Upgrading to an i7 and doubling the memory might help, but bang for the buck starts to play a major factor at that point. In your situation, I would build a third machine and spread the hard drives out over all three. A fourth machine could be dedicated to plotting and then be repurposed to mine Ethereum or Zcash when it's done.
I'm curious - how much RAM is being used in your machines each round? I'd suspect just about all of it.
For shits and giggles, pull out 4 drives from each 7 port hub and see what happens to your scan times.
-
@daWallet said in Filled up half a PetaByte. Now What?:
I don't know if they have to be sequential too. I have never done this myself. Good luck.
I was the one who did it - the files MUST have exactly the same stagger, MUST be sequential, and MUST be concatenated in sequential order.
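For anyone wanting to script this, a rough sketch of how those three conditions could be sanity-checked before concatenating, assuming the usual AccountID_StartNonce_Nonces_Stagger file naming (the function and example names are just illustrative):

```python
# Illustrative sketch: check the three conditions above before concatenating plot files.
# Assumes the usual Burst plot file naming: AccountID_StartNonce_Nonces_Stagger.

def parse_plot_name(name):
    account, start, nonces, stagger = (int(x) for x in name.split("_"))
    return account, start, nonces, stagger

def check_mergeable(names):
    parts = sorted((parse_plot_name(n) for n in names), key=lambda p: p[1])
    if len({p[3] for p in parts}) != 1:
        return False  # condition 1: every file must have exactly the same stagger
    for (a1, s1, n1, _), (a2, s2, _, _) in zip(parts, parts[1:]):
        if a2 != a1 or s2 != s1 + n1:
            return False  # conditions 2/3: nonce ranges must be sequential, no gaps or overlaps
    return True

# Example with made-up names: two adjacent 4096-nonce files, stagger 4096
print(check_mergeable(["123_0_4096_4096", "123_4096_4096_4096"]))  # True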
-
@haitch that hurt my brain
-
@vExact said in Filled up half a PetaByte. Now What?:
@gpedro I guess since the original files are optimized from the beginning, the merged one should be optimized too, although I might be wrong on that. I just gave it the name it would have had if it had been generated as a single plot file. I guess I should run PlotChecker to verify it is correct in any case.
The last file name is wrong - the number of nonces is 8192, but the stagger is still 4096, unless you optimized it after concatenating.
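As a made-up worked example of the naming: merging AccountID_0_4096_4096 with AccountID_4096_4096_4096 gives a file that should be called AccountID_0_8192_4096, since the nonce count doubles but the stagger stays 4096 until you optimize. A small sketch of deriving that name (names and numbers are illustrative):

```python
# Illustrative sketch only: derive the name a merged plot should carry.
def merged_name(first, second):
    a1, s1, n1, st1 = (int(x) for x in first.split("_"))
    a2, s2, n2, st2 = (int(x) for x in second.split("_"))
    assert a1 == a2 and st1 == st2 and s2 == s1 + n1, "files are not mergeable"
    # The nonce counts add up, but the stagger stays the same until the file is re-optimized.
    return f"{a1}_{s1}_{n1 + n2}_{st1}"

print(merged_name("123_0_4096_4096", "123_4096_4096_4096"))  # -> 123_0_8192_4096
```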
-
@KitsuneKitten Sorry, but if all three conditions aren't followed, the resultant plot file is basically a corrupt piece of cr*p.
-
@haitch Yeah, that was my big doubt... So you would need to optimize the plots after concatenating them... Seems this is not a good solution for you after all, @ChuckNorris...
-
@haitch said in Filled up half a PetaByte. Now What?:
@vExact said in Filled up half a PetaByte. Now What?:
@gpedro I guess since the original files are optimized from the beginning, the merged one should be optimized too, although I might be wrong on that. I just gave it the name it would have had if it had been generated as a single plot file. I guess I should run PlotChecker to verify it is correct in any case.
The last file name is wrong - the number of nonces is 8192, but the stagger is still 4096, unless you optimized it after concatenating.
@haitch that's odd, as PlotChecker was giving an OK to those numbers. In any case I wouldn't do this, mainly because of the issue of not having enough space on a drive to copy a new file to...
But if anyone is interested, there is this software available for this kind of thing :)
http://www.igorware.com/file-joiner
-
@vExact PlotChecker just verifies that the file size = number of nonces * 262144, and that the number of nonces is a multiple of the stagger. If they don't match, it attempts to correct them.
It will report the plot as fine if the stagger is wrong but the number of nonces is still a multiple of it. It'll also report fine if the file size is correct but the nonces are out of order.
In your misnamed files, half the nonces in half the scoops are good; all the others are basically corrupt.
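In other words, roughly the two checks being described, as a hedged sketch (not PlotChecker's actual code; 262144 bytes per nonce is 4096 scoops of 64 bytes):

```python
import os

NONCE_SIZE = 262144  # bytes per nonce

def plotchecker_style_check(path):
    # Parse AccountID_StartNonce_Nonces_Stagger from the file name.
    account, start, nonces, stagger = (int(x) for x in os.path.basename(path).split("_"))
    size_ok = os.path.getsize(path) == nonces * NONCE_SIZE
    stagger_ok = nonces % stagger == 0
    # A concatenated file renamed as ..._8192_8192 passes both checks even though its
    # data is really laid out with stagger 4096, which is why it was reported as fine.
    return size_ok and stagger_ok
```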
-
@sevencardz Damn. Didn't even think about just unplugging a few drives from each row. Simple bottleneck check. I'm getting 650 MB/s with 7 drives per hub. I tried just 3 drives per hub. Exact same result, actually. Something is bottlenecking me somewhat. My GTX 1070 only gets 20 percent usage or less while jminer is scanning.
A hub can handle at least 5 drives without a bottleneck.
-
@ChuckNorris I'm not sure what role the RAM plays in jminer, tbh...

