NTFS "allocation unit size" relevant?



  • Hi,

    I have to plot my new 8TB drives and the first thing I want to do is to format them.

    The question is, does it make any sense to choose the largest allocation unit configuration?
    Since the plot file is just one very big file I was thinking that a very large allocation unit would make sense?

    Or does the mining app just access small fragments of the plot file and a large allocation unit may harm the access times?

    Any experts out there? :-D

    thanks



  • 1 plot file 1 drive



  • Small or big, I always used 32 KB.


  • admin

    @fpdragon If you plot optimized, use the 64KB allocation size - in an optimized plot the nonces are contiguous, and maxing the block allocation minimizes the number of read requests.
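
    For context on why contiguity matters, here is the background arithmetic (my numbers, not stated in this thread, but standard for Burst proof-of-capacity plots): a nonce is 4096 scoops of 64 bytes each, which is where the 262144-byte figure used elsewhere in the thread comes from.

    ```python
    # Background arithmetic (assumed Burst PoC constants, not from this thread):
    # a nonce is 4096 scoops of 64 bytes, i.e. 262144 bytes per nonce.
    SCOOPS_PER_NONCE = 4096
    SCOOP_BYTES = 64
    print(SCOOPS_PER_NONCE * SCOOP_BYTES)  # 262144 bytes per nonce
    ```

    Each mining round reads one scoop from every nonce, so an optimized (contiguous) plot turns those reads into one long sequential pass instead of scattered seeks.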



  • @haitch - Related Q.

    Taking the 8 TB HDD example. This reads as ~7.27 TB in Windows 10. So the expert recommendation is to create one large plot to (almost) fill the drive. But what's a good and reasonable head-room? That is, how many GB should be left un-plotted and spare on the HDD?



  • The common knowledge is that a 64 KB cluster size is faster and more efficient, but I'm not sure anyone has ever tested the hypothesis.



  • @rds said in NTFS "allocation unit size" relevant?:

    The common knowledge is that a 64 KB cluster size is faster and more efficient, but I'm not sure anyone has ever tested the hypothesis.

    Thanks, but I wasn't referring to the cluster size, rather the air-gap (when plotting). How much drive space / fraction / GB should be left between the full plot and the reported drive size?


  • admin

    @BeholdMiNuggets Max number of nonces to use is: floor((<drive size in bytes> / 262144) / stagger) * stagger

    The plotter will always plot a multiple of your stagger number of nonces - so make sure that the number of nonces is a multiple of your stagger, and that <number of nonces> * 262144 < size of drive in bytes.
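
    A quick sketch of that calculation in Python (the function name is mine, and the 8 TB drive size and stagger of 8192 below are only example inputs):

    ```python
    NONCE_SIZE = 262144  # bytes per nonce

    def optimal_nonces(drive_bytes: int, stagger: int) -> int:
        """Largest multiple of `stagger` nonces that fits on the drive."""
        return drive_bytes // NONCE_SIZE // stagger * stagger

    # Example: a drive sold as 8 TB (8e12 bytes), plotted with stagger 8192
    n = optimal_nonces(8_000_000_000_000, 8192)
    print(n)               # 30515200 nonces
    print(n * NONCE_SIZE)  # 7999376588800 bytes - fits under the drive size
    ```

    Integer division handles both floors, so the result is guaranteed to be a multiple of the stagger and to satisfy nonces * 262144 < drive size.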



  • @BeholdMiNuggets said in NTFS "allocation unit size" relevant?:

    @rds said in NTFS "allocation unit size" relevant?:

    The common knowledge is that a 64 KB cluster size is faster and more efficient, but I'm not sure anyone has ever tested the hypothesis.

    Thanks, but I wasn't referring to the cluster size, rather the air-gap (when plotting). How much drive space / fraction / GB should be left between the full plot and the reported drive size?

    3815000 nonces is 1E12 bytes (1 TB); actually slightly more, but drives always have more than exactly the TB size stated.

    So when I plot an 8 TB drive I make 8 files with 3815000 nonces then run an -n 0 file which usually gives another 1024 nonces.
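
    The arithmetic behind that figure, using the 262144 bytes per nonce quoted earlier in the thread:

    ```python
    NONCE_SIZE = 262144  # bytes per nonce, per the formula earlier in the thread
    print(3_815_000 * NONCE_SIZE)  # 1000079360000 bytes - just over 1E12 (1 TB)
    ```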



  • @rds said in NTFS "allocation unit size" relevant?:

    3815000 nonces is 1E12 bytes (1 TB); actually slightly more, but drives always have more than exactly the TB size stated.
    So when I plot an 8 TB drive I make 8 files with 3815000 nonces then run an -n 0 file which usually gives another 1024 nonces.

    @rds - Why do you use 8x files (for the 8 TB HDD) and not just one big 'un? Other experts on this forum suggest that one (large) plot per drive is preferable for Burst mining, even if it (initially) takes a while to generate. Also, what's an [ -n 0 ] file?

    Eg. @haitch said -

    If you plot optimized, use the 64KB allocation size - in an optimized plot the nonces are contiguous, and maxing the block allocation minimizes the number of read requests.



  • I do 1 TB files to get the drive in the game before the whole drive is plotted. Also, if a file corrupts, you only have to replace 1 TB, not 8 TB. That being said, I don't want 1 GB files; that's too many files. I have 62 drives on 3 machines, and my drives scan in under 40 seconds.

    -n 0 is the nonce parameter that tells the plotter to fill the drive. Like this:

    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6000000000 -n 3815000 -t 8 -path r:\burst -mem 1G
    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6003815000 -n 3815000 -t 8 -path r:\burst -mem 1G
    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6007630000 -n 3815000 -t 8 -path r:\burst -mem 1G
    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6011445000 -n 3815000 -t 8 -path r:\burst -mem 1G
    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6015260000 -n 3815000 -t 8 -path r:\burst -mem 1G
    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6019075000 -n 3815000 -t 8 -path r:\burst -mem 1G
    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6022890000 -n 3815000 -t 8 -path r:\burst -mem 1G
    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6026705000 -n 3815000 -t 8 -path r:\burst -mem 1G
    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6030520000 -n 0 -t 8 -path r:\burst -mem 1G
    pause
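
    For anyone adapting the script above: each -sn value is simply the previous start nonce plus the previous -n count. A small sketch (numbers taken from the script) that regenerates the sequence:

    ```python
    start = 6_000_000_000  # first -sn in the script above
    n = 3_815_000          # nonces per 1 TB file

    for i in range(8):
        print(f"-sn {start + i * n} -n {n}")
    # The final run starts where file 8 ends and uses -n 0 to fill the drive:
    print(f"-sn {start + 8 * n} -n 0")  # -sn 6030520000
    ```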



  • @rds said in NTFS "allocation unit size" relevant?:

    I do 1TB files to get the drive in the game before the whole drive is plotted. Also, if a file corrupts, you only have to replace 1TB not 8TB. That being said, I don't want 1GB files, too many files. I have 62 drives, on 3 machines, my drives scan in under 40 seconds.

    Makes sense. Would it be worth using a high-capacity SSD, just for the initial plotting (then transfer)?

    -n 0 is the nonce parameter that tells the plotter to fill the drive. Like this: ...
    start "" /belownormal /b /w "c:\burst\XPlotter.v1.0\XPlotter_avx.exe" -id 15770867969884553097 -sn 6030520000 -n 0 -t 8 -path r:\burst -mem 1G

    Does that leave any (head)room on the HDD, or is that unnecessary?
    And I guess I need to look up "/belownormal" now too.



  • @BeholdMiNuggets ,

    I have done the SSD/PMR plot drive and transfer to the SMR drive, but the transfer felt just as slow, at least for me. I use this now; works for me. https://forums.burst-team.us/topic/5307/how-to-attain-max-nonce-min-plotting-direct-to-smr-drives

    Some of my drives have only a few KB left on them, all no more than a couple MB; never saw a problem with that.

    I need to run the plotters at below-normal priority. I have 3 plotters running at once; at normal priority, the machine basically hangs as the plotters eat up all the CPU power.



  • Digging up this thread because it's relevant to me now...

    Are there preferable chunk sizes that should be plotted?
    If, say, 512 GB is a preferable size, is there any gain in plotting 520 GB?

    Is there a simple formula for preferable plot sizes? (mostly on the OS drive)

    Roland


  • admin

    @Roland_005 To maximize the number of nonces on a drive:

    Optimal nonces = floor((drive size in bytes / 262144) / stagger) * stagger

