5 TB Plotting with Macbook pro takes long



  • Hi everyone,

    I am plotting my new 5 TB disk with my MacBook Pro 2015, which has an i7 and 16 GB of RAM. I started the plotting with the following command:

    sudo ./plot -k 14363131333273464678 -x 2 -d /Volumes/seagate5tb ./ -s 0 -n 18841600 -m 32768 -t 8

    When it starts, it runs at about 10500 nonces/minute. But the CPU is only active for a short time; then CPU utilization drops to nearly 0% while the disk is writing something. After a while the CPU goes back up to 100%, but the reported rate is only about 2340 nonces/minute. Is this normal behavior? Is there a way to keep the CPU busy permanently?

    Hope you can give me a hint.

    thanks
    ciscler



  • @ciscler compute, reorder, write.

    The algorithm computes nonces of 4096 scoops each (that is, 4096 x 64-byte values, 256 KiB per nonce).

    When mining a block, just one of those 4096 scoops (across all nonces) is used. To mine a large file or a whole hard disk efficiently, it helps if the same scoop from all nonces is grouped together. This regrouping is the "reorder" or "optimize" stage in plotting, and the governing value is the "stagger". Filename: id_start_length_stagger
    The perfect file for reading has length == stagger, as all 64-byte values of, e.g., scoop number 3 can be read in one go, with no head movement necessary.
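
    To make that concrete, here is a minimal sketch (my own illustration, not code from any plotter; the function name and constants are made up) of which byte ranges a miner has to read for one scoop, assuming the staggered layout described above:

    # Sketch: byte ranges needed to read ONE scoop from a staggered plot file.
    # Assumed layout: the file is split into groups of `stagger` nonces, and inside
    # each group the 64-byte values of one scoop from all nonces sit back to back.

    SCOOP_SIZE = 64                              # bytes per scoop per nonce
    SCOOPS_PER_NONCE = 4096
    NONCE_SIZE = SCOOP_SIZE * SCOOPS_PER_NONCE   # 256 KiB per nonce

    def reads_for_scoop(total_nonces, stagger, scoop):
        """Yield (offset, length) pairs needed to read `scoop` from the whole file."""
        group_bytes = stagger * NONCE_SIZE       # one stagger group on disk
        run_bytes = stagger * SCOOP_SIZE         # contiguous scoop data per group
        for group in range(total_nonces // stagger):
            yield group * group_bytes + scoop * run_bytes, run_bytes

    # length == stagger  ->  exactly one contiguous read per scoop:
    print(list(reads_for_scoop(total_nonces=4096, stagger=4096, scoop=3)))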

    So, the CPU/GPU computes a nonce (256 KiB), and it gets sorted into the output buffer you defined.
    I guess that is the -m flag above, but as I'm not familiar with the actual binary you are using, let's just assume you reserved a 2 GiB buffer (8192 nonces of 256 KiB each).
    It plots until the 2 GiB buffer is full, then writes it to disk.
    Depending on filesystem usage, fragmentation, and physical position, the disk may ingest anywhere from 1 to 200 MB/s, so this phase takes at least 10 seconds, most likely 30+ seconds.
    During this phase, computation can only refill buffer space that has already been written out, and as most HDDs are slower than the computation, you get this pumping effect.
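
    As a rough sanity check (back-of-the-envelope only: the 2 GiB buffer assumed above, your reported burst rate of ~10500 nonces/minute, and some guessed effective write speeds), the averaged rate drops well below the burst rate whenever computation has to wait for the flush:

    # Rough estimate of the "pumping": fill time vs. flush time per buffer cycle.
    NONCE_SIZE = 256 * 1024                      # bytes per nonce

    buffer_nonces = 8192                         # the ~2 GiB buffer assumed above
    nonce_rate = 10500 / 60                      # nonces/s while the CPU is busy

    for mb_per_s in (10, 50, 100):               # guessed effective write speeds
        compute_s = buffer_nonces / nonce_rate                    # fill the buffer
        flush_s = buffer_nonces * NONCE_SIZE / (mb_per_s * 1e6)   # write it out
        avg = buffer_nonces / (compute_s + flush_s) * 60
        print(f"{mb_per_s:>3} MB/s: ~{avg:.0f} nonces/min averaged over a cycle")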

    One of the core concepts of this coin is that the hard part is done once (plotting), so that the mining can be energy efficient.
    As opposed to proof of work, where you work hard for EVERY block.



  • @ciscler said in 5 TB Plotting with Macbook pro takes long:

    18841600

    Oh, and a hint: I don't know how much usable space HFS+ on OS X leaves on a 5 TB disk, but perhaps you should not plot it in one go;

    18841600 * 64 * 4096 = 4939212390400 Bytes

    You should probably try plotting 1 TB files first and see how that works.

    Also, a stagger of 32768 (assuming, as above, that this is 2 GiB) gives you a continuous read of 2 GiB / 4096 = 512 KiB per scoop.
    After that the head has to move twice (the sketch after this list runs the numbers):

    1. a metadata lookup to find where the next 512 KiB chunk sits, after a gap of 4095 * 512 KiB;
    2. actually move the head there, read, repeat.
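
    For a feel for the numbers, a small sketch using the values from this thread (the 2 GiB case matches the assumption above, the 8 GiB case matches an -m of 32768):

    SCOOP_SIZE = 64                               # bytes per scoop per nonce
    total_nonces = 18841600                       # the -n from the command above

    # Contiguous chunk per scoop, and number of chunks (each needing at least
    # one seek) to read one scoop from the whole plot, for a given stagger:
    for stagger in (8192, 32768, total_nonces):   # 2 GiB buffer, 8 GiB buffer, optimized
        chunk = stagger * SCOOP_SIZE              # bytes readable in one go
        chunks = total_nonces // stagger          # one chunk per stagger group
        print(f"stagger {stagger:>8}: {chunk // 1024:>8} KiB per read, {chunks} chunks")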

    Go for the largest stagger you can afford, as THIS is the defining factor for read speed and disk wear while mining.

    Bigger is better, as the file can be read a lot faster if the physically sequential portion is larger.

    • Either plot with more memory, or
    • optimize the file later on (search for a "merge" or "optimize" binary). This is best done from one physical disk to another; a sketch of what such an optimizer does follows below.
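
    For illustration only (this is not the actual "merge"/"optimize" binary, just a sketch of the idea, with placeholder file names): an optimizer gathers each scoop from every stagger group and writes it out contiguously, so the source side seeks constantly while the destination side writes sequentially - hence the advice to use two physical disks.

    SCOOP_SIZE = 64
    SCOOPS_PER_NONCE = 4096
    NONCE_SIZE = SCOOP_SIZE * SCOOPS_PER_NONCE   # 256 KiB per nonce

    def optimize(src_path, dst_path, total_nonces, stagger):
        """Rewrite a staggered plot so that stagger == total_nonces."""
        groups = total_nonces // stagger
        group_bytes = stagger * NONCE_SIZE       # one stagger group on disk
        run_bytes = stagger * SCOOP_SIZE         # one scoop's data within a group
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            for scoop in range(SCOOPS_PER_NONCE):
                for group in range(groups):      # gather this scoop from every group
                    src.seek(group * group_bytes + scoop * run_bytes)
                    dst.write(src.read(run_bytes))

    # e.g. optimize("id_0_393216_32768", "id_0_393216_393216", 393216, 32768)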


  • @vaxman said in 5 TB Plotting with Macbook pro takes long:

    Either plot

    Thanks, I will try that.



  • @vaxman said in 5 TB Plotting with Macbook pro takes long:

    continuous

    One more question: I am plotting with my Mac to an NTFS-formatted disk. But I thought one big plot file is better than 5x1 TB files. And 32768 = 8 GB of RAM.



  • I don't know which values I should use:

    sudo ./plot -k 14363131333273464678 -x 2 -d /Volumes/seagate5tb ./ -s 0 -n 18841600 -m 32768 -t 8

    -n: 4096 nonces = 1 GB of plot file, so I calculated 4600 GB (of my disk) * 4096 = 18841600.

    -m: 4096 = 1 GB of RAM, so I used 8 GB * 4096 = 32768.

    What settings would you recommend? I could also use up to 12 GB of my RAM.

    Thanks



  • I also noticed that resizing the file takes a very long time on the Mac. On a Windows 7 machine it only takes a few seconds.



  • I also figured out that the write speed to the disk during plotting is only 10 MB/s. It is a USB 3.0 device, and the Mac supports USB 3.0 as well.



  • @ciscler said in 5 TB Plotting with Macbook pro takes long:

    I also noticed that resizing the file takes a very long time on the Mac. On a Windows 7 machine it only takes a few seconds.

    Well, NTFS is not native to your Mac, so some functions are implemented differently or not at all on OS X.

    sudo ./plot -k 14363131333273464678 -x 2 -d /Volumes/seagate5tb ./ -s 0 -n 18841600 -m 32768 -t 8
    -n: 4096 nonces = 1 GB of plot file, so I calculated 4600 GB (of my disk) * 4096 = 18841600.
    -m: 4096 = 1 GB of RAM, so I used 8 GB * 4096 = 32768. What settings would you recommend? I could also use up to 12 GB of my RAM.

    If you are still testing, then I'd probably just plot 100 GiB.

    Keep in mind that your -n (number of nonces to plot) MUST be a multiple of the stagger given. It will be adjusted if not, and it is adjusted by increasing n, which could lead to an incomplete plot file if the disk runs full. The stagger also defines your memory requirement while plotting. A 12 GiB instead of an 8 GiB stagger increases the sequential read from 2 MiB to 3 MiB per scoop (stagger bytes / 4096). If you "optimize" your plot file after plotting, you can get this up to ~256 MiB for a 1 TiB file, or 1.25 GiB for a 5 TiB file.
    A large, logically and physically sequential file gives you the read throughput for fast mining. Anything larger than a second's worth of data (~200 MB/s for the fastest disks) per stagger is nice, but not necessary. A stagger smaller than half a second's worth may reduce your mining speed, because the disk head has to seek a lot more. I'd say the sweet spot is files between 256 GiB and 1024 GiB: still manageable sizes, and each can be thrown away independently if the space is needed and be replotted later.

    [n] for 96 GiB is 393216, and this [n] is divisible by 32768 (your 8 GiB of RAM).
    Just make [n] a multiple of 32768 (your stagger, where "1" equals one nonce of 256 KiB == 262144 bytes).
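
    A small sketch of that arithmetic (example values; round DOWN so the plotter has no reason to grow the file past the free space):

    NONCE_SIZE = 256 * 1024              # 262144 bytes per nonce

    stagger = 32768                      # your -m, i.e. 8 GiB of RAM
    free_bytes = 1000 * 10**9            # space you want to fill, e.g. ~1 TB

    # largest multiple of the stagger that fits:
    n = (free_bytes // NONCE_SIZE) // stagger * stagger
    print(n, "nonces ->", round(n * NONCE_SIZE / 10**9), "GB plot file")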

    BTW, why are you plotting to NTFS? It would be faster to plot to a network-attached disk if you want to mine that hard disk in a Windows PC.



  • I am plotting to NTFS because, once the plotting is done, I will attach this device to a virtual Windows 7 machine which will be the miner. Why would plotting be faster to a network-attached disk? Do you mean sharing it via SMB?



  • @ciscler I thought you were going to mine on a physical PC. If you only get 10 MB/s to NTFS on OS X, it would have been faster to connect the target disk to the PC, share it via SMB, and write over Gbit Ethernet - I'd say at least 50 MB/s.

    Why on earth do you want to mine in a virtualized Windows?
    And with Windows 7 having those caching problems (the cache grows so large that it pushes vital processes out to swap)?!



  • Because I have an Ubuntu server running KVM, with virtualized Windows machines on it. What operating system are you mining and plotting on?



  • @ciscler I mine on FreeBSD (dcct, ZFS), plot on Linux (CentOS 6.9, gpuPlot over NFS, no local storage), and work on OS X.
    If you already have Ubuntu, grab the dcct sources and run them natively. Rock stable, runs for months.



  • Okay. Which plotter and miner do I need for Ubuntu, and where do I get them? Can you provide the link?



  • @ciscler let me ask you this: you haven't even looked in the software section? Boy, this is why I hate this forum sometimes.

    http://burstcoin.biz/download/3-dcct's-miner-and-plot-tools



  • @vaxman said in 5 TB Plotting with Macbook pro takes long:

    http://burstcoin.biz/download/3-dcct's-miner-and-plot-tools

    Yeah, I just found it, but thanks a lot. I think you pointed me in the right direction. I will go the Linux way and run it natively on my server. Thank you for your help.

