GPU plot generator v4.1.1 (Win/Linux)



  • @BeholdMiNuggets Sorry, I forgot the fact that the GPU buffer will be partially filled the whole time, thus the disks won't be able to write in parallel. It would be best for you to have GRAM=staggerSize.
    Thus: 4GB GRAM, 2*4GB RAM, for a total of 12GB RAM if you count the paired buffers.
    Just change the globalWorkSize to 16384 and the staggerSize to 16384.
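    The memory math above can be sketched in a few lines; this assumes the standard Burst nonce size of 256 KiB and the two-output-file example from this thread.

```python
# Memory math for GRAM == staggerSize, assuming the standard Burst
# nonce size of 256 KiB (262144 bytes) and two output plot files.
NONCE_SIZE = 256 * 1024

stagger_size = 16384                  # nonces, matching globalWorkSize here
gram = stagger_size * NONCE_SIZE      # 4 GiB GPU buffer
host = 2 * stagger_size * NONCE_SIZE  # one 4 GiB host buffer per plot file

total_gib = (gram + host) // 2**30
print(total_gib)  # 12
```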



  • @cryo said in GPU plot generator v4.1.1 (Win/Linux):

    ... the GPU buffer will be partially filled the whole time, thus the disks won't be able to write in parallel. It would be best for you to have GRAM=staggerSize.
    Thus: 4GB GRAM, 2*4GB RAM, for a total of 12GB RAM if you count the paired buffers.
    Just change the globalWorkSize to 16384 and the staggerSize to 16384.

    Thanks.
    So, you only need ~2GB of system/CPU RAM allocated (per GPU)?



  • @BeholdMiNuggets No, each GPU needs 4GB (x1 in this example), and each plot file needs 4GB too (x2 in this example), for a total of 12GB RAM.



  • @cryo said in GPU plot generator v4.1.1 (Win/Linux):

    ...
    7.9TB = 33046528 plots

    Sadly, most "8Tb" HDDs only sport about 7.27TiB of usable capacity, which comes to ~30,493,248 nonces (with a small degree of headroom).
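    As a rough sketch of that capacity figure (assuming 256 KiB per nonce; the exact usable byte count varies by drive and filesystem):

```python
# Nonce capacity of an "8TB" drive with ~7.27 TiB usable,
# assuming 256 KiB (262144 bytes) per nonce.
NONCE_SIZE = 256 * 1024

usable_bytes = int(7.27 * 2**40)
max_nonces = usable_bytes // NONCE_SIZE
print(max_nonces)  # roughly 30.5 million, in line with the figure above
```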

    The GitHub repository README states: "Tweaks: When using multiple devices, make sure they have nearly the same performance." Does this mean that 2x GPUs can be used in combination to speed up the process? For example, for 2x GTX-1080Ti/1090 GPUs (under Windows), the plot [devices.txt] file might read:
    2 0 16384 4096 2048
    2 1 16384 4096 2048

    And the resulting Command Line (administrator) might look like this:
    /gpuPlotGenerator generate direct
    P:/plots/userburstid_200000000_30493248_16384
    Q:/plots/userburstid_300000000_30493248_16384

    Does that make sense?
    Thanks.



  • @BeholdMiNuggets Yes, that's the spirit. But the processing power of your card is far quicker than your I/O throughput in direct mode. The bottleneck is plot writing, and your GPU is already waiting most of the time. So adding another one won't help.
    In buffer mode it would help, because the writing operation is quick and plot generation becomes the bottleneck. But the resulting files won't be optimized.



  • @cryo said in GPU plot generator v4.1.1 (Win/Linux):

    Yes, that's the spirit. But the processing power of your card is far quicker than your I/O throughput in direct mode. The bottleneck is plot writing, and your GPU is already waiting most of the time. So adding another one won't help. In buffer mode it would help, because the writing operation is quick and plot generation becomes the bottleneck. But the resulting files won't be optimized.

    Tried running the GPU plot generator - two HDDs, one GPU.
    [ERROR] Unable to extend output file: 112

    Have checked the two HDDs. The first has the (new) plot file in place, but the second does not. I can't see any syntax or path errors in the command line parameters, and I don't see this error listed in the GitHub repository README. Tried removing the second HDD plot, but I still receive the same [112] error.

    Any suggestions?
    Thanks.

    GPU plot generator v4.1.1
    Author: Cryo
    Bitcoin: 138gMBhCrNkbaiTCmUhP9HLU9xwn5QKZgD
    Burst: BURST-YA29-QCEW-QXC3-BKXDL

    Loading platforms...
    Loading devices...
    Loading devices configurations...
    Initializing generation devices...
    [0] Device: GeForce GTX 1080 Ti (OpenCL 1.2 CUDA)
    [0] Device memory: 4GB 0MB
    [0] CPU memory: 4GB 0MB
    Initializing generation contexts...

    [ERROR] Unable to extend output file: 112





  • @BeholdMiNuggets Error 112 is ERROR_DISK_FULL. Sounds like you don't have enough space on one of your disks.
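    A quick way to rule this out before plotting is to compare free space against the file the plotter will pre-allocate. A minimal sketch, assuming 256 KiB per nonce (path and nonce count are placeholders):

```python
# Pre-flight free-space check for error 112 (ERROR_DISK_FULL),
# assuming 256 KiB per nonce. Path and nonce count are placeholders.
import shutil

NONCE_SIZE = 256 * 1024

def has_room(path, nonces):
    """True if the drive holding `path` can fit `nonces` nonces."""
    return shutil.disk_usage(path).free >= nonces * NONCE_SIZE

print(has_room(".", 30493248))  # False unless the drive has ~7.3 TiB free
```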



  • I downloaded the latest version, 4.1.1. When I start the GPU plotter I see it generating nonces, but it's stuck at 0% while the HDD is 100% busy writing. Is this normal at the beginning?


  • admin

    @agente If generating in direct mode, there is a lot of disk activity without actual plotting at the beginning. However, @cryo has released a new version that resolves this. Check for the new version.



  • @cryo said in GPU plot generator v4.1.1 (Win/Linux):

    Error 112 is ERROR_DISK_FULL. Sounds like you don't have enough space on one of your disks.

    The disks are the same make/model/size. They were formatted before (GPU) plotting, and a "plots" directory added. (But the resulting error is the same even when there is no directory.)

    The total number of nonces per disk is the same as on previous identical HDDs already plotted (via CPU). So the drives are not full, nor are they under capacity for the new GPU plot(s).

    But I'm still getting the same "Error 112". Any suggestions? Unfortunately, the alternative, plotting via CPU, takes several days per HDD.
    Thanks.



  • @cryo said in GPU plot generator v4.1.1 (Win/Linux):

    Error 112 is ERROR_DISK_FULL. Sounds like you don't have enough space on one of your disks.

    After readjusting all of the numbers & parameters, the previous error message has vaporised...
    Only to be replaced by another one!
    [ ERROR ][ -5 ][ CL_OUT_OF_RESOURCES ] Error in step2 kernel launch

    But I don't see this message anywhere in the README's Troubleshooting section.
    Do the cognoscenti have any more sage advice? Thanks.



  • @BeholdMiNuggets Doesn't that mean that devices.txt isn't optimized for your system, causing it not to start up?



  • @haitch Newer than 4.1.1? I hope he releases the new version on GitHub.



  • @BeholdMiNuggets The error CL_OUT_OF_RESOURCES is not as self-explanatory as it sounds. In brief: your card can't process the second step of the GPU kernel with your current parameters (globalWorkSize and localWorkSize). This step is the most intensive one, as it fills the GPU buffer with scoops by performing a lot of Shabal hashes.

    To fix that:

    • Make sure your localWorkSize evenly divides your globalWorkSize.
    • Try lower localWorkSize values. If you run the listDevices command on your GPU platform, it'll output some hints from the card (like the maxComputeUnits and maxWorkGroupSize soft values).

    You may ask why the plotter can't automatically determine those tricky parameters; the simple answer is that the returned hint values don't guarantee success. In fact, most of the time, what graphics cards claim to support doesn't match reality.

    Via the setup command, my actual strategy is:

    • for globalWorkSize, to take the minimum value between globalMemorySize / PLOT_SIZE and maxMemoryAllocationSize / PLOT_SIZE.
    • for localWorkSize, to take the maximum value between 1 and (maxWorkItemSizes[1] / 4) * 3. This formula sucks but it has the best results for now.
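    The strategy above can be sketched as follows; PLOT_SIZE is the 256 KiB per-nonce unit, and the example values are the GTX 1080 Ti figures quoted elsewhere in this thread.

```python
# Sketch of the setup heuristic described above.
PLOT_SIZE = 256 * 1024  # bytes per nonce

def suggest(global_mem, max_alloc, max_work_item_sizes):
    # globalWorkSize: whatever fits in both the global memory
    # and the maximum single allocation
    global_ws = min(global_mem // PLOT_SIZE, max_alloc // PLOT_SIZE)
    # localWorkSize: max(1, 3/4 of maxWorkItemSizes[1])
    local_ws = max(1, (max_work_item_sizes[1] // 4) * 3)
    return global_ws, local_ws

# GTX 1080 Ti: 11 GiB global memory, 2 GiB 768 MiB max allocation
print(suggest(11 * 2**30, 2816 * 2**20, (1024, 1024, 64)))  # (11264, 768)
```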


  • @cryo said in GPU plot generator v4.1.1 (Win/Linux):

    ... In fact, most of the time, what graphic cards claim to support doesn't match with reality.

    gpuPlotGenerator listDevices 2
    GPU plot generator v4.1.1
    Devices number: 2
    Id: 0
    Type: GPU
    Name: GeForce GTX 1080 Ti
    Vendor: NVIDIA Corporation
    Version: OpenCL 1.2 CUDA
    Driver version: 382.53
    Max clock frequency: 1670 MHz
    Max compute units: 28
    Global memory size: 11GB 0MB 0KB
    Max memory allocation size: 2GB 768MB 0KB
    Max work group size: 1024
    Local memory size: 48KB
    Max work-item sizes: (1024, 1024, 64)

    [devices.txt] --> [ 2 0 1024 1024 1024 ] OK?

    There are only a limited number of CUDA Pascal cards that can effectively plot HDDs for Bursting. Afaik, the various vendor versions alter the cooling & design, but this has no significant effect on the underlying performance & capabilities of each Nvidia GPU.

    So the optimal [ GPU Plot Generator 4.1.1 ] settings for all variants of the GTX-1080Ti/1090 (for example) would be the same across manufacturers/models. Given this, it might be useful for Burst-Team users to publish what works for them, by Nvidia model - that is, the contents of their working [devices.txt] file and the command line parameters.

    It seems counter-productive for every punter to have to corral so many variables in order to make this work!
    Thx.



  • @BeholdMiNuggets
    For your devices.txt file, you can try: 2 0 8192 1024 8192
    In depth:

    • globalWorkSize: 8192 = 2GB GPU RAM
    • localWorkSize: 1024 = 1024 CUDA cores
    • hashesNumber: 8192 if your card is tied to your display, else 4.
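    As a sanity check on the breakdown above, the GPU buffer implied by a devices.txt line is just globalWorkSize times the 256 KiB nonce size:

```python
# GPU buffer implied by a devices.txt globalWorkSize, assuming the
# standard 256 KiB Burst nonce size.
NONCE_SIZE = 256 * 1024

def gpu_buffer_gib(global_work_size):
    return global_work_size * NONCE_SIZE / 2**30

print(gpu_buffer_gib(8192))   # 2.0 GiB, matching the breakdown above
print(gpu_buffer_gib(16384))  # 4.0 GiB
```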

    About the devices list, I totally agree. That was the idea behind an issue I opened a while ago. At the time, I didn't find many volunteers to share and collect those parameters.

    I can gladly add a file to the official repository listing all the devices reported by the community, along with working parameters and average nonces/minute.



  • @cryo said in GPU plot generator v4.1.1 (Win/Linux):

    @BeholdMiNuggets
    For your device.txt file, you can enter...

    Thanks, I'll try your suggested parameters tomorrow.

    • hashesNumber: 8192 if your card is tied to your display, else 4.

    Was not aware that the GPU being plugged into a display made so much difference, but will adjust accordingly.

    About the devices list, I totally agree on this. That was the idea behind [an issue I opened a while ago]

    I'm guessing that there are only about a dozen video cards most prevalent for Burst plotting, so it's not a huge task.

    I can gladly add a file on the official repository to list all the devices reported by the community along with working parameters and average nonces/minutes.

    Hopefully, the punters on this Forum (& others) can provide some data.



  • @BeholdMiNuggets The hashesNumber should be renamed to intensity. Performance is almost the same between 8192 and 4. It's just that the global work will be divided into more steps to allow the graphics card to answer standard display rendering calls; otherwise a watchdog kills the plotter to prevent the display from hanging.



  • Trying to find optimal parameters for my GTX 1060 6GB card:
    Using Windows 7 x64, 8GB RAM, hexa-core AMD Phenom II X6 1055T, 2800 MHz (14 x 200)
    MSI GTX 1060 6GB (default values for core and mem), GeForce Game Ready driver 382.53

    The setup gives me the following recommended parameters:
    1 0 6144 768 8192
    But it never started: driver crashes and out-of-resources errors.

    For me, the solution was halving the parameters in devices.txt.
    Started with this: 1 0 6144 768 8192
    Now it's: 1 0 1536 192 4096

    It works now in "direct" mode with my GTX 1060 6GB.
    But the speed at start was 40,000-30,000 nonces per minute, and it dropped to 11,000.

    GPU TDP jumps from 5% to 41%.
    The GPU chip temperature rose to 67°C.

    Tried this one: 1 0 2048 256 4096. It worked for 5 minutes, then the driver crashed again.
    If I can help someone with my hardware find a solution or optimised numbers, PM me.
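    The halving approach above can be sketched as a simple candidate generator (a hypothetical helper, not part of the plotter; hashesNumber is held at 4096 here, as in the working line):

```python
# Generate successively halved devices.txt lines to try after a
# CL_OUT_OF_RESOURCES crash. Purely illustrative.
def halved_candidates(global_ws, local_ws, hashes, steps=3):
    for _ in range(steps):
        global_ws //= 2
        local_ws = max(local_ws // 2, 1)
        yield f"1 0 {global_ws} {local_ws} {hashes}"

for line in halved_candidates(6144, 768, 4096):
    print(line)
# the second candidate, "1 0 1536 192 4096", is the line that
# ended up working on this card
```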

