GPU plot generator v4.1.1 (Win/Linux)



  • @vadirthedark I am using Ubuntu 14.04 and GPU plot generator v4.0.3 (Win/Linux)



  • @CoinBuster Thanks for the info. I am using Ubuntu 16.04. Could you tell me what Nvidia driver you are using?
    Thanks



  • @vadirthedark You have to download the right package from Nvidia for your specific GPU: https://developer.nvidia.com/cuda-downloads

    It depends on which drivers you want to install; some of them support 16.04. The drivers I downloaded do support 16.04. I also installed pyopencl on my system.



  • @CoinBuster, would you tell me what advantages you see in installing pyopencl?
    My GPU (GeForce 750 Ti) has similar specs to yours. Did you install any additional
    repos (e.g. CUDA) to help with plotting/mining?
    I am interested in system details for Linux-based setups (successful and not). I did succeed once
    with a very slow setup that took 5 days to plot, only to stop when it hit what seemed to be the 5TB
    drive capacity, which had been derived from inaccurate input.
    Thanks again for any help you can give.
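
    For reference, the capacity arithmetic is simple: each Burst nonce occupies 4096 scoops of 64 bytes, i.e. 262,144 bytes (256 KiB) on disk, so a drive's byte capacity converts directly into a maximum nonce count. A small sanity-check sketch (hypothetical helper names, not part of the plotter):

```python
# Each Burst nonce occupies 4096 scoops * 64 bytes = 262,144 bytes (256 KiB).
NONCE_SIZE_BYTES = 4096 * 64

def max_nonces(drive_bytes):
    """Largest whole number of nonces that fits on the drive."""
    return drive_bytes // NONCE_SIZE_BYTES

def plot_size_bytes(nonces):
    """Exact on-disk size of a plot with the given nonce count."""
    return nonces * NONCE_SIZE_BYTES

if __name__ == "__main__":
    five_tb = 5 * 10**12  # a "5TB" drive in decimal (marketed) bytes
    n = max_nonces(five_tb)
    print(n, plot_size_bytes(n))
```

    Checking the requested nonce count against this bound before starting would have caught the inaccurate input before the 5-day plot hit the drive limit.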



  • I get an error when trying to run the GPU plotting software.

    Loading platforms...
    Loading devices...
    Loading devices configurations...
    Initializing generation devices...

    [ERROR] Unable to open the source file

    Anyone know what file the error is referring to?



  • Bump


  • root@Tesla-SuperComputer:/home/tate/gpuPlotGenerator-bin-linux-x64-4.0.2# ./gpuPlotGenerator listDevices 0
    ./gpuPlotGenerator: /usr/local/cuda-8.0/targets/x86_64-linux/lib/libOpenCL.so.1: no version information available (required by ./gpuPlotGenerator)
    -------------------------
    GPU plot generator v4.0.2
    -------------------------
    Author:   Cryo
    Bitcoin:  138gMBhCrNkbaiTCmUhP9HLU9xwn5QKZgD
    Burst:    BURST-YA29-QCEW-QXC3-BKXDL
    ----
    
    [ERROR][-1001][CL_UNKNOWN] Unable to retrieve the OpenCL platforms number
    root@Tesla-SuperComputer:/home/tate/gpuPlotGenerator-bin-linux-x64-4.0.2# 
    


  • @Tate-A Okay, installed the latest drivers and OpenCL. Now I get this.

    root@Tesla-SuperComputer:/home/tate/gpuPlotGenerator-bin-linux-x64-4.0.2# ./gpuPlotGenerator listDevices 0
    Inconsistency detected by ld.so: dl-version.c: 224: _dl_check_map_versions: Assertion `needed != NULL' failed!
    root@Tesla-SuperComputer:/home/tate/gpuPlotGenerator-bin-linux-x64-4.0.2# 
    


  • @ccminer said in GPU plot generator v4.0.3 (Win/Linux):

    @vadirthedark I finally got my new R9 380 4GB and installed it!
    So now I'm starting to face the same problem you had GPU-plotting my HDD.
    Were you finally able to use the script on Ubuntu?

    Hello everyone. I decided to give this GPU plotting method a try. Once I got through the initial stages of prepping my system, I recalled there being a question of whether gpuPlotGenerator 4.0.3 supports CUDA or not. I ran the software not knowing either way, but it just stopped.

    Before I go ahead and troubleshoot can someone clarify this for me please?

    Thanks!



  • Hi,

    What settings should we use to create very optimized plots?
    1 GPU and 16 GB of RAM.
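
    A fully optimized plot has a stagger equal to its total nonce count, and in buffer mode the stagger you can generate in one pass is bounded by RAM: roughly 256 KiB of buffer per staggered nonce (an assumption that ignores OS and application overhead). A rough upper bound for 16 GB:

```python
NONCE_SIZE_BYTES = 4096 * 64  # 256 KiB per nonce

def max_stagger(ram_bytes):
    """Largest stagger whose buffer fits in the given RAM (rough upper bound)."""
    return ram_bytes // NONCE_SIZE_BYTES

print(max_stagger(16 * 2**30))  # 16 GiB of RAM
```

    Plots larger than that bound would need direct mode or a later optimizer pass to reach full stagger.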



  • Hi, just checking out this plotter as I found out my integrated GPU will work with the plotter/miners for Burst.

    I ran the setup to configure the GPU and the recommended values were not powers of 2:

    [screenshot: recommended configuration values]

    I put in the recommended values, but the program appeared to store different values; specifically, the 19 was stored as an 8:

    [screenshot: values stored by the program]

    Is this normal?



  • @rds,

    bump,

    anyone?

    @haitch, @daWallet, @luxe, @ccminer ?


  • admin

    @rds Not seen that before - I'd manually set it to a power of two, either down to 4096 or up to 8192.
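
    That "down to 4096 or up to 8192" advice generalizes to any non-power-of-two value the setup suggests. A tiny illustrative helper (not part of the plotter) that finds the neighboring powers of two for a hypothetical recommended value like 5000:

```python
def pow2_neighbors(n):
    """Return the powers of two at-or-below and at-or-above n."""
    lower = 1 << (n.bit_length() - 1)            # e.g. 5000 -> 4096
    upper = lower if lower == n else lower << 1  # e.g. 5000 -> 8192
    return lower, upper

print(pow2_neighbors(5000))
```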


  • admin

    @rds This setup just creates the devices.txt, I guess ... you can always edit that by hand, too. But I can't tell you why the 19 was not saved and an 8 was stored instead (seems to be a CPU? very low work group size) ... if this is important to you, you could try contacting the dev via GitHub.



  • Hey guys,

    Has anyone else had an issue with gpuPlotGenerator where, in buffer mode, it starts by using pretty much 100% of my GPU (which is what I want) at around 30k-50k nonces/minute and then drops to around 12k nonces/minute? Sometimes it stays at the higher rate and I can get 1TB plotted in an hour or less. Other times it drops down and only utilizes my GPU at around 17%. It's weird that this doesn't happen all the time, and I was wondering if anyone else has had this problem.

    Also, I have an MSI Gaming X RX 480 8GB GPU.

    My devices.txt is:
    0 0 8096 64 8192

    My batch file looks like this:
    gpuPlotGenerator.exe generate buffer j:\Burst\plots\3817333640460646654_1026441216_4091904_36864
    [screenshot: GPU utilization graph]

    As you can see, there are a couple of spikes here and there, but it is barely using my GPU.

    The only gain I have managed so far is running gpuPlotGenerator as admin, which got me to 23%-27% GPU utilization and 18k-20k nonces/minute.

    Any help would be greatly appreciated!
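
    For anyone reading along: assuming the devices.txt fields are `<platformId> <deviceId> <globalWorkSize> <localWorkSize> <hashesNumber>` (my reading of the 4.x README, treat as an assumption), note that the 8096 in the line above is not a power of two - 8192 is. A quick check like this sketch catches that:

```python
def parse_devices_line(line):
    """Split a devices.txt line into named fields (field names assumed from the 4.x README)."""
    platform_id, device_id, global_ws, local_ws, hashes = map(int, line.split())
    return {"platformId": platform_id, "deviceId": device_id,
            "globalWorkSize": global_ws, "localWorkSize": local_ws,
            "hashesNumber": hashes}

def is_power_of_two(n):
    """True when n has exactly one set bit."""
    return n > 0 and (n & (n - 1)) == 0

cfg = parse_devices_line("0 0 8096 64 8192")
print(is_power_of_two(cfg["globalWorkSize"]))  # 8096 fails the check
```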


  • admin

    @KB-Bountyhunter The plot generator can only generate plots as fast as your drive can write them. It starts at 30-50k because the nonces are generated in memory ... once it starts writing to disk, the rate may drop to the write speed of your drive ... you could plot to multiple drives at once to increase plotting speed.
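
    The numbers in the question fit this explanation: at 256 KiB per nonce, a drive's sequential write speed converts directly into a ceiling on sustained nonces/minute. A back-of-the-envelope sketch (illustrative helper, example speed assumed):

```python
NONCE_SIZE_MB = 262144 / 1_000_000  # ~0.262 MB written per nonce

def disk_bound_nonces_per_min(write_mb_per_s):
    """Max nonces/minute a drive can absorb at the given sequential write speed."""
    return write_mb_per_s * 60 / NONCE_SIZE_MB

# A drive sustaining roughly 52 MB/s caps out near 12k nonces/minute,
# which matches the observed drop from 30-50k down to ~12k.
print(round(disk_bound_nonces_per_min(52.4)))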



  • Hi.

    I've been using Xplotter for some time, but recently got an RX 480 card.

    One feature I loved about Xplotter was that if you set the nonce number to 0, it would calculate the biggest plot file and fill the drive completely.

    Will this be a feature in a future update?

    And maybe an argument to automatically split the file - that would also be great.



  • Meh, I don't know. I tried plotting with an HD 6870, but the system started to freeze.



  • Hi,

    is this project still being developed?
    I'm not really good at C++, since most of my work is in Java, but I think there are still some points that could be improved. I looked at my GPU and HDD usage and found it very odd to see around 50% average GPU usage and 50% average HDD usage, because generation and writing are not done in parallel...

    For example, it should be possible to add two features:

    1. A hybrid mode, which fills part of the HDD (like direct mode does for the full plot size) while the GPU calculates the plot, and then writes it
    2. Reserving twice (4x?) as much system RAM, then generating part after part (keeping the GPU running all the time) while the HDD writes those parts
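
    The second idea is classic double buffering: a bounded queue lets the generator fill one buffer while the writer drains another. A minimal sketch with Python threads (the generate/write steps are placeholders, not the actual plotter):

```python
import queue
import threading

buf = queue.Queue(maxsize=2)  # two in-flight buffers: GPU fills one while the HDD drains the other
written = []

def producer(parts):
    for part in parts:
        buf.put(f"plot-part-{part}")  # stands in for GPU nonce generation
    buf.put(None)                     # sentinel: no more parts

def consumer():
    while True:
        part = buf.get()
        if part is None:
            break
        written.append(part)          # stands in for the HDD write

writer = threading.Thread(target=consumer)
writer.start()
producer(range(4))  # generation overlaps with writing
writer.join()
print(written)
```

    With the real workloads, the GPU only stalls when both buffers are full, which is exactly the disk-bound case discussed earlier in the thread.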


  • @tco42 I think in most cases the GPU will be much faster at generating than the write speed of the HDD, so it would still sit idle.

