GPU Plotter Help


  • Mod

    @Redmogul https://github.com/bhamon/gpuPlotGenerator/blob/master/README.md

    [platformId] [deviceId] [globalWorkSize] [localWorkSize] [hashesNumber]
    platformId: The platform id of the device. Can be retrieved with the [listPlatforms] command.
    deviceId: The device id. Can be retrieved with the [listDevices] command.



  • @Blago
    I appreciate the info, but when I run the listPlatforms command it still only shows my CPU.



  • Show your 'junk' (system specs) when asking for help; others will be more likely to help. Ubuntu, or Windows 7, 8, or 10? What type of video card? How much memory on your video card? What type of CPU? How much memory on your motherboard?



  • @TurtleWinsRace OK Thank You for your suggestion. Here's the system info I'm working with.
    Windows 7 Home Premium 64-bit
    Processor: AMD FX(tm)-8350 Eight-Core Processor
    Installed memory (RAM): 16.0 GB
    Graphics card: AMD Radeon R9 270X 4 GB



  • @Redmogul said:

    it still only shows me my CPU

    Could you please be a bit more specific? How many platforms does it tell you that you have? Does it show like this?

    (screenshot of listPlatforms output)



  • @Propagandalf I really appreciate your help!!
    I finally figured out that part; I was making it much more difficult than I should have (new to this).
    I am stuck on one last part, which is how to figure out the stagger.



  • @Redmogul No problem, I'm glad to help! Have you already saved a configuration file?

    The stagger is a value that determines how much RAM is used while plotting. The number of nonces you wish to plot must be a multiple of your stagger, otherwise it won't work. However, I do not know the formula for calculating stagger, but I believe haitch does.

    @haitch Redmogul has 16 GB RAM and 4GB VRAM. How do you take RAM vs. VRAM into consideration when calculating stagger and what is the formula? If you decide to plot several files at once, should the stagger be changed for each file?
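    The "multiple of stagger" rule above can be sanity-checked with a few lines of Python. This is just an illustration; the helper name `nonces_valid` is made up and is not part of any Burst tool:

```python
# Hypothetical helper: check a planned plot against the rule from this thread,
# namely that the number of nonces must be an exact multiple of the stagger.

def nonces_valid(nonces: int, stagger: int) -> bool:
    """Return True if the nonce count is a multiple of the stagger."""
    return stagger > 0 and nonces % stagger == 0

print(nonces_valid(1_860_000, 2000))  # 1860000 / 2000 = 930, so True
print(nonces_valid(1_860_001, 2000))  # not a multiple, so False
```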



  • @Propagandalf ok cool
    so this is what I have
    gpuPlotGenerator generate buffer E://11928632418087677223_0_1860000_2000
    pause
    I tried to run it but got: [ERROR] Invalid parameters count at line [1]



  • @Propagandalf
    and used this as the devices file:
    0 0 2000 2000 8192



  • @Redmogul said:

    generate buffer E://11928632418087677223_0_1860000_2000

    Preceding the rest of your file parameters, I believe the drive path should have only a single slash after the drive letter (E:/ or E:\), not the doubled slash in E://. I also think that your stagger number is wrong, but I would prefer if someone else gave you some good intel on that.

    @Redmogul said:

    0 0 2000 2000 8192

    Are you certain that you have selected the correct value for device platform and device ID? I'm not saying you haven't, but do check. That last value, 8192, can be reduced if you experience display driver crashes when trying to plot. I had to reduce mine all the way down to 900 before it worked.
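    The plot file name itself encodes four values, accountID_startNonce_nonceCount_stagger. As an illustration only (this parser is hypothetical, not part of gpuPlotGenerator), the fields can be pulled apart and checked like this:

```python
# Illustrative sketch: split a Burst plot file name into its four
# underscore-separated fields and sanity-check the stagger rule.

def parse_plot_name(name: str) -> dict:
    account_id, start_nonce, nonces, stagger = (int(x) for x in name.split("_"))
    if stagger <= 0 or nonces % stagger != 0:
        raise ValueError("nonce count must be a multiple of the stagger")
    return {"account_id": account_id, "start_nonce": start_nonce,
            "nonces": nonces, "stagger": stagger}

info = parse_plot_name("11928632418087677223_0_1860000_2000")
print(info["nonces"] // info["stagger"])  # 930 stagger groups
```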



  • @Propagandalf wow
    I was following the video and the readme file for GPU plotting, and that's the way they show it. I'll give it a try.



  • @Propagandalf same error message



  • @Redmogul I still don't know the exact answer, but wish I could help you! Have you figured it out yet?



  • @Propagandalf Thanks for the help; it was only one / and that seemed to fix it. Now I have to figure out why I can't get the miner to work... lol



  • @Redmogul Glad to hear you figured out the plotting! Perhaps you could start a new thread with your mining issues if you need help. Remember to include as many details as possible, so that people can help more easily.

    May the burst be with you!



  • @Redmogul
    Hey, when I started this whole thing I was downloading every miner, plotter, and wallet I could find, and found myself pulling my hair out trying to write bat files, run command lines, and just plain figure this stuff out. I found that the AIO (All In One) client:
    https://forums.burst-team.us/topic/16/hotfix-burst-client-for-windows-v0-3-1-all-in-one-wallet-plotting-mining
    is your best bet. daWallet seems to be constantly working on this project, and it seems to me the whole project is heading toward making things turnkey for new users. Follow the video; it is step by step, and you will be mining in 30 minutes (after you have plotted).

    Optional: if you want to get started fast with the local wallet, I recommend you download the latest package of the blockchain here: http://db.burst-team.us. You have to extract it to your C:\Users\Username\AppData\Roaming\BurstWallet\burst_db\ folder. (The AppData folder is hidden, so you will have to change the Explorer settings to see hidden files/folders.) Manually place it (Blockchain: DB.zip) as the above forum post describes; it takes forever to download the blockchain through the client.

    Again, I can't emphasize it enough: just follow the video and do everything through the client (the bottom-left buttons, Plotting and Mining). After plotting there are three settings for mining, and I believe the third one down is for GPU mining. I honestly did not see a difference in the speed or the number of Burst deadlines I was finding while doing GPU mining, and my video card was getting hella hot. I live in the desert, and it's hot enough as is without adding to the climate and burning up my video cards. :) I hope this helps. Have a good day,

    Kind Regards,

    ~LostBoy_SeekTime



  • @LostBoy
    Good info! I was in the same boat as you when I started, and I did the same thing, hehe.

    @Redmogul
    I just want to add that you don't have to use a GPU miner if your plots were plotted with a GPU plotter, you could also use a CPU miner.

    @LostBoy

    my Video card was getting hella hot

    Unless you have an insane number of TB, I cannot imagine why your video card would get hot during mining, unless you are using your GPU to mine other cryptocurrencies at the same time. After all, Burst is PoC (Proof of Capacity), not PoW (Proof of Work).



  • @Propagandalf
    Yeah, I was doing Litecoin too, but I just turned that off because I need to pack that machine for the move to the Midwest. :) My Burst machine will stay running until I light the torch and walk out the door. :)

    Regards


  • admin

    @Propagandalf I generally set stagger to 0.5 * RAM, so in Redmogul's case 8 GB. If you're plotting multiple files at once, divide this by the number of files being plotted; e.g. if plotting 4 files, 2 GB each. The NVRam setting goes into devices.txt and is per device, so it doesn't matter how many files you're plotting; just set it for that device.

    H.
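    haitch's rule of thumb can be turned into a small sketch. It assumes the conversion explained later in this thread (one stagger unit = 256 KB, i.e. 4 stagger units per MB); the function name `stagger_for` is made up for illustration:

```python
# Sketch of the rule of thumb: budget half the system RAM for plotting,
# split evenly across simultaneous plot files, then convert MB to stagger
# units (one stagger unit = 256 KB, so 4 units per MB).

def stagger_for(ram_gb: float, files: int = 1) -> int:
    budget_mb = (ram_gb / 2) * 1024 / files  # half of RAM, shared per file
    return int(budget_mb * 4)                # 4 stagger units per MB

print(stagger_for(16))      # 8 GB budget -> 32768
print(stagger_for(16, 4))   # 2 GB per file -> 8192
```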



    Thanks for the reply. For non-technical people, it can sometimes be quite difficult to relate to all the different values and calculations. I have had a hard time understanding how stagger works and how to calculate it myself, but if I understand you correctly, this is how we should think when planning the stagger:

    Finding stagger (the next blockbuster film from CryptoPixar)
    1: Find out how much RAM is available in your system (remember that even though you have for example 8 GB of ram installed, you might already be using 3 GB for other processes, which leaves you with 5 GB available). RAM should not be confused with NVRAM, which is your graphics card's dedicated memory, and is not part of this equation.*

    2: Decide how much of your available RAM to dedicate to plotting, for instance 4 out of 5 GB of available RAM.

    3: Understand that one stagger unit represents 256 KB of RAM. Therefore, if you want to run an exact calculation to convert from memory to stagger, you need to find out how many KB the 4 GB of RAM you want to use consists of (break the numbers down into smaller units): 4 GB equals 4096 MB equals 4194304 KB.

    4194304 KB (RAM) divided by 256 (stagger unit measure) equals 16384 (stagger). This will be your correct value for stagger, based on the above example from section 1 and 2.

    4: Know that it is easier to simply find out your amount of available RAM in MB and multiply it by 4 (4096 x 4 = 16384).

    5: If you decide to plot several files at once, you need to divide your stagger by the number of files being plotted. For two plot files it would be 16384 stagger divided by 2 equals 8192 per plot file.

    6: Start plotting once you have worked out the other values, such as:

    4401562696129194441 - replace with your numeric account ID
    106430464 - nonce to start at, do not overlap (if this is your first plot use 0)
    7618560 - number of nonces to plot, must be a multiple of the stagger (the next field)
    16384 - stagger, where each unit represents 256 KB of memory, which in this case totals 4 GB (4096 MB)

    Example command: gpuPlotGenerator.exe generate direct u:\4401562696129194441_106430464_7618560_16384

    Note: My above calculations were based on the binary value of bytes, which differs from the decimal value.

    *NVRAM settings are put in devices.txt and are per device, so they do not need to be changed according to how many files you are plotting simultaneously.
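    The arithmetic in steps 3-5 above can be checked with a few lines of Python (binary units assumed, as in the note above):

```python
# Worked check of steps 3-5: 4 GB of RAM dedicated to plotting.
ram_for_plotting_gb = 4
ram_kb = ram_for_plotting_gb * 1024 * 1024   # 4 GB -> 4194304 KB
stagger = ram_kb // 256                      # one stagger unit = 256 KB
print(stagger)                               # 16384

# Shortcut from step 4: MB * 4 gives the same number.
print(ram_for_plotting_gb * 1024 * 4)        # 16384

# Step 5: two files plotted at once share the budget.
print(stagger // 2)                          # 8192
```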

    Source:
    @haitch said:

    I generally set stagger to = 0.5 * RAM, so in Redmogul's case 8GB. If you're plotting multiple files at once, divide this by the number of files being plotted - eg if plotting 4 files - 2GB each. The NVRam setting goes into the devices.txt and is per device, so it doesn't matter how many files you're plotting, just set it for that device.

    gpuPlotGenerator.exe generate direct u:\4401562696129194441_106430464_7618560_16384

    4401562696129194441 - replace with your numeric account ID
    106430464 - nonce to start at. If this is your first plot use 0
    7618560 - number of nonces to plot, must be a multiple of the stagger - the next field
    16384 - stagger. Each 1 = 256 KB of memory, so divide by 4 to get amount of memory to use. In this case it's 4GB (4096 MB)

    @FrilledShark @luxe
    I am not sure if this exists already in the FAQ section or if it is needed, but feel free to use this if it is of any help to newbies.

