Cause of the botnet ( in response to " pajeet " exploit )
-
@captinkid said in Cause of the botnet ( in response to " pajeet " exploit ):
But it would only have to target the scoop that is up. All of the other scoops would be ignored, effectively increasing its leverage by 4000x.
But an ASIC that could simulate 4TB of only the specific scoop that is up would be the equivalent of 16PB of HDD space.

We have this discussion at least every half year. Your calculation is not right, as there are Shabal functions and XORs in between during plotting. @Blago is an expert at answering exactly this question now. An ASIC has to compute all the scoops to extract the one it needs. Check this out:
-
@captinkid You can't calculate scoop 4096 without computing the 4095 scoops before it: scoop(x + 1) = shabal(scoop(x)), and the plotting values require the hash from scoop 4096. (Burst Flowchart) So even if the block is mining scoop 1, you still have to calc all 4,096 scoops per nonce.
-
Ahh understood! That's actually a very clean design. I retract my previous uneducated statements :)
-
@captinkid Yeah, that's what helps make it ASIC resistant. And my formula above was a little simplistic:
chunk(x + 1) = shabal(chunk(x))
scoop(x) = chunk(x) XOR chunk(4096)

So to be as effective as 150TB of physical disk, the theoretical ASIC would need to compute 850M nonces per minute, which is roughly 3.5 trillion scoops per minute, each one requiring a computationally intensive Shabal hash. Like I said - no time soon, and I doubt in my lifetime.
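The chained construction above can be sketched in a few lines of Python. This is a toy model, not the real Burst plotter: SHA-256 stands in for Shabal-256, and a single seed stands in for the real (account ID, nonce number) derivation. What it preserves is the structural point: chunk(x + 1) depends on chunk(x), and every scoop is XORed with chunk(4096), so all 4,096 chunks must be hashed even when the current block only reads one scoop.

```python
import hashlib

SCOOPS_PER_NONCE = 4096

def plot_nonce(seed: bytes) -> list:
    # Build the chunk chain: chunk(x + 1) = hash(chunk(x)).
    # (SHA-256 is a stand-in for Shabal-256 in this sketch.)
    chunks = [hashlib.sha256(seed).digest()]
    for _ in range(SCOOPS_PER_NONCE - 1):
        chunks.append(hashlib.sha256(chunks[-1]).digest())
    last = chunks[-1]  # chunk(4096): only known after the whole chain
    # scoop(x) = chunk(x) XOR chunk(4096), so no scoop -- not even the
    # first -- can be produced without hashing all 4096 chunks.
    return [bytes(a ^ b for a, b in zip(c, last)) for c in chunks]
```

Note the arithmetic in the post is consistent: 850M nonces at 4,096 scoops each is about 3.48 trillion scoop computations per minute.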
-
@haitch and don't forget that HDD/SSD sizes are rising....
-
@daWallet Yep, Samsung said they'd be able to build 128TB SSDs by 2018. If they can do that cost-effectively, bulk storage is about to get cheap.
-
I'd say in another 5 years platter HDDs won't be made anymore, because SSDs will be cheaper and bigger.
-
@Gibsalot It'll all come down to economies of scale - if SSDs get a whole lot bigger, does the price/TB come down? If not, the big SSDs will only be used by a select few. That said, SSD prices have been on a steady downward trend in $/GB.
-
@haitch Yep, and it normally takes around 5 years for enterprise equipment to start making its way into the consumer market. With a 128TB SSD set to hit the market next year, I'd say in 5 years we will have 10 - 40 TB SSDs as the norm for enthusiast grade on the consumer end.
-
@Gibsalot Yep.
-
@Lexicon Yeah, his estimated plot size if he was running "legit" would be 2.592 PB.
-
@Lexicon @HiDevin As mentioned in the other thread, 7 of his blocks have also been won from the SAME nonce number... How is that possible?? Surely the odds against winning 7 blocks with one nonce in such a small amount of time are astronomical.
These blocks;
374232
374469
374519
374525
374612
374624
374693
-
@arihan That's odd...
-
@gpedro very odd...
-
@arihan The nonce number that won all the blocks is the maximum longint number that Java can display. The blocks were apparently won with a nonce number in excess of that, but it couldn't be displayed properly.
-
@haitch But how is that possible if that longint is the max index of nonces that can exist on one account? How can he be winning blocks with a nonce higher than that? Doesn't make sense...
-
@gpedro I was mistaken: max longint is not the nonce number limit, it goes higher. https://forums.burst-team.us/topic/6390/has-pajeet-found-an-exploit-in-the-mining-system/129
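For context on the bound being discussed: Java's long is a signed 64-bit integer, so the largest value it can represent is 2^63 - 1. A quick check in plain Python:

```python
# Java's long is a signed 64-bit integer, so Long.MAX_VALUE is 2^63 - 1.
JAVA_LONG_MAX = 2**63 - 1
print(JAVA_LONG_MAX)  # 9223372036854775807
```

A nonce number beyond this would overflow a long, which matches the garbled nonce display described above.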
-
@haitch said in Cause of the botnet ( in response to " pajeet " exploit ):
@Marc That plotter would be useful to a botnet, but not for a dedicated miner. See above for why.
@Evo A scoop is a scoop is a scoop. Each nonce is a hash of the previous nonce. The hashes in the scoop number are then combined with the previous blocks gensig, and hashed again, then computed against the block target height. A scoop at the beginning of the file is just as likely to win as the last scoop.
So the assumption of the SPlotter author is incorrect, and smaller plot files shouldn't lead to lower deadlines at all? I don't understand the math/algorithm behind mining, but I do think the author realizes that nonces are nonces and scoops are scoops. The way I read it is that for the miner (which calculates the deadline, right?) the relative position of the nonce within the plot matters, and by generating many small plots these positions are different, I guess?
His "evidence" (the screenshot he posted) sure looks convincing.
-
@jant90 It's complete and utter BS.
I have Burst account X, I plot a single 500GB plot of nonces 0...Y. I also plot 100 5GB files for nonces 0...Y.
A nonce is computationally derived from your ID, the last scoop number, the value of scoop(4096). So both the single 500GB file and the 100 x 5GB files will have exactly the same data in them.
Which plot do you think will be more efficiently and quickly mined?
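haitch's point can be demonstrated with a toy model. Assuming only that each nonce's contents are a deterministic function of the account ID and nonce number (SHA-256 below is a stand-in, not the real Burst derivation, and the account ID is made up), one big plot and many small plots covering the same nonce range come out byte-for-byte identical:

```python
import hashlib

def nonce_data(account_id: int, nonce: int) -> bytes:
    # Stand-in for the real plotter: in Burst each nonce's contents are
    # a deterministic function of the account ID and nonce number, so
    # any deterministic function makes the point. (SHA-256 here is NOT
    # the real derivation.)
    return hashlib.sha256(f"{account_id}:{nonce}".encode()).digest()

def plot_range(account_id: int, start: int, count: int) -> bytes:
    return b"".join(nonce_data(account_id, n) for n in range(start, start + count))

ACCOUNT = 1234567890  # hypothetical account ID
TOTAL = 1000

# One "big" plot covering nonces 0..999.
single = plot_range(ACCOUNT, 0, TOTAL)

# Ten "small" plots of 100 nonces each, covering the same range.
pieces = [plot_range(ACCOUNT, i * 100, 100) for i in range(10)]

# Byte-for-byte identical: splitting a plot changes nothing but file layout.
assert single == b"".join(pieces)
```

Since the data is identical either way, the only real difference is I/O overhead: one large file mines with fewer seeks and open/close operations than a hundred small ones.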