The original design for Burstcoin uses a system of 'plots' and 'scoops'. A 'plot' is a 256KB cluster containing a chain of hashes; an individual hash within it is called a 'scoop'.
Every 'round' (or 'block') of Burstcoin derives two things: a target value (similar to a winning lottery number) and a random target scoop number. Because the hashes are chained, calculating any particular 'scoop' requires knowing its whole 'plot'.
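To make the structure concrete, here is a minimal Python sketch of the idea as I understand it. This is not the real algorithm: actual Burstcoin uses Shabal-256 and a more involved plot construction, so sha256 and the XOR step here are stand-ins that merely reproduce the "whole plot needed for any scoop" property.

```python
import hashlib

SCOOPS_PER_PLOT = 4096  # 4096 scoops per 256KB plot in the original design

def generate_plot(seed: bytes) -> list[bytes]:
    # Chain the hashes: each one depends on the previous.
    chain, h = [], seed
    for _ in range(SCOOPS_PER_PLOT):
        h = hashlib.sha256(h).digest()
        chain.append(h)
    # Mix a hash of the entire chain into every scoop, so that no
    # individual scoop can be computed without building the whole plot.
    final = hashlib.sha256(b"".join(chain)).digest()
    return [bytes(a ^ b for a, b in zip(c, final)) for c in chain]

def candidate_for_round(plot: list[bytes], block_hash: bytes) -> bytes:
    # Each round derives one target scoop number from the block data;
    # only that scoop is this plot's lottery ticket for the round.
    scoop_number = int.from_bytes(block_hash, "big") % SCOOPS_PER_PLOT
    return plot[scoop_number]
```

A miner stores many such plots, and each round reads the target scoop from every plot and compares it against the target value.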
I'm not completely sure, but there seems to be a belief that grouping the hashes this way (I call them 'tickets', in the sense that each block is a 'lottery') gives a greater advantage over a pure processing-based miner.
This belief may not be well-founded. It is true that the grouping makes calculating any individual scoop up to 4096 times more expensive, but it also reduces the number of valid 'candidates' for a particular block by a factor of 4096: only one scoop out of every 4096 stored is usable per round.
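A back-of-the-envelope way to see why the two factors cancel, under the simplifying assumption that one hash equals one unit of work:

```python
# Cost for a processing-only miner to produce one candidate, and how
# many candidates a 256KB plot yields per block, in both designs:
grouped_hashes_per_candidate   = 4096  # must build the whole plot for one scoop
grouped_candidates_per_plot    = 1     # only the target scoop counts this round

ungrouped_hashes_per_candidate = 1     # each stored hash stands alone
ungrouped_candidates_per_plot  = 4096  # every hash in the 256KB is usable

# The total work to match a plot's worth of candidates is identical:
assert (grouped_hashes_per_candidate * grouped_candidates_per_plot ==
        ungrouped_hashes_per_candidate * ungrouped_candidates_per_plot)
```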
I think a more useful metric is the time it would take a processing-only miner to generate an equivalent number of candidates, say, comparable to a large, terabyte-scale mined dataset.
If the block time is, say, 1 minute, and plotting the dataset to disk took a month, then a processing-only miner would need about 43,200 times the computing power (there are 60 * 24 * 30 = 43,200 minutes in a month) to test the same number of candidates within a single block.
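The same arithmetic as a trivial script; the month-long plotting time and 1-minute block time are just the example figures above:

```python
MINUTES_PER_MONTH = 60 * 24 * 30  # = 43,200

block_time_minutes = 1                     # example block time from above
plotting_time_minutes = MINUTES_PER_MONTH  # assume a month to fill the disk

# The disk miner replays a month of hashing work every block for free;
# a processing-only miner must redo all of it within one block time.
advantage = plotting_time_minutes / block_time_minutes
print(f"processing-only miner needs ~{advantage:,.0f}x the hash rate")
```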
I understand this still may not be completely clear. The original description and flow chart weren't very clear either. I'll do my best to simplify this further if I can.