Improved mining speed - Multi GPU

It looks like I’ve screwed up the return value from the miner, but it is a completely harmless error report. The miner works exactly as it should; it just reports an error when it fails to find a solution.

I’ll try to fix it later since it looks ugly, but have limited time today…


The error reported by the miner:
2018-12-08 00:49:24.392 [error] <0.1266.0>@aec_conductor:handle_mining_reply:718 Failed to mine block, runtime error; retrying with different nonce (was 16731439899052739363). Error: no_solution

is totally benign, just the wrong thing returned by the more efficient miner. But if you pull the multi_gpu_v2 branch again: I have pushed a patch that fixes this and instead gives the expected debug message!


This is great; I had been worried. Thank you for your work, efficiency has increased.


I think I’m at about 3 solutions / second per 1080 Ti with a 6-GPU setup, so it’s a 4-5x increase in efficiency : )

There is still probably about 20-30% room to go compared to a consistent Stratum work feed, but it is already a HUGE difference.

Here it is 5.3 seconds between mining attempts, which is a bit high; for best network performance it should be between 3-5 s. So adjusting N until the miner hits this sweet spot is crucial. Note: the node has to be restarted between changes to epoch.yaml. I suggest starting with N = 5 and adjusting accordingly.
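For reference, a sketch of where this lives in the node configuration. The key names are from memory of epoch.yaml, and passing the repeat count N as "-r 5" in extra_args is an assumption; check the README of the branch you are running:

```yaml
# Sketch only: key names from memory of epoch.yaml; the "-r 5"
# repeat-count flag is an assumption, not a confirmed interface.
mining:
    autostart: true
    cuckoo:
        miner:
            executable: cuda_miner
            extra_args: "-r 5"
```

Remember the node has to be restarted for changes here to take effect.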

Why do we need to target 3-5 s? Isn’t it more efficient for the miner to just continue hashing until it finds a valid nonce? Is it because the miner does not return until it finishes checking all N nonces?

No, it is because the miner gets the block hash to mine on as input, and the candidate block will change as soon as there is a new micro block, which happens every 3 seconds with the Bitcoin-NG parameters used in the Aeternity network. Thus, with the current design, the miner can’t be (much) longer-running. We are looking into daemonizing the miner, but this is a much more involved project than the quick re-design made this week…
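A toy back-of-envelope model of why long attempts hurt (this is not the node’s actual accounting, just an illustration): if an attempt runs for T seconds and cannot be interrupted, everything computed after the candidate changes is spent on an outdated candidate.

```python
def expected_stale_fraction(attempt_s: float, interval_s: float = 3.0,
                            samples: int = 100_000) -> float:
    """Average fraction of an attempt spent on an outdated candidate,
    assuming the attempt starts at a uniformly random offset within a
    micro-block interval and cannot be interrupted (toy model)."""
    stale = 0.0
    for i in range(samples):
        # Start offset of the attempt within the 3 s micro-block interval.
        offset = (i + 0.5) / samples * interval_s
        # Time until the candidate changes under the attempt.
        fresh = min(attempt_s, interval_s - offset)
        stale += attempt_s - fresh
    return stale / samples / attempt_s

print(expected_stale_fraction(5.3))  # ~0.72: most of a 5.3 s attempt is stale
print(expected_stale_fraction(3.0))  # ~0.50
```

Under this simplification, pushing attempts much past the micro-block interval mostly buys time on stale candidates, which is why the 3-5 s target makes sense.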

Can you share your thoughts? What is the best setting for repeats?

Thanks, I will pull it today and test. Thanks for all the hard work!

Thanks for this. Before, I was sending around 2 possible solutions per minute per 1080 Ti. Now I’m sending 10. This version is about 5 times faster.

Next optimisation should be around the CPU usage. Most of the time is spent by the CPU adapting the results from the GPUs. My CPU saturates very fast, and my optimum is 2 GPUs per rig; using more GPUs on the same rig doesn’t improve efficiency.

Hi hanssv, can the miner tools connect to a pool?

Asus DUAL 1070 OC 8 GB GDDR5: 350 H/s

Next optimisation should be around the CPU usage.

The latest commits to my repo support a flag -c to reduce CPU load, at a very slight cost in performance.

Just a suggestion: would you mind adding a version to be shown on “-s”?

Because bee mine pool is using your source code and it would be nice to know what they based their work on.

Can SWAN_SYNC be used to dedicate a specific CPU thread to a specific GPU? It works well for distributed computing.
I’m watching a 4x1080 Ti rig with a 28-thread Xeon. When the AE multi-GPU miner is running, it hops around from one CPU thread to another. If I stop the AE miner, my CPU DC WUs stay on the same CPU thread. If I run GPUgrid WUs with SWAN_SYNC, they all stay put.
The wizards in the GPUgrid forum figured it costs ~10% in performance to hop around.
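As far as I know, SWAN_SYNC is an environment variable honoured by GPUgrid’s ACEMD application, so it won’t affect this miner. A generic alternative for the thread-hopping problem is OS-level CPU pinning, e.g. `taskset -c 0 <miner-cmd>` on Linux, or programmatically (this is a standalone sketch, not part of the miner):

```python
import os
import sys

def pin_to_cpu(pid: int, cpu: int) -> None:
    """Pin a process to a single CPU (Linux only); the programmatic
    equivalent of `taskset -cp <cpu> <pid>`."""
    os.sched_setaffinity(pid, {cpu})

if __name__ == "__main__" and sys.platform.startswith("linux"):
    pin_to_cpu(os.getpid(), 0)                 # pin this process to CPU 0
    print(os.sched_getaffinity(os.getpid()))   # the allowed-CPU set is now {0}
```

Whether pinning actually recovers the ~10% seen on GPUgrid would need measuring on this miner.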

Is there a way to specify the memory size for each GPU individually?
E.g., I have a rig with a 1080 Ti, a 1080, a 1070 Ti and a 1070. If I set extra_args: “-E 2”, then it slows all the other cards down or crashes them.

Unfortunately not at the moment. The extra_args apply to all instances.
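Until per-instance settings exist, one possible workaround outside the node is a small wrapper that starts one miner process per GPU with its own flags. This is only a sketch: the miner path is a placeholder, and the -d (device) and -E flags are assumptions based on the CUDA miner conventions, so check your build’s help output.

```python
import subprocess

# Hypothetical per-GPU settings: device index -> extra flags.
# Flag names (-d for device index, -E for memory sizing) are
# assumptions; verify against your miner's actual options.
PER_GPU_ARGS = {
    0: ["-E", "2"],   # e.g. the 1080 Ti can afford the larger setting
    1: [],            # e.g. the 1070 stays on defaults
}

def launch_miners(miner="./cuda_miner", per_gpu=PER_GPU_ARGS, dry_run=False):
    """Build one command line per GPU; with dry_run=True just return them."""
    cmds = [[miner, "-d", str(dev), *args] for dev, args in sorted(per_gpu.items())]
    if dry_run:
        return cmds
    return [subprocess.Popen(cmd) for cmd in cmds]
```

The node would then need to be told not to spawn the miner itself, which may or may not be practical with the current config.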

Updated - Monday, Dec 10th, 9:30 CET

That’s a little too hard for me to maintain. Could you instead convince bee mine pool to show from which of my commits they built?

If I spoke Chinese I definitely would convince them. Hehe. Thanks!

There is a solution.

Question: Why do you use -E 2?

@tromp, if you take a look at my code you will see some improvements:

I have different cards on the same machine and had many errors, like out of memory and so on, so I started to share my solution:

https://forum.aeternity.com/t/special-driver-for-gtx-series-open-source

and thanks: ak_XoongvC5xDqBCwdLr3ok1SzK3xEMFVwQ1sA7vj1guBd99HFjQ