Thanks for this. Before, I was sending around 2 possible solutions per minute per 1080 Ti; now I'm sending 10. This version is roughly 5 times faster.
The next optimisation should be around CPU usage. Most of the time is spent by the CPU adapting the results from the GPUs. My CPU saturates very quickly, so my optimum is two GPUs per rig; adding more GPUs to the same rig doesn't improve efficiency.
Can SWAN_SYNC be used to dedicate a specific CPU thread to a specific GPU?
It works well for distributed computing.
I'm watching a 4x 1080 Ti rig with a 28-thread Xeon. When the Ae-MultiGPU miner is running, its load hops around from one CPU thread to another. If I stop the AE miner, my CPU DC WUs stay on the same CPU thread, and if I run GPUgrid WUs with SWAN_SYNC they all stay put.
The wizards on the GPUgrid forum figured the hopping costs ~10% in performance.
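For what it's worth, keeping a process on one core doesn't have to go through SWAN_SYNC at all (as I understand it, SWAN_SYNC just makes the driver busy-wait on a CPU thread). On Linux the pinning itself is an OS-level job. Here's a minimal sketch using `os.sched_setaffinity`; in practice you'd apply it (or `taskset -c N`) once per miner process, with a different core per GPU. The core choice here is purely illustrative:

```python
import os

def pin_to_core(core: int) -> None:
    """Restrict the calling process to a single CPU core (Linux)."""
    os.sched_setaffinity(0, {core})  # pid 0 = the current process

# Illustrative: pick the lowest core we're currently allowed on
# and pin ourselves to it, so the scheduler stops moving us around.
core = min(os.sched_getaffinity(0))
pin_to_core(core)
print(sorted(os.sched_getaffinity(0)))  # now a single-core set
```

The same effect from the shell, per GPU process, would be something like `taskset -c 3 ./miner -d 0` with a different `-c` value for each device.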
Is there a way to specify the memory size for each GPU individually?
E.g., I have a rig with a 1080 Ti, a 1080, a 1070 Ti, and a 1070. If I set it with extra_args: "-E 2", it slows down or crashes all the other cards.
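If the miner ever grows per-device args, one could build them automatically from each card's memory. This is only a sketch: `nvidia-smi --query-gpu=memory.total` is a real NVIDIA interface, but the mapping from memory size to an `-E` value below is my own guess, and whether the miner accepts per-device extra_args at all is exactly the open question:

```python
import subprocess

def e_value_for(mem_mib: int) -> int:
    """Illustrative rule: the bigger '-E 2' only for cards with >= 8 GiB."""
    return 2 if mem_mib >= 8192 else 1

def per_gpu_args() -> list[str]:
    """Query total memory of each GPU via nvidia-smi and map it to a flag."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    # One line of output per GPU, each just a MiB number.
    return [f"-E {e_value_for(int(line))}" for line in out.split() if line]
```

On the mixed rig above this would hand the 1080 Ti a different `-E` than the 1070, instead of one global value dragging the smaller cards down.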