A 1080 Ti solves a graph on 2^29 nodes in about 225 milliseconds, giving a rate of 4.4 Graphs Per Second (GPS). Only about 1 in 42 graphs has a 42-cycle, so solutions arrive at a rate of about one every 10 seconds. This assumes the GPU solver context is maintained between graphs, as when running the standalone solver on a range of nonces.

If you are building a new GPU context (i.e. allocating GBs of device memory) for every nonce then you’re doing it wrong…
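The arithmetic behind those rates can be checked directly:

```python
# Back-of-the-envelope check of the rates quoted above.
solve_time_s = 0.225       # ~225 ms per graph on a 1080 Ti
gps = 1 / solve_time_s     # graphs searched per second
cycle_prob = 1 / 42        # fraction of graphs containing a 42-cycle
solutions_per_s = gps * cycle_prob

print(f"{gps:.1f} GPS")
print(f"one solution every {1 / solutions_per_s:.1f} s")
```

This reproduces the 4.4 GPS figure and a solution roughly every 9.5 seconds.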

# Expected 1080 Ti performance

**tromp**#1

Stratum server mining

**tromp**#2

I’ll change my cuckoo CUDA solver to make it easier to run it through a function interface, as currently done for the cuckatoo solver in Grin:

Hopefully the AE devs can adopt a similar interface to avoid a solver restart at every nonce.
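Such a function interface might look roughly like the sketch below. The names (`create_solver_ctx`, `run_solver`, `destroy_solver_ctx`) and the trivial solver body are illustrative only, not the actual Grin or AE API; the point is that the expensive allocation happens once, outside the per-nonce loop.

```python
# Illustrative sketch of a reusable solver interface; names and the
# do-nothing "solver" are made up for demonstration only.
class SolverCtx:
    """Stands in for GBs of device memory plus solver settings."""
    def __init__(self, edge_bits):
        self.edge_bits = edge_bits
        self.buffers = bytearray(1024)  # real solver: GBs of GPU memory

def create_solver_ctx(edge_bits):
    return SolverCtx(edge_bits)         # expensive: do this once

def run_solver(ctx, header, nonce):
    # Real solver: build the graph keyed on (header, nonce) and search
    # it for 42-cycles, reusing ctx.buffers on every call.
    return []                           # no solution found

def destroy_solver_ctx(ctx):
    pass

ctx = create_solver_ctx(29)             # allocate once...
for nonce in range(1000):               # ...then solve many nonces
    run_solver(ctx, b"header", nonce)
destroy_solver_ctx(ctx)
```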

**gunray**#3

Just to throw in some numbers for the 1080 Ti and AE’s way of mining.

This is with a G4540 CPU and some slow RAM; the times may depend heavily on these.

With 1 GPU, a single iteration (the time between solver launches) takes about 550 ms. This alone is a substantial inefficiency.

With 6 GPUs it’s 1250 ms, which averages out to about 210 ms per GPU, so only a bit faster than 2 GPUs running solo.

In both cases the GPUs do their work in about 220 ms.
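Treating the 550 ms single-GPU iteration minus the 220 ms solve time as per-nonce restart overhead, the numbers above work out as follows:

```python
# Checking the multi-GPU timing numbers quoted above.
iteration_ms_1gpu = 550   # wall time per solver launch, 1 GPU
iteration_ms_6gpu = 1250  # wall time per launch cycle, 6 GPUs
solve_ms = 220            # time a GPU actually spends solving

per_gpu_ms = iteration_ms_6gpu / 6          # effective time per GPU
overhead_ms = iteration_ms_1gpu - solve_ms  # restart cost per nonce

print(f"{per_gpu_ms:.0f} ms average per GPU")
print(f"{overhead_ms} ms restart overhead per nonce (1 GPU)")
```

So on a single GPU, roughly 330 of every 550 ms goes to restart overhead rather than solving.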

**Kryztoval**#7

This may be too much to ask for, but is there a call that returns a known solution, so I can test whether it is working properly?

Something like: “mean29 ‘tjnfbvjklnlkgbjfb’” has a solution.

Never mind, I saw it in your code repository.

**doge**#8

I understand the issue better now - I assumed that loading and unloading huge graphs was what was happening in memory. So a different nonce on the same graph leads to a different distribution of cycles in the graph?

**hanssv**#9

Yes, a different nonce means a completely different graph and thus a different distribution of cycles.
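Concretely, Cuckoo-style proofs derive every edge of the graph from a keyed hash of the header and nonce, so changing the nonce changes the whole edge set, not just some property of a fixed graph. A simplified illustration, using BLAKE2b from Python’s standard library as a stand-in for the solver’s actual siphash-based edge generation:

```python
import hashlib

def edge(header: bytes, nonce: int, i: int, n_nodes: int = 1 << 29):
    # Simplified stand-in for the real siphash-based edge generation:
    # endpoints of edge i, keyed on (header, nonce).
    h = hashlib.blake2b(header + nonce.to_bytes(8, "little"),
                        digest_size=16)
    h.update(i.to_bytes(8, "little"))
    d = h.digest()
    u = int.from_bytes(d[:8], "little") % n_nodes
    v = int.from_bytes(d[8:], "little") % n_nodes
    return u, v

# The same edge index yields unrelated endpoints under different nonces:
print(edge(b"header", 0, 7))
print(edge(b"header", 1, 7))
```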

**doge**#10

So what is meant by “GPU context” is some sort of in-GPU-memory data structure that can very efficiently change the direction of the edges in a graph? Instead of creating a new graph from scratch? I like it.

**aemin**#12

Can we add a nonce range, for example:

extra_args: “-r 1000000”

It seems to work without restarts.

**tromp**#13

The solver context is just a bunch of allocated GPU memory along with some solver configuration settings. It can be re-used for different graphs.

**tromp**#14

But the AE process invoking the solver doesn’t know how to parse its output.

Furthermore, the solver needs to update the header roughly every 3 seconds to produce new micro blocks.
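That refresh could be handled by a mining loop along these lines; this is a sketch, with `get_latest_header` and `run_solver` as placeholders for the real node RPC and solver call:

```python
import time

HEADER_REFRESH_S = 3.0  # AE produces micro blocks roughly every 3 seconds

def mine(get_latest_header, run_solver):
    # get_latest_header and run_solver are placeholders for the real
    # node interface and solver invocation.
    nonce = 0
    header = get_latest_header()
    fetched = time.monotonic()
    while True:
        if time.monotonic() - fetched >= HEADER_REFRESH_S:
            header = get_latest_header()  # pick up new micro blocks
            fetched = time.monotonic()
            nonce = 0
        sol = run_solver(header, nonce)
        if sol:
            return header, nonce, sol
        nonce += 1
```

The key point matches the thread: the loop swaps in a fresh header every few seconds without tearing down the solver state.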

**hanssv**#16

To make a computational example: if we assume you make 3 cuckoo attempts per second, then with the current difficulty you will on average get about 0.2 blocks per 24 hours.
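Spelled out, using only the numbers above (3 attempts per second, 0.2 blocks per day):

```python
# Implied odds per attempt, from the figures quoted above.
attempts_per_s = 3
seconds_per_day = 24 * 3600
blocks_per_day = 0.2

attempts_per_day = attempts_per_s * seconds_per_day
p_block = blocks_per_day / attempts_per_day  # implied per-attempt odds

print(f"{attempts_per_day} attempts/day")
print(f"~1 block per {1 / p_block:,.0f} attempts at current difficulty")
```

That is roughly one block per 1.3 million attempts, which is why a single rig averages a block only every five days.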

**mrbeery**#17

And each block is 473 AE (according to http://aeknow.org)? If that is correct, it seems extremely high.

**hanssv**#18

The reward is adjusted according to an inflation curve; it is discussed in Block reward and block time.

The highest reward is 473 for a while here in the beginning, if I remember correctly.

**mrbeery**#19

Thanks for the very fast reply! I know there’s high inflation the first year and that the difficulty will (hopefully) be increasing.

How does the 2080 Ti compare, has anyone managed to mine with that card as of now?

CUDA issue for RTX 20 series video card

**2nd_doge**#20

I am getting my RTX 2080 Ti on Monday and will test it out.

Generally speaking, I need to test the new 1.0.1 release. I will check whether it fixes the issue of all the hashpower going to waste: 8x GTX 1080 and no blocks mined “successfully” since launch + 2h.

## I would hold off on larger investments at the moment.

If you estimate the current Graphs per Second of the network, there are either huge farms or a huge number of people mining. Quite different from most mainnet launches, e.g. Ethereum, where miners made thousands in the first days of the network at an exchange rate that practically didn’t exist. Then again, orphaned blocks are rewarded on Ethereum, which is not the case with Aeternity. So it is winner-take-all.