Cuckoo Cycle no longer considered ASIC resistant

The appearance of the Bitmain Z9 mini miner for Equihash has made me realize that putting between 100 and 200 MB of SRAM on a chip is not as hard or expensive as it had seemed to me before. For Cuckoo Cycle this means that a single-chip ASIC costing under $50 can run cuckoo30 (and any smaller instance) much faster than a GPU. As you can read in the README at tromp/cuckoo on GitHub, I now claim Cuckoo Cycle to be ASIC friendly. Its main quality is that it is the simplest possible PoW, and that it functions as a proof of SRAM.
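To get a feel for the numbers, here is a rough back-of-envelope sketch in Python. Neither the edge count implied by "cuckoo30" nor the bits of per-edge state are taken from the post or from tromp/cuckoo; both are assumptions for illustration only.

```python
# Back-of-envelope estimate of the on-chip memory a lean Cuckoo Cycle solver
# might need. Both the edge count and the bits of state kept per edge are
# illustrative assumptions; the point is only that the working set scales
# linearly with the edge count and can plausibly land in the 100-200 MB
# range quoted above.

def lean_sram_mb(edge_bits: int, bits_per_edge: float) -> float:
    """Memory in MB for a graph with 2**edge_bits edges."""
    return (2 ** edge_bits) * bits_per_edge / 8 / 1e6

for edge_bits in (29, 30):          # assumed edge counts for a cuckoo30-sized graph
    for bits_per_edge in (1, 2):    # assumed per-edge state
        mb = lean_sram_mb(edge_bits, bits_per_edge)
        print(f"2^{edge_bits} edges @ {bits_per_edge} bit(s)/edge: ~{mb:.0f} MB")
```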

So that makes Cuckoo Cycle meaningless.
Aeternity chose Cuckoo Cycle because it is ASIC resistant, and now it has become ASIC friendly and is in fact the easiest PoW ever.
What is the value of Cuckoo Cycle now?
So are we going to replace Cuckoo Cycle now?

@aeternity-team
Could you answer my question about whether we are going to change mining algos?

Maybe we just embrace the ASICs? They are cheap, so as long as they don’t have significant economies of scale the network should retain its decentralization. This would discourage laptop/phone mining, but does it provide some security against botnets?

Simplest ≠ easiest, per se.

I’d like to hear more from @tromp on whether that claim can be made or not. 🙂

I have no idea what is meant by easy. How do you define/measure ease? When is one PoW easier than another?

That’s kind of my point.

I felt it necessary to ask you that directly, as “easiest PoW” has the connotation of being the “most unsafe” PoW, which is probably what underlies that statement (correct me if I am wrong, @yanbosmu).

More to the point: @tromp, does this have any other direct consequences for the robustness of Cuckoo other than it now being ASIC friendly?

The ASIC-friendly lean solver is significantly simpler than the mean solver, meaning it’s more likely to be optimal. In that sense Cuckoo Cycle remains quite robust.

What’s the Aeternity team’s view on Cuckoo Cycle being ASIC friendly? It was selected primarily as an ASIC-resistant protocol, so is it still considered fit for purpose? Are there any other candidates that the team has been researching? And how much would a change of protocol add to the schedule, if one is being considered?

Hello guys!

Cuckoo Cycle is still a great mining algo, and we have actually already mentioned that ASIC resistance is not a realistic goal. ASICs cannot be stopped. However, if ASICs are universally available and easy for users to manufacture, then a good level of decentralization can still be achieved. Have a look at this quote from a blog post that we recently shared:

An interesting feature of Cuckoo Cycle is that making ASICs for it is not cost-effective. Nonetheless, ASICs are nearly impossible to avoid, so at some point in time an ASIC for Cuckoo Cycle will become available. What is great, however, is that even when that happens, hardware manufacturers will not have an advantage over common users in creating ASICs. No need for sophisticated GPUs and designs, no company or companies guarding the technology.

Read the blog post here.

With that said, there are still no ASICs available for Cuckoo Cycle, and the best hardware to use for mining AE (on the Testnet for now) is the NVIDIA 1080 Ti or, alternatively, the 1070 Ti.

Best,
Vlad

I think the best solution is to make the minimum RAM required to run the mining program 1 GB.
Multiples of 1 GB exist everywhere, from phones to GPUs to laptops and desktops,
but try to put 1 GB of RAM on an ASIC & CRY… CRY HARD !!!

Which mining program? Cuckoo Cycle allows for two different approaches,
lean and mean, with different memory requirements. On GPUs mean is 4x faster but takes 11x more memory.
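
For a concrete sense of that trade-off, here is a tiny Python sketch. Only the 4x/11x ratios come from the reply above; the lean baseline numbers are placeholders for illustration, not benchmarks.

```python
# Relative lean-vs-mean trade-off, using only the ratios quoted above
# (mean: ~4x faster, ~11x more memory on GPUs). The lean baseline values
# are placeholders, not measured figures.

lean = {"speed": 1.0, "memory_gb": 0.5}           # assumed baseline
mean = {"speed": lean["speed"] * 4,               # ~4x faster
        "memory_gb": lean["memory_gb"] * 11}      # ~11x more memory

for name, s in (("lean", lean), ("mean", mean)):
    per_gb = s["speed"] / s["memory_gb"]
    print(f"{name}: {s['speed']:.1f}x speed, {s['memory_gb']:.1f} GB, "
          f"{per_gb:.2f}x speed per GB")
```

Under these placeholder numbers, mean wins on raw speed while lean wins on throughput per unit of memory, which is why the lean approach is the one that fits in on-chip SRAM, as discussed earlier in the thread.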
