The switch statement was resetting the values (which I think I was not using correctly), so switching to if/else makes it work with slight corrections.
But having a way to spot the error at runtime would be helpful.
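For reference, my guess at what went wrong (a minimal sketch, not the actual contract code): in a Sophia switch a bare name in a pattern is a new binding that shadows any outer variable of the same name, so it matches anything instead of comparing, while if/else compares against the existing value.

// Hypothetical illustration of the pitfall: 'limit' in the pattern is a
// fresh binding that matches any value - it does not compare against the
// outer 'limit' argument.
function with_switch(x : int, limit : int) : string =
  switch(x)
    limit => "always taken"

// The if/else version compares against the existing value as intended.
function with_if(x : int, limit : int) : string =
  if(x == limit) "equal" else "different"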
@hanssv.chain @dimitar.chain The title has been changed. Can you please take a look and let me know if there is a way to make it efficient, so I can run the hasher on top of the chain?
Yes, it can be made more efficient. But whether you can get it efficient enough, I don't know.
Two observations:
Shouldn't this be modulo something? Right now you are creating humongously large numbers… With 6 rounds the largest number contains more than 27000 digits; that can't be right (and it would never work in Solidity). See the sketch after the snippet below.
If you want something to be efficient you should recurse over the list rather than trying to use a list as an array - it means slightly adapting the algorithm, but it should be reasonably straightforward.
entrypoint t5_now() : int =
  let _t : int = 6936272626871035740815028148058841877090860312517423346335878088297448888663
  let t2_now : int = _t * _t
  let t4_now : int = t2_now * t2_now
  let t5_now : int = t4_now * _t
  t5_now
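For concreteness, a minimal sketch of the same computation with each product reduced modulo a prime - I am assuming the BN254 scalar-field prime that circomlib's MiMC works over, so substitute whatever field your circuit actually uses:

// Sketch only: same t^5, but reduced mod q at every step so the numbers
// never grow beyond the field size. The value of q is an assumption
// (BN254 scalar-field prime); use whatever modulus your circuit needs.
entrypoint t5_mod() : int =
  let q : int = 21888242871839275222246405745257275088548364400416034343698204186575808495617
  let t : int = 6936272626871035740815028148058841877090860312517423346335878088297448888663
  let t2 : int = (t * t) mod q
  let t4 : int = (t2 * t2) mod q
  (t4 * t) mod q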
@hanssv.chain Thank you for your earlier advice; mimcFeistel is working now.
For dry-run, even with the gas set very high ({ gas: 50000000, gasPrice: 20000000000 }), it cannot run twice, but for the hasher to work I need to run it twice for each input. If you run get_feistel_once() you get the result even with somewhat lower gas than above, like 30000000 (much lower might not work). But if you run get_feistel_twice() it always says Invocation failed: "Out of gas", even with gas set like { gas: 59000000, gasPrice: 20000000000 }, and setting gas to 60000000 gets me over the gas limit (so I cannot send an actual tx, as explained below). Should I cut the rounds (arrays in the list) to 1/4 or something similar?
Hi, sorry for the slow response; I was out over the weekend.
I still think the code can be optimized quite a bit, looking at loopResult, which is where the processing is done. For example, xl and xr always contain exactly i elements, and we always get(j, xl/xr), i.e. we only use the last element of the list. So there is no need to carry the whole list, which would save you both from appending at the end (<long_list> ++ [element]) and from List.get, which are both really inefficient.
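Roughly the shape I have in mind (a sketch with made-up names, not your actual loopResult; the round function itself is a placeholder):

// Recurse over the list of round constants and thread only the current
// (xl, xr) pair, instead of appending to lists and reading back with List.get.
// 'round' stands for one Feistel round and is defined elsewhere.
function loop_rounds(cs : list(int), xl : int, xr : int) : int * int =
  switch(cs)
    []        => (xl, xr)
    c :: rest =>
      let (new_xl, new_xr) = round(xl, xr, c)
      loop_rounds(rest, new_xl, new_xr)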
My initial comment still applies, though - I'm not sure the code can be made efficient enough, but let's see.
Your suggested optimization works, and now even running mimcFeistel twice gives the result with the same dry-run gas that from the start was only enough to run it once.
Can you please suggest something for MiMCSponge - any other way to make it more efficient? I need to run it at least 40 times in a single call because of the 20 levels of the Merkle tree, to make it more similar to our target.
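Roughly what a single call has to do (just a sketch with assumed names, ignoring the left/right ordering of siblings; mimc_pair stands for our pair hash, which itself runs the Feistel permutation twice per level):

// Sketch: folding a 20-element Merkle path into a root means one pair-hash
// per level; with two Feistel runs per pair-hash that is ~40 runs per call.
// mimc_pair is a placeholder and sibling ordering is ignored for brevity.
function root_from_path(leaf : int, path : list(int)) : int =
  switch(path)
    []            => leaf
    sibling :: up => root_from_path(mimc_pair(leaf, sibling), up)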
@marco.chain Also consider this: if making it more efficient turns out to be too hard and I need to run it in its current state at least 40 times in each call, can you suggest a way for me to send "gas × 40" above the limit you and @philipp.chain mentioned?
It might look like a big deal, but it could hardly be near 1 AE, right?
I'll quote myself here - I'm not sure it is possible to get it efficient enough to compute 40 hash values in a single transaction!? FATE is different from the EVM, and while it is more efficient in many ways, pure computation like this is not one of those ways.
If you need a particular hash function, one way forward could be to have it implemented in the VM - this would make it more efficient and cheaper. If you look at the changes already made for the next hard fork, we have added a similar hash function (the Poseidon hash), but, yeah, that requires a hard fork.
@hanssv.chain I understood it the first time you said it, but I need to try. Thank you.
I want to make the system live soon. From the available hash functions (of different families) - sha3, sha2 & blake2b - which one should I use (I can use addition and modulus instead of giving 2 inputs)?
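To be concrete, something like this is what I have in mind (just a sketch; the names and the choice of modulus are assumptions, and whether this is sound for the circuits is exactly what I am unsure about):

// Hypothetical pair hash built on a Sophia builtin: combine the two field
// elements by addition mod q, hash, and map the 32-byte result back into
// the field. Crypto.sha256 could be swapped for Crypto.blake2b or Crypto.sha3;
// q is assumed to be the circuit's field prime.
entrypoint hash_pair(l : int, r : int) : int =
  let q : int = 21888242871839275222246405745257275088548364400416034343698204186575808495617
  Bytes.to_int(Crypto.sha256((l + r) mod q)) mod q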
And also, can you please tell me when the next hard fork is scheduled? Even a guess, if the exact date is not available to you. I will then be prepared to use the correct hash-function family for the update (if required).
Isn't the problem with using a "normal" hash function that it will get very expensive to compute the ZK proofs? But it has been a long time since I did anything with this, so I might be wrong.
The next protocol upgrade (a.k.a. hard fork) is not scheduled as far as I can tell. There are a couple of good improvements already implemented - perhaps most notably micro-block packing - but I have not seen any discussion on actually upgrading the chain protocol - @dimitar.chain or @marco.chain maybe?
The next hard fork is not yet scheduled; we are still accumulating features for it. Bear in mind that our main focus currently is HyperChains, so our resources are mostly dedicated in that direction. Regarding the improvements in the next hard fork (it is called Ceres), there will be a blog/forum post soon.
Thank you so much, guys. Currently I am also discussing this with @hanssv.chain in a private thread (I am stuck on the blockchain side, with MiMC not fitting in the gas limit and sha256 not producing an efficient number of constraints to use with the current circuits). I am inviting @dimitar.chain & @marco.chain in case you would like to propose further possibilities.
The other day I was also thinking of running a custom node on the latest aeternity master to get it working with the Poseidon hash. I will report further if it works; also looking forward to seeing whether more possibilities exist.