Aeternity node 6.6.0

Hi all!

We are making great progress, and some big topics are coming to an end. Long story short: we have released Aeternity node v6.6.0. It is already out, and we have deployed it to both testnet and mainnet nodes. It is packed with new functionality, but the big topics are the DB refactoring and, of course, HyperChains.

The DB refactoring has been a long-running task. It primarily aims at data consistency, but it also sped up the sync process by a factor of a few. As a side effect, we can now implement a much more efficient GC of older states and lower the memory footprint; this will be included in a future release. As for the sync speed-up, I will leave it to @uwiger to post some metrics.

HyperChains is progressing at full speed. So far we had a big feature branch and were releasing from it. This is no more: all changes now go into the master branch, and from now on, running a HC node will be as simple as providing a different config. There are a lot of changes in the smart contract, and it is now much easier to modify it for your specific use case. We are also introducing some parent-chain interaction: at this point we use mainnet blocks as a source of entropy for the child chain. The whole UI has been revisited with a focus on UX. The HyperChains testnet is not yet upgraded; once it is, we will publish a dedicated forum post. If you are too eager to wait for all the changes, please consult the release notes.
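To illustrate the config-only setup, a HC node might be started with a YAML fragment along these lines. This is a hypothetical sketch only: the key names below are placeholders I made up for illustration, not the actual schema, so please check the release notes for the real configuration keys.

    # HYPOTHETICAL sketch - these key names are placeholders, not the real schema.
    chain:
        consensus:
            name: hyperchains          # placeholder: select the HC consensus
            parent_chain: mainnet      # placeholder: parent chain used as entropy source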

Other notable changes are dropping support for OTP 22 and Ubuntu 16.04, removing some unused CLI commands, and a number of bug fixes.

So go ahead and upgrade your node to take advantage of the new features.


Regarding sync speed, here is an image illustrating a speed test we ran earlier, using three equivalent nodes:

The graph shows the time it took to reach a height of ca. 230K while syncing against mainnet. The colors are as follows:

  • Purple: Direct DB access (new mrdb API)
  • Yellow: Using Mnesia access, short-circuiting Mnesia’s commit logic
  • Blue: The previous release without optimizations

Direct DB access is an optional feature for the time being. It’s enabled by adding the following to the config:

chain:
    db_direct_access: true

cool stuff, thanks for the update! :slight_smile:

and special thanks to @uwiger for fixing the rollback bug when using devmode :raised_hands:


We have identified a big blocker for the release: there is a memory leak, and memory consumption keeps growing. We have observed this on both mainnet and testnet. The issue has not shown up on the HC testnet, but that could be due to the lower number of transactions there. The 6.6.0 release was a really big one in terms of new functionality, and at this point we are not sure what causes the leak. We do have some strong candidates: some underlying C libraries had changed, and they could be the cause. We ran an analysis with Valgrind, and it did identify some issues there, but those would occur only on node start and certainly do not explain a leak that persists through the lifespan of the node. There is more: at this point we are exploring the option of replacing those libraries with their Rust equivalents.

So how does this concern you? First of all, if you need any of the functionality the release brings, please use it. Bear in mind that the bug is there, and you might have to restart your node every once in a while. We have downgraded our nodes to 6.5.2 and they work fine; if you plan a long-running node without interruptions, you should probably do the same. We have marked the release as a pre-release, as we don't want to promote it as the latest stable release. The same goes for the Docker Hub images: the 6.6.0 ones are there, but latest and latest-bundle point to the 6.5.2 release.
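For example, a deployment can pin an explicit release tag instead of relying on a moving tag like latest. This is just a sketch, assuming the aeternity/aeternity Docker Hub repository and a v6.5.2 tag; adjust the repository and tag to whatever you actually run.

    # docker-compose.yml sketch: pin an explicit release tag instead of :latest
    services:
        node:
            image: aeternity/aeternity:v6.5.2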

Once we have a solution for the problem, we will do a 6.6.1 release. I will keep you up to date :slight_smile:


great team effort! looking forward to seeing it solved :slight_smile: curious about the actual reason for the memory leak


Well done. Thank you