Since this is the last week of the current proposal, we will share our final progress report on Monday.
In the past month we’ve accomplished a lot and we are happy with our progress, especially given that Ulf and I did the coding and Dincho handled the DevOps. On this basis we would like to share with you our proposal for the next 2 months. We propose a longer timeframe so we can tackle some bigger tasks. What is more, Hans can help us out as well; at the moment he cannot dedicate more than 16 hours a week, but hopefully that will change for the better.
Below you can find our horizon of tasks for the next 2 months. Please note that we don’t commit to completing all of them within the timeframe; rather, this is the order in which we would tackle them.
So this is our proposal. It is up to the foundation to decide whether they would like to support it. cc @Lydia and @Tina
Update rocksdb to 6.4.6
opened 08:23AM - 13 Jan 20 UTC
closed 10:43AM - 12 Nov 20 UTC
kind/improvement
area/db
kind/technical-debt
The node is currently using rocksdb version `5.15.10`.
One issue with this upgrade is that rocksdb switched to using cmake, so the build process will need to be adapted accordingly.
### Useful Links
[rocksdb Changelog](https://github.com/facebook/rocksdb/blob/master/HISTORY.md)
The latest version of erlang-rocksdb supports Rocksdb 6.5.2. Our system currently uses erlang-rocksdb 0.24.0, which uses Rocksdb 5.15.10. A new release should be forthcoming, which also adapts the Erlang part to OTP 23. We want to move to a newer Rocksdb, not least because Rocksdb takes up a large part of the Aeternity build time. Also, many bug fixes and performance improvements have been introduced in later Rocksdb versions.
When syncing from backup, accept previous states in DB if they don’t differ
opened 06:45AM - 18 Jun 20 UTC
closed 09:11AM - 26 Nov 20 UTC
area/sync
After unzipping the DB from https://downloads.aeternity.io
and starting the node, there are errors during sync: `found_already_calculated_state`
Coming from: aec_chain_state:update_state_tree/4
If the states are identical, we should ignore this error.
This would improve things for the Middleware, avoiding unnecessary problems during database import.
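The proposed handling can be sketched as follows. This is a Python illustration only, not the actual Erlang code in `aec_chain_state`; the function and store names are hypothetical:

```python
def insert_state(db, block_hash, new_state_hash):
    """Insert a calculated state, tolerating identical duplicates.

    Sketch of the proposed behaviour: if a state for this block already
    exists (e.g. restored from a DB backup), only raise if it differs.
    """
    existing = db.get(block_hash)  # None if no state stored yet
    if existing is None:
        db[block_hash] = new_state_hash
        return "inserted"
    if existing == new_state_hash:
        # Previously 'found_already_calculated_state' was always an error;
        # an identical pre-existing state is harmless, so ignore it.
        return "already_present"
    raise ValueError("found_already_calculated_state: states differ")
```

The key point is that only the third branch (a genuinely different state) remains an error.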
Rest API endpoints version prefix
opened 07:15AM - 15 Jun 20 UTC
closed 08:43AM - 20 May 21 UTC
breaking/api
kind/improvement
Rest API endpoint version prefix should be bumped with the node major version. Currently the version prefix is hardcoded to `v2` in the URL, which is plainly wrong and currently useless.
The prefix should reflect major (backward incompatible) API changes, signalling such changes to users and machines.
An example use case is caching layers, e.g. block 1337 can be cached "forever" until its API structure changes for some reason, and changing the prefix will technically invalidate the cache.
This is regular technical debt, and should be fixed.
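The caching argument can be illustrated with a tiny sketch (Python, hypothetical names, purely for illustration): if the version prefix is part of the cache key, bumping the prefix makes every old entry unreachable, which is exactly the invalidation behaviour described above.

```python
cache = {}

def cached_get(prefix, path, fetch):
    """Cache API responses keyed by (version prefix, path).

    Bumping the prefix (e.g. v2 -> v3) means old entries are simply
    never looked up again, so stale v2 responses cannot be served.
    """
    key = (prefix, path)
    if key not in cache:
        cache[key] = fetch(f"/{prefix}{path}")
    return cache[key]

# A v2 response cached "forever" is not served for v3 requests:
v2 = cached_get("v2", "/key-blocks/height/1337", lambda url: {"url": url})
v3 = cached_get("v3", "/key-blocks/height/1337", lambda url: {"url": url})
```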
Dev mode
opened 07:21AM - 15 Jun 20 UTC
closed 02:27PM - 13 Apr 22 UTC
kind/feature
community
area/api
While this has been discussed multiple times, I feel it was lost somewhere in the forums.
Dev mode should enable application developers to run a local environment/network with "fast", non-resource-intensive mining. Transactions should be included as soon as they appear. No cuckoo/simulation.
Currently we burn developer laptops for no reason.
Support “dev mode” (fake) mining instead of running light cuckoo-cycle mining. A prototype of sorts exists in the test suites, where this is achieved through mocking.
Data and log locations should be configurable from other location
opened 03:29PM - 30 Jan 20 UTC
kind/improvement
area/core
status/approved
## Expected Behavior
Data and log locations should be configurable from the outside.
## Actual Behavior
Some data and log paths are relative, so they end up in the CWD.
A reasonable solution would be to ensure that if `setup:home()` is set to
an absolute path (and by extension also `setup:data_dir()` and `setup:log_dir()`), then all data and log files end up there. In particular, `lager` settings will need to be tweaked for this.
## Steps to Reproduce the Problem
1. See the [ae_plugin](https://github.com/aeternity/ae_plugin)
The bootstrap logic performs some path rewriting that should be unnecessary
in order to get the files in the right place (and some still don't end up there).
## Logs, error output, etc.
## Specifications
This would be helpful for plugin applications, and should not be too hard to implement.
Unhandled error in aec_chain_metrics_probe
opened 12:48PM - 23 Dec 19 UTC
closed 02:45PM - 23 Mar 21 UTC
kind/bug
need/input-requested
## Expected Behavior
Process not crashing
## Actual Behavior
Crash
## Steps to Reproduce the Problem
Probably inconsistent database. Restart does not help.
## Logs, error output, etc.
```
2019-12-23 12:17:17.022 [error] emulator Error in process <0.5158.196> on node aeternity@localhost with exit value:
{{try_clause,{error,not_rooted}},[{aec_chain_metrics_probe,total_difficulty,0,[{file,"/home/builder/aeternity/apps/aecore/src/aec_chain_metrics_probe.erl"},{line,130}]},{aec_chain_metrics_probe,sample_,2,[{file,"/home/builder/aeternity/apps/aecore/src/aec_chain_metrics_probe.erl"},{line,125}]},{aec_chain_metrics_probe,'-probe_sample/1-fun-0-',1,[{file,"/home/builder/aeternity/apps/aecore/src/aec_chain_metrics_probe.erl"},{line,79}]}]}
```
Might be related as well:
```
2019-12-23 11:50:27.872 [error] <0.32556.195> CRASH REPORT Process <0.32556.195> with 0 neighbours exited with reason: {{{badmatch,{error,not_rooted}},[{aec_peer_connection,local_ping_obj,1,[{file,"/home/builder/aeternity/apps/aecore/src/aec_peer_connection.erl"},{line,738}]},{aec_peer_connection,prepare_request_data,3,[{file,"/home/builder/aeternity/apps/aecore/src/aec_peer_connection.erl"},{line,592}]},{aec_peer_connection,handle_request,4,[{file,"/home/builder/aeternity/apps/aecore/src/aec_peer_connection.erl"},{line,587}]},{gen_server,try_handle_call,4,[{file,"gen_server.erl"},{line,636}]},{gen_server,handle_msg,...},...]},...} in gen_server:call/3 line 214
```
## Specifications
- Virtualization: AWS
- Hardware specs: t3.large
- OS: Ubuntu 16.04
- Node Version: 5.3.0
Probably a rare error, but it should be easy to fix. However, the origin of the error is unknown, so testing may be a bit tricky, and addressing the root cause even more so. What we can do to begin with is make the metrics probe more robust.
More flexible/file-less configuration
opened 07:33AM - 15 Jun 20 UTC
community
kind/improvement
Currently one needs to provision stable peer keys in a file, for example, as well as the genesis accounts.
It would be much more convenient for containerized environments to be able to set these in the configuration file, or even better via [OS env vars](https://github.com/aeternity/aeternity/issues/3298).
This would simplify testing and deployment of closed systems, and should be easy to implement (testing may take a little bit more time).
Allow configuration by OS environment variables
opened 07:04AM - 15 Jun 20 UTC
closed 01:44PM - 27 Sep 21 UTC
community
kind/improvement
Currently the node can be configured via command line parameters and a configuration file, where the configuration file itself can be changed by a command line parameter or the `AETERNITY_CONFIG` OS environment variable.
see https://github.com/aeternity/aeternity/blob/master/docs/configuration.md#user-provided-configuration
In the world of containerisation it is much more "natural" to use OS environment variables to fully configure a given piece of software; that would ease deployment in such environments.
e.g. `AETERNITY_NETWORK_ID`, TBD the exact structure/format
This would simplify test setups and development environments. The best way to address it may be to refactor some of the legacy code that checks configuration data; the methods of handling config data have evolved over time, and the code reflects this.
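One possible mapping, purely a sketch since the exact variable structure/format is still TBD per the issue, would translate `AETERNITY_`-prefixed variables into nested configuration keys (the `__` separator convention here is an assumption, not the decided format):

```python
def config_from_env(environ, prefix="AETERNITY_", sep="__"):
    """Build a nested config dict from OS environment variables.

    Hypothetical convention: AETERNITY_CHAIN__PERSIST=true becomes
    {"chain": {"persist": "true"}}. The real format is TBD in the issue.
    """
    config = {}
    for name, value in environ.items():
        if not name.startswith(prefix):
            continue
        path = name[len(prefix):].lower().split(sep)
        node = config
        for key in path[:-1]:
            node = node.setdefault(key, {})
        node[path[-1]] = value
    return config

# Example environment, using the AETERNITY_NETWORK_ID name from the issue:
env = {"AETERNITY_NETWORK_ID": "ae_mainnet", "AETERNITY_CHAIN__PERSIST": "true"}
```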
aehttp_sc_SUITE failure: timeout waiting for channel open messages
opened 05:57PM - 25 Aug 20 UTC
kind/bug
area/tests
The `aehttp_sc_SUITE:sc_ws_min_depth_is_modifiable/1` test case fails with a timeout, at least in some runs.
```
=== Reason: {timeout,{messages,[{<0.9168.0>,websocket_event,channel,
update,
#{<<"jsonrpc">> => <<"2.0">>,
<<"method">> => <<"channels.update">>,
<<"params">> =>
#{<<"channel_id">> =>
<<"ch_21woyLUNVapgSZrHrbdTKohDG5Yad9xNw4wFsrLzf8sKFspZim">>,
<<"data">> =>
#{<<"state">> =>
<<"tx_+QENCwH4hLhAdU5TGcrejxQkGW36Bb7mfyY/N6FwCG5qJxSDCpUmcIv0ie2oy0TzPWq9TpTNeby7zYVcnO/hjIUY1S+KiDTrBLhAn0NBWw3RzA4FS9tujscTinOUTK4jm5RuG7eRMyrMycUVRl4olmgxkDHIPLi3YMMzJ+sdHgK3wHvkxUETOYCDA7iD+IEyAaEBnvLdFTEfyWE16WLId900E+O0wsvSmKqkynYPhUodScSGP6olImAAoQG5u4uTbiAM4+qz6hnyjAQZJnfxP/hQFvbjGc7kagT/PoYkYTnKgAACCgCGEAZ510gAwKCBwESIXtJYQs/2KrdjFqVbmEKLcMJyZ1sylrOYFH8zrQJifO77">>}},
<<"version">> => 1}},
{<0.9163.0>,websocket_event,channel,
update,
#{<<"jsonrpc">> => <<"2.0">>,
<<"method">> => <<"channels.update">>,
<<"params">> =>
#{<<"channel_id">> =>
<<"ch_21woyLUNVapgSZrHrbdTKohDG5Yad9xNw4wFsrLzf8sKFspZim">>,
<<"data">> =>
#{<<"state">> =>
<<"tx_+QENCwH4hLhAdU5TGcrejxQkGW36Bb7mfyY/N6FwCG5qJxSDCpUmcIv0ie2oy0TzPWq9TpTNeby7zYVcnO/hjIUY1S+KiDTrBLhAn0NBWw3RzA4FS9tujscTinOUTK4jm5RuG7eRMyrMycUVRl4olmgxkDHIPLi3YMMzJ+sdHgK3wHvkxUETOYCDA7iD+IEyAaEBnvLdFTEfyWE16WLId900E+O0wsvSmKqkynYPhUodScSGP6olImAAoQG5u4uTbiAM4+qz6hnyjAQZJnfxP/hQFvbjGc7kagT/PoYkYTnKgAACCgCGEAZ510gAwKCBwESIXtJYQs/2KrdjFqVbmEKLcMJyZ1sylrOYFH8zrQJifO77">>}},
<<"version">> => 1}}]}}
in function aehttp_ws_test_utils:wait_for_msg/5 (/home/builder/aeternity/apps/aehttp/test/aehttp_ws_test_utils.erl, line 324)
in call from aehttp_sc_SUITE:wait_for_channel_event_/3 (/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl, line 4294)
in call from aehttp_sc_SUITE:wait_for_channel_event_match/4 (/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl, line 4268)
in call from aehttp_sc_SUITE:channel_send_chan_open_infos/3 (/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl, line 894)
in call from aehttp_sc_SUITE:finish_sc_ws_open/2 (/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl, line 842)
in call from aehttp_sc_SUITE:sc_ws_open_/4 (/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl, line 775)
in call from aehttp_sc_SUITE:sc_ws_min_depth_is_modifiable/1 (/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl, line 3012)
in call from test_server:ts_tc/3 (test_server.erl, line 1755)
```
From some log analysis, it seems as if the problem is that the channel is opened with `minimum_depth => 0`. This confuses the generic channel setup code, which has a finishing phase where, optionally, blocks are mined to ensure that the `create_tx` is actually included in a block, and minimum depth is reached. This is triggered for the test case in question, but the tx has already been included, and since `minimum_depth == 0`, minimum depth has also been reached and the associated info reports already delivered.
In a failing run, the following could be seen from the test case output:
```
*** User 2020-08-25 07:23:33.929 ***
aec_conductor:start_mining(#{}) (aeternity_dev1@localhost) -> ok
*** User 2020-08-25 07:23:33.973 ***
aec_conductor:stop_mining() (aeternity_dev1@localhost) -> ok
*** CT Error Notification 2020-08-25 07:23:45.980 ***
aehttp_ws_test_utils:wait_for_msg failed on line 324
Reason: timeout
```
From the stacktrace above, we can see that the test code is waiting for an `open` info msg (line 842).
But scrolling up, we find those messages were already delivered, although the test case code wasn't ready for them at the time.
```
*** User 2020-08-25 07:23:33.908 ***
No test registered for this event (Msg = #{<<"jsonrpc">> => <<"2.0">>,
<<"method">> => <<"channels.info">>,
<<"params">> =>
#{<<"channel_id">> =>
<<"ch_21woyLUNVapgSZrHrbdTKohDG5Yad9xNw4wFsrLzf8sKFspZim">>,
<<"data">> =>
#{<<"event">> =>
<<"open">>}},
<<"version">> => 1})
*** User 2020-08-25 07:23:33.909 ***
[initiator] Received msg #{<<"jsonrpc">> => <<"2.0">>,
<<"method">> => <<"channels.info">>,
<<"params">> =>
#{<<"channel_id">> =>
<<"ch_21woyLUNVapgSZrHrbdTKohDG5Yad9xNw4wFsrLzf8sKFspZim">>,
<<"data">> => #{<<"event">> => <<"open">>}},
<<"version">> => 1}
```
This bug was detected during the maintenance project and causes intermittent failures in CI. It should be fixed; it should not take more than 1-2 man-days.
The following issues are broken-down tasks from the already approved issue #3194 (Relax restriction that channel cannot be used before min_depth).
State Channels: Inactivity timer in chain watcher
opened 03:53PM - 01 Sep 20 UTC
area/statechannels
High-level issue: #3194
[High-level discussion](https://github.com/aeternity/aeternity/wiki/Making-the-State-Channel-FSM-responsive-before-minimum-depth-confirmation)
The idea is to be able to order a timer which triggers if a given event (e.g. any, or specific, channel change, for a given channel ID) doesn't occur within a given number of key-/microblocks. As a first step, this type of event could be requested by the client (perhaps also other types of chain watcher events).
State Channels: Client can ask FSM to quit waiting for minimum depth
opened 03:47PM - 01 Sep 20 UTC
closed 02:19PM - 13 Apr 22 UTC
area/statechannels
High-level issue: #3194
[Longer discussion](https://github.com/aeternity/aeternity/wiki/Making-the-State-Channel-FSM-responsive-before-minimum-depth-confirmation)
The issue involves not just supporting a client request to proceed without waiting for minimum depth, but also to eventually receive the minimum depth confirmation event and report it.
State Channels: modifiable minimum_depth default
opened 03:37PM - 01 Sep 20 UTC
area/statechannels
The State Channel FSM keeps a default value for minimum depth confirmation.
It would be practical to allow the client to modify this default.
See [the discussion here](https://github.com/aeternity/aeternity/wiki/Making-the-State-Channel-FSM-responsive-before-minimum-depth-confirmation). High-level issue: #3194
FATE cannot get blockhash of current generation
opened 07:49AM - 11 Nov 19 UTC
closed 08:48AM - 08 Oct 20 UTC
breaking/consensus
kind/improvement
area/fate
## Expected Behavior
For Sophia contract:
```
entrypoint my_hash() =
  Chain.block_hash(Chain.block_height)
```
One expects a hash back, but instead `None` is returned.
The cause is in:
https://github.com/aeternity/aeternity/blob/afef1aa92a1cd6a75f4037d8e2be540ed4114612/apps/aefate/src/aefa_fate_op.erl#L858
There is an off-by-one error: `>=` should be `>`. This is consensus breaking and has to be conditional on the next hard fork. If we change that, there is also a request to relax the 256-blocks-in-the-past limit to something that reflects 24 hours or so.
Some discussion is needed on how to deal with contracts that change semantics. Clearly, contracts created after the hard fork will have to respect the new semantics, but should we support different call outcomes for contracts created before the hard fork but called after it? Do we need a general policy for this, or is this a case-by-case discussion?
This is an outright bug that should be fixed.
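The off-by-one can be shown in isolation. This is a Python sketch of the upper-bound check only, with hypothetical names, not the FATE implementation in `aefa_fate_op`:

```python
def blockhash_available(height, current_height, fixed=True):
    """Upper-bound check for Chain.block_hash (sketch only).

    The reported bug: using '>=' rejects the current generation's height,
    so Chain.block_hash(Chain.block_height) returns None. With '>' the
    current height is accepted, matching the expected behaviour.
    """
    if fixed:
        too_new = height > current_height
    else:
        too_new = height >= current_height  # off-by-one as in the issue
    return not too_new
```

With the buggy comparison, asking for the hash at the current height is rejected; with the fix, only heights strictly above the current one are.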
AENS: Review and simplify pointers
opened 07:50AM - 09 Jun 20 UTC
closed 09:56AM - 29 Jun 21 UTC
area/names
breaking/consensus
- [ ] Check what is really enforced right now
- [ ] Adapt to something more strict (if necessary): restrict the number of pointers, the order(?), etc.
- [ ] Update tests to cover this
Currently name pointers allow the user too much freedom to be creative. This should be revisited.
Make inner transaction of PayingForTx non-valid
opened 09:37AM - 03 Dec 19 UTC
closed 10:33AM - 05 Jan 21 UTC
breaking/consensus
kind/improvement
area/core
The initial implementation has a fully fledged, normally signed inner transaction. This means it is possible to unwrap the PayingForTx and post the inner transaction on-chain. Since the inner transaction isn't intended to be used like this, it would be nice to disallow it.
The idea is to change what is signed for the inner transaction; the obvious approach is to drop the network id from the signing schema, but perhaps there are other ways as well.
This is a bug in the PayingForTx that would render it useless. The attack vector is described in the GitHub issue. This must be done before the Iris release.
AENS: Increase the name expiry time
opened 07:43AM - 09 Jun 20 UTC
closed 12:47PM - 30 Nov 20 UTC
area/names
breaking/consensus
governance
- [x] Make a vote on how long it should be (3m, 6m, 1y?)
- [ ] Change the constant (and make it height dependent!)
- [ ] Adapt the tests
This is something that has come up a few times in the forum already: name expiration was never decided by the public. The idea here is to allow the community to vote on when names should expire.
AENS: Fix bug in AENS.update signature check
opened 07:45AM - 09 Jun 20 UTC
closed 06:21AM - 21 Oct 20 UTC
kind/bug
area/names
breaking/consensus
area/fate
- [x] Fix it. There is a copy-paste error, so the signature check is over the wrong thing.
- [x] Improve tests to cover this
This is a bug; it must be fixed.
Deprecate AEVM properly for Iris
opened 09:38AM - 19 Feb 20 UTC
closed 09:54AM - 29 Jun 21 UTC
kind/cleanup
breaking/consensus
kind/improvement
kind/technical-debt
This would mainly be a cleanup task: moving things, possibly changing the test setup so we keep running AEVM tests for old protocol versions, etc.
This one is technical debt; it should be resolved ASAP.
Dincho will provide his DevOps skills, so he is needed across all the tasks, really. When he is not overloaded with work, he will be cleaning up the issues assigned to him:
Sync: cleanup dead peers
opened 12:11PM - 09 Jun 20 UTC
closed 09:58AM - 28 Sep 21 UTC
kind/bug
area/sync
This has been discussed over and over again. There are still quite a lot of dead peers being sent around.
Related issues:
#3114
This bug has been around for a long time now. This would be my priority task. There have been a few attempts to expose the bug; so far all of them exposed some issues but didn’t solve it. It is a black-box issue, and we cannot know how much time and effort the fix will require. It might take two weeks or over a month, and exactly how long it takes will determine my availability for the rest of the tasks. A few more issues might be created from this one. I will need Dincho’s help here as well.
HTTP Websockets upgrade regression
opened 11:19AM - 27 Jan 20 UTC
kind/bug
area/statechannels
area/api
status/approved
## Expected Behavior
```
$ curl localhost:3014/channel -v
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3014 (#0)
> GET /channel HTTP/1.1
> Host: localhost:3014
> User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
> Accept: */*
> Referer:
>
< HTTP/1.1 426 Upgrade Required
< connection: upgrade
< content-length: 0
< date: Mon, 27 Jan 2020 11:08:39 GMT
< server: Cowboy
< upgrade: websocket
<
* Connection #0 to host localhost left intact
* Closing connection 0
```
## Actual Behavior
```
$ curl localhost:3014/channel -v
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3014 (#0)
> GET /channel HTTP/1.1
> Host: localhost:3014
> User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
> Accept: */*
> Referer:
>
< HTTP/1.1 500 Internal Server Error
< content-length: 0
<
* Connection #0 to host localhost left intact
* Closing connection 0
```
## Steps to Reproduce the Problem
Install any 5.* version of the node (except rc1) and use the above commands.
## Logs, error output, etc.
*`aeternity.yaml` configuration file (formerly named `epoch.yaml`)*
```
websocket:
channel:
listen_address: 0.0.0.0
```
## Specifications
- Node Version: 5.*
This works as expected in node versions 4.* and v5.0.0-rc.1; it first appears in v5.0.0-rc.2.
This bug breaks some of the tools used by SRE and should be low-hanging fruit.
Out of sync /status endpoint data
opened 04:34PM - 18 Dec 19 UTC
closed 09:54AM - 05 Jan 21 UTC
kind/bug
## Expected Behavior
```
$ curl -s http://35.166.231.86:3013/v2/status | jq
{
"difficulty": 21527826519820,
"genesis_key_block_hash": "kh_pbtwgLrNu23k9PA6XCZnUbtsvEFeQGgavY4FS2do3QP8kcp2z",
"listening": true,
"network_id": "ae_mainnet",
"node_revision": "c6c12b039971ebe9a367d76826c6acbbd966fa0d",
"node_version": "5.2.0",
"peer_connections": {
"inbound": 110,
"outbound": 20
},
"peer_count": 25238,
"peer_pubkey": "pp_21DNLkjdBuoN7EajkK3ePfRMHbyMkhcuW5rJYBQsXNPDtu3v9n",
"pending_transactions_count": 104,
"protocols": [
{
"effective_at_height": 161150,
"version": 4
},
{
"effective_at_height": 90800,
"version": 3
},
{
"effective_at_height": 47800,
"version": 2
},
{
"effective_at_height": 0,
"version": 1
}
],
"solutions": 0,
"sync_progress": 100,
"syncing": false,
"top_block_height": 184607,
"top_key_block_hash": "kh_2hTa446BBHBYKoodtnQxJmXzmophGjmF5P8gtwdeUx8Ji3aZvN"
}
```
## Actual Behavior
```
$ curl -s http://35.166.231.86:3013/v2/status | jq
{
"difficulty": 21527826519820,
"genesis_key_block_hash": "kh_pbtwgLrNu23k9PA6XCZnUbtsvEFeQGgavY4FS2do3QP8kcp2z",
"listening": true,
"network_id": "ae_mainnet",
"node_revision": "c6c12b039971ebe9a367d76826c6acbbd966fa0d",
"node_version": "5.2.0",
"peer_connections": {
"inbound": 110,
"outbound": 20
},
"peer_count": 25238,
"peer_pubkey": "pp_21DNLkjdBuoN7EajkK3ePfRMHbyMkhcuW5rJYBQsXNPDtu3v9n",
"pending_transactions_count": 104,
"protocols": [
{
"effective_at_height": 161150,
"version": 4
},
{
"effective_at_height": 90800,
"version": 3
},
{
"effective_at_height": 47800,
"version": 2
},
{
"effective_at_height": 0,
"version": 1
}
],
"solutions": 0,
"sync_progress": 100,
"syncing": true,
"top_block_height": 184607,
"top_key_block_hash": "kh_2hTa446BBHBYKoodtnQxJmXzmophGjmF5P8gtwdeUx8Ji3aZvN"
}
```
Note that `sync_progress` and `syncing` fields are out of sync.
## Steps to Reproduce the Problem
Not reliably; it looks like the sync processes are sometimes stuck.
## Logs, error output, etc.
Nope
## Specifications
See the status output above. Probably not hardware related.
This is a curious bug that points to a race condition in the code. The result is a confusing API that is hard to reason about.
aec_chain_state infinity restarts and crashes
opened 09:54AM - 11 Dec 19 UTC
closed 11:27AM - 18 Dec 20 UTC
## Expected Behavior
Recover or crash the node
## Actual Behavior
The node starts spitting an exponential number of errors (e.g. 10k/4h) without trying to recover. If recovery is not possible, it should stop, as it is not operational at all.
## Steps to Reproduce the Problem
Unknown
## Logs, error output, etc.
```
2019-12-09 12:44:26.482 [error] <0.12513.30> Supervisor aec_conductor_sup had child aec_conductor started with aec_conductor:start_link() at <0.12610.30> exit with reason {aborted,{{found_already_calculated_state,<<104,122,94,58,45,227,152,23,188,69,0,106,35,191,113,115,133,113,37,25,201,170,116,99,65,78,0,151,68,239,164,19>>},[{aec_chain_state,update_state_tree,4,[{file,"/home/builder/aeternity/apps/aecore/src/aec_chain_state.erl"},{line,702}]},{aec_chain_state,update_state_tree,2,[{file,"/home/builder/aeternity/apps/aecore/src/aec_chain_state.erl"},{line,693}]},{aec_chain_state,internal_insert_transaction,3,[{file,"/home/builder/aeternity/apps/aecore/src/a..."},...]},...]}} in context child_terminated
```
## Specifications
- Virtualization: AWS
- Hardware specs: t3.large
- OS: Ubuntu 16.04.5
- Node Version: 5.2.0
- Instance ID: i-0dc2fe355c1e42ab7
The error recovery mechanism seems to be broken; this is not marked as a bug, but it clearly is one. It could result in filling one’s HDD with garbage logs.
meta_tx’s TTL
opened 11:14AM - 03 Dec 19 UTC
closed 01:20PM - 02 Dec 20 UTC
kind/bug
breaking/consensus
area/generalized_accounts
`aetx:ttl/1` specializes the inner tx and calls its callback's `ttl/1`. In the case of a channel co-authenticated transaction where both participants are GAs, there would be two embedded meta transactions, and calling `aetx:ttl/1` would return the outermost meta_tx's TTL.
Instead, `aetx:ttl/1` should return the innermost transaction's `ttl/1`.
This bug could produce unexpected results when using generalised accounts: the TTL used is that of the transaction authenticating the inner transaction, but it should be the other way around.
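The intended behaviour can be sketched like this (a Python illustration with hypothetical names; the real code lives in the Erlang `aetx` module):

```python
class Tx:
    """Minimal stand-in for a transaction; meta transactions wrap an inner Tx."""
    def __init__(self, ttl, inner=None):
        self.ttl_value = ttl
        self.inner = inner  # another Tx for meta transactions, else None

def outermost_ttl(tx):
    """Current (buggy) behaviour: the outermost meta_tx's TTL wins."""
    return tx.ttl_value

def innermost_ttl(tx):
    """Proposed behaviour: unwrap nested meta transactions and use the
    TTL of the innermost transaction."""
    while tx.inner is not None:
        tx = tx.inner
    return tx.ttl_value

# Two nested GA meta transactions around an inner channel tx:
inner = Tx(ttl=100)
meta1 = Tx(ttl=0, inner=inner)
meta2 = Tx(ttl=0, inner=meta1)
```

With two GA participants, the fix means the inner transaction's TTL (100 here) governs, rather than whichever meta wrapper happens to be outermost.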
Test suite bugs
aest_channels_SUITE ==> test_simple_different_nodes_channel: FAILED badmatch
opened 12:50PM - 26 Nov 19 UTC
kind/bug
area/statechannels
area/tests
## Expected Behavior
Tests pass.
## Actual Behavior
```
%%% aest_channels_SUITE ==> test_simple_different_nodes_channel: FAILED
%%% aest_channels_SUITE ==> {{badmatch,{ok,#{<<"info">> => <<"close_mutual">>,
<<"tx">> =>
<<"tx_+OkLAfiEuEBlz0bOgvRRn2S4RBOecxdlkIUQFjzgA8MBVYURh8aXIo0siqCiUmslCJkGDq1DrxRf5w79kPXtPhpSPFAmlKcDuECsGIAKMPPE3i0gLwhTE91HQZzILSU+IGPpR9kWNIgncSBH0tPmRpnpUKU8SvzXondpx57nLVU91ORxqz5YNQUCuF/4XTUBoQbFNveodt+B570IC8UcdMjjekwZxecIXKZESd7ONTxTy6EBZxxVRkZJRXWytJT2UWghcQZj2EiTzdLSNgN6VMM+7oSGJFyRsrf+hiRVlY8MAgCGEjCc5UAAA27NY1I=">>,
<<"type">> => <<"channel_close_mutual_tx">>}}},
[{aest_api,sc_close_mutual,2,
[{file,"/home/circleci/aeternity/_build/system_test+test/extras/system_test/common/helpers/aest_api.erl"},
{line,176}]},
{aest_channels_SUITE,simple_channel_test,4,
[{file,"/home/circleci/aeternity/_build/system_test+test/extras/system_test/common/aest_channels_SUITE.erl"},
{line,205}]},
{test_server,ts_tc,3,[{file,"test_server.erl"},{line,1755}]},
{test_server,run_test_case_eval1,6,[{file,"test_server.erl"},{line,1262}]},
{test_server,run_test_case_eval,9,[{file,"test_server.erl"},{line,1194}]}]}
.
```
## Steps to Reproduce the Problem
None atm.
## Logs, error output, etc.
https://circleci.com/gh/aeternity/aeternity/100969
aehttp_sc_SUITE ==> plain.with_open_channel.sc_ws_update_abort: FAILED timeout
opened 09:23AM - 31 Oct 19 UTC
kind/bug
area/tests
## Expected Behavior
Tests pass.
## Actual Behavior
```
%%% aehttp_sc_SUITE ==> plain.with_open_channel.sc_ws_update_abort: FAILED
%%% aehttp_sc_SUITE ==> {{timeout,{messages,[{<0.14644.0>,websocket_event,channel,conflict,
#{<<"jsonrpc">> => <<"2.0">>,
<<"method">> => <<"channels.conflict">>,
<<"params">> =>
#{<<"channel_id">> =>
<<"ch_2mYFVhAbGMgwQPyPuHS9prC7WmKLMFyy89cqqtRR6CZ9kjcYDA">>,
<<"data">> =>
#{<<"channel_id">> =>
<<"ch_2mYFVhAbGMgwQPyPuHS9prC7WmKLMFyy89cqqtRR6CZ9kjcYDA">>,
<<"error_code">> => 2,
<<"error_msg">> => <<"conflict">>,
<<"round">> => 5}},
<<"version">> => 1}}]}},
[{aehttp_ws_test_utils,wait_for_msg,5,
[{file,"/home/builder/aeternity/apps/aehttp/test/aehttp_ws_test_utils.erl"},
{line,316}]},
{aehttp_sc_SUITE,wait_for_channel_event_,3,
[{file,"/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl"},
{line,3676}]},
{aehttp_sc_SUITE,wait_for_channel_event_match,4,
[{file,"/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl"},
{line,3641}]},
{aehttp_sc_SUITE,channel_abort_sign_tx,4,
[{file,"/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl"},
{line,509}]},
{aehttp_sc_SUITE,sc_ws_update_abort,1,
[{file,"/home/builder/aeternity/apps/aehttp/test/aehttp_sc_SUITE.erl"},
{line,3096}]},
{test_server,ts_tc,3,[{file,"test_server.erl"},{line,1755}]},
{test_server,run_test_case_eval1,6,[{file,"test_server.erl"},{line,1262}]},
{test_server,run_test_case_eval,9,[{file,"test_server.erl"},{line,1194}]}]}
```
## Steps to Reproduce the Problem
Can't be reliably reproduced yet.
## Logs, error output, etc.
https://circleci.com/gh/aeternity/aeternity/95397#tests/containers/2
Those are bugs in the test setup.
Drop “native” windows support
opened 07:10AM - 15 Jun 20 UTC
closed 02:16PM - 29 Apr 21 UTC
kind/technical-debt
It's unclear why anyone would ever run a node on Windows; we should find out if there is even a single user.
We haven't released a Windows package since 5.4.1 because of broken builds, and there have been no complaints yet.
Bring the discussion to the forum on whether the community needs the Windows build; if not, deprecate it.