[Solved] Some tx died in pending?

It seems that some normal txs died in the pending pool and could not be packed into a microblock.

Today it was reported that a tx from gate.io to a user was visible on chain at height 49137, but later the tx disappeared.

And there is no microblock at height 49137 or 49138:
https://www.aeknow.org/block/height/49137

Could it have happened like this: the tx was mined into a microblock → the microblock was forked away → the tx came back to the pending pool and died there → no transaction on chain?

Yes, this is called a micro fork, and those are expected to happen quite often; they are a well-defined part of the BitcoinNG protocol. Microblocks are produced quickly, and since network propagation takes some time, it is possible that the next leader was not yet aware of the latest microblock(s) when mining the next keyblock. In that case the new keyblock is built on top of an earlier block, and the microblock(s) after it are excluded in a micro fork. Since by some rough estimates it takes a microblock ~10 s to propagate through the network, a couple of microblocks can end up in a micro fork.
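As a rough back-of-the-envelope check (the ~3 s microblock interval below is an assumption based on aeternity's usual configuration, not a figure stated in this thread):

```python
# Rough estimate: how many microblocks can end up in a micro fork?
# ASSUMPTION: microblocks are minted roughly every 3 seconds; the ~10 s
# propagation time is the rough estimate quoted above.
propagation_s = 10          # time for a microblock to reach the whole network
microblock_interval_s = 3   # assumed microblock production interval

at_risk = propagation_s / microblock_interval_s
print(f"~{at_risk:.0f} microblocks may not have reached the next leader yet")
# -> ~3, i.e. "a couple of microblocks" can be orphaned in a micro fork
```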

When a micro fork happens, the txs in the orphaned microblocks are rolled back and returned to the mempool. Once there, a transaction is expected to be mined again soon, usually within the next few microblocks, but there is no guarantee of that.
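A minimal sketch of that rollback behavior (toy data structures for illustration only, not the actual node code):

```python
# Toy model of a BitcoinNG micro fork: the new leader mines its keyblock
# on top of an earlier block, unaware of the latest microblocks, so the
# orphaned microblocks' transactions are rolled back into the mempool.
# All names here are hypothetical.

class Chain:
    def __init__(self):
        self.blocks = []          # accepted keyblocks/microblocks, in order
        self.mempool = set()      # pending transactions

    def add_microblock(self, txs):
        self.mempool -= set(txs)
        self.blocks.append(("micro", tuple(txs)))

    def keyblock_on(self, fork_point):
        """New leader mines a keyblock on `fork_point`; any microblocks
        after that point are orphaned and their txs return to the mempool."""
        orphaned = self.blocks[fork_point + 1:]
        self.blocks = self.blocks[:fork_point + 1]
        for kind, txs in orphaned:
            if kind == "micro":
                self.mempool |= set(txs)   # rolled back, pending again
        self.blocks.append(("key", ()))

chain = Chain()
chain.blocks.append(("key", ()))           # keyblock at some height
chain.mempool = {"tx_gate_io"}
chain.add_microblock(["tx_gate_io"])       # tx visible on chain...
chain.keyblock_on(0)                       # ...but the next leader missed it
assert "tx_gate_io" in chain.mempool       # back in the pending pool
```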


Thanks!

Is it possible to improve the efficiency of mining microblocks? I asked a similar question in the last hangout, but I was driving at the time and no video was recorded.

And could raising the tx fee speed up the tx?

Regards,
Liu

Since “what one includes in a microblock” as a function of “what one has in one’s mempool” is not specified by the protocol itself, I am afraid there is not much that can be done to protect against micro forks. Miners might use our implementation of the microblock candidate building, but they might just as well be using one of their own.

In our approach, miners greedily take the txs with the highest fees, so bumping the fee will give a tx some advantage, at least for our implementation. Other implementations might take different approaches.
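A sketch of that greedy selection under simplifying assumptions (fee-only ordering and a single gas budget; the real candidate builder may weigh fee per gas and other constraints):

```python
# Greedy microblock candidate building: sort the mempool by fee,
# highest first, and fill the block until the gas budget runs out.
# A simplified sketch, not the node's actual algorithm.

def build_candidate(mempool, gas_limit):
    """mempool: list of (tx_id, fee, gas) tuples."""
    candidate, used = [], 0
    for tx_id, fee, gas in sorted(mempool, key=lambda t: t[1], reverse=True):
        if used + gas <= gas_limit:
            candidate.append(tx_id)
            used += gas
    return candidate

mempool = [("tx_a", 20_000, 21_000),
           ("tx_b", 50_000, 21_000),   # bumped fee -> picked first
           ("tx_c", 15_000, 21_000)]
print(build_candidate(mempool, gas_limit=42_000))  # ['tx_b', 'tx_a']
```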


Thank you! I’ll keep observing and pass your advice on to others!
