feature wait 50 blocks to process Ethereum events #338
Conversation
Testing locally, ceth going across the bridge works perfectly! Will now test wiring up erowan and making sure it gets minted on the cosmos side as rowan after 30 blocks.
cmd/ebrelayer/relayer/ethereum.go (Outdated)

// Iterate to find events that are old enough to process
for key, value := range sub.EventsBuffer.Buffer {
    sub.Logger.Info(fmt.Sprintf("New header is %d stored header is %d", newHead.Number, key))
I just realized that we have a logic error here. This picks up events and stores them in the queue, but it doesn't check that the event is still valid, i.e. that its transaction did not end up excluded because the block was orphaned, which is exactly the problem we are trying to solve.
Here, we will need to check the transaction referenced by this event and make sure it is in fact in a block that is 30 blocks old. If it is no longer in an earlier block, we delete it from the queue; if it got included in a later block, we add it to the queue again.
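A rough sketch of the re-validation being described, assuming access to a go-ethereum client; the function name, parameters, and confirmation count are illustrative, not the actual ebrelayer code:

```go
package relayer

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// confirmTxStillIncluded reports whether the transaction behind a buffered
// event is still in the canonical chain, in the block we buffered it under,
// and deep enough to be treated as final.
func confirmTxStillIncluded(ctx context.Context, client *ethclient.Client,
	txHash common.Hash, bufferedBlock *big.Int, confirmations int64) (bool, error) {

	// If the receipt cannot be found, the block that contained the tx was
	// orphaned and the buffered event should be dropped.
	receipt, err := client.TransactionReceipt(ctx, txHash)
	if err != nil {
		return false, err
	}

	head, err := client.HeaderByNumber(ctx, nil)
	if err != nil {
		return false, err
	}

	// Require the including block to be at least `confirmations` blocks deep.
	depth := new(big.Int).Sub(head.Number, receipt.BlockNumber)
	if depth.Cmp(big.NewInt(confirmations)) < 0 {
		return false, nil
	}

	// If the tx was re-included in a different block than the one we buffered,
	// the caller should re-queue the event under the new block number.
	return receipt.BlockNumber.Cmp(bufferedBlock) == 0, nil
}
```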
I believe we can check this with the code logic we already have. When we get a new header whose block hash is not the same as the buffered one, we remove all events stored under the old block hash.
if ok && events.BlockHash != newHead.Hash() {
    // Deal with block reorg, delete all events with old hash value
    sub.Logger.Info(fmt.Sprintf("Block reorg found old hash is %v new hash is %v", events.BlockHash, newHead.Hash()))
    delete(sub.EventsBuffer.Buffer, newHead.Number)
}
That solution sounds good to me.
We have to somehow query to check whether this old block is still valid; if it is invalid then, like you said, we throw all of the old events out.
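A minimal sketch of that validity check, assuming the buffer is keyed by block number and stores the hash of the block the events came from; the names are illustrative, not the actual ebrelayer types:

```go
package relayer

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// dropEventsFromOrphanedBlock re-queries the block a batch of events was
// buffered under and evicts the batch if that block is no longer canonical.
func dropEventsFromOrphanedBlock(ctx context.Context, client *ethclient.Client,
	buffer map[uint64]common.Hash, blockNumber uint64) error {

	header, err := client.HeaderByNumber(ctx, new(big.Int).SetUint64(blockNumber))
	if err != nil {
		return err
	}
	if buffered, ok := buffer[blockNumber]; ok && header.Hash() != buffered {
		// The block we buffered these events under was orphaned,
		// so throw the old entries out of the queue.
		delete(buffer, blockNumber)
	}
	return nil
}
```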
…hain/sifnode into feature/wait-blocks-confirm
const web3 = new Web3(provider);
try {
    const accounts = await web3.eth.getAccounts();
    await web3.eth.sendTransaction({from: accounts[8], to: accounts[9], value: 1})
Could we please loop over this 30 times in this file vs having the python file trigger this 30 times? It should make things a lot faster as truffle exec has a bunch of overhead of spinning up the compiler and checking on the artifacts and then connecting to the network.
The loop over 30 iterations has been added to the Python script.
I think we need this the other way around - there should be a yarn command that generates N blocks. Like @ElliotFriedman said, doing this in python is really slow, but we also want to be able to do it in shell scripts.
@juniuszhou Please add a loop inside of the javascript file to make sendTransfer.js mine 30 blocks and remove the loop in the python file. This way things will execute much faster when tests are run.
You can use the advance block function from openzeppelin to advance blocks https://forum.openzeppelin.com/t/how-to-simulate-a-date-in-the-future-with-truffle-tests/908/3
https://docs.openzeppelin.com/test-helpers/0.5/api#advanceblock
> You can use the advance block function from openzeppelin to advance blocks https://forum.openzeppelin.com/t/how-to-simulate-a-date-in-the-future-with-truffle-tests/908/3
> https://docs.openzeppelin.com/test-helpers/0.5/api#advanceblock
Does this bignumber issue matter to us? OpenZeppelin/openzeppelin-test-helpers#70
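The PR ultimately handled this with a JavaScript advanceBlock script (see the commit list below), and the OpenZeppelin test helper is another option. Purely as an illustration of what "generate N blocks" means against a dev node, here is a Go sketch that calls ganache's evm_mine RPC in a loop; the helper name and the localhost:7545 endpoint are assumptions, not part of the repo:

```go
package main

import (
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

// advanceBlocks mines n empty blocks on a ganache/Hardhat-style dev node.
// evm_mine is a dev-node-only RPC method and is not available on mainnet.
func advanceBlocks(client *rpc.Client, n int) error {
	for i := 0; i < n; i++ {
		if err := client.Call(nil, "evm_mine"); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	client, err := rpc.Dial("http://localhost:7545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Mine the confirmation blocks the relayer waits for.
	if err := advanceBlocks(client, 50); err != nil {
		log.Fatal(err)
	}
}
```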
I think this is one case where we really do want to improve the integration tests before we merge this. We should definitely have tests that check the balances before and after 30 blocks, for example.
I'm not seeing anything here that looks incorrect to me. I talked with the Peggy Althea team; they wait 50 blocks before allowing transactions through. I think we should bump our wait up to 50 blocks for security reasons. Before I run a final test against this PR, please fix all of the integration tests and then tag me so I can run a final test before signing off on it.
Apparently we can check the status of an event that was previously mined. I would prefer our implementation use this method, querying the event and tx hash, instead of the recursive check that we currently use. https://ethereum.stackexchange.com/questions/68012/is-it-possible-to-get-duplicates-of-the-log-events-on-ethereum-when-we-have-a-fo
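A sketch of that approach, assuming go-ethereum's log subscription: logs reverted by a reorg are re-delivered with the Removed flag set to true, which can be used to evict the matching buffered event instead of relaying it. The function name and buffer handling are hypothetical:

```go
package relayer

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// watchBridgeLogs subscribes to BridgeBank logs and reacts to reorged logs.
func watchBridgeLogs(ctx context.Context, client *ethclient.Client, bridgeBank common.Address) error {
	logs := make(chan types.Log)
	query := ethereum.FilterQuery{Addresses: []common.Address{bridgeBank}}

	sub, err := client.SubscribeFilterLogs(ctx, query, logs)
	if err != nil {
		return err
	}
	defer sub.Unsubscribe()

	for {
		select {
		case err := <-sub.Err():
			return err
		case vLog := <-logs:
			if vLog.Removed {
				// This log was reverted by a reorg: drop the buffered event
				// for this tx hash instead of relaying it.
				log.Printf("reorged log, evicting tx %s", vLog.TxHash.Hex())
				continue
			}
			// Otherwise buffer the event and relay it once it is deep enough.
		}
	}
}
```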
…t.py from develop (commit bb368ec) We're going to do block advancement a different way
…node into feature/wait-blocks-confirm
cmd/ebrelayer/relayer/ethereum.go (Outdated)

sub.EventsBuffer.AddHeader(newHead.Number, newHead.Hash(), newHead.ParentHash)

for {
    thirty := big.NewInt(30)
This needs to be 50 blocks per the new requirement.
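For illustration, here is how the depth check might look after this feedback, with the magic number pulled into a named constant; the constant and helper names are hypothetical, not the actual ebrelayer code:

```go
package relayer

import "math/big"

// EthereumConfirmationBlocks is the number of blocks an event must be buried
// under before it is relayed (bumped from 30 to 50 per review).
const EthereumConfirmationBlocks = 50

// deepEnough reports whether an event's block has enough confirmations.
func deepEnough(newHeadNumber, eventBlockNumber *big.Int) bool {
	depth := new(big.Int).Sub(newHeadNumber, eventBlockNumber)
	return depth.Cmp(big.NewInt(EthereumConfirmationBlocks)) >= 0
}
```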
…ure/wait-blocks-confirm * 'develop' of ssh://github.com/Sifchain/sifnode: Integration scripts now do all transfers in python and have better waiting (#410)
…ure/permissioned-validators

* feature/test-whitelist-validators: Add support for calling sifnoded add-genesis-validators to integration tests
* [Frontend] Feature/Notifications (#389)
* fix issues 7 and 8 auditors found (#411)
* chains/ethereum: yarn add truffle (incl for local) (#413)
* feature wait 50 blocks to process Ethereum events (#338)
* Integration scripts now do all transfers in python and have better waiting (#410)
* [Frontend] Apply testing to CLP API. Apply fixes for breaking changes. (#409)
* Fix/peggy docs (#406)
* Tests for block delay (#407)
* Smart Contract Clean up #1, silence test warnings, fix smart contract issues (#403)
* Extract assets out of config (#390)
* Fix to format the tag name. (#400)
* added asset details to query liquidity provider response (#398)
* [Peggy] Test improvments and add test for ceth => eth (#396)
* Clp tut (#395)
* Workflow fixes. (#399)
* [ChainOps] feature/k8s-tutorial (#397)
* create event queue for delay send tx to sifchain.
* set delay blocks as 30.
* add adr file to describe ebrelayer.
* update account according to new init.sh
* update docs
* new algorithm to handle orphan block.
* fix linter issue.
* fix two errors.
* add scripts to generate 30 new blocks.
* print out balance for check.
* print out balance for check.
* remove the eth lock in integration env, it impact the balance of ceth.
* make the transfer script send transaction according to input.
* add sleep time to make sure new block produced.
* Pull in peggy-basic-test-docker.py and test/integration/peggy-e2e-test.py from develop (commit bb368ec) We're going to do block advancement a different way
* remove send transfer, create advanceBlock script
* fix the conflict.
* Don't wrap block advance test in an exception handler
* Require 50 block delay instead of 30
* update docs

Co-authored-by: Elliot Friedman <elliotfriedman3@gmail.com>
Co-authored-by: utx0_ <90531+utx0@users.noreply.github.com>
Co-authored-by: Elliot <34463580+ElliotFriedman@users.noreply.github.com>
Co-authored-by: James Moore <james@sifchain.finance>
For Ethereum, blocks are not immediately final and reorgs happen occasionally. After ebrelayer receives an event from Ethereum, we put it on an event queue and process the queued events one by one once enough blocks have been produced on top of them (30 in the original design, raised to 50 in this PR).
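A simplified sketch of the event buffer this describes, assuming events are keyed by the block they were observed in and only flushed once that block is deep enough; the type and method names are illustrative, not the actual ebrelayer code:

```go
package relayer

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common"
)

// BufferedEvent is one Ethereum bridge event awaiting confirmations.
type BufferedEvent struct {
	BlockHash common.Hash
	TxHash    common.Hash
	Data      []byte
}

// EventBuffer holds events keyed by the block number they were seen in.
type EventBuffer struct {
	ConfirmationBlocks uint64
	Buffer             map[uint64][]BufferedEvent
}

// Add queues an event under its block number.
func (b *EventBuffer) Add(blockNumber uint64, ev BufferedEvent) {
	b.Buffer[blockNumber] = append(b.Buffer[blockNumber], ev)
}

// Flush returns (and removes) all events whose block is now at least
// ConfirmationBlocks behind newHead, i.e. old enough to relay to Sifchain.
func (b *EventBuffer) Flush(newHead *big.Int) []BufferedEvent {
	var ready []BufferedEvent
	head := newHead.Uint64()
	for blockNumber, events := range b.Buffer {
		if head >= blockNumber && head-blockNumber >= b.ConfirmationBlocks {
			ready = append(ready, events...)
			delete(b.Buffer, blockNumber)
		}
	}
	return ready
}
```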