
test: validator merkle proof #4132

Closed

Conversation

@Inspector-Butters

This PR adds a test to the Altair light-client tests in test_single_merkle_proof.py, covering the generation and validation of a Merkle proof for a validator in the beacon state. This functionality is typically used by light clients to prove information about a validator.
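
For context, the new test follows the shape of the existing single_merkle_proof tests. Roughly (the decorator choice and helper names such as spec.compute_merkle_proof, spec.is_valid_merkle_branch, spec.floorlog2, and spec.get_subtree_index mirror the existing light-client tests; the get_generalized_index call is illustrative rather than the PR's exact code):

```python
from eth2spec.test.context import spec_state_test, with_light_client


@with_light_client
@spec_state_test
def test_validator_merkle_proof(spec, state):
    # Prove membership of an arbitrary validator in the beacon state.
    validator_index = 0
    validator = state.validators[validator_index]

    # Illustrative: derive the generalized index of
    # state.validators[validator_index] from the BeaconState SSZ schema.
    gindex = spec.get_generalized_index(spec.BeaconState, "validators", validator_index)

    yield "object", state
    branch = spec.compute_merkle_proof(state, gindex)
    yield "proof", {
        "leaf": "0x" + validator.hash_tree_root().hex(),
        "leaf_index": gindex,
        "branch": ["0x" + root.hex() for root in branch],
    }

    # A light client holding only the state root can verify the proof.
    assert spec.is_valid_merkle_branch(
        leaf=validator.hash_tree_root(),
        branch=branch,
        depth=spec.floorlog2(gindex),
        index=spec.get_subtree_index(gindex),
        root=state.hash_tree_root(),
    )
```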

This PR also adds the .idea directory to .gitignore.

The following are a number of potential ways to improve this repository and the testing framework in general:

  1. PR/Issue templates:
    It would be beneficial to have templates for PRs and Issues. Some examples:

    • Having an issue template for test coverage would allow creating issues that point out places lacking tests; by encouraging contributors to work on those issues, we could reach better coverage in the repo.
    • Defining a PR checklist would help contributors provide all the needed information in the PR, add sufficient tests, check for linter errors, and so on before publishing their PRs. This would make PRs clearer and decrease the work maintainers need to do to review them.
  2. Aggregated specs:
    Having aggregated specs, instead of per-fork ones, would allow client developers to see the latest specs in one place rather than being forced to implement changes incrementally. Check out the light client aggregated specs as an example.

  3. Only build eth2spec for specified forks:
    If possible, don't build the whole eth2spec package when fork=X is provided to the make test command. There is no need to build the package for phases newer than the targeted one.

  4. Cache for builds and test runs:
    If possible, add caching to the testing framework. It could help to rebuild only the parts of the package that have changed, rather than the whole package. Caching could also be used for test runs, storing results instead of re-running every test each time, though this might be tricky. pytest offers a few ways to help here, such as the --lf and --ff flags; if it proves worthwhile, build systems like Bazel handle this kind of caching automatically.

  5. Provide documentation on test decorators and their ordering:
    It is not obvious which decorators are available and what their use cases are; in particular, it is not clear in what order they can be applied to a test case. An explanation of this would help contributors use the decorators effectively (see the sketch after this list).
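
As a concrete illustration of point 5, here is a minimal sketch of the usual decorator stacking, assuming the names currently exported by eth2spec.test.context (with_all_phases and spec_state_test; the test body is only a placeholder):

```python
from eth2spec.test.context import spec_state_test, with_all_phases


# The fork selector goes outermost: it decides which spec versions the
# test runs against, while @spec_state_test (innermost) builds the
# genesis test state and injects `spec` and `state` into the function.
@with_all_phases
@spec_state_test
def test_example(spec, state):
    # Placeholder body: a fresh test state starts at the genesis slot.
    assert state.slot == spec.GENESIS_SLOT
```
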

@jtraglia
Member

Hey @Inspector-Butters, thanks for the PR. I'd like to ask for @etan-status's opinion on the usefulness of this test. It seems fine I suppose, but I'm just not sure.

Regarding your extra feedback...

It would be beneficial to have templates for PRs and Issues.

Maybe. Personally I find templates & checklists a little distracting, but you're right that they would help first-time contributors. I'll look into doing this.

Having aggregated specs, instead of per-fork ones, would allow client developers to see the latest specs in one place rather than being forced to implement changes incrementally.

Sounds nice, but I like our current organization. We've built tools (e.g., https://jtraglia.github.io/eth-spec-viewer/) which allow developers to view an "aggregated" version of the specifications.

If possible, don't build the whole eth2spec package when fork=X is provided to the make test command. There is no need to build the package for phases newer than the targeted one.

This would be more complicated to implement than you think. I've made some recent improvements to the build system. I would prefer to just quickly rebuild everything & be sure that the pyspec is correct.

If possible, add caching to the testing framework. It could help to rebuild only the parts of the package that have changed, rather than the whole package. Caching could also be used for test runs, storing results instead of re-running every test each time, though this might be tricky. pytest offers a few ways to help here, such as the --lf and --ff flags; if it proves worthwhile, build systems like Bazel handle this kind of caching automatically.

Again, sounds nice but overly complex. I don't believe iterative/cached builds are currently worth the hassle. After recent improvements, the pyspec build takes like 3 seconds. Not instant, but not that slow. We could indeed add wrapped support for --lf and --ff if necessary. There's a 0% chance we will add Bazel to our toolchain 😅

It is not obvious which decorators are available and what their use cases are; in particular, it is not clear in what order they can be applied to a test case. An explanation of this would help contributors use the decorators effectively.

We are certainly lacking a lot of documentation. A page on this would be nice to have.

@etan-status
Contributor

etan-status commented Feb 19, 2025

Thanks for the contribution.

Generally, it helps to separate the various points into multiple issues / PRs; that way it is easier to track each of them individually. Here, you mix adding a new test and updating a buildsystem file with a couple of different suggestions (some specific and some general).

The consensus-specs currently focus on covering functionality that is required to implement them. So far, the other single_merkle_proof tests are all needed for serving light client data, and they ensure that implementations can be tested even if they do not support arbitrary Merkle proof generation and instead hardcode proof generation for just the minimally necessary generalized indices.
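
To make "hardcode proof generation" concrete, such an implementation might expose only something like the following sketch (purely illustrative, not any specific client's API; the gindex values are the Altair ones from the light-client specs):

```python
# A node that hardcodes proof generation for exactly the generalized
# indices light clients need, instead of implementing a generic prover.
FINALIZED_ROOT_GINDEX = 105         # finalized_checkpoint.root in BeaconState
CURRENT_SYNC_COMMITTEE_GINDEX = 54  # current_sync_committee in BeaconState
NEXT_SYNC_COMMITTEE_GINDEX = 55     # next_sync_committee in BeaconState

SUPPORTED_GINDICES = {
    FINALIZED_ROOT_GINDEX,
    CURRENT_SYNC_COMMITTEE_GINDEX,
    NEXT_SYNC_COMMITTEE_GINDEX,
}


def prove(state, gindex):
    """Return the Merkle branch for `gindex`, walking only fixed paths."""
    if gindex not in SUPPORTED_GINDICES:
        # An arbitrary gindex (e.g. one pointing at a validator entry)
        # would need a generic prover, which this node does not have.
        raise NotImplementedError(f"no hardcoded proof for gindex {gindex}")
    ...  # walk the fixed, precomputed path for this gindex
```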

If we add tests for validators, such implementations would either have to skip those tests or extend their implementations to pass them properly. I'm not sure what the policy on that is, and would defer back to @jtraglia to decide. The test itself looks okay, and I think existing implementations are likely to already support producing the new validator proofs without changes (but there is no guarantee that they currently do).

@jtraglia
Member

Thanks for your response @etan-status!

The consensus-specs currently focus on covering functionality that is required to implement them. So far, the other single_merkle_proof tests are all needed for serving light client data, and they ensure that implementations can be tested even if they do not support arbitrary Merkle proof generation and instead hardcode proof generation for just the minimally necessary generalized indices.

If we add tests for validators, such implementations would either have to skip those tests or extend their implementations to pass them properly. I'm not sure what the policy on that is, and would defer back to @jtraglia to decide.

This makes sense & I agree. I would prefer not to force clients into updating their reference test infrastructure. And since this test doesn't really test a new & necessary part of light clients, I'm inclined to reject it.

@etan-status
Contributor

etan-status commented Feb 19, 2025

The test infrastructure is likely already there; this is more of a policy decision. Technically, one doesn't need to be able to create validator proofs to be a full node.

@Inspector-Butters
Author

@jtraglia @etan-status Thank you for the reviews. I'll mark this PR as closed and will try to submit separate improvements later on.
