POS efficient l1 polling #2506

Open · wants to merge 77 commits into base: main
Conversation

@tbro (Contributor) commented Jan 29, 2025

Closes #2683

This PR:

Asynchronously updates the stake table cache by listening for L1Event::NewFinalized. This happens in a second update-loop thread (added to L1Client.update_tasks). The task is initialized on the first call to get_stake_tables (which will only be called in the POS version). To avoid fetching from L1 block 0, we take the block the contract was deployed at as the initial block. Subsequent calls span from the previous finalized block to the block received in the NewFinalized event.
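The range tracking described above (start at the deployment block, then span from the previously processed finalized block to the newly finalized one) can be sketched in miniature. `next_fetch_range` and the surrounding names are illustrative, not the PR's actual API:

```rust
use std::ops::Range;

/// Illustrative helper: given the last finalized block we already processed
/// and a newly finalized block number, return the half-open range of L1
/// blocks whose events still need to be fetched (None if nothing is new).
fn next_fetch_range(last_processed: u64, new_finalized: u64) -> Option<Range<u64>> {
    (new_finalized > last_processed).then(|| (last_processed + 1)..(new_finalized + 1))
}
```

On the first iteration, `last_processed` would be the contract deployment block, so the loop never scans back to L1 block 0; each NewFinalized event then advances `last_processed` to the block it just covered.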

Note that errors retrieving the initial block from the contract are logged, but we fall back to using the last known finalized block. We should double-check this logic to make sure it is acceptable.

Most relevant code is in types/src/v0/impls/l1.rs. Tests exist for fetching stake tables from L1 and for ensuring the stake table cache is updated by the update loop. Upgrading tasks is therefore tested by implication.

There will be some follow-ups for more error handling on the HTTP requests. We should create issues for this before merge (see comments below).

A generator utility was added as a best-of-both-worlds solution: it avoids some async calls while also avoiding lifetime issues hidden behind obtuse impl Future types.

@sveitser (Collaborator) commented Feb 3, 2025

We need to check whether the limit settings we have right now make sense, or whether we should switch to using subscriptions. For Alchemy the limits are documented here: https://docs.alchemy.com/reference/eth-getlogs

@sveitser (Collaborator) commented Feb 3, 2025

I think for subscriptions there is no way to subscribe to only finalized L1 events. I might be wrong about this, but I couldn't find one. So we would have to do some extra work to make sure we don't consider anything that isn't finalized.

If we use HTTP, then I think it should be possible to handle the errors we get when we fetch too many events, and fetch fewer. For that we can check what HTTP errors are returned from Alchemy and Infura (we use both providers at the moment) and handle those.

@sveitser (Collaborator) commented Feb 3, 2025

Hmm, both seem to return HTTP 200 for too-large requests, and they don't use the same error code.

infura

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32005,
    "data": {
      "from": "0x4B5749",
      "limit": 10000,
      "to": "0x4B7EEC"
    },
    "message": "query returned more than 10000 results. Try with this block range [0x4B5749, 0x4B7EEC]."
  }
}

alchemy

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Log response size exceeded. You can make eth_getLogs requests with up to a 2K block range and no limit on the response size, or you can request any block range with a cap of 10K logs in the response. Based on your parameters and the response size limit, this block range should work: [0x0, 0x4b6b35]"
  }
}

So I wonder if we should retry about 5 more times with smaller request sizes, as long as we do get a successful HTTP response from the RPC, and then give up, because then it's likely not the response size that is the problem.
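A halving strategy along those lines could look like the following sketch. `FetchResult` and `fetch_with_backoff` are hypothetical stand-ins for the real RPC call, and the retry count and halving policy are assumptions, not something the PR or either provider prescribes:

```rust
use std::ops::Range;

/// Outcome of a single hypothetical eth_getLogs-style request.
/// (Block numbers stand in for real log entries, for illustration.)
#[derive(Debug, PartialEq)]
enum FetchResult {
    Ok(Vec<u64>),
    TooManyResults, // provider rejected the range as too large
}

/// Retry a log fetch up to `max_retries` more times, halving the block
/// range on each "too many results" error, then give up.
fn fetch_with_backoff(
    mut range: Range<u64>,
    max_retries: u32,
    fetch: impl Fn(&Range<u64>) -> FetchResult,
) -> Option<Vec<u64>> {
    for _ in 0..=max_retries {
        match fetch(&range) {
            FetchResult::Ok(logs) => return Some(logs),
            FetchResult::TooManyResults => {
                let mid = range.start + (range.end - range.start) / 2;
                if mid == range.start {
                    break; // cannot shrink the range further
                }
                range = range.start..mid;
            }
        }
    }
    None
}
```

On success the caller would record `range.end` and continue fetching from there, so a shrunken request only defers the remainder of the range to the next iteration.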

state.stake_table_initial_block = init_block;
init_block
} else {
tracing::error!("Failed to retrieve initial block from stake-table contract");
sveitser (Collaborator):
What's the rationale for actually handling the case where we don't have the init block from the stake table? I think we can't handle this case correctly anyway because it means that the stake table contract at this address is not what we expect, or the RPC is having issues.

sveitser (Collaborator):

I think it would be better to make this function fallible and return an error in that case.

tbro (Contributor, Author):

Yes, we log the error so that if it's the former, someone can investigate. I don't think we want to panic on RPC errors. We could retry indefinitely, but then we aren't handling the first case. Using the latest finalized block as a fallback doesn't seem too terrible to me.

next: Option<Range<u64>>,
}

impl ChunkGenerator {
sveitser (Collaborator):
I think we can replace this with a much shorter function that has the same behaviour, e.g.

fn chunk_range(range: std::ops::Range<usize>, chunk_size: usize) -> Vec<std::ops::Range<usize>> {
    range.clone().step_by(chunk_size)
        .map(|start| start..(start + chunk_size).min(range.end))
        .collect()
}

I'm also fine leaving it as is but if we can have less code for the same functionality that's nice.

tbro (Contributor, Author), Feb 25, 2025:
We can. It's not exactly the same thing, though: yours needlessly allocates a Vec. I don't think that is a big deal either, but since we already have code to avoid that, why not use it?
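For what it's worth, the allocation can also be avoided with plain iterator combinators rather than a hand-written generator. A sketch (not the PR's actual ChunkGenerator):

```rust
use std::ops::Range;

/// Lazily yield chunk_size-sized sub-ranges of `range` without collecting
/// into a Vec. The last chunk may be shorter than chunk_size.
fn chunk_range(range: Range<u64>, chunk_size: u64) -> impl Iterator<Item = Range<u64>> {
    let end = range.end; // capture before `range` is consumed by step_by
    range
        .step_by(chunk_size as usize)
        .map(move |start| start..(start + chunk_size).min(end))
}
```

Because the return type is `impl Iterator`, the ranges are produced on demand inside the fetch loop, so nothing is allocated up front.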


// deploy the stake_table contract
let stake_table_contract =
contract_bindings_ethers::permissioned_stake_table::PermissionedStakeTable::deploy(
sveitser (Collaborator), Feb 25, 2025:
I think we should use only alloy bindings in new code. Otherwise it increases the maintenance work for the migration.

tbro (Contributor, Author):
Not sure I have time to learn a new framework today.

sveitser (Collaborator):
It shouldn't be a big change and we have conversion utilities. This PR is already using PermissionedStakeTableInstance which is from the alloy bindings. But okay, we can deal with it later.

setup_test();

let anvil = Anvil::new()
.args(vec!["--block-time", "1", "--slots-in-an-epoch", "1"])
sveitser (Collaborator):
It would be nice if the test were written such that it would fail if it used non-finalized blocks. Currently I think it wouldn't matter, because every block is a finalized block.

tbro (Contributor, Author):
Not following.

tbro (Contributor, Author):
Can you make a specific suggestion?

sveitser (Collaborator), Feb 25, 2025:
I'm thinking in terms of what we should test, apart from basic correctness of the implementation. One problem would be if we fetched any events that are not finalized. I don't think we have a test that asserts we are not fetching any non-finalized events. This was an issue / unclear requirement with the original PR, which I think tells us that we should have such a test.

sleep(Duration::from_secs(1)).await;
}
// Take block_number from the first receipt. Later
// blocks don't appear to become finalized. Need to wait longer?
sveitser (Collaborator):
This comment confuses me. So this is anvil behaving strangely?

tbro (Contributor, Author):
It's actually just that the last transaction doesn't get inserted into the cache. I'm not sure whether it is a bug in the test or in the code, but I'm looking into it.

tbro (Contributor, Author):
The event listener in stake_update_loop never receives an event for the last block. I can trigger it by submitting two transactions later. I think this is how anvil works, but I may be wrong.

@tbro requested a review from sveitser, February 25, 2025 16:20
@tbro (Contributor, Author) commented Feb 25, 2025

Created Issue: #2681

sync::{Mutex, Notify},
task::JoinHandle,
};
use url::Url;

use crate::v0::utils::parse_duration;
use crate::{v0::utils::parse_duration, v0_3::StakeTables};
sveitser (Collaborator):
We import from v0_3 in v0_1. I think here it doesn't really create an issue, because it doesn't affect anything we serialize, but should we move it around? I think we normally avoid this.

tbro (Contributor, Author):
I think if we create a new v3::L1State, we also need a v3::L1Client. Feels like a rabbit hole. I'll fiddle a bit and see how deep it goes.

@sveitser (Collaborator) commented Feb 26, 2025

I'm questioning whether this PR is doing what we want. If I add a bit of logging:

diff --git a/types/src/v0/impls/l1.rs b/types/src/v0/impls/l1.rs
index f6193bf26c..51beef8e9a 100644
--- a/types/src/v0/impls/l1.rs
+++ b/types/src/v0/impls/l1.rs
@@ -440,6 +440,7 @@ impl L1Client {
                     let chunks = ChunkGenerator::new(start_block, finalized.number, chunk_size);
                     let mut events: Vec<StakersUpdated> = Vec::new();
                     for Range { start, end } in chunks {
+                        tracing::debug!("fetching l1 events from {start} to {end}");
                         match stake_table_contract
                             .StakersUpdated_filter()
                             .from_block(start)
@@ -462,6 +463,7 @@ impl L1Client {
                     {
                         let st = StakeTables::from_l1_events(events);
                         let mut state = state.lock().await;
+                        tracing::debug!("putting st at block {} {st:?}", finalized.number);
                         state.put_stake_tables(finalized.number, st)
                     };
                     sleep(retry_delay).await;

I get this output from test_stake_table_update, which I think shows that we don't fetch the right stake table.

2025-02-26T15:43:48.488085Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 3 StakeTables { stake_table: StakeTable([StakeTableEntry { stake_key: VerKey((QuadExtField(18337985037180234260463961761104771073406584577940333424897686394546658103112 + 13681344426547445834563732424320056242569556617600483740808652019948198640071 * u), QuadExtField(16928608149749939418265816302295618712963487584480756638328751944826025501210 + 13761642627553679691372456450177978856022009220215521961095038798593968653153 * u), QuadExtField(1 +  * u))), stake_amount: 1 }]), da_members: DAMembers([]) }
2025-02-26T15:43:49.488779Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 4 StakeTables { stake_table: StakeTable([]), da_members: DAMembers([]) }

In order to do this correctly, I think we need to keep a cache of events (or better, persist them), then fetch new events, add them to the events we already have, and compute the stake table from the full set. Or alternatively implement a function that applies new events to an existing stake table to compute the new stake table.
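The second alternative (folding new events into the previously computed table) could be sketched like this. The types are simplified stand-ins, not the contract's actual StakersUpdated layout or the StakeTables representation, and the removed-before-added ordering within an event is an assumption:

```rust
use std::collections::BTreeMap;

/// Simplified stand-in for the contract's StakersUpdated event.
struct StakersUpdated {
    removed: Vec<String>,      // keys of stakers that left
    added: Vec<(String, u64)>, // (key, stake) of stakers that joined
}

/// Fold a batch of new events into a previously computed stake table, so
/// each update only needs the events since the last finalized block
/// instead of recomputing from only the newest chunk.
fn apply_events(
    mut table: BTreeMap<String, u64>,
    events: &[StakersUpdated],
) -> BTreeMap<String, u64> {
    for ev in events {
        for key in &ev.removed {
            table.remove(key);
        }
        for (key, stake) in &ev.added {
            table.insert(key.clone(), *stake);
        }
    }
    table
}
```

With something like this, the cached table at finalized block N plus the events in (N, M] yields the table at block M, which is what the per-chunk recomputation in the current loop loses.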

full logs:

env RUST_LOG=espresso_types=debug cargo test -p espresso-types -- test_stake_table_update
   Compiling espresso-types v0.1.0 (/home/lulu/r/EspressoSystems/espresso-sequencer-worktree/types)
    Finished `test` profile [optimized] target(s) in 5.67s
     Running unittests src/lib.rs (target/nix/debug/deps/espresso_types-dec13ba098fb79bd)

running 1 test
2025-02-26T15:43:43.503902Z DEBUG espresso_types::v0::impls::l1: spawn blocks update loop
2025-02-26T15:43:43.505300Z  INFO L1 client update: espresso_types::v0::impls::l1: Established L1 block stream
2025-02-26T15:43:44.507708Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=1
2025-02-26T15:43:44.507844Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=1 old_head=0
2025-02-26T15:43:44.507852Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 0, timestamp: 1740584623, hash: 0xc7e2a5d154776226e79e4176efcb1ad3a36160da18dffbdbd54e23e5d1a3b44d }) old_finalized=None
2025-02-26T15:43:44.507861Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 1, finalized: Some(L1BlockInfo { number: 0, timestamp: 1740584623, hash: 0xc7e2a5d154776226e79e4176efcb1ad3a36160da18dffbdbd54e23e5d1a3b44d }) }
2025-02-26T15:43:45.483827Z DEBUG espresso_types::v0::impls::l1: spawn blocks update loop
2025-02-26T15:43:45.483845Z DEBUG espresso_types::v0::impls::l1: spawn stake table update loop
2025-02-26T15:43:45.483856Z  WARN espresso_types::v0::v0_1::l1: Upgrading `L1Client` background tasks for v3 (Proof of Stake)!
2025-02-26T15:43:45.483875Z  WARN espresso_types::v0::impls::l1: `Successfully upgrade L1Client background tasks!`
2025-02-26T15:43:45.483880Z DEBUG espresso_types::v0::impls::l1: fetch stake table events in range start=0 end=0
2025-02-26T15:43:45.484086Z  INFO L1 client update: espresso_types::v0::impls::l1: Established L1 block stream
2025-02-26T15:43:46.485552Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=3
2025-02-26T15:43:46.485766Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=3 old_head=1
2025-02-26T15:43:46.485779Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 1, timestamp: 1740584624, hash: 0x7680377c0dbd50a520395aa7cef7f1ab3d414e12cc2ef6a01900b65d5a4681eb }) old_finalized=Some(L1BlockInfo { number: 0, timestamp: 1740584623, hash: 0xc7e2a5d154776226e79e4176efcb1ad3a36160da18dffbdbd54e23e5d1a3b44d })
2025-02-26T15:43:46.485793Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 3, finalized: Some(L1BlockInfo { number: 1, timestamp: 1740584624, hash: 0x7680377c0dbd50a520395aa7cef7f1ab3d414e12cc2ef6a01900b65d5a4681eb }) }
2025-02-26T15:43:46.485810Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 1 to 1
2025-02-26T15:43:46.485977Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 1 StakeTables { stake_table: StakeTable([]), da_members: DAMembers([]) }
2025-02-26T15:43:47.485738Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=4
2025-02-26T15:43:47.485876Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=4 old_head=3
2025-02-26T15:43:47.485884Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 2, timestamp: 1740584625, hash: 0xd8f2d49f9431a9fb170e61528839d6f80129930cb85d2e61fba6d5b1fdb61e8f }) old_finalized=Some(L1BlockInfo { number: 1, timestamp: 1740584624, hash: 0x7680377c0dbd50a520395aa7cef7f1ab3d414e12cc2ef6a01900b65d5a4681eb })
2025-02-26T15:43:47.485892Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 4, finalized: Some(L1BlockInfo { number: 2, timestamp: 1740584625, hash: 0xd8f2d49f9431a9fb170e61528839d6f80129930cb85d2e61fba6d5b1fdb61e8f }) }
2025-02-26T15:43:47.486961Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 2 to 2
2025-02-26T15:43:47.487071Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 2 StakeTables { stake_table: StakeTable([]), da_members: DAMembers([]) }
2025-02-26T15:43:48.487231Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=5
2025-02-26T15:43:48.487383Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=5 old_head=4
2025-02-26T15:43:48.487393Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 3, timestamp: 1740584626, hash: 0x1d87b05f1d3dc15fa3db017a4de32fbbcc824709c52d7378bd16c6d5ce9828e4 }) old_finalized=Some(L1BlockInfo { number: 2, timestamp: 1740584625, hash: 0xd8f2d49f9431a9fb170e61528839d6f80129930cb85d2e61fba6d5b1fdb61e8f })
2025-02-26T15:43:48.487407Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 5, finalized: Some(L1BlockInfo { number: 3, timestamp: 1740584626, hash: 0x1d87b05f1d3dc15fa3db017a4de32fbbcc824709c52d7378bd16c6d5ce9828e4 }) }
2025-02-26T15:43:48.487427Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 3 to 3
2025-02-26T15:43:48.488085Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 3 StakeTables { stake_table: StakeTable([StakeTableEntry { stake_key: VerKey((QuadExtField(18337985037180234260463961761104771073406584577940333424897686394546658103112 + 13681344426547445834563732424320056242569556617600483740808652019948198640071 * u), QuadExtField(16928608149749939418265816302295618712963487584480756638328751944826025501210 + 13761642627553679691372456450177978856022009220215521961095038798593968653153 * u), QuadExtField(1 +  * u))), stake_amount: 1 }]), da_members: DAMembers([]) }
2025-02-26T15:43:49.487838Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=6
2025-02-26T15:43:49.487968Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=6 old_head=5
2025-02-26T15:43:49.487976Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 4, timestamp: 1740584627, hash: 0x133afe48a0abc70c33b89f3326ef473a18ea90aaff66b48013e171339cb4c934 }) old_finalized=Some(L1BlockInfo { number: 3, timestamp: 1740584626, hash: 0x1d87b05f1d3dc15fa3db017a4de32fbbcc824709c52d7378bd16c6d5ce9828e4 })
2025-02-26T15:43:49.487985Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 6, finalized: Some(L1BlockInfo { number: 4, timestamp: 1740584627, hash: 0x133afe48a0abc70c33b89f3326ef473a18ea90aaff66b48013e171339cb4c934 }) }
2025-02-26T15:43:49.488617Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 4 to 4
2025-02-26T15:43:49.488779Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 4 StakeTables { stake_table: StakeTable([]), da_members: DAMembers([]) }
2025-02-26T15:43:50.489419Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=7
2025-02-26T15:43:50.489580Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=7 old_head=6
2025-02-26T15:43:50.489591Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 5, timestamp: 1740584628, hash: 0x990390f59657d03acb59d8b7bee41a11ce9763d100956cdcdab907f9a10b71c1 }) old_finalized=Some(L1BlockInfo { number: 4, timestamp: 1740584627, hash: 0x133afe48a0abc70c33b89f3326ef473a18ea90aaff66b48013e171339cb4c934 })
2025-02-26T15:43:50.489606Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 7, finalized: Some(L1BlockInfo { number: 5, timestamp: 1740584628, hash: 0x990390f59657d03acb59d8b7bee41a11ce9763d100956cdcdab907f9a10b71c1 }) }
2025-02-26T15:43:50.489617Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 5 to 5
2025-02-26T15:43:50.490258Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 5 StakeTables { stake_table: StakeTable([StakeTableEntry { stake_key: VerKey((QuadExtField(13623634627503345705121812380705525230337895273009930462882583478713189498455 + 8891781882268445072131145626320047112552658273383267537885594119390401735248 * u), QuadExtField(18652700513987170285021948077060530158799708004159449452364305497991962903399 + 142494975689909387615101216831150618248822926015426895654967292800621328166 * u), QuadExtField(1 +  * u))), stake_amount: 1 }]), da_members: DAMembers([]) }
2025-02-26T15:43:51.490710Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=8
2025-02-26T15:43:51.490860Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=8 old_head=7
2025-02-26T15:43:51.490872Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 6, timestamp: 1740584629, hash: 0x5aae3b69b177d1ce890d8cca6d7d8307cc024a4f656c8bca8c314f80556a03c5 }) old_finalized=Some(L1BlockInfo { number: 5, timestamp: 1740584628, hash: 0x990390f59657d03acb59d8b7bee41a11ce9763d100956cdcdab907f9a10b71c1 })
2025-02-26T15:43:51.490883Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 8, finalized: Some(L1BlockInfo { number: 6, timestamp: 1740584629, hash: 0x5aae3b69b177d1ce890d8cca6d7d8307cc024a4f656c8bca8c314f80556a03c5 }) }
2025-02-26T15:43:51.490895Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 6 to 6
2025-02-26T15:43:51.491026Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 6 StakeTables { stake_table: StakeTable([]), da_members: DAMembers([]) }
2025-02-26T15:43:52.492706Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=9
2025-02-26T15:43:52.492883Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=9 old_head=8
2025-02-26T15:43:52.492896Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 7, timestamp: 1740584630, hash: 0x53704b9876cf8168451824a18667c23ad4866a6ed07a130e6cfcc120d4835cd5 }) old_finalized=Some(L1BlockInfo { number: 6, timestamp: 1740584629, hash: 0x5aae3b69b177d1ce890d8cca6d7d8307cc024a4f656c8bca8c314f80556a03c5 })
2025-02-26T15:43:52.492914Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 9, finalized: Some(L1BlockInfo { number: 7, timestamp: 1740584630, hash: 0x53704b9876cf8168451824a18667c23ad4866a6ed07a130e6cfcc120d4835cd5 }) }
2025-02-26T15:43:52.492928Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 7 to 7
2025-02-26T15:43:52.493669Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 7 StakeTables { stake_table: StakeTable([StakeTableEntry { stake_key: VerKey((QuadExtField(8337572746547760066247273712136516957097096708801987562305928242594638545226 + 2640591789187571665209678152946213199339033290567467027379368355661573557412 * u), QuadExtField(1718121884470706531111023519611763630700788791152872539263453595418662830375 + 20297158613147534280822226236142655972781525194765122249839607583424469532051 * u), QuadExtField(1 +  * u))), stake_amount: 1 }]), da_members: DAMembers([StakeTableEntry { stake_key: VerKey((QuadExtField(8337572746547760066247273712136516957097096708801987562305928242594638545226 + 2640591789187571665209678152946213199339033290567467027379368355661573557412 * u), QuadExtField(1718121884470706531111023519611763630700788791152872539263453595418662830375 + 20297158613147534280822226236142655972781525194765122249839607583424469532051 * u), QuadExtField(1 +  * u))), stake_amount: 1 }]) }
2025-02-26T15:43:53.494222Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=10
2025-02-26T15:43:53.494342Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=10 old_head=9
2025-02-26T15:43:53.494351Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 8, timestamp: 1740584631, hash: 0xdb6340f8b10f95efd0553e1f1b10825e168db9087c83e2028083c1b2e6777f3d }) old_finalized=Some(L1BlockInfo { number: 7, timestamp: 1740584630, hash: 0x53704b9876cf8168451824a18667c23ad4866a6ed07a130e6cfcc120d4835cd5 })
2025-02-26T15:43:53.494359Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 10, finalized: Some(L1BlockInfo { number: 8, timestamp: 1740584631, hash: 0xdb6340f8b10f95efd0553e1f1b10825e168db9087c83e2028083c1b2e6777f3d }) }
2025-02-26T15:43:53.495430Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 8 to 8
2025-02-26T15:43:53.495547Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 8 StakeTables { stake_table: StakeTable([]), da_members: DAMembers([]) }
2025-02-26T15:43:54.494957Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=11
2025-02-26T15:43:54.495113Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=11 old_head=10
2025-02-26T15:43:54.495124Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 9, timestamp: 1740584632, hash: 0x0721ede50256ff8c11efca1b2984f5b826250e9e08e9e33f88f9e07efe8ad4fa }) old_finalized=Some(L1BlockInfo { number: 8, timestamp: 1740584631, hash: 0xdb6340f8b10f95efd0553e1f1b10825e168db9087c83e2028083c1b2e6777f3d })
2025-02-26T15:43:54.495138Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 11, finalized: Some(L1BlockInfo { number: 9, timestamp: 1740584632, hash: 0x0721ede50256ff8c11efca1b2984f5b826250e9e08e9e33f88f9e07efe8ad4fa }) }
2025-02-26T15:43:54.497216Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 9 to 9
2025-02-26T15:43:54.497905Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 9 StakeTables { stake_table: StakeTable([StakeTableEntry { stake_key: VerKey((QuadExtField(13082772893696348912861012303361832596884036908790327819686050484814715409786 + 15054166845034755303143865455393897213599863915466223684077576145888354277835 * u), QuadExtField(10791847071486488528977442371038597602438978271725892111922842998862559252051 + 20035711991739647015976017621578765249575766623548934712878392131559606313310 * u), QuadExtField(1 +  * u))), stake_amount: 1 }]), da_members: DAMembers([]) }
2025-02-26T15:43:55.496226Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=12
2025-02-26T15:43:55.496358Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=12 old_head=11
2025-02-26T15:43:55.496368Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 10, timestamp: 1740584633, hash: 0xd3bf9f42a6bcd6fc485fdfdaeb240bdeb1d6bb8f41095594f311c397fd12c5f0 }) old_finalized=Some(L1BlockInfo { number: 9, timestamp: 1740584632, hash: 0x0721ede50256ff8c11efca1b2984f5b826250e9e08e9e33f88f9e07efe8ad4fa })
2025-02-26T15:43:55.496382Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 12, finalized: Some(L1BlockInfo { number: 10, timestamp: 1740584633, hash: 0xd3bf9f42a6bcd6fc485fdfdaeb240bdeb1d6bb8f41095594f311c397fd12c5f0 }) }
2025-02-26T15:43:55.499357Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 10 to 10
2025-02-26T15:43:55.499493Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 10 StakeTables { stake_table: StakeTable([]), da_members: DAMembers([]) }
2025-02-26T15:43:56.497313Z DEBUG L1 client update: espresso_types::v0::impls::l1: Received L1 block head=13
2025-02-26T15:43:56.497507Z DEBUG L1 client update: espresso_types::v0::impls::l1: L1 head updated head=13 old_head=12
2025-02-26T15:43:56.497520Z  INFO L1 client update: espresso_types::v0::impls::l1: L1 finalized updated finalized=Some(L1BlockInfo { number: 11, timestamp: 1740584634, hash: 0xde38145185c4f748b6a89e78791ff14fc62eb8f5d5969423b56cc691a2f734e8 }) old_finalized=Some(L1BlockInfo { number: 10, timestamp: 1740584633, hash: 0xd3bf9f42a6bcd6fc485fdfdaeb240bdeb1d6bb8f41095594f311c397fd12c5f0 })
2025-02-26T15:43:56.497536Z DEBUG L1 client update: espresso_types::v0::impls::l1: Updated L1 snapshot to L1Snapshot { head: 13, finalized: Some(L1BlockInfo { number: 11, timestamp: 1740584634, hash: 0xde38145185c4f748b6a89e78791ff14fc62eb8f5d5969423b56cc691a2f734e8 }) }
2025-02-26T15:43:56.500615Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: fetching l1 events from 11 to 11
2025-02-26T15:43:56.501190Z DEBUG L1 client stake_tables update: espresso_types::v0::impls::l1: putting st at block 11 StakeTables { stake_table: StakeTable([StakeTableEntry { stake_key: VerKey((QuadExtField(6195101787386319371099117264747156414863265634514860698372362368383735796793 + 13756038602497367063234995403535089644255930618230664535148919525258333215812 * u), QuadExtField(8716370302699735735273708665010299285957267098720485307411461611374882566834 + 3315120772995648354755409153522393000253661851616508073037464474662401184877 * u), QuadExtField(1 +  * u))), stake_amount: 1 }]), da_members: DAMembers([]) }
test v0::impls::l1::test::test_stake_table_update_loop ... ok

Successfully merging this pull request may close these issues.

Cache requests to stake table
2 participants