suzuka-full-node service in docker stuck #684

Open · prostghost opened this issue Oct 11, 2024 · 5 comments

prostghost commented Oct 11, 2024

Describe the bug
suzuka-full-node service in docker stuck

Desktop:

  • OS: Ubuntu 20.04

Additional info
I am using the docker-compose files ("docker-compose.faucet-replicas.yml", "docker-compose.follower.yml", "docker-compose.yml") from the repo's main branch.

I also have a .env file:

CONTAINER_REV=e6cb8e287cb837af6e61451f2ff405047dd285c9
DOT_MOVEMENT_PATH=/path/.movement 
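
For reference, the stack is brought up roughly like this (a sketch assuming Docker Compose's standard multi-file overlay syntax; Compose reads the .env file from the working directory automatically, and the exact set of -f overlays depends on the deployment):

$ docker compose \
    -f docker-compose.yml \
    -f docker-compose.follower.yml \
    -f docker-compose.faucet-replicas.yml \
    up -d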

I have tried restarting the services several times, but it didn't help. The setup service executes successfully, and then these logs follow:

setup exited with code 0
celestia-light-node-synced       | No sync check when following.
celestia-light-node-synced exited with code 0
suzuka-full-node                 | 2024-10-10T14:30:16.170667Z  INFO maptos_opt_executor::executor::initialization: Ledger info found, not bootstrapping DB: V0(LedgerInfoWithV0 { ledger_info: LedgerInfo { commit_info: BlockInfo { epoch: 1, round: 0, id: HashValue(2df623e7715772374d07f0f8949b27215c403c1fc9d50e233554c1f754c99aa8), executed_state_id: HashValue(4e995a4740321faf5e136a2c4b8745d2d3112db6b44076831265156ad71bfc5a), version: 4, timestamp_usecs: 1728554771875184, next_epoch_state: Some(EpochState [epoch: 1, validator: ValidatorSet: [d1126ce48bd65fb72190dbd9a6eaa65ba973f1e1664ac0cfba4db1d071fd0c36: 100000000, ]]) }, consensus_data_hash: HashValue(0000000000000000000000000000000000000000000000000000000000000000) }, signatures: AggregateSignature { validator_bitmask: BitVec { inner: [] }, sig: None } })
suzuka-full-node                 | 2024-10-10T14:30:16.171183Z  INFO maptos_opt_executor::executor::services: Starting maptos-opt-executor services at: "0.0.0.0:30731"
suzuka-full-node                 | 2024-10-10T14:30:16.176091Z  INFO poem::server: listening addr=socket://0.0.0.0:30731
suzuka-full-node                 | 2024-10-10T14:30:16.176111Z  INFO poem::server: server started
suzuka-full-node                 | 2024-10-10T14:30:16.176117Z  INFO maptos_fin_view::fin_view: Starting maptos-fin-view services at: "0.0.0.0:30733"
suzuka-full-node                 | 2024-10-10T14:30:16.179675Z  INFO poem::server: listening addr=socket://0.0.0.0:30733
suzuka-full-node                 | 2024-10-10T14:30:16.179691Z  INFO poem::server: server started
suzuka-full-node                 | 2024-10-10T14:30:16.179722Z  INFO maptos_opt_executor::executor::execution: Rollover genesis: epoch: 1, round: 0, block_id: 18e0c218, genesis timestamp 1728554771875184
suzuka-full-node                 | state_checkpoint_to_add: Some(HashValue(18e0c218939fc2fb1ec43ea1358900efc19fc522de3011453012e61fe5388fd4))
suzuka-full-node                 | Appending StateCheckpoint transaction to the end of to_keep

Also, here is the output of curl localhost:30731/v1:

curl: (56) Recv failure: Connection reset by peer
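
A quick way to check whether the container is still up and the REST port is actually bound on the host (a sketch; the service name and port are taken from the logs and config above):

$ docker compose ps                           # is suzuka-full-node still running?
$ docker compose logs --tail=50 suzuka-full-node
$ ss -tlnp | grep 30731                       # is anything on the host listening on the REST port?
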
l-monninger (Collaborator) commented Oct 11, 2024

Please do not include the faucet-replicas overlay. I will see if I can reproduce the behavior you're describing shortly.

prostghost (Author) commented:

@l-monninger Do you have any updates on the issue?

l-monninger (Collaborator) commented Oct 15, 2024

Unfortunately, I've been unable to reproduce the exact behavior. I would suggest upgrading to a more recent commit, e.g., 2ad5bf6.

To perhaps make things more consistent, I would initially suggest following the Follower Node docs and working from the tooling provided within this repo.
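
For example, a sketch of bumping the image revision and recreating the stack (2ad5bf6 is the short hash from above; substitute the full commit SHA used for the published container tags):

# .env
CONTAINER_REV=2ad5bf6   # placeholder: use the full commit SHA

$ docker compose -f docker-compose.yml -f docker-compose.follower.yml pull
$ docker compose -f docker-compose.yml -f docker-compose.follower.yml up -d --force-recreate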

prostghost (Author) commented Oct 15, 2024

@l-monninger Now the node is failing at the setup service, both when run manually via nix (I did everything according to the docs you provided above) and in Docker.
I also used the latest commit hash from above.
The logs look like this:

[build  ] Built wait-for-celestia-light-node!
[setup  ] 2024-10-15T15:47:16.061252Z  INFO suzuka_full_node_setup: Config: None
[setup  ] 2024-10-15T15:47:16.062363Z  INFO suzuka_full_node_setup: Config: Config { execution_config: MaptosConfig { maptos_config: Config { chain: Config { maptos_chain_id: 250, maptos_rest_listen_hostname: "0.0.0.0", maptos_rest_listen_port: 30731, maptos_private_key: <elided secret for Ed25519PrivateKey>, maptos_ledger_prune_window: 50000000, maptos_epoch_snapshot_prune_window: 50000000, maptos_state_merkle_prune_window: 100000, maptos_db_path: None }, indexer: Config { maptos_indexer_grpc_listen_hostname: "0.0.0.0", maptos_indexer_grpc_listen_port: 30734, maptos_indexer_grpc_inactivity_timeout: 60, maptos_indexer_grpc_inactivity_ping_interval: 10 }, indexer_processor: Config { postgres_connection_string: "postgresql://postgres:password@localhost:5432", indexer_processor_auth_token: "auth_token" }, client: Config { maptos_rest_connection_hostname: "0.0.0.0", maptos_rest_connection_port: 30731, maptos_faucet_rest_connection_hostname: "0.0.0.0", maptos_faucet_rest_connection_port: 30732, maptos_indexer_grpc_connection_hostname: "0.0.0.0", maptos_indexer_grpc_connection_port: 30734 }, faucet: Config { maptos_rest_connection_hostname: "0.0.0.0", maptos_rest_connection_port: 30731, maptos_faucet_rest_listen_hostname: "0.0.0.0", maptos_faucet_rest_listen_port: 30732 }, fin: Config { fin_rest_listen_hostname: "0.0.0.0", fin_rest_listen_port: 30733 }, load_shedding: Config { max_transactions_in_flight: 12000 } } }, m1_da_light_node: M1DaLightNodeConfig { m1_da_light_node_config: Local(Config { appd: Config { celestia_rpc_listen_hostname: "0.0.0.0", celestia_rpc_listen_port: 26657, celestia_websocket_connection_protocol: "ws", celestia_websocket_connection_hostname: "0.0.0.0", celestia_websocket_connection_port: 26658, celestia_auth_token: None, celestia_chain_id: "movement", celestia_namespace: Namespace(NamespaceId([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 109, 111, 118, 101, 109, 101, 110, 116])), celestia_path: None, celestia_validator_address: None, celestia_appd_use_replace_args: false, celestia_appd_replace_args: [] }, bridge: Config { celestia_rpc_connection_protocol: "http", celestia_rpc_connection_hostname: "0.0.0.0", celestia_rpc_connection_port: 26657, celestia_websocket_listen_hostname: "0.0.0.0", celestia_websocket_listen_port: 26658, celestia_bridge_path: None, celestia_bridge_use_replace_args: false, celestia_bridge_replace_args: [] }, m1_da_light_node: Config { celestia_rpc_connection_protocol: "http", celestia_rpc_connection_hostname: "0.0.0.0", celestia_rpc_connection_port: 26657, celestia_websocket_connection_hostname: "0.0.0.0", celestia_websocket_connection_port: 26658, m1_da_light_node_listen_hostname: "0.0.0.0", m1_da_light_node_listen_port: 30730, m1_da_light_node_connection_hostname: "m1-da-light-node.testnet.movementlabs.xyz", m1_da_light_node_connection_port: 443 }, celestia_force_new_chain: true, memseq: Config { sequencer_chain_id: Some("test"), sequencer_database_path: Some("/tmp/sequencer"), memseq_build_time: 1000, memseq_max_block_size: 2048 }, m1_da_light_node_is_initial: true }) }, mcr: Config { eth_connection: Config { eth_rpc_connection_protocol: "https", eth_rpc_connection_hostname: "ethereum-holesky-rpc.publicnode.com", eth_rpc_connection_port: 443, eth_ws_connection_protocol: "ws", eth_ws_connection_hostname: "ethereum-holesky-rpc.publicnode.com", eth_ws_connection_port: 443, eth_chain_id: 0 }, settle: Config { should_settle: false, signer_private_key: "0xc5d051183c873852bb2f3b1d8dd2b280760a52b2ac6ad23a6b3e3610953ecadb", 
mcr_contract_address: "0x0" }, transactions: Config { gas_limit: 10000000000000000, batch_timeout: 2000, transaction_send_retries: 10 }, maybe_run_local: false, deploy: None, testing: None }, da_db: Config { da_db_path: "suzuka-da-db" }, execution_extension: Config { block_retry_count: 10, block_retry_increment_microseconds: 5000 }, syncing: Config { movement_sync: Some("follower::mtnet-l-sync-bucket-sync<=>{maptos,maptos-storage,suzuka-da-db}/**") } }
[setup  ] 2024-10-15T15:47:16.062462Z  INFO suzuka_full_node_setup: Syncing with bucket: mtnet-l-sync-bucket-sync, glob: {maptos,maptos-storage,suzuka-da-db}/**
[setup  ] 2024-10-15T15:47:16.062480Z  INFO syncup: Running syncup with root "/movement/.movement", glob {maptos,maptos-storage,suzuka-da-db}/**, and target S3("mtnet-l-sync-bucket-sync")
[setup  ] 2024-10-15T15:47:16.073513Z  INFO syncador::backend::s3::shared_bucket: Client used region Some(Region("us-west-1"))
[setup  ] Error: Backend Error: dispatch failure
[setup  ]
[setup  ] Caused by:
[setup  ]     0: dispatch failure
[setup  ]     1: other
[setup  ]     2: the credential provider was not enabled
[setup  ]     3: no providers in chain provided credentials

l-monninger (Collaborator) commented:

@prostghost This looks like you might not have AWS credentials configured. The AWS Rust SDK needs a credential source to authenticate properly.
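
The "no providers in chain provided credentials" error comes from the SDK's default credential chain, which checks environment variables, the shared ~/.aws files, and instance metadata, among other sources. A minimal sketch of satisfying it with environment variables (placeholder values; us-west-1 is the bucket region reported in the setup log above, and when running under Docker these variables have to be passed into the setup container's environment, e.g. via the .env file or the compose environment section):

$ export AWS_ACCESS_KEY_ID=AKIA...            # placeholder
$ export AWS_SECRET_ACCESS_KEY=...            # placeholder
$ export AWS_REGION=us-west-1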
