
feat: batching for reads and smart-account writes #773


Merged · 8 commits merged into main from pb/batch-read on Jan 14, 2025

Conversation


@d4mr d4mr commented Nov 15, 2024

Test read with example curl:

curl --request POST \
  --url http://localhost:3005/contract/421614/read-batch \
  --header 'Authorization: Bearer {{authToken}}' \
  --header 'content-type: application/json' \
  --data '{
  "calls": [
    {
      "contractAddress": "0x75faf114eafb1BDbe2F0316DF893fd58CE46AA4d",
      "functionName": "name"
    },
    {
      "contractAddress": "0x75faf114eafb1BDbe2F0316DF893fd58CE46AA4d",
      "functionName": "totalSupply"
    }
  ]
}'
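The same read-batch call can also be made programmatically. This is a sketch, not code from the PR: the endpoint URL and body shape are taken from the cURL example above, and `authToken` is assumed to be a valid Engine access token.

```typescript
// Sketch: call the read-batch endpoint shown in the cURL example above.
// Assumes Engine is running locally on port 3005 with a valid auth token.
type ReadCall = {
  contractAddress: string;
  functionName: string;
};

// Build the request body in the shape the endpoint expects.
function buildReadBatchRequest(calls: ReadCall[]): { calls: ReadCall[] } {
  return { calls };
}

async function readBatch(
  authToken: string,
  chainId: number,
  calls: ReadCall[],
): Promise<unknown> {
  const res = await fetch(
    `http://localhost:3005/contract/${chainId}/read-batch`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${authToken}`,
        "content-type": "application/json",
      },
      body: JSON.stringify(buildReadBatchRequest(calls)),
    },
  );
  return res.json();
}
```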

Test write cURL example:

curl --request POST \
  --url http://localhost:3005/backend-wallet/84532/send-transactions-atomic \
  --header 'Authorization: Bearer {{authToken}}' \
  --header 'Content-Type: application/json' \
  --header 'x-backend-wallet-address: <use a smart backend wallet>' \
  --data '{
  "transactions": [
    {
      "toAddress": "0x5175E053bE7BDC74896C0D8675590FfbCbfe2E73",
      "value": "0",
      "data": "0xe5071b8e"
    },
    {
      "toAddress": "0x5175E053bE7BDC74896C0D8675590FfbCbfe2E73",
      "value": "0",
      "data": "0xe5071b8e"
    }
  ]
}'

0xe5071b8e is the function selector for incrementCount(), and 0x5175E053bE7BDC74896C0D8675590FfbCbfe2E73 is deployed on Base Sepolia. When checking the status of the queued transaction and viewing the event logs for the sent transaction, logs from 0x51..73 should be visible.


PR-Codex overview

This PR introduces support for batch operations in transaction processing, enhancing the ability to handle multiple transactions atomically. It includes new schemas, routes, and methods for batch reading and sending transactions, as well as updates to existing transaction handling.

Detailed summary

  • Added batchOperations field to various schemas and types.
  • Introduced sendTransactionBatchAtomic route for atomic transaction processing.
  • Created readBatchRoute for batch reading from multiple contracts.
  • Updated transaction handling to support batch operations.
  • Enhanced error handling for wallet details retrieval.
  • Modified OpenAPI URL in generate-sdk.ts for local development.
  • Implemented caching for wallet details to improve performance.



d4mr commented Nov 15, 2024

changelog

@d4mr d4mr requested a review from arcoraven November 21, 2024 13:54

github-actions bot commented Dec 1, 2024

This PR is stale because it has been open for 7 days with no activity. Remove stale label or comment or this PR will be closed in 3 days.

@github-actions github-actions bot closed this Dec 5, 2024
@d4mr d4mr reopened this Jan 13, 2025

zeet-co bot commented Jan 13, 2025

We're building your pull request over on Zeet.
Once built, this branch can be tested at: https://thirdweb-engine-ad8x-pb-batch-read.engine-aws-usw2.zeet.app before merging 😉

@d4mr d4mr changed the title from "Refactor: Add readMulticall route for batch contract reads" to "feat: batching for reads and smart-account writes" Jan 13, 2025
@d4mr d4mr requested a review from joaquim-verges January 13, 2025 07:50
@d4mr d4mr requested a review from arcoraven January 13, 2025 07:55
Querystring: Static<typeof requestQuerystringSchema>;
}>({
method: "POST",
url: "/backend-wallet/:chain/send-transactions-atomic",
Member:

maybe send-batch-transaction ? we kinda use that language already in the sdk and docs

Member Author:

We already have a send-transaction-batch endpoint which just queues up all the transactions and sends them non-atomically

Member:

Why don't you edit that endpoint then? Just do an actual batch if it's a smart backend wallet

Member Author:

I considered modifying the send-transaction-batch endpoint, but decided against it, because:

  1. There are legitimate use cases for non-atomic batching, even when using smart backend wallets. Batching atomically, you might run into the block gas limit; non-atomic batching has no such limit.

  2. Because of the above, we shouldn't auto-force atomic batching on SBW. This might break user expectations, so instead having a batch?: boolean option would be good. This isn't easy to add, because the existing endpoint expects an array as the transaction body, so there's no easy way to add a field to the request body.
    (We can still add it as a query param.)

  3. Another argument against automatic atomic batching for SBW: we also want to support atomic batching for regular EOA + Account Address style smart accounts.

  4. There are also a couple of things that don't make sense to share between the two batching strategies, e.g. idempotency-key. Non-atomic batching returns multiple queue-ids (queueIds: string[]), so an idempotency key doesn't work there, but it does make sense when using atomic batching, since there's only one queue-id returned.

  5. shouldSimulate is not available as a query param for non-atomic batch, due to the latency of individually simulating transactions. But we can allow this for atomic batch.

  6. As mentioned before: non-atomic batch returns an array of queueIds, and atomic batch only returns a single queueId. If we automatically moved to atomic batching on the same endpoint and started returning only a single queueId in the array, client code that expects transactions.length == queueIds.length might break.

Given the number of special cases to handle if we were to merge endpoints, I think having them separate is best.
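The response-shape difference in points 4 and 6 above can be sketched in TypeScript. Note this is illustrative only; the field names follow the comment above, not the exact Engine response schemas.

```typescript
// Sketch of the response-shape difference between the two batch endpoints.
// Field names follow the discussion above, not the exact Engine schemas.
type NonAtomicBatchResponse = { queueIds: string[] }; // one queue-id per transaction
type AtomicBatchResponse = { queueId: string };       // one queue-id for the whole batch

// Client code written against the non-atomic endpoint often assumes
// transactions.length === queueIds.length; a type guard makes the
// distinction between the two shapes explicit.
function isAtomic(
  res: NonAtomicBatchResponse | AtomicBatchResponse,
): res is AtomicBatchResponse {
  return "queueId" in res;
}
```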

Contributor:

You listed enough differences to justify a different endpoint rather than overloading one. 👍

More thoughts on names:

  • /send-ordered-transactions
  • /send-userop-batch
  • /send-transaction-sequence
    I don't have a strong opinion against atomic, seems clear too.

Member:

ugh i see. is anyone actually using the non-atomic batch endpoint though? seems very niche when you can just send 2 api calls

const results = await readContract({
  contract: multicall,
  method: MULTICALL3_AGGREGATE_ABI,
  params: [encodedCalls],
Member:

i think there was a limit to how much data you can pack in here, did you try a very large read?

Member Author:

There's definitely a limit, but failing and forwarding the error should be fine, no?

Member:

Yeah should be fine

Querystring: Static<typeof requestQuerystringSchema>;
}>({
method: "POST",
url: "/backend-wallet/:chain/send-transactions-atomic",
Contributor:

There's already a /send-transactions-batch. IMO introducing /send-transactions-atomic will confuse the dev.

WDYT about reusing the same handler and branching the logic if SBW? In the docs we'd then need to clarify "Order is guaranteed for SBW only. If non-SBW, order is not guaranteed".

If you still want two separate paths, I'd prefer avoiding a new term (atomic) and using simpler language, like /send-transaction-batch-ordered. Or /send-userop-batch to make it clear it's for smart wallets.

Member Author:

Answered why not the same handler in a thread above; Joaquim had a similar question.

I do agree the name for the other path could be improved, but I ran out of ideas.

send-userop-batch could be misleading since we're creating a single UserOp rather than batching multiple UserOps.

I do like keeping "atomic" in the endpoint name since it's a widely understood term beyond web3 - developers regularly encounter it in databases, filesystems, and other contexts where operations need to succeed or fail as one unit, and it's more precise than "ordered".

I was initially thinking /send-transaction-batch-atomic to go along with the existing /send-transaction-batch, but felt it would be too long.

Comment on lines 28 to 36
Type.Object({
  toAddress: Type.Optional(AddressSchema),
  data: Type.String({
    examples: ["0x..."],
  }),
  value: Type.String({
    examples: ["10000000"],
  }),
}),
Contributor:

Non-blocking: These are the minimal args of a "raw transaction" and may be useful to DRY since it's used in a few places. Then we can add descriptions too.

transactions: Type.Array(
  Type.Object({
    toAddress: Type.Optional(AddressSchema),
    data: Type.String({
Contributor:

Use HexSchema

Comment on lines 114 to 120
if (transactionRequests.length === 0) {
  throw createCustomError(
    "No transactions provided",
    StatusCodes.BAD_REQUEST,
    "NO_TRANSACTIONS_PROVIDED",
  );
}
Contributor:

You should be able to assert this in TypeBox: { minItems: 1 }
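Moving the check into the schema means the framework rejects an empty array before the handler runs. A minimal sketch of the behavior — the hand-rolled check below stands in for the real TypeBox/AJV validation, which the PR would use instead:

```typescript
// Sketch: { minItems: 1 } expressed as a JSON-Schema-style fragment,
// with a tiny hand-rolled check standing in for TypeBox/AJV validation.
const transactionsSchema = { type: "array", minItems: 1 } as const;

function validateMinItems(
  value: unknown[],
  schema: { minItems: number },
): boolean {
  return value.length >= schema.minItems;
}
```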

data: Type.String({
examples: ["0x..."],
}),
value: Type.String({
Contributor:

Use WeiAmountStringSchema

@@ -1,3 +1,4 @@
import LruMap from "mnemonist/lru-map";
Contributor:

Nit: LRUMap to be consistent with other places we use this

Reply: Static<typeof responseSchema>;
};

export async function readMulticallRoute(fastify: FastifyInstance) {
Contributor:

Nit: I consider multicall the implementation, and any interface higher than this method shouldn't need to know about it. So var/fn names (even if not user facing) don't need to have multicall in them. Maybe batchReadRoute or readBatchRoute?

Comment on lines +147 to +153
if (queuedTransaction.batchOperations) {
  queuedTransaction.batchOperations.map((op) => {
    assert(op.to, "Invalid transaction parameters: to");
  });
} else {
  assert(toAddress, "Invalid transaction parameters: to");
}
Contributor:

For my understanding:

  • in a single userOp, toAddress is set.
  • in a batch userOp, toAddress is not set but it must be set on every batch operation.

What about contract deploys? Is there a toAddress on a userOp to deploy a contract?

Member Author:

for contract deployment, it should be 0x0000000000000000000000000000000000000000

Contributor:

I thought contract deployment to = null? Something to test

Comment on lines +209 to +223
const transactions = queuedTransaction.batchOperations
  ? queuedTransaction.batchOperations.map((op) => ({
      ...op,
      chain,
      client: thirdwebClient,
    }))
  : [
      {
        client: thirdwebClient,
        chain,
        ...queuedTransaction,
        ...overrides,
        to: getChecksumAddress(toAddress),
      },
    ];
Contributor:

We're explicitly ignoring overrides in the batch. That said, I recall discussion that gas/gasFee overrides for userOps don't make sense and we should remove them anyway. We should pick one or the other: this code does it both ways (overrides for single userOp, no overrides for batch userOps).

Contributor:

Non-blocking comment (maybe cleanup for later)

Single vs batch userOp flows should overlap as much as possible: there should be minimal difference other than the array of transactions being submitted.

That said I think all the "single vs batch userOp" logic should be in one place.

// pseudocode
let transactions = []
if (batchOperations) {
  for (op of batchOperations) {
    assert(op.to)
    transactions.push({ ...op, chain, client })
  }
} else {
  transactions.push({ to, data, value, chain, client })
}

// ...the rest of the code doesn't care about batchOperations
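The pseudocode above can be made concrete. This is a sketch only: field names follow the snippets in this thread, and plain strings stand in for the real thirdweb chain/client objects.

```typescript
// Types follow the snippets in this thread; strings stand in for the
// real thirdweb chain/client objects.
type BatchOperation = { to?: string; data: string; value: bigint };

type QueuedTx = {
  toAddress?: string;
  data: string;
  value: bigint;
  batchOperations?: BatchOperation[];
};

// Normalize single vs batch userOp into one transactions array up front,
// so the rest of the pipeline doesn't care about batchOperations.
function normalizeTransactions(tx: QueuedTx, chain: string, client: string) {
  const transactions: Array<{
    to: string;
    data: string;
    value: bigint;
    chain: string;
    client: string;
  }> = [];
  if (tx.batchOperations) {
    for (const op of tx.batchOperations) {
      if (!op.to) throw new Error("Invalid transaction parameters: to");
      transactions.push({ to: op.to, data: op.data, value: op.value, chain, client });
    }
  } else {
    if (!tx.toAddress) throw new Error("Invalid transaction parameters: to");
    transactions.push({ to: tx.toAddress, data: tx.data, value: tx.value, chain, client });
  }
  return transactions;
}
```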

Member Author:

gas/gasFee overrides for userOps don't make sense

The current overrides don't work for userOps, correct, but overrides in general could still be reworked for use with userOps:

type UserOpGasOverrides = {
  callGasLimit?: bigint
  verificationGasLimit?: bigint
  preVerificationGas?: bigint
  maxFeePerGas?: bigint  
  maxPriorityFeePerGas?: bigint
}

even in the batch userop flow, there's only a single UserOp being constructed. And gas overrides are applied per UserOp, not per transaction inside the userop.

@d4mr d4mr merged commit 72760c3 into main Jan 14, 2025
6 checks passed
@d4mr d4mr deleted the pb/batch-read branch January 14, 2025 23:13