feat: batching for reads and smart-account writes #773
Conversation
Querystring: Static<typeof requestQuerystringSchema>;
}>({
method: "POST",
url: "/backend-wallet/:chain/send-transactions-atomic",
Maybe send-batch-transaction? We kind of use that language already in the SDK and docs.
We already have a send-transaction-batch endpoint, which just queues up all the transactions and sends them non-atomically.
Why don't you edit that endpoint then? Just do an actual batch if it's a smart backend wallet
I considered modifying the send-transaction-batch endpoint, but decided against it, because:
- There are legitimate use cases for non-atomic batching, even with smart backend wallets. An atomic batch can run into the block gas limit; a non-atomic batch has no such limit.
- Because of the above, we shouldn't auto-force atomic batching on SBW. That might break user expectations, so a batch?: boolean option would be better. It isn't easy to add, though: the existing endpoint expects an array as the request body, so there's no easy way to extend the body. (We could still add a query param.)
- Another argument against automatic atomic batching for SBW: we also want to support atomic batching for regular EOA + Account Address style smart accounts.
- A couple of things don't make sense to share between the two batching strategies. E.g., idempotency-key: non-atomic batching returns multiple queue IDs (queueIds: string[]), so an idempotency key doesn't work there, but it does make sense for atomic batching, since only one queue ID is returned.
- shouldSimulate is not available as a query param for non-atomic batch, due to the latency of simulating each transaction individually. We can allow it for atomic batch.
- As mentioned before: non-atomic batch returns an array of queueIds, while atomic batch returns a single queueId. If we automatically moved the same endpoint to atomic batching and started returning only a single queueId in the array, client code that expects transactions.length == queueIds.length might break.
Given the number of special cases to handle if we were to merge endpoints, I think having them separate is best.
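The last point above can be sketched concretely (a hypothetical client, not code from this PR; the response shape `queueIds: string[]` is taken from the discussion):

```typescript
// Hypothetical sketch of the compatibility concern: client code written
// against the non-atomic batch endpoint often assumes one queueId per
// submitted transaction.
type BatchResponse = { queueIds: string[] };

const transactions = [{ data: "0x01" }, { data: "0x02" }, { data: "0x03" }];

// Non-atomic batch: one queueId per transaction.
const nonAtomic: BatchResponse = { queueIds: ["q1", "q2", "q3"] };

// If the same endpoint silently switched to atomic batching, the whole
// batch would map to a single queueId.
const atomic: BatchResponse = { queueIds: ["q1"] };

// A client invariant like this would start failing after the switch:
const holds = (r: BatchResponse) => r.queueIds.length === transactions.length;

console.log(holds(nonAtomic)); // true
console.log(holds(atomic)); // false
```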
You listed enough differences to justify a different endpoint rather than overloading one. 👍
More thoughts on names:
- /send-ordered-transactions
- /send-userop-batch
- /send-transaction-sequence
I don't have a strong opinion against atomic, seems clear too.
ugh i see. is anyone actually using the non-atomic batch endpoint though? seems very niche when you can just send 2 api calls
const results = await readContract({
contract: multicall,
method: MULTICALL3_AGGREGATE_ABI,
params: [encodedCalls],
i think there was a limit to how much data you can pack in here, did you try a very large read?
There's definitely a limit, but failing and forwarding the error should be fine, no?
Yeah should be fine
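If forwarding the error ever turns out not to be enough, one common mitigation is chunking the encoded calls and aggregating per chunk. This is a sketch, not code from the PR; the chunk size of 300 is an illustrative assumption, not a known multicall limit:

```typescript
// Sketch: split a large list of encoded calls into batches so each
// multicall aggregate stays under practical calldata/gas limits.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// e.g. 1000 encoded calls split into batches of 300
const encodedCalls = Array.from({ length: 1000 }, (_, i) => `0xcall${i}`);
const batches = chunk(encodedCalls, 300);
console.log(batches.length); // 4
console.log(batches[3].length); // 100
```

Each batch would then be passed to the aggregate call separately and the results concatenated.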
Querystring: Static<typeof requestQuerystringSchema>;
}>({
method: "POST",
url: "/backend-wallet/:chain/send-transactions-atomic",
There's already a /send-transactions-batch. IMO introducing /send-transactions-atomic will confuse the dev.
WDYT about reusing the same handler and branching the logic if SBW? In the docs we'd then need to clarify "Order is guaranteed for SBW only. If non-SBW, order is not guaranteed".
If you still want two separate paths, I'd prefer avoiding a new term (atomic) and using simpler language, like /send-transaction-batch-ordered. Or /send-userop-batch to make it clear it's for smart wallets.
Answered why not the same handler in a question above; Joaquim had a similar question.
I do agree the name for the other path could be improved, but I ran out of ideas. send-userop-batch could be misleading since we're creating a single UserOp rather than batching multiple UserOps.
I do like keeping "atomic" in the endpoint name since it's a widely understood term beyond web3: developers regularly encounter it in databases, filesystems, and other contexts where operations need to succeed or fail as one unit, and it's more precise than "ordered".
I was initially thinking /send-transaction-batch-atomic to go along with the existing /send-transaction-batch, but felt it would be too long.
Type.Object({
toAddress: Type.Optional(AddressSchema),
data: Type.String({
examples: ["0x..."],
}),
value: Type.String({
examples: ["10000000"],
}),
}),
Non-blocking: These are the minimal args of a "raw transaction" and may be useful to DRY since it's used in a few places. Then we can add descriptions too.
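A shared definition along those lines might look like the following. This is a sketch in plain JSON Schema rather than TypeBox so it stands alone; the three field names come from the snippet above, while the descriptions and the schema name are assumptions:

```typescript
// Hypothetical shared "raw transaction" schema. In the codebase this
// would be a TypeBox schema reused by every route that accepts these
// minimal transaction args, so descriptions live in one place.
const rawTransactionSchema = {
  type: "object",
  properties: {
    toAddress: {
      type: "string",
      description: "Recipient address; omit for contract deployments.",
    },
    data: {
      type: "string",
      examples: ["0x..."],
      description: "Encoded calldata.",
    },
    value: {
      type: "string",
      examples: ["10000000"],
      description: "Amount of native currency to send, in wei.",
    },
  },
  // toAddress stays optional, matching Type.Optional(AddressSchema) above.
  required: ["data", "value"],
} as const;

console.log(rawTransactionSchema.required.length); // 2
```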
transactions: Type.Array( | ||
Type.Object({ | ||
toAddress: Type.Optional(AddressSchema), | ||
data: Type.String({ |
Use HexSchema
if (transactionRequests.length === 0) {
throw createCustomError(
"No transactions provided",
StatusCodes.BAD_REQUEST,
"NO_TRANSACTIONS_PROVIDED",
);
}
You should be able to assert this in TypeBox: { minItems: 1 }
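For illustration, the schema-level constraint replaces the manual guard. With TypeBox this would be Type.Array(transactionSchema, { minItems: 1 }); the sketch below shows the equivalent plain JSON Schema plus a toy check, so validation rejects an empty array before the handler runs:

```typescript
// Sketch: expressing the non-empty constraint in the schema itself
// instead of checking transactionRequests.length === 0 in the handler.
const transactionsArraySchema = { type: "array", minItems: 1 } as const;

function violatesMinItems(value: unknown[]): boolean {
  return value.length < transactionsArraySchema.minItems;
}

console.log(violatesMinItems([])); // true  -> request rejected at validation
console.log(violatesMinItems([{ data: "0x" }])); // false -> handler runs
```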
data: Type.String({ | ||
examples: ["0x..."], | ||
}), | ||
value: Type.String({ |
Use WeiAmountStringSchema
import LruMap from "mnemonist/lru-map";
Nit: LRUMap to be consistent with other places we use this
Reply: Static<typeof responseSchema>;
};

export async function readMulticallRoute(fastify: FastifyInstance) {
Nit: I consider multicall the implementation, and any interface higher than this method shouldn't need to know about it. So var/fn names (even if not user facing) don't need to have multicall in them. Maybe batchReadRoute or readBatchRoute?
if (queuedTransaction.batchOperations) {
queuedTransaction.batchOperations.map((op) => {
assert(op.to, "Invalid transaction parameters: to");
});
} else {
assert(toAddress, "Invalid transaction parameters: to");
}
For my understanding:
- In a single userOp, toAddress is set.
- In a batch userOp, toAddress is not set, but it must be set on every batch operation.
What about contract deploys? Is there a toAddress on a userOp to deploy a contract?
for contract deployment, it should be 0x0000000000000000000000000000000000000000
I thought contract deployment is to = null? Something to test.
const transactions = queuedTransaction.batchOperations
? queuedTransaction.batchOperations.map((op) => ({
...op,
chain,
client: thirdwebClient,
}))
: [
{
client: thirdwebClient,
chain,
...queuedTransaction,
...overrides,
to: getChecksumAddress(toAddress),
},
];
We're explicitly ignoring overrides in the batch. That said, I recall discussion that gas/gasFee overrides for userOps don't make sense and we should remove them anyway. We should pick one or the other: this code does it both ways (overrides for a single userOp, no overrides for batch userOps).
Non-blocking comment (maybe cleanup for later):
Single vs batch userOp flows should overlap as much as possible: there should be minimal difference other than the array of transactions being submitted.
That said, I think all the "single vs batch userOp" logic should be in one place.
// pseudocode
const transactions = [];
if (batchOperations) {
  for (const op of batchOperations) {
    assert(op.to);
    transactions.push({ ...op, chain, client });
  }
} else {
  transactions.push({ to, data, value, chain, client });
}
// ...the rest of the code doesn't care about batchOperations
Re "gas/gasFee overrides for userOps don't make sense": the current overrides don't work for userOps, correct, but overrides in general can still be changed to work with userOps:
type UserOpGasOverrides = {
callGasLimit?: bigint
verificationGasLimit?: bigint
preVerificationGas?: bigint
maxFeePerGas?: bigint
maxPriorityFeePerGas?: bigint
}
Even in the batch userOp flow, there's only a single UserOp being constructed. And gas overrides are applied per UserOp, not per transaction inside the userOp.
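That point can be sketched as follows. The names here are hypothetical except UserOpGasOverrides, which is the type proposed above; the key idea is that one override object attaches to the single constructed userOp regardless of how many calls it bundles:

```typescript
// Hypothetical sketch: gas overrides attach to the userOp as a whole,
// not to the individual batched calls inside it.
type UserOpGasOverrides = {
  callGasLimit?: bigint;
  verificationGasLimit?: bigint;
  preVerificationGas?: bigint;
  maxFeePerGas?: bigint;
  maxPriorityFeePerGas?: bigint;
};

type UserOp = {
  calls: { to: string; data: string }[];
  callGasLimit: bigint;
  maxFeePerGas: bigint;
};

function applyOverrides(op: UserOp, overrides: UserOpGasOverrides): UserOp {
  // One override object per userOp, whether it wraps 1 call or 10.
  return { ...op, ...overrides };
}

const batchedOp: UserOp = {
  calls: [
    { to: "0xaaa", data: "0x01" },
    { to: "0xbbb", data: "0x02" },
  ],
  callGasLimit: 100000n,
  maxFeePerGas: 1000000n,
};

const tuned = applyOverrides(batchedOp, { callGasLimit: 250000n });
console.log(tuned.callGasLimit); // 250000n
console.log(tuned.calls.length); // 2
```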
Test read with example curl:
Test write cURL example:
0xe5071b8e is the function selector for incrementCount(), and 0x5175E053bE7BDC74896C0D8675590FfbCbfe2E73 is deployed on Base Sepolia. On checking the status of the queued transaction and viewing event logs for the sent transaction, logs from 0x51..73 should be visible.
PR-Codex overview
This PR introduces support for batch operations in transaction processing, enhancing the ability to handle multiple transactions atomically. It includes new schemas, routes, and methods for batch reading and sending transactions, as well as updates to existing transaction handling.
Detailed summary
- Added the batchOperations field to various schemas and types.
- Added the sendTransactionBatchAtomic route for atomic transaction processing.
- Added readBatchRoute for batch reading from multiple contracts.
- Updated generate-sdk.ts for local development.