
Conversation

spalladino
Contributor

Adds a spec for the circuit and protocol changes needed for building in chunks with chained txs.


#### Committing to `TxEffects`

However, we still need a commitment to the contents of a chunk. Each `TxEffect` has several fields that are entered into the world state tree (nullifiers, note hashes, etc), so these are committed to as part of the chunk header's state reference. But each `TxEffect` also contains fields that are not committed into world state: logs and L2-to-L1 messages. So we still need to commit to them as part of the block hash. To do this, we can leverage the blob Poseidon sponge, which hashes all tx effects across a block, and include the start and end state of the sponge in the chunk header. This guarantees that the chunk hash is a commitment to all effects in that chunk.
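To make the construction concrete, here is a minimal TypeScript sketch of a chunk hash that commits to the sponge's start and end states. The types, field names, and the SHA-256 stand-in for the Poseidon sponge are all assumptions for illustration, not the actual Aztec implementation:

```typescript
import { createHash } from "crypto";

// Illustrative field element; the real protocol uses BN254 field elements.
type Fr = bigint;

interface ChunkHeader {
  stateReference: Fr;   // commits to world-state updates (nullifiers, note hashes, ...)
  spongeStateStart: Fr; // blob sponge state before absorbing this chunk's tx effects
  spongeStateEnd: Fr;   // blob sponge state after absorbing this chunk's tx effects
}

// Stand-in hash: the real construction uses a Poseidon sponge over field elements.
function hashFields(fields: Fr[]): Fr {
  const h = createHash("sha256");
  for (const f of fields) h.update(f.toString(16).padStart(64, "0"), "hex");
  return BigInt("0x" + h.digest("hex"));
}

// Absorb one tx effect's non-state fields (logs, L2-to-L1 messages) into the sponge.
function absorbTxEffect(sponge: Fr, effectFields: Fr[]): Fr {
  return hashFields([sponge, ...effectFields]);
}

// Because the chunk hash covers both the state reference and the sponge's
// start/end states, it transitively commits to every tx effect in the chunk.
function chunkHash(header: ChunkHeader): Fr {
  return hashFields([header.stateReference, header.spongeStateStart, header.spongeStateEnd]);
}
```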
Contributor

> So we still need to commit to them as part of the block hash.

Do you mean "chunk hash" in the context of this document?

> which hashes all tx effects across a block

"chunk"?

Contributor Author

Yes, my mind was thinking in blocks and then trying to rename them to chunks as I was writing. I clearly failed.

- Updated circuit topology: 3-5 weeks
- L1 changes: 1-2 weeks

The total is 7-12 weeks, on top of the 5 weeks, for **12-17 weeks overall**.
Collaborator

How parallelizable is this?

Contributor

Updating the circuit topology, changes to the blob layout, updates to headers, and the L1 changes are all intertwined. Maybe with some very well-planned interfaces and very frequent communication, a team of engineers could parallelize it by working against stubbed interfaces and merging into a feature branch. Only once all streams of work were merged into the feature branch would you be able to check whether the full repo's tests pass.

Don't forget the changes to the archiver as well, to interpret the new blob layout, L1 calldata layout, and L1 event layout.

You might be able to avoid making many changes to the node for quite some time: you could implement the circuit, blob, L1, and archiver changes, and then continue building one chunk per checkpoint, which from the point of view of a node would look like the current "1 block per proposal" paradigm.

Then some time later, you could update the node software to teach a proposer that it can create multiple chunks per checkpoint, teach validators that they need to sign multiple chunks, and so on. A rough sketch of this staging follows.
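As one way to picture that staging, a hypothetical proposer setting could gate the new behaviour; `maxChunksPerCheckpoint` and the surrounding types are invented for this sketch and are not actual Aztec configuration:

```typescript
// Hypothetical staging flag; none of these names are real Aztec config.
interface ProposerConfig {
  // Phase 1: fixed at 1, so every checkpoint contains exactly one chunk and
  // nodes observe the familiar "1 block per proposal" behaviour.
  // Phase 2: raised above 1 once node software understands multi-chunk checkpoints.
  maxChunksPerCheckpoint: number;
}

function chunksForCheckpoint(pendingTxCount: number, txsPerChunk: number, cfg: ProposerConfig): number {
  const wanted = Math.ceil(pendingTxCount / txsPerChunk);
  return Math.min(wanted, cfg.maxChunksPerCheckpoint);
}

// Phase 1 deployment: behaves like today.
console.log(chunksForCheckpoint(40, 10, { maxChunksPerCheckpoint: 1 })); // 1
// Phase 2 deployment: the proposer may build several chunks per checkpoint.
console.log(chunksForCheckpoint(40, 10, { maxChunksPerCheckpoint: 8 })); // 4
```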

github-merge-queue bot pushed a commit to AztecProtocol/aztec-packages that referenced this pull request Sep 17, 2025
Adding new checkpoint circuits as described in the [design
doc](AztecProtocol/engineering-designs#73).

Things that differ from the doc:
- The parity root is verified in the first block root in a checkpoint,
to speed up the overall proving time.
- The `CheckpointHeader` is the same as the previous
`ProposedBlockHeader` - we will still be validating the same values on
L1 when submitting a checkpoint.

To make the e2e tests pass without changing too many things, each
checkpoint currently contains only one block. The existing `L2Block`
class represents a block together with its checkpoint, and a temporary
`L2BlockHeader` is created for it with all the fields required to
construct both a `BlockHeader` and a `CheckpointHeader`.

The orchestrator should already work correctly for multiple blocks per
checkpoint.
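A minimal sketch of that temporary-header arrangement, assuming illustrative field names (the real `BlockHeader` and `CheckpointHeader` shapes live in aztec-packages and will differ):

```typescript
// Illustrative only: field names here are assumptions, not the real aztec-packages types.
type Fr = bigint;

interface BlockHeader {
  blockNumber: number;
  stateReference: Fr; // commits to world state after this block
}

interface CheckpointHeader {
  // Same shape as the previous ProposedBlockHeader: the values validated on L1
  // when a checkpoint is submitted.
  checkpointNumber: number;
  archiveRoot: Fr;
}

// Temporary superset header carried while each checkpoint contains exactly one
// block; it has everything needed to construct both headers.
interface L2BlockHeader extends BlockHeader, CheckpointHeader {}

function toBlockHeader(h: L2BlockHeader): BlockHeader {
  return { blockNumber: h.blockNumber, stateReference: h.stateReference };
}

function toCheckpointHeader(h: L2BlockHeader): CheckpointHeader {
  return { checkpointNumber: h.checkpointNumber, archiveRoot: h.archiveRoot };
}
```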