This repository was archived by the owner on Mar 1, 2019. It is now read-only.

ApplyingBlocks Rollback Before Decoupling (Byron era)

Pawel Jakubas edited this page Jan 4, 2019 · 1 revision


Executive summary

From the point of view of applying blocks and rollback, the wallet is coupled with node components by (a) NodeStateAdaptor, which brings the node state into the wallet's scope, and (b) shared state that allows the respective callbacks.

On the other hand, node-to-node sharing of a newly applied block constitutes a closed cycle. First, a node's retrieval worker is triggered by its block header queue. As a consequence, the missing block is downloaded. If that succeeds, all peers are informed, which results in the updating of their block header queues. Finally, as the retrieval workers of the peers detect that event, the cycle continues. What is important here is that both downloading the block and announcing the header are realized using the diffusion layer. And as the wallet and diffusion layer components are bound to stay stitched together, the same means of getting blocks and rolling back used in node-to-node communication can be used at the wallet level to realize block application and rollback.

Upon the wallet's instantiation a node state adaptor is created. This constitutes a strong coupling of the wallet to the node. NodeStateAdaptor brings the node state into the scope of the wallet. The node state is used in a number of places, including in the context of new block application and rollback. The second coupling between the wallet and the node is established during runWalletMode, when a ReaderT mechanism is used to share blunds between the wallet and the blockchain DB. The shared state allows for callbacks as defined by MonadBListener. Both callbacks, onApplyBlocks and onRollbackBlocks, are called after putting blocks into BlocksDB (in the case of applying blocks), before changing GStateDB, and under the block lock. On the wallet side a dedicated wallet worker is running that reacts to changes in the shared state. When triggered, the worker interprets them and responds to block events with either the applyBlock or the switchToFork function from Cardano.Wallet.Kernel.BListener. The node state as delivered by the node adaptor is also used extensively here to construct the corresponding response.
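The callback pair can be pictured with a minimal, base-only sketch. The types here are deliberately simplified stand-ins (Blund is just a label, Wallet an IORef); only the names onApplyBlocks and onRollbackBlocks come from the source — the real MonadBListener lives in the node's monad stack and the real handlers are applyBlock and switchToFork:

```haskell
import Data.IORef (IORef, newIORef, modifyIORef', readIORef)

-- Simplified stand-in for the node's "blund" (block + undo data).
type Blund = String

-- The callback pair, mirroring MonadBListener's two methods. The node
-- invokes these under the block lock, after writing BlocksDB and before
-- updating GStateDB.
data BListener = BListener
  { onApplyBlocks    :: [Blund] -> IO ()  -- ^ blunds to apply, oldest first
  , onRollbackBlocks :: [Blund] -> IO ()  -- ^ blunds to roll back
  }

-- A toy wallet state: the blunds currently applied, newest first.
newtype Wallet = Wallet (IORef [Blund])

newWallet :: IO Wallet
newWallet = Wallet <$> newIORef []

-- Hook the toy wallet up to the callbacks.
walletListener :: Wallet -> BListener
walletListener (Wallet ref) = BListener
  { onApplyBlocks    = \bs -> modifyIORef' ref (reverse bs ++)
  , onRollbackBlocks = \bs -> modifyIORef' ref (drop (length bs))
  }

currentBlunds :: Wallet -> IO [Blund]
currentBlunds (Wallet ref) = readIORef ref
```

Applying ["a","b","c"] and then rolling back ["c"] leaves the toy wallet holding ["b","a"] (newest first), which is the shape of state the real worker maintains for the wallet's checkpoints.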

See Fig. 1 for more details and the modules involved.

Fig. 1. The relation of activate wallet, wallet worker, node adaptor and blockchain db


Moreover, there is a mechanism behind node-to-node intermediation when it comes to block application. When the node is set up, a number of workers responsible for block processing are spawned, as enumerated by blkWorkers (Pos.Worker.Block). retrievalWorker, one of the four, is defined in the Pos.Network.Block.Retrieval module. It follows the block queue and the recovery header variable, and hence takes part in handling both block retrieval and recovery.

Block Retrieval

The block retrieval communication diagram is presented in Fig. 2.

Fig. 2. Block retrieval without protocol details


Fig. 2 locates the important modules and functions that participate in block application. At a high level, the retrieval worker's business is to follow changes of the block header queue. It retries until a (NodeId, BlockHeader) pair is extracted. When it sees a new header, it uses the diffusion layer as intermediary to get blocks from the peer identified by NodeId. If the block download is successful, it tries to accommodate the block. In order to do so, it first pushes the block to its own DB using verifyAndApplyBlocks. On success, the resulting HeaderHash value gives rise to block header tip reconstruction, and the new block header is shared with peers via the diffusion layer mechanism: if not in recovery mode, announceBlockHeader from the diffusion layer is invoked. Fig. 3 shows what happens on this level (one can also refer to the transaction submission discussion for comparison).
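One turn of the loop just described can be sketched with base-only primitives. This is illustrative only: an MVar stands in for the STM-based header queue, and the download/apply/announce parameters are hypothetical stand-ins for the diffusion and verification machinery, not the real retrievalWorker internals:

```haskell
import Control.Concurrent.MVar (MVar, newMVar, takeMVar)
import Data.IORef (newIORef, modifyIORef', readIORef)

type NodeId      = String
type BlockHeader = String
type Block       = String

-- One turn of a retrieval-worker-style loop: block until a
-- (NodeId, BlockHeader) pair shows up in the header queue, download the
-- block from that peer, verify-and-apply it, and on success announce the
-- new header to the peers.
retrieveOnce
  :: MVar (NodeId, BlockHeader)          -- ^ block header queue (one slot)
  -> (NodeId -> BlockHeader -> IO Block) -- ^ download via the diffusion layer
  -> (Block -> IO Bool)                  -- ^ verifyAndApplyBlocks analogue
  -> (BlockHeader -> IO ())              -- ^ announceBlockHeader analogue
  -> IO ()
retrieveOnce queue download apply announce = do
  (peer, header) <- takeMVar queue       -- blocks until a header is queued
  block <- download peer header
  ok    <- apply block
  if ok then announce header else pure ()

-- Example wiring with pure stand-ins (no real networking): queue one
-- header, "download" and "apply" it, and collect what gets announced.
demo :: IO [BlockHeader]
demo = do
  queue     <- newMVar ("peer1", "h1")
  announced <- newIORef []
  retrieveOnce queue
    (\_peer h -> pure ("block-" ++ h))   -- fake download
    (\_blk    -> pure True)              -- fake apply: always succeeds
    (\h       -> modifyIORef' announced (h :))
  readIORef announced
```

Looping retrieveOnce forever against a shared queue gives the re-triggering behaviour the text describes: each successful apply ends with an announcement that refills the queues of the peers.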

Fig. 3. Peer-to-peer announcing of block header


The message to be pushed forward is of the MsgAnnounceBlockHeader type and is enqueued to the outbound queue, which possesses the whereabouts of the peers. The message is worked on within announceBlockDo of Pos.Diffusion.Full.Block. The function uses send, the low-level primitive of ConversationActions that is the workhorse of the networking module. The following ingredients are needed for the sending to be well constructed:

  • a conversation of type ConversationActions MsgHeaders MsgGetHeaders is established
  • a data value constructed as MsgHeaders (NewestFirst NE BlockHeader) of Pos.Network.Block.Types is sent to the peer
  • the peer, after absorbing the initial announcement, is given an opportunity to request more headers within the same conversation. The following type is responsible for that:
data MsgGetHeaders = MsgGetHeaders
    { -- not guaranteed to be in any particular order
      mghFrom :: ![HeaderHash]
    , mghTo   :: !(Maybe HeaderHash)
    } deriving (Generic, Show, Eq)
  • for each of these messages, the node that initiated the conversation tries to send back the relevant header hashes until the client closes the conversation.
  • within the handling of each block header announcement, the approached peer uses the logic layer to retrieve additional block headers from block header hashes using:
data Logic m = Logic
    { -- ... other fields elided ...
      getBlockHeader :: HeaderHash -> m (Maybe BlockHeader)
      -- ... other fields elided ...
    }
  • alternatively, the node that shares the announcement can send MsgNoHeaders with an explanatory text and close the conversation by itself
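The server side of this exchange can be modelled purely. The function below is a sketch (serveGetHeaders is an invented name, and real header selection in cardano-sl is more involved): it walks back from the requested tip, or from the actual tip when mghTo is Nothing, until it meets one of the mghFrom checkpoints:

```haskell
type HeaderHash = String

-- Illustrative answer to a MsgGetHeaders request: given the node's chain
-- newest first, return the headers from the requested tip (mghTo), or from
-- the actual tip when mghTo is Nothing, back to the first hash matching
-- one of the mghFrom checkpoints (inclusive), newest first.
serveGetHeaders
  :: [HeaderHash]     -- ^ the node's chain, newest first
  -> [HeaderHash]     -- ^ mghFrom: checkpoints, in no particular order
  -> Maybe HeaderHash -- ^ mghTo: requested tip, if any
  -> [HeaderHash]
serveGetHeaders chain mghFrom mghTo =
  let fromTip = case mghTo of
        Nothing  -> chain
        Just tip -> dropWhile (/= tip) chain
      (newer, rest) = break (`elem` mghFrom) fromTip
  in case rest of
       (checkpoint : _) -> newer ++ [checkpoint]  -- stop at the checkpoint
       []               -> newer                  -- no checkpoint on this chain
```

For a chain e-d-c-b-a (newest first), asking with mghFrom = [b] and no mghTo yields [e, d, c, b]; fixing mghTo = d yields [d, c, b].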

When the conversation is closed, the peer that received the announcement calls postBlockHeader of the logic layer. The call is implemented in postBlockHeader of Pos.Logic.Full. Here, the header is processed by handleUnsolicitedHeader from Pos.Network.Block.Logic. In the end, addHeaderToBlockRequestQueue is called with the respective nodeId and header, i.e., the block header queue is updated and the node's retrieval worker gets triggered. This completes the full cycle and starts it over.

From the above it follows that the peer announces only the header of the block. The block download is realized when the diffusion layer's getBlocks is invoked. The detailed communication diagram is presented below.

Fig. 4. Peer-to-peer downloading of block


MsgRequestBlocks is used within the outbound enqueue, and ConversationActions MsgGetBlocks MsgBlock is utilized to facilitate the communication. requestBlock of Pos.Diffusion.Full.Block obtains a NewestFirst [] Block from an OldestFirst NE HeaderHash. Under the hood the following function is used:

retrieveBlocks
    :: ConversationActions MsgGetBlocks MsgBlock
    -> BlockVersionData
    -> Int
    -> ExceptT Text IO (NewestFirst [] Block)
retrieveBlocks conv bvd numBlocks = retrieveBlocksDo conv bvd numBlocks []

which internally uses recvLimited from Pos.Infra.Communication.Types.Protocol.
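The accumulation performed by retrieveBlocksDo can be modelled purely. In this sketch the conversation is replaced by a plain list of incoming blocks (oldest first), so receiving a message becomes list pattern matching; everything else follows the loop's shape:

```haskell
type Block = String

-- Pure model of the retrieveBlocksDo loop: consume numBlocks incoming
-- blocks, prepending each to the accumulator so the final result comes
-- out newest first, and fail if the stream ends early.
retrieveBlocksModel
  :: [Block]               -- ^ incoming blocks, oldest first
  -> Int                   -- ^ number of blocks still expected
  -> [Block]               -- ^ accumulator, newest first
  -> Either String [Block] -- ^ NewestFirst-style result, or an error
retrieveBlocksModel _        0 acc = Right acc
retrieveBlocksModel []       n _   =
  Left ("peer closed the stream " ++ show n ++ " blocks early")
retrieveBlocksModel (b : bs) n acc = retrieveBlocksModel bs (n - 1) (b : acc)
```

Feeding it [b1, b2, b3] with numBlocks = 3 returns Right [b3, b2, b1], which is why the real function produces a NewestFirst [] Block even though the peer streams the blocks oldest first.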

To sum up, we have a node-to-node cycle that repeatedly invokes two API calls from the diffusion layer:

-- Get all blocks from a set of checkpoints to a given tip.
-- The blocks come in oldest first, and form a chain (prev header of
-- {n}'th is the header of {n-1}'th).
getBlocks
   :: NodeId
   -> HeaderHash
   -> [HeaderHash]
   -> m (OldestFirst [] Block)

-- Announce a block header.
announceBlockHeader
   :: MainBlockHeader
   -> m ()
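The first of these calls can be mimicked on plain header hashes. The sketch below (getBlocksSketch is an invented name, not the real API) returns everything strictly after the first checkpoint found on the chain, up to and including the requested tip, oldest first, matching the comment above:

```haskell
type HeaderHash = String

-- Hash-level mimic of getBlocks: everything strictly after the first
-- checkpoint found on the chain, up to and including the requested tip,
-- oldest first.
getBlocksSketch
  :: [HeaderHash] -- ^ the serving node's chain, oldest first
  -> HeaderHash   -- ^ requested tip
  -> [HeaderHash] -- ^ checkpoints
  -> [HeaderHash]
getBlocksSketch chain tip checkpoints =
  let afterCheckpoint = case break (`elem` checkpoints) (reverse chain) of
        (newer, _checkpoint : _) -> reverse newer  -- blocks above the checkpoint
        (_, [])                  -> chain          -- no checkpoint: whole chain
      (below, rest) = span (/= tip) afterCheckpoint
  in below ++ take 1 rest                          -- include the tip itself
```

For a chain a-b-c-d-e (oldest first), tip d and checkpoints [b] give [c, d]: exactly the blocks a peer at b is missing relative to the announced tip.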

Rollback

In node-to-node communication the same cycle as for applying blocks holds. The real difference arises in handleBlocks (see Fig. 2 to locate the call) of lib/src/Pos/Network/Block/Logic.hs:

-- genesisConfig, txpConfig, diffusion and blocks are in scope
-- from the enclosing handleBlocks
handleBlocksWithLca
   :: HeaderHash
   -> m ()
handleBlocksWithLca lcaHash = do
    logDebug $ sformat ("Handling block w/ LCA, which is "%shortHashF) lcaHash
    -- Head blund in result is the youngest one.
    toRollback <- DB.loadBlundsFromTipWhile (configGenesisHash genesisConfig)
        $ \blk -> headerHash blk /= lcaHash
    maybe (applyWithoutRollback genesisConfig txpConfig diffusion blocks)
          (applyWithRollback genesisConfig txpConfig diffusion blocks lcaHash)
          (_NewestFirst nonEmpty toRollback)

In the case of a rollback, applyWithRollback of Pos.DB.Block.Logic.VAR is called as a consequence (without rollback it is verifyAndApplyBlocks from the same module). After the DB modification, in both cases (with and without rollback), relayBlock is called, which invokes announceBlockHeader of the diffusion layer. Hence the cycle comes down to the cyclic calling of getBlocks and announceBlockHeader of the diffusion layer in both cases.
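The decision between the two paths reduces to whether anything sits between the current tip and the LCA. A pure sketch, with the names decide and Action invented for illustration:

```haskell
type HeaderHash = String

-- Outcome of the handleBlocksWithLca decision.
data Action
  = ApplyWithoutRollback [HeaderHash]            -- just apply the new blocks
  | ApplyWithRollback [HeaderHash] [HeaderHash]  -- roll these back (newest
                                                 -- first), then apply
  deriving (Eq, Show)

-- Collect everything from the current tip down to (but excluding) the LCA,
-- mirroring loadBlundsFromTipWhile with the (/= lcaHash) predicate: an
-- empty result means the incoming blocks simply extend the chain,
-- otherwise we switch forks.
decide
  :: [HeaderHash] -- ^ current chain, newest first
  -> HeaderHash   -- ^ LCA with the incoming chain
  -> [HeaderHash] -- ^ incoming blocks to apply
  -> Action
decide chainNewestFirst lcaHash blocks =
  case takeWhile (/= lcaHash) chainNewestFirst of
    []         -> ApplyWithoutRollback blocks
    toRollback -> ApplyWithRollback toRollback blocks
```

With the chain c-b-a (newest first), an LCA of c means a plain extension; an LCA of b on the chain c'-b-a means rolling back c' before applying the competing fork.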
