Merged (changes from 2 commits)
69 changes: 69 additions & 0 deletions BIER_DIFF.md
@@ -0,0 +1,69 @@
# BIER Integration: Diff vs upstream named-data/ndnd@dv2

This repo adds BIER (Bit Index Explicit Replication) multicast support on top of
the upstream `dv2` branch. All changes are backward-compatible; non-BIER traffic
is unaffected.

---

## New Files (not in upstream)

| File | Purpose |
|------|---------|
| `fw/fw/bier.go` (296 lines) | BIFT table, bit-string helpers (`BierClone`, `BierClearBit`, `BierIsZero`, `BuildBierBitString`) |
| `fw/fw/bier_strategy.go` | `BierStrategy` — PIT-tandem BIER forwarding; replicates Interest per-bit via `SendInterest`; shared `bierReplicate()` function used by both `BierStrategy` and `Multicast` |
| `fw/mgmt/bift.go` (95 lines) | Management face handler for the Bit Index Forwarding Table (BIFT) |
| `cmd/svs-chat/main.go` | SVS ALO chat CLI demo that exercises BIER multicast sync; announces sync group prefix with `Cost: 1` to signal the Sync group multicast flag through DV |
| `fw/bier_tests/` | Unit + integration tests for BIER (4 files) |
| `e2e/test_005.py` | E2E test: BIER multicast on 52-node Sprint topology (51/51 consumers OK) |
| `e2e/test_006.py` | E2E test: multi-group BIER (two concurrent prefixes) |
| `e2e/test_007.py` | E2E test: SVS Chat over BIER (4/4 consumers receive message) |
| `e2e-run.sh` | Docker entrypoint: installs binaries, fake `go` shim, runs Mininet tests |
| `run-bier-e2e.sh` | macOS helper: cross-compiles Linux binaries, launches Docker |
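
As a rough illustration of what the bit-string helpers do, here is a minimal sketch assuming the bit-string is a plain `[]byte` with bit `i` stored LSB-first in byte `i/8`; the actual representation in `fw/fw/bier.go` may differ:

```go
package main

import "fmt"

// BierClone returns an independent copy of a bit-string.
func BierClone(bs []byte) []byte {
	out := make([]byte, len(bs))
	copy(out, bs)
	return out
}

// BierClearBit clears bit i (0-indexed, LSB-first within each byte).
func BierClearBit(bs []byte, i int) {
	bs[i/8] &^= 1 << (i % 8)
}

// BierIsZero reports whether no bits remain set.
func BierIsZero(bs []byte) bool {
	for _, b := range bs {
		if b != 0 {
			return false
		}
	}
	return true
}

func main() {
	bs := []byte{0b00000101} // bits 0 and 2 set
	cp := BierClone(bs)
	BierClearBit(cp, 0)
	BierClearBit(cp, 2)
	// Original stays intact; the copy is now empty.
	fmt.Println(BierIsZero(bs), BierIsZero(cp))
}
```

A clone-before-clear helper presumably exists so a strategy can mutate a per-copy bit-string without disturbing the Interest held in the PIT.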

---

## Modified Files

### `fw/core/config.go`
- Added `BierIndex int` field (`bier_index`) to `FwConfig` struct (line 121)
- Default value `-1` (disabled) set in `DefaultConfig()` (line 223)
- Operators set `bier_index: N` in YAML config to enable BIER on a node
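
A hypothetical YAML fragment enabling BIER on a node; only the `bier_index` key comes from the diff, and any surrounding structure is illustrative:

```yaml
# Illustrative only: bier_index is the key added to FwConfig; nesting may differ.
bier_index: 3   # unique bit position (BFR-id) for this node; -1 (default) disables BIER
```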

### `fw/face/ndnlp-link-service.go`
- `sendPacket()` (line 262): copies `packet.Bier` onto the NDNLPv2 fragment TLV `0x035a` for outgoing Interests
- `handleIncomingFrame()` (line 356): reads `LP.Bier` from incoming frame and attaches it to the parsed packet

### `fw/fw/multicast.go`
- `AfterReceiveInterest()`: when `packet.Bier != nil`, delegates to the shared `bierReplicate()` function (BIER-targeted replication). Falls back to flood-to-all nexthops with retransmission suppression for non-BIER Interests (needed by DV routing advertisement sync, which carries no bit-string).

### `fw/fw/bier_strategy.go`
- Extracted `bierReplicate()` as a package-level function shared by both `BierStrategy` and `Multicast`, taking a `sendInterest` callback so either strategy can drive replication through its own PIT machinery.
- `BierStrategy.AfterReceiveInterest()` fallback (no bit-string): floods to all nexthops with retransmission suppression, matching `Multicast` behavior — making the two strategies interchangeable.
- `BierStrategy.replicateBier()` delegates to `bierReplicate()`.
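
One plausible shape for the shared replication helper, with hypothetical types and signatures (the real `bierReplicate()` in `fw/fw/bier_strategy.go` threads each strategy's own `SendInterest` through a callback in the same spirit):

```go
package main

import "fmt"

// Interest is a stand-in for the forwarder's packet type; only the
// bit-string field matters for this sketch.
type Interest struct {
	Bier []byte // bit i targets the egress router with BFR-id i
}

// bierReplicate walks the set bits and hands one copy per bit to the
// caller's sendInterest callback, so either strategy can drive replication
// through its own PIT machinery. Here each replica carries only its target
// bit, which is one plausible reading of "per-bit" replication.
func bierReplicate(in Interest, sendInterest func(cp Interest, bit int)) {
	for i := 0; i < len(in.Bier)*8; i++ {
		if in.Bier[i/8]&(1<<(i%8)) == 0 {
			continue
		}
		cp := make([]byte, len(in.Bier))
		cp[i/8] = 1 << (i % 8)
		sendInterest(Interest{Bier: cp}, i)
	}
}

func main() {
	bierReplicate(Interest{Bier: []byte{0b00001010}}, func(cp Interest, bit int) {
		fmt.Printf("replica for bit %d: %08b\n", bit, cp.Bier[0])
	})
}
```

Passing the send function as a callback is what makes the helper package-level: `BierStrategy` and `Multicast` differ only in how they record PIT out-records, not in how they walk the bits.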

### `fw/fw/thread.go`
Four logical additions in `processInterest` / `twoPhaseLookup`:

1. **BFER+BFR combined delivery** (line 372): after local PIT match, clear the local bit from the bit-string and fall through to network replication (so a node that is both a consumer and a transit BFR still forwards to remaining bits)

2. **BFIR encoding** (line 403): when `twoPhaseLookup` returns >1 egress router, `IsBierEnabled()`, and `isMulticast=true`, call `Bift.BuildBierBitString()` to pre-encode a bit-string on the outgoing packet

3. **Auto-strategy selection** (line 412): when `packet.Bier != nil`, override the strategy to `BierStrategy` so bit-level replication runs through the PIT (PIT-tandem design)

4. **Multicast flag from PET** (`twoPhaseLookup`): returns a 4th value `isMulticast bool` from `petEntry.Multicast`, so the BFIR condition is gated on whether the prefix is a Sync group prefix (announced with `cost=1` by DV) rather than a multihomed producer prefix
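
The BFIR gating in additions 2 and 4 can be condensed into a hypothetical predicate; the names below are illustrative, not the real `fw/fw/thread.go` code:

```go
package main

import "fmt"

// shouldEncodeBier mirrors the BFIR condition: stamp a bit-string only when
// BIER is enabled on this node, the prefix is a Sync group prefix (the PET
// Multicast flag), and twoPhaseLookup found more than one egress router.
func shouldEncodeBier(bierEnabled bool, egressCount int, isMulticast bool) bool {
	return bierEnabled && isMulticast && egressCount > 1
}

func main() {
	fmt.Println(shouldEncodeBier(true, 3, true))  // Sync group, 3 egress routers
	fmt.Println(shouldEncodeBier(true, 3, false)) // multihomed producer prefix
}
```

The `isMulticast` term is the crucial one: without it, a multihomed producer prefix with several egress routers would wrongly trigger BIER encoding.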

---

## Architecture in One Paragraph

BIER piggybacks on NDNLPv2 via TLV `0x035a`. The BFIR (first-hop router) detects
a Sync group prefix (PET entry with `Multicast=true`) with >1 egress router and
stamps a bit-string onto the Interest. `BierStrategy` (and `Multicast` strategy,
which is now BIER-aware) then replicates the Interest once per set bit, sending
one copy per downstream router through the normal PIT machinery. Each BFR clears
its own bit before forwarding, so no router receives a copy destined only for it;
the Interest naturally terminates when all bits are cleared. BFER nodes deliver
locally and then re-forward any remaining bits. No bypass of the PIT occurs.
Multihomed producer prefix announcements (`Multicast=false`) are unaffected and
never trigger BIER encoding.
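
A toy trace of the per-hop behavior described above, using an illustrative two-router chain and a single-byte bit-string (BFR-ids and layout are assumptions, not the real forwarder state):

```go
package main

import "fmt"

// forwardStep models one BFR handling an Interest: deliver locally if its
// own bit is set, then clear that bit before re-forwarding. An empty
// remaining bit-string means the Interest terminates here.
func forwardStep(bits byte, myBit int) (deliver bool, remaining byte) {
	mask := byte(1) << myBit
	deliver = bits&mask != 0
	remaining = bits &^ mask
	return
}

func main() {
	bits := byte(0b00000110) // BFIR stamped bits for routers 1 and 2
	for _, r := range []int{1, 2} {
		deliver, rest := forwardStep(bits, r)
		fmt.Printf("router %d: deliver=%v remaining=%08b\n", r, deliver, rest)
		bits = rest
	}
	fmt.Println("terminated:", bits == 0)
}
```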
41 changes: 0 additions & 41 deletions CITATION.cff

This file was deleted.

3 changes: 0 additions & 3 deletions CONTRIBUTING.md

This file was deleted.

3 changes: 0 additions & 3 deletions PRIVACY.md

This file was deleted.

106 changes: 106 additions & 0 deletions cmd/svs-chat/main.go
@@ -0,0 +1,106 @@
package main

import (
"flag"
"fmt"
"os"
"time"

enc "github.com/named-data/ndnd/std/encoding"
eng "github.com/named-data/ndnd/std/engine"
"github.com/named-data/ndnd/std/log"
"github.com/named-data/ndnd/std/ndn"
"github.com/named-data/ndnd/std/object"
"github.com/named-data/ndnd/std/object/storage"
"github.com/named-data/ndnd/std/sync"
)

func main() {
log.Default().SetLevel(log.LevelTrace)
prefixStr := flag.String("prefix", "/minindn/svs", "SVS group prefix")
nameStr := flag.String("name", "", "Participant name")
msgStr := flag.String("msg", "", "Message to send")
delaySecs := flag.Int("delay", 0, "Wait N seconds before sending the message")
waitSecs := flag.Int("wait", 10, "Seconds to wait before exiting")
flag.Parse()

if *nameStr == "" {
fmt.Println("Error: -name is required")
os.Exit(1)
}

	groupPrefix, gErr := enc.NameFromStr(*prefixStr)
	if gErr != nil {
		fmt.Printf("Invalid -prefix %q: %v\n", *prefixStr, gErr)
		os.Exit(1)
	}
	nodeName, nErr := enc.NameFromStr(*nameStr)
	if nErr != nil {
		fmt.Printf("Invalid -name %q: %v\n", *nameStr, nErr)
		os.Exit(1)
	}

app := eng.NewBasicEngine(eng.NewDefaultFace())
if err := app.Start(); err != nil {
fmt.Printf("Failed to start engine: %v\n", err)
os.Exit(1)
}
defer app.Stop()

store := storage.NewMemoryStore()
client := object.NewClient(app, store, nil)

err := client.Start()
if err != nil {
panic(err)
}
defer client.Stop()

syncPrefix := groupPrefix.Append(enc.NewKeywordComponent("svs"))
client.AnnouncePrefix(ndn.Announcement{Name: syncPrefix, Expose: true, Cost: 1})

dataPrefix := groupPrefix.Append(nodeName...)
client.AnnouncePrefix(ndn.Announcement{Name: dataPrefix, Expose: true})

alo, err := sync.NewSvsALO(sync.SvsAloOpts{
Name: nodeName,
Svs: sync.SvSyncOpts{
Client: client,
GroupPrefix: groupPrefix,
BootTime: 1,
},
Snapshot: &sync.SnapshotNodeLatest{
Client: client,
SnapMe: func(_ enc.Name) (enc.Wire, error) {
if *msgStr != "" {
return enc.Wire{[]byte(*msgStr)}, nil
}
return enc.Wire{[]byte("(no message)")}, nil
},
Threshold: 5,
},
})
if err != nil {
panic(err)
}

alo.SetOnPublisher(func(name enc.Name) {
alo.SubscribePublisher(name, func(pub sync.SvsPub) {
fmt.Printf("CHAT %s: %s\n", pub.Publisher.String(), string(pub.Bytes()))
})
})

if err := alo.Start(); err != nil {
panic(err)
}

// Important: let the sync initialization happen
time.Sleep(2 * time.Second)

if *msgStr != "" {
if *delaySecs > 0 {
time.Sleep(time.Duration(*delaySecs) * time.Second)
}
_, _, err := alo.Publish(enc.Wire{[]byte(*msgStr)})
if err != nil {
fmt.Printf("Publish error: %v\n", err)
} else {
fmt.Printf("Published message: %s\n", *msgStr)
}
}

time.Sleep(time.Duration(*waitSecs) * time.Second)
alo.Stop()
}
2 changes: 1 addition & 1 deletion dv/dv/insertion.go
@@ -213,7 +213,7 @@ func (pfx *PrefixModule) onPrefixInsertionObject(object ndn.Data, faceId uint64)
return resError
}

pfx.Announce(prefix, faceId, cost, params.ValidityPeriod)
pfx.Announce(prefix, faceId, cost, cost == 1, params.ValidityPeriod)
> **Collaborator comment:** What's this?


return &mgmt.ControlResponse{
Val: &mgmt.ControlResponseVal{
2 changes: 1 addition & 1 deletion dv/dv/mgmt.go
@@ -293,7 +293,7 @@ func (dv *Router) mgmtOnPrefix(args ndn.InterestHandlerArgs) {
}

dv.mutex.Lock()
dv.pfx.Announce(name, faceID, cost, validity)
dv.pfx.Announce(name, faceID, cost, cost == 1, validity)
> **Collaborator comment:** Same

dv.mutex.Unlock()

res.Val.StatusCode = 200
42 changes: 25 additions & 17 deletions dv/dv/prefix.go
@@ -45,9 +45,10 @@ type PrefixModule struct {
}

type petEgressOp struct {
add bool
name enc.Name
egress enc.Name
add bool
name enc.Name
egress enc.Name
multicast bool
}

type petNextHopOp struct {
@@ -225,10 +226,11 @@ func (pfx *PrefixModule) Reset() {

// Announce adds or updates a local prefix in prefix egress state.
// Use face=0 and cost=0 for route-only semantics.
func (pfx *PrefixModule) Announce(name enc.Name, face uint64, cost uint64, validity *spec.ValidityPeriod) {
// multicast=true marks this as a Sync group prefix (vs. a producer prefix).
func (pfx *PrefixModule) Announce(name enc.Name, face uint64, cost uint64, multicast bool, validity *spec.ValidityPeriod) {
pfx.mu.Lock()
petOps := pfx.addRouterPrefixPet(pfx.routerName, name)
pfx.pfx.Announce(name, face, cost, validity)
petOps := pfx.addRouterPrefixPet(pfx.routerName, name, multicast)
pfx.pfx.Announce(name, face, cost, multicast, validity)
pfx.mu.Unlock()

pfx.applyPetOps(petOps)
@@ -330,7 +332,7 @@ func (pfx *PrefixModule) processUpdate(wire enc.Wire) (dirty bool, petOps []petE
petOps = append(petOps, pfx.resetRouterPet(router)...)
}
for _, add := range ops.PrefixOpAdds {
petOps = append(petOps, pfx.addRouterPrefixPet(router, add.Name)...)
petOps = append(petOps, pfx.addRouterPrefixPet(router, add.Name, add.Multicast)...)
}
for _, remove := range ops.PrefixOpRemoves {
petOps = append(petOps, pfx.removeRouterPrefixPet(router, remove.Name)...)
@@ -360,7 +362,7 @@ func (pfx *PrefixModule) resetRouterPet(router enc.Name) []petEgressOp {
return ops
}

func (pfx *PrefixModule) addRouterPrefixPet(router enc.Name, prefix enc.Name) []petEgressOp {
func (pfx *PrefixModule) addRouterPrefixPet(router enc.Name, prefix enc.Name, multicast bool) []petEgressOp {
routerHash := router.Hash()
prefixes := pfx.petPrefixes[routerHash]
if prefixes == nil {
@@ -376,9 +378,10 @@ func (pfx *PrefixModule) addRouterPrefixPet(router enc.Name, prefix enc.Name) []

egress := router.Clone()
return []petEgressOp{{
add: true,
name: prefix.Clone(),
egress: egress,
add: true,
name: prefix.Clone(),
egress: egress,
multicast: multicast,
}}
}

@@ -413,17 +416,22 @@ func (pfx *PrefixModule) applyPetOps(ops []petEgressOp) {

for _, op := range ops {
cmd := "remove-egress"
args := &mgmt.ControlArgs{
Name: op.name,
Egress: &mgmt.EgressRecord{Name: op.egress},
}
if op.add {
cmd = "add-egress"
// Encode the Sync group (multicast) flag via Cost: 1=multicast, absent=producer.
if op.multicast {
args.Cost = optional.Some(uint64(1))
}
}

pfx.nfdc.Exec(nfdc.NfdMgmtCmd{
Module: "pet",
Cmd: cmd,
Args: &mgmt.ControlArgs{
Name: op.name,
Egress: &mgmt.EgressRecord{Name: op.egress},
},
Module: "pet",
Cmd: cmd,
Args: args,
Retries: -1,
})
}
7 changes: 7 additions & 0 deletions dv/dv/table_algo.go
@@ -3,6 +3,7 @@ package dv
import (
"github.com/named-data/ndnd/dv/config"
"github.com/named-data/ndnd/dv/table"
fw "github.com/named-data/ndnd/fw/fw"
enc "github.com/named-data/ndnd/std/encoding"
"github.com/named-data/ndnd/std/log"
)
@@ -134,4 +135,10 @@ func (dv *Router) updateFib() {
}
}
dv.fib.RemoveUnmarked()

// Rebuild the BIFT whenever the FIB changes so BIER forwarding paths
// are always consistent with the routing table.
if fw.IsBierEnabled() {
fw.Bift.BuildFromFibPet()
> **Collaborator comment:** I thought the BIFT is built solely from the FIB. What do you need the PET for?

}
}
9 changes: 7 additions & 2 deletions dv/table/prefix_state.go
@@ -94,16 +94,21 @@ func (pt *PrefixEgreState) Reset() {

// Announce updates or creates a local prefix with an optional validity period.
// Use face=0 and cost=0 for route-only semantics.
func (pt *PrefixEgreState) Announce(name enc.Name, face uint64, cost uint64, validity *spec.ValidityPeriod) {
// multicast=true marks this as a Sync group prefix (vs. a producer prefix).
func (pt *PrefixEgreState) Announce(name enc.Name, face uint64, cost uint64, multicast bool, validity *spec.ValidityPeriod) {
hash := name.TlvStr()
entry := pt.me.Prefixes[hash]
publishAdd := false
if entry == nil {
entry = &PrefixEntry{
Name: name,
Name: name,
Multicast: multicast,
}
pt.me.Prefixes[hash] = entry
publishAdd = true
} else if multicast && !entry.Multicast {
entry.Multicast = true
publishAdd = true
}

if !sameValidityPeriod(entry.ValidityPeriod, validity) {