6 changes: 4 additions & 2 deletions Guide/src/SUMMARY.md
@@ -118,8 +118,10 @@
- [mesh]()
- [inspect]()
- [OpenHCL Architecture](./reference/architecture/openhcl.md)
- [Processes and Components](./reference/architecture/openhcl/processes.md)
- [Boot Flow](./reference/architecture/openhcl/boot.md)
- [Sidecar](./reference/architecture/openhcl/sidecar.md)
- [IGVM](./reference/architecture/openhcl/igvm.md)

---

20 changes: 0 additions & 20 deletions Guide/src/reference/architecture/igvm.md

This file was deleted.

75 changes: 32 additions & 43 deletions Guide/src/reference/architecture/openhcl.md
@@ -7,71 +7,60 @@

* * *

## Overview

OpenHCL is a paravisor execution environment that runs within the guest partition of a virtual machine. It provides virtualization services to the guest OS from within the guest itself, rather than from the host.

The following diagram offers a brief, high-level overview of the OpenHCL Architecture.

![OpenHCL High Level Overview](./_images/openhcl.png)

## Virtual Trust Levels (VTLs)

OpenHCL relies on [Virtual Trust Levels] (VTLs) to establish a security boundary between itself and the guest OS.

- **VTL2:** OpenHCL runs here[^sk]. It has higher privileges and is isolated from VTL0.
- **VTL0:** The Guest OS (e.g., Windows, Linux) runs here. It cannot access VTL2 memory or resources.

This isolation is enforced by the underlying hypervisor (Hyper-V) and can be backed by:

- Hardware [TEEs], like Intel [TDX] and AMD [SEV-SNP].
- Software-based constructs, like Hyper-V [VSM].

## Scenarios

OpenHCL enables several key scenarios by providing a trusted execution environment within the VM:

### Azure Boost

OpenHCL acts as a compatibility layer for Azure Boost. It translates legacy synthetic device interfaces (like VMBus networking and storage) used by the guest OS into the hardware-accelerated interfaces (proprietary [Microsoft Azure Network Adapter] (MANA) and NVMe) provided by the Azure Boost infrastructure. This allows unmodified guest images to take advantage of next-generation hardware.

The following diagram shows a high-level overview of how synthetic networking is supported in OpenHCL over the Microsoft Azure Network Adapter (MANA):

![OpenHCL Synthetic Networking](./_images/openhcl-synthetic-nw.png)

The following diagram shows a high-level overview of how accelerated networking is supported in OpenHCL over MANA:

![OpenHCL Accelerated Networking](./_images/openhcl-accelnet.png)

### Confidential Computing

In Confidential VMs (CVMs), the host is not trusted. OpenHCL runs inside the encrypted VM context (VTL2) and provides necessary services (like device emulation and TPM) that the untrusted host cannot securely provide.

### Trusted Launch

OpenHCL hosts a virtual TPM (vTPM) and enforces Secure Boot policies, ensuring the integrity of the guest boot process.

## Architecture Components

OpenHCL is built on top of a specialized Linux kernel and consists of several userspace processes that work together to provide these services.

For more details on the internal components and boot process, see:

- [Processes and Components](./openhcl/processes.md)
- [Boot Flow](./openhcl/boot.md)
- [Sidecar](./openhcl/sidecar.md)
- [IGVM Artifact](./openhcl/igvm.md)

[^sk]: Why not VTL1? Windows already uses VTL1 in order to host the [Secure Kernel].

107 changes: 107 additions & 0 deletions Guide/src/reference/architecture/openhcl/boot.md
@@ -0,0 +1,107 @@
# OpenHCL Boot Flow

This document describes the sequence of events that occur when OpenHCL boots, from the initial loading of the IGVM package to the fully running paravisor environment.

```mermaid
sequenceDiagram
autonumber
participant Host as Host VMM
box "VTL2 (OpenHCL)" #f9f9f9
participant Shim as Boot Shim<br/>(openhcl_boot)
participant Sidecar as Sidecar Kernel
participant Kernel as Linux Kernel
participant Init as Init<br/>(underhill_init)
participant HCL as Paravisor<br/>(openvmm_hcl)
participant Worker as VM Worker<br/>(underhill_vm)
end

Host->>Shim: 1. Load IGVM & Transfer Control
activate Shim

note over Shim: 2. Boot Shim Execution<br/>Hardware Init, Config Parse, Device Tree

par CPU Split
Shim->>Sidecar: APs Jump to Sidecar
activate Sidecar
note over Sidecar: Enter Dispatch Loop

Shim->>Kernel: BSP Jumps to Kernel Entry
deactivate Shim
activate Kernel
end

note over Kernel: 3. Linux Kernel Boot<br/>Init Subsystems, Load Drivers, Mount initrd

Kernel->>Init: Spawn PID 1
deactivate Kernel
activate Init

note over Init: 4. Userspace Initialization<br/>Mount /proc, /sys, /dev

Init->>HCL: Exec openvmm_hcl
deactivate Init
activate HCL

note over HCL: 5. Paravisor Startup<br/>Read Device Tree, Init Services

HCL->>Worker: Spawn Worker
activate Worker

par 6. VM Execution
note over HCL: Manage Policy & Host Comm
note over Worker: Run VTL0 VP Loop
note over Sidecar: Wait for Commands / Hotplug
end
```

> **Copilot AI (Dec 19, 2025), commenting on lines 19 to 59:** The Mermaid sequence diagram uses `autonumber`, which automatically numbers the sequence diagram steps. However, manual numbers are also included in the message labels (e.g., "1. Load IGVM & Transfer Control") and notes (e.g., "2. Boot Shim Execution"). This creates redundant numbering that may confuse readers. Consider removing the manual number prefixes from the diagram labels, since `autonumber` handles the numbering automatically.

## 1. IGVM Loading

The boot process begins when the host VMM loads the OpenHCL IGVM package into VTL2 memory.
The IGVM package contains the initial code and data required to start the paravisor, including the boot shim, kernel, and initial ramdisk.
The host places these components at specific physical addresses defined in the IGVM header and carries the configuration blob (parameters and optional host device tree) into VTL2.

## 2. Boot Shim Execution (`openhcl_boot`)

The host transfers control to the entry point of the **Boot Shim**.

1. **Hardware Init:** The shim initializes the CPU state and memory management unit (MMU).
2. **Config Parsing:** It parses configuration from multiple sources:
* **IGVM Parameters:** Fixed parameters provided by the host that were generated at IGVM build time.
* **Host Device Tree:** A device tree provided by the host containing topology and resource information.
* **Command Line:** It parses the kernel command line, which can be supplied via IGVM or the host device tree.
3. **Device Tree:** It constructs a Device Tree that describes the hardware topology (CPUs, memory) to the Linux kernel.
4. **Sidecar Setup (x86_64):** The shim determines which CPUs will run Linux (typically just the BSP) and which will run the Sidecar (APs). It sets up control structures and directs Sidecar CPUs to the Sidecar entry point.
* **Sidecar Entry:** "Sidecar CPUs" jump directly to the Sidecar kernel entry point instead of the Linux kernel.
* **Dispatch Loop:** These CPUs enter a lightweight dispatch loop, waiting for commands.
5. **Kernel Handoff:** Finally, the BSP (and any Linux APs) jumps to the Linux kernel entry point, passing the Device Tree and command line arguments.
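
The CPU split in step 4 can be pictured as a simple partitioning decision. The sketch below is purely illustrative: the `partition_cpus` function and the "BSP runs Linux, all APs run the sidecar" policy are simplifications for exposition, not the actual `openhcl_boot` logic, which derives the split from the device tree and boot parameters.

```rust
/// Illustrative only: split CPU indices into the set that boots the Linux
/// kernel and the set that is parked in the sidecar dispatch loop.
fn partition_cpus(cpu_count: usize) -> (Vec<usize>, Vec<usize>) {
    // Assume CPU 0 is the BSP and runs Linux; all application processors
    // (APs) are handed to the sidecar kernel.
    let linux_cpus = vec![0];
    let sidecar_cpus = (1..cpu_count).collect();
    (linux_cpus, sidecar_cpus)
}

fn main() {
    let (linux_cpus, sidecar_cpus) = partition_cpus(8);
    println!("Linux CPUs: {linux_cpus:?}, sidecar CPUs: {sidecar_cpus:?}");
}
```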

## 3. Linux Kernel Boot

The **Linux Kernel** takes over on the BSP and initializes the operating system environment. Sidecar CPUs remain in their dispatch loop until needed (e.g., hot-plugged for Linux tasks).

1. **Kernel Init:** The kernel initializes its subsystems (memory, scheduler, etc.).
2. **Driver Init:** It loads drivers for the paravisor hardware and standard devices.
3. **Root FS:** It mounts the initial ramdisk (initrd) as the root filesystem.
4. **Expose DT:** It exposes the boot-time Device Tree to userspace (e.g., under `/proc/device-tree`) for early consumers.
5. **User Space:** It spawns the first userspace process, `underhill_init` (PID 1).

## 4. Userspace Initialization (`underhill_init`)

`underhill_init` prepares the userspace environment.

1. **Filesystems:** It mounts essential pseudo-filesystems like `/proc`, `/sys`, and `/dev`.
2. **Environment:** It sets up environment variables and system limits.
3. **Exec:** It replaces itself with the main paravisor process, `/bin/openvmm_hcl`.
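
As a rough illustration of what an init process does at this stage, the sketch below mounts the standard pseudo-filesystems and then replaces itself with a payload binary. It is a minimal stand-in that assumes the `libc` crate, not the actual `underhill_init` implementation; paths, mount options, and error handling are simplified.

```rust
use std::ffi::CString;
use std::io;

/// Mount a pseudo-filesystem (e.g. proc on /proc) via the raw libc call.
fn mount_pseudo(fstype: &str, target: &str) -> io::Result<()> {
    let src = CString::new(fstype).unwrap();
    let tgt = CString::new(target).unwrap();
    let fst = CString::new(fstype).unwrap();
    // SAFETY: all pointers refer to valid NUL-terminated strings.
    let rc = unsafe {
        libc::mount(src.as_ptr(), tgt.as_ptr(), fst.as_ptr(), 0, std::ptr::null())
    };
    if rc == 0 { Ok(()) } else { Err(io::Error::last_os_error()) }
}

fn main() -> io::Result<()> {
    // Mount the essential pseudo-filesystems.
    mount_pseudo("proc", "/proc")?;
    mount_pseudo("sysfs", "/sys")?;
    mount_pseudo("devtmpfs", "/dev")?;

    // Replace this process with the paravisor binary (argv[0] only).
    let prog = CString::new("/bin/openvmm_hcl").unwrap();
    let argv = [prog.as_ptr(), std::ptr::null()];
    // SAFETY: argv is NULL-terminated and prog outlives the call.
    unsafe { libc::execv(prog.as_ptr(), argv.as_ptr()) };

    // execv only returns on failure.
    Err(io::Error::last_os_error())
}
```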

## 5. Paravisor Startup (`openvmm_hcl`)

The **Paravisor** process (`openvmm_hcl`) starts and initializes the virtualization services.

1. **Config Discovery:** It reads the system topology and configuration from `/proc/device-tree` and other kernel interfaces.
2. **Service Init:** It initializes internal services, such as the VTL0 management logic and host communication channels.
3. **Worker Spawn:** It spawns the **VM Worker** process (`underhill_vm`) to handle the high-performance VM partition loop.
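
For example, the CPU topology handed over by the boot shim can be read back from the kernel's device-tree export. The following sketch counts the `cpu@N` nodes under `/proc/device-tree/cpus`; the function is hypothetical and the real paravisor parses far more state (memory ranges, host communication channels, and so on).

```rust
use std::fs;
use std::io;

/// Count `cpu@N` nodes exposed by the kernel under /proc/device-tree/cpus.
/// Purely illustrative, not OpenHCL's actual configuration discovery.
fn count_device_tree_cpus() -> io::Result<usize> {
    let mut cpus = 0;
    for entry in fs::read_dir("/proc/device-tree/cpus")? {
        let entry = entry?;
        if entry.file_name().to_string_lossy().starts_with("cpu@") {
            cpus += 1;
        }
    }
    Ok(cpus)
}

fn main() -> io::Result<()> {
    println!("device tree reports {} CPUs", count_device_tree_cpus()?);
    Ok(())
}
```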

## 6. VM Execution

At this point, the OpenHCL environment is fully established.
The `underhill_vm` process runs the VTL0 guest, handling exits and emulating devices, while `openvmm_hcl` manages the overall policy and communicates with the host.
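
Conceptually, the worker's per-processor loop alternates between running the VTL0 virtual processor and servicing whatever caused it to exit. The sketch below is only a schematic of that pattern; the exit variants and type names are hypothetical and do not mirror the OpenVMM APIs.

```rust
/// Hypothetical reasons a VTL0 virtual processor might stop running.
enum VpExit {
    Io { port: u16 },
    Mmio { addr: u64 },
    Halt,
}

/// Hypothetical handle to a single VTL0 virtual processor.
struct VirtualProcessor;

impl VirtualProcessor {
    /// Run the guest until it exits back to the paravisor.
    fn run(&mut self) -> VpExit {
        // Placeholder: a real implementation would enter lower-VTL execution
        // via the hypervisor interface and translate the resulting exit.
        VpExit::Halt
    }
}

/// Schematic VP loop: run, handle the exit, repeat until the guest halts.
fn vp_loop(vp: &mut VirtualProcessor) {
    loop {
        match vp.run() {
            VpExit::Io { port } => println!("emulate I/O port {port:#x}"),
            VpExit::Mmio { addr } => println!("emulate MMIO at {addr:#x}"),
            VpExit::Halt => break,
        }
    }
}

fn main() {
    vp_loop(&mut VirtualProcessor);
}
```
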
29 changes: 29 additions & 0 deletions Guide/src/reference/architecture/openhcl/igvm.md
@@ -0,0 +1,29 @@
# IGVM Artifact

The Independent Guest Virtual Machine (IGVM) format describes the initial state of an isolated virtual machine. OpenHCL is delivered as an IGVM package.

> **Note:** For more details on the IGVM specification, see the [IGVM repository](https://github.com/microsoft/igvm).

## Purpose

The IGVM file serves as the "firmware image" for the OpenHCL paravisor. It allows the host VMM to:

1. Load the OpenHCL components into VTL2 memory.
2. Place them at specific, required physical addresses.
3. Pass initial configuration data to the paravisor.

## Package Contents

An OpenHCL IGVM package bundles the following artifacts:

- **Boot Shim (`openhcl_boot`):** The entry point for VTL2 execution.
- **Linux Kernel:** The operating system kernel.
- **Sidecar Kernel (x86_64):** The lightweight kernel for APs.
- **Initial Ramdisk (initrd):** The root filesystem containing userspace binaries (`underhill_init`, `openvmm_hcl`, etc.).
- **Memory Layout:** Directives specifying where each component should be loaded in memory.
- **Configuration:** Boot-time parameters (CPU topology, device settings, etc.) generated at IGVM build time.
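
To make the "components plus placement directives" idea concrete, the sketch below models the package as plain Rust data. The types, names, and addresses are hypothetical illustrations, not the `igvm` crate's API or the real package layout.

```rust
/// Hypothetical model of one load directive: place a blob of data at a fixed
/// guest physical address in VTL2 memory.
struct LoadDirective {
    name: &'static str,
    gpa: u64,
    data: Vec<u8>,
}

/// Hypothetical model of an OpenHCL IGVM package: a set of placed components
/// plus an opaque configuration blob handed to the boot shim.
struct IgvmPackage {
    directives: Vec<LoadDirective>,
    config: Vec<u8>,
}

fn main() {
    // Addresses and contents are made up; a real package is produced by the
    // OpenHCL build system and consumed by the host VMM.
    let package = IgvmPackage {
        directives: vec![
            LoadDirective { name: "openhcl_boot", gpa: 0x0100_0000, data: vec![] },
            LoadDirective { name: "vmlinux", gpa: 0x0200_0000, data: vec![] },
            LoadDirective { name: "initrd", gpa: 0x0800_0000, data: vec![] },
        ],
        config: vec![],
    };
    for d in &package.directives {
        println!("{} -> {:#x} ({} bytes)", d.name, d.gpa, d.data.len());
    }
}
```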

## Build Process

The IGVM artifact is generated by the OpenHCL build system.
See [Building OpenHCL](../../../dev_guide/getting_started/build_openhcl.md) for instructions on how to build it.
102 changes: 102 additions & 0 deletions Guide/src/reference/architecture/openhcl/processes.md
@@ -0,0 +1,102 @@
# OpenHCL Processes and Components

This document describes the major software components and processes that make up the OpenHCL paravisor environment.

## Boot Shim (`openhcl_boot`)

The boot shim is the first code that executes in VTL2. It is responsible for early hardware initialization and preparing the environment for the Linux kernel.

**Source code:** [openhcl/openhcl_boot](https://github.com/microsoft/openvmm/tree/main/openhcl/openhcl_boot) | **Docs:** [openhcl_boot rustdoc](https://openvmm.dev/rustdoc/linux/openhcl_boot/index.html)

**Key Responsibilities:**

- **Hardware Initialization:** Sets up CPU state, enables MMU, and configures initial page tables.
- **Configuration Parsing:** Receives boot parameters from the host that were generated at IGVM build time.
- **Device Tree Construction:** Builds a device tree describing the hardware configuration (CPU topology, memory regions, devices).
- **Sidecar Initialization:** Sets up control structures for the Sidecar kernel (x86_64 only).
- **Kernel Handoff:** Transfers control to the Linux kernel.

## Linux Kernel

OpenHCL runs on top of a minimal, specialized Linux kernel. This kernel provides core operating system services such as memory management, scheduling, and device drivers.

**Key Responsibilities:**

- **Hardware Abstraction:** Manages CPU and memory resources.
- **Device Drivers:** Provides drivers for paravisor-specific hardware and standard devices.
- **Filesystem:** Mounts the initial ramdisk (initrd) as the root filesystem.
- **Process Management:** Launches the initial userspace process (`underhill_init`).

## Sidecar Kernel (x86_64)

On x86_64 systems, OpenHCL includes a "sidecar" kernel—a lightweight, bare-metal kernel that runs on a subset of CPUs to improve boot performance and reduce resource usage.

For more details, see the [Sidecar Architecture](./sidecar.md) page.

**Source code:** [openhcl/sidecar](https://github.com/microsoft/openvmm/tree/main/openhcl/sidecar) | **Docs:** [sidecar rustdoc](https://openvmm.dev/rustdoc/linux/sidecar/index.html)

**Key Responsibilities:**

- **Fast Boot:** Allows secondary CPUs (APs) to boot quickly without initializing the full Linux kernel.
- **Dispatch Loop:** Runs a minimal loop waiting for commands from the host or the main kernel.
- **On-Demand Conversion:** Can be converted to a full Linux CPU if required.
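
The dispatch loop can be pictured as a tiny state machine: park until the main kernel posts a command, execute it, and possibly hand the CPU over to Linux. The sketch below is conceptual; the command set and names are hypothetical and do not reflect the actual sidecar protocol.

```rust
/// Hypothetical commands the main kernel might post to a sidecar CPU.
enum SidecarCommand {
    /// Run a short unit of work on behalf of the paravisor.
    RunTask(fn()),
    /// Convert this CPU into a full Linux CPU (leave the dispatch loop).
    ConvertToLinux,
}

/// Hypothetical mailbox the sidecar CPU waits on.
fn wait_for_command() -> SidecarCommand {
    // Placeholder: a real implementation would halt the CPU and wake on an
    // interprocessor interrupt from the main kernel.
    SidecarCommand::ConvertToLinux
}

/// Schematic dispatch loop run by each sidecar CPU.
fn sidecar_dispatch_loop() {
    loop {
        match wait_for_command() {
            SidecarCommand::RunTask(task) => task(),
            SidecarCommand::ConvertToLinux => break, // hand off to the Linux boot path
        }
    }
}

fn main() {
    sidecar_dispatch_loop();
}
```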

## Init Process (`underhill_init`)

`underhill_init` is the first userspace process (PID 1) started by the Linux kernel. It acts as the system service manager for the paravisor environment.

**Source code:** [openhcl/underhill_init](https://github.com/microsoft/openvmm/tree/main/openhcl/underhill_init) | **Docs:** [underhill_init rustdoc](https://openvmm.dev/rustdoc/linux/underhill_init/index.html)

**Key Responsibilities:**

- **System Setup:** Mounts necessary filesystems (e.g., `/proc`, `/sys`, `/dev`).
- **Environment Preparation:** Sets up the execution environment for the paravisor.
- **Process Launch:** `exec`s the main paravisor process (`openvmm_hcl`).

## Paravisor (`openvmm_hcl`)

`openvmm_hcl` is the central management process of the OpenHCL paravisor. It runs in userspace and orchestrates the virtualization services.

**Source code:** [openhcl/openvmm_hcl](https://github.com/microsoft/openvmm/tree/main/openhcl/openvmm_hcl) | **Docs:** [openvmm_hcl rustdoc](https://openvmm.dev/rustdoc/linux/openvmm_hcl/index.html)

**Key Responsibilities:**

- **Policy & Management:** Manages the lifecycle of the VM and enforces security policies.
- **Host Communication:** Interfaces with the host VMM to receive commands and report status.
- **Servicing:** Orchestrates save and restore operations (VTL2 servicing).
- **Worker Management:** Spawns and manages the VM worker process.

## VM Worker (`underhill_vm`)

The VM worker process (`underhill_vm`) is responsible for the high-performance data path of the virtual machine. It is spawned by `openvmm_hcl`.

**Source code:** [openhcl/underhill_core](https://github.com/microsoft/openvmm/tree/main/openhcl/underhill_core) | **Docs:** [underhill_core rustdoc](https://openvmm.dev/rustdoc/linux/underhill_core/index.html)

**Key Responsibilities:**

- **VP Loop:** Runs the virtual processor loop, handling VM exits.
- **Device Emulation:** Emulates devices for the guest VM.
- **I/O Processing:** Handles high-speed I/O operations.

## Diagnostics Server (`diag_server`)

The diagnostics server provides an interface for debugging and monitoring the OpenHCL environment.

**Source code:** [openhcl/diag_server](https://github.com/microsoft/openvmm/tree/main/openhcl/diag_server) | **Docs:** [diag_server rustdoc](https://openvmm.dev/rustdoc/linux/diag_server/index.html)

**Key Responsibilities:**

- **External Interface:** Listens on a VSOCK port for diagnostic connections.
- **Command Handling:** Processes diagnostic commands and queries.
- **Log Retrieval:** Provides access to system logs.
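
As a sketch of what "listens on a VSOCK port" involves at the socket level, the snippet below binds an `AF_VSOCK` listening socket with raw libc calls (assuming the `libc` crate on Linux). The port number is arbitrary and this is not the diag_server's actual setup code.

```rust
use std::io;
use std::mem;
use std::os::fd::{AsRawFd, FromRawFd, OwnedFd};

/// Bind and listen on an AF_VSOCK stream socket. Illustrative only; the port
/// passed in main is made up, not the one used by OpenHCL's diagnostics server.
fn bind_vsock(port: u32) -> io::Result<OwnedFd> {
    unsafe {
        let fd = libc::socket(libc::AF_VSOCK, libc::SOCK_STREAM, 0);
        if fd < 0 {
            return Err(io::Error::last_os_error());
        }
        // Take ownership so the fd is closed on every error path below.
        let fd = OwnedFd::from_raw_fd(fd);

        let mut addr: libc::sockaddr_vm = mem::zeroed();
        addr.svm_family = libc::AF_VSOCK as libc::sa_family_t;
        addr.svm_cid = libc::VMADDR_CID_ANY;
        addr.svm_port = port;

        if libc::bind(
            fd.as_raw_fd(),
            &addr as *const _ as *const libc::sockaddr,
            mem::size_of::<libc::sockaddr_vm>() as libc::socklen_t,
        ) < 0
            || libc::listen(fd.as_raw_fd(), 8) < 0
        {
            return Err(io::Error::last_os_error());
        }
        Ok(fd)
    }
}

fn main() -> io::Result<()> {
    let _listener = bind_vsock(9999)?;
    println!("listening on VSOCK port 9999");
    Ok(())
}
```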

## Profiler Worker (`profiler_worker`)

The profiler worker is an on-demand process used for performance analysis.

**Source code:** [openhcl/profiler_worker](https://github.com/microsoft/openvmm/tree/main/openhcl/profiler_worker) | **Docs:** [profiler_worker rustdoc](https://openvmm.dev/rustdoc/linux/profiler_worker/index.html)

**Key Responsibilities:**

- **Performance Data Collection:** Collects profiling data (e.g., CPU usage, traces) when requested.
- **Isolation:** Runs in a separate process to minimize impact on the main workload.