Gaze-Driven Predictive Text Entry for Virtual Reality
A hands-free text input system for VR headsets, powered by the Dasher zooming interface and real-time language prediction.
Quick Start · How It Works · Architecture · VR Deployment
VRDasher brings the Dasher predictive text entry method into immersive VR environments. Instead of typing on a virtual keyboard, users navigate through a zooming stream of characters using their gaze direction — enabling efficient, hands-free text input.
The system supports multiple input modalities and automatically selects the best one available at runtime:
| Input Mode | Target Hardware | Mechanism |
|---|---|---|
| Eye Tracking | HTC VIVE Focus 3 / Vision | VIVE OpenXR eye gaze ray → canvas projection |
| Head Gaze | Meta Quest 2 / 3 / Pro, any OpenXR HMD | Camera forward ray → canvas projection |
| Mouse | Desktop / Unity Editor | Screen-space cursor position |
No manual configuration is needed — input selection is fully automatic.
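A minimal sketch of how that automatic selection might work, assuming `IDasherInput` exposes `IsActive` and `NormalizedPosition` (the members listed for `IDasherInput.cs` in the architecture section). The `InputSelector` and `StubInput` helpers and the best-first priority ordering are illustrative, not the project's exact code:

```csharp
using System.Collections.Generic;
using System.Linq;

// Input abstraction: each modality reports whether its hardware is
// present/tracking and where the pointer is in normalized [0,1]² space.
public interface IDasherInput
{
    bool IsActive { get; }
    (float x, float y) NormalizedPosition { get; }
}

// Trivial concrete input, useful for testing the selection logic.
public sealed class StubInput : IDasherInput
{
    public bool IsActive { get; set; }
    public (float x, float y) NormalizedPosition { get; set; }
}

public static class InputSelector
{
    // Candidates are ordered best-first (eye tracking, head gaze, mouse);
    // the first one whose hardware is active wins. Returns null if none is.
    public static IDasherInput SelectInput(IReadOnlyList<IDasherInput> candidates)
        => candidates.FirstOrDefault(c => c.IsActive);
}
```

With this shape, adding a new modality (e.g., hand tracking) only means implementing the interface and appending it to the candidate list.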
- Predictive Language Model — PPM order-5 character prediction trained on an English corpus; common sequences like `th` → `e` enlarge to ~48% of the screen, minimizing effort
- Multi-Modal Input — seamless switching between eye tracking, head gaze, and mouse through a unified `IDasherInput` interface
- Cross-Platform XR — targets HTC VIVE Focus 3/Vision (via the VIVE OpenXR SDK) and the Meta Quest family (via OpenXR)
- Runtime Scene Construction — attach `DasherSceneSetup` to an empty GameObject and press Play; the entire UI hierarchy is built automatically
- VR Interaction Controls — joystick-based canvas distance adjustment, controller-triggered recentering, and pause/resume via VR triggers
- Adaptive Gaze Smoothing — configurable smoothing per input mode to reduce jitter while maintaining responsiveness
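The prediction idea can be illustrated with a much-simplified context model. This is not the project's `LanguageModel.cs` (full PPM additionally maintains escape probabilities for unseen symbols); it only shows how an order-5 model falls back to shorter contexts, and all names here are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified PPM-style predictor: count n-gram continuations up to a fixed
// order, then predict from the longest context that has been seen before.
public class SimpleBackoffModel
{
    private readonly int order;
    private readonly Dictionary<string, Dictionary<char, int>> counts = new();

    public SimpleBackoffModel(int order = 5) { this.order = order; }

    public void Train(string corpus)
    {
        for (int i = 0; i < corpus.Length; i++)
            for (int k = 0; k <= order && k <= i; k++)
            {
                // Context of length k ending just before position i.
                string ctx = corpus.Substring(i - k, k);
                if (!counts.TryGetValue(ctx, out var next))
                    counts[ctx] = next = new Dictionary<char, int>();
                next[corpus[i]] = next.GetValueOrDefault(corpus[i]) + 1;
            }
    }

    // Next-character probabilities from the longest matching context,
    // backing off one character at a time down to the empty context.
    public Dictionary<char, double> Predict(string context)
    {
        for (int k = System.Math.Min(order, context.Length); k >= 0; k--)
        {
            string ctx = context.Substring(context.Length - k, k);
            if (counts.TryGetValue(ctx, out var next) && next.Count > 0)
            {
                double total = next.Values.Sum();
                return next.ToDictionary(p => p.Key, p => p.Value / total);
            }
        }
        return new Dictionary<char, double>();
    }
}
```

In Dasher these probabilities directly become box heights, which is why frequent continuations are easy to steer into.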
- Unity Editor: `2022.3.62f3` (LTS) — other `2022.3.x` versions should work
- Git for cloning
1. Clone the repository:
   ```
   git clone https://github.com/ManveerAnand/VRDasher.git
   ```
2. Open Unity Hub → Add project from disk → select the `VRDasher/` folder
3. Let Unity restore packages on first open (requires internet)
4. Open (or create) a scene, add an empty GameObject, and attach `DasherSceneSetup`
5. Press Play
Mouse controls:
| Action | Control |
|---|---|
| Select characters | Move mouse right |
| Undo / backtrack | Move mouse left |
| Choose character | Move mouse up / down |
| Clear all text | Backspace |
| Pause / Resume | Space |
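Assuming the active input reports a normalized position in [0, 1]², the table above can be read as a small steering function: horizontal position sets the zoom direction, vertical position picks the character to steer toward. `ZoomDirection`, `SteerCommand`, and the dead-zone value are hypothetical names for illustration, not project code:

```csharp
// Maps a normalized pointer position to Dasher steering:
// right of center = zoom in (select), left of center = zoom out (undo),
// a small dead zone around center = idle.
public enum ZoomDirection { Select, Undo, Idle }

public readonly struct SteerCommand
{
    public ZoomDirection Direction { get; }
    public float VerticalTarget { get; } // 0 = bottom of column, 1 = top

    public SteerCommand(ZoomDirection direction, float verticalTarget)
    {
        Direction = direction;
        VerticalTarget = verticalTarget;
    }

    public static SteerCommand FromPointer(float x, float y, float deadZone = 0.05f)
    {
        var dir = x > 0.5f + deadZone ? ZoomDirection.Select
                : x < 0.5f - deadZone ? ZoomDirection.Undo
                : ZoomDirection.Idle;
        return new SteerCommand(dir, System.Math.Clamp(y, 0f, 1f));
    }
}
```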
Dasher is a zooming interface driven by continuous pointing gestures — you steer, not type.
1. Language Model (PPM-5) predicts next-character probabilities from context
2. Characters are arranged vertically, sized proportional to their probability
3. Pointer position controls zoom: right = select (write), left = undo
4. The zoom algorithm (`OneStepTowards`) maps the target interval [Y-X, Y+X] under the pointer onto the full [0, 4096] coordinate space
5. When a character node covers the crosshair, it's selected and predictions update
6. High-probability sequences require minimal gaze movement
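Step 4 can be sketched as a per-frame interpolation: the interval under the pointer is pulled toward filling the whole [0, 4096] space, so the zoom animates rather than jumping. The easing constant and exact update rule in `DasherEngine.cs` will differ; this only demonstrates the mechanism:

```csharp
public static class ZoomStep
{
    public const int CanvasMax = 4096; // Dasher's internal coordinate range

    // Current visible interval is [low, high]; the node under the pointer
    // spans [y1, y2] and should eventually map to [0, CanvasMax].
    // Each call advances a fraction `rate` of the remaining distance,
    // so repeated calls converge smoothly on the target mapping.
    public static (double low, double high) OneStepTowards(
        double low, double high, double y1, double y2, double rate = 0.1)
    {
        return (low + rate * (y1 - low), high + rate * (y2 - high));
    }
}
```

Because each step closes a fixed fraction of the gap, zoom speed is proportional to how far the pointer pushes into a node, which is what makes the steering feel continuous.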
```
Assets/Scripts/
├── Core/
│   ├── DasherEngine.cs          # Zooming algorithm (OneStepTowards), node tree management
│   ├── DasherNode.cs            # Character tree node with probability bounds & colors
│   ├── DasherAlphabetLayout.cs  # English alphabet layout with probability-based sizing
│   ├── DasherRenderer.cs        # UI rendering with object pooling for smooth VR performance
│   └── LanguageModel.cs         # PPM order-5 prediction model (trained on English corpus)
│
├── Input/
│   ├── IDasherInput.cs          # Input abstraction interface (NormalizedPosition, IsActive)
│   ├── MouseInput.cs            # Mouse input for desktop / editor testing
│   ├── HeadGazeInput.cs         # Head gaze raycasting for Quest 2/3 and generic HMDs
│   ├── EyeTrackingInput.cs      # VIVE OpenXR native eye tracking (Focus 3/Vision)
│   └── GazeCursor.cs            # Visual gaze cursor that follows active input
│
├── DasherController.cs          # Main controller — auto-detects input, handles VR interactions
└── DasherSceneSetup.cs          # One-click runtime scene builder (canvas, UI, all wiring)
```
- Interface-based input abstraction (`IDasherInput`) — adding a new input modality (e.g., hand tracking) requires implementing a single interface
- Runtime scene construction — `DasherSceneSetup` builds the full hierarchy at Play, avoiding prefab dependency issues across Unity versions
- No hard dependency on vendor SDKs — eye tracking uses VIVE OpenXR directly; Meta SDK features are accessed via reflection, so the project compiles and runs without either SDK installed
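The reflection approach mentioned above can be sketched in plain C#: resolve a vendor type by name at runtime and degrade to `null` when the SDK assembly is absent, so the code compiles either way. The type and method names below are placeholders, not the actual Meta SDK symbols the project touches:

```csharp
using System;
using System.Reflection;

public static class OptionalSdk
{
    // Returns null when the type or its assembly is not present,
    // instead of producing a compile-time reference to the SDK.
    public static Type FindType(string assemblyQualifiedName)
        => Type.GetType(assemblyQualifiedName, throwOnError: false);

    // Invokes a public static parameterless method if the SDK exists;
    // returns null otherwise, so callers can fall back gracefully.
    public static object TryInvokeStatic(string typeName, string methodName)
    {
        var type = FindType(typeName);
        var method = type?.GetMethod(methodName,
            BindingFlags.Public | BindingFlags.Static);
        return method?.Invoke(null, null);
    }
}
```

The trade-off of this pattern is losing compile-time checking on those calls, in exchange for a project that builds cleanly on machines without the vendor package installed.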
- `File → Build Settings → Android` → Switch Platform
- Ensure `XR Plug-in Management → OpenXR` is enabled
- Player Settings → Min API Level = 29
- Build and Run (connect headset via USB)
- Install Meta XR Core SDK from Unity Package Manager
- `File → Build Settings → Android` → Switch Platform
- Ensure `XR Plug-in Management → Oculus` is checked
- Player Settings → Min API Level = 29, Target API Level = 32
- Build and Run
| Action | Control |
|---|---|
| Start / Recenter canvas | Right trigger |
| Pause / Resume | Left trigger |
| Switch tracking mode (when paused) | Primary button (A/X) |
| Adjust canvas distance | Left joystick (up/down) |
Tip: On first launch in VR, the canvas automatically positions itself in front of you. Press the right trigger at any time to recenter.
Included in the repository:
- `Assets/` — all scripts, scenes, and resources
- `Packages/manifest.json` & `packages-lock.json` — Unity package dependencies (auto-restored)
- `ProjectSettings/` — project configuration
Not included (generated locally by Unity):
- `Library/`, `Temp/`, `Obj/`, `Logs/`, `UserSettings/`
Optional SDK:
- HTC VIVE OpenXR Plugin — Required for eye tracking on VIVE devices (included via packages)
- Meta XR Core SDK — Optional; needed only for Meta-specific VR features. The project compiles and runs without it
| Component | Technology |
|---|---|
| Engine | Unity 2022.3 LTS |
| Language | C# |
| XR Framework | Unity XR Interaction Toolkit, OpenXR |
| Eye Tracking | VIVE OpenXR SDK (XR_HTC_eye_tracker) |
| UI | TextMesh Pro, Unity Canvas (World Space) |
| Language Model | PPM order-5 (custom implementation) |
| Contributor | Affiliation |
|---|---|
| Manveer Anand | IIIT Vadodara |
| Kuldip Solanki | IIIT Vadodara |
- Dasher Project — Original Dasher text entry system by the Inference Group, University of Cambridge
- dasher-web — Web implementation that inspired this VR adaptation
This project is developed as an academic project at the Indian Institute of Information Technology, Vadodara.