🥽 VRDasher

Gaze-Driven Predictive Text Entry for Virtual Reality
A hands-free text input system for VR headsets, powered by the Dasher zooming interface and real-time language prediction.

Quick Start · How It Works · Architecture · VR Deployment


Overview

VRDasher brings the Dasher predictive text entry method into immersive VR environments. Instead of typing on a virtual keyboard, users navigate through a zooming stream of characters using their gaze direction — enabling efficient, hands-free text input.

The system supports multiple input modalities and automatically selects the best one available at runtime:

| Input Mode   | Target Hardware                        | Mechanism                                    |
| ------------ | -------------------------------------- | -------------------------------------------- |
| Eye Tracking | HTC VIVE Focus 3 / Vision              | VIVE OpenXR eye gaze ray → canvas projection |
| Head Gaze    | Meta Quest 2 / 3 / Pro, any OpenXR HMD | Camera forward ray → canvas projection       |
| Mouse        | Desktop / Unity Editor                 | Screen-space cursor position                 |

No manual configuration is needed — input selection is fully automatic.
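Conceptually, the automatic selection is a priority fallback over the modes in the table above. The sketch below illustrates the idea in Python (the project itself is C#); the function name and mode strings are hypothetical, not taken from DasherController:

```python
def select_input(available):
    """Pick the best available input mode, preferring eye tracking,
    then head gaze, then mouse. `available` is a set of detected
    capability names (hypothetical labels for illustration)."""
    for mode in ("eye_tracking", "head_gaze", "mouse"):
        if mode in available:
            return mode
    raise RuntimeError("no input modality available")
```

The same shape generalizes to new modalities: extend the priority tuple and the fallback order stays explicit in one place.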


Key Features

  • Predictive Language Model — PPM order-5 character prediction trained on an English corpus; common continuations such as th→e grow to roughly 48% of the canvas, minimizing gaze effort
  • Multi-Modal Input — Seamless switching between eye tracking, head gaze, and mouse with a unified IDasherInput interface
  • Cross-Platform XR — Targets HTC VIVE Focus 3/Vision (via VIVE OpenXR SDK) and Meta Quest family (via OpenXR)
  • Runtime Scene Construction — Attach DasherSceneSetup to an empty GameObject and press Play; the entire UI hierarchy is built automatically
  • VR Interaction Controls — Joystick-based canvas distance adjustment, controller-triggered recentering, and pause/resume via VR triggers
  • Adaptive Gaze Smoothing — Configurable smoothing per input mode to reduce jitter while maintaining responsiveness
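The PPM-style prediction behind the first feature can be sketched in a few lines. This is a minimal, illustrative Python model (counts with simple longest-context backoff, no escape probabilities), not the repository's LanguageModel.cs:

```python
from collections import defaultdict

class TinyPPM:
    """Minimal PPM-style character model: count next-character
    frequencies for every context up to `order` characters long,
    then predict from the longest context seen in training."""

    def __init__(self, order=5):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        for i, ch in enumerate(text):
            for k in range(self.order + 1):
                if i - k < 0:
                    break
                # Record: after context text[i-k:i], character ch occurred.
                self.counts[text[i - k:i]][ch] += 1

    def predict(self, context):
        """Next-character probabilities from the longest matching
        context (simple backoff; the empty context always matches)."""
        for k in range(min(self.order, len(context)), -1, -1):
            ctx = context[len(context) - k:]
            if ctx in self.counts:
                total = sum(self.counts[ctx].values())
                return {c: n / total for c, n in self.counts[ctx].items()}
        return {}
```

A high-probability continuation from `predict()` is what earns a character a large box on the canvas.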

Quick Start

Prerequisites

  • Unity Editor: 2022.3.62f3 (LTS) — other 2022.3.x versions should work
  • Git for cloning

Running in the Editor (No VR Headset Needed)

1. Clone the repository
   git clone https://github.com/ManveerAnand/VRDasher.git

2. Open Unity Hub → Add project from disk → select the VRDasher/ folder

3. Let Unity restore packages on first open (requires internet)

4. Open (or create) a scene, add an empty GameObject, attach DasherSceneSetup

5. Press Play

Mouse controls:

| Action            | Control              |
| ----------------- | -------------------- |
| Select characters | Move mouse right     |
| Undo / backtrack  | Move mouse left      |
| Choose character  | Move mouse up / down |
| Clear all text    | Backspace            |
| Pause / Resume    | Space                |

How It Works

Dasher is a zooming interface driven by continuous pointing gestures — you steer, not type.

1. Language Model (PPM-5) predicts next-character probabilities from context
2. Characters are arranged vertically, sized proportional to their probability
3. Pointer position controls zoom: right = select (write), left = undo
4. The zoom algorithm (OneStepTowards) maps the target interval [Y-X, Y+X] onto the full canvas range [0, 4096]
5. When a character node covers the crosshair, it's selected and predictions update
6. High-probability sequences require minimal gaze movement
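Step 4 can be made concrete with a small sketch. With the pointer at (X, Y), the interval [Y-X, Y+X] should eventually fill the whole [0, 4096] canvas; each frame moves only a fraction of the way there. The Python below illustrates that idea and is not the repository's OneStepTowards implementation:

```python
CANVAS_MAX = 4096  # Dasher's fixed-point canvas height

def one_step_towards(low, high, speed=0.1):
    """One zoom step: interpolate the target interval [low, high] a
    fraction `speed` of the way toward the full canvas [0, CANVAS_MAX].
    Iterating this converges on the affine map that fills the canvas."""
    new_low = low * (1 - speed)                           # toward 0
    new_high = high * (1 - speed) + CANVAS_MAX * speed    # toward 4096
    return new_low, new_high
```

Pointing further right (larger X) widens [Y-X, Y+X] and so zooms faster; pointing at the centerline (X near 0) holds the view still.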

Architecture

Assets/Scripts/
├── Core/
│   ├── DasherEngine.cs          # Zooming algorithm (OneStepTowards), node tree management
│   ├── DasherNode.cs            # Character tree node with probability bounds & colors
│   ├── DasherAlphabetLayout.cs  # English alphabet layout with probability-based sizing
│   ├── DasherRenderer.cs        # UI rendering with object pooling for smooth VR performance
│   └── LanguageModel.cs         # PPM order-5 prediction model (trained on English corpus)
│
├── Input/
│   ├── IDasherInput.cs          # Input abstraction interface (NormalizedPosition, IsActive)
│   ├── MouseInput.cs            # Mouse input for desktop / editor testing
│   ├── HeadGazeInput.cs         # Head gaze raycasting for Quest 2/3 and generic HMDs
│   ├── EyeTrackingInput.cs      # VIVE OpenXR native eye tracking (Focus 3/Vision)
│   └── GazeCursor.cs            # Visual gaze cursor that follows active input
│
├── DasherController.cs          # Main controller — auto-detects input, handles VR interactions
└── DasherSceneSetup.cs          # One-click runtime scene builder (canvas, UI, all wiring)

Design Decisions

  • Interface-based input abstraction (IDasherInput) — Adding a new input modality (e.g., hand tracking) requires implementing a single interface
  • Runtime scene construction — DasherSceneSetup builds the full hierarchy at Play, avoiding prefab dependency issues across Unity versions
  • No hard dependency on vendor SDKs — Eye tracking uses VIVE OpenXR directly; Meta SDK features are accessed via reflection, so the project compiles and runs without either SDK installed
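The interface-based abstraction in the first point can be pictured as follows. This is a Python analogue of the C# IDasherInput interface (which exposes NormalizedPosition and IsActive, per the architecture listing above); the class names here are hypothetical:

```python
from abc import ABC, abstractmethod

class DasherInputLike(ABC):
    """Python analogue of IDasherInput: any input modality only has
    to report whether it is active and where it points, in
    normalized canvas coordinates."""

    @property
    @abstractmethod
    def is_active(self) -> bool: ...

    @abstractmethod
    def normalized_position(self) -> tuple: ...

class FixedPointerInput(DasherInputLike):
    """Hypothetical modality for illustration: a pointer pinned at
    one normalized position."""

    def __init__(self, x=0.5, y=0.5):
        self._x, self._y = x, y

    @property
    def is_active(self):
        return True

    def normalized_position(self):
        return (self._x, self._y)
```

A hand-tracking modality, for example, would be one more class implementing the same two members; the engine never needs to know which device is driving it.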

VR Deployment

Building for HTC VIVE Focus 3 / Vision

  1. File → Build Settings → Android → Switch Platform
  2. Ensure XR Plug-in Management → OpenXR is enabled
  3. Player Settings → Min API Level = 29
  4. Build and Run (connect headset via USB)

Building for Meta Quest

  1. Install Meta XR Core SDK from Unity Package Manager
  2. File → Build Settings → Android → Switch Platform
  3. Ensure XR Plug-in Management → Oculus is checked
  4. Player Settings → Min API Level = 29, Target = 32
  5. Build and Run

VR Controls

| Action                             | Control                 |
| ---------------------------------- | ----------------------- |
| Start / Recenter canvas            | Right trigger           |
| Pause / Resume                     | Left trigger            |
| Switch tracking mode (when paused) | Primary button (A/X)    |
| Adjust canvas distance             | Left joystick (up/down) |

Tip: On first launch in VR, the canvas automatically positions itself in front of you. Press the right trigger at any time to recenter.


Dependency Notes

Included in the repository:

  • Assets/ — all scripts, scenes, and resources
  • Packages/manifest.json & packages-lock.json — Unity package dependencies (auto-restored)
  • ProjectSettings/ — project configuration

Not included (generated locally by Unity):

  • Library/, Temp/, Obj/, Logs/, UserSettings/

Optional SDK:

  • HTC VIVE OpenXR Plugin — Required for eye tracking on VIVE devices (included via packages)
  • Meta XR Core SDK — Optional; needed only for Meta-specific VR features. The project compiles and runs without it

Tech Stack

| Component      | Technology                             |
| -------------- | -------------------------------------- |
| Engine         | Unity 2022.3 LTS                       |
| Language       | C#                                     |
| XR Framework   | Unity XR Interaction Toolkit, OpenXR   |
| Eye Tracking   | VIVE OpenXR SDK (XR_HTC_eye_tracker)   |
| UI             | TextMesh Pro, Unity Canvas (World Space) |
| Language Model | PPM order-5 (custom implementation)    |

Contributors

  • Manveer Anand — IIIT Vadodara
  • Kuldip Solanki — IIIT Vadodara

Acknowledgments

  • Dasher Project — Original Dasher text entry system by the Inference Group, University of Cambridge
  • dasher-web — Web implementation that inspired this VR adaptation

License

This project is developed as an academic project at the Indian Institute of Information Technology, Vadodara.