
A1: GF16 mantissa rounding — truncation vs round-to-nearest #545

@gHashTag

Description


Audit Finding A1 — GF16 mantissa rounding

Source: R5 audit of ffi/src/lib.rs
Priority: High — affects numerical accuracy claims

Problem

In encode_gf16_from_u32() (ffi/src/lib.rs line ~55):

// Truncate mantissa: 23 bits → 9 bits (right shift by 14)
let gf16_mant: u16 = (mant >> 14) as u16;

This is truncation, not round-to-nearest-even (IEEE 754 default). Truncation introduces systematic bias: all values round down, never up.
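A minimal standalone sketch of the truncating path (function name hypothetical; only the shift amount comes from the snippet above) makes the one-sided error visible:

```rust
// Hypothetical standalone version of the truncation in the snippet:
// drop the low 14 bits of a 23-bit IEEE 754 single mantissa.
fn truncate_mant(mant: u32) -> u16 {
    (mant >> 14) as u16
}

fn main() {
    // The discarded 14-bit tail is lost entirely, so the quantized
    // mantissa is never larger than the true one.
    let almost_two = (1 << 14) | 0x3FFF; // 1 plus a tail of ~0.99994 ULP
    assert_eq!(truncate_mant(almost_two), 1); // rounds down, never up
    println!("{}", truncate_mant(almost_two));
}
```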

Impact

  • Systematic negative bias in quantization error
  • φ-distance measurements in BENCH-007 may be slightly off
  • Inconsistency with IEEE 754 semantics
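The size of that bias is easy to bound: averaged over all possible discarded 14-bit tails, truncation loses just under 0.5 ULP, always in the same direction. A quick standalone check (not project code):

```rust
fn main() {
    let n: u64 = 1 << 14; // number of possible discarded 14-bit tails
    let total: u64 = (0..n).sum(); // sum of all tail values
    // Mean discarded tail, in units of one kept-mantissa ULP.
    let mean_ulp = total as f64 / n as f64 / n as f64;
    assert!((mean_ulp - 0.5).abs() < 1e-4); // ~0.49997 ULP, always downward
    println!("mean downward error ≈ {:.5} ULP", mean_ulp);
}
```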

Expected fix

// Round to nearest (ties rounding up): add the first discarded bit
let round_bit = (mant >> 13) & 1;
let gf16_mant: u16 = ((mant >> 14) as u16) + round_bit as u16;

Note: this is round-half-up rather than IEEE 754 ties-to-even, and the addition can carry out of 9 bits when the kept mantissa bits are all ones, so a complete fix must also propagate that carry into the exponent.
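If the goal is full IEEE 754 semantics rather than just removing the bias, round-to-nearest-even also needs the sticky bits and the kept LSB. A hedged standalone sketch (function name hypothetical; only the shift amounts come from the snippet above), including the carry that rounding up can produce:

```rust
// Hedged sketch: round-to-nearest-even for the 23→9-bit cut.
fn round_mant_rne(mant: u32) -> u16 {
    let truncated = (mant >> 14) as u16; // keep top 9 bits
    let round_bit = (mant >> 13) & 1;    // first discarded bit
    let sticky = (mant & 0x1FFF) != 0;   // any lower discarded bit set
    let lsb = truncated & 1;             // parity of the kept mantissa
    // Round up on > 0.5 ULP, or on exactly 0.5 ULP when the kept LSB is odd.
    if round_bit == 1 && (sticky || lsb == 1) {
        truncated + 1 // may carry to 0x200; caller must bump the exponent
    } else {
        truncated
    }
}

fn main() {
    assert_eq!(round_mant_rne(0x2000), 0); // exactly 0.5 ULP, even → down
    assert_eq!(round_mant_rne(0x6000), 2); // exactly 0.5 ULP, odd → up
    assert_eq!(round_mant_rne(0x2001), 1); // just above 0.5 ULP → up
    println!("ok");
}
```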

Evidence

Comment says "Truncate mantissa" — explicit truncation, not rounding.


φ² + φ⁻² = 3 · R5-AUDIT · A1

Metadata

Assignees: none
Labels: bug, ffi
Projects: none
Milestone: none