Francesco Maiomascio edited this page Jan 19, 2026 · 5 revisions

ICE Foundation exists because intelligent systems become unsafe the moment their execution semantics are implicit.

Most AI discussion focuses on optimizing inference: better models, better prompts, better agents.

But when a system:

  • runs for long periods
  • accumulates state
  • adapts over time
  • produces side effects
  • interacts with other systems

the dominant failure mode is no longer “bad answers”.

It is ungoverned execution.

Authority leaks.
State becomes irreproducible.
Recovery becomes guesswork.
Behavior becomes uninspectable.

ICE Foundation defines the axioms, structural invariants, and explicit boundaries that must hold for an ICE-compliant system to exist at all.

This is not a tooling problem.
It is a systems problem.


What You Will Find Here

This repository contains normative foundations only:

  • Axioms
    Foundational assumptions under which execution remains governable.

  • Structural Invariants
    Properties that must never be violated across implementations.

  • Boundaries
    Explicit statements of what the Foundation does not define or control.
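The Foundation deliberately stops short of code, but it can help to see what a structural invariant would mean downstream. The following is a hypothetical sketch only, not part of ICE: every name in it (`GovernedState`, `transition`, the "every state change carries an explicit authority" invariant) is invented for illustration. An implementation honoring such an invariant might refuse any state transition that lacks an attributable authority, keeping history reproducible and inspectable.

```python
# Hypothetical illustration only: ICE Foundation defines no code.
# Sketch of one possible structural invariant, enforced at runtime:
# "no state change without an explicit, named authority."

from dataclasses import dataclass, field


@dataclass
class GovernedState:
    value: dict = field(default_factory=dict)
    # Attributable history: every accepted transition is logged,
    # so state remains reproducible from the log.
    log: list = field(default_factory=list)

    def transition(self, key, new_value, authority):
        # Invariant check: reject ungoverned (unattributed) transitions.
        if not authority:
            raise PermissionError("ungoverned transition rejected")
        self.log.append((authority, key, new_value))
        self.value[key] = new_value


s = GovernedState()
s.transition("mode", "active", authority="operator:alice")
# The log now attributes the change: [("operator:alice", "mode", "active")]
```

A transition with an empty `authority` raises `PermissionError` instead of silently mutating state; that refusal, not the particular class design, is the point of the invariant.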

If you are looking for implementation, execution semantics, or runtime behavior, this is the wrong repository by design.
