Current AI voice systems fall into an Uncanny Valley of expression. Most rely on rigid emotion labels—happy, sad, angry—that flatten human affect into static presets and fail to respond meaningfully to context.
For interactive media, this creates a dead end:
Manual Voice Acting
Thousands of hours of branching recordings that are expensive, brittle, and still poorly aligned with moment-to-moment gameplay.
Flat Generative Voice
Synthetic speech that sounds human, but lacks the internal “emotional physics” required to track narrative tension, player agency, or evolving character state.
The result is dialogue that reacts to scripts, not to experience.
The Expressive Shell System (ESS) replaces discrete emotion tags with a continuous emotional manifold.
Rather than selecting a preset, ESS navigates emotional space. Grounded in established affective science (the Valence, Arousal, and Dominance dimensional model, or VAD), the system represents emotion as position and movement within a low-dimensional geometry. This enables characters to evolve naturally with their environment.
ESS captures the expressive in-betweens: restrained delivery, conflicted states, rising urgency, and controlled intensity—subtleties that make performances feel intentional rather than triggered.
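As a sketch of the idea, emotion-as-position-and-movement can be modeled as a VAD point that carries a velocity, so a character drifts through emotional space rather than snapping between presets. The class and field names below are illustrative, not the ESS API.

```python
from dataclasses import dataclass

@dataclass
class EmotionalState:
    """A point on a continuous Valence-Arousal-Dominance (VAD) manifold.

    Each axis is kept in [-1, 1]. The state also carries a velocity,
    so expression evolves smoothly over time instead of jumping
    between labeled presets. Illustrative sketch, not ESS's code.
    """
    valence: float = 0.0     # distress .. pleasure
    arousal: float = 0.0     # calm .. excited
    dominance: float = 0.0   # submissive .. in control
    velocity: tuple = (0.0, 0.0, 0.0)  # drift per unit time on each axis

    def step(self, dt: float) -> "EmotionalState":
        """Advance the state along its velocity, clamped to the VAD cube."""
        clamp = lambda x: max(-1.0, min(1.0, x))
        dv, da, dd = self.velocity
        return EmotionalState(
            valence=clamp(self.valence + dv * dt),
            arousal=clamp(self.arousal + da * dt),
            dominance=clamp(self.dominance + dd * dt),
            velocity=self.velocity,
        )
```

For example, a character with rising urgency might hold a steady negative valence while arousal climbs each tick: `EmotionalState(valence=-0.4, arousal=0.2, velocity=(0.0, 0.5, 0.0)).step(0.5)` yields an arousal of 0.45.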
ESS functions as a real-time research controller bridging interactive signals and generative voice.
Dynamic Context Integration
Dialogue text is interpreted alongside live game telemetry—character health, proximity to danger, combat intensity, and narrative weight.
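One minimal way to picture this integration is a function that folds normalized telemetry into a target VAD coordinate. The signal names match those above, but the weights and the mapping itself are illustrative assumptions, not tuned ESS values.

```python
def emotional_target(health: float, danger: float,
                     combat: float, narrative_weight: float):
    """Map live telemetry (each normalized to [0, 1]) to a target VAD point.

    Illustrative heuristic: low health and nearby danger pull valence
    down, threat and combat intensity raise arousal, and narrative
    weight scales how strongly the moment asserts dominance.
    """
    clamp = lambda x: max(-1.0, min(1.0, x))
    valence = health - 0.8 * danger
    arousal = 0.6 * danger + 0.7 * combat
    dominance = (health - danger) * narrative_weight
    return clamp(valence), clamp(arousal), clamp(dominance)
```

A wounded character cornered mid-fight, e.g. `emotional_target(health=0.2, danger=0.9, combat=1.0, narrative_weight=0.5)`, lands at negative valence, saturated arousal, and low dominance, which is the coordinate the voice then performs from.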
Dimensional Decoupling
Emotional direction (position on the sphere) is separated from expressive intensity (energy), allowing controlled combinations such as high dominance with low intensity or high arousal under restraint.
Procedural Expressivity
Decoupling direction from intensity makes it possible to produce nuanced states like quiet fury or anxious whispering directly from gameplay context, without authored emotion states or pre-baked variants.
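The decoupling above can be sketched as splitting a VAD vector into a unit direction on the sphere and a scalar intensity, then recombining them freely. Function names are illustrative, not the ESS API.

```python
import math

def decouple(vad):
    """Split a VAD vector into direction (unit vector) and intensity (length)."""
    norm = math.sqrt(sum(c * c for c in vad))
    if norm == 0.0:
        return (0.0, 0.0, 0.0), 0.0  # neutral: no direction, no energy
    return tuple(c / norm for c in vad), norm

def recombine(direction, intensity):
    """Rebuild a VAD vector from an emotional direction and a chosen energy.

    Keeping an angry direction but dialing intensity down is the
    'quiet fury' case: same position on the sphere, restrained energy.
    """
    return tuple(c * intensity for c in direction)
```

Because direction and intensity are independent controls, a high-dominance direction can be performed at intensity 0.2 just as easily as at 0.9, which is exactly the restrained-delivery case discrete tags cannot express.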
These control coordinates guide prosody, timbre, and pacing during synthesis, producing performances that adapt continuously to player actions—not just to authored lines.
Q4 2025
Finalization of seamless emotional transition logic (spherical interpolation / Slerp-based morphing).
Q1 2026
Internal research prototype and experimental developer tooling for Unity, Unreal, and Godot.
H2 2026
Public research showcase within playable slices of Curse of the Dragonguard.
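The Slerp-based transition logic named in the Q4 2025 milestone follows the standard spherical-interpolation formula; the sketch below assumes unit-length emotional directions and is a generic reference implementation, not ESS's actual transition code.

```python
import math

def slerp(p, q, t):
    """Spherical linear interpolation between unit directions p and q.

    t in [0, 1] walks along the great-circle arc from p to q at
    constant angular speed, so an emotional transition never cuts
    through the interior of the sphere. Falls back to linear
    interpolation when the directions are nearly parallel.
    """
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(dot)  # angle between the two directions
    if theta < 1e-6:
        return tuple(a + (b - a) * t for a, b in zip(p, q))
    s = math.sin(theta)
    w1 = math.sin((1.0 - t) * theta) / s
    w2 = math.sin(t * theta) / s
    return tuple(w1 * a + w2 * b for a, b in zip(p, q))
```

Halfway between two orthogonal directions, e.g. `slerp((1, 0, 0), (0, 1, 0), 0.5)`, the result stays on the unit sphere rather than shrinking toward the origin as plain linear interpolation would.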
The Expressive Shell System is a foundational research framework for expressive control in interactive systems. We are seeking research collaborators interested in affective representation, generative voice, and real-time expressive modeling.
For Interactive Media Researchers
Explore continuous affect, uncertainty, and control geometry in generative performance.
For Narrative & Systems Designers
Investigate procedural performance as a first-class design signal.
Patent Status
ESS is Patent Pending
(U.S. Patent Office — Filed September 5, 2025)
Contact
Justin Sabatini (justin_sabatini@snazzygamesinc.com)
Principal Investigator
Snazzy Games Inc.