# Why Rust?
Oxyde is built from the ground up in Rust, and that decision is central to its performance, safety, and long-term extensibility. Here’s why we chose Rust as the foundation for emotionally intelligent game agents.
## 🧠 1. Performance for Real-Time Games

- Inference latency, memory recall, and emotion updates must happen within milliseconds.
- Rust’s zero-cost abstractions and control over memory let us build async pipelines that are both fast and predictable.
- Avoids the GC pauses and runtime bottlenecks common in Python or JS.

Real-time game logic can’t afford surprises. Rust delivers tight control.
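A minimal sketch of that idea using only the standard library: emotion updates flow through a channel to a worker thread, off the main game loop, with no garbage collector to pause either side. The `EmotionEvent` type and `run_pipeline` function are illustrative names, not Oxyde's actual API.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Instant;

// Hypothetical emotion-update event; Oxyde's real types will differ.
struct EmotionEvent {
    delta_joy: f32,
}

// Drain `n` events through a channel on a worker thread and return the
// accumulated joy value. Send and receive are both allocation-light, and
// no GC can interrupt either side mid-frame.
fn run_pipeline(n: u32, delta_joy: f32) -> f32 {
    let (tx, rx) = mpsc::channel::<EmotionEvent>();
    let worker = thread::spawn(move || {
        let mut joy = 0.0_f32;
        for event in rx {
            joy += event.delta_joy;
        }
        joy
    });
    for _ in 0..n {
        tx.send(EmotionEvent { delta_joy }).unwrap();
    }
    drop(tx); // closing the channel ends the worker's loop
    worker.join().unwrap()
}

fn main() {
    let start = Instant::now();
    let joy = run_pipeline(1_000, 0.001);
    println!("joy = {joy:.3} in {:?}", start.elapsed());
}
```

In a real engine the channel would be bounded and the worker async (e.g. under tokio), but the latency characteristics are the point: the cost model is explicit, with no runtime surprises.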
## 🧯 2. Safety Without Sacrificing Speed

- The borrow checker rules out data races and use-after-free bugs at compile time.
- Emotion state, memory stacks, and LLM streams are protected by design.
- Enables multi-threaded agent execution without crashing your game engine.
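A small sketch of what "protected by design" means in practice, assuming a hypothetical `EmotionState` type (not Oxyde's real one): several agent threads mutate shared state through `Arc<Mutex<_>>`, and removing the `Mutex` would be a compile error rather than a runtime crash.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical shared emotion state; illustrative only.
#[derive(Default)]
struct EmotionState {
    joy: f32,
    fear: f32,
}

// Spawn `n_agents` threads that all mutate the same state. Without the
// Mutex this would not compile: the borrow checker rejects
// unsynchronized shared mutation at compile time, not at runtime.
fn simulate_agents(n_agents: u32) -> (f32, f32) {
    let state = Arc::new(Mutex::new(EmotionState::default()));
    let handles: Vec<_> = (0..n_agents)
        .map(|_| {
            let state = Arc::clone(&state);
            thread::spawn(move || {
                let mut s = state.lock().unwrap();
                s.joy += 0.25;
                s.fear += 0.1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let s = state.lock().unwrap();
    (s.joy, s.fear)
}

fn main() {
    let (joy, fear) = simulate_agents(4);
    println!("joy = {joy:.2}, fear = {fear:.2}");
}
```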
## 🧩 3. Native Interop With Game Engines

Rust compiles easily to shared libraries (`.dll`, `.so`, `.dylib`), used directly by:

- Unity (via FFI and `DllImport`)
- Unreal Engine (via C++ bindings)
- WebAssembly for browser runtimes
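What the engine-facing side of such a library looks like, as a sketch: a C-ABI export that Unity's `DllImport` or Unreal C++ can call directly. The function name and signature are illustrative, not Oxyde's actual API; the crate would be built with `crate-type = ["cdylib"]` in `Cargo.toml`.

```rust
/// Advance an agent's emotion value by `delta`, clamped to [0.0, 1.0].
/// `#[no_mangle]` and `extern "C"` give the symbol a stable C ABI, so
/// C, C# (via DllImport), or Unreal C++ can call it directly.
#[no_mangle]
pub extern "C" fn oxyde_update_emotion(current: f32, delta: f32) -> f32 {
    (current + delta).clamp(0.0, 1.0)
}

fn main() {
    // Exercise the exported function from Rust itself.
    let v = oxyde_update_emotion(0.9, 0.3);
    println!("clamped emotion = {v}");
}
```

On the Unity side this would be declared with `[DllImport("oxyde")]` against the compiled `.dll`/`.so`/`.dylib`.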
## 🔧 4. Extensibility With Strong Typing

- The `trait` system allows plug-and-play LLM providers and memory backends.
- Configs are deserialized safely with full schema enforcement.
- Makes the codebase maintainable and modular as the project grows.
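A std-only sketch of the plug-and-play pattern, assuming a hypothetical `LlmProvider` trait (Oxyde's real trait will differ): the engine talks to `dyn LlmProvider`, so backends swap without touching call sites.

```rust
// Hypothetical provider trait; names are illustrative.
trait LlmProvider {
    fn name(&self) -> &'static str;
    fn complete(&self, prompt: &str) -> String;
}

// Two toy backends standing in for, say, an HTTP provider and a local model.
struct EchoProvider;
struct UppercaseProvider;

impl LlmProvider for EchoProvider {
    fn name(&self) -> &'static str { "echo" }
    fn complete(&self, prompt: &str) -> String { prompt.to_string() }
}

impl LlmProvider for UppercaseProvider {
    fn name(&self) -> &'static str { "upper" }
    fn complete(&self, prompt: &str) -> String { prompt.to_uppercase() }
}

// The engine only ever sees `dyn LlmProvider`, so adding a backend means
// adding an impl, not editing call sites.
fn run(provider: &dyn LlmProvider, prompt: &str) -> String {
    format!("[{}] {}", provider.name(), provider.complete(prompt))
}

fn main() {
    let providers: Vec<Box<dyn LlmProvider>> =
        vec![Box::new(EchoProvider), Box::new(UppercaseProvider)];
    for p in &providers {
        println!("{}", run(p.as_ref(), "hello npc"));
    }
}
```

The config side of the same claim is typically handled with serde: deserializing into a concrete struct rejects malformed configs at load time instead of failing mid-game.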
## 🌍 5. WebAssembly-Ready

- Oxyde runs natively in the browser via the `wasm32-unknown-unknown` target.
- Perfect for live demos, online sandboxing, or visual novels with agent NPCs.
## 🧪 Why Not Python?
We love Python for LLM prototyping — but:
| Concern | Python | Rust |
| --- | --- | --- |
| Real-time game loop | ❌ GC-lag prone | ✅ Frame-perfect |
| Embedded in engines | ❌ Tricky bindings | ✅ Easy FFI |
| Type safety | ❌ Dynamic typing | ✅ Strong static types |
| Multi-agent perf | ❌ Slow threading | ✅ Native threading |
## 🛠️ Rust for AI Inference?

Yes — Oxyde wraps external LLMs via HTTP, but local model support (GGUF, Whisper, etc.) runs beautifully in Rust too, especially with crates like `tokenizers`, `llama-rs`, or `llm`.
