# Why Rust?

#### 🧠 1. Performance for Real-Time Games

* Inference latency, memory recall, and emotion updates must happen **within milliseconds**
* Rust’s zero-cost abstractions and control over memory let us build async pipelines that are both fast and predictable
* Avoids GC pauses or runtime bottlenecks common in Python or JS

> Real-time game logic can’t afford surprises. Rust delivers tight control.
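
A minimal sketch of that idea, using only the standard library (the `EmotionUpdate` type and timings are illustrative, not Oxyde's actual API): inference runs on a worker thread while the game loop polls with `try_recv`, so a frame is never blocked waiting on a model.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical payload: one agent's refreshed emotion intensity.
struct EmotionUpdate { intensity: f32 }

// Run "inference" off-thread; poll for the result without blocking the game loop.
fn run_pipeline() -> f32 {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(5)); // stand-in for a real model call
        let _ = tx.send(EmotionUpdate { intensity: 0.8 });
    });

    let deadline = Instant::now() + Duration::from_millis(200);
    loop {
        match rx.try_recv() {
            Ok(update) => return update.intensity, // apply the result this frame
            Err(mpsc::TryRecvError::Empty) if Instant::now() < deadline => {
                thread::sleep(Duration::from_millis(1)); // a frame would render here
            }
            _ => panic!("inference worker missed its deadline"),
        }
    }
}

fn main() {
    let intensity = run_pipeline();
    assert!((intensity - 0.8).abs() < 1e-6);
}
```

In a production pipeline the channel would typically be an async one (e.g. `tokio::sync::mpsc`), but the shape is the same: the hot loop only ever does a non-blocking check.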

***

#### 🧯 2. Safety Without Sacrificing Speed

* Ownership and the borrow checker rule out **data races and use-after-free bugs at compile time**
* Emotion state, memory stacks, and LLM streams are protected by design
* Enables multi-threaded agent execution without crashing your game engine
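
A small standard-library sketch of what "protected by design" means in practice (the `EmotionState` type is an assumption for illustration): shared agent state crosses threads only through `Arc<Mutex<...>>`, so forgetting the lock is a compile error rather than a runtime crash.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Illustrative shared state; field names are not Oxyde's real API.
#[derive(Default)]
struct EmotionState { joy: f32 }

// Spawn N agent threads that all mutate the same state safely.
fn run_agents(threads: usize) -> f32 {
    let state = Arc::new(Mutex::new(EmotionState::default()));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let state = Arc::clone(&state);
            thread::spawn(move || {
                // The Mutex guard is the only way in: an unsynchronized
                // write here would not compile.
                state.lock().unwrap().joy += 0.1;
            })
        })
        .collect();
    for h in handles { h.join().unwrap(); }
    let joy = state.lock().unwrap().joy;
    joy
}

fn main() {
    let joy = run_agents(4);
    assert!((joy - 0.4).abs() < 1e-6);
}
```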

***

#### 🧩 3. Native Interop With Game Engines

* Rust compiles to shared libraries (`.dll`, `.so`, `.dylib`) easily
* Used directly by:
  * Unity (via FFI and `DllImport`)
  * Unreal Engine (via C++ bindings)
  * WebAssembly for browser runtimes
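
The export side of that interop is a few lines of Rust. The function below is a hypothetical example, not one of Oxyde's real exports; the pattern is what matters: a C-ABI symbol that any engine can load from the compiled `cdylib`.

```rust
// In Cargo.toml:  [lib] crate-type = ["cdylib"]

// Hypothetical export: decay an emotion intensity over a frame's delta time.
#[no_mangle]
pub extern "C" fn oxyde_emotion_decay(intensity: f32, dt_seconds: f32) -> f32 {
    // Plain exponential decay; callable from any C-ABI host.
    intensity * (-dt_seconds).exp()
}

// Unity side (C#), for reference:
//   [DllImport("oxyde")]
//   static extern float oxyde_emotion_decay(float intensity, float dtSeconds);

fn main() {
    // With dt = 0 no decay occurs, so the intensity is unchanged.
    let decayed = oxyde_emotion_decay(1.0, 0.0);
    assert!((decayed - 1.0).abs() < 1e-6);
}
```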

***

#### 🔧 4. Extensibility With Strong Typing

* `trait` system allows plug-and-play LLM providers and memory backends
* Configs are deserialized safely with full schema enforcement
* Makes the codebase maintainable and modular as the project grows
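
A sketch of the plug-and-play idea (trait and type names are assumptions, and the two backends are toy stand-ins): callers depend only on the trait, so providers swap behind a `Box<dyn ...>` without touching call sites. Config deserialization would layer on top of this, typically via serde's `#[derive(Deserialize)]`.

```rust
// Hypothetical provider abstraction, not Oxyde's real trait.
trait LlmProvider {
    fn complete(&self, prompt: &str) -> String;
}

struct EchoProvider;      // stand-in for a local model backend
struct UppercaseProvider; // stand-in for a remote HTTP backend

impl LlmProvider for EchoProvider {
    fn complete(&self, prompt: &str) -> String { prompt.to_string() }
}
impl LlmProvider for UppercaseProvider {
    fn complete(&self, prompt: &str) -> String { prompt.to_uppercase() }
}

// A config string selects the backend; callers never name a concrete type.
fn pick_provider(name: &str) -> Box<dyn LlmProvider> {
    match name {
        "uppercase" => Box::new(UppercaseProvider),
        _ => Box::new(EchoProvider),
    }
}

fn main() {
    assert_eq!(pick_provider("uppercase").complete("hi"), "HI");
    assert_eq!(pick_provider("echo").complete("hi"), "hi");
}
```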

***

#### 🌍 5. WebAssembly-Ready

* Oxyde compiles to WebAssembly and runs **in the browser** via the `wasm32-unknown-unknown` target
* Perfect for live demos, online sandboxing, or visual novels with agent NPCs

***

#### 🧪 Why Not Python?

We love Python for LLM prototyping — but:

| Requirement         | Python            | Rust (Oxyde)          |
| ------------------- | ----------------- | --------------------- |
| Real-time game loop | ❌ Interpreter overhead | ✅ Frame-perfect |
| Embedded in engines | ❌ Tricky bindings | ✅ Easy FFI            |
| Type safety         | ❌ Dynamic typing  | ✅ Strong static types |
| Multi-agent perf    | ❌ GIL-bound threads | ✅ Native threads     |

***

#### 🛠️ Rust for AI Inference?

Yes. Oxyde wraps external LLMs via HTTP, but local model support (GGUF, Whisper, etc.) runs beautifully in Rust too, especially with libraries like `tokenizers` or `llm` (formerly `llama-rs`).
