Agent Architecture
Dive into the core building blocks of Oxyde agents — including memory recall, emotion modeling, goal prioritization, and inference routing — and how these systems interact to create emergent behavior.
🧠 Overview: How Agents Work
Oxyde agents are LLM-powered actors that:
Maintain internal state (emotion, memory, goals)
Perceive external input (chat, events, player actions)
Dynamically construct prompts from their state
Query a selected LLM for a response
Emit structured actions, emotions, or replies
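Concretely, that per-agent state and I/O might be modeled roughly like this. This is a minimal Rust sketch; all type and field names are illustrative assumptions, not the actual Oxyde API, and each piece is expanded on in the sections below.

```rust
// Illustrative sketch only; all names are assumptions, not the Oxyde API.
struct Agent {
    emotion: [f32; 6],         // 6D emotional vector (see "Emotional State")
    memories: Vec<String>,     // recallable experiences (see "Memory System")
    goals: Vec<(String, f32)>, // (description, priority) pairs (see "Goal Engine")
}

/// External input the agent perceives.
enum Percept {
    Chat { speaker: String, text: String },
    Event(String),
    PlayerAction(String),
}

/// Structured output the agent emits after querying the LLM.
enum AgentOutput {
    Reply(String),
    Emotion(String),
    Action(String),
}
```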
🔁 The Agent Loop
Each frame or turn, an agent steps through the following pipeline:
[Input Received]
↓
[Update Emotion State]
↓
[Recall Relevant Memories]
↓
[Reweight & Reprioritize Goals]
↓
[Construct Prompt]
↓
[Route to LLM]
↓
[Parse & Act on Response]
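One way to express that pipeline in code is as a trait whose default tick method mirrors the diagram above. This is a sketch under assumed names; every method is a placeholder for the corresponding subsystem, not the real API.

```rust
// Sketch of the per-turn pipeline as a trait; each method is a placeholder
// for an Oxyde subsystem and none of these names come from the real API.
trait AgentLoop {
    type Input;
    type Output;

    fn update_emotion(&mut self, input: &Self::Input);
    fn recall(&self, input: &Self::Input, k: usize) -> Vec<String>;
    fn reprioritize(&mut self, recalled: &[String]);
    fn build_prompt(&self, input: &Self::Input, recalled: &[String]) -> String;
    fn query_llm(&self, prompt: &str) -> String;
    fn parse(&self, raw: &str) -> Self::Output;

    /// Default turn: the same order as the diagram above.
    fn tick(&mut self, input: Self::Input) -> Self::Output {
        self.update_emotion(&input);                        // 1. update emotion state
        let recalled = self.recall(&input, 5);              // 2. recall relevant memories (top-K)
        self.reprioritize(&recalled);                       // 3. reweight & reprioritize goals
        let prompt = self.build_prompt(&input, &recalled);  // 4. construct prompt
        let raw = self.query_llm(&prompt);                  // 5. route to the selected LLM
        self.parse(&raw)                                    // 6. parse & act on the response
    }
}
```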
🧠 Emotional State
Agents maintain a 6D emotional vector:
Curiosity: Positive
Trust: Positive
Anger: Negative
Fear: Negative
Happiness: Positive
Energy: Neutral/modulatory
This vector:
Decays naturally over time
Is influenced by inputs or LLM responses
Shapes tone and memory recall bias
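As a rough illustration, the vector and its decay could look like the sketch below. The dimension names follow the list above; the value range, the half-life decay rule, and everything else are assumptions.

```rust
// Illustrative sketch; dimension names come from the list above, the
// decay rule and value semantics are assumptions.
#[derive(Debug, Clone, Copy, Default)]
struct EmotionState {
    curiosity: f32,
    trust: f32,
    anger: f32,
    fear: f32,
    happiness: f32,
    energy: f32, // neutral/modulatory: scales expressiveness rather than valence
}

impl EmotionState {
    /// Exponential decay toward a neutral baseline, applied each update.
    fn decay(&mut self, dt_secs: f32, half_life_secs: f32) {
        let f = 0.5_f32.powf(dt_secs / half_life_secs);
        for v in [
            &mut self.curiosity, &mut self.trust, &mut self.anger,
            &mut self.fear, &mut self.happiness, &mut self.energy,
        ] {
            *v *= f;
        }
    }
}
```

Inputs or parsed LLM responses would then add deltas onto individual dimensions between decay steps, which is where the tone and recall bias come from.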
📘 Memory System
Each memory has:
content: what happened
tags: emotional tags (e.g., "trust", "fear")
importance: how strongly the memory is weighted
recency: timestamp for freshness
At runtime, top-K memories are recalled and injected into prompts. Memory can be:
Short-term (session)
Long-term (persistent, WIP)
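A minimal sketch of a memory record and a top-K recall pass might look like this. Field names mirror the list above; the scoring formula and freshness window are assumptions.

```rust
// Illustrative memory record plus a simple top-K recall pass.
struct Memory {
    content: String,   // what happened
    tags: Vec<String>, // emotional tags, e.g. "trust", "fear"
    importance: f32,   // weight of the memory
    recency: u64,      // timestamp (e.g. seconds since session start)
}

/// Return the indices of the top-K memories to inject into the prompt,
/// scored by importance with a simple freshness bonus (assumed formula).
fn recall_top_k(memories: &[Memory], now: u64, k: usize) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = memories
        .iter()
        .enumerate()
        .map(|(i, m)| {
            let age = now.saturating_sub(m.recency) as f32;
            let freshness = 1.0 / (1.0 + age / 600.0); // fades over ~10 minutes
            (i, m.importance * freshness)
        })
        .collect();
    scored.sort_by(|a, b| b.1.total_cmp(&a.1));
    scored.into_iter().take(k).map(|(i, _)| i).collect()
}
```

In practice the current emotional state could also bias this score, for example by boosting memories whose tags match the agent's dominant emotion, which is how the recall bias mentioned above would come in.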
🎯 Goal Engine
Agents have a list of goal objects with:
description
priority (float)
optional trigger (event or condition)
Goals can be reprioritized dynamically based on:
New memories
Player interaction
Emotional state
Agents behave differently when pursuing different goals — like “explore” vs. “protect”.
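A goal object and one simple form of dynamic reprioritization might look like this sketch. The trigger representation and the boost rule are assumptions.

```rust
// Illustrative goal object; field names mirror the list above.
struct Goal {
    description: String,     // e.g. "explore" or "protect"
    priority: f32,
    trigger: Option<String>, // optional event or condition name
}

/// Pick the current goal: highest priority wins.
fn active_goal(goals: &[Goal]) -> Option<&Goal> {
    goals.iter().max_by(|a, b| a.priority.total_cmp(&b.priority))
}

/// Example of dynamic reprioritization: boost any goal whose trigger
/// matches an event that just fired (new memory, player interaction, etc.).
fn on_event(goals: &mut [Goal], event: &str, boost: f32) {
    for g in goals.iter_mut() {
        if g.trigger.as_deref() == Some(event) {
            g.priority += boost;
        }
    }
}
```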
🔄 State Update Frequency
Agents can run:
Every frame (WASM, native)
Every dialogue event (Unity, Unreal)
Interval-based (idle simulation loops)
Runtime config controls this behavior per agent.
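Per-agent configuration for this could be as simple as an enum over the three modes. The variant names below are assumptions based on the list above.

```rust
// Illustrative per-agent update schedule; names are assumptions.
enum UpdateMode {
    EveryFrame,                // WASM / native game loop
    OnDialogueEvent,           // Unity / Unreal conversation hooks
    Interval { seconds: f32 }, // idle simulation loops
}

struct AgentRuntimeConfig {
    update_mode: UpdateMode,
}

/// Decide whether an agent should tick right now.
fn should_update(cfg: &AgentRuntimeConfig, dialogue_event: bool, secs_since_last: f32) -> bool {
    match cfg.update_mode {
        UpdateMode::EveryFrame => true,
        UpdateMode::OnDialogueEvent => dialogue_event,
        UpdateMode::Interval { seconds } => secs_since_last >= seconds,
    }
}
```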
