Agent Architecture
Dive into the core building blocks of Oxyde agents — including memory recall, emotion modeling, goal prioritization, and inference routing — and how these systems interact to create emergent behavior.
🧠 Overview: How Agents Work
Oxyde agents are LLM-powered actors that:
Maintain internal state (emotion, memory, goals)
Perceive external input (chat, events, player actions)
Dynamically construct prompts from their state
Query a selected LLM for a response
Emit structured actions, emotions, or replies
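To make the perceive/emit sides concrete, here is a minimal Rust sketch of what that interface could look like. The enum names and variants below are illustrative assumptions, not the actual Oxyde types.

```rust
// A minimal sketch of an agent's perceive/emit interface. These enums and
// variants are illustrative assumptions, not the actual Oxyde types.
#[derive(Debug)]
enum AgentInput {
    Chat { speaker: String, text: String }, // chat messages
    Event(String),                          // world or game events
    PlayerAction(String),                   // direct player actions
}

#[derive(Debug)]
enum AgentOutput {
    Reply(String),                                 // a spoken or written reply
    Action { name: String, args: Vec<String> },    // a structured action to execute
    EmotionShift { dimension: usize, delta: f32 }, // a nudge to the emotional state
}

fn main() {
    let input = AgentInput::Chat { speaker: "player".into(), text: "hello".into() };
    let output = AgentOutput::Reply("Greetings, traveler.".into());
    println!("{input:?} -> {output:?}");
}
```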
🔁 The Agent Loop
Each frame or turn, agents:
```
[Input Received]
    ↓
[Update Emotion State]
    ↓
[Recall Relevant Memories]
    ↓
[Reweight & Reprioritize Goals]
    ↓
[Construct Prompt]
    ↓
[Route to LLM]
    ↓
[Parse & Act on Response]
```
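Below is a compact, self-contained Rust sketch of this loop with the LLM call stubbed out. Every type, field, and weighting rule here is an illustrative stand-in rather than the real Oxyde implementation.

```rust
// A minimal sketch of the per-turn loop above; the LLM call is stubbed out and
// all names and rules are placeholders, not the actual Oxyde API.
struct Agent {
    emotion: [f32; 6],             // 6D emotional vector
    memories: Vec<(String, f32)>,  // (text, importance)
    goals: Vec<(String, f32)>,     // (description, priority)
}

fn tick(agent: &mut Agent, input: &str) -> String {
    // [Update Emotion State]: decay toward neutral (input influence omitted here).
    for e in agent.emotion.iter_mut() {
        *e *= 0.95;
    }
    // [Recall Relevant Memories]: naive top-K by importance.
    agent.memories.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    let recalled: Vec<&str> = agent.memories.iter().take(3).map(|m| m.0.as_str()).collect();
    // [Reweight & Reprioritize Goals]: keep the highest-priority goal first.
    agent.goals.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    // [Construct Prompt] from the current goal, recalled memories, and the input.
    let prompt = format!(
        "goal: {:?}\nmemories: {:?}\ninput: {}",
        agent.goals.first(),
        recalled,
        input
    );
    // [Route to LLM] and [Parse & Act on Response]: stubbed as a placeholder reply.
    format!("(reply generated from a {}-char prompt)", prompt.len())
}

fn main() {
    let mut agent = Agent {
        emotion: [0.0; 6],
        memories: vec![("met the player at the gate".into(), 0.8)],
        goals: vec![("protect the village".into(), 0.9)],
    };
    println!("{}", tick(&mut agent, "a stranger approaches"));
}
```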
🧠 Emotional State
Agents maintain a 6D emotional vector.
This vector:
Decays naturally over time
Is influenced by inputs or LLM responses
Shapes tone and memory recall bias
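Here is a small sketch of how such a decaying, input-influenced vector might be modeled. The decay and clamping rules are assumptions for illustration, not the actual Oxyde update rules.

```rust
// A minimal sketch of a decaying 6D emotion vector, under assumed decay and
// update rules; the real Oxyde implementation may differ.
struct EmotionState {
    values: [f32; 6], // one scalar per emotional dimension
    decay_rate: f32,  // fraction retained per tick
}

impl EmotionState {
    /// Natural decay applied every frame or turn.
    fn decay(&mut self) {
        for v in self.values.iter_mut() {
            *v *= self.decay_rate;
        }
    }

    /// Influence from an input or a parsed LLM response.
    fn apply(&mut self, dimension: usize, delta: f32) {
        self.values[dimension] = (self.values[dimension] + delta).clamp(-1.0, 1.0);
    }
}

fn main() {
    let mut emotion = EmotionState { values: [0.0; 6], decay_rate: 0.95 };
    emotion.apply(0, 0.6); // e.g. a trust-raising event
    emotion.decay();       // fades back toward neutral over time
    println!("{:?}", emotion.values);
}
```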
📘 Memory System
Each memory has:
tags: emotional tags (e.g., "trust", "fear")
importance: how strong its weight is
recency: timestamp for freshness
At runtime, top-K memories are recalled and injected into prompts. Memory can be:
Long-term (persistent, WIP)
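The following sketch shows top-K recall under an assumed scoring rule that combines importance, freshness, and overlap with currently active emotional tags; the actual Oxyde weighting may differ.

```rust
// A minimal sketch of top-K memory recall. The scoring formula is an
// illustrative assumption, not the actual Oxyde weighting.
struct Memory {
    text: String,
    tags: Vec<String>, // emotional tags, e.g. "trust", "fear"
    importance: f32,   // how strong its weight is
    recency: u64,      // timestamp for freshness
}

fn recall<'a>(memories: &'a [Memory], active_tags: &[&str], now: u64, k: usize) -> Vec<&'a Memory> {
    let mut scored: Vec<(f32, &Memory)> = memories
        .iter()
        .map(|m| {
            let freshness = 1.0 / (1.0 + now.saturating_sub(m.recency) as f32);
            let tag_bias = m.tags.iter().filter(|t| active_tags.contains(&t.as_str())).count() as f32;
            (m.importance * freshness + tag_bias, m)
        })
        .collect();
    scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
    scored.into_iter().take(k).map(|(_, m)| m).collect()
}

fn main() {
    let memories = vec![
        Memory { text: "player saved me".into(), tags: vec!["trust".into()], importance: 0.9, recency: 90 },
        Memory { text: "wolves at night".into(), tags: vec!["fear".into()], importance: 0.4, recency: 100 },
    ];
    for m in recall(&memories, &["trust"], 100, 1) {
        println!("recalled: {}", m.text);
    }
}
```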
🎯 Goal System
Agents have a list of goal objects, each with an optional trigger (an event or condition).
Goals can be reprioritized dynamically as the agent's state changes.
Agents behave differently when pursuing different goals — like "explore" vs. "protect".
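A sketch of goal objects with an optional trigger and a simple dynamic reprioritization rule follows; the field names and the boost amount are assumptions for illustration.

```rust
// A minimal sketch of goal objects and dynamic reprioritization; field names
// and the reweighting rule are assumptions, not the actual Oxyde API.
struct Goal {
    name: String,
    priority: f32,
    trigger: Option<String>, // event or condition that boosts the goal
}

/// Boost any goal whose trigger matches the current event, then re-sort.
fn reprioritize(goals: &mut Vec<Goal>, event: &str) {
    for g in goals.iter_mut() {
        if g.trigger.as_deref() == Some(event) {
            g.priority += 0.5;
        }
    }
    goals.sort_by(|a, b| b.priority.partial_cmp(&a.priority).unwrap());
}

fn main() {
    let mut goals = vec![
        Goal { name: "explore".into(), priority: 0.6, trigger: None },
        Goal { name: "protect".into(), priority: 0.4, trigger: Some("threat_detected".into()) },
    ];
    reprioritize(&mut goals, "threat_detected");
    println!("active goal: {}", goals[0].name); // "protect" now outranks "explore"
}
```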
🔄 State Update Frequency
Agents can run:
Every frame (WASM, native)
Every dialogue event (Unity, Unreal)
Interval-based (idle simulation loops)
Runtime config controls this behavior per agent.
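As a rough illustration, a per-agent schedule setting could be expressed as a config enum like the one below; these names are hypothetical and the real runtime config keys may differ.

```rust
// A minimal sketch of a per-agent update schedule, using a hypothetical config
// enum; the real Oxyde runtime config may differ.
enum UpdateSchedule {
    EveryFrame,                // WASM / native builds ticking each frame
    OnDialogueEvent,           // Unity / Unreal integrations reacting to dialogue
    Interval { seconds: f32 }, // idle simulation loops
}

struct AgentConfig {
    name: String,
    schedule: UpdateSchedule,
}

fn main() {
    let config = AgentConfig {
        name: "guard_01".into(),
        schedule: UpdateSchedule::Interval { seconds: 2.0 },
    };
    match config.schedule {
        UpdateSchedule::EveryFrame => println!("{} ticks every frame", config.name),
        UpdateSchedule::OnDialogueEvent => println!("{} ticks on dialogue events", config.name),
        UpdateSchedule::Interval { seconds } => println!("{} ticks every {seconds}s", config.name),
    }
}
```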