# Configuring Agents

Define and customize agents using JSON: set up personality traits, emotional parameters, memory files, LLM preferences, and voice profiles, all without writing Rust.
## 📄 Agent Config Overview

Agents are fully defined in a single `config.json` file.

Example:
```json
{
  "agents": [
    {
      "id": "velma",
      "personality": {
        "curiosity": 0.7,
        "trust": 0.4,
        "anger": 0.1
      },
      "goals": [
        {
          "id": "explore_ruins",
          "priority": 0.8
        }
      ],
      "memories": [
        {
          "content": "Player helped me escape the cave.",
          "tags": ["trust"],
          "importance": 0.9
        }
      ],
      "llm_profile": {
        "preferred_provider": "groq"
      }
    }
  ]
}
```

## 🧠 Personality
Set the emotional baseline of your agent:

```json
"personality": {
  "curiosity": 0.6,
  "anger": 0.2,
  "happiness": 0.8
}
```

- Values range from `0.0` to `1.0`
- These affect tone, memory bias, and behavioral tendencies
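As a minimal sketch of the `0.0`–`1.0` constraint, a loader can clamp out-of-range trait values on the way in. The `Personality` struct and `clamped` method below are illustrative assumptions, not the engine's actual types from `config.rs`:

```rust
// Illustrative only: the real Personality type lives in config.rs and may
// differ. This sketch enforces the documented 0.0..=1.0 range on load.
#[derive(Debug)]
struct Personality {
    curiosity: f64,
    anger: f64,
    happiness: f64,
}

impl Personality {
    /// Clamp every trait into the valid 0.0..=1.0 range.
    fn clamped(self) -> Self {
        Self {
            curiosity: self.curiosity.clamp(0.0, 1.0),
            anger: self.anger.clamp(0.0, 1.0),
            happiness: self.happiness.clamp(0.0, 1.0),
        }
    }
}

fn main() {
    // Out-of-range inputs are pulled back into bounds.
    let p = Personality { curiosity: 1.4, anger: -0.2, happiness: 0.8 }.clamped();
    assert_eq!(p.curiosity, 1.0);
    assert_eq!(p.anger, 0.0);
    assert_eq!(p.happiness, 0.8);
}
```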
## 🎯 Goals

Each goal is:

```json
{
  "id": "explore_cave",
  "priority": 0.9
}
```

- Goals can be reprioritized during runtime
- The agent's actions aim to resolve the top-priority goal
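The "top goal" behavior above can be sketched as a priority scan over the goal list. The `Goal` struct and `top_goal` helper are hypothetical names for illustration, not the engine's real API:

```rust
// Hypothetical sketch: goals as (id, priority) pairs, where the agent always
// acts on the highest-priority goal and triggers can reprioritize at runtime.
#[derive(Debug, Clone)]
struct Goal {
    id: String,
    priority: f64,
}

/// Return the goal with the highest priority, if any.
fn top_goal(goals: &[Goal]) -> Option<&Goal> {
    goals
        .iter()
        .max_by(|a, b| a.priority.partial_cmp(&b.priority).unwrap())
}

fn main() {
    let mut goals = vec![
        Goal { id: "explore_cave".into(), priority: 0.9 },
        Goal { id: "find_food".into(), priority: 0.4 },
    ];
    assert_eq!(top_goal(&goals).unwrap().id, "explore_cave");

    // Runtime reprioritization: a gameplay trigger bumps a goal's priority,
    // and the agent's next action targets the new top goal.
    goals[1].priority = 0.95;
    assert_eq!(top_goal(&goals).unwrap().id, "find_food");
}
```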
## 🗃️ Memory

```json
"memories": [
  {
    "content": "Marcus gave me the map.",
    "tags": ["trust", "plot_critical"],
    "importance": 0.7
  }
]
```

- Importance influences recall likelihood
- Tags link to emotions and behavior trees
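One plausible way importance and tags could combine into a recall score is sketched below. The `Memory` struct, the `0.2` per-tag boost, and the `recall_score` function are all assumptions for illustration; the engine's actual scoring may differ:

```rust
// Illustrative sketch: recall score = base importance, boosted when a memory's
// tags overlap the agent's currently active emotional context.
struct Memory {
    content: String,
    tags: Vec<String>,
    importance: f64,
}

/// Score a memory for recall, capped at 1.0. The 0.2 tag boost is arbitrary.
fn recall_score(mem: &Memory, active_tags: &[&str]) -> f64 {
    let tag_boost = mem
        .tags
        .iter()
        .filter(|t| active_tags.contains(&t.as_str()))
        .count() as f64
        * 0.2;
    (mem.importance + tag_boost).min(1.0)
}

fn main() {
    let mem = Memory {
        content: "Marcus gave me the map.".into(),
        tags: vec!["trust".into(), "plot_critical".into()],
        importance: 0.7,
    };
    // One matching tag ("trust"): 0.7 + 0.2 = 0.9
    assert!((recall_score(&mem, &["trust"]) - 0.9).abs() < 1e-9);
    // No matching tags: score is just the base importance.
    assert!((recall_score(&mem, &["anger"]) - 0.7).abs() < 1e-9);
}
```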
## 🔁 Inference Settings

LLM routing is configured per agent or globally:

```json
"llm_profile": {
  "preferred_provider": "openai",
  "fallbacks": ["groq", "local"]
}
```

- You can set provider priority in config or override it at runtime
- Local models are supported via `llm_service.rs`
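The fallback chain can be pictured as trying each provider in order until one succeeds. The `Provider` trait and `route` function below are an illustrative sketch, not the actual `llm_service.rs` API:

```rust
// Sketch of fallback routing: the preferred provider is tried first, then each
// fallback in order. Trait and function names are assumptions, not real API.
trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

/// Try providers in priority order; return the first successful reply.
fn route(providers: &[&dyn Provider], prompt: &str) -> Result<String, String> {
    for p in providers {
        match p.complete(prompt) {
            Ok(reply) => return Ok(reply),
            Err(e) => eprintln!("{} failed ({e}), trying next provider", p.name()),
        }
    }
    Err("all providers failed".into())
}

// A provider that always errors, standing in for an unreachable remote API.
struct Unreachable;
impl Provider for Unreachable {
    fn name(&self) -> &str { "openai" }
    fn complete(&self, _: &str) -> Result<String, String> { Err("timeout".into()) }
}

// A provider that always answers, standing in for a local model.
struct Local;
impl Provider for Local {
    fn name(&self) -> &str { "local" }
    fn complete(&self, _: &str) -> Result<String, String> { Ok("ok".into()) }
}

fn main() {
    // The preferred provider fails, so the request falls through to "local".
    let reply = route(&[&Unreachable, &Local], "hello");
    assert_eq!(reply.unwrap(), "ok");
}
```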
## 💡 Tips

- Use separate JSON files per agent and merge them at build time
- Goals and emotions can be modified by gameplay triggers
- For testing, you can simulate config overrides via the CLI
## 🔗 Related Pages

- 5. Inference System
- 8. API Reference → `config.rs`
