# Configuring Agents

#### 📄 Agent Config Overview

Agents are fully defined in a single `config.json` file.

Example:

```json
{
  "agents": [
    {
      "id": "velma",
      "personality": {
        "curiosity": 0.7,
        "trust": 0.4,
        "anger": 0.1
      },
      "goals": [
        {
          "id": "explore_ruins",
          "priority": 0.8
        }
      ],
      "memories": [
        {
          "content": "Player helped me escape the cave.",
          "tags": ["trust"],
          "importance": 0.9
        }
      ],
      "llm_profile": {
        "preferred_provider": "groq"
      }
    }
  ]
}
```
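To make the shape of this file concrete, here is a minimal sketch of loading it and indexing agents by `id`. The engine itself is Rust (see `config.rs`), so this Python helper and its name `load_agents` are purely illustrative:

```python
import json

def load_agents(text: str) -> dict:
    """Parse a config.json document and index its agents by id."""
    config = json.loads(text)
    return {agent["id"]: agent for agent in config["agents"]}

config_text = '{"agents": [{"id": "velma", "personality": {"curiosity": 0.7}}]}'
agents = load_agents(config_text)
print(agents["velma"]["personality"]["curiosity"])  # 0.7
```

Keying agents by `id` gives constant-time lookup when gameplay systems need a specific agent's config.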

***

#### 🧠 Personality

Set the **emotional baseline** of your agent:

```json
"personality": {
  "curiosity": 0.6,
  "anger": 0.2,
  "happiness": 0.8
}
```

* Values range from `0.0` to `1.0`
* These traits affect tone, memory bias, and behavioral tendencies
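Since every trait must stay in the `0.0`–`1.0` range, it can help to normalize values before handing them to the agent. A small sketch (the `clamp_personality` helper is hypothetical, not part of the engine):

```python
def clamp_personality(personality: dict) -> dict:
    """Clamp every trait into the documented 0.0-1.0 range."""
    return {trait: max(0.0, min(1.0, value)) for trait, value in personality.items()}

# Out-of-range values are pulled back to the nearest bound:
print(clamp_personality({"curiosity": 1.4, "anger": -0.2, "happiness": 0.8}))
# {'curiosity': 1.0, 'anger': 0.0, 'happiness': 0.8}
```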

***

#### 🎯 Goals

Each goal is an object with an `id` and a `priority`:

```json
{
  "id": "explore_cave",
  "priority": 0.9
}
```

* Goals can be reprioritized at runtime
* The agent’s actions aim to resolve the highest-priority goal
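"Highest-priority goal" here just means a max over the `priority` field. A sketch of that selection, with a hypothetical `top_goal` helper:

```python
def top_goal(goals: list) -> dict:
    """Return the goal the agent will act on: the highest-priority entry."""
    return max(goals, key=lambda goal: goal["priority"])

goals = [
    {"id": "explore_cave", "priority": 0.9},
    {"id": "rest", "priority": 0.3},
]
print(top_goal(goals)["id"])  # explore_cave

# Reprioritizing at runtime changes what the agent pursues next:
goals[1]["priority"] = 1.0
print(top_goal(goals)["id"])  # rest
```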

***

#### 🗃️ Memory

Seed what the agent remembers in the `memories` array:

```json
"memories": [
  {
    "content": "Marcus gave me the map.",
    "tags": ["trust", "plot_critical"],
    "importance": 0.7
  }
]
```

* Importance influences recall likelihood
* Tags link to emotions and behavior trees
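One simple reading of "importance influences recall likelihood" is a ranked, tag-filtered lookup. This sketch is an assumption about the recall model, not the engine's actual implementation, and the `recall` helper is hypothetical:

```python
def recall(memories: list, tag: str = None, k: int = 1) -> list:
    """Return the k most important memories, optionally filtered by tag."""
    pool = [m for m in memories if tag is None or tag in m["tags"]]
    return sorted(pool, key=lambda m: m["importance"], reverse=True)[:k]

memories = [
    {"content": "Marcus gave me the map.", "tags": ["trust", "plot_critical"], "importance": 0.7},
    {"content": "Player helped me escape the cave.", "tags": ["trust"], "importance": 0.9},
]
print(recall(memories, tag="trust", k=1)[0]["content"])
# Player helped me escape the cave.
```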

***

#### 🔁 Inference Settings

LLM routing is configured per-agent or globally:

```json
"llm_profile": {
  "preferred_provider": "openai",
  "fallbacks": ["groq", "local"]
}
```

* You can set provider priority in config or override it at runtime
* Local models are supported via `llm_service.rs`
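The fallback list is an ordered preference: try `preferred_provider` first, then each fallback in turn. A sketch of that routing logic (the `pick_provider` helper is illustrative; actual routing lives in the Rust inference system):

```python
def pick_provider(profile: dict, available: set) -> str:
    """Try the preferred provider first, then each fallback in order."""
    for candidate in [profile["preferred_provider"], *profile.get("fallbacks", [])]:
        if candidate in available:
            return candidate
    raise RuntimeError("no configured LLM provider is available")

profile = {"preferred_provider": "openai", "fallbacks": ["groq", "local"]}
# If openai is down, routing falls through to the first live fallback:
print(pick_provider(profile, available={"groq", "local"}))  # groq
```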

***

#### 💡 Tips

* Use separate JSON files per agent and merge at build time
* Goals and emotions can be modified by gameplay triggers
* For testing, you can simulate config overrides via CLI
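The per-agent-file tip can be sketched as a trivial build-time merge: each file holds one agent object, and the build step wraps them into the `agents` array. The `merge_agent_files` helper below is an assumption about how you might wire this, not a provided tool:

```python
import json

def merge_agent_files(texts: list) -> dict:
    """Merge several single-agent JSON documents into one config."""
    return {"agents": [json.loads(text) for text in texts]}

velma = '{"id": "velma"}'
marcus = '{"id": "marcus"}'
merged = merge_agent_files([velma, marcus])
print([agent["id"] for agent in merged["agents"]])  # ['velma', 'marcus']
```

Keeping one file per agent makes diffs smaller and lets designers edit agents independently.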

***

#### 🔗 Related Pages

* 5\. Inference System
* 8\. API Reference → `config.rs`
