# Collinear AI

## Docs

- [Authentication](https://docs.collinear.ai/api-reference/authentication.md): API key setup and authentication for the SimLab API
- [Rollouts](https://docs.collinear.ai/api-reference/rollouts.md): List rollouts and retrieve rollout artifacts
- [Runs](https://docs.collinear.ai/api-reference/runs.md): Launch, list, and manage agent evaluation runs
- [Scenarios](https://docs.collinear.ai/api-reference/scenarios.md): List templates, tasks, rubrics, NPC profiles, seed data, and verifier bundles
- [BaseAgent](https://docs.collinear.ai/api-reference/sdk/base-agent.md): The core contract for building custom agents
- [BaseEnvironment](https://docs.collinear.ai/api-reference/sdk/base-environment.md): The environment abstraction for interacting with tool servers
- [RunArtifacts](https://docs.collinear.ai/api-reference/sdk/run-artifacts.md): The data object agents populate during execution
- [Task Generation](https://docs.collinear.ai/api-reference/task-generation.md): Generate tasks from tool definitions
- [Test Model Configuration](https://docs.collinear.ai/api-reference/test-model-config.md): Validate a model configuration with a test completion
- [Tool Server Protocol](https://docs.collinear.ai/api-reference/tool-server-protocol.md): HTTP interface that all tool servers expose
- [Verifier Output](https://docs.collinear.ai/api-reference/verifier-output.md): Output format from verifier evaluation
- [NPCs](https://docs.collinear.ai/core-concepts/npcs.md): Simulated users with configurable traits and personas
- [Sandbox](https://docs.collinear.ai/core-concepts/sandbox.md): Isolated execution environments for agent rollouts
- [Seed Data](https://docs.collinear.ai/core-concepts/seed-data.md): Domain-specific data injected into environments to create realistic starting conditions
- [Task Generator](https://docs.collinear.ai/core-concepts/task-generator.md): How Simulation Lab generates tasks from templates, seed data, NPCs, and toolsets
- [Verifiers](https://docs.collinear.ai/core-concepts/verifiers.md): Programmatic and rubric-based evaluation of agent behavior
- [What is Collinear?](https://docs.collinear.ai/introduction.md): An interactive playground where agents learn new skills and improve existing capabilities
- [Bring Your Own Agent](https://docs.collinear.ai/simulation-lab/bring-your-own-agent.md): Integrate your own agent implementation with Simulation Lab
- [Getting Started](https://docs.collinear.ai/simulation-lab/getting-started.md): Install the CLI and run your first evaluation
- [Overview](https://docs.collinear.ai/simulation-lab/overview.md): A simulation lab to train, test, and refine agents for the real world
- [Scenarios](https://docs.collinear.ai/simulation-lab/scenarios.md): Scenario templates and composition for simulation scenarios
- [Tasks](https://docs.collinear.ai/simulation-lab/tasks-and-verifiers.md): Task format, generation flow, and pre-built tasks
- [Toolsets](https://docs.collinear.ai/simulation-lab/toolsets.md): Built-in tools, custom tool definitions, and MCP server integration
- [Understanding Results](https://docs.collinear.ai/simulation-lab/understanding-results.md): Rollout traces, scoring, statistical confidence, and state diffs
- [Verifiers & Reward Models](https://docs.collinear.ai/simulation-lab/verifiers-and-reward-models.md): Programmatic verifiers and rubric-based reward models for agent evaluation

## Optional

- [Blog](https://blog.collinear.ai)