# Instrument AI Agents | Sentry for Connect

With [Sentry AI Agent Monitoring](https://docs.sentry.io/product/insights/ai/agents/dashboard.md), you can monitor and debug your AI systems with full-stack context. You'll be able to track key insights like token usage, latency, tool usage, and error rates. AI Agent Monitoring data will be fully connected to your other Sentry data like logs, errors, and traces.

As a prerequisite to setting up AI Agent Monitoring with JavaScript, you'll need to first [set up tracing](https://docs.sentry.io/platforms/javascript/guides/connect/tracing.md). Once this is done, the JavaScript SDK will automatically instrument AI agents created with supported libraries. If that doesn't fit your use case, you can use custom instrumentation described below.
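
As a rough sketch of that prerequisite (the exact options depend on your SDK version, and the DSN here is a placeholder), enabling tracing in a Connect app looks something like this:

```javascript
const Sentry = require("@sentry/node");

Sentry.init({
  // Placeholder DSN; use your project's DSN.
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  // Capture 100% of transactions; lower this in production.
  tracesSampleRate: 1.0,
});
```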

## [Automatic Instrumentation](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#automatic-instrumentation)

The JavaScript SDK supports automatic instrumentation for some AI libraries. We recommend adding their integrations to your Sentry configuration to automatically capture spans for AI agents.

* [Vercel AI SDK](https://docs.sentry.io/platforms/javascript/guides/connect/configuration/integrations/vercelai.md)
* [OpenAI](https://docs.sentry.io/platforms/javascript/guides/connect/configuration/integrations/openai.md)
* [Anthropic](https://docs.sentry.io/platforms/javascript/guides/connect/configuration/integrations/anthropic.md)
* [Google Gen AI SDK](https://docs.sentry.io/platforms/javascript/guides/connect/configuration/integrations/google-genai.md)
* [LangChain](https://docs.sentry.io/platforms/javascript/guides/connect/configuration/integrations/langchain.md)
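
For example, assuming a recent `@sentry/node` release that exports these integrations (names can differ between SDK versions, so check the linked integration pages), enabling automatic instrumentation for the Vercel AI SDK could look like:

```javascript
const Sentry = require("@sentry/node");

Sentry.init({
  // Placeholder DSN; use your project's DSN.
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  tracesSampleRate: 1.0,
  integrations: [
    // Instruments AI agent calls made through the Vercel AI SDK.
    Sentry.vercelAIIntegration(),
  ],
});
```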

## [Manual Instrumentation](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#manual-instrumentation)

If you're using a library that Sentry does not automatically instrument, you can manually instrument your code to capture spans. For your AI agent data to show up in the Sentry [AI Agents Insights](https://sentry.io/orgredirect/organizations/:orgslug/insights/ai/agents/), spans must be created with the well-defined names and data attributes described below.

## [Spans](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#spans)

### [Invoke Agent Span](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#invoke-agent-span)

Describes an AI agent invocation.

* The span `op` MUST be `"gen_ai.invoke_agent"`.
* The span `name` SHOULD be `"invoke_agent {gen_ai.agent.name}"`.
* The `gen_ai.operation.name` attribute MUST be `"invoke_agent"`.
* The `gen_ai.agent.name` attribute SHOULD be set to the agent's name. (e.g. `"Weather Agent"`)
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).

Additional attributes on the span:

| Data Attribute                         | Type   | Requirement Level | Description                                                                          | Example                                                                                                           |
| -------------------------------------- | ------ | ----------------- | ------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------- |
| `gen_ai.request.available_tools`       | string | optional          | List of objects describing the available tools. **\[0]**                             | `"[{\"name\": \"random_number\", \"description\": \"...\"}, {\"name\": \"query_db\", \"description\": \"...\"}]"` |
| `gen_ai.request.frequency_penalty`     | float  | optional          | Model configuration parameter.                                                       | `0.5`                                                                                                             |
| `gen_ai.request.max_tokens`            | int    | optional          | Model configuration parameter.                                                       | `500`                                                                                                             |
| `gen_ai.request.messages`              | string | optional          | List of objects describing the messages (prompts) sent to the LLM. **\[0]**, **\[1]** | `"[{\"role\": \"system\", \"content\": [{...}]}, {\"role\": \"user\", \"content\": [{...}]}]"`                  |
| `gen_ai.request.presence_penalty`      | float  | optional          | Model configuration parameter.                                                       | `0.5`                                                                                                             |
| `gen_ai.request.temperature`           | float  | optional          | Model configuration parameter.                                                       | `0.1`                                                                                                             |
| `gen_ai.request.top_p`                 | float  | optional          | Model configuration parameter.                                                       | `0.7`                                                                                                             |
| `gen_ai.response.tool_calls`           | string | optional          | The tool calls in the model’s response. **\[0]**                                     | `"[{\"name\": \"random_number\", \"type\": \"function_call\", \"arguments\": \"...\"}]"`                          |
| `gen_ai.response.text`                 | string | optional          | The text representation of the model’s responses. **\[0]**                           | `"[\"The weather in Paris is rainy\", \"The weather in London is sunny\"]"`                                       |
| `gen_ai.usage.input_tokens.cached`     | int    | optional          | The number of cached tokens used in the AI input (prompt).                           | `50`                                                                                                              |
| `gen_ai.usage.input_tokens`            | int    | optional          | The number of tokens used in the AI input (prompt).                                  | `10`                                                                                                              |
| `gen_ai.usage.output_tokens.reasoning` | int    | optional          | The number of tokens used for reasoning.                                             | `30`                                                                                                              |
| `gen_ai.usage.output_tokens`           | int    | optional          | The number of tokens used in the AI response.                                        | `100`                                                                                                             |
| `gen_ai.usage.total_tokens`            | int    | optional          | The total number of tokens used to process the prompt (input and output).            | `190`                                                                                                             |

* **\[0]:** Span attributes only allow primitive data types (like `int`, `float`, `boolean`, `string`). This means you need to use a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]` but rather the string `"[{\"foo\": \"bar\"}]"`.
* **\[1]:** Each message item uses the format `{role:"", content:""}`. The `role` can be `"user"`, `"assistant"`, or `"system"`. The `content` can be either a string or a list of dictionaries.
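
To make footnote **\[0]** concrete, here is a minimal sketch of serializing a message list before setting it as a span attribute (the `span` object would come from the `Sentry.startSpan` callback):

```javascript
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke" },
];

// Span attributes only accept primitives, so serialize the list first:
const serialized = JSON.stringify(messages);
// span.setAttribute("gen_ai.request.messages", serialized);
```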

#### [Example of an Invoke Agent Span:](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#example-of-an-invoke-agent-span)

```javascript
// some example agent implementation for demonstration
const myAgent = {
  name: "Weather Agent",
  modelProvider: "openai",
  model: "o3-mini",
  async run() {
    // Agent implementation
    return {
      output: "The weather in Paris is sunny",
      usage: {
        inputTokens: 15,
        outputTokens: 8,
      },
    };
  },
};

Sentry.startSpan(
  {
    op: "gen_ai.invoke_agent",
    name: `invoke_agent ${myAgent.name}`,
    attributes: {
      "gen_ai.operation.name": "invoke_agent",
      "gen_ai.system": myAgent.modelProvider,
      "gen_ai.request.model": myAgent.model,
      "gen_ai.agent.name": myAgent.name,
    },
  },
  async (span) => {
    // run the agent
    const result = await myAgent.run();

    // set agent response
    // we assume result.output is a string
    // type of `gen_ai.response.text` needs to be a string
    span.setAttribute(
      "gen_ai.response.text",
      JSON.stringify([result.output]),
    );

    // set token usage
    // we assume the result includes the tokens used
    span.setAttribute(
      "gen_ai.usage.input_tokens",
      result.usage.inputTokens,
    );
    span.setAttribute(
      "gen_ai.usage.output_tokens",
      result.usage.outputTokens,
    );

    return result;
  },
);
```

### [AI Client Span](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#ai-client-span)

This span represents a request to an AI model or service that generates a response or requests a tool call based on the input prompt.

* The span `op` MUST be `"gen_ai.{gen_ai.operation.name}"`. (e.g. `"gen_ai.chat"`)
* The span `name` SHOULD be `"{gen_ai.operation.name} {gen_ai.request.model}"`. (e.g. `"chat o3-mini"`)
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).

Additional attributes on the span:

| Data Attribute                         | Type   | Requirement Level | Description                                                                          | Example                                                                                                           |
| -------------------------------------- | ------ | ----------------- | ------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------- |
| `gen_ai.request.available_tools`       | string | optional          | List of objects describing the available tools. **\[0]**                             | `"[{\"name\": \"random_number\", \"description\": \"...\"}, {\"name\": \"query_db\", \"description\": \"...\"}]"` |
| `gen_ai.request.frequency_penalty`     | float  | optional          | Model configuration parameter.                                                       | `0.5`                                                                                                             |
| `gen_ai.request.max_tokens`            | int    | optional          | Model configuration parameter.                                                       | `500`                                                                                                             |
| `gen_ai.request.messages`              | string | optional          | List of objects describing the messages (prompts) sent to the LLM. **\[0]**, **\[1]** | `"[{\"role\": \"system\", \"content\": [{...}]}, {\"role\": \"user\", \"content\": [{...}]}]"`                  |
| `gen_ai.request.presence_penalty`      | float  | optional          | Model configuration parameter.                                                       | `0.5`                                                                                                             |
| `gen_ai.request.temperature`           | float  | optional          | Model configuration parameter.                                                       | `0.1`                                                                                                             |
| `gen_ai.request.top_p`                 | float  | optional          | Model configuration parameter.                                                       | `0.7`                                                                                                             |
| `gen_ai.response.tool_calls`           | string | optional          | The tool calls in the model's response. **\[0]**                                     | `"[{\"name\": \"random_number\", \"type\": \"function_call\", \"arguments\": \"...\"}]"`                          |
| `gen_ai.response.text`                 | string | optional          | The text representation of the model's responses. **\[0]**                           | `"[\"The weather in Paris is rainy\", \"The weather in London is sunny\"]"`                                       |
| `gen_ai.usage.input_tokens.cached`     | int    | optional          | The number of cached tokens used in the AI input (prompt).                           | `50`                                                                                                              |
| `gen_ai.usage.input_tokens`            | int    | optional          | The number of tokens used in the AI input (prompt).                                  | `10`                                                                                                              |
| `gen_ai.usage.output_tokens.reasoning` | int    | optional          | The number of tokens used for reasoning.                                             | `30`                                                                                                              |
| `gen_ai.usage.output_tokens`           | int    | optional          | The number of tokens used in the AI response.                                        | `100`                                                                                                             |
| `gen_ai.usage.total_tokens`            | int    | optional          | The total number of tokens used to process the prompt (input and output).            | `190`                                                                                                             |

* **\[0]:** Span attributes only allow primitive data types. This means you need to use a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]` but rather the string `"[{\"foo\": \"bar\"}]"`.
* **\[1]:** Each message item uses the format `{role:"", content:""}`. The `role` can be `"user"`, `"assistant"`, or `"system"`. The `content` can be either a string or a list of dictionaries.

#### [Example AI Client Span](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#example-ai-client-span)

```javascript
// some example implementation for demonstration
const myAi = {
  modelProvider: "openai",
  model: "o3-mini",
  modelConfig: {
    temperature: 0.1,
    presencePenalty: 0.5,
  },
  async createMessage(messages, maxTokens) {
    // AI implementation
    return {
      output:
        "Here's a joke: Why don't scientists trust atoms? Because they make up everything!",
      usage: {
        inputTokens: 12,
        outputTokens: 24,
      },
    };
  },
};

Sentry.startSpan(
  {
    op: "gen_ai.chat",
    name: `chat ${myAi.model}`,
    attributes: {
      "gen_ai.operation.name": "chat",
      "gen_ai.system": myAi.modelProvider,
      "gen_ai.request.model": myAi.model,
    },
  },
  async (span) => {
    // set up messages for LLM
    const maxTokens = 1024;
    const prompt = "Tell me a joke";
    const messages = [{ role: "user", content: prompt }];

    // set chat request data
    span.setAttribute("gen_ai.request.messages", JSON.stringify(messages));
    span.setAttribute("gen_ai.request.max_tokens", maxTokens);
    span.setAttribute(
      "gen_ai.request.temperature",
      myAi.modelConfig.temperature,
    );
    span.setAttribute(
      "gen_ai.request.presence_penalty",
      myAi.modelConfig.presencePenalty,
    );

    // ask the LLM
    const result = await myAi.createMessage(messages, maxTokens);

    // set response
    // we assume result.output is a string
    // type of `gen_ai.response.text` needs to be a string
    span.setAttribute(
      "gen_ai.response.text",
      JSON.stringify([result.output]),
    );

    // set token usage
    // we assume the result includes the tokens used
    span.setAttribute(
      "gen_ai.usage.input_tokens",
      result.usage.inputTokens,
    );
    span.setAttribute(
      "gen_ai.usage.output_tokens",
      result.usage.outputTokens,
    );

    return result;
  },
);
```

### [Execute Tool Span](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#execute-tool-span)

Describes a tool execution.

* The span `op` MUST be `"gen_ai.execute_tool"`.
* The span `name` SHOULD be `"execute_tool {gen_ai.tool.name}"`. (e.g. `"execute_tool query_database"`)
* The `gen_ai.tool.name` attribute SHOULD be set to the name of the tool. (e.g. `"query_database"`)
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).

Additional attributes on the span:

| Data Attribute            | Type   | Requirement Level | Description                                          | Example                                    |
| ------------------------- | ------ | ----------------- | ---------------------------------------------------- | ------------------------------------------ |
| `gen_ai.tool.description` | string | optional          | Description of the tool executed.                    | `"Tool returning a random number"`         |
| `gen_ai.tool.input`       | string | optional          | Input that was given to the executed tool as string. | `"{\"max\":10}"`                           |
| `gen_ai.tool.name`        | string | optional          | Name of the tool executed.                           | `"random_number"`                          |
| `gen_ai.tool.output`      | string | optional          | The output from the tool.                            | `"7"`                                      |
| `gen_ai.tool.type`        | string | optional          | The type of the tool executed.                       | `"function"`; `"extension"`; `"datastore"` |

#### [Example Execute Tool Span](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#example-execute-tool-span)

```javascript
// some example implementation for demonstration
const myAi = {
  modelProvider: "openai",
  model: "o3-mini",
  async createMessage(messages, maxTokens) {
    // AI implementation that returns tool calls
    return {
      toolCalls: [
        {
          name: "random_number",
          description: "Generate a random number",
          arguments: { max: 10 },
        },
      ],
    };
  },
};

const prompt = "Generate a random number between 0 and 10";
const messages = [{ role: "user", content: prompt }];

// First, make the AI call
const result = await Sentry.startSpan(
  { op: "gen_ai.chat", name: `chat ${myAi.model}` },
  () => myAi.createMessage(messages, 1024),
);

// Check if we should call a tool
if (result.toolCalls && result.toolCalls.length > 0) {
  const tool = result.toolCalls[0];

  await Sentry.startSpan(
    {
      op: "gen_ai.execute_tool",
      name: `execute_tool ${tool.name}`,
      attributes: {
        "gen_ai.system": myAi.modelProvider,
        "gen_ai.request.model": myAi.model,
        "gen_ai.tool.type": "function",
        "gen_ai.tool.name": tool.name,
        "gen_ai.tool.description": tool.description,
        "gen_ai.tool.input": JSON.stringify(tool.arguments),
      },
    },
    async (span) => {
      // run tool (example implementation)
      const toolResult = Math.floor(Math.random() * tool.arguments.max);

      // set tool result
      span.setAttribute("gen_ai.tool.output", String(toolResult));

      return toolResult;
    },
  );
}
```

### [Handoff Span](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#handoff-span)

A span that describes the handoff from one agent to another.

* The span `op` MUST be `"gen_ai.handoff"`.
* The span `name` SHOULD be `"handoff from {from_agent} to {to_agent}"`.
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#common-span-attributes) SHOULD be set.

#### [Example of a Handoff Span](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#example-of-a-handoff-span)

```javascript
// some example agent implementation for demonstration
const myAgent = {
  name: "Weather Agent",
  modelProvider: "openai",
  model: "o3-mini",
  async run() {
    // Agent implementation
    return {
      handoffTo: "Travel Agent",
      output:
        "I need to handoff to the travel agent for booking recommendations",
    };
  },
};

const otherAgent = {
  name: "Travel Agent",
  modelProvider: "openai",
  model: "o3-mini",
  async run() {
    // Other agent implementation
    return { output: "Here are some travel recommendations..." };
  },
};

// First agent execution
const result = await Sentry.startSpan(
  { op: "gen_ai.invoke_agent", name: `invoke_agent ${myAgent.name}` },
  () => myAgent.run(),
);

// Check if we should handoff to another agent
if (result.handoffTo) {
  // Create handoff span
  await Sentry.startSpan(
    {
      op: "gen_ai.handoff",
      name: `handoff from ${myAgent.name} to ${otherAgent.name}`,
      attributes: {
        "gen_ai.system": myAgent.modelProvider,
        "gen_ai.request.model": myAgent.model,
      },
    },
    () => {
      // the handoff span just marks the handoff
      // no actual work is done here
    },
  );

  // Execute the other agent
  await Sentry.startSpan(
    { op: "gen_ai.invoke_agent", name: `invoke_agent ${otherAgent.name}` },
    () => otherAgent.run(),
  );
}
```

## [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/connect/tracing/instrumentation/ai-agents-module.md#common-span-attributes)

Some attributes are common to all AI Agents spans:

| Data Attribute          | Type   | Requirement Level | Description                                                                               | Example           |
| ----------------------- | ------ | ----------------- | ----------------------------------------------------------------------------------------- | ----------------- |
| `gen_ai.system`         | string | required          | The Generative AI product as identified by the client or server instrumentation. **\[0]** | `"openai"`        |
| `gen_ai.request.model`  | string | required          | The name of the AI model a request is being made to.                                      | `"o3-mini"`       |
| `gen_ai.operation.name` | string | optional          | The name of the operation being performed. **\[1]**                                       | `"chat"`          |
| `gen_ai.agent.name`     | string | optional          | The name of the agent this span belongs to.                                               | `"Weather Agent"` |

**\[0]** Well-defined values for data attribute `gen_ai.system`:

| Value               | Description                       |
| ------------------- | --------------------------------- |
| `"anthropic"`       | Anthropic                         |
| `"aws.bedrock"`     | AWS Bedrock                       |
| `"az.ai.inference"` | Azure AI Inference                |
| `"az.ai.openai"`    | Azure OpenAI                      |
| `"cohere"`          | Cohere                            |
| `"deepseek"`        | DeepSeek                          |
| `"gcp.gemini"`      | Gemini                            |
| `"gcp.gen_ai"`      | Any Google generative AI endpoint |
| `"gcp.vertex_ai"`   | Vertex AI                         |
| `"groq"`            | Groq                              |
| `"ibm.watsonx.ai"`  | IBM Watsonx AI                    |
| `"mistral_ai"`      | Mistral AI                        |
| `"openai"`          | OpenAI                            |
| `"perplexity"`      | Perplexity                        |
| `"xai"`             | xAI                               |

**\[1]** Well-defined values for data attribute `gen_ai.operation.name`:

| Value                | Description                                                             |
| -------------------- | ----------------------------------------------------------------------- |
| `"chat"`             | Chat completion operation such as OpenAI Chat API                       |
| `"create_agent"`     | Create GenAI agent                                                      |
| `"embeddings"`       | Embeddings operation such as OpenAI Create embeddings API               |
| `"execute_tool"`     | Execute a tool                                                          |
| `"generate_content"` | Multimodal content generation operation such as Gemini Generate Content |
| `"invoke_agent"`     | Invoke GenAI agent                                                      |
