diff --git a/.chloggen/1469.yaml b/.chloggen/1469.yaml
new file mode 100644
index 0000000000..7d7b4ba5cc
--- /dev/null
+++ b/.chloggen/1469.yaml
@@ -0,0 +1,5 @@
+change_type: 'enhancement'
+component: gen_ai
+note: Yamlify gen_ai events and clean up examples.
+
+issues: [1469]
diff --git a/.chloggen/clarify-temporary-destination.yaml b/.chloggen/clarify-temporary-destination.yaml
new file mode 100644
index 0000000000..fb6d9e52b9
--- /dev/null
+++ b/.chloggen/clarify-temporary-destination.yaml
@@ -0,0 +1,5 @@
+change_type: enhancement
+component: messaging
+note: Further clarify `{destination}` value on span names
+issues: [1635]
+subtext:
diff --git a/.markdown_link_check_config.json b/.markdown_link_check_config.json
index e85f6c0b20..980cc95414 100644
--- a/.markdown_link_check_config.json
+++ b/.markdown_link_check_config.json
@@ -6,6 +6,9 @@
{
"pattern": "^https://github\\.com/open-telemetry/semantic-conventions/(issues|pull|actions)"
},
+ {
+ "pattern": "^https://blogs.oracle.com/linux/post/understanding-linux-kernel-memory-statistics$"
+ },
{
"pattern": "^#"
}
diff --git a/docs/gen-ai/gen-ai-events.md b/docs/gen-ai/gen-ai-events.md
index e786696f29..4c12ee55d5 100644
--- a/docs/gen-ai/gen-ai-events.md
+++ b/docs/gen-ai/gen-ai-events.md
@@ -10,15 +10,11 @@ linkTitle: Generative AI events
-- [Common attributes](#common-attributes)
-- [System event](#system-event)
-- [User event](#user-event)
-- [Assistant event](#assistant-event)
- - [`ToolCall` object](#toolcall-object)
- - [`Function` object](#function-object)
-- [Tool event](#tool-event)
-- [Choice event](#choice-event)
- - [`Message` object](#message-object)
+- [Event: `gen_ai.system.message`](#event-gen_aisystemmessage)
+- [Event: `gen_ai.user.message`](#event-gen_aiusermessage)
+- [Event: `gen_ai.assistant.message`](#event-gen_aiassistantmessage)
+- [Event: `gen_ai.tool.message`](#event-gen_aitoolmessage)
+- [Event: `gen_ai.choice`](#event-gen_aichoice)
- [Custom events](#custom-events)
- [Examples](#examples)
- [Chat completion](#chat-completion)
@@ -29,7 +25,7 @@ linkTitle: Generative AI events
GenAI instrumentations MAY capture user inputs sent to the model and responses received from it as [events](https://github.com/open-telemetry/opentelemetry-specification/tree/v1.39.0/specification/logs/event-api.md).
-> Note:
+> [!NOTE]
> Event API is experimental and not yet available in some languages. Check [spec-compliance matrix](https://github.com/open-telemetry/opentelemetry-specification/blob/main/spec-compliance-matrix.md#events) to see the implementation status in corresponding language.
Instrumentations MAY capture inputs and outputs if and only if application has enabled the collection of this data.
@@ -50,17 +46,21 @@ Telemetry consumers SHOULD expect to receive unknown body fields.
Instrumentations SHOULD NOT capture undocumented body fields and MUST follow the documented defaults for known fields.
Instrumentations MAY offer configuration options allowing to disable events or allowing to capture all fields.
-## Common attributes
+## Event: `gen_ai.system.message`
-The following attributes apply to all GenAI events.
-
-
+
+**Status:** ![Experimental](https://img.shields.io/badge/-experimental-blue)
+
+The event name MUST be `gen_ai.system.message`.
+
+This event describes the system instructions passed to the GenAI model.
+
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
@@ -89,97 +89,281 @@ If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
| `openai` | OpenAI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `vertex_ai` | Vertex AI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+**Body fields:**
+
+| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
+|---|---|---|---|---|---|
+| `content` | undefined | The contents of the system message. | `You're a helpful bot` | `Opt-In` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `role` | string | The actual role of the message author as passed in the message. | `system`; `instruction` | `Conditionally Required` if available and not equal to `system`. | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
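+
+As a non-normative illustration, the following sketch (plain Python dictionaries, not tied to any particular SDK)
+shows how an instrumentation might assemble the attributes and body of this event; the `capture_content` flag is a
+hypothetical configuration option:
+
+```python
+import json
+
+capture_content = True  # hypothetical opt-in switch for capturing message content
+
+attributes = {"gen_ai.system": "openai"}
+
+body = {}
+if capture_content:
+    body["content"] = "You're a helpful bot"
+# `role` is only recorded when it differs from "system", e.g. "instruction".
+
+print(json.dumps({"attributes": attributes, "body": body}))
+```
+
+The `gen_ai.user.message` body below follows the same `content`/`role` shape.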
+
-## System event
+## Event: `gen_ai.user.message`
-This event describes the instructions passed to the GenAI model.
+
+
+
+
+
+
-The event name MUST be `gen_ai.system.message`.
+**Status:** ![Experimental](https://img.shields.io/badge/-experimental-blue)
-| Body Field | Type | Description | Examples | Requirement Level |
-|---|---|---|---|---|
-| `role` | string | The actual role of the message author as passed in the message. | `"system"`, `"instructions"` | `Conditionally Required`: if available and not equal to `system` |
-| `content` | `AnyValue` | The contents of the system message. | `"You're a friendly bot that answers questions about OpenTelemetry."` | `Opt-In` |
+The event name MUST be `gen_ai.user.message`.
-## User event
+This event describes the user message passed to the GenAI model.
-This event describes the prompt message specified by the user.
+| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
+|---|---|---|---|---|---|
+| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
-The event name MUST be `gen_ai.user.message`.
+**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
+by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+
+The actual GenAI product may differ from the one identified by the client.
+For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
+is set to `openai` based on the instrumentation's best knowledge.
-| Body Field | Type | Description | Examples | Requirement Level |
-|---|---|---|---|---|
-| `role` | string | The actual role of the message author as passed in the message. | `"user"`, `"customer"` | `Conditionally Required`: if available and if not equal to `user` |
-| `content` | `AnyValue` | The contents of the user message. | `What telemetry is reported by OpenAI instrumentations?` | `Opt-In` |
+For custom model, a custom friendly name SHOULD be used.
+If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
-## Assistant event
+---
-This event describes the assistant message.
+`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+
+| Value | Description | Stability |
+|---|---|---|
+| `anthropic` | Anthropic | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `aws.bedrock` | AWS Bedrock | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `az.ai.inference` | Azure AI Inference | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `cohere` | Cohere | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `ibm.watsonx.ai` | IBM Watsonx AI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `openai` | OpenAI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `vertex_ai` | Vertex AI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+
+**Body fields:**
+
+| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
+|---|---|---|---|---|---|
+| `content` | undefined | The contents of the user message. | `What's the weather in Paris?` | `Opt-In` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `role` | string | The actual role of the message author as passed in the message. | `user`; `customer` | `Conditionally Required` if available and not equal to `user`. | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+
+
+
+
+
+
+## Event: `gen_ai.assistant.message`
+
+
+
+
+
+
+
+
+**Status:** ![Experimental](https://img.shields.io/badge/-experimental-blue)
The event name MUST be `gen_ai.assistant.message`.
-| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) |
-|--------------|--------------------------------|----------------------------------------|-------------------------------------------------|-------------------|
-| `role` | string | The actual role of the message author as passed in the message. | `"assistant"`, `"bot"` | `Conditionally Required`: if available and if not equal to `assistant` |
-| `content` | `AnyValue` | The contents of the assistant message. | `Spans, events, metrics defined by the GenAI semantic conventions.` | `Opt-In` |
-| `tool_calls` | [ToolCall](#toolcall-object)[] | The tool calls generated by the model, such as function calls. | `[{"id":"call_mszuSIzqtI65i1wAUOE8w5H4", "function":{"name":"get_link_to_otel_semconv", "arguments":{"semconv":"gen_ai"}}, "type":"function"}]` | `Conditionally Required`: if available |
+This event describes the assistant message passed to the GenAI system.
-### `ToolCall` object
+| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
+|---|---|---|---|---|---|
+| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
-| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) |
-|------------|-----------------------------|------------------------------------|-------------------------------------------------|-------------------|
-| `id` | string | The id of the tool call | `call_mszuSIzqtI65i1wAUOE8w5H4` | `Required` |
-| `type` | string | The type of the tool | `function` | `Required` |
-| `function` | [Function](#function-object)| The function that the model called | `{"name":"get_link_to_otel_semconv", "arguments":{"semconv":"gen_ai"}}` | `Required` |
+**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
+by `gen_ai.request.model` and `gen_ai.response.model` attributes.
-### `Function` object
+The actual GenAI product may differ from the one identified by the client.
+For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
+is set to `openai` based on the instrumentation's best knowledge.
-| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) |
-|-------------|------------|----------------------------------------|----------------------------|-------------------|
-| `name` | string | The name of the function to call | `get_link_to_otel_semconv` | `Required` |
-| `arguments` | `AnyValue` | The arguments to pass the the function | `{"semconv": "gen_ai"}` | `Opt-In` |
+For custom model, a custom friendly name SHOULD be used.
+If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
-## Tool event
+---
-This event describes the output of the tool or function submitted back to the model.
+`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+
+| Value | Description | Stability |
+|---|---|---|
+| `anthropic` | Anthropic | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `aws.bedrock` | AWS Bedrock | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `az.ai.inference` | Azure AI Inference | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `cohere` | Cohere | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `ibm.watsonx.ai` | IBM Watsonx AI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `openai` | OpenAI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `vertex_ai` | Vertex AI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+
+**Body fields:**
+
+| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
+|---|---|---|---|---|---|
+| `content` | undefined | The contents of the assistant message. | `The weather in Paris is rainy and overcast, with temperatures around 57°F` | `Opt-In` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `role` | string | The actual role of the message author as passed in the message. | `assistant`; `bot` | `Conditionally Required` if available and not equal to `assistant`. | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `tool_calls`: | map[] | The tool calls generated by the model, such as function calls. | | `Conditionally Required` if available | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `function`: | map | The function call. | | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `arguments` | undefined | The arguments of the function as provided in the LLM response. [1] | `{\"location\": \"Paris\"}` | `Opt-In` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `name` | string | The name of the function. | `get_weather` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `id` | string | The id of the tool call. | `call_mszuSIzqtI65i1wAUOE8w5H4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `type` | enum | The type of the tool. | `function` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+
+**[1]:** Models usually return arguments as a JSON string. In this case, it's RECOMMENDED to provide arguments as is without attempting to deserialize them.
+Semantic conventions for individual systems MAY specify a different type for arguments field.
+
+`type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+
+| Value | Description | Stability |
+|---|---|---|
+| `function` | Function | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+
+
+
+
+
+
+## Event: `gen_ai.tool.message`
+
+
+
+
+
+
+
+
+**Status:** ![Experimental](https://img.shields.io/badge/-experimental-blue)
The event name MUST be `gen_ai.tool.message`.
-| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) |
-|----------------|--------|-----------------------------------------------|---------------------------------|-------------------|
-| `role` | string | The actual role of the message author as passed in the message. | `"tool"`, `"function"` | `Conditionally Required`: if available and if not equal to `tool` |
-| `content` | AnyValue | The contents of the tool message. | `opentelemetry.io` | `Opt-In` |
-| `id` | string | Tool call that this message is responding to. | `call_mszuSIzqtI65i1wAUOE8w5H4` | `Required` |
+This event describes the response from a tool or function call passed to the GenAI model.
-## Choice event
+| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
+|---|---|---|---|---|---|
+| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
-This event describes model-generated individual chat response (choice).
-If GenAI model returns multiple choices, each choice SHOULD be recorded as an individual event.
+**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
+by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+
+The actual GenAI product may differ from the one identified by the client.
+For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
+is set to `openai` based on the instrumentation's best knowledge.
+
+For custom model, a custom friendly name SHOULD be used.
+If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
+
+---
+
+`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+
+| Value | Description | Stability |
+|---|---|---|
+| `anthropic` | Anthropic | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `aws.bedrock` | AWS Bedrock | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `az.ai.inference` | Azure AI Inference | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `cohere` | Cohere | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `ibm.watsonx.ai` | IBM Watsonx AI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `openai` | OpenAI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `vertex_ai` | Vertex AI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
-When response is streamed, instrumentations that report response events MUST reconstruct and report the full message and MUST NOT report individual chunks as events.
-If the request to GenAI model fails with an error before content is received, instrumentation SHOULD report an event with truncated content (if enabled). If `finish_reason` was not received, it MUST be set to `error`.
+**Body fields:**
+
+| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
+|---|---|---|---|---|---|
+| `content` | undefined | The contents of the tool message. | `rainy, 57°F` | `Opt-In` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `id` | string | Tool call id that this message is responding to. | `call_mszuSIzqtI65i1wAUOE8w5H4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `role` | string | The actual role of the message author as passed in the message. | `tool`; `function` | `Conditionally Required` if available and not equal to `tool`. | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
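+
+A non-normative sketch of the corresponding body (plain Python, no specific SDK); `id` carries the id of the tool
+call this message responds to:
+
+```python
+import json
+
+capture_content = True  # hypothetical opt-in switch for capturing message content
+tool_call_id = "call_mszuSIzqtI65i1wAUOE8w5H4"  # id received in the earlier gen_ai.choice event
+
+body = {"id": tool_call_id}
+if capture_content:
+    body["content"] = "rainy, 57°F"
+
+print(json.dumps(body, ensure_ascii=False))
+```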
+
+
+
+
+
+
+## Event: `gen_ai.choice`
+
+
+
+
+
+
+
+
+**Status:** ![Experimental](https://img.shields.io/badge/-experimental-blue)
The event name MUST be `gen_ai.choice`.
-Choice event body has the following fields:
+This event describes the Gen AI response message.
+
+| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
+|---|---|---|---|---|---|
+| [`gen_ai.system`](/docs/attributes-registry/gen-ai.md) | string | The Generative AI product as identified by the client or server instrumentation. [1] | `openai` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+
+**[1] `gen_ai.system`:** The `gen_ai.system` describes a family of GenAI models with specific model identified
+by `gen_ai.request.model` and `gen_ai.response.model` attributes.
+
+The actual GenAI product may differ from the one identified by the client.
+For example, when using OpenAI client libraries to communicate with Mistral, the `gen_ai.system`
+is set to `openai` based on the instrumentation's best knowledge.
+
+For custom model, a custom friendly name SHOULD be used.
+If none of these options apply, the `gen_ai.system` SHOULD be set to `_OTHER`.
-| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) |
-|-----------------|----------------------------|-------------------------------------------------|----------------------------------------|-------------------|
-| `finish_reason` | string | The reason the model stopped generating tokens. | `stop`, `tool_calls`, `content_filter` | `Required` |
-| `index` | int | The index of the choice in the list of choices. | `1` | `Required` |
-| `message` | [Message](#message-object) | GenAI response message | `{"content":"The OpenAI semantic conventions are available at opentelemetry.io"}` | `Recommended` |
+---
-### `Message` object
+`gen_ai.system` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
-| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) |
-|----------------|--------------------------------|-----------------------------------------------|---------------------------------|-------------------|
-| `role` | string | The actual role of the message author as passed in the message. | `"assistant"`, `"bot"` | `Conditionally Required`: if available and if not equal to `assistant` |
-| `content` | `AnyValue` | The contents of the assistant message. | `Spans, events, metrics defined by the GenAI semantic conventions.` | `Opt-In` |
-| `tool_calls` | [ToolCall](#toolcall-object)[] | The tool calls generated by the model, such as function calls. | `[{"id":"call_mszuSIzqtI65i1wAUOE8w5H4", "function":{"name":"get_link_to_otel_semconv", "arguments":"{\"semconv\":\"gen_ai\"}"}, "type":"function"}]` | `Conditionally Required`: if available |
+| Value | Description | Stability |
+|---|---|---|
+| `anthropic` | Anthropic | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `aws.bedrock` | AWS Bedrock | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `az.ai.inference` | Azure AI Inference | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `cohere` | Cohere | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `ibm.watsonx.ai` | IBM Watsonx AI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `openai` | OpenAI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `vertex_ai` | Vertex AI | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+
+**Body fields:**
+
+| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
+|---|---|---|---|---|---|
+| `finish_reason` | enum | The reason the model stopped generating tokens. | `stop`; `tool_calls`; `content_filter` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `index` | int | The index of the choice in the list of choices. | `0`; `1` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `message`: | map | GenAI response message. | | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `content` | undefined | The contents of the assistant message. | `The weather in Paris is rainy and overcast, with temperatures around 57°F` | `Opt-In` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `role` | string | The actual role of the message author as passed in the message. | `assistant`; `bot` | `Conditionally Required` if available and not equal to `assistant`. | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `tool_calls`: | map[] | The tool calls generated by the model, such as function calls. | | `Conditionally Required` if available | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `function`: | map | The function that the model called. | | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `arguments` | undefined | The arguments of the function as provided in the LLM response. [1] | `{\"location\": \"Paris\"}` | `Opt-In` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `name` | string | The name of the function. | `get_weather` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `id` | string | The id of the tool call. | `call_mszuSIzqtI65i1wAUOE8w5H4` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `type` | enum | The type of the tool. | `function` | `Required` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+
+**[1]:** Models usually return arguments as a JSON string. In this case, it's RECOMMENDED to provide arguments as is without attempting to deserialize them.
+Semantic conventions for individual systems MAY specify a different type for arguments field.
+
+`finish_reason` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+
+| Value | Description | Stability |
+|---|---|---|
+| `content_filter` | Content Filter | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `error` | Error | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `length` | Length | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `stop` | Stop | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+| `tool_calls` | Tool Calls | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
+
+`type` has the following list of well-known values. If one of them applies, then the respective value MUST be used; otherwise, a custom value MAY be used.
+
+| Value | Description | Stability |
+|---|---|---|
+| `function` | Function | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
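+
+As a non-normative illustration, the sketch below (plain Python, no specific SDK) builds one `gen_ai.choice` body
+per returned choice; `choices` is a hypothetical, already-parsed (or stream-reconstructed) response structure, and a
+missing `finish_reason` falls back to `error`:
+
+```python
+import json
+
+capture_content = True  # hypothetical opt-in switch for capturing message content
+
+# Hypothetical response: the second choice was cut off before a finish reason arrived.
+choices = [
+    {"index": 0, "finish_reason": "stop", "content": "rainy, 57°F"},
+    {"index": 1, "finish_reason": None, "content": "overcast"},
+]
+
+events = []
+for choice in choices:  # each choice is reported as an individual event
+    message = {}
+    if capture_content and choice["content"]:
+        message["content"] = choice["content"]
+    events.append(
+        {
+            "index": choice["index"],
+            # when no finish_reason was received, it MUST be set to "error"
+            "finish_reason": choice["finish_reason"] or "error",
+            "message": message,
+        }
+    )
+
+print(json.dumps(events, ensure_ascii=False))
+```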
+
+
+
+
+
## Custom events
@@ -190,15 +374,27 @@ SHOULD follow `gen_ai.{gen_ai.system}.*` naming pattern for system-specific even
### Chat completion
-This example covers the following scenario:
-
-- user requests chat completion from OpenAI GPT-4 model for the following prompt:
- - System message: `You're a friendly bot that answers questions about OpenTelemetry.`
- - User message: `How to instrument GenAI library with OTel?`
-
-- The model responds with `"Follow GenAI semantic conventions available at opentelemetry.io."` message
-
-Span:
+This is an example of telemetry generated for a chat completion call with system and user messages.
+
+```mermaid
+%%{init:
+{
+ "sequence": { "messageAlign": "left", "htmlLabels":true },
+ "themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
+}
+}%%
+sequenceDiagram
+ participant A as Application
+ participant I as Instrumented Client
+ participant M as Model
+ A->>+I: #U+200D
+    I->>M: gen_ai.system.message: You are a helpful bot<br/>gen_ai.user.message: Tell me a joke about OpenTelemetry
+ Note left of I: GenAI Client span
+ I-->M: gen_ai.choice: Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!
+ I-->>-A: #U+200D
+```
+
+**GenAI Client span:**
| Attribute name | Value |
|---------------------------------|--------------------------------------------|
@@ -213,79 +409,97 @@ Span:
| `gen_ai.usage.input_tokens` | `52` |
| `gen_ai.response.finish_reasons`| `["stop"]` |
-Events:
+**Events:**
-1. `gen_ai.system.message`.
+1. `gen_ai.system.message`
| Property | Value |
|---------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body | `{"content": "You're a friendly bot that answers questions about OpenTelemetry."}` |
+ | Event body (with content enabled) | `{"content": "You're a helpful bot"}` |
2. `gen_ai.user.message`
| Property | Value |
|---------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body | `{"content":"How to instrument GenAI library with OTel?"}` |
+ | Event body (with content enabled) | `{"content":"Tell me a joke about OpenTelemetry"}` |
3. `gen_ai.choice`
| Property | Value |
|---------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body (with content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Follow GenAI semantic conventions available at opentelemetry.io."}}` |
+ | Event body (with content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |
| Event body (without content) | `{"index":0,"finish_reason":"stop","message":{}}` |
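+
+As a non-normative illustration, the span attributes above can be set with the OpenTelemetry Python tracing API as
+sketched below; the events themselves are represented as plain dictionaries here because the Event API is still
+experimental, and the values mirror the tables in this example:
+
+```python
+from opentelemetry import trace
+
+tracer = trace.get_tracer(__name__)
+
+with tracer.start_as_current_span("chat gpt-4") as span:
+    span.set_attribute("gen_ai.system", "openai")
+    span.set_attribute("gen_ai.request.model", "gpt-4")
+    span.set_attribute("gen_ai.usage.input_tokens", 52)
+    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
+
+    # The three events above would be emitted while this span is active, e.g. the
+    # gen_ai.choice body with content capturing enabled:
+    choice_body = {
+        "index": 0,
+        "finish_reason": "stop",
+        "message": {
+            "content": "Why did the developer bring OpenTelemetry to the party? "
+            "Because it always knows how to trace the fun!"
+        },
+    }
+```
+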
### Tools
-This example covers the following scenario:
-
-1. Application requests chat completion from OpenAI GPT-4 model and provides a function definition.
-
- - Application provides the following prompt:
- - User message: `How to instrument GenAI library with OTel?`
- - Application defines a tool (a function) names `get_link_to_otel_semconv` with single string argument named `semconv`
-
-2. The model responds with a tool call request which application executes
-3. The application requests chat completion again now with the tool execution result
+This is an example of telemetry generated for a chat completion call with a user message and a function definition
+that results in the model requesting the application to call the provided function. The application executes the
+function and requests another completion, this time with the tool response.
+
+```mermaid
+%%{init:
+{
+ "sequence": { "messageAlign": "left", "htmlLabels":true },
+ "themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
+}
+}%%
+sequenceDiagram
+ participant A as Application
+ participant I as Instrumented Client
+ participant M as Model
+ A->>+I: #U+200D
+ I->>M: gen_ai.user.message: What's the weather in Paris?
+ Note left of I: GenAI Client span 1
+ I-->M: gen_ai.choice: Call to the get_weather tool with Paris as the location argument.
+ I-->>-A: #U+200D
+    A -->> A: parse tool parameters<br/>execute tool<br/>update chat history
+ A->>+I: #U+200D
+    I->>M: gen_ai.user.message: What's the weather in Paris?<br/>gen_ai.assistant.message: get_weather tool call<br/>gen_ai.tool.message: rainy, 57°F
+ Note left of I: GenAI Client span 2
+ I-->M: gen_ai.choice: The weather in Paris is rainy and overcast, with temperatures around 57°F
+ I-->>-A: #U+200D
+```
Here's the telemetry generated for each step in this scenario:
-1. Chat completion resulting in a tool call.
+**GenAI Client span 1:**
- | Attribute name | Value |
- |---------------------|-------------------------------------------------------|
- | Span name | `"chat gpt-4"` |
- | `gen_ai.system` | `"openai"` |
- | `gen_ai.request.model`| `"gpt-4"` |
- | `gen_ai.request.max_tokens`| `200` |
- | `gen_ai.request.top_p`| `1.0` |
- | `gen_ai.response.id`| `"chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l"` |
- | `gen_ai.response.model`| `"gpt-4-0613"` |
- | `gen_ai.usage.output_tokens`| `17` |
- | `gen_ai.usage.input_tokens`| `47` |
- | `gen_ai.response.finish_reasons`| `["tool_calls"]` |
+| Attribute name | Value |
+|---------------------|-------------------------------------------------------|
+| Span name | `"chat gpt-4"` |
+| `gen_ai.system` | `"openai"` |
+| `gen_ai.request.model`| `"gpt-4"` |
+| `gen_ai.request.max_tokens`| `200` |
+| `gen_ai.request.top_p`| `1.0` |
+| `gen_ai.response.id`| `"chatcmpl-9J3uIL87gldCFtiIbyaOvTeYBRA3l"` |
+| `gen_ai.response.model`| `"gpt-4-0613"` |
+| `gen_ai.usage.output_tokens`| `17` |
+| `gen_ai.usage.input_tokens`| `47` |
+| `gen_ai.response.finish_reasons`| `["tool_calls"]` |
- Events parented to this span:
+ **Events**:
- - `gen_ai.user.message` (not reported when capturing content is disabled)
+  All the following events are parented to the **GenAI Client span 1**.
+
+ 1. `gen_ai.user.message` (not reported when capturing content is disabled)
| Property | Value |
|---------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body | `{"content":"How to instrument GenAI library with OTel?"}` |
+ | Event body | `{"content":"What's the weather in Paris?"}` |
- - `gen_ai.choice`
+ 2. `gen_ai.choice`
| Property | Value |
|---------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body (with content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv","arguments":"{\"semconv\":\"GenAI\"}"},"type":"function"}]}` |
- | Event body (without content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv"},"type":"function"}]}` |
+ | Event body (with content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}` |
+ | Event body (without content) | `{"index":0,"finish_reason":"tool_calls","message":{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}` |
-2. Application executes the tool call. Application may create span which is not covered by this semantic convention.
-3. Final chat completion call
+**GenAI Client span 2:**
| Attribute name | Value |
|---------------------------------|-------------------------------------------------------|
@@ -300,55 +514,66 @@ Here's the telemetry generated for each step in this scenario:
| `gen_ai.usage.input_tokens` | `47` |
| `gen_ai.response.finish_reasons`| `["stop"]` |
- Events parented to this span:
- (in this example, the event content matches the original messages, but applications may also drop messages or change their content)
+ **Events**:
+
+  All the following events are parented to the **GenAI Client span 2**.
+
+ In this example, the event content matches the original messages, but applications may also drop messages or change their content.
- - `gen_ai.user.message` (not reported when capturing content is not enabled)
+ 1. `gen_ai.user.message`
| Property | Value |
|----------------------------------|------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body | `{"content":"How to instrument GenAI library with OTel?"}` |
+ | Event body | `{"content":"What's the weather in Paris?"}` |
- - `gen_ai.assistant.message`
+ 2. `gen_ai.assistant.message`
| Property | Value |
|----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body (content enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv","arguments":"{\"semconv\":\"GenAI\"}"},"type":"function"}]}` |
- | Event body (content not enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_link_to_otel_semconv"},"type":"function"}]}` |
+ | Event body (content enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather","arguments":"{\"location\":\"Paris\"}"},"type":"function"}]}` |
+ | Event body (content not enabled) | `{"tool_calls":[{"id":"call_VSPygqKTWdrhaFErNvMV18Yl","function":{"name":"get_weather"},"type":"function"}]}` |
- - `gen_ai.tool.message`
+ 3. `gen_ai.tool.message`
| Property | Value |
|----------------------------------|------------------------------------------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body (content enabled) | `{"content":"opentelemetry.io/semconv/gen-ai","id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |
+ | Event body (content enabled) | `{"content":"rainy, 57°F","id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |
| Event body (content not enabled) | `{"id":"call_VSPygqKTWdrhaFErNvMV18Yl"}` |
- - `gen_ai.choice`
+ 4. `gen_ai.choice`
| Property | Value |
|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Follow OTel semconv available at opentelemetry.io/semconv/gen-ai"}}` |
+ | Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"The weather in Paris is rainy and overcast, with temperatures around 57°F"}}` |
| Event body (content not enabled) | `{"index":0,"finish_reason":"stop","message":{}}` |
### Chat completion with multiple choices
-This example covers the following scenario:
-
-- user requests 2 chat completion from OpenAI GPT-4 model for the following prompt:
-
- - System message: `You're a friendly bot that answers questions about OpenTelemetry.`
- - User message: `How to instrument GenAI library with OTel?`
-
-- The model responds with two choices
-
- - `"Follow GenAI semantic conventions available at opentelemetry.io."` message
- - `"Use OpenAI instrumentation library."` message
-
-Span:
+This example covers the scenario where the user requests the model to generate two completions for the same prompt:
+
+```mermaid
+%%{init:
+{
+ "sequence": { "messageAlign": "left", "htmlLabels":true },
+ "themeVariables": { "noteBkgColor" : "green", "noteTextColor": "black", "activationBkgColor": "green", "htmlLabels":true }
+}
+}%%
+sequenceDiagram
+ participant A as Application
+ participant I as Instrumented Client
+ participant M as Model
+ A->>+I: #U+200D
+    I->>M: gen_ai.system.message - "You are a helpful bot"<br/>gen_ai.user.message - "Tell me a joke about OpenTelemetry"
+ Note left of I: GenAI Client span
+    I-->M: gen_ai.choice - Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!<br/>gen_ai.choice - Why did OpenTelemetry get promoted? It had great span of control!
+ I-->>-A: #U+200D
+```
+
+**GenAI Client span:**
| Attribute name | Value |
|---------------------|--------------------------------------------|
@@ -361,24 +586,26 @@ Span:
| `gen_ai.response.model`| `"gpt-4-0613"` |
| `gen_ai.usage.output_tokens`| `77` |
| `gen_ai.usage.input_tokens`| `52` |
-| `gen_ai.response.finish_reasons`| `["stop"]` |
+| `gen_ai.response.finish_reasons`| `["stop", "stop"]` |
+
+**Events**:
-Events:
+All events are parented to the GenAI Client span above.
1. `gen_ai.system.message`: the same as in the [Chat Completion](#chat-completion) example
-2. `gen_ai.user.message`: the same as in the previous example
+2. `gen_ai.user.message`: the same as in the [Chat Completion](#chat-completion) example
3. `gen_ai.choice`
| Property | Value |
|------------------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Follow GenAI semantic conventions available at opentelemetry.io."}}` |
+ | Event body (content enabled) | `{"index":0,"finish_reason":"stop","message":{"content":"Why did the developer bring OpenTelemetry to the party? Because it always knows how to trace the fun!"}}` |
4. `gen_ai.choice`
| Property | Value |
|------------------------------|-------------------------------------------------------|
| `gen_ai.system` | `"openai"` |
- | Event body (content enabled) | `{"index":1,"finish_reason":"stop","message":{"content":"Use OpenAI instrumentation library."}}` |
+ | Event body (content enabled) | `{"index":1,"finish_reason":"stop","message":{"content":"Why did OpenTelemetry get promoted? It had great span of control!"}}` |
[DocumentStatus]: https://opentelemetry.io/docs/specs/otel/document-status
diff --git a/docs/hardware/common.md b/docs/hardware/common.md
index 1cd19a999b..8cf4f354cc 100644
--- a/docs/hardware/common.md
+++ b/docs/hardware/common.md
@@ -246,7 +246,7 @@ This metric is [recommended][MetricRecommended].
| -------- | --------------- | ----------- | -------------- | --------- |
| `hw.status` | UpDownCounter | `1` | Operational status: `1` (true) or `0` (false) for each of the possible states [1] | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
-**[1]:** `hw.status` is currently specified as an *UpDownCounter* but would ideally be represented using a [*StateSet* as defined in OpenMetrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#stateset). This semantic convention will be updated once *StateSet* is specified in OpenTelemetry. This planned change is not expected to have any consequence on the way users query their timeseries backend to retrieve the values of `hw.status` over time.
+**[1]:** `hw.status` is currently specified as an *UpDownCounter* but would ideally be represented using a [*StateSet* as defined in OpenMetrics](https://github.com/prometheus/OpenMetrics/blob/v1.0.0/specification/OpenMetrics.md#stateset). This semantic convention will be updated once *StateSet* is specified in OpenTelemetry. This planned change is not expected to have any consequence on the way users query their timeseries backend to retrieve the values of `hw.status` over time.
| Attribute | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|
diff --git a/docs/messaging/messaging-spans.md b/docs/messaging/messaging-spans.md
index 73a90bdd81..cda4fcfa54 100644
--- a/docs/messaging/messaging-spans.md
+++ b/docs/messaging/messaging-spans.md
@@ -191,19 +191,24 @@ Messaging spans SHOULD follow the overall [guidelines for span names](https://gi
-The **span name** SHOULD be `{messaging.operation.name} {destination}` (see below for the exact definition of the [`{destination}`](#destination-placeholder) placeholder).
+The **span name** SHOULD be `{messaging.operation.name} {destination}`
+(see below for the exact definition of the [`{destination}`](#destination-placeholder) placeholder).
-Semantic conventions for individual messaging systems MAY specify different span name format and then MUST document it in semantic conventions for specific messaging technologies.
+Semantic conventions for individual messaging systems MAY specify different
+span name format and then MUST document it in semantic conventions
+for specific messaging technologies.
-The `{destination}` SHOULD describe the entity that the operation is performed against
+The `{destination}`
+SHOULD describe the entity that the operation is performed against
and SHOULD adhere to one of the following values, provided they are accessible:
1. `messaging.destination.template` SHOULD be used when it is available.
2. `messaging.destination.name` SHOULD be used when the destination is known to be neither [temporary nor anonymous](#temporary-and-anonymous-destinations).
3. `server.address:server.port` SHOULD be used only for operations not targeting any specific destination(s).
-If a corresponding `{destination}` value is not available for a specific operation, the instrumentation SHOULD omit the `{destination}`.
+If a (low-cardinality) corresponding `{destination}` value is not available for
+a specific operation, the instrumentation SHOULD omit the `{destination}`.
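+
+The selection logic above can be expressed as a non-normative sketch (Python; the parameter names and example
+values are illustrative, corresponding to attribute values an instrumentation may have at hand):
+
+```python
+def messaging_span_name(operation_name, destination_template=None, destination_name=None,
+                        is_temporary=False, is_anonymous=False):
+    """Builds `{messaging.operation.name} {destination}` following rules 1-2 above.
+
+    Rule 3 (`server.address:server.port`) applies only to operations that do not
+    target any specific destination and is not covered by this sketch.
+    """
+    if destination_template:
+        destination = destination_template
+    elif destination_name and not (is_temporary or is_anonymous):
+        destination = destination_name
+    else:
+        destination = None  # no low-cardinality value available: omit {destination}
+    return f"{operation_name} {destination}" if destination else operation_name
+
+
+assert messaging_span_name("send", destination_name="T1") == "send T1"
+assert messaging_span_name("receive", destination_name="amq.gen-x", is_anonymous=True) == "receive"
+```
+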
Examples:
diff --git a/model/gen-ai/events.yaml b/model/gen-ai/events.yaml
index e8e3b40967..ac97d5402c 100644
--- a/model/gen-ai/events.yaml
+++ b/model/gen-ai/events.yaml
@@ -12,32 +12,172 @@ groups:
type: event
stability: experimental
brief: >
- This event describes the instructions passed to the GenAI system inside the prompt.
+ This event describes the system instructions passed to the GenAI model.
extends: gen_ai.common.event.attributes
+ body:
+ id: gen_ai.system.message
+ requirement_level: opt_in
+ type: map
+ fields:
+ - id: content
+ type: undefined
+ stability: experimental
+ brief: >
+ The contents of the system message.
+ examples: ["You're a helpful bot"]
+ requirement_level: opt_in
+ - id: role
+ type: string
+ stability: experimental
+ brief: >
+ The actual role of the message author as passed in the message.
+ examples: ["system", "instruction"]
+ requirement_level:
+ conditionally_required: if available and not equal to `system`.
- id: event.gen_ai.user.message
name: gen_ai.user.message
type: event
stability: experimental
brief: >
- This event describes the prompt message specified by the user.
+ This event describes the user message passed to the GenAI model.
extends: gen_ai.common.event.attributes
+ body:
+ id: gen_ai.user.message
+ requirement_level: opt_in
+ type: map
+ fields:
+ - id: content
+ type: undefined
+ stability: experimental
+ brief: >
+ The contents of the user message.
+ examples: ["What's the weather in Paris?"]
+ requirement_level: opt_in
+ - id: role
+ type: string
+ stability: experimental
+ brief: >
+ The actual role of the message author as passed in the message.
+ examples: ["user", "customer"]
+ requirement_level:
+ conditionally_required: if available and not equal to `user`.
- id: event.gen_ai.assistant.message
name: gen_ai.assistant.message
type: event
stability: experimental
brief: >
- This event describes the assistant message passed to GenAI system or received from it.
+      This event describes the assistant message passed to the GenAI system.
extends: gen_ai.common.event.attributes
+ body:
+ id: gen_ai.assistant.message
+ requirement_level: opt_in
+ type: map
+ fields:
+ - id: content
+ type: undefined
+ stability: experimental
+ brief: >
+          The contents of the assistant message.
+ examples: ["The weather in Paris is rainy and overcast, with temperatures around 57°F"]
+ requirement_level: opt_in
+ - id: role
+ type: string
+ stability: experimental
+ brief: >
+ The actual role of the message author as passed in the message.
+ examples: ["assistant", "bot"]
+ requirement_level:
+ conditionally_required: if available and not equal to `assistant`.
+ - id: tool_calls
+ type: map[]
+ stability: experimental
+ brief: >
+ The tool calls generated by the model, such as function calls.
+ requirement_level:
+ conditionally_required: if available
+ fields:
+ - id: id
+ type: string
+ stability: experimental
+ brief: >
+ The id of the tool call.
+ examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
+ requirement_level: required
+ - id: type
+ type: enum
+ members:
+ - id: function
+ value: 'function'
+ brief: Function
+ stability: experimental
+ brief: >
+ The type of the tool.
+ examples: ["function"]
+ requirement_level: required
+ - id: function
+ type: map
+ stability: experimental
+ brief: >
+ The function call.
+ requirement_level: required
+ fields:
+ - id: name
+ type: string
+ stability: experimental
+ brief: >
+ The name of the function.
+ examples: ["get_weather"]
+ requirement_level: required
+ - id: arguments
+ type: undefined
+ stability: experimental
+ brief: >
+ The arguments of the function as provided in the LLM response.
+ note: >
+ Models usually return arguments as a JSON string. In this case, it's
+ RECOMMENDED to provide arguments as is without attempting to deserialize them.
+
+ Semantic conventions for individual systems MAY specify a different type for
+ arguments field.
+ examples: ['{\"location\": \"Paris\"}']
+ requirement_level: opt_in
- id: event.gen_ai.tool.message
name: gen_ai.tool.message
type: event
stability: experimental
brief: >
- This event describes the tool or function response message.
+ This event describes the response from a tool or function call passed to the GenAI model.
extends: gen_ai.common.event.attributes
+ body:
+ id: gen_ai.tool.message
+ requirement_level: opt_in
+ type: map
+ fields:
+ - id: content
+ type: undefined
+ stability: experimental
+ brief: >
+ The contents of the tool message.
+ examples: ["rainy, 57°F"]
+ requirement_level: opt_in
+ - id: role
+ type: string
+ stability: experimental
+ brief: >
+ The actual role of the message author as passed in the message.
+ examples: ["tool", "function"]
+ requirement_level:
+ conditionally_required: if available and not equal to `tool`.
+ - id: id
+ type: string
+ stability: experimental
+ brief: >
+ Tool call id that this message is responding to.
+ examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
+ requirement_level: required
- id: event.gen_ai.choice
name: gen_ai.choice
@@ -46,3 +186,123 @@ groups:
brief: >
This event describes the Gen AI response message.
extends: gen_ai.common.event.attributes
+ body:
+ id: gen_ai.choice
+ requirement_level: opt_in
+ type: map
+ note: >
+ If GenAI model returns multiple choices, each choice SHOULD be recorded as an individual event.
+ When response is streamed, instrumentations that report response events MUST reconstruct and report
+ the full message and MUST NOT report individual chunks as events.
+ If the request to GenAI model fails with an error before content is received,
+ instrumentation SHOULD report an event with truncated content (if enabled).
+ If `finish_reason` was not received, it MUST be set to `error`.
+ fields:
+ - id: index
+ type: int
+ stability: experimental
+ brief: >
+ The index of the choice in the list of choices.
+ examples: [0, 1]
+ requirement_level: required
+ - id: finish_reason
+ type: enum
+ members:
+ - id: stop
+ value: 'stop'
+ stability: experimental
+ brief: Stop
+ - id: tool_calls
+ value: 'tool_calls'
+ stability: experimental
+ brief: Tool Calls
+ - id: content_filter
+ value: 'content_filter'
+ stability: experimental
+ brief: Content Filter
+ - id: length
+ value: 'length'
+ stability: experimental
+ brief: Length
+ - id: error
+ value: 'error'
+ stability: experimental
+ brief: Error
+ stability: experimental
+ brief: >
+ The reason the model stopped generating tokens.
+ requirement_level: required
+ - id: message
+ type: map
+ stability: experimental
+ brief: >
+ GenAI response message.
+ requirement_level: recommended
+ fields:
+ - id: content
+ type: undefined
+ stability: experimental
+ brief: >
+ The contents of the assistant message.
+ examples: ["The weather in Paris is rainy and overcast, with temperatures around 57°F"]
+ requirement_level: opt_in
+ - id: role
+ type: string
+ stability: experimental
+ brief: >
+ The actual role of the message author as passed in the message.
+ examples: ["assistant", "bot"]
+ requirement_level:
+ conditionally_required: if available and not equal to `assistant`.
+ - id: tool_calls
+ type: map[]
+ stability: experimental
+ brief: >
+ The tool calls generated by the model, such as function calls.
+ requirement_level:
+ conditionally_required: if available
+ fields:
+ - id: id
+ type: string
+ stability: experimental
+ brief: >
+ The id of the tool call.
+ examples: ["call_mszuSIzqtI65i1wAUOE8w5H4"]
+ requirement_level: required
+ - id: type
+ type: enum
+ members:
+ - id: function
+ value: 'function'
+ brief: Function
+ stability: experimental
+ brief: >
+ The type of the tool.
+ requirement_level: required
+ - id: function
+ type: map
+ stability: experimental
+ brief: >
+ The function that the model called.
+ requirement_level: required
+ fields:
+ - id: name
+ type: string
+ stability: experimental
+ brief: >
+ The name of the function.
+ examples: ["get_weather"]
+ requirement_level: required
+ - id: arguments
+ type: undefined
+ stability: experimental
+ brief: >
+ The arguments of the function as provided in the LLM response.
+ note: >
+ Models usually return arguments as a JSON string. In this case, it's
+ RECOMMENDED to provide arguments as is without attempting to deserialize them.
+
+ Semantic conventions for individual systems MAY specify a different type for
+ arguments field.
+ examples: ['{\"location\": \"Paris\"}']
+ requirement_level: opt_in
diff --git a/model/hardware/common-metrics.yaml b/model/hardware/common-metrics.yaml
index 9ae9de1cd8..d57ea702e1 100644
--- a/model/hardware/common-metrics.yaml
+++ b/model/hardware/common-metrics.yaml
@@ -56,7 +56,7 @@ groups:
extends: metric.hw.attributes
note: >
`hw.status` is currently specified as an *UpDownCounter* but would ideally be represented using a
- [*StateSet* as defined in OpenMetrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#stateset).
+ [*StateSet* as defined in OpenMetrics](https://github.com/prometheus/OpenMetrics/blob/v1.0.0/specification/OpenMetrics.md#stateset).
This semantic convention will be updated once *StateSet* is specified in OpenTelemetry. This planned change
is not expected to have any consequence on the way users query their timeseries backend to retrieve the
values of `hw.status` over time.
diff --git a/templates/registry/markdown/body_field_table.j2 b/templates/registry/markdown/body_field_table.j2
index ca97a077f0..b600f6dd7f 100644
--- a/templates/registry/markdown/body_field_table.j2
+++ b/templates/registry/markdown/body_field_table.j2
@@ -6,7 +6,10 @@
{% macro flatten(fields, ns, depth) %}{% if fields %}{% for f in fields | sort(attribute="id") %}
{% set ns.flat = [ns.flat, [{'field':f,'depth':depth}]] | flatten %}{% if f.fields %}{% set _= flatten(f.fields, ns, depth + 1) %}{% endif %}
{% endfor %}{% endif %}{% endmacro %}
-{% macro field_name(field, depth) %}{% set name= " " * 2 * depth ~ '`' ~ field.id ~ '`' %}{% if field.type == "map" %}{{ name ~ ":"}}{% else %}{{ name }}{% endif %}{% endmacro %}
+{% macro field_name(field, depth) -%}
+{%- set name= " " * 2 * depth ~ '`' ~ field.id ~ '`' -%}
+{%- if (field.type == "map") or (field.type == "map[]") %}{{ name ~ ":"}}{% else -%}
+{{ name }}{% endif %}{% endmacro %}
{#- Macro for creating body table -#}
{% macro generate(fields) %}{% if (fields | length > 0) %}{% set ns = namespace(flat=[])%}{% set _ = flatten(fields, ns, 0) %}| Body Field | Type | Description | Examples | [Requirement Level](https://opentelemetry.io/docs/specs/semconv/general/attribute-requirement-level/) | Stability |
|---|---|---|---|---|---|