LLMA docs update (#12603)
Co-authored-by: Ian Vanagas <34755028+ivanagas@users.noreply.github.com>
@@ -1,30 +1,37 @@
An embedding is a single call to an embedding model to convert text into a vector representation.

Event Name: `$ai_embedding`
**Event name**: `$ai_embedding`

| Property | Description |
|----------|-------------|
| `$ai_trace_id` | The trace ID (a UUID to group related AI events together)<br/>Must contain only letters, numbers, and special characters: `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, <code>\|</code><br/>Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
### Core properties

| **Property** | **Description** |
|--------------|-----------------|
| `$ai_trace_id` | The trace ID (a UUID to group related AI events together). Must contain only letters, numbers, and special characters: `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, <code>\|</code> <br/>Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
| `$ai_span_id` | *(Optional)* Unique identifier for this embedding operation |
| `$ai_span_name` | *(Optional)* Name given to this embedding operation <br/>Example: `embed_user_query`, `index_document` |
| `$ai_parent_id` | *(Optional)* Parent span ID for tree-view grouping |
| `$ai_model` | The embedding model used<br/>Example: `text-embedding-3-small`, `text-embedding-ada-002` |
| `$ai_provider` | The LLM provider<br/>Example: `openai`, `cohere`, `voyage` |
| `$ai_input` | The text to embed<br/>Example: `"Tell me a fun fact about hedgehogs"` or an array of strings for batch embeddings |
| `$ai_input_tokens` | The number of tokens in the input |
| `$ai_latency` | **Optional**<br/>The latency of the embedding call in seconds |
| `$ai_http_status` | **Optional**<br/>The HTTP status code of the response |
| `$ai_base_url` | **Optional**<br/>The base URL of the LLM provider<br/>Example: `https://api.openai.com/v1` |
| `$ai_request_url` | **Optional**<br/>The full URL of the request made to the embedding API<br/>Example: `https://api.openai.com/v1/embeddings` |
| `$ai_is_error` | **Optional**<br/>Boolean to indicate if the request was an error |
| `$ai_error` | **Optional**<br/>The error message or object if the embedding failed |
| **Cost Properties** | *Optional - If not provided, costs will be calculated automatically from token counts* |
| `$ai_input_cost_usd` | **Optional**<br/>The cost in USD of the input tokens |
| `$ai_output_cost_usd` | **Optional**<br/>The cost in USD of the output tokens (usually 0 for embeddings) |
| `$ai_total_cost_usd` | **Optional**<br/>The total cost in USD |
| **Span/Trace Properties** | |
| `$ai_span_id` | **Optional**<br/>Unique identifier for this embedding operation |
| `$ai_span_name` | **Optional**<br/>Name given to this embedding operation<br/>Example: `embed_user_query`, `index_document` |
| `$ai_parent_id` | **Optional**<br/>Parent span ID for tree view grouping |
| `$ai_latency` | *(Optional)* The latency of the embedding call in seconds |
| `$ai_http_status` | *(Optional)* The HTTP status code of the response |
| `$ai_base_url` | *(Optional)* The base URL of the LLM provider<br/>Example: `https://api.openai.com/v1` |
| `$ai_request_url` | *(Optional)* The full URL of the request made to the embedding API<br/>Example: `https://api.openai.com/v1/embeddings` |
| `$ai_is_error` | *(Optional)* Boolean to indicate if the request was an error |
| `$ai_error` | *(Optional)* The error message or object if the embedding failed |

### Example
### Cost properties

Cost properties are optional as we can automatically calculate them from model and token counts. If you want, you can provide your own cost property instead.

| **Property** | **Description** |
|--------------|-----------------|
| `$ai_input_cost_usd` | *(Optional)* Cost in USD for input tokens |
| `$ai_output_cost_usd` | *(Optional)* Cost in USD for output tokens (usually 0 for embeddings) |
| `$ai_total_cost_usd` | *(Optional)* Total cost in USD |

### Example API call

```bash
curl -X POST "<ph_client_api_host>/i/v0/e/" \
```
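The `curl` example above is cut off in this hunk. As an editor's sketch (not part of the commit), here is what a manual `$ai_embedding` capture could look like with the Python SDK, assuming the classic `capture(distinct_id, event, properties)` signature; the distinct ID, trace ID, and token count are placeholder values:

```python
from posthog import Posthog

posthog = Posthog("<ph_project_api_key>", host="<ph_client_api_host>")

# Property names follow the schema table above.
posthog.capture(
    distinct_id="user_123",  # placeholder user
    event="$ai_embedding",
    properties={
        "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
        "$ai_model": "text-embedding-3-small",
        "$ai_provider": "openai",
        "$ai_input": "Tell me a fun fact about hedgehogs",
        "$ai_input_tokens": 8,  # placeholder count
        "$ai_latency": 0.12,    # optional, in seconds
    },
)
```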
@@ -1,40 +1,56 @@
A generation is a single call to an LLM.

Event Name: `$ai_generation`
**Event name**: `$ai_generation`

### Core properties

| Property | Description |
|----------|-------------|
| `$ai_trace_id` | The trace ID (a UUID to group AI events) like `conversation_id`<br/>Must contain only letters, numbers, and special characters: `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, <code>\|</code> |
| `$ai_model` | The model used <br/>Example: `gpt-3.5-turbo` |
| `$ai_trace_id` | The trace ID (a UUID to group AI events) like `conversation_id`<br/>Must contain only letters, numbers, and special characters: `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, <code>\|</code> <br/>Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
| `$ai_span_id` | *(Optional)* Unique identifier for this generation |
| `$ai_span_name` | *(Optional)* Name given to this generation <br/>Example: `summarize_text` |
| `$ai_parent_id` | *(Optional)* Parent span ID for tree view grouping |
| `$ai_model` | The model used <br/>Example: `gpt-5-mini` |
| `$ai_provider` | The LLM provider <br/>Example: `openai`, `anthropic`, `gemini` |
| `$ai_input` | List of messages sent to the LLM <br/>Example: `[{"role": "user", "content": [{"type": "text", "text": "What's in this image?"}, {"type": "image", "image": "https://example.com/image.jpg"}, {"type": "function", "function": {"name": "get_weather", "arguments": {"location": "San Francisco"}}}]}]` |
| `$ai_input_tokens` | The number of tokens in the input (often found in `response.usage`) |
| `$ai_output_choices` | List of response choices from the LLM <br/>Example: `[{"role": "assistant", "content": [{"type": "text", "text": "I can see a hedgehog in the image."}, {"type": "function", "function": {"name": "get_weather", "arguments": {"location": "San Francisco"}}}]}]` |
| `$ai_output_tokens` | The number of tokens in the output (often found in `response.usage`) |
| `$ai_latency` | **Optional**<br/>The latency of the LLM call in seconds |
| `$ai_http_status` | **Optional**<br/>The HTTP status code of the response |
| `$ai_base_url` | **Optional**<br/>The base URL of the LLM provider <br/>Example: `https://api.openai.com/v1` |
| `$ai_request_url` | **Optional**<br/>The full URL of the request made to the LLM API <br/>Example: `https://api.openai.com/v1/chat/completions` |
| `$ai_is_error` | **Optional**<br/>Boolean to indicate if the request was an error |
| `$ai_error` | **Optional**<br/>The error message or object |
| **Cost Properties** | *Optional - If not provided, costs will be calculated automatically from token counts* |
| `$ai_input_cost_usd` | **Optional**<br/>The cost in USD of the input tokens |
| `$ai_output_cost_usd` | **Optional**<br/>The cost in USD of the output tokens |
| `$ai_total_cost_usd` | **Optional**<br/>The total cost in USD (input + output) |
| **Cache Properties** | |
| `$ai_cache_read_input_tokens` | **Optional**<br/>Number of tokens read from cache |
| `$ai_cache_creation_input_tokens` | **Optional**<br/>Number of tokens written to cache (Anthropic-specific) |
| **Model Parameters** | |
| `$ai_temperature` | **Optional**<br/>Temperature parameter used in the LLM request |
| `$ai_stream` | **Optional**<br/>Whether the response was streamed |
| `$ai_max_tokens` | **Optional**<br/>Maximum tokens setting for the LLM response |
| `$ai_tools` | **Optional**<br/>Tools/functions available to the LLM <br/>Example: `[{"type": "function", "function": {"name": "get_weather", "parameters": {...}}}]` |
| **Span/Trace Properties** | |
| `$ai_span_id` | **Optional**<br/>Unique identifier for this generation |
| `$ai_span_name` | **Optional**<br/>Name given to this generation <br/>Example: `summarize_text` |
| `$ai_parent_id` | **Optional**<br/>Parent span ID for tree view grouping |
| `$ai_latency` | *(Optional)* The latency of the LLM call in seconds |
| `$ai_http_status` | *(Optional)* The HTTP status code of the response |
| `$ai_base_url` | *(Optional)* The base URL of the LLM provider <br/>Example: `https://api.openai.com/v1` |
| `$ai_request_url` | *(Optional)* The full URL of the request made to the LLM API <br/>Example: `https://api.openai.com/v1/chat/completions` |
| `$ai_is_error` | *(Optional)* Boolean to indicate if the request was an error |
| `$ai_error` | *(Optional)* The error message or object |

### Cost properties

Cost properties are optional as we can automatically calculate them from model and token counts. If you want, you can provide your own cost property instead.

| Property | Description |
|----------|-------------|
| `$ai_input_cost_usd` | *(Optional)* The cost in USD of the input tokens |
| `$ai_output_cost_usd` | *(Optional)* The cost in USD of the output tokens |
| `$ai_total_cost_usd` | *(Optional)* The total cost in USD (input + output) |

### Cache properties

| Property | Description |
|----------|-------------|
| `$ai_cache_read_input_tokens` | *(Optional)* Number of tokens read from cache |
| `$ai_cache_creation_input_tokens` | *(Optional)* Number of tokens written to cache (Anthropic-specific) |

### Model parameters

| Property | Description |
|----------|-------------|
| `$ai_temperature` | *(Optional)* Temperature parameter used in the LLM request |
| `$ai_stream` | *(Optional)* Whether the response was streamed |
| `$ai_max_tokens` | *(Optional)* Maximum tokens setting for the LLM response |
| `$ai_tools` | *(Optional)* Tools/functions available to the LLM <br/>Example: `[{"type": "function", "function": {"name": "get_weather", "parameters": {...}}}]` |

### Example API call

### Example
```bash
curl -X POST "<ph_client_api_host>/i/v0/e/" \
-H "Content-Type: application/json" \
```
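Continuing the editor's sketch above (reusing the `posthog` client), a manual `$ai_generation` capture could look like this; the messages, token counts, and latency are illustrative:

```python
posthog.capture(
    distinct_id="user_123",
    event="$ai_generation",
    properties={
        "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
        "$ai_model": "gpt-5-mini",
        "$ai_provider": "openai",
        "$ai_input": [{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
        "$ai_input_tokens": 16,  # placeholder
        "$ai_output_choices": [
            {"role": "assistant", "content": "Hedgehogs have around 5,000 spines."}
        ],
        "$ai_output_tokens": 12,  # placeholder
        "$ai_latency": 0.92,      # optional, in seconds
    },
)
```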
@@ -28,6 +28,11 @@ const LLMInstallationPlatforms = () => {
        url: '/docs/llm-analytics/installation/vercel-ai',
        image: 'https://res.cloudinary.com/dmukukwp6/image/upload/vercel_ded5edb1ef.svg',
    },
    {
        label: 'OpenRouter',
        url: '/docs/llm-analytics/installation/openrouter',
        image: 'https://res.cloudinary.com/dmukukwp6/image/upload/openrouterai_8260e8b011.png',
    },
]

return <List className="grid sm:grid-cols-2" items={platforms} />
@@ -0,0 +1,12 @@
| Property | Description |
|----------|-------------|
| `$ai_model` | The specific model, like `gpt-5-mini` or `claude-4-sonnet` |
| `$ai_latency` | The latency of the LLM call in seconds |
| `$ai_tools` | Tools and functions available to the LLM |
| `$ai_input` | List of messages sent to the LLM |
| `$ai_input_tokens` | The number of tokens in the input (often found in `response.usage`) |
| `$ai_output_choices` | List of response choices from the LLM |
| `$ai_output_tokens` | The number of tokens in the output (often found in `response.usage`) |
| `$ai_total_cost_usd` | The total cost in USD (input + output) |
| [...](/docs/llm-analytics/generations#event-properties) | See the [full list](/docs/llm-analytics/generations#event-properties) of properties |
@@ -1,20 +1,22 @@
A span is a single action within your application, such as a function call or vector database search.

Event Name: `$ai_span`
**Event name**: `$ai_span`

### Core properties

| Property | Description |
|----------|-------------|
| `$ai_trace_id` | The trace ID (a UUID to group related AI events together)<br/>Must contain only letters, numbers, and the following characters: `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, <code>\|</code><br/>Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
| `$ai_span_id` | *(Optional)* Unique identifier for this span<br/>Example: `bdf42359-9364-4db7-8958-c001f28c9255` |
| `$ai_span_name` | *(Optional)* The name of the span<br/>Example: `vector_search`, `data_retrieval`, `tool_call` |
| `$ai_parent_id` | *(Optional)* Parent ID for tree view grouping (`trace_id` or another `span_id`)<br/>Example: `537b7988-0186-494f-a313-77a5a8f7db26` |
| `$ai_input_state` | The input state of the span<br/>Example: `{"query": "search for documents about hedgehogs"}` or any JSON-serializable state |
| `$ai_output_state` | The output state of the span<br/>Example: `{"results": ["doc1", "doc2"], "count": 2}` or any JSON-serializable state |
| `$ai_latency` | **Optional**<br/>The latency of the span in seconds<br/>Example: `0.361` |
| `$ai_span_name` | **Optional**<br/>The name of the span<br/>Example: `vector_search`, `data_retrieval`, `tool_call` |
| `$ai_span_id` | **Optional**<br/>Unique identifier for this span<br/>Example: `bdf42359-9364-4db7-8958-c001f28c9255` |
| `$ai_parent_id` | **Optional**<br/>Parent ID for tree view grouping (trace_id or another span_id)<br/>Example: `537b7988-0186-494f-a313-77a5a8f7db26` |
| `$ai_is_error` | **Optional**<br/>Boolean to indicate if the span encountered an error |
| `$ai_error` | **Optional**<br/>The error message or object if the span failed<br/>Example: `{"message": "Connection timeout", "code": "TIMEOUT"}` |
| `$ai_latency` | *(Optional)* The latency of the span in seconds<br/>Example: `0.361` |
| `$ai_is_error` | *(Optional)* Boolean to indicate if the span encountered an error |
| `$ai_error` | *(Optional)* The error message or object if the span failed<br/>Example: `{"message": "Connection timeout", "code": "TIMEOUT"}` |

### Example
### Example API call

```bash
curl -X POST "<ph_client_api_host>/i/v0/e/" \
```
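A matching editor's sketch for a manual `$ai_span` capture, reusing the `posthog` client and the example values from the table above:

```python
posthog.capture(
    distinct_id="user_123",
    event="$ai_span",
    properties={
        "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
        "$ai_span_id": "bdf42359-9364-4db7-8958-c001f28c9255",
        "$ai_span_name": "vector_search",
        "$ai_parent_id": "d9222e05-8708-41b8-98ea-d4a21849e761",  # parent is the trace here
        "$ai_input_state": {"query": "search for documents about hedgehogs"},
        "$ai_output_state": {"results": ["doc1", "doc2"], "count": 2},
        "$ai_latency": 0.361,  # optional, in seconds
    },
)
```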
@@ -1,16 +1,18 @@
A trace is a group that contains multiple spans, generations, and embeddings. Traces can be manually sent as events or appear as pseudo-events automatically created from child events.

Event Name: `$ai_trace`
**Event name**: `$ai_trace`

### Core properties

| Property | Description |
|----------|-------------|
| `$ai_trace_id` | The trace ID (a UUID to group related AI events together)<br/>Must contain only letters, numbers, and special characters: `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, <code>\|</code><br/>Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
| `$ai_input_state` | The input of the whole trace<br/>Example: `[{"role": "user", "content": "What's the weather in SF?"}]` or any JSON-serializable state |
| `$ai_output_state` | The output of the whole trace<br/>Example: `[{"role": "assistant", "content": "The weather in San Francisco is..."}]` or any JSON-serializable state |
| `$ai_latency` | **Optional**<br/>The latency of the trace in seconds |
| `$ai_span_name` | **Optional**<br/>The name of the trace<br/>Example: `chat_completion`, `rag_pipeline` |
| `$ai_is_error` | **Optional**<br/>Boolean to indicate if the trace encountered an error |
| `$ai_error` | **Optional**<br/>The error message or object if the trace failed |
| `$ai_latency` | *(Optional)* The latency of the trace in seconds |
| `$ai_span_name` | *(Optional)* The name of the trace<br/>Example: `chat_completion`, `rag_pipeline` |
| `$ai_is_error` | *(Optional)* Boolean to indicate if the trace encountered an error |
| `$ai_error` | *(Optional)* The error message or object if the trace failed |

### Pseudo-trace Events

@@ -25,7 +27,7 @@ This means you can either:
1. Send explicit `$ai_trace` events to control the trace metadata
2. Let PostHog automatically create pseudo-traces from your generation/span events

### Example
### Example API call

```bash
curl -X POST "<ph_client_api_host>/i/v0/e/" \
```
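And an editor's sketch for an explicit `$ai_trace` event, again with the example values from the table:

```python
posthog.capture(
    distinct_id="user_123",
    event="$ai_trace",
    properties={
        "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
        "$ai_span_name": "rag_pipeline",
        "$ai_input_state": [{"role": "user", "content": "What's the weather in SF?"}],
        "$ai_output_state": [{"role": "assistant", "content": "The weather in San Francisco is..."}],
        "$ai_latency": 2.4,  # optional, in seconds
    },
)
```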
@@ -1,9 +1,5 @@
---
title: Embeddings
availability:
  free: full
  selfServe: full
  enterprise: full
---

import EmbeddingEvent from "./_snippets/embedding-event.mdx"
@@ -1,24 +1,102 @@
---
title: Generations
availability:
  free: full
  selfServe: full
  enterprise: full
---

import GenerationEvent from "./_snippets/generation-event.mdx"
import NotableGenerationProperties from "./_snippets/notable-generation-properties.mdx"

Generations are an event that capture an LLM request. The [generations tab](https://app.posthog.com/llm-analytics/generations) lists them along with the properties autocaptured by PostHog like the person, model, total cost, token usage, and more.

When you expand a generation, it includes the properties, metadata, a conversation history, the role (system, user, assistant), input content, and output content.
Generations are events that capture LLM calls and their responses. They represent interactions and conversations with an AI model. Generations are tracked as `$ai_generation` events and can be used to create and visualize [insights](/docs/product-analytics/insights) just like other PostHog events.

<ProductScreenshot
    imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/llma_generations_8a94f2cd01.png"
    imageDark="https://res.cloudinary.com/dmukukwp6/image/upload/llma_generations_dark_50e206f940.png"
    imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/ai_generations_f687da8aaa.png"
    imageDark="https://res.cloudinary.com/dmukukwp6/image/upload/ai_generations_dark_51d8bae273.png"
    alt="$ai_generation events"
    classes="rounded"
/>

<Caption>
View recent AI generation events in the <b>Activity</b> tab
</Caption>

The **LLM analytics** > [**Generations** tab](https://app.posthog.com/llm-analytics/generations) displays a list of generations, along with a preview of key autocaptured properties. You can filter and search for generations by various properties.

<ProductScreenshot
    imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/llm_generations_b12119af33.png"
    imageDark="https://res.cloudinary.com/dmukukwp6/image/upload/llm_geneerations_dark_03c996e8ad.png"
    alt="LLM generations"
    classes="rounded"
/>

<Caption>
Preview and filter AI generations in the <b>Generations</b> tab
</Caption>

## What does each generation capture?

A generation event records the AI model’s inputs, generated output, and additional metadata – like token usage, latency, and cost – for each LLM call.

PostHog automatically logs and displays the generation and its data within a conversation view for contextual debugging and analysis. You can also view the raw JSON payload.

<ProductVideo
    videoLight="https://res.cloudinary.com/dmukukwp6/video/upload/ai_generation_in_app_18f37057ca.mp4"
    alt="AI generation in-app view"
    autoPlay="true"
    loop="true"
/>

You can expect each generation to have the following properties (in addition to the [default event properties](/docs/data/events#default-properties)):

<NotableGenerationProperties />

When calling LLMs with our [SDK wrappers](/docs/llm-analytics/installation), you can also enrich the `$ai_generation` event with your own [custom properties](/docs/llm-analytics/custom-properties) and PostHog attributes like groups and distinct IDs for identified users.

<MultiLanguage>

```python
response = client.responses.create(
    model="gpt-5-mini",
    input=[
        {"role": "user", "content": "Tell me a fun fact about hedgehogs"}
    ],
    posthog_distinct_id="user_123",  # optional
    posthog_trace_id="trace_123",  # optional
    posthog_properties={"custom_property": "value"},  # optional
    posthog_groups={"company": "company_id_in_your_db"},  # optional
    posthog_privacy_mode=False  # optional
)
```

```ts
const completion = await openai.responses.create({
    model: "gpt-5-mini",
    input: [{ role: "user", content: "Tell me a fun fact about hedgehogs" }],
    posthogDistinctId: "user_123", // optional
    posthogTraceId: "trace_123", // optional
    posthogProperties: { custom_property: "value" }, // optional
    posthogGroups: { company: "company_id_in_your_db" }, // optional
    posthogPrivacyMode: false // optional
});
```

</MultiLanguage>

## How are generations, traces, and spans related?

Generations are nested under [spans](/docs/llm-analytics/spans) and [traces](/docs/llm-analytics/traces).

A trace is the top-level entity that groups all related LLM operations, including spans and generations, together.

Spans are individual operations within a trace. Some spans represent generations, which are also uniquely identified using the `$ai_span_id` property. However, most spans track other types of LLM operations such as tool calls, RAG retrieval, data processing, and more.

<ProductScreenshot
    imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/llm_spans_151fd2701a.png"
    imageDark="https://res.cloudinary.com/dmukukwp6/image/upload/llm_spans_dark_6ce1cab5b9.png"
    alt="LLM trace tree"
    classes="rounded"
/>

<Caption>Generations and spans are nested within a trace</Caption>

## Event properties

<GenerationEvent />
@@ -2,8 +2,8 @@ import { CalloutBox } from 'components/Docs/CalloutBox'
<CalloutBox type="fyi" icon="IconInfo" title="Proxy note">

These SDKs **do not** proxy your calls, they only fire off an async call to PostHog in the background to send the data.
These SDKs **do not** proxy your calls. They only fire off an async call to PostHog in the background to send the data.

You can also use LLM analytics with other SDKs or our API, but you will need to capture the data manually via the capture method. See schema in the [manual capture section](/docs/llm-analytics/manual-capture) for more details.
You can also use LLM analytics with other SDKs or our API, but you will need to capture the data in the right format. See the schema in the [manual capture section](/docs/llm-analytics/manual-capture) for more details.

</CalloutBox>
@@ -5,6 +5,7 @@ showStepsToc: true
import LLMsSDKsCallout from './_snippets/llms-sdks-callout.mdx'
import VerifyLLMEventsStep from './_snippets/verify-llm-events-step.mdx'
import NotableGenerationProperties from '../_snippets/notable-generation-properties.mdx'

<Steps>
@@ -48,7 +49,7 @@ npm install @anthropic-ai/sdk
<Step title="Initialize PostHog and the Anthropic wrapper" badge="required">

In the spot where you initialize the Anthropic SDK, import PostHog and our Anthropic wrapper, initialize PostHog with your project API key and host from [your project settings](https://app.posthog.com/settings/project), and pass it to our Anthropic wrapper.
Initialize PostHog with your project API key and host from [your project settings](https://app.posthog.com/settings/project), then pass it to our Anthropic wrapper.

<MultiLanguage>
@@ -90,7 +91,9 @@ const client = new Anthropic({
<Step title="Call Anthropic LLMs" badge="required">

Now, when you use the Anthropic SDK, it automatically captures many properties into PostHog including `$ai_input`, `$ai_input_tokens`, `$ai_cache_read_input_tokens`, `$ai_cache_creation_input_tokens`, `$ai_latency`, `$ai_tools`, `$ai_model`, `$ai_model_parameters`, `$ai_output_choices`, and `$ai_output_tokens`.
Now, when you use the Anthropic SDK, PostHog automatically captures an `$ai_generation` event along with these properties:

<NotableGenerationProperties />

You can also capture or modify additional properties with the distinct ID, trace ID, properties, groups, and privacy mode parameters.
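The code block for this step is collapsed in the hunk. As an editor's sketch, a wrapped call could look like this in Python, assuming the Anthropic wrapper mirrors the `posthog.ai.openai` import used later in this commit; the model name is a placeholder:

```python
from posthog import Posthog
from posthog.ai.anthropic import Anthropic  # assumed to mirror posthog.ai.openai

posthog = Posthog("<ph_project_api_key>", host="<ph_client_api_host>")

client = Anthropic(
    api_key="<anthropic_api_key>",
    posthog_client=posthog,
)

response = client.messages.create(
    model="claude-4-sonnet",  # placeholder model name
    max_tokens=200,
    messages=[{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
    posthog_distinct_id="user_123",  # optional, as with the OpenAI wrapper
)
print(response.content[0].text)
```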
@@ -5,6 +5,7 @@ showStepsToc: true
import LLMsSDKsCallout from './_snippets/llms-sdks-callout.mdx'
import VerifyLLMEventsStep from './_snippets/verify-llm-events-step.mdx'
import NotableGenerationProperties from '../_snippets/notable-generation-properties.mdx'

<Steps>
@@ -48,8 +49,8 @@ npm install @google/genai
</Step>

<Step title="Initialize PostHog and Google Gen AI client" badge="required">

In the spot where you initialize the Google Gen AI SDK, import PostHog and our Google Gen AI wrapper, initialize PostHog with your project API key and host from [your project settings](https://app.posthog.com/settings/project), and pass it to our wrapper.

Initialize PostHog with your project API key and host from [your project settings](https://app.posthog.com/settings/project), then pass it to our Google Gen AI wrapper.

<MultiLanguage>
@@ -89,7 +90,9 @@ const client = new GoogleGenAI({
<Step title="Call Google Gen AI LLMs" badge="required">

Now, when you use the Google Gen AI SDK, it automatically captures many properties into PostHog including `$ai_input`, `$ai_input_tokens`, `$ai_cache_read_input_tokens`, `$ai_cache_creation_input_tokens`, `$ai_latency`, `$ai_tools`, `$ai_model`, `$ai_model_parameters`, `$ai_output_choices`, and `$ai_output_tokens`.
Now, when you use the Google Gen AI SDK, PostHog automatically captures an `$ai_generation` event along with these properties:

<NotableGenerationProperties />

You can also capture or modify additional properties with the distinct ID, trace ID, properties, groups, and privacy mode parameters.
@@ -5,6 +5,7 @@ showStepsToc: true
import LLMsSDKsCallout from './_snippets/llms-sdks-callout.mdx'
import VerifyLLMEventsStep from './_snippets/verify-llm-events-step.mdx'
import NotableGenerationProperties from '../_snippets/notable-generation-properties.mdx'

<Steps>
@@ -48,7 +49,7 @@ npm install langchain @langchain/core @posthog/ai
<Step title="Initialize PostHog and LangChain" badge="required">

In the spot where you make your OpenAI calls, import PostHog, LangChain, and our LangChain `CallbackHandler`. Initialize PostHog with your project API key and host from [your project settings](https://app.posthog.com/settings/project), and pass it to the `CallbackHandler`.
Initialize PostHog with your project API key and host from [your project settings](https://app.posthog.com/settings/project), then pass it to the LangChain `CallbackHandler` wrapper.

Optionally, you can provide a user distinct ID, trace ID, PostHog properties, [groups](/docs/product-analytics/group-analytics), and privacy mode.
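The initialization code itself is collapsed in the hunk. A minimal Python sketch of the pattern, assuming the handler lives at `posthog.ai.langchain` and accepts the client and optional IDs as keyword arguments:

```python
from posthog import Posthog
from posthog.ai.langchain import CallbackHandler  # assumed import path

posthog = Posthog("<ph_project_api_key>", host="<ph_client_api_host>")

callback_handler = CallbackHandler(
    client=posthog,
    distinct_id="user_123",  # optional
    trace_id="trace_123",    # optional
    properties={"conversation_id": "abc123"},  # optional
)

# Pass the handler in the run config of any chain or model invocation, e.g.:
# chain.invoke({"topic": "hedgehogs"}, config={"callbacks": [callback_handler]})
```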
@@ -151,7 +152,11 @@ phClient.shutdown();
</MultiLanguage>

This automatically captures many properties into PostHog including `$ai_input`, `$ai_input_tokens`, `$ai_latency`, `$ai_model`, `$ai_model_parameters`, `$ai_output_choices`, and `$ai_output_tokens`. It also automatically creates a trace hierarchy based on how LangChain components are nested.
PostHog automatically captures an `$ai_generation` event along with these properties:

<NotableGenerationProperties />

It also automatically creates a trace hierarchy based on how LangChain components are nested.

</Step>
@@ -5,6 +5,7 @@ showStepsToc: true
import LLMsSDKsCallout from './_snippets/llms-sdks-callout.mdx'
import VerifyLLMEventsStep from './_snippets/verify-llm-events-step.mdx'
import NotableGenerationProperties from '../_snippets/notable-generation-properties.mdx'

<Steps>
@@ -42,13 +43,11 @@ npm install openai
</MultiLanguage>

<LLMsSDKsCallout />

</Step>

<Step title="Initialize PostHog and OpenAI client" badge="required">

In the spot where you initialize the OpenAI SDK, import PostHog and our OpenAI wrapper, initialize PostHog with your project API key and host from [your project settings](https://app.posthog.com/settings/project), and pass it to our OpenAI wrapper.
Initialize PostHog with your project API key and host from [your project settings](https://app.posthog.com/settings/project), then pass it to our OpenAI wrapper.

<MultiLanguage>
@@ -91,11 +90,17 @@ phClient.shutdown()
> **Note:** This also works with the `AsyncOpenAI` client.

<LLMsSDKsCallout />

</Step>

<Step title="Call OpenAI LLMs" badge="required">

Now, when you use the OpenAI SDK, it automatically captures many properties into PostHog including `$ai_input`, `$ai_input_tokens`, `$ai_cache_read_input_tokens`, `$ai_latency`, `$ai_model`, `$ai_model_parameters`, `$ai_reasoning_tokens`, `$ai_tools`, `$ai_output_choices`, and `$ai_output_tokens`.
Now, when you use the OpenAI SDK, PostHog automatically captures an `$ai_generation` event along with these properties:

<NotableGenerationProperties />

You can also capture or modify additional properties with the distinct ID, trace ID, properties, groups, and privacy mode parameters.
contents/docs/llm-analytics/installation/openrouter.mdx (new file, 155 lines)
@@ -0,0 +1,155 @@
---
title: OpenRouter LLM analytics installation
showStepsToc: true
---

import LLMsSDKsCallout from './_snippets/llms-sdks-callout.mdx'
import VerifyLLMEventsStep from './_snippets/verify-llm-events-step.mdx'
import NotableGenerationProperties from '../_snippets/notable-generation-properties.mdx'

<Steps>

<Step title="Install the PostHog SDK" badge="required">

Setting up analytics starts with installing the PostHog SDK for your language. LLM analytics works best with our Python and Node SDKs.

<MultiLanguage>

```bash file=Python
pip install posthog
```

```bash file=Node
npm install @posthog/ai posthog-node
```

</MultiLanguage>

</Step>

<Step title="Install the OpenAI SDK" badge="required">

Install the OpenAI SDK:

<MultiLanguage>

```bash file=Python
pip install openai
```

```bash file=Node
npm install openai
```

</MultiLanguage>

</Step>

<Step title="Initialize PostHog and OpenAI client" badge="required">

We call OpenRouter through the OpenAI client and generate a response. We’ll use PostHog’s OpenAI wrapper to capture all the details of the call.

Initialize PostHog with your PostHog project API key and host from [your project settings](https://app.posthog.com/settings/project), then pass the PostHog client along with the OpenRouter config (the base URL and API key) to our OpenAI wrapper.

<MultiLanguage>

```python
from posthog.ai.openai import OpenAI
from posthog import Posthog

posthog = Posthog(
    "<ph_project_api_key>",
    host="<ph_client_api_host>"
)

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<openrouter_api_key>",
    posthog_client=posthog  # This is an optional parameter. If it is not provided, a default client will be used.
)
```

```ts
import { OpenAI } from '@posthog/ai'
import { PostHog } from 'posthog-node'

const phClient = new PostHog(
    '<ph_project_api_key>',
    { host: '<ph_client_api_host>' }
);

const openai = new OpenAI({
    baseURL: 'https://openrouter.ai/api/v1',
    apiKey: '<openrouter_api_key>',
    posthog: phClient,
});

// ... your code here ...

// IMPORTANT: Shutdown the client when you're done to ensure all events are sent
phClient.shutdown()
```

</MultiLanguage>

> **Note:** This also works with the `AsyncOpenAI` client.

<LLMsSDKsCallout />

</Step>

<Step title="Call OpenRouter" badge="required">

Now, when you call OpenRouter with the OpenAI SDK, PostHog automatically captures an `$ai_generation` event along with these properties:

<NotableGenerationProperties />

You can also capture or modify additional properties with the distinct ID, trace ID, properties, groups, and privacy mode parameters.

<MultiLanguage>

```python
response = client.responses.create(
    model="gpt-5-mini",
    input=[
        {"role": "user", "content": "Tell me a fun fact about hedgehogs"}
    ],
    posthog_distinct_id="user_123",  # optional
    posthog_trace_id="trace_123",  # optional
    posthog_properties={"conversation_id": "abc123", "paid": True},  # optional
    posthog_groups={"company": "company_id_in_your_db"},  # optional
    posthog_privacy_mode=False  # optional
)

print(response.output_text)
```

```ts
const completion = await openai.responses.create({
    model: "gpt-5-mini",
    input: [{ role: "user", content: "Tell me a fun fact about hedgehogs" }],
    posthogDistinctId: "user_123", // optional
    posthogTraceId: "trace_123", // optional
    posthogProperties: { conversation_id: "abc123", paid: true }, // optional
    posthogGroups: { company: "company_id_in_your_db" }, // optional
    posthogPrivacyMode: false // optional
});

console.log(completion.output_text)
```

</MultiLanguage>

> **Notes:**
> - We also support the old `chat.completions` API.
> - This works with responses where `stream=True`.
> - If you want to capture LLM events anonymously, **don't** pass a distinct ID to the request. See our docs on [anonymous vs identified events](/docs/data/anonymous-vs-identified-events) to learn more.

</Step>

<VerifyLLMEventsStep />

</Steps>
@@ -5,6 +5,8 @@ showStepsToc: true
import LLMsSDKsCallout from './_snippets/llms-sdks-callout.mdx'
import VerifyLLMEventsStep from './_snippets/verify-llm-events-step.mdx'
import NotableGenerationProperties from '../_snippets/notable-generation-properties.mdx'

<Steps>
@@ -32,7 +34,7 @@ npm install ai @ai-sdk/openai
<Step title="Initialize PostHog and Vercel AI" badge="required">

In the spot where you initialize the Vercel AI SDK, import PostHog and our `withTracing` wrapper, initialize PostHog with your project API key and host from [your project settings](https://us.posthog.com/settings/project), and pass it to the `withTracing` wrapper.
Initialize PostHog with your project API key and host from [your project settings](https://app.posthog.com/settings/project), then pass the Vercel AI OpenAI client and the PostHog client to the `withTracing` wrapper.

```ts
import { PostHog } from "posthog-node";
@@ -65,7 +67,11 @@ phClient.shutdown()
<Step title="Call Vercel AI" badge="required">

Now, when you use the Vercel AI SDK, it automatically captures many properties into PostHog including `$ai_input`, `$ai_input_tokens`, `$ai_latency`, `$ai_model`, `$ai_model_parameters`, `$ai_output_choices`, and `$ai_output_tokens`. This works for both `text` and `image` message types.
Now, when you use the Vercel AI SDK, PostHog automatically captures an `$ai_generation` event along with these properties:

<NotableGenerationProperties />

This works for both `text` and `image` message types.

You can also capture or modify additional properties with the `posthogDistinctId`, `posthogTraceId`, `posthogProperties`, `posthogGroups`, and `posthogPrivacyMode` parameters.
@@ -1,21 +1,24 @@
---
title: LLM analytics integrations
title: LLM analytics third-party integrations
availability:
  free: full
  selfServe: full
  enterprise: full
---

Beyond our native [LLM analytics product](/docs/llm-analytics/installation), we've teamed up with various LLM platforms to track metrics for LLM apps. This makes it easy to answer questions like:
import { CalloutBox } from 'components/Docs/CalloutBox'

- What are my LLM costs by customer, model, and in total?
- How many of my users are interacting with my LLM features?
- Are there generation latency spikes?
- Does interacting with LLM features correlate with other metrics (retention, usage, revenue, etc.)?
Outside of our native [LLM analytics product](/docs/llm-analytics/start-here), we've teamed up with various third-party platforms to track metrics for LLM apps.

## Supported integrations
<CalloutBox title="Note">

Currently, we support integrations for the following platforms:
These third-party integrations are no longer officially maintained. These guides may still be helpful, but we recommend using our [LLM analytics solution](/docs/llm-analytics/start-here) for a fully supported option.

</CalloutBox>

## Third-party integrations

Currently, we have integrations for the following platforms:

- [Langfuse](/docs/llm-analytics/integrations/langfuse-posthog)
- [Helicone](/docs/llm-analytics/integrations/helicone-posthog)
@@ -30,6 +33,6 @@ To create your own dashboard from a template:
1. Go to the [dashboard tab](https://app.posthog.com/dashboard) in PostHog.
2. Click the **New dashboard** button in the top right.
3. Select **LLM metrics – [name of the integration you installed]** from the list of templates.
3. Select **LLM metrics – [name of the third-party integration you installed]** from the list of templates.


@@ -8,7 +8,7 @@ import TraceEvent from "./_snippets/trace-event.mdx"
import SpanEvent from "./_snippets/span-event.mdx"
import EmbeddingEvent from "./_snippets/embedding-event.mdx"

If you're using a different SDK or the API, you can manually capture the data by calling the `capture` method.
If you're using a different SDK or the API, you can manually capture the data by calling the `capture` method or using the [capture API](/docs/api/capture).

<Tab.Group tabs={['Generation', 'Trace', 'Span', 'Embedding']}>
<Tab.List>
@@ -1,17 +1,23 @@
---
title: Spans
availability:
  free: full
  selfServe: full
  enterprise: full
---

import SpanEvent from "./_snippets/span-event.mdx"

Spans are individual operations within your LLM application like function calls, vector searches, or data retrieval steps. They provide granular visibility into your application's execution flow beyond just LLM calls.
Spans are units of work within an LLM [trace](/docs/llm-analytics/traces). These are events that represent individual operations and discrete durations within your LLM application, like function calls, vector searches, or data retrieval steps, providing granular visibility into the execution flow.

While [generations](/docs/llm-analytics/generations) capture LLM interactions and [traces](/docs/llm-analytics/traces) group related operations, spans track atomic operations that make up your workflow:
<ProductScreenshot
    imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/llm_spans_151fd2701a.png"
    imageDark="https://res.cloudinary.com/dmukukwp6/image/upload/llm_spans_dark_6ce1cab5b9.png"
    alt="LLM trace tree"
    classes="rounded"
/>

<Caption>Spans are nested and displayed within a trace</Caption>

PostHog captures spans to track atomic operations that make up your LLM workflow. For example:

- **[Generations](/docs/llm-analytics/generations)** - LLM calls and interactions
- **Vector database searches** - Document and embedding retrieval
- **Tool/function calls** - API calls, calculations, database queries
- **RAG pipeline steps** - Retrieval, reranking, context building
@@ -37,30 +37,38 @@ Install PostHog SDK
</QuestLogItem>

<QuestLogItem
    title="Record generations"
    title="Track AI generations"
    subtitle="Required"
    icon="IconRecord"
>

Once you've installed the SDK, every LLM call automatically becomes a [generation](/docs/llm-analytics/generations) – a detailed record of what went in and what came out. Each generation captures:

- 📝 Complete conversation context (inputs and outputs)
- 🔢 Token counts and usage metrics
- ⏱️ Response latency and performance data
- 💸 Automatic cost calculation based on model pricing
- 🔗 Trace IDs to group related LLM calls together
- Complete conversation context (inputs and outputs)
- Token counts and usage metrics
- Response latency and performance data
- Automatic cost calculation based on model pricing
- Trace IDs to group related LLM calls together

<ProductVideo
    videoLight="https://res.cloudinary.com/dmukukwp6/video/upload/ai_generation_in_app_18f37057ca.mp4"
    alt="AI generation in-app view"
    autoPlay="true"
    loop="true"
/>

PostHog's SDK wrappers handle all the heavy lifting. Use your LLM provider as normal and we'll capture everything automatically.

<CallToAction type="primary" to="/docs/llm-analytics/generations">
    Learn about generations
</CallToAction>

</QuestLogItem>

<QuestLogItem
    title="Evaluate model usage"
    subtitle="Required"
    subtitle="Recommended"
    icon="IconLineGraph"
>
@@ -130,7 +138,7 @@ PostHog's SDK wrappers handle all the heavy lifting. Use your LLM provider as no
PostHog LLM analytics is designed to be cost-effective with a generous free tier and transparent usage-based pricing. Since we don't charge per seat, more than 90% of companies use PostHog for free.

### TL;DR
### TL;DR 💸

- No credit card required to start
- First 100K LLM events per month are free with 30-day retention
@@ -1,16 +1,14 @@
---
title: Traces
availability:
  free: full
  selfServe: full
  enterprise: full
---

import TraceEvent from "./_snippets/trace-event.mdx"

Traces are a collection of generations and spans that capture a full interaction between a user and an LLM. The [traces tab](https://app.posthog.com/llm-analytics/traces) lists them along with the properties autocaptured by PostHog like the person, total cost, total latency, and more.
Traces are a collection of [generations](/docs/llm-analytics/generations) and [spans](/docs/llm-analytics/spans) that capture a full interaction between a user and an LLM. The [traces tab](https://app.posthog.com/llm-analytics/traces) lists them along with the properties autocaptured by PostHog like the person, total cost, total latency, and more.

Clicking on a trace opens a timeline of the interaction with all the generation and span events enabling you to see the entire conversation, details about the trace, and the individual generation and span events.
## Trace timeline

Clicking on a trace opens a timeline of the interaction with all the generation and span events. The trace timeline enables you to see the entire conversation, profiling details, and the individual generations and spans.

<ProductScreenshot
    imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/llma_traces_25e203aa50.png"
@@ -19,6 +17,40 @@ Clicking on a trace opens a timeline of the interaction with all the generation
    classes="rounded"
/>

<Caption>A trace presents LLM event data in a timeline, tree-structured view</Caption>

## AI event hierarchy

```mermaid
flowchart TD

    A[<strong>$ai_trace</strong>]

    B[<strong>$ai_generation</strong>]

    C@{ shape: processes, label: "<strong>$ai_spans</strong>" }

    D[<strong>$ai_generation</strong>]

    E@{ shape: processes, label: "<strong>$ai_spans</strong>" }

    F[<strong>$ai_generation</strong>]

    A --> B
    A --> C
    C --> D
    C --> E
    E --> F
```

Traces consist of the following event hierarchy:

1. A trace is the top-level event entity.
2. A trace can contain multiple spans and generations.
3. A span can be the parent of other spans.
4. A generation can be the child of a span or trace.
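As an editor's sketch, this hierarchy could be stitched together with manual captures as follows; all IDs are placeholders, and `$ai_parent_id` points at the parent span or trace ID per the schemas above (`posthog` is the client from the earlier sketches):

```python
trace_id = "d9222e05-8708-41b8-98ea-d4a21849e761"

# 1./2. The trace is the top-level entity that groups everything below it.
posthog.capture(distinct_id="user_123", event="$ai_trace",
                properties={"$ai_trace_id": trace_id})

# 3. A span inside the trace; its parent is the trace itself.
posthog.capture(distinct_id="user_123", event="$ai_span",
                properties={"$ai_trace_id": trace_id,
                            "$ai_span_id": "span_1",
                            "$ai_parent_id": trace_id})

# 4. A generation as a child of that span.
posthog.capture(distinct_id="user_123", event="$ai_generation",
                properties={"$ai_trace_id": trace_id,
                            "$ai_span_id": "gen_1",
                            "$ai_parent_id": "span_1"})
```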
## Event properties
<TraceEvent />
@@ -9,7 +9,7 @@ Your first 100,000 `$ai_*` events each month are free – i.e. if you never
After this, we charge a small amount for each `$ai_*` event you send.

Go to the [pricing page](/pricing) to use our calculator to get an estimate. You can also view an estimate on [your billing page](https://us.posthog.com/organization/billing).
Go to the [pricing page](/pricing) to use our calculator to get an estimate. You can also view an estimate on [your billing page](https://app.posthog.com/organization/billing).

## Why can't I see any of my LLM events?
@@ -1,5 +1,5 @@
---
title: Tutorials and guides
title: More LLM analytics tutorials
sidebar: Docs
showTitle: true
---
BIN contents/images/docs/llms/openrouterai.png (new file, 5.9 KiB; binary not shown)
@@ -110,7 +110,7 @@ export const Step: React.FC<StepProps & { number?: number }> = ({
        <em>{subtitle}</em>
    </div>
)}
<div className="mt-4 mb-4 overflow-x-auto overflow-y-hidden pl-2">{children}</div>
<div className="mt-4 mb-4 overflow-x-hidden overflow-y-hidden pl-2">{children}</div>
</div>
</li>
)
@@ -4285,6 +4285,10 @@ export const docsMenu = {
    name: 'Vercel AI SDK',
    url: '/docs/llm-analytics/installation/vercel-ai',
},
{
    name: 'OpenRouter',
    url: '/docs/llm-analytics/installation/openrouter',
},
],
},
{
@@ -4304,13 +4308,6 @@ export const docsMenu = {
    color: 'seagreen',
    featured: true,
},
{
    name: 'Traces',
    url: '/docs/llm-analytics/traces',
    icon: 'IconUserPaths',
    color: 'orange',
    featured: true,
},
{
    name: 'Spans',
    url: '/docs/llm-analytics/spans',
@@ -4318,6 +4315,13 @@ export const docsMenu = {
    color: 'blue',
    featured: true,
},
{
    name: 'Traces',
    url: '/docs/llm-analytics/traces',
    icon: 'IconUserPaths',
    color: 'orange',
    featured: true,
},
{
    name: 'Embeddings',
    url: '/docs/llm-analytics/embeddings',
@@ -4367,13 +4371,6 @@ export const docsMenu = {
    color: 'green',
    featured: true,
},
{
    name: 'Tutorials and guides',
    url: '/docs/llm-analytics/tutorials',
    icon: 'IconGraduationCap',
    color: 'blue',
    featured: true,
},
{
    name: 'Resources',
},
@@ -4384,7 +4381,14 @@ export const docsMenu = {
    color: 'purple',
},
{
    name: 'Integrations',
    name: 'More tutorials',
    url: '/docs/llm-analytics/tutorials',
    icon: 'IconGraduationCap',
    color: 'blue',
    featured: true,
},
{
    name: 'Third-party integrations',
    url: '/docs/llm-analytics/integrations',
    icon: 'IconApps',
    featured: true,
@@ -1560,7 +1560,7 @@
},
{
    "source": "/docs/ai-engineering/traces-generations",
    "destination": "/docs/llm-analytics/traces-generations"
    "destination": "/docs/llm-analytics/generations"
},
{
    "source": "/docs/ai-engineering/dashboard",