# Capture LLM events manually
If you're using a different server-side SDK or prefer to use the API, you can manually capture the data by calling the `capture` method or using the capture API.

## Capture via API
1. Install
2. Initialize PostHog
3. Capture Event
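Putting the three steps together, here is a minimal sketch of capturing a `$ai_generation` event. The property names come from the tables below; the helper function, distinct ID, and token counts are illustrative, and the `posthog.capture` call is shown commented out because client initialization depends on your project key and host.

```python
import uuid

def build_generation_event(model, provider, messages, response_text,
                           input_tokens, output_tokens, latency_s,
                           trace_id=None):
    """Assemble the properties dict for a $ai_generation event.

    The property names match the documentation below; this helper
    itself is illustrative, not part of the PostHog SDK.
    """
    return {
        "$ai_trace_id": trace_id or str(uuid.uuid4()),
        "$ai_model": model,
        "$ai_provider": provider,
        "$ai_input": messages,
        "$ai_output_choices": [{"role": "assistant", "content": response_text}],
        "$ai_input_tokens": input_tokens,
        "$ai_output_tokens": output_tokens,
        "$ai_latency": latency_s,
    }

props = build_generation_event(
    model="gpt-5-mini",
    provider="openai",
    messages=[{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
    response_text="Hedgehogs have around 5,000 spines.",
    input_tokens=12,
    output_tokens=10,
    latency_s=0.8,
)

# With an initialized posthog-python client you would then send it, e.g.:
# posthog.capture(distinct_id="user_123", event="$ai_generation", properties=props)
```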
Event Properties
Each event type has specific properties. See the sections below for detailed property documentation for each event type.
A generation is a single call to an LLM.
Event name: `$ai_generation`

Core properties
| Property | Description |
|---|---|
| `$ai_trace_id` | The trace ID (a UUID to group AI events), like a `conversation_id`. Must contain only letters, numbers, and the special characters `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, `\|`. Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
| `$ai_session_id` | (Optional) Groups related traces together. Use this to organize traces by whatever grouping makes sense for your application (user sessions, workflows, conversations, or other logical boundaries). Example: `session-abc-123`, `conv-user-456` |
| `$ai_span_id` | (Optional) Unique identifier for this generation |
| `$ai_span_name` | (Optional) Name given to this generation. Example: `summarize_text` |
| `$ai_parent_id` | (Optional) Parent span ID for tree-view grouping |
| `$ai_model` | The model used. Example: `gpt-5-mini` |
| `$ai_provider` | The LLM provider. Example: `openai`, `anthropic`, `gemini` |
| `$ai_input` | List of messages sent to the LLM. Each message should have a `role` property set to one of `"user"`, `"system"`, or `"assistant"` |
| `$ai_input_tokens` | The number of tokens in the input (often found in `response.usage`) |
| `$ai_output_choices` | List of response choices from the LLM. Each choice should have a `role` property set to one of `"user"`, `"system"`, or `"assistant"` |
| `$ai_output_tokens` | The number of tokens in the output (often found in `response.usage`) |
| `$ai_latency` | (Optional) The latency of the LLM call in seconds |
| `$ai_http_status` | (Optional) The HTTP status code of the response |
| `$ai_base_url` | (Optional) The base URL of the LLM provider. Example: `https://api.openai.com/v1` |
| `$ai_request_url` | (Optional) The full URL of the request made to the LLM API. Example: `https://api.openai.com/v1/chat/completions` |
| `$ai_is_error` | (Optional) Boolean to indicate if the request was an error |
| `$ai_error` | (Optional) The error message or object |
Cost properties
Cost properties are optional, as PostHog can automatically calculate them from the model and token counts. If you want, you can provide your own cost properties or custom pricing instead.
Pre-calculated costs
| Property | Description |
|---|---|
| `$ai_input_cost_usd` | (Optional) The cost in USD of the input tokens |
| `$ai_output_cost_usd` | (Optional) The cost in USD of the output tokens |
| `$ai_request_cost_usd` | (Optional) The cost in USD for the requests |
| `$ai_web_search_cost_usd` | (Optional) The cost in USD for the web searches |
| `$ai_total_cost_usd` | (Optional) The total cost in USD (sum of all cost components) |
Custom pricing
| Property | Description |
|---|---|
| `$ai_input_token_price` | (Optional) Price per input token (used to calculate `$ai_input_cost_usd`) |
| `$ai_output_token_price` | (Optional) Price per output token (used to calculate `$ai_output_cost_usd`) |
| `$ai_cache_read_token_price` | (Optional) Price per cached token read |
| `$ai_cache_write_token_price` | (Optional) Price per cached token write |
| `$ai_request_price` | (Optional) Price per request |
| `$ai_request_count` | (Optional) Number of requests (defaults to 1 if `$ai_request_price` is set) |
| `$ai_web_search_price` | (Optional) Price per web search |
| `$ai_web_search_count` | (Optional) Number of web searches performed |
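As a sanity check on how custom pricing combines with token counts, here is a sketch of the arithmetic implied by the descriptions above (input cost = input tokens × input token price, and so on). The helper is illustrative and approximates, rather than reproduces, PostHog's server-side calculation:

```python
def estimate_costs(props):
    """Estimate cost properties from custom per-token/per-request prices.

    Illustrative: mirrors the relationships described in the tables above,
    not PostHog's exact server-side logic.
    """
    input_cost = props.get("$ai_input_tokens", 0) * props.get("$ai_input_token_price", 0)
    output_cost = props.get("$ai_output_tokens", 0) * props.get("$ai_output_token_price", 0)
    # $ai_request_count defaults to 1 when $ai_request_price is set
    request_cost = 0
    if "$ai_request_price" in props:
        request_cost = props["$ai_request_price"] * props.get("$ai_request_count", 1)
    return {
        "$ai_input_cost_usd": input_cost,
        "$ai_output_cost_usd": output_cost,
        "$ai_request_cost_usd": request_cost,
        "$ai_total_cost_usd": input_cost + output_cost + request_cost,
    }

costs = estimate_costs({
    "$ai_input_tokens": 1000,
    "$ai_output_tokens": 500,
    "$ai_input_token_price": 0.000001,   # $1 per 1M input tokens
    "$ai_output_token_price": 0.000002,  # $2 per 1M output tokens
})
```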
Cache properties
| Property | Description |
|---|---|
| `$ai_cache_read_input_tokens` | (Optional) Number of tokens read from cache |
| `$ai_cache_creation_input_tokens` | (Optional) Number of tokens written to cache (Anthropic-specific) |
Model parameters

| Property | Description |
|---|---|
| `$ai_temperature` | (Optional) Temperature parameter used in the LLM request |
| `$ai_stream` | (Optional) Whether the response was streamed |
| `$ai_max_tokens` | (Optional) Maximum tokens setting for the LLM response |
| `$ai_tools` | (Optional) Tools/functions available to the LLM |

A trace is a group that contains multiple spans, generations, and embeddings. Traces can be manually sent as events or appear as pseudo-events automatically created from child events.
Event name: `$ai_trace`

Core properties
| Property | Description |
|---|---|
| `$ai_trace_id` | The trace ID (a UUID to group related AI events together). Must contain only letters, numbers, and the special characters `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, `\|`. Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
| `$ai_session_id` | (Optional) Groups related traces together. Use this to organize traces by whatever grouping makes sense for your application (user sessions, workflows, conversations, or other logical boundaries). Example: `session-abc-123`, `conv-user-456` |
| `$ai_input_state` | The input of the whole trace (any JSON-serializable state) |
| `$ai_output_state` | The output of the whole trace (any JSON-serializable state) |
| `$ai_latency` | (Optional) The latency of the trace in seconds |
| `$ai_span_name` | (Optional) The name of the trace. Example: `chat_completion`, `rag_pipeline` |
| `$ai_is_error` | (Optional) Boolean to indicate if the trace encountered an error |
| `$ai_error` | (Optional) The error message or object if the trace failed |
Pseudo-trace Events

When you send generation (`$ai_generation`), span (`$ai_span`), or embedding (`$ai_embedding`) events with a `$ai_trace_id`, PostHog automatically creates a pseudo-trace event that appears in the dashboard as a parent grouping. These pseudo-traces:

- Are not actual events in your data
- Automatically aggregate metrics from child events (latency, tokens, costs)
- Provide a hierarchical view of your AI operations
- Do not require sending an explicit `$ai_trace` event

This means you can either:

- Send explicit `$ai_trace` events to control the trace metadata
- Let PostHog automatically create pseudo-traces from your generation/span events
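For example, two generations that share a `$ai_trace_id` are grouped under a pseudo-trace automatically, while an explicit `$ai_trace` event with the same ID lets you attach trace-level metadata. A sketch of the event payloads (the span names, states, and distinct ID are illustrative, and the capture calls are commented out):

```python
import uuid

trace_id = str(uuid.uuid4())

# Two child generations: sharing $ai_trace_id is enough for PostHog to
# group them under an automatically created pseudo-trace.
child_events = [
    ("$ai_generation", {"$ai_trace_id": trace_id, "$ai_span_name": "draft_answer"}),
    ("$ai_generation", {"$ai_trace_id": trace_id, "$ai_span_name": "refine_answer"}),
]

# Optional: an explicit $ai_trace event to control the trace metadata
# (name, input/output state) instead of relying on the pseudo-trace.
trace_event = ("$ai_trace", {
    "$ai_trace_id": trace_id,
    "$ai_span_name": "rag_pipeline",
    "$ai_input_state": {"question": "fun fact about hedgehogs"},
    "$ai_output_state": {"answer": "They have about 5,000 spines."},
})

# for event, props in child_events + [trace_event]:
#     posthog.capture(distinct_id="user_123", event=event, properties=props)
```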
A span is a single action within your application, such as a function call or vector database search.
Event name: `$ai_span`

Core properties
| Property | Description |
|---|---|
| `$ai_trace_id` | The trace ID (a UUID to group related AI events together). Must contain only letters, numbers, and the following characters: `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, `\|`. Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
| `$ai_session_id` | (Optional) Groups related traces together. Use this to organize traces by whatever grouping makes sense for your application (user sessions, workflows, conversations, or other logical boundaries). Example: `session-abc-123`, `conv-user-456` |
| `$ai_span_id` | (Optional) Unique identifier for this span. Example: `bdf42359-9364-4db7-8958-c001f28c9255` |
| `$ai_span_name` | (Optional) The name of the span. Example: `vector_search`, `data_retrieval`, `tool_call` |
| `$ai_parent_id` | (Optional) Parent ID for tree-view grouping (`trace_id` or another `span_id`). Example: `537b7988-0186-494f-a313-77a5a8f7db26` |
| `$ai_input_state` | The input state of the span (any JSON-serializable state) |
| `$ai_output_state` | The output state of the span (any JSON-serializable state) |
| `$ai_latency` | (Optional) The latency of the span in seconds. Example: `0.361` |
| `$ai_is_error` | (Optional) Boolean to indicate if the span encountered an error |
| `$ai_error` | (Optional) The error message or object if the span failed |
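A span event can be captured the same way as a generation. This sketch times a hypothetical retrieval step and fills in the span properties above (the helper, the function being timed, and the IDs are all illustrative):

```python
import time
import uuid

def timed_span(trace_id, name, fn, *args, parent_id=None):
    """Run fn, then build a $ai_span properties dict recording its latency.

    Illustrative helper; the property names match the table above.
    """
    start = time.monotonic()
    result = fn(*args)
    latency = time.monotonic() - start
    props = {
        "$ai_trace_id": trace_id,
        "$ai_span_id": str(uuid.uuid4()),
        "$ai_span_name": name,
        "$ai_parent_id": parent_id or trace_id,
        "$ai_input_state": {"args": list(args)},
        "$ai_output_state": {"result": result},
        "$ai_latency": latency,
    }
    # posthog.capture(distinct_id="user_123", event="$ai_span", properties=props)
    return props

props = timed_span("trace-123", "vector_search", lambda q: [q.upper()], "hedgehogs")
```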
An embedding is a single call to an embedding model to convert text into a vector representation.
Event name: `$ai_embedding`

Core properties
| Property | Description |
|---|---|
| `$ai_trace_id` | The trace ID (a UUID to group related AI events together). Must contain only letters, numbers, and the special characters `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, `\|`. Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
| `$ai_session_id` | (Optional) Groups related traces together. Use this to organize traces by whatever grouping makes sense for your application (user sessions, workflows, conversations, or other logical boundaries). Example: `session-abc-123`, `conv-user-456` |
| `$ai_span_id` | (Optional) Unique identifier for this embedding operation |
| `$ai_span_name` | (Optional) Name given to this embedding operation. Example: `embed_user_query`, `index_document` |
| `$ai_parent_id` | (Optional) Parent span ID for tree-view grouping |
| `$ai_model` | The embedding model used. Example: `text-embedding-3-small`, `text-embedding-ada-002` |
| `$ai_provider` | The LLM provider. Example: `openai`, `cohere`, `voyage` |
| `$ai_input` | The text to embed. Example: `"Tell me a fun fact about hedgehogs"`, or an array of strings for batch embeddings |
| `$ai_input_tokens` | The number of tokens in the input |
| `$ai_latency` | (Optional) The latency of the embedding call in seconds |
| `$ai_http_status` | (Optional) The HTTP status code of the response |
| `$ai_base_url` | (Optional) The base URL of the LLM provider. Example: `https://api.openai.com/v1` |
| `$ai_request_url` | (Optional) The full URL of the request made to the embedding API. Example: `https://api.openai.com/v1/embeddings` |
| `$ai_is_error` | (Optional) Boolean to indicate if the request was an error |
| `$ai_error` | (Optional) The error message or object if the embedding failed |
Cost properties
Cost properties are optional, as PostHog can automatically calculate them from the model and token counts. If you want, you can provide your own cost properties instead.
| Property | Description |
|---|---|
| `$ai_input_cost_usd` | (Optional) Cost in USD for input tokens |
| `$ai_output_cost_usd` | (Optional) Cost in USD for output tokens (usually 0 for embeddings) |
| `$ai_total_cost_usd` | (Optional) Total cost in USD |
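An embedding event follows the same pattern as a generation, with `$ai_input` holding the text (or array of texts) that was embedded. A minimal sketch; the helper, default model, provider, and trace ID are illustrative:

```python
def build_embedding_event(text, model="text-embedding-3-small",
                          input_tokens=None, trace_id="trace-123"):
    """Assemble the properties dict for a $ai_embedding event.

    Property names match the table above; the helper itself and its
    defaults are illustrative, not part of the PostHog SDK.
    """
    return {
        "$ai_trace_id": trace_id,
        "$ai_model": model,
        "$ai_provider": "openai",
        "$ai_input": text,
        "$ai_input_tokens": input_tokens,
    }

props = build_embedding_event("Tell me a fun fact about hedgehogs", input_tokens=8)
# posthog.capture(distinct_id="user_123", event="$ai_embedding", properties=props)
```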