SpanAttributes
Contextual attributes capturing operation details, LLM configuration, usage metrics, and conversation data
Properties
AI agent ARN
Entity that invoked the AI agent
AI agent name
AI agent orchestrator use case
AI agent type
AI agent version number
Number of input tokens retrieved from cache for this request
Number of input tokens written to cache for this request
Amazon Connect contact identifier
Input message collection sent to LLM
Amazon Connect instance ARN
Action being performed
Output message collection received from LLM
AI prompt name
AI prompt type
AI prompt version number
Model provider identifier (e.g., aws.bedrock)
Maximum tokens configured for generation
LLM model ID for request (e.g., anthropic.claude-3-sonnet)
Generation termination reasons (e.g., stop, max_tokens)
Actual model used for response (usually matches requestModel)
Session name
System prompt instructions
Sampling temperature for generation
Number of input tokens in prompt
Number of output tokens in response
Total tokens consumed (input + output)
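Taken together, these properties describe a flat set of key/value attributes attached to each span: agent identity, Amazon Connect context, prompt and model configuration, message payloads, and token usage. The sketch below is a hypothetical TypeScript shape showing one way those attributes could be grouped; every property name is an assumption inferred from the descriptions above (loosely following OpenTelemetry GenAI naming) and is not taken from the SDK itself.

// Hypothetical sketch only: property names are inferred from the
// descriptions in this section and may not match the SDK's actual fields.
interface SpanAttributes {
  // AI agent identity and invocation context
  aiAgentArn?: string;            // AI agent ARN
  aiAgentName?: string;           // AI agent name
  aiAgentType?: string;           // AI agent type
  aiAgentVersion?: number;        // AI agent version number
  aiAgentUseCase?: string;        // AI agent orchestrator use case
  invokedBy?: string;             // Entity that invoked the AI agent

  // Amazon Connect context
  connectContactId?: string;      // Amazon Connect contact identifier
  connectInstanceArn?: string;    // Amazon Connect instance ARN

  // Prompt configuration
  aiPromptName?: string;          // AI prompt name
  aiPromptType?: string;          // AI prompt type
  aiPromptVersion?: number;       // AI prompt version number
  systemPrompt?: string;          // System prompt instructions

  // LLM request/response details
  operationName?: string;         // Action being performed
  providerName?: string;          // Model provider identifier (e.g., "aws.bedrock")
  requestModel?: string;          // LLM model ID for the request
  responseModel?: string;         // Actual model used (usually matches requestModel)
  inputMessages?: unknown[];      // Input message collection sent to the LLM
  outputMessages?: unknown[];     // Output message collection received from the LLM
  finishReasons?: string[];       // Generation termination reasons (e.g., "stop")
  maxTokens?: number;             // Maximum tokens configured for generation
  temperature?: number;           // Sampling temperature for generation
  sessionName?: string;           // Session name

  // Token usage metrics
  inputTokens?: number;           // Input tokens in the prompt
  outputTokens?: number;          // Output tokens in the response
  totalTokens?: number;           // Total tokens consumed (input + output)
  cacheReadInputTokens?: number;  // Input tokens retrieved from cache
  cacheWriteInputTokens?: number; // Input tokens written to cache
}

If these attributes are emitted onto an OpenTelemetry span, they would typically be recorded as flat string and numeric key/value pairs rather than a nested object; the grouping above is only for readability.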