ModelInvocationInput
The input for the pre-processing step.
- The type matches the agent step.
- The text contains the prompt.
- The inferenceConfiguration, parserMode, and overrideLambda values are set in the PromptOverrideConfiguration object that was set when the agent was created or updated.
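As an illustrative sketch only, a ModelInvocationInput surfaced in a pre-processing trace might look like the following Python dictionary. All values are hypothetical, and the field names inside inferenceConfiguration follow the InferenceConfiguration object documented separately.

# A hypothetical ModelInvocationInput as it might appear in a trace event.
# All values are illustrative; real traces are produced by the service.
model_invocation_input = {
    "traceId": "ab12-pre-0",                   # unique trace identifier (2-16 characters)
    "type": "PRE_PROCESSING",                  # the step in the agent sequence
    "text": "<full prompt sent to the foundation model>",
    "promptCreationMode": "DEFAULT",           # OVERRIDDEN if a basePromptTemplate was supplied
    "parserMode": "DEFAULT",                   # OVERRIDDEN if a custom parser Lambda is used
    "overrideLambda": "arn:aws:lambda:us-east-1:111122223333:function:my-parser",  # example ARN
    "inferenceConfiguration": {
        "temperature": 0.0,
        "topP": 1.0,
        "topK": 250,
        "maximumLength": 2048,
        "stopSequences": ["\n\nHuman:"],
    },
}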
Contents
- inferenceConfiguration
Specifications about the inference parameters that were provided alongside the prompt. These are specified in the PromptOverrideConfiguration object that was set when the agent was created or updated. For more information, see Inference parameters for foundation models.
Type: InferenceConfiguration object
Required: No
- overrideLambda
The ARN of the Lambda function to use when parsing the raw foundation model output in parts of the agent sequence.
Type: String
Required: No
- parserMode
Specifies whether to override the default parser Lambda function when parsing the raw foundation model output in the part of the agent sequence defined by the promptType.
Type: String
Valid Values:
DEFAULT | OVERRIDDEN
Required: No
- promptCreationMode
Specifies whether the default prompt template was OVERRIDDEN. If it was, the basePromptTemplate that was set in the PromptOverrideConfiguration object when the agent was created or updated is used instead.
Type: String
Valid Values:
DEFAULT | OVERRIDDEN
Required: No
- text
The text that prompted the agent at this step.
Type: String
Required: No
- traceId
The unique identifier of the trace.
Type: String
Length Constraints: Minimum length of 2. Maximum length of 16.
Required: No
- type
The step in the agent sequence.
Type: String
Valid Values:
PRE_PROCESSING | ORCHESTRATION | KNOWLEDGE_BASE_RESPONSE_GENERATION | POST_PROCESSING | ROUTING_CLASSIFIER
Required: No
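For context on where this object appears at runtime, the following minimal Python (boto3) sketch invokes an agent with tracing enabled and prints any modelInvocationInput found in pre-processing trace events. The agent ID, alias ID, session ID, and input text are placeholders, and the exact trace event nesting is an assumption to confirm against the Agents for Amazon Bedrock Runtime API reference.

import boto3

# Minimal sketch: stream InvokeAgent events and surface ModelInvocationInput from the trace.
# "AGENT_ID", "ALIAS_ID", and "session-1" are placeholders.
client = boto3.client("bedrock-agent-runtime")

response = client.invoke_agent(
    agentId="AGENT_ID",
    agentAliasId="ALIAS_ID",
    sessionId="session-1",
    inputText="Hello",
    enableTrace=True,  # emit trace events alongside the completion chunks
)

for event in response["completion"]:               # event stream of chunks and trace parts
    trace_part = event.get("trace", {}).get("trace", {})
    pre = trace_part.get("preProcessingTrace", {})
    if "modelInvocationInput" in pre:
        mii = pre["modelInvocationInput"]
        print(mii.get("type"), mii.get("traceId"))
        print(mii.get("text", "")[:200])           # first part of the prompt sent to the model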
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: