Use an inference profile in model invocation
You can use a cross-region inference profile in place of a foundation model to route requests to multiple Regions. To track costs and usage for a model in one or more Regions, you can use an application inference profile. To learn how to use an inference profile when running model inference, choose the tab for your preferred method, and then follow the steps:
- Console
In the console, the only inference profile you can use is the US Anthropic Claude 3 Opus inference profile in the US East (N. Virginia) Region.
To use this inference profile, switch to the US East (N. Virginia) Region. Then do one of the following, selecting the Anthropic Claude 3 Opus model and Cross region inference as the Throughput when you reach the step to select a model:
- To use the inference profile in the text generation playground, follow the steps at Generate responses in the console using playgrounds.
- To use the inference profile in model evaluation, follow the console steps at Starting an automatic model evaluation job in Amazon Bedrock.
- API
You can use an inference profile when running inference from any Region that is included in it with the following API operations. Illustrative boto3 sketches for each operation appear after this list:
- InvokeModel or InvokeModelWithResponseStream – To use an inference profile in model invocation, follow the steps at Submit a single prompt with InvokeModel and specify the Amazon Resource Name (ARN) of the inference profile in the `modelId` field. For an example, see Use an inference profile in model invocation.
- Converse or ConverseStream – To use an inference profile in model invocation with the Converse API, follow the steps at Carry out a conversation with the Converse API operations and specify the ARN of the inference profile in the `modelId` field. For an example, see Use an inference profile in a conversation.
- RetrieveAndGenerate – To use an inference profile when generating responses from the results of querying a knowledge base, follow the steps in the API tab in Test your knowledge base with queries and responses and specify the ARN of the inference profile in the `modelArn` field. For more information, see Use an inference profile to generate a response.
- CreateEvaluationJob – To submit an inference profile for model evaluation, follow the steps in the API tab in Starting an automatic model evaluation job in Amazon Bedrock and specify the ARN of the inference profile in the `modelIdentifier` field.
- CreatePrompt – To use an inference profile when generating a response for a prompt you create in Prompt management, follow the steps in the API tab in Create a prompt using Prompt management and specify the ARN of the inference profile in the `modelId` field.
- CreateFlow – To use an inference profile when generating a response for an inline prompt that you define within a prompt node in a flow, follow the steps in the API tab in Create a flow in Amazon Bedrock. When defining the prompt node, specify the ARN of the inference profile in the `modelId` field.
- CreateDataSource – To use an inference profile when parsing non-textual information in a data source, follow the steps in the API section in Parsing options for your data source and specify the ARN of the inference profile in the `modelArn` field.
Note
If you're using a cross-region (system-defined) inference profile, you can use either the ARN or the ID of the inference profile.
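The following is a minimal sketch of an InvokeModel request that references a cross-region inference profile, assuming the boto3 SDK, the us-east-1 Region, and the US Anthropic Claude 3 Opus profile; the prompt and token limit are placeholders.

```python
import json
import boto3

# A system-defined profile can be referenced by ID (shown here) or by ARN.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Hello!"}]}
    ],
})

response = client.invoke_model(
    modelId="us.anthropic.claude-3-opus-20240229-v1:0",  # inference profile ID or ARN
    contentType="application/json",
    body=body,
)
print(json.loads(response["body"].read())["content"][0]["text"])
```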
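A similar sketch for the Converse API, again assuming boto3, us-east-1, and the same profile; because Converse uses a model-agnostic request shape, only the `modelId` value changes when you switch profiles.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="us.anthropic.claude-3-opus-20240229-v1:0",  # inference profile ID or ARN
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}],
    inferenceConfig={"maxTokens": 256},
)
print(response["output"]["message"]["content"][0]["text"])
```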
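For RetrieveAndGenerate, the profile ARN goes in the `modelArn` field of the knowledge base configuration. This sketch assumes a hypothetical knowledge base ID and a placeholder account ID in the ARN.

```python
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What is our refund policy?"},  # placeholder query
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBID0EXAMPLE",  # hypothetical knowledge base ID
            # Inference profile ARN in the modelArn field
            "modelArn": ("arn:aws:bedrock:us-east-1:111122223333:"
                         "inference-profile/us.anthropic.claude-3-opus-20240229-v1:0"),
        },
    },
)
print(response["output"]["text"])
```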
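A sketch of CreateEvaluationJob with the profile ARN in the `modelIdentifier` field. The job name, IAM role, built-in dataset, metric, and S3 output location are all placeholder assumptions; substitute the configuration your evaluation actually needs.

```python
import boto3

client = boto3.client("bedrock", region_name="us-east-1")

profile_arn = ("arn:aws:bedrock:us-east-1:111122223333:"
               "inference-profile/us.anthropic.claude-3-opus-20240229-v1:0")

response = client.create_evaluation_job(
    jobName="my-profile-eval-job",                         # hypothetical job name
    roleArn="arn:aws:iam::111122223333:role/MyEvalRole",   # placeholder IAM role
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [{
                "taskType": "Generation",
                "dataset": {"name": "Builtin.Bold"},       # placeholder built-in dataset
                "metricNames": ["Builtin.Toxicity"],       # placeholder metric
            }]
        }
    },
    inferenceConfig={
        # Inference profile ARN in the modelIdentifier field
        "models": [{"bedrockModel": {"modelIdentifier": profile_arn}}]
    },
    outputDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/eval-results/"},
)
print(response["jobArn"])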
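For CreatePrompt, each variant carries its own `modelId`, which can be an inference profile ARN. The prompt name, template text, and variable are hypothetical.

```python
import boto3

client = boto3.client("bedrock-agent", region_name="us-east-1")

profile_arn = ("arn:aws:bedrock:us-east-1:111122223333:"
               "inference-profile/us.anthropic.claude-3-opus-20240229-v1:0")

response = client.create_prompt(
    name="my-summarizer-prompt",  # hypothetical prompt name
    defaultVariant="variant-1",
    variants=[{
        "name": "variant-1",
        "templateType": "TEXT",
        "templateConfiguration": {
            "text": {
                "text": "Summarize the following document: {{document}}",
                "inputVariables": [{"name": "document"}],
            }
        },
        "modelId": profile_arn,  # inference profile ARN in the modelId field
        "inferenceConfiguration": {"text": {"maxTokens": 512}},
    }],
)
print(response["id"])
```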
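A complete CreateFlow request also needs input and output nodes, connections, a flow name, and an execution role; the sketch below shows only the shape of a prompt node whose inline prompt references an inference profile through `modelId`. All names and expressions are hypothetical.

```python
# Sketch of one prompt node for a flow definition (assumption: the surrounding
# create_flow call supplies name, executionRoleArn, and the rest of the
# definition). The inference profile ARN goes in the modelId field.
prompt_node = {
    "name": "SummarizeNode",  # hypothetical node name
    "type": "Prompt",
    "configuration": {
        "prompt": {
            "sourceConfiguration": {
                "inline": {
                    "modelId": ("arn:aws:bedrock:us-east-1:111122223333:"
                                "inference-profile/"
                                "us.anthropic.claude-3-opus-20240229-v1:0"),
                    "templateType": "TEXT",
                    "templateConfiguration": {
                        "text": {"text": "Summarize: {{input}}"}
                    },
                }
            }
        }
    },
    "inputs": [{"name": "input", "type": "String", "expression": "$.data"}],
    "outputs": [{"name": "modelCompletion", "type": "String"}],
}
```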
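Finally, a sketch of CreateDataSource that enables foundation-model parsing with the profile ARN in the `modelArn` field. The knowledge base ID, data source name, and S3 bucket are placeholders.

```python
import boto3

client = boto3.client("bedrock-agent", region_name="us-east-1")

response = client.create_data_source(
    knowledgeBaseId="KBID0EXAMPLE",   # hypothetical knowledge base ID
    name="my-parsed-source",          # hypothetical data source name
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::amzn-s3-demo-bucket"},
    },
    vectorIngestionConfiguration={
        "parsingConfiguration": {
            "parsingStrategy": "BEDROCK_FOUNDATION_MODEL",
            "bedrockFoundationModelConfiguration": {
                # Inference profile ARN in the modelArn field
                "modelArn": ("arn:aws:bedrock:us-east-1:111122223333:"
                             "inference-profile/"
                             "us.anthropic.claude-3-opus-20240229-v1:0"),
            },
        }
    },
)
print(response["dataSource"]["dataSourceId"])
```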