When running model inference in on-demand mode, your requests might be restricted by service quotas or during peak usage times. Cross-region inference enables you to seamlessly manage unplanned traffic bursts by utilizing compute across different AWS Regions. With cross-region inference, you can distribute traffic across multiple AWS Regions, enabling higher throughput.
You can also increase throughput for a model by purchasing Provisioned Throughput. Inference profiles currently don't support Provisioned Throughput.
To see the Regions and models with which you can use inference profiles to run cross-region inference, refer to Supported Regions and models for inference profiles.
Cross-region (system-defined) inference profiles are named after the model that they support and defined by the Regions that they support. To understand how a cross-region inference profile handles your requests, review the following definitions:
- Source Region – The Region from which you make the API request that specifies the inference profile.
- Destination Region – A Region to which the Amazon Bedrock service can route the request from your source Region.
You invoke a cross-region inference profile from a source Region and the Amazon Bedrock service routes your request to any of the destination Regions defined in the inference profile.
Note
Some inference profiles route to different destination Regions depending on the source Region from which you call them. For example, if you call us.anthropic.claude-3-haiku-20240307-v1:0 from US East (Ohio), it can route requests to us-east-1, us-east-2, or us-west-2, but if you call it from US West (Oregon), it can route requests only to us-east-1 and us-west-2.
To check the source and destination Regions for an inference profile, you can do one of the following:
- Expand the corresponding section in the list of supported cross-region inference profiles.
- Send a GetInferenceProfile request with an Amazon Bedrock control plane endpoint from a source Region and specify the Amazon Resource Name (ARN) or ID of the inference profile in the inferenceProfileIdentifier field. The models field in the response maps to a list of model ARNs, from which you can identify each destination Region.
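For example, the following is a minimal sketch, using the AWS SDK for Python (Boto3), of retrieving an inference profile and listing its destination Regions. The profile ID and source Region shown are illustrative; substitute your own values.

```python
import boto3

# Illustrative values; substitute the inference profile and source Region you use.
PROFILE_ID = "us.anthropic.claude-3-haiku-20240307-v1:0"
SOURCE_REGION = "us-east-1"

# GetInferenceProfile is a control plane operation, so use the "bedrock"
# client (not "bedrock-runtime"). The region_name is your source Region.
bedrock = boto3.client("bedrock", region_name=SOURCE_REGION)

response = bedrock.get_inference_profile(
    inferenceProfileIdentifier=PROFILE_ID
)

# Each entry in the "models" field carries a model ARN of the form
# arn:aws:bedrock:REGION::foundation-model/MODEL_ID, so the destination
# Region is the fourth colon-delimited field.
for model in response["models"]:
    arn = model["modelArn"]
    print(arn.split(":")[3], arn)
```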
Note
Inference profiles are immutable, meaning that we don't add new Regions to an existing inference profile. However, we might create new inference profiles that incorporate new Regions. You can update your systems to use these inference profiles by changing the IDs in your setup to the new ones.
Note the following information about cross-region inference:
- There's no additional routing cost for using cross-region inference. The price is calculated based on the Region from which you call an inference profile. For information about pricing, see Amazon Bedrock pricing.
- When using cross-region inference, your throughput can reach up to double the default quotas in the Region that the inference profile is in. The increase in throughput applies only to invocations performed through inference profiles; the regular quota still applies if you opt for in-Region model invocation requests. For example, if you invoke the US Anthropic Claude 3 Sonnet inference profile in us-east-1, your throughput can reach up to 1,000 requests per minute and 2,000,000 tokens per minute. To see the default quotas for on-demand throughput, refer to the Runtime quotas section in Quotas for Amazon Bedrock or use the Service Quotas console.
- Cross-region inference requests are kept within the Regions that are part of the inference profile that was used. For example, a request made with an EU inference profile is kept within EU Regions.
Use a cross-region (system-defined) inference profile
To use cross-region inference, you include an inference profile when running model inference in the following ways:
- On-demand model inference – Specify the ID of the inference profile as the modelId when sending an InvokeModel, InvokeModelWithResponseStream, Converse, or ConverseStream request. An inference profile defines one or more Regions to which it can route inference requests originating from your source Region. Cross-region inference increases throughput and performance by dynamically routing model invocation requests across the Regions defined in the inference profile; routing factors in user traffic, demand, and resource utilization. For more information, see Submit prompts and generate responses with model inference. (A minimal Converse sketch follows this list.)
- Batch inference – Submit requests asynchronously with batch inference by specifying the ID of the inference profile as the modelId when sending a CreateModelInvocationJob request. Using an inference profile lets you utilize compute across multiple AWS Regions and achieve faster processing times for your batch jobs. After the job is complete, you can retrieve the output files from the Amazon S3 bucket in the source Region. (A minimal batch sketch also follows this list.)
- Knowledge base response generation – You can use cross-region inference when generating a response after querying a knowledge base. For more information, see Test your knowledge base with queries and responses.
- Model evaluation – You can submit an inference profile as a model to evaluate when submitting a model evaluation job. For more information, see Evaluate the performance of Amazon Bedrock resources.
- Prompt management – You can use cross-region inference when generating a response for a prompt you created in Prompt management. For more information, see Construct and store reusable prompts with Prompt management in Amazon Bedrock.
- Prompt flows – You can use cross-region inference when generating a response for a prompt you define inline in a prompt node in a prompt flow. For more information, see Build an end-to-end generative AI workflow with Amazon Bedrock Flows.
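The following is a minimal sketch of on-demand invocation with the Converse API through Boto3. The profile ID and source Region are illustrative; the point is that an inference profile ID is passed wherever a foundation model ID would normally go.

```python
import boto3

# Illustrative values; substitute your own profile ID and source Region.
PROFILE_ID = "us.anthropic.claude-3-haiku-20240307-v1:0"
SOURCE_REGION = "us-east-1"

# Model invocation is a runtime operation, so use the "bedrock-runtime" client.
runtime = boto3.client("bedrock-runtime", region_name=SOURCE_REGION)

# Pass the inference profile ID as the modelId; Amazon Bedrock routes the
# request to one of the profile's destination Regions.
response = runtime.converse(
    modelId=PROFILE_ID,
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize cross-region inference in one sentence."}],
        }
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```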
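And here is a minimal sketch of submitting a batch inference job with an inference profile as the modelId, assuming placeholder values for the job name, IAM role, and Amazon S3 locations.

```python
import boto3

# Batch jobs are created through the control plane ("bedrock") client.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# All names below (job name, role ARN, bucket URIs) are placeholders.
response = bedrock.create_model_invocation_job(
    jobName="my-cross-region-batch-job",
    roleArn="arn:aws:iam::123456789012:role/MyBatchInferenceRole",
    modelId="us.anthropic.claude-3-haiku-20240307-v1:0",  # inference profile ID
    inputDataConfig={
        "s3InputDataConfig": {"s3Uri": "s3://amzn-s3-demo-bucket/input/"}
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://amzn-s3-demo-bucket/output/"}
    },
)

# When the job completes, the output files land in the S3 bucket in the
# source Region, as described above.
print(response["jobArn"])
```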
To learn how to use an inference profile to send model invocation requests across Regions, see Use an inference profile in model invocation.
To learn more about cross-region inference, see Getting started with cross-region inference in Amazon Bedrock.