Amazon Nova multimodal understanding models are available for inference through the Invoke API (InvokeModel, InvokeModelWithResponseStream) and the Converse API (Converse and ConverseStream). To create conversational applications, see Carry out a conversation with the Converse API operations. Both API methods (Invoke and Converse) follow a very similar request pattern. For more information on the API schema and for Python code examples, see How to Invoke Amazon Nova Understanding Models.
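For example, a single-turn Converse request with the Python SDK might look like the following sketch. The model ID and prompt text are placeholders; confirm the exact model ID for your Region in Supported foundation models in Amazon Bedrock.

```python
# Minimal sketch: a single-turn Converse request to an Amazon Nova model.
# The model ID and prompt are placeholders, not a definitive configuration.
import boto3

client = boto3.client("bedrock-runtime")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # placeholder Nova model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize the benefits of multimodal models."}]}
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```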
Important
The timeout period for inference calls to Amazon Nova is 60 minutes. By default, AWS SDK clients time out after 1 minute. We recommend that you increase the read timeout period of your AWS SDK client to at least 60 minutes. For example, in the AWS Python botocore SDK, change the value of the read_timeout field in botocore.config.Config.
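A minimal sketch of raising the read timeout with botocore follows. The 3,600-second value matches the 60-minute inference timeout noted above; the Region name is a placeholder.

```python
# Sketch: increase the botocore read timeout so long-running Nova inference
# calls are not cut off by the SDK's 60-second default.
import boto3
from botocore.config import Config

config = Config(
    read_timeout=3600,   # seconds; raised from the 60-second default
    connect_timeout=60,
)

client = boto3.client("bedrock-runtime", region_name="us-east-1", config=config)
```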
The default inference parameters can be found in the Complete request schema section of the Amazon Nova User Guide.
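To override a default, you can pass an inferenceConfig object on the request. A minimal sketch with the Converse API follows; the parameter values and model ID are illustrative, not the documented defaults.

```python
# Sketch: overriding default inference parameters on a Converse request.
# The values below are illustrative; see the Complete request schema section
# for the documented defaults and valid ranges.
import boto3

client = boto3.client("bedrock-runtime")

response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # placeholder Nova model ID
    messages=[{"role": "user", "content": [{"text": "Write a haiku about rivers."}]}],
    inferenceConfig={
        "maxTokens": 512,     # maximum number of tokens to generate
        "temperature": 0.7,   # sampling temperature
        "topP": 0.9,          # nucleus sampling cutoff
    },
)
```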
To find the model ID for Amazon Nova models, see Supported foundation models in Amazon Bedrock. To check if a feature is supported for Amazon Nova models, see Supported models and model features. For more code examples, see Code examples for Amazon Bedrock using AWS SDKs.
Foundation models in Amazon Bedrock support different input and output modalities, which vary by model. To check the modalities that Amazon Nova models support, see Modality Support. To check which Amazon Bedrock features the Amazon Nova models support, and the AWS Regions in which they are available, see Supported foundation models in Amazon Bedrock.
When you make inference calls with Amazon Nova models, you must include a prompt for the model. For general information about creating prompts for the models that Amazon Bedrock supports, see Prompt engineering concepts. For Amazon Nova-specific prompt guidance, see the Amazon Nova prompt engineering guide.