Mistral AI models
This section describes the request parameters and response fields for Mistral AI models. Use this information to make inference calls to Mistral AI models with the InvokeModel and InvokeModelWithResponseStream (streaming) operations. This section also includes Python code examples that show how to call Mistral AI models. To use a model in an inference operation, you need the model ID for the model. To get the model ID, see Supported foundation models in Amazon Bedrock. Some models also work with the Converse API. To check whether the Converse API supports a specific Mistral AI model, see Supported models and model features. For more code examples, see Code examples for Amazon Bedrock using AWS SDKs.
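As a minimal sketch of a non-streaming call, the following example uses the AWS SDK for Python (Boto3) to invoke a Mistral AI model. The Region, model ID, prompt text, and inference parameter values shown here are illustrative assumptions; substitute values that apply to your account.

```python
import json

import boto3

# Create an Amazon Bedrock Runtime client. The Region is an assumption;
# use a Region where the model is available to you.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Mistral AI instruct models expect the prompt wrapped in [INST] ... [/INST] tags.
body = json.dumps({
    "prompt": "<s>[INST] Explain what Amazon Bedrock is in one sentence. [/INST]",
    "max_tokens": 512,
    "temperature": 0.5,
})

# Invoke the model. The model ID is an assumption; get the actual ID from
# Supported foundation models in Amazon Bedrock.
response = client.invoke_model(
    modelId="mistral.mistral-7b-instruct-v0:2",
    body=body,
)

# The response body is a JSON document with an "outputs" list of generations.
result = json.loads(response["body"].read())
print(result["outputs"][0]["text"])
```

A streaming call with `invoke_model_with_response_stream` accepts the same request body and returns the generated text incrementally as chunk events instead of a single response body.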
Foundation models in Amazon Bedrock support input and output modalities, which vary from model to model. To check the modalities that Mistral AI models support, the Amazon Bedrock features they work with, and the AWS Regions in which they are available, see Supported foundation models in Amazon Bedrock.
When you make inference calls with Mistral AI models, you include a prompt for the model. For general information about creating prompts for the models that Amazon Bedrock supports, see Prompt engineering concepts. For Mistral AI-specific prompt information, see the Mistral AI prompt engineering guide.
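As a brief illustration of the instruct prompt template described in that guide, the sketch below assembles a multi-turn conversation into a single prompt string. The helper name `build_mistral_prompt` is hypothetical; the `<s>`, `</s>`, `[INST]`, and `[/INST]` tokens follow Mistral AI's published instruct format.

```python
# A minimal sketch of the Mistral instruct prompt template. The helper name
# is hypothetical; the tag layout follows Mistral AI's instruct format, in
# which each completed turn is "[INST] user [/INST] assistant</s>".
def build_mistral_prompt(turns: list[tuple[str, str]]) -> str:
    """Assemble (user, assistant) turns into one instruct-formatted prompt."""
    prompt = "<s>"
    for user_text, assistant_text in turns:
        prompt += f"[INST] {user_text} [/INST] {assistant_text}</s>"
    return prompt

# The final user turn has no assistant reply yet, so append it open-ended.
history = [
    ("What is Amazon Bedrock?",
     "Amazon Bedrock is a managed service for foundation models."),
]
prompt = build_mistral_prompt(history) + "[INST] Which Mistral AI models does it offer? [/INST]"
print(prompt)
```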