Inference request parameters and response fields for foundation models
The topics in this section describe the request parameters and response fields for the models that Amazon Bedrock supplies. When you make inference calls to a model with the model invocation API operations (InvokeModel, InvokeModelWithResponseStream, Converse, and ConverseStream), the request parameters you include depend on the model that you're using.
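The difference between the two styles is that Converse and ConverseStream accept a model-agnostic message format, while InvokeModel takes a provider-specific request body. A minimal sketch of a Converse call using boto3 (the model ID, Region, and prompt are placeholders; `maxTokens` and `temperature` are inference parameters set through `inferenceConfig`):

```python
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build the keyword arguments for a Converse call (model-agnostic shape)."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
    }

def converse_text(model_id: str, prompt: str) -> str:
    """Send the prompt and return the first text block of the model's reply."""
    import boto3  # requires AWS credentials with permission to invoke the model
    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(model_id, prompt))
    return response["output"]["message"]["content"][0]["text"]
```

Because the request shape is the same for every Converse-supported model, only the `modelId` (and any model-specific fields you opt into) changes between providers.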
If you created a custom model, use the same inference parameters as the foundation model from which it was customized.
Before reviewing the parameters for a specific model, familiarize yourself with model inference by reading the following chapter: Submit prompts and generate responses with model inference.
Refer to the following pages for more information about different models in Amazon Bedrock:
- For a table of models and their IDs to use with the model invocation API operations, the Regions they're supported in, and the general features that they support, see Supported foundation models in Amazon Bedrock.
- For a table of the Amazon Bedrock Regions that each model is supported in, see Model support by AWS Region in Amazon Bedrock.
- For a table of the Amazon Bedrock features that each model supports, see Model support by feature.
- To check if the Converse API (Converse and ConverseStream) supports a specific model, see Supported models and model features.
- When you make inference calls to a model, you include a prompt for the model. For general information about creating prompts for the models that Amazon Bedrock supports, see Prompt engineering concepts.
- For code examples, see Code examples for Amazon Bedrock using AWS SDKs.
Select a topic to learn about models for that provider and their parameters.
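To illustrate why the provider topics matter for InvokeModel, here is a hedged sketch of building provider-specific request bodies. The field names below are illustrative examples of the schemas documented in each provider's topic, not an exhaustive reference, and the model-ID prefixes are assumptions for the sketch:

```python
import json

def invoke_model_body(model_id: str, prompt: str) -> str:
    """Return a serialized InvokeModel body; its shape depends on the provider."""
    if model_id.startswith("anthropic."):
        # Example of an Anthropic-style messages body.
        body = {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }
    elif model_id.startswith("amazon.titan-text"):
        # Example of an Amazon Titan text-generation body.
        body = {
            "inputText": prompt,
            "textGenerationConfig": {"maxTokenCount": 512},
        }
    else:
        raise ValueError(f"No example body for {model_id}")
    return json.dumps(body)

# The serialized body is then passed to InvokeModel, e.g.:
# client.invoke_model(modelId=model_id, body=invoke_model_body(model_id, "Hello"))
```

The provider topics in this section define the authoritative field names, value ranges, and response fields for each model family.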