Amazon Bedrock foundation model information
A foundation model is an artificial intelligence model with a large number of parameters, trained on a massive amount of diverse data. A foundation model can generate a variety of responses for a wide range of use cases. Foundation models can generate text or images, and can also convert input into embeddings. This section provides information about the foundation models (FMs) that you can use in Amazon Bedrock, such as the features that each model supports and the AWS Regions in which each model is available. For information about the foundation models that Amazon Bedrock supports, see Supported foundation models in Amazon Bedrock.
You must request access to a model before you can use it. After doing so, you can use FMs in the following ways:
- Run inference by sending prompts to a model and generating responses. The playgrounds offer a user-friendly interface in the AWS Management Console for generating text, images, or chat responses. See the Output modality column to determine the models that you can use in each playground.
  Note: The console playgrounds don't support running inference on embeddings models. Use the API to run inference on embeddings models (a sketch follows this list).
- Evaluate models to compare outputs and determine the best model for your use case.
- Set up a knowledge base with the help of an embeddings model. Then use a text model to generate responses to queries.
- Create an agent and use a model to run inference on prompts to carry out orchestration.
- Customize a model by feeding it training and validation data to adjust model parameters for your use case. To use a customized model, you must purchase Provisioned Throughput for it.
- Purchase Provisioned Throughput for a model to increase its throughput.
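Because the playgrounds don't support embeddings models, you call them through the API instead. The following is a minimal sketch that uses the AWS SDK for Python (Boto3) and the InvokeModel operation. The Region, the model ID (Amazon Titan Text Embeddings V2), and the request body are assumptions for illustration; check the inference request parameters reference for the format that your model expects.

```python
import json

import boto3

# Runtime client for model inference (control-plane actions use the "bedrock" client instead).
# The Region is an assumption; use a Region where the model is available to you.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumes Amazon Titan Text Embeddings V2; swap in the model ID from the base model IDs chart.
response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Amazon Bedrock supports embeddings models."}),
)

result = json.loads(response["body"].read())
embedding = result["embedding"]  # a list of floats representing the input text
print(f"Embedding length: {len(embedding)}")
```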
To use an FM with the Amazon Bedrock API, you need the appropriate model ID. Refer to the following table to find the model ID for your use case.
| Use case | How to find the model ID |
| --- | --- |
| Use a base model | Look up the ID in the base model IDs chart. |
| Purchase Provisioned Throughput for a base model | Look up the ID in the model IDs for Provisioned Throughput chart and use it as the modelId in the CreateProvisionedModelThroughput request. |
| Purchase Provisioned Throughput for a custom model | Use the name of the custom model or its ARN as the modelId in the CreateProvisionedModelThroughput request. |
| Use a provisioned model | After you create a Provisioned Throughput, it returns a provisionedModelArn. This ARN is the model ID. |
| Use a custom model | Purchase Provisioned Throughput for the custom model and use the returned provisionedModelArn as the model ID. |
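As a rough illustration of how these IDs fit together, the following sketch (AWS SDK for Python, Boto3) purchases Provisioned Throughput and then uses the returned provisionedModelArn as the modelId for inference. The model ID, provisioned model name, Region, and request body are placeholders, and creating Provisioned Throughput incurs charges, so treat this as an outline rather than a ready-to-run script.

```python
import json

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")                    # control plane
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")    # inference

# Purchase Provisioned Throughput. modelId is the ID from the model IDs for
# Provisioned Throughput chart, or a custom model name or ARN.
provisioned = bedrock.create_provisioned_model_throughput(
    provisionedModelName="my-provisioned-model",   # placeholder name
    modelId="base-or-custom-model-id",             # placeholder; see the table above
    modelUnits=1,
)
provisioned_model_arn = provisioned["provisionedModelArn"]

# After the Provisioned Throughput is in service, use its ARN as the model ID for inference.
response = bedrock_runtime.invoke_model(
    modelId=provisioned_model_arn,
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Hello"}),  # request body format depends on the model
)
print(json.loads(response["body"].read()))
```

Before sending inference requests, you can call GetProvisionedModelThroughput to confirm that the Provisioned Throughput is in service.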
For example code, see the documentation for the feature that you're using, as well as Code examples for Amazon Bedrock using AWS SDKs.
Topics
- Get information about foundation models
- Supported foundation models in Amazon Bedrock
- Model support by AWS Region in Amazon Bedrock
- Feature support by AWS Region in Amazon Bedrock
- Model support by feature
- Inference request parameters and response fields for foundation models
- Custom model hyperparameters
- Model lifecycle