Submit prompts and generate responses using the API

Amazon Bedrock offers two primary model invocation API operations for inference:

  • InvokeModel – Submit a single prompt and generate a response based on that prompt.

  • Converse – Submit a single prompt or an entire conversation and generate a response. Offers more flexibility than InvokeModel by letting you include previous prompts and responses for context.
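For example, here is a minimal Converse sketch using the AWS SDK for Python (boto3). The model ID and Region are examples; substitute a model or inference profile ID that you have access to in your account:

```python
import boto3

# Inference operations (InvokeModel, Converse) live on the bedrock-runtime client.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize what Amazon Bedrock does in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.5},
)

# The generated message follows the same role/content structure as the input.
print(response["output"]["message"]["content"][0]["text"])
```

To continue the conversation, append the returned message and a new user message to the messages list and call Converse again.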

You can also stream responses with the streaming versions of these API operations, InvokeModelWithResponseStream and ConverseStream.
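The streaming variant is a one-call change in the sketch above: ConverseStream returns an event stream instead of a single response object, and generated text arrives incrementally in contentBlockDelta events (same assumed example model ID):

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse_stream(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Write a haiku about clouds."}]}],
)

# Iterate over the event stream and print text deltas as they arrive.
for event in response["stream"]:
    if "contentBlockDelta" in event:
        print(event["contentBlockDelta"]["delta"]["text"], end="", flush=True)
print()
```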

For model inference, you need to determine the following parameters:

  • Model ID – The ID or Amazon Resource Name (ARN) of the model or inference profile to use, specified in the modelId field. The following list describes how to find IDs for different types of resources:

    • Base model – A foundation model from a provider.
        Find ID in console: Choose Base models from the left navigation pane, search for a model, and look for the Model ID.
        Find ID in API: Send a GetFoundationModel or ListFoundationModels request and find the modelId in the response.
        Relevant documentation: See a list of IDs at Supported foundation models in Amazon Bedrock.

    • Inference profile – Increases throughput by allowing invocation of a model in multiple AWS Regions.
        Find ID in console: Choose Cross-region inference from the left navigation pane and look for an Inference profile ID.
        Find ID in API: Send a GetInferenceProfile or ListInferenceProfiles request and find the inferenceProfileId in the response.
        Relevant documentation: See a list of IDs at Supported Regions and models for inference profiles.

    • Prompt – A prompt that was constructed using Prompt management.
        Find ID in console: Choose Prompt management from the left navigation pane, select a prompt in the Prompts section, and look for the Prompt ARN.
        Find ID in API: Send a GetPrompt or ListPrompts request and find the promptArn in the response.
        Relevant documentation: Learn about creating a prompt at Construct and store reusable prompts with Prompt management in Amazon Bedrock.

    • Provisioned Throughput – Provides a higher level of throughput for a model at a fixed cost.
        Find ID in console: Choose Provisioned Throughput from the left navigation pane, select a Provisioned Throughput, and look for the ARN.
        Find ID in API: Send a GetProvisionedModelThroughput or ListProvisionedModelThroughputs request and find the provisionedModelArn in the response.
        Relevant documentation: Learn how to purchase a Provisioned Throughput for a model at Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock.

    • Custom model – A model whose parameters are shifted from a foundation model based on training data.
        Find ID in console or API: After purchasing Provisioned Throughput for the custom model, follow the steps above for Provisioned Throughput to find its ID.
        Relevant documentation: Learn how to customize a model at Customize your model to improve its performance for your use case. After customization, you must purchase Provisioned Throughput for it and use the ID of the Provisioned Throughput.
  • Request body – Contains the inference parameters for a model and other configurations. Each base model has its own inference parameters. The inference parameters for a custom or provisioned model depend on the base model from which it was created. For more information, see Inference request parameters and response fields for foundation models. The sketch after this list shows both parameters in use.
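The following sketch ties the two parameters together: it looks up base model IDs with ListFoundationModels (on the bedrock control-plane client), then submits a model-specific request body with InvokeModel. The body shown follows the Anthropic Claude messages format as an example; other providers define different request shapes, so consult the inference parameters documentation for the model you actually invoke:

```python
import json
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")          # control plane
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")  # inference

# Find a model ID: list foundation models and read modelId from each summary.
for summary in bedrock.list_foundation_models(byProvider="Anthropic")["modelSummaries"]:
    print(summary["modelId"])

# Example only: the request body is model-specific. This one uses the
# Anthropic Claude messages format.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{"role": "user", "content": [{"type": "text", "text": "Hello!"}]}],
}

response = runtime.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
)

# The response body is also model-specific; this path matches the Claude format.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```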

Select a topic to learn how to use the model invocation APIs.