InferenceConfiguration - Amazon Bedrock

Inference parameters for a model.

Contents

maxTokens

The maximum number of tokens to allow in the generated response.

Type: Integer

Valid Range: Minimum value of 1.

Required: No

stopSequences

A list of stop sequences. If the model generates one of these character sequences, it stops producing further output.

Type: Array of strings

Array Members: Minimum number of 0 items. Maximum number of 2500 items.

Length Constraints: Minimum length of 1 for each member.

Required: No
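The constraints above can be checked client-side before sending a request. A minimal sketch in Python; the function name is illustrative, not part of any SDK:

```python
def validate_stop_sequences(stop_sequences):
    """Raise ValueError if the list violates the documented constraints:
    0 to 2500 items, each a string of at least 1 character."""
    if len(stop_sequences) > 2500:
        raise ValueError("stopSequences allows at most 2500 items")
    for seq in stop_sequences:
        if not isinstance(seq, str) or len(seq) < 1:
            raise ValueError("each stop sequence must be a non-empty string")
    return stop_sequences
```

An empty list is valid, since the minimum number of items is 0.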

temperature

The sampling temperature. Lower values make the output more deterministic; higher values make it more random.

Type: Float

Valid Range: Minimum value of 0. Maximum value of 1.

Required: No

topP

The cumulative probability cutoff for nucleus (top-p) sampling. The model samples only from the smallest set of most-likely tokens whose combined probability exceeds this value.

Type: Float

Valid Range: Minimum value of 0. Maximum value of 1.

Required: No
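All four fields are optional and are passed together as one structure. A sketch of how the structure might be assembled and attached to a Converse request using the AWS SDK for Python (boto3); the model ID and helper function are example assumptions, not part of the API:

```python
# Example InferenceConfiguration; every field is optional.
inference_config = {
    "maxTokens": 512,          # integer, minimum 1
    "stopSequences": ["END"],  # up to 2500 strings, each at least 1 character
    "temperature": 0.5,        # float in [0, 1]
    "topP": 0.9,               # float in [0, 1]
}

def build_converse_request(model_id, prompt, config):
    # Illustrative helper: assembles the request shape for Converse,
    # with the InferenceConfiguration under the inferenceConfig key.
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": config,
    }

# Sending the request requires AWS credentials, e.g.:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**build_converse_request(
#       "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
#       "Hello", inference_config))
```

Fields omitted from the structure fall back to the model provider's defaults.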

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following: