interface PromptModelInferenceConfigurationProperty
Language | Type name |
---|---|
.NET | Amazon.CDK.aws_bedrock.CfnFlowVersion.PromptModelInferenceConfigurationProperty |
Go | github.com/aws/aws-cdk-go/awscdk/v2/awsbedrock#CfnFlowVersion_PromptModelInferenceConfigurationProperty |
Java | software.amazon.awscdk.services.bedrock.CfnFlowVersion.PromptModelInferenceConfigurationProperty |
Python | aws_cdk.aws_bedrock.CfnFlowVersion.PromptModelInferenceConfigurationProperty |
TypeScript | aws-cdk-lib » aws_bedrock » CfnFlowVersion » PromptModelInferenceConfigurationProperty |
Contains inference configurations related to model inference for a prompt.
For more information, see Inference parameters.
Example
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import { aws_bedrock as bedrock } from 'aws-cdk-lib';
const promptModelInferenceConfigurationProperty: bedrock.CfnFlowVersion.PromptModelInferenceConfigurationProperty = {
maxTokens: 123,
stopSequences: ['stopSequences'],
temperature: 123,
topP: 123,
};
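The placeholder values above are not meant to be deployed as-is; for most models, temperature and topP are expected to fall between 0 and 1. Below is a minimal sketch with plausible values. The specific numbers and the stop sequence are illustrative assumptions, not recommendations; check the documented limits for the model you use.
```ts
import { aws_bedrock as bedrock } from 'aws-cdk-lib';

// Illustrative values only -- valid ranges vary by model, so consult the
// Inference parameters documentation for your model before relying on them.
const inferenceConfig: bedrock.CfnFlowVersion.PromptModelInferenceConfigurationProperty = {
  maxTokens: 512,                // cap the length of the generated response
  stopSequences: ['\n\nHuman:'], // assumed marker; generation halts once the model emits it
  temperature: 0.7,              // moderate randomness (lower = more predictable)
  topP: 0.9,                     // sample from tokens covering the top 90% of probability mass
};
```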
Properties
Name | Type | Description |
---|---|---|
maxTokens? | number | The maximum number of tokens to return in the response. |
stopSequences? | string[] | A list of strings that define sequences after which the model will stop generating. |
temperature? | number | Controls the randomness of the response. |
topP? | number | The percentage of most-likely candidates that the model considers for the next token. |
maxTokens?
Type: number (optional)
The maximum number of tokens to return in the response.
stopSequences?
Type: string[] (optional)
A list of strings that define sequences after which the model will stop generating.
temperature?
Type: number (optional)
Controls the randomness of the response.
Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
topP?
Type: number (optional)
The percentage of most-likely candidates that the model considers for the next token.
For example, with topP set to 0.8, the model samples only from the most likely tokens that together account for 80% of the probability mass.
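All four properties are optional. In a flow definition this type typically does not stand alone: it is nested under the `text` key of PromptInferenceConfigurationProperty, which belongs to a prompt node's configuration. A hedged sketch of that nesting follows, assuming the field names from the AWS::Bedrock::Flow CloudFormation schema; verify the exact shape against your aws-cdk-lib version.
```ts
import { aws_bedrock as bedrock } from 'aws-cdk-lib';

// Sketch of the nesting, assuming the AWS::Bedrock::Flow schema, where a
// prompt's inference configuration wraps this type under a `text` key.
const promptInference: bedrock.CfnFlowVersion.PromptInferenceConfigurationProperty = {
  // `text` carries the model inference settings documented above.
  text: {
    maxTokens: 256,   // keep responses short
    temperature: 0.2, // low randomness for predictable output
    topP: 1,          // consider the full candidate distribution
  },
};
```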