Interface CfnFlowVersion.PromptModelInferenceConfigurationProperty
- All Superinterfaces:
software.amazon.jsii.JsiiSerializable
- All Known Implementing Classes:
CfnFlowVersion.PromptModelInferenceConfigurationProperty.Jsii$Proxy
- Enclosing class:
CfnFlowVersion
@Stability(Stable)
public static interface CfnFlowVersion.PromptModelInferenceConfigurationProperty
extends software.amazon.jsii.JsiiSerializable
Contains inference configurations related to model inference for a prompt.
For more information, see Inference parameters.
Example:
// The code below shows an example of how to instantiate this type.
// The values are placeholders you should change.
import software.amazon.awscdk.services.bedrock.*;

PromptModelInferenceConfigurationProperty promptModelInferenceConfigurationProperty =
        PromptModelInferenceConfigurationProperty.builder()
                .maxTokens(123)
                .stopSequences(List.of("stopSequences"))
                .temperature(123)
                .topK(123)
                .topP(123)
                .build();
Nested Class Summary
static final class CfnFlowVersion.PromptModelInferenceConfigurationProperty.Builder
    A builder for CfnFlowVersion.PromptModelInferenceConfigurationProperty.
static final class CfnFlowVersion.PromptModelInferenceConfigurationProperty.Jsii$Proxy
    An implementation for CfnFlowVersion.PromptModelInferenceConfigurationProperty.
Method Summary
static CfnFlowVersion.PromptModelInferenceConfigurationProperty.Builder builder()
default Number getMaxTokens()
    The maximum number of tokens to return in the response.
default List<String> getStopSequences()
    A list of strings that define sequences after which the model will stop generating.
default Number getTemperature()
    Controls the randomness of the response.
default Number getTopK()
    The number of most-likely candidates that the model considers for the next token during generation.
default Number getTopP()
    The percentage of most-likely candidates that the model considers for the next token.

Methods inherited from interface software.amazon.jsii.JsiiSerializable
$jsii$toJson
Method Details
getMaxTokens
The maximum number of tokens to return in the response.
getStopSequences
A list of strings that define sequences after which the model will stop generating.
getTemperature
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
getTopK
The number of most-likely candidates that the model considers for the next token during generation.
getTopP
The percentage of most-likely candidates that the model considers for the next token.
builder
@Stability(Stable) static CfnFlowVersion.PromptModelInferenceConfigurationProperty.Builder builder()
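As a further illustration, the sketch below builds the property with values tuned toward predictable output. The specific numbers and the stop sequence are illustrative assumptions only; valid ranges and sensible defaults depend on the underlying foundation model. Since every getter is a default method, any setter you omit simply leaves that parameter unset.

import java.util.List;
import software.amazon.awscdk.services.bedrock.CfnFlowVersion;

// Illustrative values only; valid ranges depend on the chosen model.
CfnFlowVersion.PromptModelInferenceConfigurationProperty inferenceConfig =
        CfnFlowVersion.PromptModelInferenceConfigurationProperty.builder()
                .maxTokens(512)                        // cap on tokens returned in the response
                .stopSequences(List.of("\n\nHuman:"))  // stop generating after this sequence (example value)
                .temperature(0.2)                      // lower value -> more predictable output
                .topK(50)                              // consider only the 50 most likely candidates per token
                .topP(0.9)                             // restrict sampling to the top 90% probability mass
                .build();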