Interface CfnAgent.InferenceConfigurationProperty
- All Superinterfaces:
software.amazon.jsii.JsiiSerializable
- All Known Implementing Classes:
CfnAgent.InferenceConfigurationProperty.Jsii$Proxy
- Enclosing class:
CfnAgent
If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field in the call to Converse or ConverseStream. For more information, see Model parameters.
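For example, a model-specific parameter such as an Anthropic-style top_k can be passed through additionalModelRequestFields. The sketch below assumes the AWS SDK for Java 2.x Bedrock Runtime client; the model ID and the top_k field name are illustrative and depend on the model you use.

 import java.util.Map;
 import software.amazon.awssdk.core.document.Document;
 import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;
 import software.amazon.awssdk.services.bedrockruntime.model.ContentBlock;
 import software.amazon.awssdk.services.bedrockruntime.model.ConversationRole;
 import software.amazon.awssdk.services.bedrockruntime.model.ConverseResponse;
 import software.amazon.awssdk.services.bedrockruntime.model.Message;

 BedrockRuntimeClient client = BedrockRuntimeClient.create();

 ConverseResponse response = client.converse(request -> request
         .modelId("anthropic.claude-3-sonnet-20240229-v1:0") // illustrative model ID
         .messages(Message.builder()
                 .role(ConversationRole.USER)
                 .content(ContentBlock.fromText("Hello"))
                 .build())
         // Parameters the Converse API does not model directly are passed
         // here as a free-form document.
         .additionalModelRequestFields(Document.fromMap(
                 Map.of("top_k", Document.fromNumber(50)))));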
Example:

 // The code below shows an example of how to instantiate this type.
 // The values are placeholders you should change.
 import software.amazon.awscdk.services.bedrock.*;

 InferenceConfigurationProperty inferenceConfigurationProperty = InferenceConfigurationProperty.builder()
         .maximumLength(123)
         .stopSequences(List.of("stopSequences"))
         .temperature(123)
         .topK(123)
         .topP(123)
         .build();
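Within an AWS::Bedrock::Agent definition, an InferenceConfigurationProperty is typically attached to a prompt through the agent's prompt override configuration. A minimal sketch, assuming the ORCHESTRATION prompt is being overridden (the prompt type, creation mode, and parameter values are illustrative):

 import java.util.List;
 import software.amazon.awscdk.services.bedrock.CfnAgent;

 // Attach the inference configuration to the agent's ORCHESTRATION prompt.
 CfnAgent.PromptOverrideConfigurationProperty promptOverride =
         CfnAgent.PromptOverrideConfigurationProperty.builder()
                 .promptConfigurations(List.of(CfnAgent.PromptConfigurationProperty.builder()
                         .promptType("ORCHESTRATION")
                         .promptCreationMode("OVERRIDDEN")
                         .inferenceConfiguration(CfnAgent.InferenceConfigurationProperty.builder()
                                 .maximumLength(2048)
                                 .temperature(0)
                                 .topP(1)
                                 .build())
                         .build()))
                 .build();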
Nested Class Summary
- static final class CfnAgent.InferenceConfigurationProperty.Builder
  A builder for CfnAgent.InferenceConfigurationProperty.
- static final class CfnAgent.InferenceConfigurationProperty.Jsii$Proxy
  An implementation for CfnAgent.InferenceConfigurationProperty.
Method Summary
- static CfnAgent.InferenceConfigurationProperty.Builder builder()
- default Number getMaximumLength()
  The maximum number of tokens allowed in the generated response.
- default java.util.List<String> getStopSequences()
  A list of stop sequences.
- default Number getTemperature()
  The likelihood of the model selecting higher-probability options while generating a response.
- default Number getTopK()
  While generating a response, the model determines the probability of the following token at each point of generation.
- default Number getTopP()
  The percentage of most-likely candidates that the model considers for the next token.

Methods inherited from interface software.amazon.jsii.JsiiSerializable:
- $jsii$toJson
Method Details
getMaximumLength
default Number getMaximumLength()
The maximum number of tokens allowed in the generated response.
getStopSequences
default java.util.List<String> getStopSequences()
A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response.
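For instance, building on the imports in the example above, a configuration that caps responses at 512 tokens and also halts generation at a custom delimiter could look like this (both values are illustrative, not defaults):

 InferenceConfigurationProperty bounded = InferenceConfigurationProperty.builder()
         .maximumLength(512)                   // emit at most 512 tokens
         .stopSequences(List.of("###END###"))  // also stop if this string is generated
         .build();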
getTemperature
default Number getTemperature()
The likelihood of the model selecting higher-probability options while generating a response. A lower value makes the model more likely to choose higher-probability options, while a higher value makes the model more likely to choose lower-probability options.
The default value is the default value for the model that you are using. For more information, see Inference parameters for foundation models.
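As an illustration (valid ranges and defaults vary by model):

 // Low temperature: favors high-probability tokens, more focused output.
 InferenceConfigurationProperty focused = InferenceConfigurationProperty.builder()
         .temperature(0.2)
         .build();

 // High temperature: gives lower-probability tokens more weight, more varied output.
 InferenceConfigurationProperty varied = InferenceConfigurationProperty.builder()
         .temperature(0.9)
         .build();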
getTopK
default Number getTopK()
While generating a response, the model determines the probability of the following token at each point of generation. The value that you set for topK is the number of most-likely candidates from which the model chooses the next token in the sequence. For example, if you set topK to 50, the model selects the next token from among the top 50 most likely choices.
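The topK example from the paragraph above, expressed with the builder:

 // Choose each token from among the 50 most likely candidates.
 InferenceConfigurationProperty topFifty = InferenceConfigurationProperty.builder()
         .topK(50)
         .build();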
getTopP
default Number getTopP()
The percentage of most-likely candidates that the model considers for the next token. For example, if you choose a value of 0.8 for topP, the model selects from the top 80% of the probability distribution of tokens that could be next in the sequence.
The default value is the default value for the model that you are using. For more information, see Inference parameters for foundation models.
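Mirroring the 0.8 example above; topK and topP can be set together, in which case both restrictions apply when the next token is sampled:

 // Consider only the top 80% of the probability distribution,
 // further limited to the 50 most likely candidates.
 InferenceConfigurationProperty sampled = InferenceConfigurationProperty.builder()
         .topK(50)
         .topP(0.8)
         .build();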
builder
static CfnAgent.InferenceConfigurationProperty.Builder builder()
Returns:
a builder for CfnAgent.InferenceConfigurationProperty