

# TextAIPromptInferenceConfiguration
<a name="API_amazon-q-connect_TextAIPromptInferenceConfiguration"></a>

Inference configuration for text-based AI Prompts.

## Contents
<a name="API_amazon-q-connect_TextAIPromptInferenceConfiguration_Contents"></a>

 **maxTokensToSample**   <a name="connect-Type-amazon-q-connect_TextAIPromptInferenceConfiguration-maxTokensToSample"></a>
The maximum number of tokens to generate in the response.  
Type: Integer  
Valid Range: Minimum value of 0. Maximum value of 4096.  
Required: No

 **temperature**   <a name="connect-Type-amazon-q-connect_TextAIPromptInferenceConfiguration-temperature"></a>
The temperature setting for controlling randomness in the generated response.  
Type: Float  
Valid Range: Minimum value of 0. Maximum value of 1.  
Required: No

 **topK**   <a name="connect-Type-amazon-q-connect_TextAIPromptInferenceConfiguration-topK"></a>
The top-K sampling parameter, which restricts token selection to the K most probable candidates at each step.  
Type: Integer  
Valid Range: Minimum value of 0. Maximum value of 200.  
Required: No

 **topP**   <a name="connect-Type-amazon-q-connect_TextAIPromptInferenceConfiguration-topP"></a>
The top-P (nucleus) sampling parameter, which restricts token selection to the smallest set of candidates whose cumulative probability exceeds P.  
Type: Float  
Valid Range: Minimum value of 0. Maximum value of 1.  
Required: No
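Because all four members are optional bounded numbers, a client can validate values against the ranges above before sending a request. The helper below is an illustrative sketch, not part of any AWS SDK; the `make_text_inference_config` name and the plain-dict output are assumptions, though the member names and ranges mirror this page.

```python
# Illustrative client-side validator for a text AI Prompt inference
# configuration. Field names and ranges mirror the API reference above;
# the helper itself is hypothetical, not an official SDK function.

RANGES = {
    "maxTokensToSample": (0, 4096),
    "temperature": (0.0, 1.0),
    "topK": (0, 200),
    "topP": (0.0, 1.0),
}


def make_text_inference_config(**kwargs):
    """Build a dict of inference parameters, rejecting unknown names
    and out-of-range values. All fields are optional."""
    config = {}
    for name, value in kwargs.items():
        if name not in RANGES:
            raise ValueError(f"Unknown parameter: {name}")
        lo, hi = RANGES[name]
        if not (lo <= value <= hi):
            raise ValueError(f"{name} must be in [{lo}, {hi}], got {value}")
        config[name] = value
    return config


# Example: a fairly deterministic configuration.
cfg = make_text_inference_config(maxTokensToSample=1024, temperature=0.2, topP=0.9)
```

A validator like this surfaces range errors locally instead of waiting for the service to reject the request.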

## See Also
<a name="API_amazon-q-connect_TextAIPromptInferenceConfiguration_SeeAlso"></a>

For more information about using this API in one of the language-specific AWS SDKs, see the following:
+  [AWS SDK for C++](https://docs.aws.amazon.com/goto/SdkForCpp/qconnect-2020-10-19/TextAIPromptInferenceConfiguration) 
+  [AWS SDK for Java V2](https://docs.aws.amazon.com/goto/SdkForJavaV2/qconnect-2020-10-19/TextAIPromptInferenceConfiguration) 
+  [AWS SDK for Ruby V3](https://docs.aws.amazon.com/goto/SdkForRubyV3/qconnect-2020-10-19/TextAIPromptInferenceConfiguration) 