AWS::Bedrock::DataSource SemanticChunkingConfiguration
Settings for semantic document chunking for a data source. Semantic chunking splits a document into smaller documents based on groups of similar content derived from the text with natural language processing.
With semantic chunking, each sentence is compared to the next to determine how similar they are. You specify a threshold in the form of a percentile, where adjacent sentences that are less similar than that percentage of sentence pairs are divided into separate chunks. For example, if you set the threshold to 90, then the 10 percent of sentence pairs that are least similar are split. So if you have 101 sentences, 100 sentence pairs are compared, and the 10 with the least similarity are split, creating 11 chunks. These chunks are further split if they exceed the max token size.
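The percentile logic above can be sketched in Python. This is an illustration of the splitting rule as described, not Bedrock's actual implementation; the function name and similarity values are assumptions for the example.

```python
# Illustrative sketch (not Bedrock's implementation): split the sentence
# pairs whose similarity falls in the least-similar percentile band.
def split_points(similarities, breakpoint_percentile_threshold):
    """Return indices of sentence pairs that become chunk boundaries.

    similarities[i] is the similarity between sentence i and sentence i + 1.
    With a threshold of 90, the 10 percent least-similar pairs are split.
    """
    cutoff_rank = round(
        len(similarities) * (100 - breakpoint_percentile_threshold) / 100
    )
    # Rank pairs from least to most similar; the lowest-ranked become breaks.
    ranked = sorted(range(len(similarities)), key=lambda i: similarities[i])
    return sorted(ranked[:cutoff_rank])

# 101 sentences give 100 sentence pairs; with a threshold of 90, the 10
# least-similar pairs are split, producing 11 chunks.
sims = [i / 100 for i in range(100)]  # made-up similarity scores
breaks = split_points(sims, 90)
print(len(breaks) + 1)  # number of chunks
```

Splitting on 10 of the 100 pairs yields 10 boundaries and therefore 11 chunks, matching the worked example above.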
You must also specify a buffer size, which determines whether sentences are compared in isolation or within a moving context window that includes the previous and following sentences. For example, if you set the buffer size to 1, the embedding for sentence 10 is derived from sentences 9, 10, and 11 combined.
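The buffer-window behavior described above can be sketched as follows. This is an assumed reading of how the window is formed (text concatenation before embedding); the function name is hypothetical.

```python
# Illustrative sketch (assumed behavior): with a buffer size of B, the text
# embedded for sentence at position `index` is the concatenation of the
# sentences from index - B through index + B, clipped at document edges.
def buffered_text(sentences, index, buffer_size):
    lo = max(0, index - buffer_size)
    hi = min(len(sentences), index + buffer_size + 1)
    return " ".join(sentences[lo:hi])

sentences = [f"s{n}" for n in range(1, 13)]  # "s1" .. "s12"
# Buffer size 1: sentence 10's embedding input combines sentences 9, 10, 11.
print(buffered_text(sentences, 9, 1))  # "s9 s10 s11"
```

With a buffer size of 0, each sentence is embedded in isolation; at the document boundaries the window simply has fewer neighbors.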
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
JSON
{
  "BreakpointPercentileThreshold" : Integer,
  "BufferSize" : Integer,
  "MaxTokens" : Integer
}
YAML
BreakpointPercentileThreshold: Integer
BufferSize: Integer
MaxTokens: Integer
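For context, the following fragment shows where this property type might sit inside an AWS::Bedrock::DataSource resource. The resource name, bucket ARN, surrounding property names (such as VectorIngestionConfiguration and ChunkingStrategy), and the chosen values are assumptions for illustration; consult the parent resource's reference for the exact property path.

```
MyDataSource:
  Type: AWS::Bedrock::DataSource
  Properties:
    Name: my-semantic-data-source            # assumed name
    KnowledgeBaseId: !Ref MyKnowledgeBase    # assumed reference
    DataSourceConfiguration:
      Type: S3
      S3Configuration:
        BucketArn: arn:aws:s3:::my-bucket    # assumed bucket
    VectorIngestionConfiguration:
      ChunkingConfiguration:
        ChunkingStrategy: SEMANTIC
        SemanticChunkingConfiguration:
          BreakpointPercentileThreshold: 90
          BufferSize: 1
          MaxTokens: 300
```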
Properties
BreakpointPercentileThreshold
The dissimilarity threshold for splitting chunks, expressed as a percentile.
Required: Yes
Type: Integer
Minimum: 50
Maximum: 99
Update requires: Replacement
BufferSize
The buffer size, that is, the number of surrounding sentences included on each side when sentences are compared.
Required: Yes
Type: Integer
Minimum: 0
Maximum: 1
Update requires: Replacement
MaxTokens
The maximum number of tokens that a chunk can contain.
Required: Yes
Type: Integer
Minimum: 1
Update requires: Replacement