AWS::Bedrock::DataSource WebCrawlerConfiguration
The configuration of web URLs that you want to crawl. You should be authorized to crawl the URLs.
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
JSON
{ "CrawlerLimits" :
WebCrawlerLimits
, "ExclusionFilters" :[ String, ... ]
, "InclusionFilters" :[ String, ... ]
, "Scope" :String
}
YAML
CrawlerLimits:
  WebCrawlerLimits
ExclusionFilters:
  - String
InclusionFilters:
  - String
Scope: String
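For orientation, the following YAML fragment is a minimal sketch of where this configuration sits inside an AWS::Bedrock::DataSource resource. The surrounding property names (KnowledgeBaseId, DataSourceConfiguration, WebConfiguration, SourceConfiguration, UrlConfiguration, SeedUrls) and all values are illustrative assumptions drawn from the parent resource, not definitions on this page.
YAML
# Illustrative sketch; names outside the CrawlerConfiguration block are assumptions.
MyWebDataSource:
  Type: AWS::Bedrock::DataSource
  Properties:
    Name: docs-web-crawler
    KnowledgeBaseId: !Ref MyKnowledgeBase          # hypothetical knowledge base
    DataSourceConfiguration:
      Type: WEB
      WebConfiguration:
        SourceConfiguration:
          UrlConfiguration:
            SeedUrls:
              - Url: https://docs.aws.amazon.com/bedrock/latest/userguide/
        CrawlerConfiguration:                      # WebCrawlerConfiguration (this page)
          Scope: HOST_ONLY
          InclusionFilters:
            - '.*/bedrock/.*'
          ExclusionFilters:
            - '.*\.pdf$'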
Properties
CrawlerLimits
The configuration of crawl limits for the web URLs.
Required: No
Type: WebCrawlerLimits
Update requires: No interruption
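As a hedged sketch, the fragment below shows how CrawlerLimits might be set. RateLimit and MaxPages belong to the separate WebCrawlerLimits property type and are assumptions here; see that type's own page for its actual properties and ranges.
YAML
CrawlerLimits:
  RateLimit: 50      # assumed: pages crawled per minute
  MaxPages: 1000     # assumed: cap on total pages crawled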
ExclusionFilters
A list of one or more exclusion regular expression patterns to exclude certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
Required: No
Type: Array of String
Minimum: 1
Maximum: 1000 | 25
Update requires: No interruption
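A minimal sketch of exclusion patterns; the regular expressions and the URL paths they target are illustrative only.
YAML
ExclusionFilters:
  - '.*\.pdf$'          # skip URLs ending in .pdf
  - '.*/archive/.*'     # skip anything under an /archive/ path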
InclusionFilters
A list of one or more inclusion regular expression patterns to include certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
Required: No
Type: Array of String
Minimum: 1
Maximum: 1000 | 25
Update requires: No interruption
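A minimal sketch combining both filter types to show the precedence rule described above; the patterns are illustrative. A URL that matches both lists is not crawled.
YAML
InclusionFilters:
  - '.*/bedrock/.*'          # crawl only URLs under a /bedrock/ path
ExclusionFilters:
  - '.*release-notes.*'      # a /bedrock/ URL matching this is still excluded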
Scope
The scope of what is crawled for your URLs.
You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL "https://docs.aws.amazon.com/bedrock/latest/userguide/" and no other domains. You can choose to include subdomains in addition to the host or primary domain. For example, web pages that contain "aws.amazon.com" can also include the subdomain "docs.aws.amazon.com".
Required: No
Type: String
Allowed values:
HOST_ONLY | SUBDOMAINS
Update requires: No interruption
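A minimal sketch contrasting the two scope values, assuming a seed URL on aws.amazon.com; the host names are illustrative.
YAML
Scope: HOST_ONLY      # crawls pages on aws.amazon.com only
# Scope: SUBDOMAINS   # would also crawl docs.aws.amazon.com and other subdomains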