
Connect to Microsoft SharePoint for your Amazon Bedrock knowledge base

Microsoft SharePoint is a collaborative web-based service for working on documents, web pages, websites, lists, and more. You can connect to your SharePoint instance for your Amazon Bedrock knowledge base by using either the AWS Management Console for Amazon Bedrock or the CreateDataSource API (see Amazon Bedrock supported SDKs and AWS CLI).

Note

The Microsoft SharePoint data source connector is in preview release and is subject to change.

Amazon Bedrock supports connecting to SharePoint Online instances. Crawling OneNote documents is currently not supported. Currently, only the Amazon OpenSearch Serverless vector store is available to use with this data source.

There are limits on the number of files and the size (in MB) per file that can be crawled. See Quotas for knowledge bases.

Supported features

  • Auto detection of main document fields

  • Inclusion/exclusion content filters

  • Incremental content syncs for added, updated, deleted content

  • OAuth 2.0 authentication

Prerequisites

In SharePoint, make sure you:

  • Take note of your SharePoint Online site URL/URLs. For example, https://yourdomain.sharepoint.com/sites/mysite. Your URL must start with https and contain sharepoint.com. Your site URL must be the actual SharePoint site, not sharepoint.com/ or sites/mysite/home.aspx.

  • Take note of the domain name of your SharePoint Online instance URL/URLs.

  • (For OAuth 2.0 authentication) Copy your Microsoft 365 tenant ID. You can find your tenant ID in the Properties of your Azure Active Directory portal or in your OAuth application.

    Take note of the username and password of the SharePoint admin account, and copy the client ID and client secret value when you register an application.

    Note

    For an example application, see Register a client application in Microsoft Entra ID (formerly known as Azure Active Directory) on the Microsoft Learn website.

  • Certain read permissions are required to connect to SharePoint when you register an application.

    • SharePoint: AllSites.Read (Delegated) – Read items in all site collections

  • You might need to turn off Security Defaults in your Azure portal using an admin user. For more information on managing security default settings in the Azure portal, see Microsoft documentation on how to enable/disable security defaults.

  • You might need to turn off multi-factor authentication (MFA) in your SharePoint account, so that Amazon Bedrock is not blocked from crawling your SharePoint content.

In your AWS account, make sure you:

  • Store your authentication credentials in an AWS Secrets Manager secret and note the Amazon Resource Name (ARN) of the secret. Follow the Connection configuration instructions on this page for the key-value pairs that must be included in your secret.

  • Include the necessary permissions to connect to your data source in your AWS Identity and Access Management (IAM) role/permissions policy for your knowledge base. For information on the required permissions for this data source to add to your knowledge base IAM role, see Permissions to access data sources.

Note

If you use the console, you can go to AWS Secrets Manager to add your secret, or use an existing secret, as part of the data source configuration step. The IAM role with all the required permissions can be created for you as part of the console steps for creating a knowledge base. After you have configured your data source and other settings, the IAM role with all the required permissions is applied to your specific knowledge base.

We recommend that you regularly refresh or rotate your credentials and secret. For your own security, provide only the necessary level of access. We do not recommend that you reuse credentials and secrets across data sources.

Connection configuration

To connect to your SharePoint instance, you must provide the necessary configuration information so that Amazon Bedrock can access and crawl your data. You must also follow the Prerequisites.

An example of a configuration for this data source is included in this section.

For more information about auto detection of document fields, inclusion/exclusion content filters, incremental syncing, secret authentication credentials, and how these work, see the following:

The data source connector automatically detects and crawls all of the main metadata fields of your documents or content. For example, the data source connector can crawl the document body equivalent of your documents, the document title, the document creation or modification date, or other core fields that might apply to your documents.

Important

If your content includes sensitive information, then Amazon Bedrock could respond using sensitive information.

You can apply filtering operators to metadata fields to help you further improve the relevancy of responses. For example, the document field "epoch_modification_time" is the number of seconds that have passed since January 1, 1970, as of when the document was last updated. You can filter on the most recent data, where "epoch_modification_time" is greater than a certain number. For more information on the filtering operators you can apply to your metadata fields, see Metadata and filtering.
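For example, the following sketch uses the Retrieve API through the AWS SDK for Python (Boto3) to return only content modified after January 1, 2024 (epoch second 1704067200). The knowledge base ID and query text are hypothetical placeholders, and the sketch assumes your crawled documents expose the "epoch_modification_time" field.

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

# Retrieve only content whose last-modified time is after January 1, 2024 (epoch seconds).
response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="KB12345678",  # hypothetical knowledge base ID
    retrievalQuery={"text": "latest project status"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            "filter": {
                "greaterThan": {
                    "key": "epoch_modification_time",
                    "value": 1704067200
                }
            }
        }
    }
)

for result in response["retrievalResults"]:
    print(result["content"]["text"])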

You can include or exclude crawling certain content. For example, you can specify an exclusion prefix/regular expression pattern to skip crawling any file that contains “private” in the file name. You could also specify an inclusion prefix/regular expression pattern to include certain content entities or content types. If you specify an inclusion and exclusion filter and both match a document, the exclusion filter takes precedence and the document isn’t crawled.

An example of a regular expression pattern to exclude or filter out PDF files that contain "private" in the file name: ".*private.*\\.pdf"

You can apply inclusion/exclusion filters on the following content types (a configuration sketch follows this list):

  • Page: Main page title

  • Event: Event name

  • File: File name with its extension for attachments and all document files

Crawling OneNote documents is currently not supported.
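The following sketch shows how per-content-type filters map onto the crawlerConfiguration portion of a SharePoint data source configuration when you use the AWS SDK for Python (Boto3). The regular expression patterns are hypothetical examples.

# Hypothetical crawler configuration: include PDF files, but exclude PDFs with
# "private" in the file name and pages whose title contains "Confidential".
crawler_configuration = {
    "filterConfiguration": {
        "type": "PATTERN",
        "patternObjectFilter": {
            "filters": [
                {
                    "objectType": "File",
                    "inclusionFilters": [".*\\.pdf"],
                    "exclusionFilters": [".*private.*\\.pdf"]
                },
                {
                    "objectType": "Page",
                    "exclusionFilters": [".*Confidential.*"]
                }
            ]
        }
    }
}

You pass this dictionary as the crawlerConfiguration field of sharePointConfiguration in your CreateDataSource request (see the complete example in the API section of this page).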

You can use the following parsing options for ingestion of data from your data source:

When sending a CreateDataSource request (see link for request and response formats and field details) with an Agents for Amazon Bedrock build-time endpoint, don't include a parsingConfiguration field within the vectorIngestionConfiguration.

The following HTTP request shows a minimal working example for setting up a data source and using the default Amazon Bedrock parser during ingestion.

PUT /knowledgebases/KB12345678/datasources/ HTTP/1.1
Content-type: application/json

{
    "dataSourceConfiguration": {
        "type": "S3",
        "s3Configuration": {
            "bucketArn": "arn:aws:s3:::amzn-s3-demo-bucket"
        }
    },
    "name": "myDataSource"
}

When sending a CreateDataSource request (see link for request and response formats and field details) with an Agents for Amazon Bedrock build-time endpoint, include a parsingConfiguration field within the vectorIngestionConfiguration.

The following HTTP request shows a minimal working example for setting up a data source and using a foundation model to parse it during ingestion.

PUT /knowledgebases/KB12345678/datasources/ HTTP/1.1
Content-type: application/json

{
    "dataSourceConfiguration": {
        "type": "S3",
        "s3Configuration": {
            "bucketArn": "arn:aws:s3:::amzn-s3-demo-bucket"
        }
    },
    "name": "myDataSource",
    "vectorIngestionConfiguration": {
        "parsingConfiguration": {
            "parsingStrategy": "BEDROCK_FOUNDATION_MODEL",
            "bedrockFoundationModelConfiguration": {
                "parsingPrompt": {
                    "parsingPromptText": "Pay attention to if the table headers have sub-headers. Do not miss sub-headers. - If a content cell contains multiple items, output them in SEPARATE content rows. - If a header cell contains multiple rows, output them in SEPARATE header rows. DO NOT omit any header text. - If a cell crosses multiple columns, put the text in the first cell but output an empty cell in the following. Ignore logos and text in header/footer but include if found in the main body of the document."
                },
                "modelArn": "anthropic.claude-3-haiku-20240307-v1:0"
            }
        }
    }
}

Within the ParsingConfiguration object, specify the following fields:

  • parsingStrategy – Specify BEDROCK_FOUNDATION_MODEL and include the bedrockFoundationModelConfiguration field.

  • parsingPromptText – Optionally override the default parsing prompt with your custom one.

  • modelArn – The ARN of the foundation model to use for parsing the data.
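The following sketch shows how these fields nest when you pass the same configuration as the vectorIngestionConfiguration argument in the AWS SDK for Python (Boto3); the prompt text is a hypothetical placeholder.

# parsingPromptText is optional; omit parsingPrompt to use the default parsing prompt.
vector_ingestion_configuration = {
    "parsingConfiguration": {
        "parsingStrategy": "BEDROCK_FOUNDATION_MODEL",
        "bedrockFoundationModelConfiguration": {
            "modelArn": "anthropic.claude-3-haiku-20240307-v1:0",
            "parsingPrompt": {
                "parsingPromptText": "Your custom parsing instructions go here."
            }
        }
    }
}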

The data source connector crawls new, modified, and deleted content each time your data source syncs with your knowledge base. Amazon Bedrock can use your data source’s mechanism for tracking content changes and crawl content that changed since the last sync. When you sync your data source with your knowledge base for the first time, all content is crawled by default.

To sync your data source with your knowledge base, use the StartIngestionJob API or select your knowledge base in the console and select Sync within the data source overview section.

Important

All data that you sync from your data source becomes available to anyone with bedrock:Retrieve permissions to retrieve the data. This can also include any data with controlled data source permissions. For more information, see Knowledge base permissions.
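A minimal sketch of starting a sync programmatically with the StartIngestionJob API through the AWS SDK for Python (Boto3); the knowledge base and data source IDs are hypothetical placeholders.

import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Start a sync (ingestion job). After the first full sync, only content that
# changed since the previous sync is crawled.
response = bedrock_agent.start_ingestion_job(
    knowledgeBaseId="KB12345678",   # hypothetical knowledge base ID
    dataSourceId="DS12345678",      # hypothetical data source ID
    description="Sync SharePoint data source"
)

print(response["ingestionJob"]["status"])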

(For OAuth 2.0 authentication) Your secret authentication credentials in AWS Secrets Manager should include these key-value pairs:

  • username: SharePoint admin username

  • password: SharePoint admin password

  • clientId: app client ID

  • clientSecret: app client secret

Note

Your secret in AWS Secrets Manager must be in the same AWS Region as your knowledge base.
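A minimal sketch of creating such a secret with the AWS SDK for Python (Boto3); the secret name, Region, and credential values are hypothetical placeholders.

import json
import boto3

# Use the same AWS Region as your knowledge base.
secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

# The secret must contain these four key-value pairs for OAuth 2.0 authentication.
response = secretsmanager.create_secret(
    Name="AmazonBedrock-SharePoint",  # hypothetical secret name
    SecretString=json.dumps({
        "username": "admin@yourdomain.onmicrosoft.com",
        "password": "your-admin-password",
        "clientId": "your-app-client-id",
        "clientSecret": "your-app-client-secret"
    })
)

# Use the returned ARN as credentialsSecretArn in your data source configuration.
print(response["ARN"])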

Console
Connect a SharePoint instance to your knowledge base
  1. Follow the steps at Create a knowledge base in Amazon Bedrock Knowledge Bases and choose SharePoint as the data source.

  2. Provide a name and optional description for the data source.

  3. Provide your SharePoint site URL/URLs. For example, for SharePoint Online, https://yourdomain.sharepoint.com/sites/mysite. Your URL must start with https and contain sharepoint.com. Your site URL must be the actual SharePoint site, not sharepoint.com/ or sites/mysite/home.aspx.

  4. Provide the domain name of your SharePoint instance.

  5. In the Advanced settings section, you can optionally configure the following:

    • KMS key for transient data storage – You can encrypt the transient data while converting your data into embeddings with the default AWS managed key or your own KMS key. For more information, see Encryption of transient data storage during data ingestion.

    • Data deletion policy – You can delete the vector embeddings for your data source that are stored in the vector store by default, or choose to retain the vector store data.

  6. Provide the authentication information to connect to your SharePoint instance:

    1. For OAuth 2.0 authentication, provide the tenant ID. You can find your tenant ID in the Properties of your Azure Active Directory portal or in your OAuth application.

    2. For OAuth 2.0 authentication, go to AWS Secrets Manager to add your secret authentication credentials or use an existing Amazon Resource Name (ARN) for the secret you created. Your secret must contain the SharePoint admin username and password, and your registered app client ID and client secret. For an example application, see Register a client application in Microsoft Entra ID (formerly known as Azure Active Directory) on the Microsoft Learn website.

  7. (Optional) In the Content chunking and parsing section, you can customize how to chunk and parse your data:

    1. For more information about parsing options, see Parsing options for your data source.

    2. For more information about chunking strategies, see How content chunking works for knowledge bases.

    3. For more information about how to customize chunking of your data and processing of your metadata with a Lambda function, see Use a custom transformation Lambda function to define how your data is ingested.

  8. Choose whether to use filters/regular expression patterns to include or exclude certain content. Otherwise, all standard content is crawled.

  9. Continue to choose an embeddings model and vector store. To see the remaining steps, return to Create a knowledge base in Amazon Bedrock Knowledge Bases and continue from the step after connecting your data source.

API

The following is an example of a configuration for connecting to SharePoint Online for your Amazon Bedrock knowledge base. You configure your data source using the API with the AWS CLI or supported SDK, such as Python. After you call CreateKnowledgeBase, you call CreateDataSource to create your data source with your connection information in dataSourceConfiguration. Remember to also specify your chunking strategy/approach in vectorIngestionConfiguration and your data deletion policy in dataDeletionPolicy.

AWS Command Line Interface

aws bedrock-agent create-data-source \
    --name "SharePoint Online connector" \
    --description "SharePoint Online data source connector for Amazon Bedrock to use content in SharePoint" \
    --knowledge-base-id "your-knowledge-base-id" \
    --data-source-configuration file://sharepoint-bedrock-connector-configuration.json \
    --data-deletion-policy "DELETE" \
    --vector-ingestion-configuration '{"chunkingConfiguration":{"chunkingStrategy":"FIXED_SIZE","fixedSizeChunkingConfiguration":{"maxTokens":100,"overlapPercentage":10}}}'

sharepoint-bedrock-connector-configuration.json

{
    "sharePointConfiguration": {
        "sourceConfiguration": {
            "tenantId": "888d0b57-69f1-4fb8-957f-e1f0bedf64de",
            "hostType": "ONLINE",
            "domain": "yourdomain",
            "siteUrls": [
                "https://yourdomain.sharepoint.com/sites/mysite"
            ],
            "authType": "OAUTH2_CLIENT_CREDENTIALS",
            "credentialsSecretArn": "arn:aws:secretsmanager:your-region:your-account-id:secret:AmazonBedrock-SharePoint"
        },
        "crawlerConfiguration": {
            "filterConfiguration": {
                "type": "PATTERN",
                "patternObjectFilter": {
                    "filters": [
                        {
                            "objectType": "File",
                            "inclusionFilters": [
                                ".*\\.pdf"
                            ],
                            "exclusionFilters": [
                                ".*private.*\\.pdf"
                            ]
                        }
                    ]
                }
            }
        }
    },
    "type": "SHAREPOINT"
}
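A comparable sketch with the AWS SDK for Python (Boto3), using the same placeholder values as the AWS CLI example above:

import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Create a SharePoint Online data source for an existing knowledge base.
response = bedrock_agent.create_data_source(
    knowledgeBaseId="your-knowledge-base-id",
    name="SharePoint Online connector",
    description="SharePoint Online data source connector for Amazon Bedrock to use content in SharePoint",
    dataDeletionPolicy="DELETE",
    dataSourceConfiguration={
        "type": "SHAREPOINT",
        "sharePointConfiguration": {
            "sourceConfiguration": {
                "tenantId": "888d0b57-69f1-4fb8-957f-e1f0bedf64de",
                "hostType": "ONLINE",
                "domain": "yourdomain",
                "siteUrls": ["https://yourdomain.sharepoint.com/sites/mysite"],
                "authType": "OAUTH2_CLIENT_CREDENTIALS",
                "credentialsSecretArn": "arn:aws:secretsmanager:your-region:your-account-id:secret:AmazonBedrock-SharePoint"
            }
        }
    },
    vectorIngestionConfiguration={
        "chunkingConfiguration": {
            "chunkingStrategy": "FIXED_SIZE",
            "fixedSizeChunkingConfiguration": {"maxTokens": 100, "overlapPercentage": 10}
        }
    }
)

print(response["dataSource"]["dataSourceId"])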