
Create a knowledge base in Amazon Bedrock Knowledge Bases

Amazon Bedrock knowledge bases allow you to integrate proprietary information into your generative-AI applications to create Retrieval Augmented Generation (RAG) solutions. A knowledge base searches your data to find the most useful information and can use it to answer natural language questions.

Note

You can’t create a knowledge base with a root user. Log in with an IAM user before starting these steps.

When you create a knowledge base, you set up its configurations and permissions, choose a data source to connect to, select the embeddings model that converts your data into embeddings, and choose the vector store in which to keep the vector embeddings. Choose the tab for your preferred method, and then follow the steps:

Console
To set up the configurations and permissions for a knowledge base
  1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

  2. In the left navigation pane, choose Knowledge bases.

  3. In the Knowledge bases section, choose Create knowledge base.

  4. (Optional) Change the default name and provide a description for your knowledge base.

  5. Choose an AWS Identity and Access Management (IAM) role that provides Amazon Bedrock permission to access other required AWS services. You can let Amazon Bedrock create the service role or choose a custom role that you have created.

  6. Choose a data source to connect your knowledge base to.

  7. (Optional) Add tags to your knowledge base. For more information, see Tagging Amazon Bedrock resources.

  8. (Optional) Configure the services to which to deliver activity logs for your knowledge base.

  9. Go to the next section and follow the steps at Connect a data source to your knowledge base to configure a data source.

  10. Choose an embeddings model to convert your data into vector embeddings.

  11. (Optional) Expand the Additional configurations section to see the following configuration options (not all models support all configurations):

    • Embeddings type – Whether to convert the data to floating-point (float32) vector embeddings (more precise, but more costly) or binary vector embeddings (less precise, but less costly). To learn about which embeddings models support binary vectors, refer to supported embeddings models.

    • Vector dimensions – Higher values improve accuracy but increase cost and latency.

  12. Choose a vector store in which to store the vector embeddings that are used for queries. You have the following options:

    • Quick create a new vector store – Choose one of the available vector stores for Amazon Bedrock to create.

      • Amazon OpenSearch Serverless – Amazon Bedrock Knowledge Bases creates an Amazon OpenSearch Serverless vector search collection and index and configures it with the required fields for you.

      • Amazon Aurora PostgreSQL Serverless – Amazon Bedrock sets up an Amazon Aurora PostgreSQL Serverless vector store. This process takes unstructured text data from an Amazon S3 bucket, transforms it into text chunks and vectors, and then stores them in a PostgreSQL database. For more information, see Quick create an Aurora PostgreSQL Knowledge Base for Amazon Bedrock.

      • Amazon Neptune Analytics – Amazon Bedrock uses Retrieval Augmented Generation (RAG) techniques combined with graphs to enhance generative AI applications so that end users can get more accurate and comprehensive responses.

    • Choose a vector store you have created – Select a supported vector store and identify the vector field names and metadata field names in the vector index. For more information, see Prerequisites for your own vector store for a knowledge base.

      Note

      If your data source is a Confluence, Microsoft SharePoint, or Salesforce instance, the only supported vector store service is Amazon OpenSearch Serverless.

  13. If your data source contains images, specify an Amazon S3 URI in which to store the images that the parser extracts from the data. The images can be returned at query time.

    Note

    Multimodal data is only supported with Amazon S3 and custom data sources.

  14. Review the details of your knowledge base. You can edit any section before creating the knowledge base.

    Note

    The time it takes to create the knowledge base depends on your specific configurations. When creation is complete, the status of the knowledge base changes to indicate that it is ready or available.

    After your knowledge base is ready and available, sync your data source for the first time, and again whenever you want to keep your content up to date. Select your knowledge base in the console, and then choose Sync in the data source overview section.
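    The console's Sync action corresponds to the StartIngestionJob API operation, so the sync step can also be scripted. The following is a minimal sketch using boto3; the knowledge base and data source IDs are placeholder values, and the actual call is shown commented out because it requires valid AWS credentials and existing resources:

    ```python
    # Parameters for StartIngestionJob. Both IDs are placeholders --
    # substitute the IDs of your own knowledge base and data source.
    sync_params = {
        "knowledgeBaseId": "KB1234567890",
        "dataSourceId": "DS1234567890",
    }

    # With valid credentials, you would run the sync like this:
    #   import boto3
    #   client = boto3.client("bedrock-agent")  # build-time endpoint
    #   response = client.start_ingestion_job(**sync_params)
    #   print(response["ingestionJob"]["status"])
    ```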

API

To create a knowledge base, send a CreateKnowledgeBase request with an Agents for Amazon Bedrock build-time endpoint.

Note

If you're connecting to an unstructured data source and you prefer to let Amazon Bedrock create and manage a vector store for you in Amazon OpenSearch Service, use the console. For more information, see Create a knowledge base in Amazon Bedrock Knowledge Bases.

The following fields are required:

Field – Basic description
name – A name for the knowledge base.
roleArn – The ARN of a knowledge base service role.
knowledgeBaseConfiguration – Contains configurations for the knowledge base. See details below.
storageConfiguration – (Required only if you're connecting to an unstructured data source.) Contains configurations for the vector store service that you choose.

In the knowledgeBaseConfiguration, specify the type of knowledge base in the type field, depending on the data source that you plan to connect it to. You can specify the following types:

  • VECTOR – For unstructured data sources. Specify the ARN of the embedding model to use and configurations for it. For more information, see VectorKnowledgeBaseConfiguration.

  • STRUCTURED – For structured data stores. Specify the type of structured data store to use and the configurations for that data store.

The following fields are optional:

Field – Use case
description – A description for the knowledge base.
clientToken – To ensure the API request completes only once. For more information, see Ensuring idempotency.
tags – To associate tags with the knowledge base. For more information, see Tagging Amazon Bedrock resources.

The knowledgeBaseConfiguration field maps to a KnowledgeBaseConfiguration object. For an unstructured data source, specify VECTOR in the type field, and in the VectorKnowledgeBaseConfiguration, specify the ARN of the embeddings model to use and its configurations.
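As a sketch, a knowledgeBaseConfiguration value for a VECTOR knowledge base might look like the following. The embeddings model ARN and the optional dimension and data-type settings are placeholder values; check which configurations your chosen model supports:

```python
# Sketch of a knowledgeBaseConfiguration value for an unstructured (VECTOR)
# knowledge base. The model ARN is a placeholder -- substitute the
# embeddings model that you chose.
knowledge_base_configuration = {
    "type": "VECTOR",
    "vectorKnowledgeBaseConfiguration": {
        "embeddingModelArn": (
            "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v2:0"
        ),
        # Optional model settings (not all models support all of these):
        "embeddingModelConfiguration": {
            "bedrockEmbeddingModelConfiguration": {
                "dimensions": 1024,              # higher = more accurate, more costly
                "embeddingDataType": "FLOAT32",  # or "BINARY" (cheaper, less precise)
            }
        },
    },
}
```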

The storageConfiguration field maps to a StorageConfiguration object. In it, specify the vector store that you plan to connect to in the type field and include the field that corresponds to that vector store. See each vector store configuration type at StorageConfiguration for details about the information you need to provide.
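Putting the pieces together, a CreateKnowledgeBase request for an unstructured data source backed by an existing Amazon OpenSearch Serverless collection might be sketched as follows with boto3. All ARNs, the index name, and the field names are placeholders for resources that you have already created, and the call itself is shown commented out because it requires valid credentials:

```python
# Sketch of a full CreateKnowledgeBase request. All ARNs and names are
# placeholders. With valid credentials, you would send it like this:
#   import boto3
#   client = boto3.client("bedrock-agent")  # build-time endpoint
#   response = client.create_knowledge_base(**request)
#   print(response["knowledgeBase"]["knowledgeBaseId"])
request = {
    "name": "my-knowledge-base",
    "description": "Example knowledge base",  # optional
    "roleArn": "arn:aws:iam::111122223333:role/service-role/MyKBRole",
    "knowledgeBaseConfiguration": {
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            "embeddingModelArn": (
                "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v2:0"
            ),
        },
    },
    # Required only for unstructured data sources. This variant targets an
    # existing Amazon OpenSearch Serverless vector index.
    "storageConfiguration": {
        "type": "OPENSEARCH_SERVERLESS",
        "opensearchServerlessConfiguration": {
            "collectionArn": "arn:aws:aoss:us-east-1:111122223333:collection/abcdefghij",
            "vectorIndexName": "my-vector-index",
            "fieldMapping": {
                "vectorField": "embedding",
                "textField": "text",
                "metadataField": "metadata",
            },
        },
    },
}
```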