
@opejovic
Last active November 21, 2025 14:54
  1. Create an S3 Data Source
  2. Setup AOSS Vector Index and Configure BKB Access Permissions
  3. Configure Amazon Bedrock Knowledge Base and Synchronize it with Data Source

Amazon Bedrock Knowledge Bases (BKBs) provide a fully managed capability to implement RAG-based solutions. By integrating your own data — such as documents, manuals, and other domain-specific sources of information — into a knowledge base, you can improve the accuracy, relevance, and usefulness of model-generated responses. When a user submits a query, Amazon Bedrock Knowledge Bases search across the available data sources, retrieve the most relevant content, and pass this information to the foundation model to generate a more informed response.

Pre-requisites

Please make sure that you have enabled access to the following model in the Amazon Bedrock console:

  • Amazon Titan Text Embeddings V2.

1. Create an S3 Data Source

Amazon Bedrock Knowledge Bases can connect to a variety of data sources for downstream RAG applications. Supported data sources include Amazon S3, Confluence, Microsoft SharePoint, Salesforce, Web Crawler, and custom data sources.

We will use Amazon S3 to store unstructured data — specifically, PDF files containing medical information. This S3 bucket will serve as the source of documents for our Knowledge Base. During the ingestion process, Bedrock will parse these documents, convert them into vector embeddings using an embedding model, and store them in a vector database for efficient retrieval during queries.

1.1 Create an S3 bucket

1.2 Upload docs to S3
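The two steps above can be sketched with boto3 as follows. The bucket name and local folder are assumptions for illustration; adjust them to your environment, and note that S3 bucket names must be globally unique.

```python
from pathlib import Path

BUCKET_NAME = "bkb-medical-docs-demo"  # assumed name; must be globally unique
DOCS_DIR = Path("data")                # assumed local folder containing the PDFs


def build_upload_plan(docs_dir: Path, prefix: str = "docs"):
    """Map each local PDF to the S3 key it will be uploaded under."""
    return [(p, f"{prefix}/{p.name}") for p in sorted(docs_dir.glob("*.pdf"))]


if __name__ == "__main__":
    import boto3  # imported here so the module stays importable without boto3

    s3 = boto3.client("s3")
    # Outside us-east-1, also pass CreateBucketConfiguration with your region.
    s3.create_bucket(Bucket=BUCKET_NAME)
    for local_path, key in build_upload_plan(DOCS_DIR):
        s3.upload_file(str(local_path), BUCKET_NAME, key)
```

Keeping the documents under a common prefix (here `docs/`) makes it easy to point the Knowledge Base data source at just that portion of the bucket later.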

2. Setup AOSS Vector Index and Configure BKB Access Permissions

Next we will create a vector index using Amazon OpenSearch Serverless (AOSS) and configure the necessary access permissions for the Bedrock Knowledge Base (BKB) that we’ll set up later. AOSS provides a fully managed, serverless solution for running vector search workloads at billion-vector scale. It automatically handles resource scaling and eliminates cluster management, while delivering millisecond query latency with pay-per-use pricing.

It’s worth noting that Bedrock Knowledge Bases also supports other popular vector stores, including Amazon Aurora PostgreSQL with pgvector, Pinecone, Redis Enterprise Cloud, and MongoDB, among others.

2.1 Create IAM Role with Necessary Permissions for Bedrock Knowledge Base

We will create an IAM role with all the necessary policies and permissions to allow BKB to execute operations, such as invoking Bedrock FMs and reading data from an S3 bucket.
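A minimal sketch of such a role, assuming the bucket from step 1 and placeholder role/account names: the trust policy lets the Bedrock service assume the role, and the inline permissions policy covers invoking the Titan embedding model and reading from the S3 bucket.

```python
import json

REGION = "us-east-1"                    # assumed region
BUCKET_NAME = "bkb-medical-docs-demo"   # assumed bucket from step 1

# Trust policy: lets the Bedrock service assume this role on our behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "bedrock.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions: invoke the embedding model and read the source documents.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            "Resource": f"arn:aws:bedrock:{REGION}::foundation-model/amazon.titan-embed-text-v2:0",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET_NAME}",
                         f"arn:aws:s3:::{BUCKET_NAME}/*"],
        },
    ],
}

if __name__ == "__main__":
    import boto3

    iam = boto3.client("iam")
    iam.create_role(RoleName="bkb-execution-role",  # assumed role name
                    AssumeRolePolicyDocument=json.dumps(trust_policy))
    iam.put_role_policy(RoleName="bkb-execution-role",
                        PolicyName="bkb-permissions",
                        PolicyDocument=json.dumps(permissions_policy))
```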

2.2 Create AOSS Policies and Vector Collection

Next we need to create and attach three key policies for securing and managing access to the AOSS collection: an encryption policy, a network access policy, and a data access policy. These policies ensure proper encryption, network security, and the necessary permissions for creating, reading, updating, and deleting collection items and indexes. This step is essential for configuring the OpenSearch collection to interact with BKB securely and efficiently (you can read more about AOSS collections here). We will use another helper function for this.

⚠️ Note: to keep setup overhead to a minimum, in this example we allow public internet access to the OpenSearch Serverless collection resource. For production environments, however, we strongly recommend using a private connection between your VPC and Amazon OpenSearch Serverless resources via a VPC endpoint, as described here.

With all the necessary policies in place, we can proceed to create a new AOSS collection. Please note that this can take a few minutes to complete.
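The encryption and network policies, and the collection itself, might look like the following sketch (collection name is an assumption; `AllowFromPublic` reflects the demo-only public access noted above):

```python
import json

COLLECTION = "bkb-medical-collection"  # assumed collection name

# Encryption policy: encrypt the collection at rest with an AWS-owned KMS key.
encryption_policy = {
    "Rules": [{"ResourceType": "collection",
               "Resource": [f"collection/{COLLECTION}"]}],
    "AWSOwnedKey": True,
}

# Network policy: public access for this demo; use a VPC endpoint in production.
network_policy = [{
    "Rules": [
        {"ResourceType": "collection", "Resource": [f"collection/{COLLECTION}"]},
        {"ResourceType": "dashboard", "Resource": [f"collection/{COLLECTION}"]},
    ],
    "AllowFromPublic": True,
}]

if __name__ == "__main__":
    import boto3

    aoss = boto3.client("opensearchserverless")
    aoss.create_security_policy(name=f"{COLLECTION}-enc", type="encryption",
                                policy=json.dumps(encryption_policy))
    aoss.create_security_policy(name=f"{COLLECTION}-net", type="network",
                                policy=json.dumps(network_policy))
    # Vector search collection; it can take a few minutes to become ACTIVE.
    aoss.create_collection(name=COLLECTION, type="VECTORSEARCH")
```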

2.3 Grant BKB Access to AOSS Data

Next we need to create a data access policy that grants BKB the necessary permissions to read from our AOSS collections. We then attach this policy to the Bedrock execution role we created earlier, allowing BKB to securely access AOSS data when generating responses.
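A sketch of this data access policy, assuming the collection and role names used earlier (all ARNs are placeholders): the policy grants the execution role item- and index-level permissions on the collection, and the role additionally needs `aoss:APIAccessAll` to call the collection's API.

```python
import json

COLLECTION = "bkb-medical-collection"  # assumed collection name
BEDROCK_ROLE_ARN = "arn:aws:iam::123456789012:role/bkb-execution-role"  # placeholder

# Grants the Bedrock execution role access to collection items and indexes.
data_access_policy = [{
    "Rules": [
        {
            "ResourceType": "collection",
            "Resource": [f"collection/{COLLECTION}"],
            "Permission": ["aoss:CreateCollectionItems",
                           "aoss:DescribeCollectionItems",
                           "aoss:UpdateCollectionItems"],
        },
        {
            "ResourceType": "index",
            "Resource": [f"index/{COLLECTION}/*"],
            "Permission": ["aoss:CreateIndex", "aoss:DescribeIndex",
                           "aoss:ReadDocument", "aoss:WriteDocument",
                           "aoss:UpdateIndex", "aoss:DeleteIndex"],
        },
    ],
    "Principal": [BEDROCK_ROLE_ARN],
}]

# The execution role also needs aoss:APIAccessAll on the collection.
aoss_api_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "aoss:APIAccessAll",
                   "Resource": "arn:aws:aoss:us-east-1:123456789012:collection/*"}],
}

if __name__ == "__main__":
    import boto3

    boto3.client("opensearchserverless").create_access_policy(
        name=f"{COLLECTION}-data", type="data",
        policy=json.dumps(data_access_policy))
    boto3.client("iam").put_role_policy(
        RoleName="bkb-execution-role", PolicyName="bkb-aoss-access",
        PolicyDocument=json.dumps(aoss_api_policy))
```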

2.4 Create an AOSS Vector Index

Now that we have all necessary access permissions in place, we can create a vector index in the AOSS collection we created previously.
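One way to sketch this index, using the `opensearch-py` client (index and field names are assumptions; the collection endpoint is a placeholder): the vector field is 1024-dimensional, matching the default output size of Titan Text Embeddings V2, and uses HNSW with the FAISS engine.

```python
VECTOR_INDEX = "bkb-medical-index"  # assumed index name

index_body = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            # 1024 dims: default output size of Titan Text Embeddings V2
            "vector": {
                "type": "knn_vector",
                "dimension": 1024,
                "method": {"name": "hnsw", "engine": "faiss",
                           "space_type": "l2"},
            },
            "text": {"type": "text"},      # raw chunk text
            "metadata": {"type": "text"},  # Bedrock-managed chunk metadata
        }
    },
}

if __name__ == "__main__":
    import boto3
    from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

    host = "abc123.us-east-1.aoss.amazonaws.com"  # placeholder collection endpoint
    auth = AWSV4SignerAuth(boto3.Session().get_credentials(), "us-east-1", "aoss")
    client = OpenSearch(hosts=[{"host": host, "port": 443}], http_auth=auth,
                        use_ssl=True, connection_class=RequestsHttpConnection)
    client.indices.create(index=VECTOR_INDEX, body=index_body)
```

The `vector`, `text`, and `metadata` field names must match the field mapping we pass to Bedrock when creating the Knowledge Base in the next section.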

3. Configure Amazon Bedrock Knowledge Base and Synchronize it with Data Source

In this section, we’ll create an Amazon Bedrock Knowledge Base (BKB) and connect it to the data that will be stored in our newly created AOSS vector index.

3.1 Create a Bedrock Knowledge Base

Setting up a Knowledge Base involves providing two key configurations:

  • Storage Configuration tells Bedrock where to store the generated vector embeddings by specifying the target vector store and providing the necessary connection details (here, we use the AOSS vector index we created earlier),
  • Knowledge Base Configuration defines how Bedrock should generate vector embeddings from your data by specifying the embedding model to use (Titan Text Embeddings V2 in this sample), along with any additional settings required for handling multimodal content.
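The two configurations above might look like this with the `bedrock-agent` client (the collection and role ARNs, index name, and field mapping are placeholders or assumptions carried over from the earlier steps):

```python
REGION = "us-east-1"  # assumed region
EMBEDDING_MODEL_ARN = (  # Titan Text Embeddings V2
    f"arn:aws:bedrock:{REGION}::foundation-model/amazon.titan-embed-text-v2:0"
)
COLLECTION_ARN = "arn:aws:aoss:us-east-1:123456789012:collection/abc123"  # placeholder
ROLE_ARN = "arn:aws:iam::123456789012:role/bkb-execution-role"            # placeholder

# Knowledge Base Configuration: how to generate the embeddings.
knowledge_base_configuration = {
    "type": "VECTOR",
    "vectorKnowledgeBaseConfiguration": {"embeddingModelArn": EMBEDDING_MODEL_ARN},
}

# Storage Configuration: where to store them -- the AOSS index created earlier,
# with the same field names used in the index mapping.
storage_configuration = {
    "type": "OPENSEARCH_SERVERLESS",
    "opensearchServerlessConfiguration": {
        "collectionArn": COLLECTION_ARN,
        "vectorIndexName": "bkb-medical-index",
        "fieldMapping": {"vectorField": "vector", "textField": "text",
                         "metadataField": "metadata"},
    },
}

if __name__ == "__main__":
    import boto3

    kb = boto3.client("bedrock-agent").create_knowledge_base(
        name="medical-kb",  # assumed name
        roleArn=ROLE_ARN,
        knowledgeBaseConfiguration=knowledge_base_configuration,
        storageConfiguration=storage_configuration,
    )["knowledgeBase"]
    print("Knowledge Base id:", kb["knowledgeBaseId"])
```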

3.2 Connect BKB to a Data Source

With our Knowledge Base in place, the next step is to connect it to a data source. This involves two key actions:

  • Create a data source for the Knowledge Base that will point to the location of our raw data (in this case, S3),
  • Define how that data should be processed and ingested into the vector store — for example, by specifying a chunking configuration that controls how large each text fragment should be when generating vector embeddings for retrieval.
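Both actions can be sketched in a single `create_data_source` call (the bucket ARN, data source name, and Knowledge Base id are placeholders; the fixed-size chunking values of 512 tokens with 20% overlap are example settings, not recommendations):

```python
BUCKET_ARN = "arn:aws:s3:::bkb-medical-docs-demo"  # assumed bucket from step 1

# Where the raw data lives.
data_source_configuration = {
    "type": "S3",
    "s3Configuration": {"bucketArn": BUCKET_ARN},
}

# How to process it: fixed-size chunks of ~512 tokens with 20% overlap.
vector_ingestion_configuration = {
    "chunkingConfiguration": {
        "chunkingStrategy": "FIXED_SIZE",
        "fixedSizeChunkingConfiguration": {"maxTokens": 512,
                                           "overlapPercentage": 20},
    }
}

if __name__ == "__main__":
    import boto3

    ds = boto3.client("bedrock-agent").create_data_source(
        knowledgeBaseId="KBID123",  # placeholder id from the previous step
        name="medical-docs-s3",     # assumed data source name
        dataSourceConfiguration=data_source_configuration,
        vectorIngestionConfiguration=vector_ingestion_configuration,
    )["dataSource"]
    print("Data source id:", ds["dataSourceId"])
```

Overlapping chunks help preserve context that would otherwise be split across chunk boundaries, at the cost of some extra storage and embedding work.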

3.3 Synchronize BKB with Data Source

Once the Knowledge Base and its data source are configured, we can start a fully managed data ingestion job. During this process, BKB retrieves the documents from the connected data source (S3, in this case), extracts and preprocesses the content, splits it into smaller chunks based on the configured chunking strategy, generates vector embeddings for each chunk, and stores those embeddings in the vector store (AOSS, in this case).
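Starting the job and polling it until completion might look like this (the Knowledge Base and data source ids are placeholders from the previous steps):

```python
import time

# Statuses after which an ingestion job will not change again.
TERMINAL_STATES = {"COMPLETE", "FAILED", "STOPPED"}


def is_terminal(status: str) -> bool:
    """True once an ingestion job has finished, successfully or not."""
    return status in TERMINAL_STATES


if __name__ == "__main__":
    import boto3

    bedrock_agent = boto3.client("bedrock-agent")
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId="KBID123",  # placeholder ids
        dataSourceId="DSID123",
    )["ingestionJob"]

    # Poll until the job reaches a terminal state.
    while not is_terminal(job["status"]):
        time.sleep(15)
        job = bedrock_agent.get_ingestion_job(
            knowledgeBaseId="KBID123", dataSourceId="DSID123",
            ingestionJobId=job["ingestionJobId"],
        )["ingestionJob"]
    print("Ingestion finished with status:", job["status"])
```

Once the job reports `COMPLETE`, the Knowledge Base is ready to serve retrieval queries against the ingested documents.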
