

# Amazon OpenSearch Serverless
<a name="serverless"></a>

Amazon OpenSearch Serverless is an on-demand, auto-scaling configuration for Amazon OpenSearch Service. Unlike provisioned OpenSearch domains, which require manual capacity management, an OpenSearch Serverless collection automatically scales compute resources based on your application's needs.

OpenSearch Serverless offers a cost-effective solution for workloads that are infrequent, intermittent, or unpredictable. It optimizes costs by automatically scaling compute capacity based on your application's usage. Serverless collections use the same high-capacity, distributed, and highly available storage volume as provisioned OpenSearch Service domains.

OpenSearch Serverless collections are always encrypted. You can choose the encryption key, but you can't disable encryption. For more information, see [Encryption in Amazon OpenSearch Serverless](serverless-encryption.md).

## Benefits
<a name="serverless-benefits"></a>

OpenSearch Serverless has the following benefits:
+ **Simpler than provisioned** – OpenSearch Serverless removes much of the complexity of managing OpenSearch clusters and capacity. It automatically sizes and tunes your clusters, and takes care of shard and index lifecycle management. It also manages service software updates and OpenSearch version upgrades. All updates and upgrades are non-disruptive.
+ **Cost-effective** – When you use OpenSearch Serverless, you only pay for the resources that you consume. This removes the need for upfront provisioning and overprovisioning for peak workloads.
+ **Highly available** – OpenSearch Serverless supports production workloads with redundancy to protect against Availability Zone outages and infrastructure failures.
+ **Scalable** – OpenSearch Serverless automatically scales resources to maintain consistently fast data ingestion rates and query response times.

# What is Amazon OpenSearch Serverless?
<a name="serverless-overview"></a>

Amazon OpenSearch Serverless is an on-demand, serverless option for Amazon OpenSearch Service that eliminates the operational complexity of provisioning, configuring, and tuning OpenSearch clusters. It’s ideal for organizations that prefer not to self-manage their clusters or lack the dedicated resources and expertise to operate large-scale deployments. With OpenSearch Serverless, you can search and analyze large volumes of data without managing the underlying infrastructure.

An OpenSearch Serverless *collection* is a group of OpenSearch indexes that work together to support a specific workload or use case. Collections simplify operations compared to self-managed OpenSearch clusters, which require manual provisioning.

Collections use the same high-capacity, distributed, and highly available storage as provisioned OpenSearch Service domains, but further reduce complexity by eliminating manual configuration and tuning. Data within a collection is encrypted in transit. OpenSearch Serverless also supports OpenSearch Dashboards, providing an interface for data analysis.

Currently, serverless collections run OpenSearch version 2.17.x. As new versions are released, OpenSearch Serverless automatically upgrades collections to incorporate new features, bug fixes, and performance improvements.

OpenSearch Serverless supports the same ingest and query API operations as the OpenSearch open source suite, so you can continue to use your existing clients and applications. Your clients must be compatible with OpenSearch 2.x in order to work with OpenSearch Serverless. For more information, see [Ingesting data into Amazon OpenSearch Serverless collections](serverless-clients.md).

**Topics**
+ [Use cases for OpenSearch Serverless](#serverless-use-cases)
+ [How it works](#serverless-process)
+ [Choosing a collection type](#serverless-usecase)
+ [Pricing](#serverless-pricing)
+ [Supported AWS Regions](#serverless-regions)
+ [Limitations](#serverless-limitations)
+ [Comparing OpenSearch Service and OpenSearch Serverless](serverless-comparison.md)

## Use cases for OpenSearch Serverless
<a name="serverless-use-cases"></a>

OpenSearch Serverless supports two primary use cases:
+ **Log analytics** - The log analytics segment focuses on analyzing large volumes of semi-structured, machine-generated time series data for operational and user behavior insights.
+ **Full-text search** - The full-text search segment powers applications in your internal networks (content management systems, legal documents) and internet-facing applications, such as ecommerce website content search. 

When you create a collection, you choose one of these use cases. For more information, see [Choosing a collection type](#serverless-usecase).

## How it works
<a name="serverless-process"></a>

Traditional OpenSearch clusters have a single set of instances that perform both indexing and search operations, and index storage is tightly coupled with compute capacity. By contrast, OpenSearch Serverless uses a cloud-native architecture that separates the indexing (ingest) components from the search (query) components, with Amazon S3 as the primary data storage for indexes. 

This decoupled architecture lets you scale search and indexing functions independently of each other, and independently of the indexed data in S3. The architecture also provides isolation for ingest and query operations so that they can run concurrently without resource contention. 

When you write data to a collection, OpenSearch Serverless distributes it to the *indexing* compute units. The indexing compute units ingest the incoming data and move the indexes to S3. When you perform a search on the collection data, OpenSearch Serverless routes requests to the *search* compute units that hold the data being queried. The search compute units download the indexed data directly from S3 (if it's not already cached locally), run search operations, and perform aggregations. 

The following image illustrates this decoupled architecture:

![\[Diagram showing indexing and search processes using compute units and Amazon S3 storage.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/Serverless.png)


OpenSearch Serverless compute capacity for data ingestion, searching, and querying is measured in OpenSearch Compute Units (OCUs). Each OCU is a combination of 6 GiB of memory and corresponding virtual CPU (vCPU), as well as data transfer to Amazon S3. Each OCU includes enough hot ephemeral storage for 120 GiB of index data.
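As a rough illustration of the 120 GiB figure above, the following sketch estimates a lower bound on the OCUs needed to keep a given amount of index data in hot storage. This is back-of-the-envelope arithmetic only; in practice OpenSearch Serverless makes all scaling decisions for you.

```python
import math

OCU_MEMORY_GIB = 6          # each OCU bundles 6 GiB of RAM plus a vCPU
OCU_HOT_STORAGE_GIB = 120   # hot ephemeral storage per OCU for index data

def min_ocus_for_hot_data(index_data_gib: float) -> int:
    """Lower bound on OCUs needed to keep a given amount of index data hot."""
    return max(1, math.ceil(index_data_gib / OCU_HOT_STORAGE_GIB))
```

For example, 300 GiB of index data implies at least three OCUs' worth of hot storage.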

When you create your first collection, OpenSearch Serverless instantiates two OCUs—one for indexing and one for search. To ensure high availability, it also launches a standby set of nodes in another Availability Zone. For development and testing purposes, you can disable the **Enable redundancy** setting for a collection, which eliminates the two standby replicas and only instantiates two OCUs. By default, the redundant active replicas are enabled, which means that a total of four OCUs are instantiated for the first collection in an account.

These OCUs exist even when there's no activity on any collection endpoints. All subsequent collections share these OCUs. When you create additional collections in the same account, OpenSearch Serverless only adds additional OCUs for search and ingest as needed to support the collections, according to the [capacity limits](serverless-scaling.md#serverless-scaling-configure) that you specify. Capacity scales back down as your compute usage decreases.
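The baseline described above can be summarized in a short sketch (illustrative only; the function name and shape are not part of any AWS API):

```python
def baseline_ocus(enable_redundancy: bool = True) -> int:
    """OCUs instantiated for the first collection in an account."""
    active = 2  # one indexing OCU + one search OCU
    # Redundancy adds a standby set in another Availability Zone.
    return active * 2 if enable_redundancy else active
```

With the default **Enable redundancy** setting, four OCUs are instantiated; disabling it for development and testing leaves two.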

For information about how you're billed for these OCUs, see [Pricing](#serverless-pricing).

## Choosing a collection type
<a name="serverless-usecase"></a>

OpenSearch Serverless supports three primary collection types:

**Time series** – The log analytics segment that analyzes large volumes of semi-structured, machine-generated data in real-time, providing insights into operations, security, user behavior, and business performance.

**Search** – Full-text search that enables applications within internal networks, such as content management systems and legal document repositories, as well as internet-facing applications like e-commerce site search and content discovery.

**Vector search** – Semantic search on vector embeddings simplifies vector data management and enables machine learning (ML)-augmented search experiences. It supports generative AI applications such as chatbots, personal assistants, and fraud detection.

You choose a collection type when you first create a collection:

![\[Three collection type options: Time series, Search, and Vector search for different data use cases.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-collection-type.png)


The collection type that you choose depends on the kind of data that you plan to ingest into the collection, and how you plan to query it. You can't change the collection type after you create it.

The collection types have the following notable **differences**:
+ For *search* and *vector search* collections, all data is stored in hot storage to ensure fast query response times. *Time series* collections use a combination of hot and warm storage, where the most recent data is kept in hot storage to optimize query response times for more frequently accessed data.
+ For *time series* and *vector search* collections, you can't index by custom document ID or update by upsert requests. These operations are reserved for search use cases. You can update by document ID instead. For more information, see [Supported OpenSearch API operations and permissions](serverless-genref.md#serverless-operations).
+ For *search* and *time series* collections, you can't use k-NN type indexes.
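The differences above can be condensed into a small lookup table. This is an illustrative summary of the bullet points, not an official API:

```python
# Feature matrix derived from the differences listed above.
COLLECTION_TYPE_FEATURES = {
    "search":        {"custom_doc_id": True,  "knn_indexes": False, "storage": "hot"},
    "time series":   {"custom_doc_id": False, "knn_indexes": False, "storage": "hot + warm"},
    "vector search": {"custom_doc_id": False, "knn_indexes": True,  "storage": "hot"},
}

def supports_knn(collection_type: str) -> bool:
    """k-NN type indexes are available only in vector search collections."""
    return COLLECTION_TYPE_FEATURES[collection_type]["knn_indexes"]
```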

## Pricing
<a name="serverless-pricing"></a>

AWS charges you for the following OpenSearch Serverless components:
+ Data ingestion compute
+ Search and query compute
+ Storage retained in Amazon S3

One OCU comprises 6 GiB of RAM, corresponding vCPU, GP3 storage, and data transfer to Amazon S3. The smallest unit you can be billed for is 0.5 OCU. AWS bills OCUs on an hourly basis, with second-level granularity. In your account statement, you see an entry for compute in OCU-hours, with one label for data ingestion and one for search. AWS also bills you on a monthly basis for data stored in Amazon S3. It doesn't charge you for using OpenSearch Dashboards.

When you create a collection with redundant active replicas, you're billed for a minimum of:
+ 1 OCU (0.5 OCU × 2) for ingestion, including both primary and standby
+ 1 OCU (0.5 OCU × 2) for search

If you disable redundant active replicas, you're billed for a minimum of 1 OCU (0.5 OCU × 2) for the first collection in your account. All subsequent collections can share those OCUs.

OpenSearch Serverless adds additional OCUs in increments of 1 OCU based on the compute power and storage needed to support your collections. You can configure a maximum number of OCUs for your account in order to control costs.
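The billing minimums and hourly granularity described above can be sketched as simple arithmetic (function names are illustrative; actual billed capacity varies with your workload):

```python
def min_billed_ocus(redundancy: bool = True) -> float:
    """Minimum billable OCUs for the first collection in an account."""
    if redundancy:
        # 0.5 OCU x 2 for ingestion plus 0.5 OCU x 2 for search
        return (0.5 * 2) + (0.5 * 2)
    return 0.5 * 2  # a single 1-OCU baseline, shared by later collections

def ocu_hours(ocus: float, seconds: float) -> float:
    """OCU-hours billed hourly with second-level granularity."""
    return ocus * seconds / 3600
```

For example, running the 2-OCU redundant minimum for 30 minutes accrues 1 OCU-hour.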

**Note**  
Collections with unique AWS KMS keys can't share OCUs with other collections.

OpenSearch Serverless attempts to use the minimum resources required for your changing workloads. The number of OCUs provisioned at any given time can vary and isn't exact. Over time, the algorithm that OpenSearch Serverless uses to minimize resource usage will continue to improve.

For full pricing details, see [Amazon OpenSearch Service pricing](https://aws.amazon.com/opensearch-service/pricing/).

## Supported AWS Regions
<a name="serverless-regions"></a>

OpenSearch Serverless is available in a subset of AWS Regions that OpenSearch Service is available in. For a list of supported Regions, see [Amazon OpenSearch Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html) in the *AWS General Reference*.

## Limitations
<a name="serverless-limitations"></a>

OpenSearch Serverless has the following limitations:
+ Some OpenSearch API operations aren't supported. See [Supported OpenSearch API operations and permissions](serverless-genref.md#serverless-operations).
+ Some OpenSearch plugins aren't supported. See [Supported OpenSearch plugins](serverless-genref.md#serverless-plugins).
+ There's currently no way to automatically migrate your data from a managed OpenSearch Service domain to a serverless collection. You must reindex your data from a domain to a collection.
+ Cross-account access to collections isn't supported. You can't include collections from other accounts in your encryption or data access policies.
+ Custom OpenSearch plugins aren't supported.
+ Manual snapshots aren't supported. OpenSearch Serverless takes automated snapshots of collections for you. For more information, see [Backing up collections using snapshots](serverless-snapshots.md).
+ Cross-Region search and replication aren't supported.
+ There are limits on the number of serverless resources that you can have in a single account and Region. See [OpenSearch Serverless quotas](https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html#opensearch-limits-serverless).
+ The refresh interval for indexes in vector search collections is approximately 60 seconds. The refresh interval for indexes in search and time series collections is approximately 10 seconds.
+ The number of shards and the refresh interval aren't configurable; OpenSearch Serverless manages them for you. The sharding strategy is based on the collection type and traffic. For example, a time series collection scales primary shards based on write traffic bottlenecks.
+ Geospatial features available on OpenSearch versions up to 2.1 are supported.

# Comparing OpenSearch Service and OpenSearch Serverless
<a name="serverless-comparison"></a>

In OpenSearch Serverless, some concepts and features differ from their counterparts in a provisioned OpenSearch Service domain. For example, one important difference is that OpenSearch Serverless doesn't have the concept of a cluster or node.

The following table describes how important features and concepts in OpenSearch Serverless differ from the equivalent feature in a provisioned OpenSearch Service domain.


| Feature | OpenSearch Service | OpenSearch Serverless | 
| --- | --- | --- | 
|  **Domains versus collections**  |  Indexes are held in *domains*, which are pre-provisioned OpenSearch clusters. For more information, see [Creating and managing Amazon OpenSearch Service domains](createupdatedomains.md).  |  Indexes are held in *collections*, which are logical groupings of indexes that represent a specific workload or use case. For more information, see [Managing Amazon OpenSearch Serverless collections](serverless-manage.md).  | 
|  **Node types and capacity management**  |  You build a cluster with node types that meet your cost and performance specifications. You must calculate your own storage requirements and choose an instance type for your domain. For more information, see [Sizing Amazon OpenSearch Service domains](sizing-domains.md).  |  OpenSearch Serverless automatically scales and provisions additional compute units for your account based on your capacity usage. For more information, see [Managing capacity limits for Amazon OpenSearch Serverless](serverless-scaling.md).  | 
|  **Billing**  |  You pay for each hour of use of an EC2 instance and for the cumulative size of any EBS storage volumes attached to your instances. For more information, see [Pricing for Amazon OpenSearch Service](what-is.md#pricing).  |  You're charged in OCU-hours for compute for data ingestion, compute for search and query, and storage retained in S3. For more information, see [Pricing](serverless-overview.md#serverless-pricing).  | 
|  **Encryption**  |  Encryption at rest is *optional* for domains. For more information, see [Encryption of data at rest for Amazon OpenSearch Service](encryption-at-rest.md).  |  Encryption at rest is *required* for collections. For more information, see [Encryption in Amazon OpenSearch Serverless](serverless-encryption.md).  | 
|  **Data access control**  |  Access to the data within domains is determined by IAM policies and [fine-grained access control](fgac.md).  |  Access to data within collections is determined by [data access policies](serverless-data-access.md).  | 
| Supported OpenSearch operations |  OpenSearch Service supports a subset of all of the OpenSearch API operations. For more information, see [Supported operations in Amazon OpenSearch Service](supported-operations.md).  |  OpenSearch Serverless supports a different subset of OpenSearch API operations. For more information, see [Supported operations and plugins in Amazon OpenSearch Serverless](serverless-genref.md).  | 
| Dashboards sign-in |  Sign in with a username and password. For more information, see [Accessing OpenSearch Dashboards as the master user](fgac.md#fgac-dashboards).  |  If you're signed in to the AWS Management Console and navigate to your Dashboards URL, you're automatically signed in. For more information, see [Accessing OpenSearch Dashboards](serverless-dashboards.md).  | 
| APIs |  Interact programmatically with OpenSearch Service using the [OpenSearch Service API operations](https://docs.aws.amazon.com/opensearch-service/latest/APIReference/Welcome.html).  |  Interact programmatically with OpenSearch Serverless using the [OpenSearch Serverless API operations](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/Welcome.html).  | 
| Network access |  Network settings for a domain apply to the domain endpoint as well as the OpenSearch Dashboards endpoint. Network access for both is tightly coupled.  |  Network settings for the domain endpoint and the OpenSearch Dashboards endpoint are decoupled. You can choose to not configure network access for OpenSearch Dashboards. For more information, see [Network access for Amazon OpenSearch Serverless](serverless-network.md).  | 
| Signing requests |  Use the OpenSearch high and low-level REST clients to sign requests. Specify the service name as `es`.  |  At this time, OpenSearch Serverless supports a subset of clients that OpenSearch Service supports. When you sign requests, specify the service name as `aoss`. The `x-amz-content-sha256` header is required. For more information, see [Signing HTTP requests with other clients](serverless-clients.md#serverless-signing).  | 
| OpenSearch version upgrades |  You manually upgrade your domains as new versions of OpenSearch become available. You're responsible for ensuring that your domain meets the upgrade requirements, and that you've addressed any breaking changes.  |  OpenSearch Serverless automatically upgrades your collections to new OpenSearch versions. Upgrades don't necessarily happen as soon as a new version is available.  | 
| Service software updates |  You manually apply service software updates to your domain as they become available.  |  OpenSearch Serverless automatically updates your collections to consume the latest bug fixes, features, and performance improvements.  | 
| VPC access |  You can [provision your domain within a VPC](vpc.md). You can also create additional [OpenSearch Service-managed VPC endpoints](vpc-interface-endpoints.md) to access the domain.  |  You create one or more [OpenSearch Serverless-managed VPC endpoints](serverless-vpc.md) for your account. Then, you include these endpoints within [network policies](serverless-network.md).  | 
| SAML authentication |  You enable SAML authentication on a per-domain basis. For more information, see [SAML authentication for OpenSearch Dashboards](saml.md).  |  You configure one or more SAML providers at the account level, then you include the associated user and group IDs within data access policies. For more information, see [SAML authentication for Amazon OpenSearch Serverless](serverless-saml.md).  | 
| Transport Layer Security (TLS) | OpenSearch Service supports TLS 1.2, but we recommend that you use TLS 1.3. | OpenSearch Serverless supports TLS 1.2, but we recommend that you use TLS 1.3. | 

# Tutorial: Getting started with Amazon OpenSearch Serverless
<a name="serverless-getting-started"></a>

This tutorial walks you through the basic steps to get an Amazon OpenSearch Serverless *search* collection up and running quickly. A search collection allows you to power applications in your internal networks and internet-facing applications, such as ecommerce website search and content search. 

To learn how to use a *vector search* collection, see [Working with vector search collections](serverless-vector-search.md). For more detailed information about using collections, see [Managing Amazon OpenSearch Serverless collections](serverless-manage.md) and the other topics within this guide.

You'll complete the following steps in this tutorial:

1. [Configure permissions](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-getting-started.html#serverless-gsg-permissions)

1. [Create a collection](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-getting-started.html#serverless-gsg-create)

1. [Upload and search data](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-getting-started.html#serverless-gsg-index)

1. [Delete the collection](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-getting-started.html#serverless-gsg-delete)
**Note**  
We recommend that you use only ASCII characters in your `IndexName`. If the `IndexName` contains non-ASCII characters, those characters appear in a URL-encoded format in CloudWatch metrics.
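Percent-encoding of non-ASCII characters can be sketched with the standard library. The exact encoding CloudWatch applies is an assumption here; this only illustrates how a non-ASCII name becomes harder to read in metrics:

```python
from urllib.parse import quote

def metric_index_name(index_name: str) -> str:
    """Sketch of URL-encoding a non-ASCII index name.

    ASCII letters, digits, '_', '.', '-', and '~' pass through unchanged;
    everything else is percent-encoded as UTF-8 bytes.
    """
    return quote(index_name, safe="")
```

An ASCII name such as `movies` is unchanged, while a name containing `é` becomes `%C3%A9`.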

## Step 1: Configure permissions
<a name="serverless-gsg-permissions"></a>

To complete this tutorial, and to use OpenSearch Serverless in general, you must have the correct IAM permissions. In this tutorial, you will create a collection, upload and search data, and then delete the collection.

Your user or role must have an attached [identity-based policy](security-iam-serverless.md#security-iam-serverless-id-based-policies) with the following minimum permissions:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "aoss:CreateCollection",
        "aoss:ListCollections",
        "aoss:BatchGetCollection",
        "aoss:DeleteCollection",
        "aoss:CreateAccessPolicy",
        "aoss:ListAccessPolicies",
        "aoss:UpdateAccessPolicy",
        "aoss:CreateSecurityPolicy",
        "aoss:GetSecurityPolicy",
        "aoss:UpdateSecurityPolicy",
        "iam:ListUsers",
        "iam:ListRoles"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```


For more information about OpenSearch Serverless IAM permissions, see [Identity and Access Management for Amazon OpenSearch Serverless](security-iam-serverless.md).

## Step 2: Create a collection
<a name="serverless-gsg-create"></a>

A collection is a group of OpenSearch indexes that work together to support a specific workload or use case.

**To create an OpenSearch Serverless collection**

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. Choose **Collections** in the left navigation pane and choose **Create collection**.

1. Name the collection **movies**.

1. For collection type, choose **Search**. For more information, see [Choosing a collection type](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-overview.html#serverless-usecase).

1. For **Security**, choose **Standard create**.

1. Under **Encryption**, select **Use AWS owned key**. This is the AWS KMS key that OpenSearch Serverless will use to encrypt your data.

1. Under **Network**, configure network settings for the collection.
   + For the access type, select **Public**.
   + For the resource type, choose both **Enable access to OpenSearch endpoints** and **Enable access to OpenSearch Dashboards**. Since you'll upload and search data using OpenSearch Dashboards, you need to enable both.

1. Choose **Next**.

1. For **Configure data access**, set up access settings for the collection. [Data access policies](serverless-data-access.md) allow users and roles to access the data within a collection. In this tutorial, we'll provide a single user the permissions required to index and search data in the *movies* collection.

   Create a single rule that provides access to the *movies* collection. Name the rule **Movies collection access**.

1. Choose **Add principals**, **IAM users and roles** and select the user or role that you'll use to sign in to OpenSearch Dashboards and index data. Choose **Save**.

1. Under **Index permissions**, select all of the permissions.

1. Choose **Next**.

1. For the access policy settings, choose **Create a new data access policy** and name the policy **movies**.

1. Choose **Next**.

1. Review your collection settings and choose **Submit**. Wait several minutes for the collection status to become `Active`.

## Step 3: Upload and search data
<a name="serverless-gsg-index"></a>

You can upload data to an OpenSearch Serverless collection using [Postman](https://www.postman.com/downloads/) or cURL. For brevity, these examples use **Dev Tools** within the OpenSearch Dashboards console.

**To index and search data in the movies collection**

1. Choose **Collections** in the left navigation pane and choose the **movies** collection to open its details page.

1. Choose the OpenSearch Dashboards URL for the collection. The URL takes the format `https://dashboards.{region}.aoss.amazonaws.com/_login/?collectionId={collection-id}`.

1. Within OpenSearch Dashboards, open the left navigation pane and choose **Dev Tools**.

1. To create a single index called *movies-index*, send the following request:

   ```
   PUT movies-index 
   ```  
![\[OpenSearch Dashboards console showing PUT request for movies-index with JSON response.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-gsg-create.png)

1. To index a single document into *movies-index*, send the following request:

   ```
   PUT movies-index/_doc/1
   { 
     "title": "Shawshank Redemption",
     "genre": "Drama",
     "year": 1994
   }
   ```

1. To search data in OpenSearch Dashboards, you need to configure at least one index pattern. OpenSearch uses these patterns to identify which indexes you want to analyze. Open the left navigation pane, choose **Stack Management**, choose **Index Patterns**, and then choose **Create index pattern**. For this tutorial, enter *movies*.

1. Choose **Next step** and then choose **Create index pattern**. After the pattern is created, you can view the various document fields such as `title` and `genre`.

1. To begin searching your data, open the left navigation pane again and choose **Discover**, or use the [search API](https://opensearch.org/docs/latest/api-reference/search/) within Dev Tools.
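Index patterns use `*` wildcards to select which indexes to analyze. A minimal sketch of that matching behavior, using shell-style wildcards from Python's `fnmatch` (the `movies*` pattern is a hypothetical example):

```python
from fnmatch import fnmatch

def pattern_matches(pattern: str, index_name: str) -> bool:
    """Shell-style wildcard match, as index patterns use '*' wildcards."""
    return fnmatch(index_name, pattern)
```

A pattern such as `movies*` would cover the `movies-index` index created earlier in this tutorial.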

## Handling errors
<a name="serverless-gsg-data-plane-errors"></a>

When running index and search operations, you may get the following error responses:
+ `HTTP 507` – Indicates that an internal server error occurred. This error generally indicates that your OpenSearch compute units (OCUs) are overloaded by the volume or complexity of your requests. Although OpenSearch Serverless scales automatically to manage the load, there can be a delay in deploying additional resources. 

  To mitigate this error, implement an exponential backoff retry policy. This approach temporarily reduces the request rate to effectively manage the load. For more details, refer to [Retry behavior](https://docs.aws.amazon.com/sdkref/latest/guide/feature-retry-behavior.html) in the *AWS SDKs and Tools Reference Guide*.
+ `HTTP 402` – Indicates that you reached the maximum OpenSearch compute unit (OCU) capacity limit. Optimize your workload to reduce the OCU usage or request a quota increase.
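An exponential backoff retry policy for `HTTP 507` responses can be sketched as follows. The `send_request` callable and its `status_code` attribute are a hypothetical client shim; adapt the wrapper to whatever HTTP client you use:

```python
import random
import time

def with_backoff(send_request, max_retries: int = 5):
    """Retry `send_request` with exponential backoff and jitter on HTTP 507.

    `send_request` is any zero-argument callable returning an object with a
    `status_code` attribute.
    """
    for attempt in range(max_retries + 1):
        response = send_request()
        if response.status_code != 507 or attempt == max_retries:
            return response
        # Back off: 0.1s, 0.2s, 0.4s, ... capped at 30s, with jitter.
        delay = min(30.0, 0.1 * (2 ** attempt)) * (0.5 + random.random() / 2)
        time.sleep(delay)
```

The temporary reduction in request rate gives OpenSearch Serverless time to deploy additional OCUs.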

## Step 4: Delete the collection
<a name="serverless-gsg-delete"></a>

Because the *movies* collection is for test purposes, make sure to delete it when you're done experimenting.

**To delete an OpenSearch Serverless collection**

1. Go back to the **Amazon OpenSearch Service** console.

1. Choose **Collections** in the left navigation pane and select the **movies** collection.

1. Choose **Delete** and confirm deletion.

## Next steps
<a name="serverless-gsg-next"></a>

Now that you know how to create a collection and index data, you might want to try some of the following exercises:
+ See more advanced options for creating a collection. For more information, see [Managing Amazon OpenSearch Serverless collections](serverless-manage.md).
+ Learn how to configure security policies to manage collection security at scale. For more information, see [Overview of security in Amazon OpenSearch Serverless](serverless-security.md).
+ Discover other ways to index data into collections. For more information, see [Ingesting data into Amazon OpenSearch Serverless collections](serverless-clients.md).

# Amazon OpenSearch Serverless collections
<a name="serverless-collections"></a>

A *collection* in Amazon OpenSearch Serverless is a logical grouping of one or more indexes that represent an analytics workload. OpenSearch Serverless automatically manages and tunes the collection, requiring minimal manual input.

**Topics**
+ [Managing Amazon OpenSearch Serverless collections](serverless-manage.md)
+ [Working with vector search collections](serverless-vector-search.md)
+ [Using data lifecycle policies with Amazon OpenSearch Serverless](serverless-lifecycle.md)
+ [Using the AWS SDKs to interact with Amazon OpenSearch Serverless](serverless-sdk.md)
+ [Using CloudFormation to create Amazon OpenSearch Serverless collections](serverless-cfn.md)
+ [Backing up collections using snapshots](serverless-snapshots.md)
+ [Zstandard Codec Support in Amazon OpenSearch Serverless](serverless-zstd-compression.md)
+ [Save Storage by Using Derived Source](serverless-derived-source.md)

# Managing Amazon OpenSearch Serverless collections
<a name="serverless-manage"></a>

A *collection* in Amazon OpenSearch Serverless is a logical grouping of one or more indexes that represent an analytics workload. OpenSearch Serverless automatically manages and tunes the collection, requiring minimal manual input.

**Topics**
+ [Configuring permissions for collections](serverless-collection-permissions.md)
+ [Automatic semantic enrichment for Serverless](serverless-semantic-enrichment.md)
+ [Creating collections](serverless-create.md)
+ [Accessing OpenSearch Dashboards](serverless-dashboards.md)
+ [Viewing collections](serverless-list.md)
+ [Deleting collections](serverless-delete.md)

# Configuring permissions for collections
<a name="serverless-collection-permissions"></a>

OpenSearch Serverless uses the following AWS Identity and Access Management (IAM) permissions for creating and managing collections. You can specify IAM conditions to restrict users to specific collections.
+ `aoss:CreateCollection` – Create a collection.
+ `aoss:ListCollections` – List collections in the current account.
+ `aoss:BatchGetCollection` – Get details about one or more collections.
+ `aoss:UpdateCollection` – Modify a collection.
+ `aoss:DeleteCollection` – Delete a collection.

The following sample identity-based access policy provides the minimum permissions necessary for a user to manage a single collection named `Logs`:

```
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Sid": "AllowsManagingLogsCollections",
         "Effect": "Allow",
         "Action": [
            "aoss:CreateCollection",
            "aoss:ListCollections",
            "aoss:BatchGetCollection",
            "aoss:UpdateCollection",
            "aoss:DeleteCollection",
            "aoss:CreateAccessPolicy",
            "aoss:CreateSecurityPolicy"
         ],
         "Resource": "*",
         "Condition": {
            "StringEquals": {
               "aoss:collection": "Logs"
            }
         }
      }
   ]
}
```

`aoss:CreateAccessPolicy` and `aoss:CreateSecurityPolicy` are included because encryption, network, and data access policies are required in order for a collection to function properly. For more information, see [Identity and Access Management for Amazon OpenSearch Serverless](security-iam-serverless.md).
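
Rather than hand-editing the JSON for each collection, you can generate the statement programmatically. The following Python sketch (a hypothetical helper, not part of any AWS SDK) builds the same statement for an arbitrary collection name:

```python
import json

def build_collection_policy(collection_name: str) -> str:
    """Build an identity-based policy statement that restricts
    collection management to a single named collection."""
    statement = {
        "Sid": "AllowsManagingLogsCollections",
        "Effect": "Allow",
        "Action": [
            "aoss:CreateCollection",
            "aoss:ListCollections",
            "aoss:BatchGetCollection",
            "aoss:UpdateCollection",
            "aoss:DeleteCollection",
            "aoss:CreateAccessPolicy",
            "aoss:CreateSecurityPolicy",
        ],
        "Resource": "*",
        # The aoss:collection condition key scopes the actions
        # to the named collection.
        "Condition": {"StringEquals": {"aoss:collection": collection_name}},
    }
    return json.dumps([statement], indent=2)

print(build_collection_policy("Logs"))
```

You can attach the resulting document to an IAM identity as usual.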

**Note**  
If you're creating the first collection in your account, you also need the `iam:CreateServiceLinkedRole` permission. For more information, see [Using service-linked roles to create OpenSearch Serverless collections](serverless-service-linked-roles.md).

# Automatic semantic enrichment for Serverless
<a name="serverless-semantic-enrichment"></a>

## Introduction
<a name="serverless-semantic-enrichment-intro"></a>

The automatic semantic enrichment feature can help improve search relevance by up to 20% over lexical search. Automatic semantic enrichment eliminates the undifferentiated heavy lifting of managing your own ML (machine learning) model infrastructure and integration with the search engine. The feature is available for all three serverless collection types: Search, Time Series, and Vector.

## What is semantic search?
<a name="serverless-semantic-enrichment-whats-is"></a>

 Traditional search engines rely on word-to-word matching (referred to as lexical search) to find results for queries. Although this works well for specific queries such as television model numbers, it struggles with more abstract searches. For example, when searching for "shoes for the beach," a lexical search merely matches individual words "shoes," "beach," "for," and "the" in catalog items, potentially missing relevant products like "water-resistant sandals" or "surf footwear" that don't contain the exact search terms.

 Semantic search returns query results that incorporate not just keyword matching, but the intent and contextual meaning of the user's search. For example, if a user searches for "how to treat a headache," a semantic search system might return the following results: 
+ Migraine remedies
+ Pain management techniques
+ Over-the-counter pain relievers 

## Model details and performance benchmark
<a name="serverless-semantic-enrichment-model-detail"></a>

 While this feature handles the technical complexities behind the scenes without exposing the underlying model, we provide transparency through a brief model description and benchmark results to help you make informed decisions about feature adoption in your critical workloads.

Automatic semantic enrichment uses a service-managed, pre-trained sparse model that works effectively without requiring custom fine-tuning. The model analyzes the fields you specify, expanding them into sparse vectors based on learned associations from diverse training data. The expanded terms and their significance weights are stored in native Lucene index format for efficient retrieval. We’ve optimized this process using [document-only mode](https://docs.opensearch.org/docs/latest/vector-search/ai-search/neural-sparse-with-pipelines/#step-1a-choose-the-search-mode), where encoding happens only during data ingestion. Search queries are merely tokenized rather than processed through the sparse model, making the solution both cost-effective and performant.
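
The document-only mode described above can be illustrated with a toy sketch. The expansion table below is invented purely for illustration (the real service-managed model and its weights are not exposed): documents are expanded into weighted sparse term vectors at ingestion time, queries are merely tokenized, and the score is the sum of the weights the document assigns to the query's tokens.

```python
# Toy illustration of document-only sparse retrieval.
# The expansion table is invented; the real service-managed model
# learns these term associations from diverse training data.
DOC_EXPANSION = {
    "water-resistant sandals": {
        "sandals": 1.8, "shoes": 1.2, "beach": 0.9, "water": 0.7,
    },
    "leather office loafers": {
        "loafers": 1.9, "shoes": 1.3, "office": 1.0, "formal": 0.6,
    },
}

def tokenize(query: str) -> list[str]:
    # Queries are only tokenized -- no model inference at search time.
    return query.lower().split()

def score(doc: str, query: str) -> float:
    vector = DOC_EXPANSION[doc]  # precomputed at ingestion time
    return sum(vector.get(tok, 0.0) for tok in tokenize(query))

results = sorted(DOC_EXPANSION,
                 key=lambda d: score(d, "shoes for the beach"),
                 reverse=True)
print(results[0])  # → water-resistant sandals
```

Note that the sandals rank first even though the document never contains the literal phrase "shoes for the beach", which is the relevance gain the lexical-search example above fails to capture.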

Our performance validation during feature development used the [MS MARCO](https://huggingface.co/datasets/BeIR/msmarco) passage retrieval dataset, featuring passages averaging 334 characters. For relevance scoring, we measured average Normalized Discounted Cumulative Gain (NDCG) for the first 10 search results (ndcg@10) on the [BEIR](https://github.com/beir-cellar/beir) benchmark for English content and average ndcg@10 on MIRACL for multilingual content. We assessed latency through client-side, 90th-percentile (p90) measurements and search response p90 [took values](https://github.com/beir-cellar/beir). These benchmarks provide baseline performance indicators for both search relevance and response times. Here are the key benchmark numbers:
+ English – Relevance improvement of 20% over lexical search, with p90 search latency lowered by 7.7% compared to lexical search (BM25: 26 ms; automatic semantic enrichment: 24 ms).
+ Multilingual – Relevance improvement of 105% over lexical search, with p90 search latency increased by 38.4% compared to lexical search (BM25: 26 ms; automatic semantic enrichment: 36 ms).

Given the unique nature of each workload, we encourage you to evaluate this feature in your development environment using your own benchmarking criteria before making implementation decisions.

## Languages Supported
<a name="serverless-semantic-enrichment-languages"></a>

The feature supports English. The model also supports Arabic, Bengali, Chinese, Finnish, French, Hindi, Indonesian, Japanese, Korean, Persian, Russian, Spanish, Swahili, and Telugu.

## Set up an automatic semantic enrichment index for serverless collections
<a name="serverless-semantic-enrichment-index-setup"></a>

You can set up an index with automatic semantic enrichment enabled for your text fields through the console, the APIs, or CloudFormation templates when you create a new index. To enable it for an existing index, you must recreate the index with automatic semantic enrichment enabled for its text fields.

Console experience – In the AWS console, select a collection and choose the **Create index** button at the top of the page. The create index workflow includes options to define automatic semantic enrichment fields. A single index can combine English and multilingual automatic semantic enrichment fields with lexical fields.

![Console experience for creating an index with automatic semantic enrichment fields](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/ase-console-exp-serverless.png)


API experience – To create an automatic semantic enrichment index using the AWS Command Line Interface (AWS CLI), use the `create-index` command:

```
aws opensearchserverless create-index \
--id [collection_id] \
--index-name [index_name] \
--index-schema [index_body]
```

In the following example index-schema, the *title_semantic* field has its type set to *text* and its *semantic_enrichment* parameter's status set to *ENABLED*. Setting the *semantic_enrichment* parameter enables automatic semantic enrichment on the *title_semantic* field. You can use the *language_options* field to specify either *english* or *multi-lingual*.

```
    aws opensearchserverless create-index \
    --id XXXXXXXXX \
    --index-name 'product-catalog' \
    --index-schema '{
    "mappings": {
        "properties": {
            "product_id": {
                "type": "keyword"
            },
            "title_semantic": {
                "type": "text",
                "semantic_enrichment": {
                    "status": "ENABLED",
                    "language_options": "english"
                }
            },
            "title_non_semantic": {
                "type": "text"
            }
        }
    }
}'
```

To describe the created index, use the following command:

```
aws opensearchserverless get-index \
--id [collection_id] \
--index-name [index_name]
```

You can also use CloudFormation templates (type `AWS::OpenSearchServerless::CollectionIndex`) to configure automatic semantic enrichment during collection provisioning or after the collection is created.

## Data ingestion and search
<a name="serverless-semantic-enrichment-data-ingest"></a>

Once you've created an index with automatic semantic enrichment enabled, the feature works automatically during the data ingestion process; no additional configuration is required.

Data ingestion: When you add documents to your index, the system automatically:
+ Analyzes the text fields you designated for semantic enrichment
+ Generates semantic encodings using the OpenSearch Service managed sparse model
+ Stores these enriched representations alongside your original data

This process uses OpenSearch's built-in ML connectors and ingest pipelines, which are created and managed automatically behind the scenes.
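
OpenSearch Serverless creates this plumbing for you, but for intuition, the equivalent on a self-managed OpenSearch cluster would be an ingest pipeline with a `sparse_encoding` processor. This is a sketch only: the `model_id` and field names below are hypothetical placeholders, and you do not create this pipeline on Serverless.

```
PUT _ingest/pipeline/semantic-enrichment-pipeline
{
  "description": "Generate sparse encodings during ingestion",
  "processors": [
    {
      "sparse_encoding": {
        "model_id": "<sparse-model-id>",
        "field_map": {
          "title_semantic": "title_semantic_embedding"
        }
      }
    }
  ]
}
```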

Search: The semantic enrichment data is already indexed, so queries run efficiently without invoking the ML model again. This means you get improved search relevance with no additional search latency overhead.

## Configuring permissions for automatic semantic enrichment
<a name="serverless-semantic-enrichment-permissions"></a>

Before creating an automatic semantic enrichment index, you need to configure the required permissions. This section explains the permissions needed and how to set them up.

### IAM policy permissions
<a name="iam-policy-permissions"></a>

Use the following AWS Identity and Access Management (IAM) policy to grant the necessary permissions for working with automatic semantic enrichment:

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AutomaticSemanticEnrichmentPermissions",
            "Effect": "Allow",
            "Action": [
                "aoss:CreateIndex",
                "aoss:GetIndex",
                "aoss:UpdateIndex",
                "aoss:DeleteIndex",
                "aoss:APIAccessAll"
            ],
            "Resource": "*"
        }
    ]
}
```


**Key permissions**  
+ The `aoss:*Index` permissions enable index management
+ The `aoss:APIAccessAll` permission allows OpenSearch API operations
+ To restrict permissions to a specific collection, replace `"Resource": "*"` with the collection's ARN
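
For example, to scope the statement to a single collection, you could replace the wildcard with the collection ARN. The Region, account ID, and collection ID below are placeholders:

```
"Resource": "arn:aws:aoss:us-east-1:123456789012:collection/07tjusf2h91cunochc"
```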

### Configure data access permissions
<a name="serverless-collection-permissions-data-network"></a>

To set up an index for automatic semantic enrichment, you must have appropriate data access policies that grant permission to access index, pipeline, and model collection resources. For more information about data access policies, see [Data access control for Amazon OpenSearch Serverless](serverless-data-access.md). For the procedure to configure a data access policy, see [Creating data access policies (console)](serverless-data-access.md#serverless-data-access-console).

#### Data access permissions
<a name="serverless-collection-data-access-permissions"></a>

```
[
    {
        "Description": "Create index permission",
        "Rules": [
            {
                "ResourceType": "index",
                "Resource": ["index/collection_name/*"],
                "Permission": [
                  "aoss:CreateIndex", 
                  "aoss:DescribeIndex",
                  "aoss:UpdateIndex",
                  "aoss:DeleteIndex"
                ]
            }
        ],
        "Principal": [
            "arn:aws:iam::account_id:role/role_name"
        ]
    },
    {
        "Description": "Create pipeline permission",
        "Rules": [
            {
                "ResourceType": "collection",
                "Resource": ["collection/collection_name"],
                "Permission": [
                  "aoss:CreateCollectionItems",
                  "aoss:DescribeCollectionItems"
                ]
            }
        ],
        "Principal": [
            "arn:aws:iam::account_id:role/role_name"
        ]
    },
    {
        "Description": "Create model permission",
        "Rules": [
            {
                "ResourceType": "model",
                "Resource": ["model/collection_name/*"],
                "Permission": ["aoss:CreateMLResource"]
            }
        ],
        "Principal": [
            "arn:aws:iam::account_id:role/role_name"
        ]
    }
]
```

#### Network access permissions
<a name="serverless-collection-network-access-permissions"></a>

To allow service APIs to access private collections, you must configure network policies that permit the required access between the service API and the collection. For more information about network policies, see [Network access for Amazon OpenSearch Serverless](serverless-network.md).

```
[
   {
      "Description":"Enable automatic semantic enrichment in a private collection",
      "Rules":[
         {
            "ResourceType":"collection",
            "Resource":[
               "collection/collection_name"
            ]
         }
      ],
      "AllowFromPublic":false,
      "SourceServices":[
         "aoss.amazonaws.com"
      ],
   }
]
```

**To configure network access permissions for a private collection**

1. Sign in to the OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. In the left navigation pane, choose **Network policies**. Then do one of the following:
   + Choose an existing policy name and choose **Edit**.
   + Choose **Create network policy** and configure the policy details.

1. In the **Access type** area, choose **Private (recommended)**, and then select **AWS service private access**.

1. In the search field, choose **Service**, and then choose **aoss.amazonaws.com**.

1. In the **Resource type** area, select the **Enable access to OpenSearch endpoint** box.

1. For **Search collection(s), or input specific prefix term(s)**, in the search field, select **Collection Name**. Then enter or select the name of the collections to associate with the network policy.

1. Choose **Create** for a new network policy or **Update** for an existing network policy.

## Query Rewrites
<a name="serverless-collection-query-rewrite"></a>

Automatic semantic enrichment automatically converts your existing `match` queries to semantic search queries without requiring query modifications. If a match query is part of a compound query, the system traverses your query structure, finds the match queries, and replaces them with neural sparse queries. Currently, the feature only supports replacing `match` queries, whether standalone or part of a compound query. `multi_match` is not supported. The feature supports all compound queries, replacing their nested match queries. Compound queries include: `bool`, `boosting`, `constant_score`, `dis_max`, `function_score`, and `hybrid`.
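
The rewrite described above can be sketched as a recursive traversal. This is illustrative only: the actual rewriting happens inside the service, and the exact clause shape the service emits is not exposed.

```python
def rewrite(query: dict) -> dict:
    """Recursively replace `match` clauses with `neural_sparse`
    clauses, leaving compound query structure intact."""
    if not isinstance(query, dict):
        return query
    rewritten = {}
    for key, value in query.items():
        if key == "match":
            # A clause such as {"match": {"title": {"query": "beach shoes"}}}
            # becomes a neural sparse clause over the same field.
            rewritten["neural_sparse"] = value
        elif isinstance(value, dict):
            rewritten[key] = rewrite(value)
        elif isinstance(value, list):
            rewritten[key] = [rewrite(item) for item in value]
        else:
            rewritten[key] = value
    return rewritten

# A bool compound query: the match clause is rewritten,
# the term clause is left untouched.
query = {"bool": {"must": [
    {"match": {"title": {"query": "beach shoes"}}},
    {"term": {"in_stock": True}},
]}}
print(rewrite(query))
```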

## Limitations of automatic semantic enrichment
<a name="serverless-collection-ase-limitation"></a>

Automatic semantic search is most effective when applied to small-to-medium sized fields containing natural language content, such as movie titles, product descriptions, reviews, and summaries. Although semantic search enhances relevance for most use cases, it might not be optimal for certain scenarios. Consider the following limitations when deciding whether to implement automatic semantic enrichment for your specific use case.
+ Very long documents – The current sparse model processes only the first 8,192 tokens of each document for English. For multilingual documents, it’s 512 tokens. For lengthy articles, consider implementing document chunking to ensure complete content processing.
+ Log analysis workloads – Semantic enrichment significantly increases index size, which might be unnecessary for log analysis where exact matching typically suffices. The additional semantic context rarely improves log search effectiveness enough to justify the increased storage requirements. 
+ Automatic semantic enrichment is not compatible with the Derived Source feature. 
+ Throttling – Indexing inference requests are currently capped at 100 TPS for OpenSearch Serverless. This is a soft limit; reach out to AWS Support for higher limits.

## Pricing
<a name="serverless-collection-ase-pricing"></a>

OpenSearch Serverless bills automatic semantic enrichment based on OpenSearch Compute Units (OCUs) consumed during sparse vector generation at indexing time. You’re charged only for actual usage during indexing. You can monitor this consumption using the Amazon CloudWatch metric `SemanticSearchOCU`. For specific details about model token limits, volume throughput per OCU, and a sample calculation, see [OpenSearch Service Pricing](https://aws.amazon.com/opensearch-service/pricing/).

# Creating collections
<a name="serverless-create"></a>

You can use the console or the AWS CLI to create a serverless collection. These steps cover how to create a *search* or *time series* collection. To create a *vector search* collection, see [Working with vector search collections](serverless-vector-search.md). 

**Topics**
+ [Create a collection (console)](serverless-create-console.md)
+ [Create a collection (CLI)](serverless-create-cli.md)

# Create a collection (console)
<a name="serverless-create-console"></a>

Use the procedures in this section to create a collection by using the AWS Management Console. These steps cover how to create a *search* or *time series* collection. To create a *vector search* collection, see [Working with vector search collections](serverless-vector-search.md). 

**Topics**
+ [Configure collection settings](#serverless-create-console-step-2)
+ [Configure additional search fields](#serverless-create-console-step-3)

## Configure collection settings
<a name="serverless-create-console-step-2"></a>

Use the following procedure to configure your collection's settings.

**To configure collection settings using the console**

1. Navigate to the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home/](https://console.aws.amazon.com/aos/home/).

1. Expand **Serverless** in the left navigation pane and choose **Collections**. 

1. Choose **Create collection**.

1. Provide a name and description for the collection. The name must meet the following criteria:
   + Is unique to your account and AWS Region
   + Contains only lowercase letters a-z, the numbers 0–9, and the hyphen (-)
   + Contains between 3 and 32 characters

1. Choose a collection type:
   + **Time series** – Log analytics segment that focuses on analyzing large volumes of semi-structured, machine-generated data. At least 24 hours of data is stored on hot indexes, and the rest remains in warm storage.
   + **Search** – Full-text search that powers applications in your internal networks and internet-facing applications. All search data is stored in hot storage to ensure fast query response times.
**Note**  
Choose this option if you are enabling automatic semantic enrichment, as described in [Configure automatic semantic enrichment](#serverless-create-console-step-3-semantic-enrichment-fields).
   + **Vector search** – Semantic search on vector embeddings that simplifies vector data management. Powers machine learning (ML) augmented search experiences and generative AI applications such as chatbots, personal assistants, and fraud detection.

   For more information, see [Choosing a collection type](serverless-overview.md#serverless-usecase).

1. For **Deployment type**, choose the redundancy setting for your collection. By default, each collection has redundancy, which means that the indexing and search OpenSearch Compute Units (OCUs) each have their own standby replicas in a different Availability Zone. For development and testing purposes, you can choose to disable redundancy, which reduces the number of OCUs in your collection to two. For more information, see [How it works](serverless-overview.md#serverless-process).

1. For **Security**, choose **Standard create**.

1. For **Encryption**, choose an AWS KMS key to encrypt your data with. OpenSearch Serverless notifies you if the collection name that you entered matches a pattern defined in an encryption policy. You can choose to keep this match or override it with unique encryption settings. For more information, see [Encryption in Amazon OpenSearch Serverless](serverless-encryption.md).

1. For **Network access settings**, configure network access for the collection.
   + For **Access type**, select public or private. 

     If you choose private, specify which VPC endpoints and AWS services can access the collection.
     + **VPC endpoints for access** – Specify one or more VPC endpoints to allow access through. To create a VPC endpoint, see [Data plane access through AWS PrivateLink](serverless-vpc.md).
     + **AWS service private access** – Select one or more supported services to allow access to.
   + For **Resource type**, select whether users can access the collection through its *OpenSearch* endpoint (to make API calls through cURL, Postman, and so on), through the *OpenSearch Dashboards* endpoint (to work with visualizations and make API calls through the console), or both.
**Note**  
AWS service private access applies only to the OpenSearch endpoint, not to the OpenSearch Dashboards endpoint.

   OpenSearch Serverless notifies you if the collection name that you entered matches a pattern defined in a network policy. You can choose to keep this match or override it with custom network settings. For more information, see [Network access for Amazon OpenSearch Serverless](serverless-network.md).

1. (Optional) Add one or more tags to the collection. For more information, see [Tagging Amazon OpenSearch Serverless collections](tag-collection.md).

1. Choose **Next**.

## Configure additional search fields
<a name="serverless-create-console-step-3"></a>

The options you see on page two of the create collection workflow depend on the type of collection you are creating. This section describes how to configure additional search fields for each collection type. This section also describes how to configure automatic semantic enrichment. Skip any section that doesn't apply to your collection type.

**Topics**
+ [Configure automatic semantic enrichment](#serverless-create-console-step-3-semantic-enrichment-fields)
+ [Configure time series search fields](#serverless-create-console-step-3-time-series-fields)
+ [Configure lexical search fields](#serverless-create-console-step-3-lexical-fields)
+ [Configure vector search fields](#serverless-create-console-step-3-vector-search-fields)

### Configure automatic semantic enrichment
<a name="serverless-create-console-step-3-semantic-enrichment-fields"></a>

When you create or edit a collection, you can configure automatic semantic enrichment, which simplifies semantic search implementation and capabilities in Amazon OpenSearch Service. Semantic search returns query results that incorporate not just keyword matching, but the intent and contextual meaning of the user's search. For more information, see [Automatic semantic enrichment for Serverless](serverless-semantic-enrichment.md).

**To configure automatic semantic enrichment**

1. In the **Index details** section, for **Index name**, specify a name.

1. In the **Automatic semantic enrichment fields** section, choose **Add semantic search field**.

1. In the **Input field name for semantic enrichment** field, enter the name of a field that you want to enrich.

1. **Data type** is **Text**. You can't change this.

1. For **Language**, choose either **English** or **Multilingual**.

1. Choose **Add field**.

1. After you finish configuring optional fields for your collection, choose **Next**. Review your changes and choose **Submit** to create the collection.

### Configure time series search fields
<a name="serverless-create-console-step-3-time-series-fields"></a>

The options in the **Time series search fields** section pertain to time series data and data streams. For more information about these subjects, see [Managing time-series data in Amazon OpenSearch Service with data streams](data-streams.md).

**To configure time series search fields**

1. In the **Time series search fields** section, choose **Add time series field**.

1. For **Field name**, enter a name.

1. For **Data type**, choose a type from the list.

1. Choose **Add field**.

1. After you finish configuring optional fields for your collection, choose **Next**. Review your changes and choose **Submit** to create the collection.

### Configure lexical search fields
<a name="serverless-create-console-step-3-lexical-fields"></a>

Lexical search seeks an exact match between a search query and indexed terms or keywords.

**To configure lexical search fields**

1. In the **Lexical search fields** section, choose **Add search field**.

1. For **Field name**, enter a name.

1. For **Data type**, choose a type from the list.

1. Choose **Add field**.

1. After you finish configuring optional fields for your collection, choose **Next**. Review your changes and choose **Submit** to create the collection.

### Configure vector search fields
<a name="serverless-create-console-step-3-vector-search-fields"></a>

**To configure vector search fields**

1. In the **Vector fields** section, choose **Add vector field**.

1. For **Field name**, enter a name.

1. For **Engine**, choose a type from the list.

1. Enter the number of dimensions.

1. For **Distance Metric**, choose a type from the list.

1. After you finish configuring optional fields for your collection, choose **Next**.

1. Review your changes and choose **Submit** to create the collection.

# Create a collection (CLI)
<a name="serverless-create-cli"></a>

Use the procedures in this section to create an OpenSearch Serverless collection using the AWS CLI. 

**Topics**
+ [Before you begin](#serverless-create-cli-before-you-begin)
+ [Creating a collection](#serverless-create-cli-creating)
+ [Creating a collection with an automatic semantic enrichment index](#serverless-create-cli-automatic-semantic-enrichment)

## Before you begin
<a name="serverless-create-cli-before-you-begin"></a>

Before you create a collection using the AWS CLI, use the following procedure to create required policies for the collection.

**Note**  
In each of the following procedures, when you specify a name for a collection, the name must meet the following criteria:  
+ Is unique to your account and AWS Region
+ Contains only lowercase letters a-z, the numbers 0–9, and the hyphen (-)
+ Contains between 3 and 32 characters
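
The naming rules above are easy to check before calling the CLI. This is a minimal sketch (a hypothetical helper, not an AWS API) that validates exactly the three criteria listed:

```python
import re

# Collection names: 3-32 characters drawn from lowercase
# letters a-z, the numbers 0-9, and the hyphen (-).
NAME_PATTERN = re.compile(r"^[a-z0-9-]{3,32}$")

def is_valid_collection_name(name: str) -> bool:
    return bool(NAME_PATTERN.fullmatch(name))

print(is_valid_collection_name("logs-application"))  # True
print(is_valid_collection_name("Logs_App"))          # False: uppercase and underscore
```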

**To create required policies for a collection**

1. Open the AWS CLI and run the following command to create an [encryption policy](serverless-encryption.md) with a resource pattern that matches the intended name of the collection. 

   ```
   aws opensearchserverless create-security-policy \
     --name policy name \
     --type encryption --policy "{\"Rules\":[{\"ResourceType\":\"collection\",\"Resource\":[\"collection\/collection name\"]}],\"AWSOwnedKey\":true}"
   ```

   For example, if you plan to name your collection *logs-application*, you might create an encryption policy like this:

   ```
   aws opensearchserverless create-security-policy \
     --name logs-policy \
     --type encryption --policy "{\"Rules\":[{\"ResourceType\":\"collection\",\"Resource\":[\"collection\/logs-application\"]}],\"AWSOwnedKey\":true}"
   ```

   If you plan to use the policy for additional collections, you can make the rule broader, such as `collection/logs*` or `collection/*`.

1. Run the following command to configure network settings for the collection using a [network policy](serverless-network.md). You can create network policies after you create a collection, but we recommend doing it beforehand.

   ```
   aws opensearchserverless create-security-policy \
     --name policy name \
     --type network --policy "[{\"Description\":\"description\",\"Rules\":[{\"ResourceType\":\"dashboard\",\"Resource\":[\"collection\/collection name\"]},{\"ResourceType\":\"collection\",\"Resource\":[\"collection\/collection name\"]}],\"AllowFromPublic\":true}]"
   ```

   Using the previous *logs-application* example, you might create the following network policy:

   ```
   aws opensearchserverless create-security-policy \
     --name logs-policy \
     --type network --policy "[{\"Description\":\"Public access for logs collection\",\"Rules\":[{\"ResourceType\":\"dashboard\",\"Resource\":[\"collection\/logs-application\"]},{\"ResourceType\":\"collection\",\"Resource\":[\"collection\/logs-application\"]}],\"AllowFromPublic\":true}]"
   ```
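
The escaped `--policy` strings in the preceding commands are error-prone to write by hand. One approach (a sketch using Python's standard library) is to build the policy as a plain data structure and let `json.dumps` produce the escaped string for you:

```python
import json

# Network policy for the logs-application example, built as plain
# Python data instead of a hand-escaped string.
policy = [{
    "Description": "Public access for logs collection",
    "Rules": [
        {"ResourceType": "dashboard", "Resource": ["collection/logs-application"]},
        {"ResourceType": "collection", "Resource": ["collection/logs-application"]},
    ],
    "AllowFromPublic": True,
}]

# Pass this string as the --policy argument to
# `aws opensearchserverless create-security-policy`.
print(json.dumps(policy))
```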

## Creating a collection
<a name="serverless-create-cli-creating"></a>

The following procedure uses the [CreateCollection](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_CreateCollection.html) API action to create a collection of type `SEARCH` or `TIMESERIES`. If you don't specify a collection type in the request, it defaults to `TIMESERIES`. For more information about these types, see [Choosing a collection type](serverless-overview.md#serverless-usecase). To create a *vector search* collection, see [Working with vector search collections](serverless-vector-search.md). 

In the response, if your collection is encrypted with an AWS owned key, the `kmsKeyArn` is `auto` rather than an ARN.

**Important**  
After you create a collection, you won't be able to access it unless it matches a data access policy. For more information, see [Data access control for Amazon OpenSearch Serverless](serverless-data-access.md).

**To create a collection**

1. Verify that you created required policies described in [Before you begin](#serverless-create-cli-before-you-begin).

1. Run the following command. For `type` specify either `SEARCH` or `TIMESERIES`.

   ```
   aws opensearchserverless create-collection --name "collection name" --type collection type --description "description"
   ```

## Creating a collection with an automatic semantic enrichment index
<a name="serverless-create-cli-automatic-semantic-enrichment"></a>

Use the following procedure to create a new OpenSearch Serverless collection with an index that is configured for [automatic semantic enrichment](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-semantic-enrichment.html). The procedure uses the OpenSearch Serverless [CreateIndex](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_CreateIndex.html) API action.

**To create a new collection with an index configured for automatic semantic enrichment**

Run the following command to create the collection and an index.

```
aws opensearchserverless create-index \
--region Region ID \
--id collection ID --index-name index name \
--index-schema \
'mapping in json'
```

Here's an example.

```
aws opensearchserverless create-index \
--region us-east-1 \
--id conversation_history --index-name conversation_history_index \
--index-schema \
'{
    "mappings": {
        "properties": {
            "age": {
                "type": "integer"
            },
            "name": {
                "type": "keyword"
            },
            "user_description": {
                "type": "text"
            },
            "conversation_history": {
                "type": "text",
                "semantic_enrichment": {
                    "status": "ENABLED",
                    // Specifies the sparse tokenizer for processing multi-lingual text
                    "language_option": "MULTI-LINGUAL", 
                    // If embedding_field is provided, the semantic embedding field will be set to the given name rather than original field name + "_embedding"
                    "embedding_field": "conversation_history_user_defined" 
                }
            },
            "book_title": {
                "type": "text",
                "semantic_enrichment": {
                    // No embedding_field is provided, so the semantic embedding field is set to "book_title_embedding"
                    "status": "ENABLED",
                    "language_option": "ENGLISH"
                }
            },
            "abstract": {
                "type": "text",
                "semantic_enrichment": {
                    // If no language_option is provided, it will be set to English.
                    // No embedding_field is provided, so the semantic embedding field is set to "abstract_embedding"
                    "status": "ENABLED" 
                }
            }
        }
    }
}'
```
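As the comments in the example mapping describe, when `embedding_field` is omitted the semantic embedding field name defaults to the source field name plus `_embedding`. A small sketch of that naming rule (a hypothetical helper for illustration, not part of any AWS SDK):

```
def embedding_field_name(field_name, semantic_enrichment):
    # An explicit "embedding_field" wins; otherwise default to
    # "<field>_embedding", as the mapping comments above describe.
    explicit = semantic_enrichment.get("embedding_field")
    return explicit if explicit else field_name + "_embedding"

print(embedding_field_name("book_title", {"status": "ENABLED"}))
# book_title_embedding
print(embedding_field_name(
    "conversation_history",
    {"status": "ENABLED",
     "embedding_field": "conversation_history_user_defined"}))
# conversation_history_user_defined
```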

# Accessing OpenSearch Dashboards
<a name="serverless-dashboards"></a>

After you create a collection with the AWS Management Console, you can navigate to the collection's OpenSearch Dashboards URL. To find the Dashboards URL, choose **Collections** in the left navigation pane and select the collection to open its details page. The URL takes the format `https://dashboards.us-east-1.aoss.amazonaws.com/_login/?collectionId=07tjusf2h91cunochc`. When you navigate to the URL, you're automatically logged in to Dashboards.

If you already have the OpenSearch Dashboards URL but aren't signed in to the AWS Management Console, navigating to the URL in your browser redirects you to the console. After you enter your AWS credentials, you're automatically logged in to Dashboards. For information about accessing collections with SAML, see [Accessing OpenSearch Dashboards with SAML](serverless-saml.md#serverless-saml-dashboards).

The OpenSearch Dashboards console timeout is one hour and isn't configurable.

**Note**  
On May 10, 2023, OpenSearch Serverless introduced a common global endpoint for OpenSearch Dashboards. You can now navigate to OpenSearch Dashboards in the browser with a URL that takes the format `https://dashboards.us-east-1.aoss.amazonaws.com/_login/?collectionId=07tjusf2h91cunochc`. To ensure backward compatibility, we'll continue to support the existing collection-specific OpenSearch Dashboards endpoints with the format `https://07tjusf2h91cunochc.us-east-1.aoss.amazonaws.com/_dashboards`.
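Given a Region and a collection ID, the global Dashboards URL can be assembled with simple string formatting. A convenience sketch based on the format shown above (`dashboards_url` is a hypothetical helper, not an AWS SDK function):

```
def dashboards_url(region, collection_id):
    # Global endpoint format:
    # https://dashboards.<region>.aoss.amazonaws.com/_login/?collectionId=<id>
    return ("https://dashboards." + region + ".aoss.amazonaws.com"
            "/_login/?collectionId=" + collection_id)

print(dashboards_url("us-east-1", "07tjusf2h91cunochc"))
```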

# Viewing collections
<a name="serverless-list"></a>

You can view the existing collections in your AWS account on the **Collections** tab of the Amazon OpenSearch Service console.

To list collections along with their IDs, send a [ListCollections](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_ListCollections.html) request.

```
aws opensearchserverless list-collections
```

**Sample response**

```
{
   "collectionSummaries":[
      {
         "arn":"arn:aws:aoss:us-east-1:123456789012:collection/07tjusf2h91cunochc",
         "id":"07tjusf2h91cunochc",
         "name":"my-collection",
         "status":"CREATING"
      }
   ]
}
```

To limit the search results, use collection filters. This request filters the response to collections in the `ACTIVE` state: 

```
aws opensearchserverless list-collections --collection-filters '{ "status": "ACTIVE" }'
```

To get more detailed information about one or more collections, including the OpenSearch endpoint and the OpenSearch Dashboards endpoint, send a [BatchGetCollection](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_BatchGetCollection.html) request:

```
aws opensearchserverless batch-get-collection --ids "07tjusf2h91cunochc" "1iu5usc4rame"
```

**Note**  
You can include `--names` or `--ids` in the request, but not both.

**Sample response**

```
{
   "collectionDetails":[
      {
         "id": "07tjusf2h91cunochc",
         "name": "my-collection",
         "status": "ACTIVE",
         "type": "SEARCH",
         "description": "",
         "arn": "arn:aws:aoss:us-east-1:123456789012:collection/07tjusf2h91cunochc",
         "kmsKeyArn": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
         "createdDate": 1667446262828,
         "lastModifiedDate": 1667446300769,
         "collectionEndpoint": "https://07tjusf2h91cunochc.us-east-1.aoss.amazonaws.com",
         "dashboardEndpoint": "https://07tjusf2h91cunochc.us-east-1.aoss.amazonaws.com/_dashboards"
      },
      {
         "id": "178ukvtg3i82dvopdid",
         "name": "another-collection",
         "status": "ACTIVE",
         "type": "TIMESERIES",
         "description": "",
         "arn": "arn:aws:aoss:us-east-1:123456789012:collection/178ukvtg3i82dvopdid",
         "kmsKeyArn": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
         "createdDate": 1667446262828,
         "lastModifiedDate": 1667446300769,
         "collectionEndpoint": "https://178ukvtg3i82dvopdid.us-east-1.aoss.amazonaws.com",
         "dashboardEndpoint": "https://178ukvtg3i82dvopdid.us-east-1.aoss.amazonaws.com/_dashboards"
      }
   ],
   "collectionErrorDetails":[]
}
```
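When scripting against this response, the endpoints are easy to pull out of the `collectionDetails` array. A minimal sketch in Python, using an abbreviated copy of the sample response above (illustration only; the field names match the `BatchGetCollection` response):

```
import json

# Abbreviated copy of the sample response, reduced to the fields used here.
response = json.loads("""
{
  "collectionDetails": [
    {
      "name": "my-collection",
      "collectionEndpoint": "https://07tjusf2h91cunochc.us-east-1.aoss.amazonaws.com",
      "dashboardEndpoint": "https://07tjusf2h91cunochc.us-east-1.aoss.amazonaws.com/_dashboards"
    }
  ]
}
""")

# Map each collection name to its OpenSearch endpoint.
endpoints = {c["name"]: c["collectionEndpoint"]
             for c in response["collectionDetails"]}
print(endpoints["my-collection"])
```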

# Deleting collections
<a name="serverless-delete"></a>

Deleting a collection deletes all data and indexes in the collection. You can't recover collections after you delete them.

**To delete a collection using the console**

1. From the **Collections** panel of the Amazon OpenSearch Service console, select the collection you want to delete.

1. Choose **Delete** and confirm deletion.

To delete a collection using the AWS CLI, send a [DeleteCollection](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_DeleteCollection.html) request:

```
aws opensearchserverless delete-collection --id 07tjusf2h91cunochc
```

**Sample response**

```
{
   "deleteCollectionDetail":{
      "id":"07tjusf2h91cunochc",
      "name":"my-collection",
      "status":"DELETING"
   }
}
```

# Working with vector search collections
<a name="serverless-vector-search"></a>

The *vector search* collection type in OpenSearch Serverless provides a similarity search capability that is scalable and high performing. It makes it easy for you to build modern machine learning (ML) augmented search experiences and generative artificial intelligence (AI) applications without having to manage the underlying vector database infrastructure. 

Use cases for vector search collections include image searches, document searches, music retrieval, product recommendations, video searches, location-based searches, fraud detection, and anomaly detection. 

Because the vector engine for OpenSearch Serverless is powered by the [k-nearest neighbor (k-NN) search feature](https://opensearch.org/docs/latest/search-plugins/knn/index/) in OpenSearch, you get the same functionality with the simplicity of a serverless environment. The engine supports the [k-NN plugin API](https://opensearch.org/docs/latest/search-plugins/knn/api/). With these operations, you can combine vector search with full-text search, advanced filtering, aggregations, geospatial queries, and nested queries for faster data retrieval and enhanced search results.

The vector engine provides distance metrics such as Euclidean distance, cosine similarity, and dot product similarity, and can accommodate up to 16,000 dimensions. You can store fields with various data types for metadata, such as numbers, Booleans, dates, keywords, and geopoints. You can also store text fields with descriptive information that adds more context to stored vectors. Colocating the data types reduces complexity, increases maintainability, and avoids data duplication, version compatibility challenges, and licensing issues.
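The three distance measures named above can be sketched in a few lines of plain Python (for illustration only; the vector engine computes these server-side over the index):

```
import math

def euclidean(a, b):
    # Straight-line distance between two vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dot(a, b):
    # Dot product similarity.
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a, b = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(round(euclidean(a, b), 4))  # 3.7417
print(cosine(a, b))               # 1.0 (b points in the same direction as a)
```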

**Note**  
Note the following information:  
+ Amazon OpenSearch Serverless supports Faiss 16-bit scalar quantization, which converts between 32-bit and 16-bit floating-point vectors. To learn more, see [Faiss 16-bit scalar quantization](https://opensearch.org/docs/latest/search-plugins/knn/knn-vector-quantization/#faiss-16-bit-scalar-quantization). You can also use binary vectors to reduce memory costs. For more information, see [Binary vectors](https://opensearch.org/docs/latest/field-types/supported-field-types/knn-vector#binary-vectors).
+ Amazon OpenSearch Serverless supports disk-based vector search, which significantly reduces the operational costs of vector workloads in low-memory environments. For more information, see [Disk-based vector search](https://docs.opensearch.org/2.19/vector-search/optimizing-storage/disk-based-vector-search/).

## Getting started with vector search collections
<a name="serverless-vector-tutorial"></a>

In this tutorial, you complete the following steps to store, search, and retrieve vector embeddings in real time:

1. [Configure permissions](#serverless-vector-permissions)

1. [Create a collection](#serverless-vector-create)

1. [Upload and search data](#serverless-vector-index)

1. [Delete the collection](#serverless-vector-delete)

### Step 1: Configure permissions
<a name="serverless-vector-permissions"></a>

To complete this tutorial (and to use OpenSearch Serverless in general), you must have the correct AWS Identity and Access Management (IAM) permissions. In this tutorial, you create a collection, upload and search data, and then delete the collection.

Your user or role must have an attached [identity-based policy](security-iam-serverless.md#security-iam-serverless-id-based-policies) with the following minimum permissions:

------
#### [ JSON ]

****  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "aoss:CreateCollection",
        "aoss:ListCollections",
        "aoss:BatchGetCollection",
        "aoss:DeleteCollection",
        "aoss:CreateAccessPolicy",
        "aoss:ListAccessPolicies",
        "aoss:UpdateAccessPolicy",
        "aoss:CreateSecurityPolicy",
        "iam:ListUsers",
        "iam:ListRoles"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```

------

For more information about OpenSearch Serverless IAM permissions, see [Identity and Access Management for Amazon OpenSearch Serverless](security-iam-serverless.md).

### Step 2: Create a collection
<a name="serverless-vector-create"></a>

A *collection* is a group of OpenSearch indexes that work together to support a specific workload or use case.

**To create an OpenSearch Serverless collection**

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. Choose **Collections** in the left navigation pane and choose **Create collection**.

1. Name the collection **housing**.

1. For collection type, choose **Vector search**. For more information, see [Choosing a collection type](serverless-overview.md#serverless-usecase).

1. Under **Deployment type**, clear **Enable redundancy (active replicas)**. This creates a collection in development or testing mode, and reduces the number of OpenSearch Compute Units (OCUs) in your collection to two. If you want to create a production environment in this tutorial, leave the check box selected. 

1. Under **Security**, select **Easy create** to streamline your security configuration. All data in the vector engine is encrypted in transit and at rest by default. The vector engine supports fine-grained IAM permissions so that you can define who can create, update, and delete encryption policies, network policies, collections, and indexes.

1. Choose **Next**.

1. Review your collection settings and choose **Submit**. Wait several minutes for the collection status to become `Active`.

### Step 3: Upload and search data
<a name="serverless-vector-index"></a>

An *index* is a collection of documents with a common data schema that provides a way for you to store, search, and retrieve your vector embeddings and other fields. You can create and upload data to indexes in an OpenSearch Serverless collection by using the [Dev Tools](https://opensearch.org/docs/latest/dashboards/dev-tools/index-dev/) console in OpenSearch Dashboards, or an HTTP tool such as [Postman](https://www.postman.com/downloads/) or [awscurl](https://github.com/okigan/awscurl). This tutorial uses Dev Tools.

**To index and search data in the housing collection**

1. To create a single index for your new collection, send the following request in the [Dev Tools](https://opensearch.org/docs/latest/dashboards/dev-tools/index-dev/) console. By default, this creates an index with the `nmslib` engine and Euclidean distance.

   ```
   PUT housing-index
   {
      "settings": {
         "index.knn": true
      },
      "mappings": {
         "properties": {
            "housing-vector": {
               "type": "knn_vector",
               "dimension": 3
            },
            "title": {
               "type": "text"
            },
            "price": {
               "type": "long"
            },
            "location": {
               "type": "geo_point"
            }
         }
      }
   }
   ```

1. To index a single document into *housing-index*, send the following request:

   ```
   POST housing-index/_doc
   {
     "housing-vector": [
       10,
       20,
       30
     ],
     "title": "2 bedroom in downtown Seattle",
     "price": 2800,
     "location": "47.71, 122.00"
   }
   ```

1. To search for properties that are similar to the ones in your index, send the following query:

   ```
   GET housing-index/_search
   {
       "size": 5,
       "query": {
           "knn": {
               "housing-vector": {
                   "vector": [
                       10,
                       20,
                       30
                   ],
                   "k": 5
               }
           }
       }
   }
   ```
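The query above returns up to five documents whose `housing-vector` values are closest to the query vector, using Euclidean distance by default. A plain-Python sketch of the ranking it performs (illustration only; `doc-1` is the document indexed in step 2, and `doc-2` and `doc-3` are hypothetical extras):

```
import math

docs = {
    "doc-1": [10, 20, 30],   # the document indexed in step 2
    "doc-2": [11, 21, 31],   # hypothetical
    "doc-3": [50, 60, 70],   # hypothetical
}
query = [10, 20, 30]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Rank all candidates by distance to the query and keep the k nearest.
k = 5
ranked = sorted(docs, key=lambda name: euclidean(docs[name], query))[:k]
print(ranked)  # ['doc-1', 'doc-2', 'doc-3']
```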

### Step 4: Delete the collection
<a name="serverless-vector-delete"></a>

Because the *housing* collection is for test purposes, make sure to delete it when you're done experimenting.

**To delete an OpenSearch Serverless collection**

1. Go back to the **Amazon OpenSearch Service** console.

1. Choose **Collections** in the left navigation pane and select the **housing** collection.

1. Choose **Delete** and confirm the deletion.

## Filtered search
<a name="serverless-vector-filter"></a>

You can use filters to refine your semantic search results. To create an index and perform a filtered search on your documents, replace [Upload and search data](#serverless-vector-index) in the previous tutorial with the following instructions. The other steps remain the same. For more information about filters, see [k-NN search with filters](https://opensearch.org/docs/latest/search-plugins/knn/filter-search-knn/).

**To index and search data in the housing collection**

1. To create a single index for your collection, send the following request in the [Dev Tools](https://opensearch.org/docs/latest/dashboards/dev-tools/index-dev/) console:

   ```
   PUT housing-index-filtered
   {
     "settings": {
       "index.knn": true
     },
     "mappings": {
       "properties": {
         "housing-vector": {
           "type": "knn_vector",
           "dimension": 3,
           "method": {
             "engine": "faiss",
             "name": "hnsw"
           }
         },
         "title": {
           "type": "text"
         },
         "price": {
           "type": "long"
         },
         "location": {
           "type": "geo_point"
         }
       }
     }
   }
   ```

1. To index a single document into *housing-index-filtered*, send the following request:

   ```
   POST housing-index-filtered/_doc
   {
     "housing-vector": [
       10,
       20,
       30
     ],
     "title": "2 bedroom in downtown Seattle",
     "price": 2800,
     "location": "47.71, 122.00"
   }
   ```

1. To search your data for an apartment in Seattle under a given price and within a given distance of a geographical point, send the following request:

   ```
   GET housing-index-filtered/_search
   {
     "size": 5,
     "query": {
       "knn": {
         "housing-vector": {
           "vector": [
             0.1,
             0.2,
             0.3
           ],
           "k": 5,
           "filter": {
             "bool": {
               "must": [
                 {
                   "query_string": {
                     "query": "Find me 2 bedroom apartment in Seattle under $3000 ",
                     "fields": [
                       "title"
                     ]
                   }
                 },
                 {
                   "range": {
                     "price": {
                       "lte": 3000
                     }
                   }
                 },
                 {
                   "geo_distance": {
                     "distance": "100miles",
                     "location": {
                       "lat": 48,
                       "lon": 121
                     }
                   }
                 }
               ]
             }
           }
         }
       }
     }
   }
   ```

## Billion scale workloads
<a name="serverless-vector-billion"></a>

Vector search collections support workloads with billions of vectors. You don't need to reindex for scaling purposes, because auto scaling does this for you. If you have millions of vectors (or more) with a high number of dimensions and need more than 200 OCUs, contact [AWS Support](https://aws.amazon.com/premiumsupport/) to raise the maximum number of OpenSearch Compute Units (OCUs) for your account.

## Limitations
<a name="serverless-vector-limitations"></a>

Vector search collections have the following limitations:
+ Vector search collections don't support the Apache Lucene ANN engine.
+ Vector search collections support only the HNSW algorithm with Faiss; they don't support IVF and IVFQ.
+ Vector search collections don't support the warmup, stats, and model training API operations.
+ Vector search collections don't support inline or stored scripts.
+ Index count information isn't available in the AWS Management Console for vector search collections. 
+ The refresh interval for indexes on vector search collections is 60 seconds.

## Next steps
<a name="serverless-vector-next"></a>

Now that you know how to create a vector search collection and index data, you might want to try some of the following exercises:
+ Use the OpenSearch Python client to work with vector search collections. See this tutorial on [GitHub](https://github.com/opensearch-project/opensearch-py/blob/main/guides/plugins/knn.md). 
+ Use the OpenSearch Java client to work with vector search collections. See this tutorial on [GitHub](https://github.com/opensearch-project/opensearch-java/blob/main/guides/plugins/knn.md). 
+ Set up LangChain to use OpenSearch as a vector store. LangChain is an open source framework for developing applications powered by language models. For more information, see the [LangChain documentation](https://python.langchain.com/docs/integrations/vectorstores/opensearch).

# Using data lifecycle policies with Amazon OpenSearch Serverless
<a name="serverless-lifecycle"></a>

A data lifecycle policy in Amazon OpenSearch Serverless defines how long OpenSearch Serverless retains data in a time series collection. For example, you can set a policy to retain log data for 30 days before OpenSearch Serverless deletes it.

You can configure a separate policy for each index within each time series collection in your AWS account. OpenSearch Serverless retains documents for at least the duration that you specify in the policy. It then deletes the documents automatically on a best-effort basis, typically within 48 hours or 10% of the retention period, whichever is longer.

Only time series collections support data lifecycle policies. Search and vector search collections do not.

**Topics**
+ [Data lifecycle policies](#serverless-lifecycle-policies)
+ [Required permissions](#serverless-lifecycle-permissions)
+ [Policy precedence](#serverless-lifecycle-precedence)
+ [Policy syntax](#serverless-lifecycle-syntax)
+ [Creating data lifecycle policies](#serverless-lifecycle-create)
+ [Updating data lifecycle policies](#serverless-lifecycle-update)
+ [Deleting data lifecycle policies](#serverless-lifecycle-delete)

## Data lifecycle policies
<a name="serverless-lifecycle-policies"></a>

In a data lifecycle policy, you specify a series of rules that define the retention period for data in an index or group of indexes. Each rule consists of a resource type (`index`), a retention period, and a list of resources (indexes) that the retention period applies to.

You define the retention period with one of the following formats:
+ `"MinIndexRetention": "24h"` – OpenSearch Serverless retains index data for the specified period in hours or days. You can set this period to be from `24h` to `3650d`.
+ `"NoMinIndexRetention": true` – OpenSearch Serverless retains index data indefinitely.
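A small client-side validator for these retention values can catch formatting mistakes before you call the API. A sketch (hypothetical helper; the service itself enforces the same `24h`–`3650d` bounds):

```
import re

def retention_hours(value):
    # Accepts values like "24h" or "3650d" and returns the period in hours.
    m = re.fullmatch(r"(\d+)([hd])", value)
    if not m:
        raise ValueError(f"bad retention format: {value!r}")
    hours = int(m.group(1)) * (24 if m.group(2) == "d" else 1)
    # Documented bounds: 24h minimum, 3650d maximum.
    if not 24 <= hours <= 3650 * 24:
        raise ValueError(f"retention out of range: {value!r}")
    return hours

print(retention_hours("15d"))   # 360
print(retention_hours("24h"))   # 24
```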

In the following sample policy, the first rule specifies a retention period of 15 days for all indexes within the collection `marketing`. The second rule specifies that all index names that begin with `log` in the `finance` collection have no retention period set and will be retained indefinitely.

```
{
   "lifeCyclePolicyDetail": {
      "type": "retention",
      "name": "my-policy",
      "policyVersion": "MTY4ODI0NTM2OTk1N18x",
      "policy": {
         "Rules": [
            {
            "ResourceType":"index",
            "Resource":[
               "index/marketing/*"
            ],
            "MinIndexRetention": "15d"
         },
         {
            "ResourceType":"index",
            "Resource":[
               "index/finance/log*"
            ],
            "NoMinIndexRetention": true
         }
         ]
      },
      "createdDate": 1688245369957,
      "lastModifiedDate": 1688245369957
   }
}
```

In the following sample policy rule, OpenSearch Serverless indefinitely retains the data in all indexes for all collections within the account.

```
{
   "Rules": [
      {
         "ResourceType": "index",
         "Resource": [
            "index/*/*"
         ],
         "NoMinIndexRetention": true
      }
   ]
}
```

## Required permissions
<a name="serverless-lifecycle-permissions"></a>

Lifecycle policies for OpenSearch Serverless use the following AWS Identity and Access Management (IAM) permissions. You can specify IAM conditions to restrict users to data lifecycle policies associated with specific collections and indexes.
+ `aoss:CreateLifecyclePolicy` – Create a data lifecycle policy.
+ `aoss:ListLifecyclePolicies` – List all data lifecycle policies in the current account.
+ `aoss:BatchGetLifecyclePolicy` – View a data lifecycle policy associated with an account or policy name.
+ `aoss:BatchGetEffectiveLifecyclePolicy` – View a data lifecycle policy for a given resource (`index` is the only supported resource).
+ `aoss:UpdateLifecyclePolicy` – Modify a given data lifecycle policy, and change its retention setting or resource.
+ `aoss:DeleteLifecyclePolicy` – Delete a data lifecycle policy.

The following identity-based access policy allows a user to view all data lifecycle policies and to update policies associated with the collection `application-logs`:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aoss:UpdateLifecyclePolicy"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aoss:collection": "application-logs"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "aoss:ListLifecyclePolicies",
                "aoss:BatchGetLifecyclePolicy"
            ],
            "Resource": "*"
        }
    ]
}
```

------

## Policy precedence
<a name="serverless-lifecycle-precedence"></a>

There can be situations where data lifecycle policy rules overlap, within or across policies. When this happens, a rule with a more specific resource name or pattern for an index overrides a rule with a more general resource name or pattern for any indexes that are common to *both* rules.

For example, in the following policy, two rules apply to an index `index/sales/logstash`. In this situation, the second rule takes precedence because `index/sales/log*` is the longest match to `index/sales/logstash`. Therefore, OpenSearch Serverless sets no retention period for the index.

```
{
   "Rules": [
      {
         "ResourceType": "index",
         "Resource": [
            "index/sales/*"
         ],
         "MinIndexRetention": "15d"
      },
      {
         "ResourceType": "index",
         "Resource": [
            "index/sales/log*"
         ],
         "NoMinIndexRetention": true
      }
   ]
}
```
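The longest-match precedence described above can be sketched with Python's `fnmatch` (an illustration of the documented behavior, not service code):

```
from fnmatch import fnmatch

# Rules keyed by their resource pattern, mirroring the policy above.
rules = {
    "index/sales/*": {"MinIndexRetention": "15d"},
    "index/sales/log*": {"NoMinIndexRetention": True},
}

def effective_rule(index_name):
    # Among all matching patterns, the longest (most specific) one wins.
    matches = [p for p in rules if fnmatch(index_name, p)]
    return rules[max(matches, key=len)] if matches else None

print(effective_rule("index/sales/logstash"))
# {'NoMinIndexRetention': True}
```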

## Policy syntax
<a name="serverless-lifecycle-syntax"></a>

Provide one or more *rules*. These rules define data lifecycle settings for your OpenSearch Serverless indexes.

Each rule contains the following elements. You can provide either `MinIndexRetention` or `NoMinIndexRetention` in each rule, but not both. 


| Element | Description | 
| --- | --- | 
| Resource type | The type of resource that the rule applies to. The only supported option for data lifecycle policies is index. | 
| Resource | A list of resource names and/or patterns. Patterns consist of a prefix and a wildcard (`*`), which allows a rule to apply to multiple resources. For example, `index/<collection-name or pattern>/<index-name or pattern>`. | 
| MinIndexRetention | The minimum period, in days (d) or hours (h), to retain the document in the index. The lower bound is 24h and the upper bound is 3650d. | 
| NoMinIndexRetention | If true, OpenSearch Serverless retains documents indefinitely. | 

In the following example, the first rule applies to all indexes in the `autoparts-inventory` collection (`index/autoparts-inventory/*`) and requires OpenSearch Serverless to retain their data for at least 20 days before deleting it. 

The second rule targets indexes matching the `index/auto*/gear` pattern, setting a minimum retention period of 24 hours.

The third rule applies specifically to the `tires` index and sets no minimum retention period, so OpenSearch Serverless retains its data indefinitely. Together, these rules manage index data with varying retention requirements.

```
{
  "Rules": [
    {
      "ResourceType": "index",
      "Resource": [
        "index/autoparts-inventory/*"
      ],
      "MinIndexRetention": "20d"
    },
    {
      "ResourceType": "index",
      "Resource": [
        "index/auto*/gear"
      ],
      "MinIndexRetention": "24h"
    },
    {
      "ResourceType": "index",
      "Resource": [
        "index/autoparts-inventory/tires"
      ],
      "NoMinIndexRetention": true
    }
  ]
}
```

## Creating data lifecycle policies
<a name="serverless-lifecycle-create"></a>

To create a data lifecycle policy, you define rules that manage the retention and deletion of your data based on specified criteria. 

### Console
<a name="serverless-lifecycle-create-console"></a>

**To create a data lifecycle policy**

1. Sign in to the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. In the left navigation pane, choose **Data lifecycle policies**.

1. Choose **Create data lifecycle policy**.

1. Enter a descriptive name for the policy.

1. For **Data lifecycle**, choose **Add** and select the collections and indexes for the policy. 

   Start by choosing the collections to which the indexes belong. Then, either choose the index from the list or enter an index pattern. To select all collections as sources, enter an asterisk (`*`).

1. For **Data retention**, you can either choose to retain the data indefinitely, or deselect **Unlimited (never delete)** and specify a time period after which OpenSearch Serverless automatically deletes the data from Amazon S3.

1. Choose **Save**, then **Create**.

### AWS CLI
<a name="serverless-lifecycle-create-cli"></a>

To create a data lifecycle policy using the AWS CLI, use the [create-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/create-lifecycle-policy.html) command with the following options:
+ `--name` – The name of the policy.
+ `--type` – The type of policy. Currently, the only available value is `retention`.
+ `--policy` – The data lifecycle policy. This parameter accepts both inline policies and .json files. You must encode inline policies as a JSON escaped string. To provide the policy in a file, use the format `--policy file://my-policy.json`.

**Example**  

```
aws opensearchserverless create-lifecycle-policy \
  --name my-policy \
  --type retention \
  --policy "{\"Rules\":[{\"ResourceType\":\"index\",\"Resource\":[\"index/autoparts-inventory/*\"],\"MinIndexRetention\": \"81d\"},{\"ResourceType\":\"index\",\"Resource\":[\"index/sales/orders*\"],\"NoMinIndexRetention\":true}]}"
```
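The escaped inline JSON for `--policy` is easier to generate than to hand-write. A sketch in Python (`json.dumps` produces the inline form; writing the document to a file lets you pass `--policy file://my-policy.json` instead and avoid shell escaping entirely):

```
import json

# Build the policy document programmatically, mirroring the CLI example.
policy = {
    "Rules": [
        {"ResourceType": "index",
         "Resource": ["index/autoparts-inventory/*"],
         "MinIndexRetention": "81d"},
        {"ResourceType": "index",
         "Resource": ["index/sales/orders*"],
         "NoMinIndexRetention": True},
    ]
}

# Write it out for the --policy file://my-policy.json form.
with open("my-policy.json", "w") as f:
    json.dump(policy, f, indent=2)

print(json.dumps(policy))  # the inline form, ready for escaping
```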

## Updating data lifecycle policies
<a name="serverless-lifecycle-update"></a>

To update a data lifecycle policy, you can modify existing rules to reflect changes in your data retention or deletion requirements. This allows you to adapt your policies as your data management needs evolve.

There might be a few minutes of lag time between when you update the policy and when OpenSearch Serverless starts to enforce the new retention periods.

### Console
<a name="serverless-lifecycle-update-console"></a>

**To update a data lifecycle policy**

1. Sign in to the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. In the left navigation pane, choose **Data lifecycle policies**.

1. Select the data lifecycle policy that you want to update, then choose **Edit**.

1. Modify the policy using the visual editor or the JSON editor.

1. Choose **Save**.

### AWS CLI
<a name="serverless-lifecycle-update-cli"></a>

To update a data lifecycle policy using the AWS CLI, use the [update-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/update-lifecycle-policy.html) command. 

You must include the `--policy-version` parameter in the request. You can retrieve the policy version by using the [list-lifecycle-policies](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/list-lifecycle-policies.html) or [batch-get-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/batch-get-lifecycle-policy.html) commands. We recommend including the most recent policy version to prevent accidentally overwriting changes made by others.

The following request updates a data lifecycle policy with a new policy JSON document.

**Example**  

```
aws opensearchserverless update-lifecycle-policy \
  --name my-policy \
  --type retention \
  --policy-version MTY2MzY5MTY1MDA3Ml8x \
  --policy file://my-new-policy.json
```

## Deleting data lifecycle policies
<a name="serverless-lifecycle-delete"></a>

When you delete a data lifecycle policy, OpenSearch Serverless no longer enforces it on any matching indexes.

### Console
<a name="serverless-lifecycle-delete-console"></a>

**To delete a data lifecycle policy**

1. Sign in to the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. In the left navigation pane, choose **Data lifecycle policies**.

1. Select the policy that you want to delete, then choose **Delete** and confirm deletion.

### AWS CLI
<a name="serverless-lifecycle-delete-cli"></a>

To delete a data lifecycle policy using the AWS CLI, use the [delete-lifecycle-policy](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/delete-lifecycle-policy.html) command.

**Example**  

```
aws opensearchserverless delete-lifecycle-policy \
  --name my-policy \
  --type retention
```

# Using the AWS SDKs to interact with Amazon OpenSearch Serverless
<a name="serverless-sdk"></a>

This section includes examples of how to use the AWS SDKs to interact with Amazon OpenSearch Serverless. These code samples show how to create security policies and collections, and how to query collections.

**Note**  
We're currently building out these code samples. If you want to contribute a code sample (Java, Go, etc.), please open a pull request directly within the [GitHub repository](https://github.com/awsdocs/amazon-opensearch-service-developer-guide/blob/master/doc_source/serverless-sdk.md).

**Topics**
+ [Python](#serverless-sdk-python)
+ [JavaScript](#serverless-sdk-javascript)

## Python
<a name="serverless-sdk-python"></a>

The following sample script uses the [AWS SDK for Python (Boto3)](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/opensearchserverless.html), as well as the [opensearch-py](https://pypi.org/project/opensearch-py/) client for Python, to create encryption, network, and data access policies, create a matching collection, and index some sample data.

To install the required dependencies, run the following commands:

```
pip install opensearch-py
pip install boto3
pip install botocore
pip install requests-aws4auth
```

Within the script, replace the `Principal` element with the Amazon Resource Name (ARN) of the user or role that's signing the request. You can also optionally modify the `region`.

```
from opensearchpy import OpenSearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3
import botocore
import time

# Build the client using the default credential configuration.
# You can use the CLI and run 'aws configure' to set access key, secret
# key, and default region.

client = boto3.client('opensearchserverless')
service = 'aoss'
region = 'us-east-1'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key,
                   region, service, session_token=credentials.token)


def createEncryptionPolicy(client):
    """Creates an encryption policy that matches all collections beginning with tv-"""
    try:
        response = client.create_security_policy(
            description='Encryption policy for TV collections',
            name='tv-policy',
            policy="""
                {
                    \"Rules\":[
                        {
                            \"ResourceType\":\"collection\",
                            \"Resource\":[
                                \"collection\/tv-*\"
                            ]
                        }
                    ],
                    \"AWSOwnedKey\":true
                }
                """,
            type='encryption'
        )
        print('\nEncryption policy created:')
        print(response)
    except botocore.exceptions.ClientError as error:
        if error.response['Error']['Code'] == 'ConflictException':
            print(
                '[ConflictException] The policy name or rules conflict with an existing policy.')
        else:
            raise error


def createNetworkPolicy(client):
    """Creates a network policy that matches all collections beginning with tv-"""
    try:
        response = client.create_security_policy(
            description='Network policy for TV collections',
            name='tv-policy',
            policy="""
                [{
                    \"Description\":\"Public access for TV collection\",
                    \"Rules\":[
                        {
                            \"ResourceType\":\"dashboard\",
                            \"Resource\":[\"collection\/tv-*\"]
                        },
                        {
                            \"ResourceType\":\"collection\",
                            \"Resource\":[\"collection\/tv-*\"]
                        }
                    ],
                    \"AllowFromPublic\":true
                }]
                """,
            type='network'
        )
        print('\nNetwork policy created:')
        print(response)
    except botocore.exceptions.ClientError as error:
        if error.response['Error']['Code'] == 'ConflictException':
            print(
                '[ConflictException] A network policy with this name already exists.')
        else:
            raise error


def createAccessPolicy(client):
    """Creates a data access policy that matches all collections beginning with tv-"""
    try:
        response = client.create_access_policy(
            description='Data access policy for TV collections',
            name='tv-policy',
            policy="""
                [{
                    \"Rules\":[
                        {
                            \"Resource\":[
                                \"index\/tv-*\/*\"
                            ],
                            \"Permission\":[
                                \"aoss:CreateIndex\",
                                \"aoss:DeleteIndex\",
                                \"aoss:UpdateIndex\",
                                \"aoss:DescribeIndex\",
                                \"aoss:ReadDocument\",
                                \"aoss:WriteDocument\"
                            ],
                            \"ResourceType\": \"index\"
                        },
                        {
                            \"Resource\":[
                                \"collection\/tv-*\"
                            ],
                            \"Permission\":[
                                \"aoss:CreateCollectionItems\"
                            ],
                            \"ResourceType\": \"collection\"
                        }
                    ],
                    \"Principal\":[
                        \"arn:aws:iam::123456789012:role\/Admin\"
                    ]
                }]
                """,
            type='data'
        )
        print('\nAccess policy created:')
        print(response)
    except botocore.exceptions.ClientError as error:
        if error.response['Error']['Code'] == 'ConflictException':
            print(
                '[ConflictException] An access policy with this name already exists.')
        else:
            raise error


def createCollection(client):
    """Creates a collection"""
    try:
        response = client.create_collection(
            name='tv-sitcoms',
            type='SEARCH'
        )
        return(response)
    except botocore.exceptions.ClientError as error:
        if error.response['Error']['Code'] == 'ConflictException':
            print(
                '[ConflictException] A collection with this name already exists. Try another name.')
        else:
            raise error


def waitForCollectionCreation(client):
    """Waits for the collection to become active"""
    response = client.batch_get_collection(
        names=['tv-sitcoms'])
    # Periodically check collection status
    while (response['collectionDetails'][0]['status']) == 'CREATING':
        print('Creating collection...')
        time.sleep(30)
        response = client.batch_get_collection(
            names=['tv-sitcoms'])
    print('\nCollection successfully created:')
    print(response["collectionDetails"])
    # Extract the collection endpoint from the response
    host = (response['collectionDetails'][0]['collectionEndpoint'])
    final_host = host.replace("https://", "")
    indexData(final_host)


def indexData(host):
    """Create an index and add some sample data"""
    # Build the OpenSearch client
    client = OpenSearch(
        hosts=[{'host': host, 'port': 443}],
        http_auth=awsauth,
        use_ssl=True,
        verify_certs=True,
        connection_class=RequestsHttpConnection,
        timeout=300
    )
    # It can take up to a minute for data access rules to be enforced
    time.sleep(45)

    # Create index
    response = client.indices.create('sitcoms-eighties')
    print('\nCreating index:')
    print(response)

    # Add a document to the index.
    response = client.index(
        index='sitcoms-eighties',
        body={
            'title': 'Seinfeld',
            'creator': 'Larry David',
            'year': 1989
        },
        id='1',
    )
    print('\nDocument added:')
    print(response)


def main():
    createEncryptionPolicy(client)
    createNetworkPolicy(client)
    createAccessPolicy(client)
    createCollection(client)
    waitForCollectionCreation(client)


if __name__ == "__main__":
    main()
```
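After the script indexes the sample document, you can query the collection with the same signed `opensearch-py` client. The following sketch builds a basic match query; `build_query` and `search_collection` are hypothetical helpers (not part of the script above), and `search_collection` assumes the client constructed in `indexData()`:

```python
import json

def build_query(title):
    """Build a basic match query against the 'title' field."""
    return {
        "size": 5,
        "query": {
            "match": {
                "title": title
            }
        }
    }

def search_collection(client, index_name='sitcoms-eighties'):
    """Run the query using the signed OpenSearch client built in indexData()."""
    return client.search(index=index_name, body=build_query('Seinfeld'))

# The query body is plain JSON, so you can inspect it before sending:
print(json.dumps(build_query('Seinfeld')))
```

Because data access rules can take up to a minute to propagate, a query issued immediately after policy creation may be rejected even though the collection is active.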

## JavaScript
<a name="serverless-sdk-javascript"></a>

The following sample script uses the [SDK for JavaScript in Node.js](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-opensearchserverless/), as well as the [opensearch-js](https://www.npmjs.com/package/@opensearch-project/opensearch) client for JavaScript, to create encryption, network, and data access policies, create a matching collection, create an index, and index some sample data.

To install the required dependencies, run the following commands:

```
npm i aws-sdk
npm i aws4
npm i @aws-sdk/client-opensearchserverless
npm i @opensearch-project/opensearch
```

Within the script, replace the `Principal` element with the Amazon Resource Name (ARN) of the user or role that's signing the request. You can also optionally modify the `region`.

```
var AWS = require('aws-sdk');
var aws4 = require('aws4');
var {
    Client,
    Connection
} = require("@opensearch-project/opensearch");
var {
    OpenSearchServerlessClient,
    CreateSecurityPolicyCommand,
    CreateAccessPolicyCommand,
    CreateCollectionCommand,
    BatchGetCollectionCommand
} = require("@aws-sdk/client-opensearchserverless");
var client = new OpenSearchServerlessClient();

async function execute() {
    await createEncryptionPolicy(client)
    await createNetworkPolicy(client)
    await createAccessPolicy(client)
    await createCollection(client)
    await waitForCollectionCreation(client)
}

async function createEncryptionPolicy(client) {
    // Creates an encryption policy that matches all collections beginning with 'tv-'
    try {
        var command = new CreateSecurityPolicyCommand({
            description: 'Encryption policy for TV collections',
            name: 'tv-policy',
            type: 'encryption',
            policy: " \
        { \
            \"Rules\":[ \
                { \
                    \"ResourceType\":\"collection\", \
                    \"Resource\":[ \
                        \"collection\/tv-*\" \
                    ] \
                } \
            ], \
            \"AWSOwnedKey\":true \
        }"
        });
        const response = await client.send(command);
        console.log("Encryption policy created:");
        console.log(response['securityPolicyDetail']);
    } catch (error) {
        if (error.name === 'ConflictException') {
            console.log('[ConflictException] The policy name or rules conflict with an existing policy.');
        } else
            console.error(error);
    };
}

async function createNetworkPolicy(client) {
    // Creates a network policy that matches all collections beginning with 'tv-'
    try {
        var command = new CreateSecurityPolicyCommand({
            description: 'Network policy for TV collections',
            name: 'tv-policy',
            type: 'network',
            policy: " \
            [{ \
                \"Description\":\"Public access for television collection\", \
                \"Rules\":[ \
                    { \
                        \"ResourceType\":\"dashboard\", \
                        \"Resource\":[\"collection\/tv-*\"] \
                    }, \
                    { \
                        \"ResourceType\":\"collection\", \
                        \"Resource\":[\"collection\/tv-*\"] \
                    } \
                ], \
                \"AllowFromPublic\":true \
            }]"
        });
        const response = await client.send(command);
        console.log("Network policy created:");
        console.log(response['securityPolicyDetail']);
    } catch (error) {
        if (error.name === 'ConflictException') {
            console.log('[ConflictException] A network policy with that name already exists.');
        } else
            console.error(error);
    };
}

async function createAccessPolicy(client) {
    // Creates a data access policy that matches all collections beginning with 'tv-'
    try {
        var command = new CreateAccessPolicyCommand({
            description: 'Data access policy for TV collections',
            name: 'tv-policy',
            type: 'data',
            policy: " \
            [{ \
                \"Rules\":[ \
                    { \
                        \"Resource\":[ \
                            \"index\/tv-*\/*\" \
                        ], \
                        \"Permission\":[ \
                            \"aoss:CreateIndex\", \
                            \"aoss:DeleteIndex\", \
                            \"aoss:UpdateIndex\", \
                            \"aoss:DescribeIndex\", \
                            \"aoss:ReadDocument\", \
                            \"aoss:WriteDocument\" \
                        ], \
                        \"ResourceType\": \"index\" \
                    }, \
                    { \
                        \"Resource\":[ \
                            \"collection\/tv-*\" \
                        ], \
                        \"Permission\":[ \
                            \"aoss:CreateCollectionItems\" \
                        ], \
                        \"ResourceType\": \"collection\" \
                    } \
                ], \
                \"Principal\":[ \
                    \"arn:aws:iam::123456789012:role\/Admin\" \
                ] \
            }]"
        });
        const response = await client.send(command);
        console.log("Access policy created:");
        console.log(response['accessPolicyDetail']);
    } catch (error) {
        if (error.name === 'ConflictException') {
            console.log('[ConflictException] An access policy with that name already exists.');
        } else
            console.error(error);
    };
}

async function createCollection(client) {
    // Creates a collection to hold TV sitcoms indexes
    try {
        var command = new CreateCollectionCommand({
            name: 'tv-sitcoms',
            type: 'SEARCH'
        });
        const response = await client.send(command);
        return (response)
    } catch (error) {
        if (error.name === 'ConflictException') {
            console.log('[ConflictException] A collection with this name already exists. Try another name.');
        } else
            console.error(error);
    };
}

async function waitForCollectionCreation(client) {
    // Waits for the collection to become active
    try {
        var command = new BatchGetCollectionCommand({
            names: ['tv-sitcoms']
        });
        var response = await client.send(command);
        while (response.collectionDetails[0]['status'] == 'CREATING') {
            console.log('Creating collection...')
            // Wait for 30 seconds, then check the status again
            await new Promise((resolve) => setTimeout(resolve, 30000));
            response = await client.send(command);
        }
        console.log('Collection successfully created:');
        console.log(response['collectionDetails']);
        // Extract the collection endpoint from the response
        var host = (response.collectionDetails[0]['collectionEndpoint'])
        // Pass collection endpoint to index document request
        indexDocument(host)
    } catch (error) {
        console.error(error);
    };
}

async function indexDocument(host) {

    var client = new Client({
        node: host,
        Connection: class extends Connection {
            buildRequestObject(params) {
                var request = super.buildRequestObject(params)
                request.service = 'aoss';
                request.region = 'us-east-1'; // set this to your collection's Region
                var body = request.body;
                request.body = undefined;
                delete request.headers['content-length'];
                request.headers['x-amz-content-sha256'] = 'UNSIGNED-PAYLOAD';
                request = aws4.sign(request, AWS.config.credentials);
                request.body = body;

                return request
            }
        }
    });

    // Create an index
    try {
        var index_name = "sitcoms-eighties";

        var response = await client.indices.create({
            index: index_name
        });

        console.log("Creating index:");
        console.log(response.body);

        // Add a document to the index
        var document = "{ \"title\": \"Seinfeld\", \"creator\": \"Larry David\", \"year\": \"1989\" }\n";

        var response = await client.index({
            index: index_name,
            body: document
        });

        console.log("Adding document:");
        console.log(response.body);
    } catch (error) {
        console.error(error);
    };
}

execute()
```

# Using CloudFormation to create Amazon OpenSearch Serverless collections
<a name="serverless-cfn"></a>

You can use CloudFormation to create Amazon OpenSearch Serverless resources such as collections, security policies, and VPC endpoints. For the comprehensive OpenSearch Serverless CloudFormation reference, see [Amazon OpenSearch Serverless](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_OpenSearchServerless.html) in the *CloudFormation User Guide*.

The following sample CloudFormation template creates a simple data access policy, network policy, and security policy, as well as a matching collection. It's a good way to get up and running quickly with Amazon OpenSearch Serverless and provision the necessary elements to create and use a collection.

**Important**  
This example uses public network access, which isn't recommended for production workloads. We recommend using VPC access to protect your collections. For more information, see [AWS::OpenSearchServerless::VpcEndpoint](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-opensearchserverless-vpcendpoint.html) and [Data plane access through AWS PrivateLink](serverless-vpc.md).

```
AWSTemplateFormatVersion: 2010-09-09
Description: 'Amazon OpenSearch Serverless template to create an IAM user, encryption policy, data access policy and collection'
Resources:
  IAMUser:
    Type: 'AWS::IAM::User'
    Properties:
      UserName: aossadmin
  DataAccessPolicy:
    Type: 'AWS::OpenSearchServerless::AccessPolicy'
    Properties:
      Name: quickstart-access-policy
      Type: data
      Description: Access policy for quickstart collection
      Policy: !Sub >-
        [{"Description":"Access for cfn user","Rules":[{"ResourceType":"index","Resource":["index/*/*"],"Permission":["aoss:*"]},
        {"ResourceType":"collection","Resource":["collection/quickstart"],"Permission":["aoss:*"]}],
        "Principal":["arn:aws:iam::${AWS::AccountId}:user/aossadmin"]}]
  NetworkPolicy:
    Type: 'AWS::OpenSearchServerless::SecurityPolicy'
    Properties:
      Name: quickstart-network-policy
      Type: network
      Description: Network policy for quickstart collection
      Policy: >-
        [{"Rules":[{"ResourceType":"collection","Resource":["collection/quickstart"]}, {"ResourceType":"dashboard","Resource":["collection/quickstart"]}],"AllowFromPublic":true}]
  EncryptionPolicy:
    Type: 'AWS::OpenSearchServerless::SecurityPolicy'
    Properties:
      Name: quickstart-security-policy
      Type: encryption
      Description: Encryption policy for quickstart collection
      Policy: >-
        {"Rules":[{"ResourceType":"collection","Resource":["collection/quickstart"]}],"AWSOwnedKey":true}
  Collection:
    Type: 'AWS::OpenSearchServerless::Collection'
    Properties:
      Name: quickstart
      Type: TIMESERIES
      Description: Collection to hold timeseries data
    DependsOn: EncryptionPolicy
Outputs:
  IAMUser:
    Value: !Ref IAMUser
  DashboardURL:
    Value: !GetAtt Collection.DashboardEndpoint
  CollectionARN:
    Value: !GetAtt Collection.Arn
```

# Backing up collections using snapshots
<a name="serverless-snapshots"></a>

Snapshots are point-in-time backups of your Amazon OpenSearch Serverless collections that provide disaster recovery capabilities. OpenSearch Serverless automatically creates and manages snapshots of your collections, ensuring business continuity and data protection. Each snapshot contains index metadata (settings and mappings for your indexes), cluster metadata (index templates and aliases), and index data (all documents and data stored in your indexes).

OpenSearch Serverless provides automatic hourly backups with no manual configuration, zero maintenance overhead, no additional storage costs, quick recovery from accidental data loss, and the ability to restore specific indexes from a snapshot.

Before working with snapshots, note the following considerations:
+ Creating a snapshot takes time to complete and isn't instantaneous. New documents or updates made during snapshot creation aren't included in the snapshot.
+ You can restore snapshots only to their original collection, not to a new one.
+ Restored indexes receive new UUIDs that differ from their original versions.
+ Restoring to an existing open index overwrites that index's data unless you provide a new index name or a prefix pattern. This differs from OpenSearch core behavior.
+ You can run only one restore operation on a collection at a time. Attempting to restore indexes during an active restore operation causes the operation to fail.
+ During a restore operation, requests to the affected indexes fail.

## Required permissions
<a name="serverless-snapshots-permissions"></a>

To work with snapshots, configure the following permissions in your data access policy. For more information about data access policies, see [Data access policies versus IAM policies](serverless-data-access.md#serverless-data-access-vs-iam).


| Data access policy permission | APIs |
| --- | --- |
| `aoss:DescribeSnapshot` | `GET /_cat/snapshots/aoss-automated`, `GET _snapshot/aoss-automated/<snapshot-id>/` |
| `aoss:RestoreSnapshot` | `POST /_snapshot/aoss-automated/<snapshot-id>/_restore` |
| `aoss:DescribeCollectionItems` | `GET /_cat/recovery` |

You can configure policies using the following AWS CLI commands:

+ [create-access-policy](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/create-access-policy.html)
+ [delete-access-policy](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/delete-access-policy.html)
+ [get-access-policy](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/get-access-policy.html)
+ [update-access-policy](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/update-access-policy.html)

Here is a sample CLI command for creating an access policy. In the command, replace the *example* content with your specific information.

```
aws opensearchserverless create-access-policy \
--type data \
--name Example-data-access-policy \
--region aws-region \
--policy '[
  {
    "Rules": [
      {
        "Resource": [
          "collection/Example-collection"
        ],
        "Permission": [
          "aoss:DescribeSnapshot",
          "aoss:RestoreSnapshot",
          "aoss:DescribeCollectionItems"
        ],
        "ResourceType": "collection"
      }
    ],
    "Principal": [
      "arn:aws:iam::111122223333:user/UserName"
    ],
    "Description": "Data policy to support snapshot operations."
  }
]'
```
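If you manage policies from code rather than the CLI, you can assemble and validate the same policy document before sending it. The following sketch builds the policy as a JSON string; `build_snapshot_policy` is a hypothetical helper, and with boto3 you would pass the resulting string as the `policy` argument to `create_access_policy`:

```python
import json

# Snapshot-related permissions from the table above
SNAPSHOT_PERMISSIONS = [
    "aoss:DescribeSnapshot",
    "aoss:RestoreSnapshot",
    "aoss:DescribeCollectionItems",
]

def build_snapshot_policy(collection, principal_arn):
    """Build a data access policy document as serialized JSON.

    The create-access-policy API expects the policy as a JSON string.
    """
    policy = [{
        "Rules": [{
            "Resource": [f"collection/{collection}"],
            "Permission": SNAPSHOT_PERMISSIONS,
            "ResourceType": "collection",
        }],
        "Principal": [principal_arn],
        "Description": "Data policy to support snapshot operations.",
    }]
    return json.dumps(policy)

policy_json = build_snapshot_policy(
    "Example-collection", "arn:aws:iam::111122223333:user/UserName")
print(policy_json)
```

Serializing the document yourself also lets you catch malformed JSON locally instead of waiting for a `ValidationException` from the service.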

## Working with snapshots
<a name="serverless-snapshots-working-with"></a>

By default, when you create a new collection, OpenSearch Serverless automatically creates snapshots every hour. You don't need to take any action. Each snapshot includes all indexes in the collection. After OpenSearch Serverless creates snapshots, you can list them and review the details of the snapshot using the following procedures.

### List snapshots
<a name="serverless-snapshots-listing"></a>

Use the following procedures to list all snapshots in a collection and review their details.

------
#### [ Console ]

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/](https://console.aws.amazon.com/aos/).

1. In the left navigation pane, choose **Serverless**, then choose **Collections**.

1. Choose the name of your collection to open its details page.

1. Choose the **Snapshots** tab to display all generated snapshots.

1. Review the snapshot information including:
   + **Snapshot ID** - Unique identifier for the snapshot
   + **Status** - Current state (Available, In Progress)
   + **Created time** - When the snapshot was taken

------
#### [ OpenSearch API ]
+ Use the following command to list all snapshots in a collection:

  ```
  GET /_cat/snapshots/aoss-automated
  ```

  OpenSearch Serverless returns a response like the following:

  ```
  id                                 status  start_epoch start_time end_epoch  end_time    duration    indexes successful_shards failed_shards total_shards
  snapshot-ExampleSnapshotID1     SUCCESS 1737964331  07:52:11   1737964382 07:53:02    50.4s       1                                             
  snapshot-ExampleSnapshotID2     SUCCESS 1737967931  08:52:11   1737967979 08:52:59    47.7s       2                                             
  snapshot-ExampleSnapshotID3     SUCCESS 1737971531  09:52:11   1737971581 09:53:01    49.1s       3                                             
  snapshot-ExampleSnapshotID4 IN_PROGRESS 1737975131  10:52:11   -          -            4.8d       3
  ```

------
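Because the `_cat` API returns plain text rather than JSON, a client that needs to find completed snapshots has to split the rows itself. A minimal sketch in Python, using the whitespace-delimited column layout from the sample response above:

```python
# Sample rows from GET /_cat/snapshots/aoss-automated, as shown above.
# Columns: id, status, start_epoch, start_time, end_epoch, end_time, duration, indexes
cat_output = """\
snapshot-ExampleSnapshotID1     SUCCESS 1737964331  07:52:11   1737964382 07:53:02    50.4s  1
snapshot-ExampleSnapshotID2     SUCCESS 1737967931  08:52:11   1737967979 08:52:59    47.7s  2
snapshot-ExampleSnapshotID4 IN_PROGRESS 1737975131  10:52:11   -          -            4.8d  3
"""

def completed_snapshots(text):
    """Return the IDs of snapshots whose status column reads SUCCESS."""
    ids = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == "SUCCESS":
            ids.append(fields[0])
    return ids

print(completed_snapshots(cat_output))
# → ['snapshot-ExampleSnapshotID1', 'snapshot-ExampleSnapshotID2']
```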

### Get snapshot details
<a name="serverless-snapshots-get-details"></a>

Use the following procedures to retrieve detailed information about a specific snapshot.

------
#### [ Console ]

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/](https://console.aws.amazon.com/aos/).

1. In the left navigation pane, choose **Serverless**, then choose **Collections**.

1. Choose the name of your collection to open its details page.

1. Choose the **Snapshots** tab.

1. Choose the snapshot job ID to display detailed information about the snapshot, including metadata, indexes included, and timing information.

------
#### [ OpenSearch API ]
+ Use the following command to retrieve information about a snapshot. In the command, replace the *example* content with your specific information.

  ```
  GET _snapshot/aoss-automated/<snapshot-id>/
  ```

  Example Request:

  ```
  GET _snapshot/aoss-automated/snapshot-ExampleSnapshotID1/
  ```

  Example Response:

  ```
  {
      "snapshots": [
          {
              "snapshot": "snapshot-ExampleSnapshotID1-5e01-4423-9833Example",
              "uuid": "Example-5e01-4423-9833-9e9eb757Example",
              "version_id": 136327827,
              "version": "2.11.0",
              "remote_store_index_shallow_copy": true,
              "indexes": [
                  "Example-index-0117"
              ],
              "data_streams": [],
              "include_global_state": true,
              "metadata": {},
              "state": "SUCCESS",
              "start_time": "2025-01-27T09:52:11.953Z",
              "start_time_in_millis": 1737971531953,
              "end_time": "2025-01-27T09:53:01.062Z",
              "end_time_in_millis": 1737971581062,
              "duration_in_millis": 49109,
              "failures": [],
              "shards": {
                  "total": 0,
                  "failed": 0,
                  "successful": 0
              }
          }
      ]
  }
  ```

------

The snapshot response includes several key fields: `id` is the unique identifier for the snapshot operation, `status` is the current state (`SUCCESS` or `IN_PROGRESS`), `duration` is the time taken to complete the snapshot operation, and `indexes` is the number of indexes included in the snapshot.
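When you call the JSON `_snapshot` API instead of the `_cat` API, these fields arrive structured, so a readiness check becomes a dictionary lookup. A short sketch using a trimmed version of the sample response above:

```python
# Trimmed version of the GET _snapshot/aoss-automated/<snapshot-id>/ response
sample_response = {
    "snapshots": [{
        "snapshot": "snapshot-ExampleSnapshotID1-5e01-4423-9833Example",
        "state": "SUCCESS",
        "start_time_in_millis": 1737971531953,
        "end_time_in_millis": 1737971581062,
        "duration_in_millis": 49109,
        "indexes": ["Example-index-0117"],
    }]
}

def snapshot_ready(response):
    """True when the first snapshot in the response completed successfully."""
    return response["snapshots"][0]["state"] == "SUCCESS"

def duration_seconds(response):
    """Snapshot duration in seconds, derived from the millisecond timestamps."""
    snap = response["snapshots"][0]
    return (snap["end_time_in_millis"] - snap["start_time_in_millis"]) / 1000

print(snapshot_ready(sample_response), duration_seconds(sample_response))
# → True 49.109
```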

## Restoring from a snapshot
<a name="serverless-snapshots-restoring"></a>

Restoring from a snapshot recovers data from a previously taken backup. This process is crucial for disaster recovery and data management in OpenSearch Serverless. Before restoring, note the following:
+ Restored indexes receive different UUIDs than their original versions.
+ Restoring to an existing open index overwrites that index's data unless you provide a new index name or a prefix pattern.
+ Snapshots can be restored only to their original collection; cross-collection restoration isn't supported.
+ Restore operations impact collection performance, so plan accordingly.

Use the following procedures to restore backed up indexes from a snapshot.

------
#### [ Console ]

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/](https://console.aws.amazon.com/aos/).

1. In the left navigation pane, choose **Serverless**, then choose **Collections**.

1. Choose the name of your collection to open its details page.

1. Choose the **Snapshots** tab to display available snapshots.

1. Choose the snapshot you want to restore from, then choose **Restore from snapshot**.

1. In the **Restore from snapshot** dialog:
   + For **Snapshot name**, verify the selected snapshot ID.
   + For **Snapshot scope**, choose either:
     + **All indexes in collection** - Restore all indexes from the snapshot
     + **Specific indexes** - Select individual indexes to restore
   + For **Destination**, choose the collection to restore to.
   + (Optional) Configure **Rename settings** to rename restored indexes:
     + **Do not rename** - Keep original index names
     + **Add prefix to restored index names** - Add a prefix to avoid conflicts
     + **Rename using regular expression** - Use advanced renaming patterns
   + (Optional) Configure **Notification** settings to be notified when the restore completes or encounters errors.

1. Choose **Save** to start the restore operation.

------
#### [ OpenSearch API ]

1. Run the following command to identify the appropriate snapshot.

   ```
   GET /_snapshot/aoss-automated/_all
   ```

   For a smaller list of snapshots, run the following command.

   ```
   GET /_cat/snapshots/aoss-automated
   ```

1. Run the following command to verify the details of the snapshot before restoring. In the command, replace the *example* content with your specific information.

   ```
   GET _snapshot/aoss-automated/snapshot-ExampleSnapshotID1/
   ```

1. Run the following command to restore from a specific snapshot.

   ```
   POST /_snapshot/aoss-automated/snapshot-ID/_restore
   ```

   You can customize the restore operation by including a request body. Here's an example.

   ```
   POST /_snapshot/aoss-automated/snapshot-ExampleSnapshotID1-5e01-4423-9833Example/_restore
   {
     "indices": "opensearch-dashboards*,my-index*",
     "ignore_unavailable": true,
     "include_global_state": false,
     "include_aliases": false,
     "rename_pattern": "opensearch-dashboards(.+)",
     "rename_replacement": "restored-opensearch-dashboards$1"
   }
   ```

1. Run the following command to view the restore progress.

   ```
   GET /_cat/recovery
   ```

------

**Note**  
When restoring a snapshot with a command that includes a request body, you can use several parameters to control the restore behavior. The `indices` parameter specifies which indexes to restore and supports wildcard patterns. Set `ignore_unavailable` to `true` to continue the restore operation even if an index in the snapshot is missing. Use `include_global_state` to determine whether to restore the cluster state, and `include_aliases` to control whether associated aliases are restored. The `rename_pattern` and `rename_replacement` parameters rename indexes during the restore operation.
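The renaming parameters behave like a standard regular-expression substitution, so you can preview their effect locally before running a restore. The following Python sketch uses hypothetical index names; note that the restore API uses `$1`-style backreferences, while Python's `re` module uses `\1`:

```python
import re

# Hypothetical index names as they might exist in a snapshot.
snapshot_indexes = [
    "opensearch-dashboards-1",
    "opensearch-dashboards-2",
    "my-index-logs",
]

# Same pattern/replacement as the example restore request body,
# translated to Python's backreference syntax.
rename_pattern = r"opensearch-dashboards(.+)"
rename_replacement = r"restored-opensearch-dashboards\1"

# Non-matching names pass through unchanged, just as indexes
# outside the pattern keep their original names on restore.
restored_names = [
    re.sub(rename_pattern, rename_replacement, name)
    for name in snapshot_indexes
]
print(restored_names)
```

This is only a local approximation of the renaming step; the actual restore is performed by the `_restore` API shown above.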

# Zstandard Codec Support in Amazon OpenSearch Serverless
<a name="serverless-zstd-compression"></a>

Index codecs determine how an index's stored fields are compressed and stored on disk and in Amazon S3. The static `index.codec` setting specifies the compression algorithm, which affects both index shard size and index operation performance.

By default, indexes in OpenSearch Serverless use the default codec with the LZ4 compression algorithm. OpenSearch Serverless also supports `zstd` and `zstd_no_dict` codecs with configurable compression levels from 1 to 6.

**Important**  
Because `index.codec` is a static setting, you can't change it after index creation.

For more details, refer to the [OpenSearch Index Codecs documentation](https://opensearch.org/docs/latest/im-plugin/index-codecs/).

## Creating an index with ZSTD codec
<a name="serverless-zstd-create-index"></a>

You can specify the ZSTD codec during index creation using the `index.codec` setting:

```
PUT /your_index
{
  "settings": {
    "index.codec": "zstd"
  }
}
```

## Compression levels
<a name="serverless-zstd-compression-levels"></a>

ZSTD codecs support optional compression levels via the `index.codec.compression_level` setting, accepting integers in the range [1, 6]. Higher compression levels result in better compression ratios (smaller storage) but slower compression and decompression speeds. The default compression level is 3.

```
PUT /your_index
{
  "settings": {
    "index.codec": "zstd",
    "index.codec.compression_level": 2
  }
}
```

## Performance benchmarking
<a name="serverless-zstd-performance"></a>

Based on benchmark testing with the `nyc_taxi` dataset, ZSTD compression achieved 26-32% better compression compared to baseline across different combinations of `zstd`, `zstd_no_dict`, and compression levels.


| Metric | ZSTD L1 | ZSTD L6 | ZSTD_NO_DICT L1 | ZSTD_NO_DICT L6 | 
| --- | --- | --- | --- | --- | 
| Index Size Reduction | 28.10% | 32% | 26.90% | 28.70% | 
| Indexing Throughput Change | -0.50% | -23.80% | -0.50% | -5.30% | 
| Match-all Query p90 Latency Improvement | -16.40% | 29.50% | -16.40% | 23.40% | 
| Range Query p90 Latency Improvement | 90.90% | 92.40% | -282.90% | 92.50% | 
| Distance Amount p90 Agg Latency Improvement | 2% | 24.70% | 2% | 13.80% | 

For more details, refer to the [AWS OpenSearch blog](https://aws.amazon.com/blogs/big-data/optimize-storage-costs-in-amazon-opensearch-service-using-zstandard-compression/).

# Save Storage by Using Derived Source
<a name="serverless-derived-source"></a>

By default, OpenSearch Serverless stores each ingested document in the `_source` field, which contains the original JSON document body, and indexes individual fields for search. Although the `_source` field is not searchable, it is retained so that the full document can be returned by fetch requests such as get and search. When derived source is enabled, OpenSearch Serverless skips storing the `_source` field and instead reconstructs it dynamically on demand, for example during search, get, mget, reindex, or update operations. Using the derived source setting can reduce storage usage by up to 50%.

## Configuration
<a name="serverless-derived-source-config"></a>

To configure derived source for your index, create the index using the `index.derived_source.enabled` setting:

```
PUT my-index1
{
  "settings": {
    "index": {
      "derived_source": {
        "enabled": true
      }
    }
  }
}
```

## Important considerations
<a name="serverless-derived-source-considerations"></a>
+ Only certain field types are supported. For a list of supported fields and limitations, refer to the [OpenSearch documentation](https://docs.opensearch.org/latest/mappings/metadata-fields/source/#supported-fields-and-parameters). If you create an index with derived source and an unsupported field, index creation will fail. If you attempt to ingest a document with an unsupported field in a derived source-enabled index, ingestion will fail. Use this feature only when you are aware of the field types that will be added to your index.
+ The `index.derived_source.enabled` setting is static. You can't change it after the index is created.

## Limitations on query responses
<a name="serverless-derived-source-limitations"></a>

When derived source is enabled, it imposes certain limitations on how query responses are generated and returned.
+ Date fields with multiple formats specified always use the first format in the list for all requested documents, regardless of the original ingested format.
+ Geopoint values are returned in a fixed `{"lat": lat_val, "lon": lon_val}` format and may lose some precision.
+ Multi-value arrays may be sorted, and keyword fields may be deduplicated.
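The last behavior can be illustrated with a short local simulation. This Python sketch is not OpenSearch code, only an illustration of why a reconstructed multi-value keyword field may not match the ingested document byte for byte:

```python
def normalize_keyword_array(values: list[str]) -> list[str]:
    """Simulate derived-source reconstruction of a multi-value keyword
    field: values may come back sorted, with duplicates removed."""
    return sorted(set(values))

# The ingested order and the duplicate are not preserved on reconstruction.
ingested = ["zebra", "apple", "zebra", "mango"]
print(normalize_keyword_array(ingested))
```

If your application depends on array order or duplicate values in `_source`, account for this before enabling derived source.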

For more details, refer to the [OpenSearch blog](https://opensearch.org/blog/save-up-to-2x-on-storage-with-derived-source/).

## Performance benchmarking
<a name="serverless-derived-source-performance"></a>

Based on benchmark testing with the `nyc_taxi` dataset, derived source achieved a 58% reduction in index size compared to baseline.


| Metric | Derived Source | 
| --- | --- | 
| Index Size Reduction | 58.3% | 
| Indexing Throughput Change | 3.7% | 
| Indexing p90 Latency Change | 6.9% | 
| Match-all Query p90 Latency Improvement | 19% | 
| Range Query p90 Latency Improvement | -18.8% | 
| Distance Amount p90 Agg Latency Improvement | -7.3% | 

For more details, refer to the [OpenSearch blog](https://opensearch.org/blog/save-up-to-2x-on-storage-with-derived-source/).

# Amazon OpenSearch Serverless collection groups
<a name="serverless-collection-groups"></a>

*Collection groups* in Amazon OpenSearch Serverless organize multiple collections and enable compute resource sharing across collections with different KMS keys. This shared compute model reduces costs by eliminating the need for separate OpenSearch Compute Units (OCUs) for each KMS key.

Each OpenSearch Serverless collection you create is protected with encryption of data at rest using AWS KMS to store and manage your encryption keys. Collections within the same collection group share compute resources and OCU memory space, even when they use different KMS keys for encryption.

Collection groups provide isolation for security and performance requirements. You can group collections with the same KMS key into a single collection group for security isolation, or combine collections with different KMS keys in the same group for cost optimization. This flexibility lets you balance security requirements with resource efficiency.

When you add a collection to a collection group, OpenSearch Serverless assigns it to the group's shared compute resources. The system automatically manages the distribution of workloads across these resources while maintaining security by encrypting each collection's data with its designated KMS key. Access controls continue to apply at the collection level, and the shared compute resources access multiple KMS keys as needed to serve the collections in the group.

You control resource allocation by setting minimum and maximum OCU limits at the collection group level. These limits apply to all collections in the group and help you manage costs while ensuring consistent performance.

By default, there is a service quota (limit) for the number of collections in a collection group, the number of indexes in a collection, and the number of OCUs in a collection group. For more information, see [OpenSearch Serverless quotas](https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html#opensearch-limits-serverless).

## Key concepts
<a name="collection-groups-concepts"></a>

Collection group  
An AWS resource that acts as a container for one or more collections. Each collection group has a unique identifier and can have its own capacity limits and configuration settings.

Shared compute  
Collections within the same collection group share the same set of OCUs, regardless of the KMS keys they use. This sharing reduces costs by eliminating the need for separate compute resources for each KMS key.

Capacity limits  
You can set minimum and maximum OCU limits for both indexing and search operations at the collection group level. These limits help control costs and ensure consistent performance.

# Collection group capacity limits
<a name="collection-groups-capacity-limits"></a>

Collection groups provide granular control over resource allocation through minimum and maximum OCU limits. These limits apply to all collections within the group and operate independently from account-level capacity settings.

By default, there is a service quota (limit) for the number of collections in a collection group, the number of indexes in a collection, and the number of OCUs in a collection group. For more information, see [OpenSearch Serverless quotas](https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html#opensearch-limits-serverless).

## Understanding collection group capacity limits
<a name="collection-groups-capacity-overview"></a>

You can configure minimum and maximum OCU limits for both indexing and search operations at the collection group level. These limits control how OpenSearch Serverless scales resources for collections in the group:
+ **Minimum OCU** – The minimum number of OCUs that OpenSearch Serverless maintains for the collection group, ensuring consistent baseline performance.
  + If the workload requires fewer OCUs than the specified minimum, OpenSearch Serverless still maintains the minimum number of OCUs, and billing reflects that minimum.
  + If the workload requires more OCUs than the specified minimum, OpenSearch Serverless provisions the capacity the workload requires, and billing reflects the higher OCU utilization.
+ **Maximum OCU** – The maximum number of OCUs that OpenSearch Serverless can scale up to for the collection group, helping you control costs.

Collection group capacity limits are decoupled from account-level limits. Account-level maximum OCU settings apply only to collections not associated with any collection group, while collection group maximum OCU settings apply to collections within that specific group.

## Valid capacity limit values
<a name="collection-groups-capacity-values"></a>

When setting minimum and maximum OCU limits for a collection group, you can only use values from the following set: 1, 2, 4, 8, 16, and multiples of 16 (such as 32, 48, 64, 80, 96) up to a maximum of 1,696 OCUs.

Both minimum and maximum OCU limits are optional when you create a collection group. If you don't specify a maximum OCU limit, OpenSearch Serverless uses a default value of 96 OCUs.

The minimum OCU limit must be less than or equal to the maximum OCU limit.

## Understanding the relationship between account-level and collection group OCU limits
<a name="collection-groups-capacity-relationship"></a>

When planning your OpenSearch Serverless capacity, it's important to understand how account-level OCU limits and collection group OCU limits interact. The sum of the maximum OCU settings across all collection groups plus the maximum OCU setting at the account level must be less than or equal to the service quota limit per account. For current limit values, see [OpenSearch Serverless quotas](https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html#opensearch-limits-serverless).

**Note**  
Account-level maximum OCU settings apply only to collections not associated with any collection group. Collections within collection groups are governed by their respective collection group limits, not the account-level limits.

This constraint applies to both indexing and search OCUs independently. For example, if you configure account-level settings and collection groups, you must ensure the total doesn't exceed the service quota limit for indexing OCUs and separately doesn't exceed the service quota limit for search OCUs. Additionally, you can create a maximum of 300 collection groups per account.

**Example: Planning capacity with account-level and collection group limits**  
If you set the account-level maximum search OCU to 500 and the service quota limit is 1,700:
+ If you create 2 collection groups, the sum of the maximum OCU settings for those groups must be no more than 1,200 (1,700 - 500).
+ You could leave each collection group at the default maximum OCU of 96 (96 + 96 + 500 = 692), leaving headroom for future growth.
+ Or you could increase each collection group's maximum to 600 (600 + 600 + 500 = 1,700), using the full capacity allowed by the service quota.

This relationship is critical for capacity planning. Before creating new collection groups or increasing maximum OCU limits, verify that your total allocation doesn't exceed the service quota limit. If you reach this limit, you must either reduce the maximum OCU settings on existing collection groups or decrease your account-level maximum OCU settings to make room for new allocations.
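The planning rule can be checked with simple arithmetic before you change any limits. A Python sketch using this section's example numbers (1,700 is the example quota from above, not a guaranteed service limit):

```python
def fits_quota(account_max: int, group_maxes: list[int], quota: int) -> bool:
    """Return True if the account-level maximum plus all collection group
    maximums fit within the per-account service quota."""
    return account_max + sum(group_maxes) <= quota

quota = 1700        # example service quota for search OCUs
account_max = 500   # account-level maximum search OCU

print(fits_quota(account_max, [96, 96], quota))    # defaults leave headroom
print(fits_quota(account_max, [600, 600], quota))  # exactly at the quota
print(fits_quota(account_max, [600, 700], quota))  # would exceed the quota
```

Run the same check separately for indexing OCUs, since the indexing and search constraints apply independently.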

## Configuring capacity limits
<a name="collection-groups-capacity-configure"></a>

You can set capacity limits when you create a collection group or update them later. To configure capacity limits using the AWS CLI, use the [CreateCollectionGroup](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_CreateCollectionGroup.html) or [UpdateCollectionGroup](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_UpdateCollectionGroup.html) operations:

```
aws opensearchserverless create-collection-group \
    --name my-collection-group \
    --capacity-limits maxIndexingCapacityInOCU=32,maxSearchCapacityInOCU=32,minIndexingCapacityInOCU=4,minSearchCapacityInOCU=4
```

To update capacity limits for an existing collection group:

```
aws opensearchserverless update-collection-group \
    --id abcdef123456 \
    --capacity-limits maxIndexingCapacityInOCU=48,maxSearchCapacityInOCU=48,minIndexingCapacityInOCU=8,minSearchCapacityInOCU=8
```

## Monitoring collection group capacity
<a name="collection-groups-capacity-monitoring"></a>

OpenSearch Serverless emits the following Amazon CloudWatch Logs metrics at one-minute intervals to help you monitor OCU utilization and capacity limits at the collection group level:
+ `IndexingOCU` – The number of indexing OCUs currently in use by the collection group.
+ `SearchOCU` – The number of search OCUs currently in use by the collection group.

OpenSearch Serverless also emits OCU metrics at the account level for collections not associated with any collection group. You can aggregate these metrics in CloudWatch to visualize the sum of OCUs across all collection groups and account-level collections.

Configure alarms to notify you when your collection group approaches its capacity limits so you can adjust settings as needed. For more information about OpenSearch Serverless metrics, see [Monitoring Amazon OpenSearch Serverless](serverless-monitoring.md).
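One simple alarming strategy is to alert when OCU usage crosses a fixed fraction of the group's maximum. The arithmetic, as a Python sketch (the 80% threshold is an arbitrary example, not an AWS recommendation):

```python
def alarm_threshold(max_ocu: int, fraction: float = 0.8) -> float:
    """OCU level at which a capacity alarm should fire, as a fraction
    of the collection group's configured maximum."""
    return max_ocu * fraction

# With a collection group maximum of 96 search OCUs, alert at 76.8 OCUs
# so you have time to raise the limit before scaling is capped.
print(alarm_threshold(96))
```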

## How capacity limits are enforced
<a name="collection-groups-capacity-enforcement"></a>

OpenSearch Serverless enforces collection group capacity limits during scaling operations. When your collections need additional resources, OpenSearch Serverless scales up to the maximum OCU limit. When demand decreases, OpenSearch Serverless scales down but maintains at least the minimum OCU limit to ensure consistent performance.

Capacity limits are enforced only when the collection group contains at least one collection. Empty collection groups do not consume OCUs or enforce capacity limits.

If a scaling operation would exceed the maximum OCU limit or violate the minimum OCU requirement, OpenSearch Serverless rejects the operation to maintain compliance with your configured limits.
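The enforcement described above amounts to clamping the demanded capacity into the configured range, as this Python sketch illustrates:

```python
def scaled_ocus(demand: float, min_ocu: float, max_ocu: float) -> float:
    """OpenSearch Serverless scales toward demand, but never below the
    minimum OCU limit and never above the maximum OCU limit."""
    return max(min_ocu, min(demand, max_ocu))

print(scaled_ocus(demand=10, min_ocu=4, max_ocu=32))   # within limits: follow demand
print(scaled_ocus(demand=2, min_ocu=4, max_ocu=32))    # below minimum: hold at 4
print(scaled_ocus(demand=100, min_ocu=4, max_ocu=32))  # above maximum: cap at 32
```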

# Encryption and KMS keys in collection groups
<a name="collection-groups-kms-keys"></a>

Each OpenSearch Serverless collection you create is protected with encryption of data at rest using AWS KMS to store and manage your encryption keys. When working with collection groups, you have flexibility in how you specify the KMS key for your collections.

You can provide the KMS key associated with a collection in two ways:
+ **In the CreateCollection request** – Specify the KMS key directly when you create the collection using the `encryption-config` parameter.
+ **In security policies** – Define the KMS key association in an encryption security policy.

When you specify a KMS key in both locations, the KMS key provided in the CreateCollection request takes precedence over the security policy configuration.

This flexibility simplifies managing collections at scale, particularly when you need to create multiple collections with unique KMS keys. Instead of creating and managing thousands of encryption policies, you can specify the KMS key directly during collection creation.

## Sharing OCUs across different KMS keys
<a name="collection-groups-kms-sharing"></a>

Collection groups enable compute resource sharing across collections with different KMS keys. Collections in the same collection group share OCU memory space, regardless of their encryption keys. This shared compute model reduces costs by eliminating the need for separate OCUs for each KMS key.

Collection groups provide isolation for security and performance requirements. You can group collections with the same KMS key into a single collection group for security isolation, or combine collections with different KMS keys in the same group for cost optimization. This flexibility lets you balance security requirements with resource efficiency.

The system maintains security by encrypting each collection's data with its designated KMS key. Access controls continue to apply at the collection level, and the shared compute resources access multiple KMS keys as needed to serve the collections in the group.

## Required KMS permissions
<a name="collection-groups-kms-permissions"></a>

When you specify a KMS key in the CreateCollection request, you need the following additional permissions:
+ `kms:DescribeKey` – Allows OpenSearch Serverless to retrieve information about the KMS key.
+ `kms:CreateGrant` – Allows OpenSearch Serverless to create a grant for the KMS key to enable encryption operations.

These permissions are not required when using AWS owned keys.
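A minimal identity-based IAM policy statement granting these two permissions might look like the following sketch. The key ARN is a placeholder; scope the `Resource` to your own KMS key:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:CreateGrant"
      ],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
    }
  ]
}
```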

# Create collection groups
<a name="serverless-collection-groups-procedures"></a>

This topic describes how to create, configure, and manage collection groups in Amazon OpenSearch Serverless. Use collection groups to organize collections and share compute resources to optimize costs. Set minimum and maximum OCU limits at the collection group level to control performance and spending.

## Create a collection group
<a name="collection-groups-create"></a>

Use the following procedures to create a new collection group and configure its settings. You can create a collection group using the OpenSearch Serverless console, the AWS CLI, or the AWS SDKs. When you create a collection group, you specify capacity limits and other configuration options.

------
#### [ Console ]

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/](https://console.aws.amazon.com/aos/).

1. In the left navigation pane, choose **Serverless**, then choose **Collection groups**.

1. Choose **Create collection group**.

1. For **Collection group name**, enter a name for your collection group. The name must be 3-32 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.

1. (Optional) For **Description**, enter a description for your collection group.

1. In the **Capacity management** section, configure the OCU limits:
   + **Maximum indexing capacity** – The maximum number of indexing OCUs that collections in this group can scale up to.
   + **Maximum search capacity** – The maximum number of search OCUs that collections in this group can scale up to.
   + **Minimum indexing capacity** – The minimum number of indexing OCUs to maintain for consistent performance.
   + **Minimum search capacity** – The minimum number of search OCUs to maintain for consistent performance.

1. (Optional) In the **Tags** section, add tags to help organize and identify your collection group.

1. Choose **Create collection group**.

------
#### [ AWS CLI ]
+ Use the [create-collection-group](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/create-collection-group.html) command to create a new collection group. In the command, replace the *example* content with your own specific information.

  ```
  aws opensearchserverless create-collection-group \
      --name my-collection-group \
      --description "Collection group for production workloads" \
      --capacity-limits maxIndexingCapacityInOCU=20,maxSearchCapacityInOCU=20,minIndexingCapacityInOCU=2,minSearchCapacityInOCU=2 \
      --tags key=Environment,value=Production key=Team,value=DataEngineering
  ```

  The command returns details about the created collection group, including its unique ID and ARN.

------

## Add a new collection to a collection group
<a name="create-collection-in-group"></a>

When creating a new collection, specify an existing collection group name to associate the collection with. Use the following procedures to add a new collection to a collection group. 

------
#### [ Console ]

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/](https://console.aws.amazon.com/aos/).

1. In the left navigation pane, choose **Serverless**, then choose **Collections**.

1. Choose **Create collection**.

1. For **Collection name**, enter a name for your collection. The name must be 3-28 characters long, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.

1. (Optional) For **Description**, enter a description for your collection.

1. In the **Collection group** section, select the collection group you want the collection to be assigned to. A collection can only belong to one collection group at a time.

   (Optional) You can also choose **Create a new group**. This navigates you to the **Create collection group** workflow. After you finish creating the collection group, return to step 1 of this procedure to begin creating your new collection.

1. Continue through the workflow to create the collection.
**Important**  
Do not navigate away from the **Create collection** workflow. Doing so will stop the collection setup. The **Collection details** page appears after setup completes.

------
#### [ AWS CLI ]
+ Use the [create-collection](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/create-collection.html) command to create a new collection and add it to an existing collection group. In the command, replace the *example* content with your own specific information.

  ```
  aws opensearchserverless create-collection \
      --name my-collection \
      --type SEARCH \
      --collection-group-name my-collection-group \
      --description "Collection for search workloads"
  ```

------

# Manage Amazon OpenSearch Serverless collection groups
<a name="manage-collection-group"></a>

After creating Amazon OpenSearch Serverless collection groups, you can modify their settings as your needs change. Use these management operations to update capacity limits and view collection group details. These changes help you optimize resource allocation and maintain efficient organization of your collections.

## View collection groups
<a name="view-collection-groups"></a>

Display your OpenSearch Serverless collection groups to review their configurations, associated collections, and current status.

------
#### [ Console ]

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/](https://console.aws.amazon.com/aos/).

1. In the left navigation pane, choose **Serverless**, then choose **Collections**.

1. Choose the **Collection groups** tab. Your account's collection groups are displayed.

1. Choose the **Name** of a collection group to display its details.

------
#### [ AWS CLI ]
+ Use the [list-collection-groups](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/list-collection-groups.html) command to list all collection groups in your account. Use the [batch-get-collection-group](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/batch-get-collection-group.html) command to view details about specific collection groups. In the following commands, replace the *example* content with your own specific information.

  To list all collection groups:

  ```
  aws opensearchserverless list-collection-groups
  ```

  To get details about specific collection groups:

  ```
  aws opensearchserverless batch-get-collection-group \
      --names my-collection-group another-group
  ```

------

## Update collection group settings
<a name="update-collection-group"></a>

Update your OpenSearch Serverless collection group settings to modify configurations such as capacity limits and description.

------
#### [ Console ]

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/](https://console.aws.amazon.com/aos/).

1. In the left navigation pane, choose **Serverless**, then choose **Collections**.

1. Choose the **Collection groups** tab. Your account's collection groups are displayed.

1. Choose the **Name** of a collection group to display its details.

1. In **Collection group details**, choose **Edit**.

1. Make any changes, then choose **Save**.

------
#### [ AWS CLI ]
+ Use the [update-collection-group](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/update-collection-group.html) command to update the description and capacity limits of an existing collection group. In the following command, replace the *example* content with your own information.

  ```
  aws opensearchserverless update-collection-group \
      --id abcdef123456 \
      --description "Updated description for production workloads" \
      --capacity-limits maxIndexingCapacityInOCU=30,maxSearchCapacityInOCU=30,minIndexingCapacityInOCU=4,minSearchCapacityInOCU=4
  ```

------

Changes to capacity limits take effect immediately and might impact the scaling behavior of collections in the group.

## Delete collection groups
<a name="delete-collection-group"></a>

Before you can delete a collection group, you must first remove all collections from the group. You cannot delete a collection group that contains collections.

------
#### [ Console ]

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/](https://console.aws.amazon.com/aos/).

1. In the left navigation pane, choose **Serverless**, then choose **Collections**.

1. Choose the **Collection groups** tab. Your account's collection groups are displayed.

1. Choose the **Name** of the collection group you want to delete.
**Important**  
Remove all collections from the collection group by updating each collection to remove the collection group association or by moving them to other collection groups.

1. At the top of the page, choose **Delete**.

1. Confirm deletion, then choose **Delete**.

------
#### [ AWS CLI ]
+ Use the [delete-collection-group](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/delete-collection-group.html) command to delete a collection group.
**Important**  
Remove all collections from the collection group by updating each collection to remove the collection group association or by moving them to other collection groups.

  In the following command, replace the *example* content with your own information.

  Delete the empty collection group:

  ```
  aws opensearchserverless delete-collection-group \
      --id abcdef123456
  ```

------

# Managing capacity limits for Amazon OpenSearch Serverless
<a name="serverless-scaling"></a>

With Amazon OpenSearch Serverless, you don't have to manage capacity yourself. OpenSearch Serverless automatically scales the compute capacity for your account based on the current workload. Serverless compute capacity is measured in *OpenSearch Compute Units* (OCUs). Each OCU is a combination of 6 GiB of memory and corresponding virtual CPU (vCPU), as well as data transfer to Amazon S3. For more information about the decoupled architecture in OpenSearch Serverless, see [How it works](serverless-overview.md#serverless-process).

When you create your first collection, OpenSearch Serverless instantiates OCUs based on your redundancy settings. By default, redundant active replicas are enabled, which instantiates four OCUs (two for indexing and two for search). This ensures high availability with standby nodes in another Availability Zone.

For development and testing, you can disable the **Enable redundancy** setting for a collection. This removes standby replicas and uses only two OCUs (one for indexing and one for search).

These OCUs always exist, even when there's no indexing or search activity. All subsequent collections can share these OCUs, except for collections with unique AWS KMS keys, which instantiate their own set of OCUs. All collections associated with a collection group can share the same set of OCUs. Only one type of collection (search, time series, or vector search) can be included in a single collection group. For more information, see [Amazon OpenSearch Serverless collection groups](serverless-collection-groups.md).

OpenSearch Serverless automatically scales out and adds OCUs as your indexing and search usage grows. When traffic decreases, capacity scales back down to the minimum number of OCUs required for your data size.

For search and time series collections, the number of OCUs required when idle is proportional to data size and index count. For vector collections, OCU requirements depend on the memory (RAM) needed to store vector graphs and the disk space needed to store indexes. When a collection isn't idle, OCU requirements account for both factors.

Vector collections store index data in OCU local storage. Because OCU RAM limits are reached before disk limits, vector collections are typically constrained by available RAM rather than disk space.

With redundancy enabled, OCU capacity scales down to a minimum of 1 OCU (0.5 OCU x 2) for indexing and 1 OCU (0.5 OCU x 2) for search. When you disable redundancy, your collection can scale down to 0.5 OCU for indexing and 0.5 OCU for search.

Scaling also factors in the number of shards needed for your collection or index. Each OCU supports a specified number of shards, and the number of indexes should be proportional to the shard count. The total number of base OCUs required is the maximum of your data, memory, and shard requirements. For more information, see [Amazon OpenSearch Serverless cost-effective search capabilities, at any scale](https://aws.amazon.com/blogs/big-data/amazon-opensearch-serverless-cost-effective-search-capabilities-at-any-scale/) on the *AWS Big Data Blog*.
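That sizing rule, taking the maximum of the individual requirements, can be sketched in Python. The input values here are hypothetical; the service's actual per-OCU data, memory, and shard formulas are not part of this example:

```python
import math

def base_ocus(data_ocus: float, memory_ocus: float, shard_ocus: float) -> int:
    """Total base OCUs required: the maximum of the data, memory,
    and shard requirements, rounded up to a whole OCU."""
    return math.ceil(max(data_ocus, memory_ocus, shard_ocus))

# Hypothetical example: the shard requirement dominates,
# so it determines the base capacity.
print(base_ocus(data_ocus=2.5, memory_ocus=3.0, shard_ocus=4.2))
```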

For *search* and *vector search* collections, all data is stored on hot indexes to ensure fast query response times. *Time series* collections use a combination of hot and warm storage, keeping the most recent data in hot storage to optimize query response times for more frequently accessed data. For more information, see [Choosing a collection type](serverless-overview.md#serverless-usecase). 

**Note**  
A vector search collection can't share OCUs with *search* and *time series* collections, even if the vector search collection uses the same KMS key as those collections. A new set of OCUs is created for your first vector search collection. Vector search collections that use the same KMS key share OCUs with each other.

To manage capacity for your collections and to control costs, you can specify the overall maximum indexing and search capacity for the current account and Region, and OpenSearch Serverless scales out your collection resources automatically based on these specifications.

Because indexing and search capacity scale separately, you specify account-level limits for each:
+ **Maximum indexing capacity** – OpenSearch Serverless can increase indexing capacity up to this number of OCUs.
+ **Maximum search capacity** – OpenSearch Serverless can increase search capacity up to this number of OCUs.

**Note**  
At this time, capacity settings only apply at the account level. You can't configure per-collection capacity limits.

Your goal should be to ensure that the maximum capacity is high enough to handle spikes in workload. Based on your settings, OpenSearch Serverless automatically scales out the number of OCUs for your collections to process the indexing and search workload.

**Topics**
+ [Configuring capacity settings](#serverless-scaling-configure)
+ [Maximum capacity limits](#serverless-scaling-limits)
+ [Monitoring capacity usage](#serverless-scaling-monitoring)

## Configuring capacity settings
<a name="serverless-scaling-configure"></a>

To configure capacity settings in the OpenSearch Serverless console, expand **Serverless** in the left navigation pane and select **Dashboard**. Specify the maximum indexing and search capacity under **Capacity management**:

![\[Capacity management dashboard showing indexing and search capacity graphs with 10 OCU limits.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/ServerlessCapacity.png)


To configure capacity using the AWS CLI, send an [UpdateAccountSettings](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_UpdateAccountSettings.html) request:

```
aws opensearchserverless update-account-settings \
    --capacity-limits '{ "maxIndexingCapacityInOCU": 8,"maxSearchCapacityInOCU": 9 }'
```

## Maximum capacity limits
<a name="serverless-scaling-limits"></a>

A collection can contain a maximum of 1,000 indexes. For all three types of collections, the default maximum capacity is 10 OCUs for indexing and 10 OCUs for search. The minimum capacity allowed for an account is 1 OCU (0.5 OCU x 2) for indexing and 1 OCU (0.5 OCU x 2) for search. For all collections, the maximum allowed capacity is 1,700 OCUs for indexing and 1,700 OCUs for search. You can configure the OCU count to any number from 2 to the maximum allowed capacity, in multiples of 2.

Each OCU includes enough hot ephemeral storage for 120 GiB of index data. OpenSearch Serverless supports up to 1 TiB of data per index in *search* and *vector search* collections, and 100 TiB of hot data per index in a *time series* collection. For time series collections, you can still ingest more data, which can be stored as warm data in S3.
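The storage figures above lend themselves to a quick sanity check. The following sketch expresses the documented 120 GiB of hot storage per OCU as a lower bound on OCU count for a given data size (a rough floor only; memory and shard requirements can push the actual count higher):

```python
# Quick check: minimum OCUs needed just to hold a given amount of hot data,
# based on the documented 120 GiB of hot ephemeral storage per OCU.
import math

HOT_STORAGE_PER_OCU_GIB = 120

def min_indexing_ocus_for_storage(total_hot_data_gib: float) -> int:
    """Lower bound on OCUs required to hold the hot index data."""
    return math.ceil(total_hot_data_gib / HOT_STORAGE_PER_OCU_GIB)

# 1 TiB (1,024 GiB) of hot data needs at least ceil(1024/120) = 9 OCUs of storage
print(min_indexing_ocus_for_storage(1024))  # → 9
```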

For a list of all quotas, see [OpenSearch Serverless quotas](https://docs.aws.amazon.com/general/latest/gr/opensearch-service.html#opensearch-limits-serverless).

## Monitoring capacity usage
<a name="serverless-scaling-monitoring"></a>

You can monitor the `SearchOCU` and `IndexingOCU` account-level CloudWatch metrics to understand how your collections are scaling. We recommend that you configure alarms to notify you if your account is approaching a threshold for metrics related to capacity, so you can adjust your capacity settings accordingly.
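For example, you might alarm when average search capacity approaches your configured maximum. The following boto3 sketch defines such an alarm against the `AWS/AOSS` namespace; the 80% threshold, evaluation window, and SNS topic ARN are illustrative assumptions, and the call to CloudWatch is wrapped in a function so the definition can be reviewed before anything is sent.

```python
# Sketch: alarm when average SearchOCU usage exceeds 80% of a 10-OCU maximum.
# The threshold, periods, and SNS topic ARN below are placeholder assumptions.
alarm_params = {
    'AlarmName': 'aoss-search-ocu-approaching-max',
    'Namespace': 'AWS/AOSS',           # OpenSearch Serverless metric namespace
    'MetricName': 'SearchOCU',
    'Statistic': 'Average',
    'Period': 300,                     # 5-minute periods
    'EvaluationPeriods': 3,
    'Threshold': 8,                    # 80% of a 10-OCU maximum
    'ComparisonOperator': 'GreaterThanThreshold',
    'AlarmActions': ['arn:aws:sns:us-east-1:123456789012:my-topic'],
}

def create_capacity_alarm():
    """Send the alarm definition to CloudWatch (requires AWS credentials)."""
    import boto3
    boto3.client('cloudwatch', region_name='us-east-1').put_metric_alarm(**alarm_params)
```

Pair this with a matching alarm on `IndexingOCU` to cover both capacity dimensions.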

You can also use these metrics to determine if your maximum capacity settings are appropriate, or if you need to adjust them. Analyze these metrics to focus your efforts for optimizing the efficiency of your collections. For more information about the metrics that OpenSearch Serverless sends to CloudWatch, see [Monitoring Amazon OpenSearch Serverless](serverless-monitoring.md).

# Ingesting data into Amazon OpenSearch Serverless collections
<a name="serverless-clients"></a>

These sections provide details about the supported ingest pipelines for data ingestion into Amazon OpenSearch Serverless collections. They also cover some of the clients that you can use to interact with the OpenSearch API operations. Your clients must be compatible with OpenSearch 2.x to integrate with OpenSearch Serverless.

**Topics**
+ [Minimum required permissions](#serverless-ingestion-permissions)
+ [OpenSearch Ingestion](#serverless-osis-ingestion)
+ [Fluent Bit](#serverless-fluentbit)
+ [Amazon Data Firehose](#serverless-kdf)
+ [Go](#serverless-go)
+ [Java](#serverless-java)
+ [JavaScript](#serverless-javascript)
+ [Logstash](#serverless-logstash)
+ [Python](#serverless-python)
+ [Ruby](#serverless-ruby)
+ [Signing HTTP requests with other clients](#serverless-signing)

## Minimum required permissions
<a name="serverless-ingestion-permissions"></a>

To ingest data into an OpenSearch Serverless collection, the principal that writes the data must be assigned the following minimum permissions in a [data access policy](serverless-data-access.md):

```
[
   {
      "Rules":[
         {
            "ResourceType":"index",
            "Resource":[
               "index/target-collection/logs"
            ],
            "Permission":[
               "aoss:CreateIndex",
               "aoss:WriteDocument",
               "aoss:UpdateIndex"
            ]
         }
      ],
      "Principal":[
         "arn:aws:iam::123456789012:user/my-user"
      ]
   }
]
```

The permissions can be broader if you plan to write to additional indexes. For example, rather than specifying a single target index, you can allow permission to all indexes (index/*target-collection*/\*), or to a subset of indexes (index/*target-collection*/*logs\**).
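As a hedged sketch, the broader rule variants described above can be assembled programmatically and validated as JSON before you attach them to a collection; the principal ARN below is the sample value used in this section.

```python
# Build the broader index rules described above and confirm they serialize to
# valid JSON. The principal ARN is the sample placeholder from this section.
import json

policy = [{
    "Rules": [{
        "ResourceType": "index",
        # all indexes in the collection, plus any index starting with "logs"
        "Resource": ["index/target-collection/*", "index/target-collection/logs*"],
        "Permission": ["aoss:CreateIndex", "aoss:WriteDocument", "aoss:UpdateIndex"],
    }],
    "Principal": ["arn:aws:iam::123456789012:user/my-user"],
}]

document = json.dumps(policy, indent=2)
print(document)
```

You could then pass the serialized document to the `create-access-policy` operation of your tooling of choice.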

For a reference of all available OpenSearch API operations and their associated permissions, see [Supported operations and plugins in Amazon OpenSearch Serverless](serverless-genref.md).

## OpenSearch Ingestion
<a name="serverless-osis-ingestion"></a>

Rather than using a third-party client to send data directly to an OpenSearch Serverless collection, you can use Amazon OpenSearch Ingestion. You configure your data producers to send data to OpenSearch Ingestion, and it automatically delivers the data to the collection that you specify. You can also configure OpenSearch Ingestion to transform your data before delivering it. For more information, see [Overview of Amazon OpenSearch Ingestion](ingestion.md).

An OpenSearch Ingestion pipeline needs permission to write to an OpenSearch Serverless collection that is configured as its sink. These permissions include the ability to describe the collection and send HTTP requests to it. For instructions to use OpenSearch Ingestion to add data to a collection, see [Granting Amazon OpenSearch Ingestion pipelines access to collections](pipeline-collection-access.md).

To get started with OpenSearch Ingestion, see [Tutorial: Ingesting data into a collection using Amazon OpenSearch Ingestion](osis-serverless-get-started.md).

## Fluent Bit
<a name="serverless-fluentbit"></a>

You can use the [AWS for Fluent Bit image](https://github.com/aws/aws-for-fluent-bit#public-images) and the [OpenSearch output plugin](https://docs.fluentbit.io/manual/pipeline/outputs/opensearch) to ingest data into OpenSearch Serverless collections.

**Note**  
You must have version 2.30.0 or later of the AWS for Fluent Bit image to integrate with OpenSearch Serverless.

**Example configuration**:

This sample output section of the configuration file shows how to use an OpenSearch Serverless collection as a destination. The important addition is the `AWS_Service_Name` parameter, which is `aoss`. `Host` is the collection endpoint.

```
[OUTPUT]
    Name  opensearch
    Match *
    Host  collection-endpoint.us-west-2.aoss.amazonaws.com
    Port  443
    Index  my_index
    Trace_Error On
    Trace_Output On
    AWS_Auth On
    AWS_Region <region>
    AWS_Service_Name aoss
    tls     On
    Suppress_Type_Name On
```

## Amazon Data Firehose
<a name="serverless-kdf"></a>

Firehose supports OpenSearch Serverless as a delivery destination. For instructions to send data into OpenSearch Serverless, see [Creating a Kinesis Data Firehose Delivery Stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html) and [Choose OpenSearch Serverless for Your Destination](https://docs.aws.amazon.com/firehose/latest/dev/create-destination.html#create-destination-opensearch-serverless) in the *Amazon Data Firehose Developer Guide*.

The IAM role that you provide to Firehose for delivery must be specified within a data access policy with the `aoss:WriteDocument` minimum permission for the target collection, and you must have a preexisting index to send data to. For more information, see [Minimum required permissions](#serverless-ingestion-permissions).

Before you send data to OpenSearch Serverless, you might need to perform transforms on the data. To learn more about using Lambda functions to perform this task, see [Amazon Kinesis Data Firehose Data Transformation](https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html) in the same guide.

## Go
<a name="serverless-go"></a>

The following sample code uses the [opensearch-go](https://github.com/opensearch-project/opensearch-go) client for Go to establish a secure connection to the specified OpenSearch Serverless collection and create a single index. You must provide a value for the collection `endpoint`, and replace the Region and credential placeholders.

```
package main

import (
  "context"
  "log"
  "strings"
  "github.com/aws/aws-sdk-go-v2/aws"
  "github.com/aws/aws-sdk-go-v2/config"
  opensearch "github.com/opensearch-project/opensearch-go/v2"
  opensearchapi "github.com/opensearch-project/opensearch-go/v2/opensearchapi"
  requestsigner "github.com/opensearch-project/opensearch-go/v2/signer/awsv2"
)

const endpoint = "" // serverless collection endpoint

func main() {
	ctx := context.Background()

	awsCfg, err := config.LoadDefaultConfig(ctx,
		config.WithRegion("<AWS_REGION>"),
		config.WithCredentialsProvider(
			getCredentialProvider("<AWS_ACCESS_KEY>", "<AWS_SECRET_ACCESS_KEY>", "<AWS_SESSION_TOKEN>"),
		),
	)
	if err != nil {
		log.Fatal(err) // don't log.fatal in a production-ready app
	}

	// create an AWS request Signer and load AWS configuration using default config folder or env vars.
	signer, err := requestsigner.NewSignerWithService(awsCfg, "aoss") // "aoss" for Amazon OpenSearch Serverless
	if err != nil {
		log.Fatal(err) // don't log.fatal in a production-ready app
	}

	// create an opensearch client and use the request-signer
	client, err := opensearch.NewClient(opensearch.Config{
		Addresses: []string{endpoint},
		Signer:    signer,
	})
	if err != nil {
		log.Fatal("client creation err", err)
	}

	indexName := "go-test-index"

	// define index mapping
	mapping := strings.NewReader(`{
	  "settings": {
	    "index": {
	      "number_of_shards": 4
	    }
	  }
	}`)

	// create an index
	createIndex := opensearchapi.IndicesCreateRequest{
		Index: indexName,
		Body:  mapping,
	}
	createIndexResponse, err := createIndex.Do(context.Background(), client)
	if err != nil {
		log.Println("Error ", err.Error())
		log.Println("failed to create index ", err)
		log.Fatal("create response body read err", err)
	}
	log.Println(createIndexResponse)

	// delete the index
	deleteIndex := opensearchapi.IndicesDeleteRequest{
		Index: []string{indexName},
	}

	deleteIndexResponse, err := deleteIndex.Do(context.Background(), client)
	if err != nil {
		log.Println("failed to delete index ", err)
		log.Fatal("delete index response body read err", err)
	}
	log.Println("deleting index", deleteIndexResponse)
}

func getCredentialProvider(accessKey, secretAccessKey, token string) aws.CredentialsProviderFunc {
	return func(ctx context.Context) (aws.Credentials, error) {
		c := &aws.Credentials{
			AccessKeyID:     accessKey,
			SecretAccessKey: secretAccessKey,
			SessionToken:    token,
		}
		return *c, nil
	}
}
```

## Java
<a name="serverless-java"></a>

The following sample code uses the [opensearch-java](https://search.maven.org/artifact/org.opensearch.client/opensearch-java) client for Java to establish a secure connection to the specified OpenSearch Serverless collection and create a single index. You must provide values for `region` and `host`.

The important difference compared to OpenSearch Service *domains* is the service name (`aoss` instead of `es`).

```
// imports for the client, the SigV4 transport, and the index request/response types
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.opensearch.indices.CreateIndexRequest;
import org.opensearch.client.opensearch.indices.CreateIndexResponse;
import org.opensearch.client.opensearch.indices.DeleteIndexRequest;
import org.opensearch.client.opensearch.indices.DeleteIndexResponse;
import org.opensearch.client.transport.aws.AwsSdk2Transport;
import org.opensearch.client.transport.aws.AwsSdk2TransportOptions;
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;

SdkHttpClient httpClient = ApacheHttpClient.builder().build();
// create an opensearch client and use the request-signer
OpenSearchClient client = new OpenSearchClient(
    new AwsSdk2Transport(
        httpClient,
        "...us-west-2.aoss.amazonaws.com", // serverless collection endpoint
        "aoss", // signing service name
        Region.US_WEST_2, // signing service region
        AwsSdk2TransportOptions.builder().build()
    )
);

String index = "sample-index";

// create an index
CreateIndexRequest createIndexRequest = new CreateIndexRequest.Builder().index(index).build();
CreateIndexResponse createIndexResponse = client.indices().create(createIndexRequest);
System.out.println("Create index response: " + createIndexResponse);

// delete the index
DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest.Builder().index(index).build();
DeleteIndexResponse deleteIndexResponse = client.indices().delete(deleteIndexRequest);
System.out.println("Delete index response: " + deleteIndexResponse);

httpClient.close();
```

The following sample code again establishes a secure connection, and then searches an index.

```
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.opensearch.generic.Requests;
import org.opensearch.client.opensearch.generic.Response;
import org.opensearch.client.transport.aws.AwsSdk2Transport;
import org.opensearch.client.transport.aws.AwsSdk2TransportOptions;
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.regions.Region;

SdkHttpClient httpClient = ApacheHttpClient.builder().build();

OpenSearchClient client = new OpenSearchClient(
    new AwsSdk2Transport(
        httpClient,
        "...us-west-2.aoss.amazonaws.com", // serverless collection endpoint
        "aoss", // signing service name
        Region.US_WEST_2, // signing service region
        AwsSdk2TransportOptions.builder().build()
    )
);

Response response = client.generic()
    .execute(
        Requests.builder()
            .endpoint("/" + "users" + "/_search?typed_keys=true")
            .method("GET")
            .json("{"
                + "    \"query\": {"
                + "        \"match_all\": {}"
                + "    }"
                + "}")
            .build());

httpClient.close();
```

## JavaScript
<a name="serverless-javascript"></a>

The following sample code uses the [opensearch-js](https://www.npmjs.com/package/@opensearch-project/opensearch) client for JavaScript to establish a secure connection to the specified OpenSearch Serverless collection, create a single index, add a document, and delete the index. You must provide values for `node` and `region`.

The important difference compared to OpenSearch Service *domains* is the service name (`aoss` instead of `es`).

------
#### [ Version 3 ]

This example uses [version 3](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/) of the SDK for JavaScript in Node.js.

```
const { defaultProvider } = require('@aws-sdk/credential-provider-node');
const { Client } = require('@opensearch-project/opensearch');
const { AwsSigv4Signer } = require('@opensearch-project/opensearch/aws');

async function main() {
    // create an opensearch client and use the request-signer
    const client = new Client({
        ...AwsSigv4Signer({
            region: 'us-west-2',
            service: 'aoss',
            getCredentials: () => {
                const credentialsProvider = defaultProvider();
                return credentialsProvider();
            },
        }),
        node: '' // serverless collection endpoint
    });

    const index = 'movies';

    // create index if it doesn't already exist
    if (!(await client.indices.exists({ index })).body) {
        console.log((await client.indices.create({ index })).body);
    }

    // add a document to the index
    const document = { foo: 'bar' };
    const response = await client.index({
        id: '1',
        index: index,
        body: document,
    });
    console.log(response.body);

    // delete the index
    console.log((await client.indices.delete({ index })).body);
}

main();
```

------
#### [ Version 2 ]

This example uses [version 2](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/) of the SDK for JavaScript in Node.js.

```
const AWS = require('aws-sdk');
const { Client } = require('@opensearch-project/opensearch');
const { AwsSigv4Signer } = require('@opensearch-project/opensearch/aws');

async function main() {
    // create an opensearch client and use the request-signer
    const client = new Client({
        ...AwsSigv4Signer({
            region: 'us-west-2',
            service: 'aoss',
            getCredentials: () =>
                new Promise((resolve, reject) => {
                    AWS.config.getCredentials((err, credentials) => {
                        if (err) {
                            reject(err);
                        } else {
                            resolve(credentials);
                        }
                    });
                }),
        }),
        node: '' // serverless collection endpoint
    });

    const index = 'movies';

    // create index if it doesn't already exist
    if (!(await client.indices.exists({ index })).body) {
        console.log((await client.indices.create({
            index
        })).body);
    }

    // add a document to the index
    const document = {
        foo: 'bar'
    };
    const response = await client.index({
        id: '1',
        index: index,
        body: document,
    });
    console.log(response.body);

    // delete the index
    console.log((await client.indices.delete({ index })).body);
}

main();
```

------

## Logstash
<a name="serverless-logstash"></a>

You can use the [Logstash OpenSearch plugin](https://github.com/opensearch-project/logstash-output-opensearch) to publish logs to OpenSearch Serverless collections.

**To use Logstash to send data to OpenSearch Serverless**

1. Install version *2.0.0 or later* of the [logstash-output-opensearch](https://github.com/opensearch-project/logstash-output-opensearch) plugin using Docker or Linux.

------
#### [ Docker ]

   Docker hosts the Logstash OSS software with the OpenSearch output plugin preinstalled: [opensearchproject/logstash-oss-with-opensearch-output-plugin](https://hub.docker.com/r/opensearchproject/logstash-oss-with-opensearch-output-plugin/tags?page=1&ordering=last_updated&name=8.4.0). You can pull the image just like any other image:

   ```
   docker pull opensearchproject/logstash-oss-with-opensearch-output-plugin:latest
   ```

------
#### [ Linux ]

   First, [install the latest version of Logstash](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html) if you haven't already. Then, install version 2.0.0 or later of the output plugin:

   ```
   cd logstash-8.5.0/
   bin/logstash-plugin install --version 2.0.0 logstash-output-opensearch
   ```

   If the plugin is already installed, update it to the latest version:

   ```
   bin/logstash-plugin update logstash-output-opensearch 
   ```

   Version 2.0.0 and later of the plugin uses version 3 of the AWS SDK. If you're using a Logstash version earlier than 8.4.0, you must remove any preinstalled AWS plugins and install the `logstash-integration-aws` plugin:

   ```
   /usr/share/logstash/bin/logstash-plugin remove logstash-input-s3
   /usr/share/logstash/bin/logstash-plugin remove logstash-input-sqs
   /usr/share/logstash/bin/logstash-plugin remove logstash-output-s3
   /usr/share/logstash/bin/logstash-plugin remove logstash-output-sns
   /usr/share/logstash/bin/logstash-plugin remove logstash-output-sqs
   /usr/share/logstash/bin/logstash-plugin remove logstash-output-cloudwatch
   
   /usr/share/logstash/bin/logstash-plugin install --version 0.1.0.pre logstash-integration-aws
   ```

------

1. In order for the OpenSearch output plugin to work with OpenSearch Serverless, you must make the following modifications to the `opensearch` output section of logstash.conf:
   + Specify `aoss` as the `service_name` under `auth_type`.
   + Specify your collection endpoint for `hosts`.
   + Add the parameters `default_server_major_version` and `legacy_template`. These parameters are required for the plugin to work with OpenSearch Serverless.

   ```
   output {
     opensearch {
       hosts => "collection-endpoint:443"
       auth_type => {
         ...
         service_name => 'aoss'
       }
       default_server_major_version => 2
       legacy_template => false
     }
   }
   ```

   This example configuration file takes its input from files in an S3 bucket and sends them to an OpenSearch Serverless collection:

   ```
   input {
     s3  {
       bucket => "my-s3-bucket"
       region => "us-east-1"
     }
   }
   
   output {
     opensearch {
       ecs_compatibility => disabled
       hosts => "https://my-collection-endpoint.us-east-1.aoss.amazonaws.com:443"
       index => "my-index"
       auth_type => {
         type => 'aws_iam'
         aws_access_key_id => 'your-access-key'
         aws_secret_access_key => 'your-secret-key'
         region => 'us-east-1'
         service_name => 'aoss'
       }
       default_server_major_version => 2
       legacy_template => false
     }
   }
   ```

1. Then, run Logstash with the new configuration to test the plugin:

   ```
   bin/logstash -f config/test-plugin.conf
   ```

## Python
<a name="serverless-python"></a>

The following sample code uses the [opensearch-py](https://pypi.org/project/opensearch-py/) client for Python to establish a secure connection to the specified OpenSearch Serverless collection, create a single index, and search that index. You must provide values for `region` and `host`.

The important difference compared to OpenSearch Service *domains* is the service name (`aoss` instead of `es`).

```
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth
import boto3

host = ''  # serverless collection endpoint, without https://
region = ''  # e.g. us-east-1

service = 'aoss'
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, region, service)

# create an opensearch client and use the request-signer
client = OpenSearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
    pool_maxsize=20,
)

# create an index
index_name = 'books-index'
create_response = client.indices.create(
    index_name
)

print('\nCreating index:')
print(create_response)

# index a document
document = {
  'title': 'The Green Mile',
  'director': 'Frank Darabont',
  'year': '1999'
}

response = client.index(
    index = 'books-index',
    body = document,
    id = '1'
)

# search for the document
search_response = client.search(
    body = {'query': {'match': {'title': 'Mile'}}},
    index = index_name
)

print('\nSearch results:')
print(search_response)

# delete the index
delete_response = client.indices.delete(
    index_name
)

print('\nDeleting index:')
print(delete_response)
```

## Ruby
<a name="serverless-ruby"></a>

The `opensearch-aws-sigv4` gem provides access to OpenSearch Serverless, along with OpenSearch Service, out of the box. It offers all the features of the [opensearch-ruby](https://rubygems.org/gems/opensearch-ruby) client, which it includes as a dependency.

When instantiating the Sigv4 signer, specify `aoss` as the service name:

```
require 'opensearch-aws-sigv4'
require 'aws-sigv4'

signer = Aws::Sigv4::Signer.new(service: 'aoss',
                                region: 'us-west-2',
                                access_key_id: 'key_id',
                                secret_access_key: 'secret')

# create an opensearch client and use the request-signer
client = OpenSearch::Aws::Sigv4Client.new(
  { host: 'https://your.amz-opensearch-serverless.endpoint',
    log: true },
  signer)

# create an index
index = 'prime'
client.indices.create(index: index)

# insert data
client.index(index: index, id: '1', body: { name: 'Amazon Echo', 
                                            msrp: '5999', 
                                            year: 2011 })

# query the index
client.search(body: { query: { match: { name: 'Echo' } } })

# delete index entry
client.delete(index: index, id: '1')

# delete the index
client.indices.delete(index: index)
```

## Signing HTTP requests with other clients
<a name="serverless-signing"></a>

The following requirements apply when [signing requests](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) to OpenSearch Serverless collections with other HTTP clients.
+ You must specify the service name as `aoss`.
+ The `x-amz-content-sha256` header is required for all AWS Signature Version 4 requests. It provides a hash of the request payload. If there's a request payload, set the value to its SHA-256 cryptographic hash. If there's no request payload, set the value to `e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855`, which is the hash of an empty string.
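
The empty-payload constant can be reproduced with any SHA-256 implementation; for example, in Python:

```python
# Compute the x-amz-content-sha256 value for a request payload.
# Hashing an empty payload yields the well-known constant quoted above.
import hashlib

def payload_hash(body: bytes) -> str:
    return hashlib.sha256(body).hexdigest()

assert payload_hash(b'') == (
    'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
)
print(payload_hash(b'{"title": "Shawshank Redemption"}'))
```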

**Topics**
+ [Indexing with cURL](#serverless-signing-curl)
+ [Indexing with Postman](#serverless-signing-postman)

### Indexing with cURL
<a name="serverless-signing-curl"></a>

The following example request uses the Client URL Request Library (cURL) to send a single document to an index named `movies-index` within a collection:

```
curl -XPOST \
    --user "$AWS_ACCESS_KEY_ID":"$AWS_SECRET_ACCESS_KEY" \
    --aws-sigv4 "aws:amz:us-east-1:aoss" \
    --header "x-amz-content-sha256: $REQUEST_PAYLOAD_SHA_HASH" \
    --header "x-amz-security-token: $AWS_SESSION_TOKEN" \
    "https://my-collection-endpoint.us-east-1.aoss.amazonaws.com/movies-index/_doc" \
    -H "Content-Type: application/json" -d '{"title": "Shawshank Redemption"}'
```

### Indexing with Postman
<a name="serverless-signing-postman"></a>

The following image shows how to send a request to a collection using Postman. For instructions to authenticate, see [Authenticate with AWS Signature authentication workflow in Postman](https://learning.postman.com/docs/sending-requests/authorization/aws-signature/).

![\[JSON response showing creation of a "movies-index" with successful result and no shards.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/ServerlessPostman.png)


# Configure Machine Learning on Amazon OpenSearch Serverless
<a name="serverless-configure-machine-learning"></a>

## Machine Learning
<a name="serverless-configure-machine-learning-what-is"></a>

Machine Learning (ML) provides ML capabilities in the form of ML algorithms and remote models. With access to these models, you can run AI workflows such as retrieval-augmented generation (RAG) and semantic search. ML supports experimentation and production deployment of generative AI use cases using the latest externally hosted models, which you configure with connectors. After you create a connector, you register it with a model, and then deploy the model to run predictions.

## Connectors
<a name="serverless-configure-machine-learning-connectors"></a>

Connectors facilitate access to models hosted on third-party ML platforms. They serve as the gateway between your OpenSearch cluster and a remote model. For more information, see the following documentation:
+ [Creating connectors for third-party ML platforms](https://docs.opensearch.org/latest/ml-commons-plugin/remote-models/connectors/) on the *OpenSearch Documentation* website
+ [Connectors for external platforms](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ml-external-connector.html)
+ [Connectors for AWS services](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ml-amazon-connector.html)
**Important**  
When you create a trust policy, add **ml.opensearchservice.amazonaws.com** as the OpenSearch Service principal.
Skip the steps on the [Connectors](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ml-amazon-connector.html) page that describe how to configure a domain in the policy.
Add the `iam:PassRole` statement in the [Configure permissions](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ml-amazon-connector.html#connector-sagemaker-prereq) step.
Skip the **Map the ML role** step in OpenSearch Dashboards. Backend role configuration is not required. This applies to [Connectors for AWS services](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ml-amazon-connector.html), and to [Connectors for external platforms](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ml-external-connector.html).
In your SigV4 request to the collection endpoint, set the service name to **aoss** instead of **es**.

## Models
<a name="serverless-configure-machine-learning-models"></a>

A model is the core functionality that's used across various AI workflows. Generally, you associate the connector with a model to perform prediction using the connector. After a model is in the deployed state, you can run prediction. For more information, see [Register a model hosted on a third-party platform](https://docs.opensearch.org/latest/ml-commons-plugin/api/model-apis/register-model/#register-a-model-hosted-on-a-third-party-platform) on the *OpenSearch Documentation* website.

**Note**  
Not all model features are supported on OpenSearch Serverless, such as local models. For more information, see [Unsupported Machine Learning APIs and features](serverless-machine-learning-unsupported-features.md).

## Configure permissions for Machine Learning
<a name="serverless-configure-machine-learning-permissions"></a>

The following section describes the collection data access policies required for Machine Learning (ML). Replace the *placeholder values* with your specific information. For more information, see [Supported policy permissions](serverless-data-access.md#serverless-data-supported-permissions).

```
{
    "Rules": [
        {
            "Resource": [
                "model/collection_name/*"
            ],
            "Permission": [
                "aoss:DescribeMLResource",
                "aoss:CreateMLResource",
                "aoss:UpdateMLResource",
                "aoss:DeleteMLResource",
                "aoss:ExecuteMLResource"
            ],
            "ResourceType": "model"
        }
    ],
    "Principal": [
        "arn:aws:iam::account_id:role/role_name"
    ],
    "Description": "ML full access policy for collection_name"
}
```
+ **aoss:DescribeMLResource** – Grants permission to search and query connectors, models, and model groups.
+ **aoss:CreateMLResource** – Grants permission to create connectors, models, and model groups.
+ **aoss:UpdateMLResource** – Grants permission to update connectors, models, and model groups.
+ **aoss:DeleteMLResource** – Grants permission to delete connectors, models, and model groups.
+ **aoss:ExecuteMLResource** – Grants permission to perform predictions on models.

# Unsupported Machine Learning APIs and features
<a name="serverless-machine-learning-unsupported-features"></a>

## Unsupported APIs
<a name="serverless-unsupported-ml-api"></a>

The following Machine Learning (ML) APIs are not supported on Amazon OpenSearch Serverless:
+ Local Model functionality
+ Model Train API
+ Model Predict algorithm API
+ Model Batch Predict API
+ Agents API and its corresponding tools
+ MCP Server APIs
+ Memory APIs
+ Controller APIs
+ Execute Algorithm API
+ ML Profile API
+ ML stats API

For more information about ML APIs, see [ML APIs](https://docs.opensearch.org/latest/ml-commons-plugin/api/index/) on the *OpenSearch Documentation* website.

## Unsupported features
<a name="serverless-unsupported-ml-features"></a>

The following ML features are not supported on Amazon OpenSearch Serverless:
+ Agents and tools
+ Local models
+ The ML Inference processor within Search and Ingest Pipelines
  + ML Inference Ingest Processor
  + ML Inference Search Response Processor
  + ML Inference Search Request Processor

For more information about these features, see the following documentation on the *OpenSearch Documentation* website:
+ [Machine learning](https://docs.opensearch.org/latest/ml-commons-plugin)
+ [ML inference processor](https://docs.opensearch.org/latest/ingest-pipelines/processors/ml-inference/)
+ [Search pipelines](https://docs.opensearch.org/latest/search-plugins/search-pipelines/index/)

# Configure Neural Search and Hybrid Search on OpenSearch Serverless
<a name="serverless-configure-neural-search"></a>

## Neural Search
<a name="serverless-configure-neural-search-what-is"></a>

Amazon OpenSearch Serverless supports Neural Search functionality for semantic search operations on your data. Neural Search uses machine learning models to understand the semantic meaning and context of your queries, providing more relevant search results than traditional keyword-based searches. This section explains how to configure Neural Search in OpenSearch Serverless, including the required permissions, supported processors, and key differences from the standard OpenSearch implementation.

With Neural Search you can perform semantic search on your data, which considers semantic meaning to understand the intent of your search queries. This capability is powered by the following components:
+ Text embedding ingest pipeline processor
+ Neural query
+ Neural sparse query
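For example, a text embedding ingest pipeline converts a source text field into a vector field at ingest time. In this sketch, the pipeline name, model ID, and field names are hypothetical placeholders:

```
PUT /_ingest/pipeline/nlp-ingest-pipeline
{
  "description": "Generates text embeddings at ingest time",
  "processors": [
    {
      "text_embedding": {
        "model_id": "your_model_id",
        "field_map": {
          "text": "passage_embedding"
        }
      }
    }
  ]
}
```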

## Hybrid Search
<a name="serverless-configure-hybrid-search"></a>

With hybrid search, you can improve search relevance by combining keyword and semantic search capabilities. To use hybrid search, create a search pipeline that processes your search results and combines document scores. For more information, see [Search pipelines](https://docs.opensearch.org/latest/search-plugins/search-pipelines/index/) on the *OpenSearch Documentation* website. Use the following components to implement hybrid search:
+ Normalization search pipeline processor

**Supported normalization techniques**
  + `min_max`
  + `l2`

**Supported combination techniques**
  + `arithmetic_mean`
  + `geometric_mean`
  + `harmonic_mean`

  For more information about normalization and combination techniques, see [Request body fields](https://docs.opensearch.org/latest/search-plugins/search-pipelines/normalization-processor/#request-body-fields) on the *OpenSearch Documentation* website.
+ Hybrid query
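For illustration, a search pipeline that normalizes and combines the keyword and semantic score sets might look like the following sketch. The pipeline name and weights are hypothetical placeholders:

```
PUT /_search/pipeline/hybrid-search-pipeline
{
  "description": "Combines keyword and semantic scores",
  "phase_results_processors": [
    {
      "normalization-processor": {
        "normalization": {
          "technique": "min_max"
        },
        "combination": {
          "technique": "arithmetic_mean",
          "parameters": {
            "weights": [0.3, 0.7]
          }
        }
      }
    }
  ]
}
```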

## Neural and hybrid queries
<a name="serverless-configure-neural-search-hybrid-queries"></a>

By default, OpenSearch calculates document scores using the keyword-based Okapi BM25 algorithm, which works well for search queries that contain keywords. Neural Search provides new query types for natural language queries and the ability to combine both semantic and keyword search.

**Example: `neural`**  

```
"neural": {
  "vector_field": {
    "query_text": "query_text",
    "query_image": "image_binary",
    "model_id": "model_id",
    "k": 100
  }
}
```

For more information, see [Neural query](https://docs.opensearch.org/latest/query-dsl/specialized/neural/) on the *OpenSearch Documentation* website.

**Example: `hybrid`**  

```
"hybrid": {
      "queries": [
        array of lexical, neural, or combined queries
      ]
    }
```

For more information, see [Hybrid query](https://docs.opensearch.org/latest/query-dsl/compound/hybrid/) on the *OpenSearch Documentation* website.
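Putting these together, a hybrid request might combine a lexical `match` clause with a `neural` clause. This sketch assumes a vector index with a `text` field and a `passage_embedding` vector field, a registered model ID, and a previously created search pipeline with a normalization processor; all names are hypothetical placeholders:

```
GET /my-vector-index/_search?search_pipeline=hybrid-search-pipeline
{
  "query": {
    "hybrid": {
      "queries": [
        {
          "match": {
            "text": {
              "query": "wild west"
            }
          }
        },
        {
          "neural": {
            "passage_embedding": {
              "query_text": "wild west",
              "model_id": "your_model_id",
              "k": 5
            }
          }
        }
      ]
    }
  }
}
```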

To configure semantic search components in Amazon OpenSearch Serverless, follow the steps in the [Neural Search tutorial](https://docs.opensearch.org/latest/tutorials/vector-search/neural-search-tutorial/) on the *OpenSearch Documentation* website. Keep in mind these important differences:
+ OpenSearch Serverless supports only remote models. You must configure connectors to remotely hosted models. You don't need to deploy or remove remote models. For more information, see [Getting started with semantic and hybrid search](https://docs.opensearch.org/latest/tutorials/vector-search/neural-search-tutorial/) on the *OpenSearch Documentation* website.
+ Expect up to 15 seconds of latency when you search against your vector index or search for recently created search and ingest pipelines.

## Configure permissions
<a name="serverless-configure-neural-search-permissions"></a>

Neural Search in OpenSearch Serverless requires the following permissions. For more information, see [Supported policy permissions](serverless-data-access.md#serverless-data-supported-permissions).

**Example: Neural search policy**  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "NeuralSearch",
            "Effect": "Allow",
            "Action": [
              "aoss:CreateIndex",
              "aoss:CreateCollection",
              "aoss:UpdateCollection",
              "aoss:DeleteIndex",
              "aoss:DeleteCollection"
            ],
            "Resource": "arn:aws:aoss:us-east-1:111122223333:collection/your-collection-name"
        }
    ]
}
```
+ **aoss:\*Index** – Creates and manages the vector index where text embeddings are stored.
+ **aoss:\*CollectionItems** – Creates and manages ingest and search pipelines.
+ **aoss:\*MLResource** – Creates and registers text embedding models.
+ **aoss:APIAccessAll** – Provides access to OpenSearch APIs for search and ingest operations.

The following describes the collection data access policies required for neural search. Replace the *placeholder values* with your specific information.

**Example: Data access policy**  

```
[
    {
        "Description": "Create index permission",
        "Rules": [
            {
                "ResourceType": "index",
                "Resource": ["index/collection_name/*"],
                "Permission": [
                  "aoss:CreateIndex", 
                  "aoss:DescribeIndex",
                  "aoss:UpdateIndex",
                  "aoss:DeleteIndex"
                ]
            }
        ],
        "Principal": [
            "arn:aws:iam::account_id:role/role_name"
        ]
    },
    {
        "Description": "Create pipeline permission",
        "Rules": [
            {
                "ResourceType": "collection",
                "Resource": ["collection/collection_name"],
                "Permission": [
                  "aoss:CreateCollectionItems",
                  "aoss:DescribeCollectionItems",
                  "aoss:UpdateCollectionItems",
                  "aoss:DeleteCollectionItems"
                ]
            }
        ],
        "Principal": [
            "arn:aws:iam::account_id:role/role_name"
        ]
    },
    {
        "Description": "Create model permission",
        "Rules": [
            {
                "ResourceType": "model",
                "Resource": ["model/collection_name/*"],
                "Permission": ["aoss:CreateMLResources"]
            }
        ],
        "Principal": [
            "arn:aws:iam::account_id:role/role_name"
        ]
    }
]
```

# Configure Workflows on Amazon OpenSearch Serverless
<a name="serverless-configure-workflows"></a>

## Workflows
<a name="serverless-configure-workflows-what-is"></a>

Workflows help builders create AI applications on OpenSearch. The current process of using machine learning (ML) offerings in OpenSearch, such as semantic search, requires complex setup and pre-processing tasks, along with verbose user queries, both of which can be time-consuming and error-prone. Workflows provide a simplification framework that chains multiple OpenSearch API calls into a single, automated configuration.

For setup and usage, see [Automating configurations](https://docs.opensearch.org/docs/latest/automating-configurations/index/) on the *OpenSearch* website. When you use Workflows in OpenSearch Serverless, consider these important differences:
+ OpenSearch Serverless uses only remote models in workflow steps. You don't need to deploy these models.
+ OpenSearch Serverless doesn't support the **Re-index** Workflow step.
+ When you search **Workflows** and **Workflow states** after other API calls, expect up to 15 seconds of latency for updates to appear.

OpenSearch Serverless collections support Workflows when a collection is used as a data source in your OpenSearch UI application. For more information, see [Managing data source associations](application-data-sources-and-vpc.md).
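As a sketch of the underlying Flow Framework API calls (the `use_case` value is one of the default templates documented by OpenSearch, and `<workflow_id>` is a placeholder returned by the create call):

```
# Create a workflow from a default use case template
POST /_plugins/_flow_framework/workflow?use_case=semantic_search

# Provision the resources that the workflow defines
POST /_plugins/_flow_framework/workflow/<workflow_id>/_provision

# Check the workflow state (allow up to 15 seconds for updates to appear)
GET /_plugins/_flow_framework/workflow/<workflow_id>/_status
```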

## Configure permissions
<a name="serverless-configure-workflows-permissions"></a>

Before you create and provision a template, verify that you have the required permissions. If you need assistance, contact your account administrator. OpenSearch Serverless Workflows require the following permissions. You can scope permissions to a specific collection by defining the collection resource ARN in your IAM policy.

**Example: Workflows policy**  

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "NeuralSearch",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::111122223333:role/Cognito_identitypoolname/Auth_Role"
        ]
      },
      "Action": [
        "aoss:CreateIndex",
        "aoss:CreateCollection",
        "aoss:UpdateCollection",
        "aoss:DeleteIndex",
        "aoss:DeleteCollection"
      ],
      "Resource": "arn:aws:aoss:us-east-1:111122223333:collection/your-collection-name"
    }
  ]
}
```
+ **aoss:\*CollectionItems** – Grants permission to create and manage templates, and to provision [search and ingest pipelines](serverless-configure-neural-search.md).
+ **aoss:\*Index** – Grants permission to create and delete indices using OpenSearch API operations.
+ **aoss:\*MLResource** – Grants permission to provision workflow steps that use [machine learning resources](serverless-configure-machine-learning.md).

# Overview of security in Amazon OpenSearch Serverless
<a name="serverless-security"></a>

Security in Amazon OpenSearch Serverless differs fundamentally from security in Amazon OpenSearch Service in the following ways:


| Feature | OpenSearch Service | OpenSearch Serverless | 
| --- | --- | --- | 
| Data access control | Data access is determined by IAM policies and fine-grained access control. | Data access is determined by data access policies. | 
| Encryption at rest | Encryption at rest is optional for domains. | Encryption at rest is required for collections. | 
| Security setup and administration | You must configure network, encryption, and data access individually for each domain. | You can use security policies to manage security settings for multiple collections at scale. | 

The following diagram illustrates the security components that make up a functional collection. A collection must have an assigned encryption key, network access settings, and a matching data access policy that grants permission to its resources.

![\[Diagram showing encryption, network, data access, and authentication policies for a collection.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-security.png)


**Topics**
+ [Encryption policies](#serverless-security-encryption)
+ [Network policies](#serverless-security-network)
+ [Data access policies](#serverless-security-data-access)
+ [IAM and SAML authentication](#serverless-security-authentication)
+ [Infrastructure security](#serverless-infrastructure-security)
+ [Getting started with security in Amazon OpenSearch Serverless](serverless-tutorials.md)
+ [Identity and Access Management for Amazon OpenSearch Serverless](security-iam-serverless.md)
+ [Encryption in Amazon OpenSearch Serverless](serverless-encryption.md)
+ [Network access for Amazon OpenSearch Serverless](serverless-network.md)
+ [FIPS compliance in Amazon OpenSearch Serverless](fips-compliance-opensearch-serverless.md)
+ [Data access control for Amazon OpenSearch Serverless](serverless-data-access.md)
+ [Data plane access through AWS PrivateLink](serverless-vpc.md)
+ [Control plane access through AWS PrivateLink](serverless-vpc-cp.md)
+ [SAML authentication for Amazon OpenSearch Serverless](serverless-saml.md)
+ [Compliance validation for Amazon OpenSearch Serverless](serverless-compliance-validation.md)

## Encryption policies
<a name="serverless-security-encryption"></a>

[Encryption policies](serverless-encryption.md) define whether your collections are encrypted with an AWS owned key or a customer managed key. Encryption policies consist of two components: a **resource pattern** and an **encryption key**. The resource pattern defines which collection or collections the policy applies to. The encryption key determines how the associated collections will be secured.

To apply a policy to multiple collections, you include a wildcard (\*) in the policy rule. For example, the following policy applies to all collections with names that begin with "logs".

![\[Input field for specifying a prefix term or collection name, with "logs*" entered.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-security-encryption.png)


Encryption policies streamline the process of creating and managing collections, especially when you do so programmatically. You can create a collection by specifying a name, and an encryption key is automatically assigned to it upon creation. 
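For reference, the JSON form of such a policy mirrors what you pass when creating an encryption policy programmatically. In this sketch, the collection prefix is illustrative:

```
{
  "Rules": [
    {
      "ResourceType": "collection",
      "Resource": ["collection/logs*"]
    }
  ],
  "AWSOwnedKey": true
}
```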

## Network policies
<a name="serverless-security-network"></a>

[Network policies](serverless-network.md) define whether your collections are accessible privately, or over the internet from public networks. Private collections can be accessed through OpenSearch Serverless–managed VPC endpoints, or by specific AWS services such as Amazon Bedrock using *AWS service private access*. Just like encryption policies, network policies can apply to multiple collections, which allows you to manage network access for many collections at scale.

Network policies consist of two components: an **access type** and a **resource type**. The access type can either be public or private. The resource type determines whether the access you choose applies to the collection endpoint, the OpenSearch Dashboards endpoint, or both.

![\[Access type and resource type options for configuring network policies in OpenSearch.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-security-network.png)


If you plan to configure VPC access within a network policy, you must first create one or more [OpenSearch Serverless-managed VPC endpoints](serverless-vpc.md). These endpoints let you access OpenSearch Serverless as if it were in your VPC, without the use of an internet gateway, NAT device, VPN connection, or Direct Connect connection.

Private access to AWS services can only apply to the collection's OpenSearch endpoint, not to the OpenSearch Dashboards endpoint. AWS services cannot be granted access to OpenSearch Dashboards.
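For reference, a network policy that grants public access to both the collection endpoint and the OpenSearch Dashboards endpoint for a set of collections might look like the following sketch (the collection prefix is illustrative):

```
[
  {
    "Description": "Public access for log analytics collections",
    "Rules": [
      {
        "ResourceType": "collection",
        "Resource": ["collection/logs*"]
      },
      {
        "ResourceType": "dashboard",
        "Resource": ["collection/logs*"]
      }
    ],
    "AllowFromPublic": true
  }
]
```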

## Data access policies
<a name="serverless-security-data-access"></a>

[Data access policies](serverless-data-access.md) define how your users access the data within your collections. Data access policies help you manage collections at scale by automatically assigning access permissions to collections and indexes that match a specific pattern. Multiple policies can apply to a single resource.

Data access policies consist of a set of rules, each with three components: a **resource type**, **granted resources**, and a set of **permissions**. The resource type can be a collection or index. The granted resources can be collection/index names or patterns with a wildcard (\*). The list of permissions specifies which [OpenSearch API operations](serverless-genref.md#serverless-operations) the policy grants access to. In addition, the policy contains a list of **principals**, which specify the IAM roles, users, and SAML identities to grant access to.

![\[Selected principals and granted resources with permissions for collection and index access.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-data-access.png)


For more information about the format of a data access policy, see the [policy syntax](serverless-data-access.md#serverless-data-access-syntax).

Before you create a data access policy, you must have one or more IAM roles or users, or SAML identities, to provide access to in the policy. For details, see the next section.

**Note**  
Switching your collection from public to private access removes the **Indexes** tab in the OpenSearch Serverless collection console.

## IAM and SAML authentication
<a name="serverless-security-authentication"></a>

 IAM principals and SAML identities are one of the building blocks of a data access policy. Within the `principal` statement of an access policy, you can include IAM roles, users, and SAML identities. These principals are then granted the permissions that you specify in the associated policy rules.

```
[
   {
      "Rules":[
         {
            "ResourceType":"index",
            "Resource":[
               "index/marketing/orders*"
            ],
            "Permission":[
               "aoss:*"
            ]
         }
      ],
      "Principal":[
         "arn:aws:iam::123456789012:user/Dale",
         "arn:aws:iam::123456789012:role/RegulatoryCompliance",
         "saml/123456789012/myprovider/user/Annie"
      ]
   }
]
```

You configure SAML authentication directly within OpenSearch Serverless. For more information, see [SAML authentication for Amazon OpenSearch Serverless](serverless-saml.md). 

## Infrastructure security
<a name="serverless-infrastructure-security"></a>

Amazon OpenSearch Serverless is protected by AWS global network security. For information about AWS security services and how AWS protects infrastructure, see [AWS Cloud Security](https://aws.amazon.com/security/). To design your AWS environment using the best practices for infrastructure security, see [Infrastructure Protection](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/infrastructure-protection.html) in *Security Pillar AWS Well-Architected Framework*.

You use AWS published API calls to access Amazon OpenSearch Serverless through the network. Clients must support Transport Layer Security (TLS). We require TLS 1.2 and recommend TLS 1.3. For a list of supported ciphers for TLS 1.3, see [TLS protocols and ciphers](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html#tls-protocols-ciphers) in the Elastic Load Balancing documentation.

Additionally, you must sign requests using an access key ID and a secret access key that is associated with an IAM principal. Or you can use the [AWS Security Token Service](https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html) (AWS STS) to generate temporary security credentials to sign requests.

# Getting started with security in Amazon OpenSearch Serverless
<a name="serverless-tutorials"></a>

The following tutorials help you get started using Amazon OpenSearch Serverless. Both tutorials accomplish the same basic steps, but one uses the console while the other uses the AWS CLI.

Note that the use cases in these tutorials are simplified. The network and security policies are fairly open. In production workloads, we recommend that you configure more robust security features such as SAML authentication, VPC access, and restrictive data access policies.

**Topics**
+ [Tutorial: Getting started with security in Amazon OpenSearch Serverless (console)](gsg-serverless.md)
+ [Tutorial: Getting started with security in Amazon OpenSearch Serverless (CLI)](gsg-serverless-cli.md)

# Tutorial: Getting started with security in Amazon OpenSearch Serverless (console)
<a name="gsg-serverless"></a>

This tutorial walks you through the basic steps to create and manage security policies using the Amazon OpenSearch Serverless console.

You will complete the following steps in this tutorial:

1. [Configure permissions](#gsgpermissions)

1. [Create an encryption policy](#gsg-encryption)

1. [Create a network policy](#gsg-network)

1. [Configure a data access policy](#gsg-data-access)

1. [Create a collection](#gsgcreate-collection)

1. [Upload and search data](#gsgindex-collection)

This tutorial walks you through setting up a collection using the AWS Management Console. For the same steps using the AWS CLI, see [Tutorial: Getting started with security in Amazon OpenSearch Serverless (CLI)](gsg-serverless-cli.md).

## Step 1: Configure permissions
<a name="gsgpermissions"></a>

**Note**  
You can skip this step if you're already using a broader identity-based policy, such as `"Action":"aoss:*"` or `"Action":"*"`. In production environments, however, we recommend that you follow the principle of least privilege and assign only the minimum permissions necessary to complete a task.

In order to complete this tutorial, you must have the correct IAM permissions. Your user or role must have an attached [identity-based policy](security-iam-serverless.md#security-iam-serverless-id-based-policies) with the following minimum permissions:


```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Action": [
        "aoss:ListCollections",
        "aoss:BatchGetCollection",
        "aoss:CreateCollection",
        "aoss:CreateSecurityPolicy",
        "aoss:GetSecurityPolicy",
        "aoss:ListSecurityPolicies",
        "aoss:CreateAccessPolicy",
        "aoss:GetAccessPolicy",
        "aoss:ListAccessPolicies"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
```


For a full list of OpenSearch Serverless permissions, see [Identity and Access Management for Amazon OpenSearch Serverless](security-iam-serverless.md).

## Step 2: Create an encryption policy
<a name="gsg-encryption"></a>

[Encryption policies](serverless-encryption.md) specify the AWS KMS key that OpenSearch Serverless will use to encrypt the collection. You can encrypt collections with an AWS owned key or a customer managed key. For simplicity in this tutorial, we'll encrypt our collection with an AWS owned key.

**To create an encryption policy**

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. Expand **Serverless** in the left navigation pane and choose **Encryption policies**.

1. Choose **Create encryption policy**.

1. Name the policy **books-policy**. For the description, enter **Encryption policy for books collection**.

1. Under **Resources**, enter **books**, which is what you'll name your collection. For a broader policy, you could include an asterisk (`books*`) to make the policy apply to all collections whose names begin with the word "books".

1. For **Encryption**, keep **Use AWS owned key** selected.

1. Choose **Create**.

## Step 3: Create a network policy
<a name="gsg-network"></a>

[Network policies](serverless-network.md) determine whether your collection is accessible over the internet from public networks, or whether it must be accessed through OpenSearch Serverless–managed VPC endpoints. In this tutorial, we'll configure public access.

**To create a network policy**

1. Choose **Network policies** in the left navigation pane and choose **Create network policy**.

1. Name the policy **books-policy**. For the description, enter **Network policy for books collection**.

1. Under **Rule 1**, name the rule **Public access for books collection**.

1. For simplicity in this tutorial, we'll configure public access for the *books* collection. For the access type, select **Public**.

1. We're going to access the collection from OpenSearch Dashboards. To do this, you need to configure network access for Dashboards *and* the OpenSearch endpoint; otherwise, Dashboards won't function.

   For the resource type, enable both **Access to OpenSearch endpoints** and **Access to OpenSearch Dashboards**.

1. In both input boxes, enter **Collection Name = books**. This setting scopes the policy down so that it only applies to a single collection (`books`). Your rule should look like this:  
![\[Search interface showing two input fields for collection or prefix term selection, both set to "books".\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-tutorial-network.png)

1. Choose **Create**.

## Step 4: Create a data access policy
<a name="gsg-data-access"></a>

Your collection data won't be accessible until you configure data access. [Data access policies](serverless-data-access.md) are separate from the IAM identity-based policy that you configured in step 1. They allow users to access the actual data within a collection.

In this tutorial, we'll provide a single user the permissions required to index data into the *books* collection.

**To create a data access policy**

1. Choose **Data access policies** in the left navigation pane and choose **Create access policy**.

1. Name the policy **books-policy**. For the description, enter **Data access policy for books collection**.

1. Select **JSON** for the policy definition method and paste the following policy in the JSON editor.

   Replace the principal ARN with the ARN of the account that you'll use to log in to OpenSearch Dashboards and index data.

   ```
   [
      {
         "Rules":[
            {
               "ResourceType":"index",
               "Resource":[
                  "index/books/*"
               ],
               "Permission":[
                  "aoss:CreateIndex",
                  "aoss:DescribeIndex", 
                  "aoss:ReadDocument",
                  "aoss:WriteDocument",
                  "aoss:UpdateIndex",
                  "aoss:DeleteIndex"
               ]
            }
         ],
         "Principal":[
            "arn:aws:iam::123456789012:user/my-user"
         ]
      }
   ]
   ```

   This policy provides a single user the minimum permissions required to create an index in the *books* collection, index some data, and search for it.

1. Choose **Create**.

## Step 5: Create a collection
<a name="gsgcreate-collection"></a>

Now that you've configured encryption and network policies, you can create a matching collection, and the security settings are automatically applied to it.

**To create an OpenSearch Serverless collection**

1. Choose **Collections** in the left navigation pane and choose **Create collection**.

1. Name the collection **books**.

1. For collection type, choose **Search**.

1. Under **Encryption**, OpenSearch Serverless informs you that the collection name matches the `books-policy` encryption policy.

1. Under **Network access settings**, OpenSearch Serverless informs you that the collection name matches the `books-policy` network policy.

1. Choose **Next**.

1. Under **Data access policy options**, OpenSearch Serverless informs you that the collection name matches the `books-policy` data access policy.

1. Choose **Next**.

1. Review the collection configuration and choose **Submit**. Collections typically take less than a minute to initialize.

## Step 6: Upload and search data
<a name="gsgindex-collection"></a>

You can upload data to an OpenSearch Serverless collection using Postman or curl. For brevity, these examples use **Dev Tools** within the OpenSearch Dashboards console.

**To index and search data in a collection**

1. Choose **Collections** in the left navigation pane and choose the **books** collection to open its details page.

1. Choose the OpenSearch Dashboards URL for the collection. The URL takes the format `https://collection-id.us-east-1.aoss.amazonaws.com/_dashboards`. 

1. Sign in to OpenSearch Dashboards using the [AWS access and secret keys](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-appendix-sign-up.html) for the principal that you specified in your data access policy.

1. Within OpenSearch Dashboards, open the left navigation menu and choose **Dev Tools**.

1. To create a single index called *books-index*, run the following command:

   ```
   PUT books-index 
   ```  
![\[OpenSearch Dashboards console showing PUT request for books-index with JSON response.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-createindex.png)

1. To index a single document into *books-index*, run the following command:

   ```
   PUT books-index/_doc/1
   { 
     "title": "The Shining",
     "author": "Stephen King",
     "year": 1977
   }
   ```

1. To search data in OpenSearch Dashboards, you need to configure at least one index pattern. OpenSearch uses these patterns to identify which indexes you want to analyze. Open the Dashboards main menu, choose **Stack Management**, choose **Index Patterns**, and then choose **Create index pattern**. For this tutorial, enter *books-index*.

1. Choose **Next step** and then choose **Create index pattern**. After the pattern is created, you can view the various document fields such as `author` and `title`.

1. To begin searching your data, open the main menu again and choose **Discover**, or use the [search API](https://opensearch.org/docs/latest/opensearch/rest-api/search/).

# Tutorial: Getting started with security in Amazon OpenSearch Serverless (CLI)
<a name="gsg-serverless-cli"></a>

This tutorial walks you through the steps described in the [console getting started tutorial](gsg-serverless.md) for security, but uses the AWS CLI rather than the OpenSearch Service console. 

You'll complete the following steps in this tutorial:

1. Create an IAM permissions policy

1. Attach the IAM policy to an IAM role

1. Create an encryption policy

1. Create a network policy

1. Create a collection

1. Configure a data access policy

1. Retrieve the collection endpoint

1. Upload data to your collection

1. Search data in your collection

The goal of this tutorial is to set up a single OpenSearch Serverless collection with fairly simple encryption, network, and data access settings. For example, we'll configure public network access, an AWS owned key for encryption, and a simplified data access policy that grants minimal permissions to a single user. 

In a production scenario, consider implementing a more robust configuration, including SAML authentication, a custom encryption key, and VPC access.

**To get started with security policies in OpenSearch Serverless**

1. 
**Note**  
You can skip this step if you're already using a broader identity-based policy, such as `"Action":"aoss:*"` or `"Action":"*"`. In production environments, however, we recommend that you follow the principle of least privilege and assign only the minimum permissions necessary to complete a task.

   To start, create an AWS Identity and Access Management (IAM) policy with the minimum permissions required to perform the steps in this tutorial. We'll name the policy `TutorialPolicy`:

   ```
   aws iam create-policy \
     --policy-name TutorialPolicy \
     --policy-document "{\"Version\": \"2012-10-17\",\"Statement\": [{\"Action\": [\"aoss:ListCollections\",\"aoss:BatchGetCollection\",\"aoss:CreateCollection\",\"aoss:CreateSecurityPolicy\",\"aoss:GetSecurityPolicy\",\"aoss:ListSecurityPolicies\",\"aoss:CreateAccessPolicy\",\"aoss:GetAccessPolicy\",\"aoss:ListAccessPolicies\"],\"Effect\": \"Allow\",\"Resource\": \"*\"}]}"
   ```

   **Sample response**

   ```
   {
       "Policy": {
           "PolicyName": "TutorialPolicy",
           "PolicyId": "ANPAW6WRAECKG6QJWUV7U",
           "Arn": "arn:aws:iam::123456789012:policy/TutorialPolicy",
           "Path": "/",
           "DefaultVersionId": "v1",
           "AttachmentCount": 0,
           "PermissionsBoundaryUsageCount": 0,
           "IsAttachable": true,
           "CreateDate": "2022-10-16T20:57:18+00:00",
           "UpdateDate": "2022-10-16T20:57:18+00:00"
       }
   }
   ```
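
   Hand-escaping the inline JSON in `--policy-document` is error-prone. As an alternative sketch, you can generate the document with Python's `json` module and pass it to the CLI from a file (the `tutorial-policy.json` file name here is just an illustration):

   ```python
import json

# Minimal set of actions for this tutorial, mirroring the CLI command above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aoss:ListCollections",
                "aoss:BatchGetCollection",
                "aoss:CreateCollection",
                "aoss:CreateSecurityPolicy",
                "aoss:GetSecurityPolicy",
                "aoss:ListSecurityPolicies",
                "aoss:CreateAccessPolicy",
                "aoss:GetAccessPolicy",
                "aoss:ListAccessPolicies",
            ],
            "Resource": "*",
        }
    ],
}

with open("tutorial-policy.json", "w") as f:
    json.dump(policy, f, indent=2)

# Then pass the file to the CLI instead of an escaped string:
#   aws iam create-policy --policy-name TutorialPolicy \
#     --policy-document file://tutorial-policy.json
   ```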

1. Attach `TutorialPolicy` to the IAM role that will index and search data in the collection. We'll name the role `TutorialRole`:

   ```
   aws iam attach-role-policy \
     --role-name TutorialRole \
     --policy-arn arn:aws:iam::123456789012:policy/TutorialPolicy
   ```

1. Before you create a collection, you need to create an [encryption policy](serverless-encryption.md) that assigns an AWS owned key to the *books* collection that you'll create in a later step.

   Send the following request to create an encryption policy for the *books* collection:

   ```
   aws opensearchserverless create-security-policy \
     --name books-policy \
     --type encryption --policy "{\"Rules\":[{\"ResourceType\":\"collection\",\"Resource\":[\"collection\/books\"]}],\"AWSOwnedKey\":true}"
   ```

   **Sample response**

   ```
   {
       "securityPolicyDetail": {
           "type": "encryption",
           "name": "books-policy",
           "policyVersion": "MTY2OTI0MDAwNTk5MF8x",
           "policy": {
               "Rules": [
                   {
                       "Resource": [
                           "collection/books"
                       ],
                       "ResourceType": "collection"
                   }
               ],
               "AWSOwnedKey": true
           },
           "createdDate": 1669240005990,
           "lastModifiedDate": 1669240005990
       }
   }
   ```

1. Create a [network policy](serverless-network.md) that provides public access to the *books* collection:

   ```
   aws opensearchserverless create-security-policy --name books-policy --type network \
     --policy "[{\"Description\":\"Public access for books collection\",\"Rules\":[{\"ResourceType\":\"dashboard\",\"Resource\":[\"collection\/books\"]},{\"ResourceType\":\"collection\",\"Resource\":[\"collection\/books\"]}],\"AllowFromPublic\":true}]"
   ```

   **Sample response**

   ```
   {
       "securityPolicyDetail": {
           "type": "network",
           "name": "books-policy",
           "policyVersion": "MTY2OTI0MDI1Njk1NV8x",
           "policy": [
               {
                   "Rules": [
                       {
                           "Resource": [
                               "collection/books"
                           ],
                           "ResourceType": "dashboard"
                       },
                       {
                           "Resource": [
                               "collection/books"
                           ],
                           "ResourceType": "collection"
                       }
                   ],
                   "AllowFromPublic": true,
                   "Description": "Public access for books collection"
               }
           ],
           "createdDate": 1669240256955,
           "lastModifiedDate": 1669240256955
       }
   }
   ```
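
   Note the difference in document shape between the two policy types: the encryption policy in the previous step is a single JSON object, while a network policy is a JSON *array* of rule blocks. A quick sanity check before calling the CLI (a sketch; the strings mirror the two commands above):

   ```python
import json

# Policy documents as passed to --policy in the two commands above.
encryption_policy = (
    '{"Rules":[{"ResourceType":"collection","Resource":["collection/books"]}],'
    '"AWSOwnedKey":true}'
)
network_policy = (
    '[{"Description":"Public access for books collection",'
    '"Rules":[{"ResourceType":"dashboard","Resource":["collection/books"]},'
    '{"ResourceType":"collection","Resource":["collection/books"]}],'
    '"AllowFromPublic":true}]'
)

# type=encryption takes a single object; type=network takes an array of blocks.
assert isinstance(json.loads(encryption_policy), dict)
assert isinstance(json.loads(network_policy), list)
   ```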

1. Create the *books* collection:

   ```
   aws opensearchserverless create-collection --name books --type SEARCH
   ```

   **Sample response**

   ```
   {
       "createCollectionDetail": {
           "id": "8kw362bpwg4gx9b2f6e0",
           "name": "books",
           "status": "CREATING",
           "type": "SEARCH",
           "arn": "arn:aws:aoss:us-east-1:123456789012:collection/8kw362bpwg4gx9b2f6e0",
           "kmsKeyArn": "auto",
           "createdDate": 1669240325037,
           "lastModifiedDate": 1669240325037
       }
   }
   ```

1. Create a [data access policy](serverless-data-access.md) that provides the minimum permissions to index and search data in the *books* collection. Replace the principal ARN with the ARN of `TutorialRole` from step 2:

   ```
   aws opensearchserverless create-access-policy \
     --name books-policy \
     --type data \
     --policy "[{\"Rules\":[{\"ResourceType\":\"index\",\"Resource\":[\"index\/books\/books-index\"],\"Permission\":[\"aoss:CreateIndex\",\"aoss:DescribeIndex\",\"aoss:ReadDocument\",\"aoss:WriteDocument\",\"aoss:UpdateIndex\",\"aoss:DeleteIndex\"]}],\"Principal\":[\"arn:aws:iam::123456789012:role\/TutorialRole\"]}]"
   ```

   **Sample response**

   ```
   {
       "accessPolicyDetail": {
           "type": "data",
           "name": "books-policy",
           "policyVersion": "MTY2OTI0MDM5NDY1M18x",
           "policy": [
               {
                   "Rules": [
                       {
                           "Resource": [
                               "index/books/books-index"
                           ],
                           "Permission": [
                               "aoss:CreateIndex",
                               "aoss:DescribeIndex",
                               "aoss:ReadDocument",
                               "aoss:WriteDocument",
                                "aoss:UpdateIndex",
                                "aoss:DeleteIndex"
                           ],
                           "ResourceType": "index"
                       }
                   ],
                   "Principal": [
                       "arn:aws:iam::123456789012:role/TutorialRole"
                   ]
               }
           ],
           "createdDate": 1669240394653,
           "lastModifiedDate": 1669240394653
       }
   }
   ```

   `TutorialRole` should now be able to index and search documents in the *books* collection. 

1. To make calls to the OpenSearch API, you need the collection endpoint. Send the following request to retrieve the `collectionEndpoint` parameter:

   ```
   aws opensearchserverless batch-get-collection --names books  
   ```

   **Sample response**

   ```
   {
       "collectionDetails": [
           {
               "id": "8kw362bpwg4gx9b2f6e0",
               "name": "books",
               "status": "ACTIVE",
               "type": "SEARCH",
               "description": "",
               "arn": "arn:aws:aoss:us-east-1:123456789012:collection/8kw362bpwg4gx9b2f6e0",
               "createdDate": 1665765327107,
               "collectionEndpoint": "https://8kw362bpwg4gx9b2f6e0.us-east-1.aoss.amazonaws.com",
               "dashboardEndpoint": "https://8kw362bpwg4gx9b2f6e0.us-east-1.aoss.amazonaws.com/_dashboards"
           }
       ],
       "collectionErrorDetails": []
   }
   ```
**Note**  
You won't be able to see the collection endpoint until the collection status changes to `ACTIVE`. You might have to make multiple calls to check the status until the collection is successfully created.
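
The polling step can be wrapped in a small helper. A sketch with an injectable `fetch_status` callable (in practice it would wrap `batch-get-collection` via the AWS CLI or an SDK such as boto3):

```python
import time

def wait_for_active(fetch_status, attempts=30, delay=10.0):
    """Poll until the collection status is ACTIVE.

    fetch_status() should return the current status string, for example by
    reading the "status" field from a batch-get-collection response.
    """
    for _ in range(attempts):
        status = fetch_status()
        if status == "ACTIVE":
            return True
        if status == "FAILED":
            raise RuntimeError("collection creation failed")
        time.sleep(delay)
    return False
```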

1. Use an HTTP tool such as [Postman](https://www.getpostman.com/) or curl to index data into the *books* collection. We'll create an index called *books-index* and add a single document.

   Send the following request to the collection endpoint that you retrieved in the previous step, using the credentials for `TutorialRole`.

   ```
   PUT https://8kw362bpwg4gx9b2f6e0.us-east-1.aoss.amazonaws.com/books-index/_doc/1
   { 
     "title": "The Shining",
     "author": "Stephen King",
     "year": 1977
   }
   ```

   **Sample response**

   ```
   {
     "_index" : "books-index",
     "_id" : "1",
     "_version" : 1,
     "result" : "created",
     "_shards" : {
       "total" : 0,
       "successful" : 0,
       "failed" : 0
     },
     "_seq_no" : 0,
     "_primary_term" : 0
   }
   ```

1. To begin searching data in your collection, use the [search API](https://opensearch.org/docs/latest/opensearch/rest-api/search/). The following query performs a basic search:

   ```
   GET https://8kw362bpwg4gx9b2f6e0.us-east-1.aoss.amazonaws.com/books-index/_search
   ```

   **Sample response**

   ```
   {
       "took": 405,
       "timed_out": false,
       "_shards": {
           "total": 6,
           "successful": 6,
           "skipped": 0,
           "failed": 0
       },
       "hits": {
           "total": {
               "value": 2,
               "relation": "eq"
           },
           "max_score": 1.0,
           "hits": [
               {
                   "_index": "books-index:0::3xJq14MBUaOS0wL26UU9:0",
                   "_id": "F_bt4oMBLle5pYmm5q4T",
                   "_score": 1.0,
                   "_source": {
                       "title": "The Shining",
                       "author": "Stephen King",
                       "year": 1977
                   }
               }
           ]
       }
   }
   ```
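
Requests like the `PUT` and `GET` calls above must be signed with AWS Signature Version 4, using the service name `aoss`. Postman and recent versions of curl can sign requests for you. As a rough illustration of one part of the algorithm, the SigV4 signing key is derived by a chain of HMAC-SHA256 operations (a sketch only, not a complete signer):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: HMAC-SHA256 chained over date, region, service.

    date_stamp is YYYYMMDD; for OpenSearch Serverless the service name is "aoss".
    """
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")
```

The full process also canonicalizes the request and builds a string to sign; the final signature is `HMAC-SHA256(signing_key, string_to_sign)`, hex-encoded into the `Authorization` header.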

# Identity and Access Management for Amazon OpenSearch Serverless
<a name="security-iam-serverless"></a>

AWS Identity and Access Management (IAM) is an AWS service that helps an administrator securely control access to AWS resources. IAM administrators control who can be *authenticated* (signed in) and *authorized* (have permissions) to use OpenSearch Serverless resources. IAM is an AWS service that you can use with no additional charge.

**Topics**
+ [Identity-based policies for OpenSearch Serverless](#security-iam-serverless-id-based-policies)
+ [Policy actions for OpenSearch Serverless](#security-iam-serverless-id-based-policies-actions)
+ [Policy resources for OpenSearch Serverless](#security-iam-serverless-id-based-policies-resources)
+ [Policy condition keys for Amazon OpenSearch Serverless](#security_iam_serverless-conditionkeys)
+ [ABAC with OpenSearch Serverless](#security_iam_serverless-with-iam-tags)
+ [Using temporary credentials with OpenSearch Serverless](#security_iam_serverless-tempcreds)
+ [Service-linked roles for OpenSearch Serverless](#security_iam_serverless-slr)
+ [Other policy types](#security_iam_access-manage-other-policies)
+ [Identity-based policy examples for OpenSearch Serverless](#security_iam_serverless_id-based-policy-examples)
+ [IAM Identity Center support for Amazon OpenSearch Serverless](serverless-iam-identity-center.md)

## Identity-based policies for OpenSearch Serverless
<a name="security-iam-serverless-id-based-policies"></a>

**Supports identity-based policies:** Yes

Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

With IAM identity-based policies, you can specify allowed or denied actions and resources as well as the conditions under which actions are allowed or denied. To learn about all of the elements that you can use in a JSON policy, see [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*.

### Identity-based policy examples for OpenSearch Serverless
<a name="security_iam_id-based-policy-examples"></a>

To view examples of OpenSearch Serverless identity-based policies, see [Identity-based policy examples for OpenSearch Serverless](#security_iam_serverless_id-based-policy-examples).

## Policy actions for OpenSearch Serverless
<a name="security-iam-serverless-id-based-policies-actions"></a>

**Supports policy actions:** Yes

The `Action` element of a JSON policy describes the actions that you can use to allow or deny access in a policy. Policy actions usually have the same name as the associated AWS API operation. There are some exceptions, such as *permission-only actions* that don't have a matching API operation. There are also some operations that require multiple actions in a policy. These additional actions are called *dependent actions*.

Include actions in a policy to grant permissions to perform the associated operation.

Policy actions in OpenSearch Serverless use the following prefix before the action:

```
aoss
```

To specify multiple actions in a single statement, separate them with commas.

```
"Action": [
    "aoss:action1",
    "aoss:action2"
]
```

You can specify multiple actions using wildcard characters (`*`). For example, to specify all actions that begin with the word `List`, include the following action:

```
"Action": "aoss:List*"
```

To view examples of OpenSearch Serverless identity-based policies, see [Identity-based policy examples for OpenSearch Serverless](#security_iam_id-based-policy-examples).

## Policy resources for OpenSearch Serverless
<a name="security-iam-serverless-id-based-policies-resources"></a>

**Supports policy resources:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Resource` JSON policy element specifies the object or objects to which the action applies. As a best practice, specify a resource using its [Amazon Resource Name (ARN)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html). For actions that don't support resource-level permissions, use a wildcard (`*`) to indicate that the statement applies to all resources.

```
"Resource": "*"
```

## Policy condition keys for Amazon OpenSearch Serverless
<a name="security_iam_serverless-conditionkeys"></a>

**Supports service-specific policy condition keys:** Yes

Administrators can use AWS JSON policies to specify who has access to what. That is, which **principal** can perform **actions** on what **resources**, and under what **conditions**.

The `Condition` element specifies when statements execute based on defined criteria. You can create conditional expressions that use [condition operators](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition_operators.html), such as equals or less than, to match the condition in the policy with values in the request. To see all AWS global condition keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.

In addition to attribute-based access control (ABAC), OpenSearch Serverless supports the following condition keys:
+ `aoss:collection`
+ `aoss:CollectionId`
+ `aoss:index`

You can also use these condition keys when granting permissions to create access policies and security policies. For example:

```
[
   {
      "Effect":"Allow",
      "Action":[
         "aoss:CreateAccessPolicy",
         "aoss:CreateSecurityPolicy"
      ],
      "Resource":"*",
      "Condition":{
         "StringLike":{
            "aoss:collection":"log"
         }
      }
   }
]
```

In this example, the condition applies to policies that contain *rules* that match a collection name or pattern. The condition operators have the following behavior:
+ `StringEquals` - Applies to policies with rules that contain the *exact* resource string "log" (that is, `collection/log`).
+ `StringLike` - Applies to policies with rules that contain a resource string that *includes* the string "log" (for example, `collection/log`, but also `collection/logs-application` or `collection/applogs123`).

**Note**  
*Collection* condition keys don't apply at the index level. For example, in the policy above, the condition wouldn't apply to an access or security policy containing the resource string `index/logs-application/*`.

To see a list of OpenSearch Serverless condition keys, see [Condition keys for Amazon OpenSearch Serverless](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonopensearchserverless.html#amazonopensearchserverless-policy-keys) in the *Service Authorization Reference*. To learn with which actions and resources you can use a condition key, see [Actions defined by Amazon OpenSearch Serverless](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonopensearchserverless.html#amazonopensearchserverless-actions-as-permissions).

## ABAC with OpenSearch Serverless
<a name="security_iam_serverless-with-iam-tags"></a>

**Supports ABAC (tags in policies):** Yes

Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes called tags. You can attach tags to IAM entities and AWS resources, then design ABAC policies to allow operations when the principal's tag matches the tag on the resource.

To control access based on tags, you provide tag information in the [condition element](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) of a policy using the `aws:ResourceTag/key-name`, `aws:RequestTag/key-name`, or `aws:TagKeys` condition keys.

If a service supports all three condition keys for every resource type, then the value is **Yes** for the service. If a service supports all three condition keys for only some resource types, then the value is **Partial**.

For more information about ABAC, see [Define permissions with ABAC authorization](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) in the *IAM User Guide*. To view a tutorial with steps for setting up ABAC, see [Use attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html) in the *IAM User Guide*.

For more information about tagging OpenSearch Serverless resources, see [Tagging Amazon OpenSearch Serverless collections](tag-collection.md).

## Using temporary credentials with OpenSearch Serverless
<a name="security_iam_serverless-tempcreds"></a>

**Supports temporary credentials:** Yes

Temporary credentials provide short-term access to AWS resources and are automatically created when you use federation or switch roles. AWS recommends that you dynamically generate temporary credentials instead of using long-term access keys. For more information, see [Temporary security credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) and [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) in the *IAM User Guide*.

## Service-linked roles for OpenSearch Serverless
<a name="security_iam_serverless-slr"></a>

**Supports service-linked roles:** Yes

A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit, the permissions for service-linked roles.

For details about creating and managing OpenSearch Serverless service-linked roles, see [Using service-linked roles to create OpenSearch Serverless collections](serverless-service-linked-roles.md).

## Other policy types
<a name="security_iam_access-manage-other-policies"></a>

AWS supports additional, less-common policy types. These policy types can set the maximum permissions granted to you by the more common policy types.
+ **Service control policies (SCPs)** – SCPs are JSON policies that specify the maximum permissions for an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for grouping and centrally managing multiple AWS accounts that your business owns. If you enable all features in an organization, then you can apply service control policies (SCPs) to any or all of your accounts. The SCP limits permissions for entities in member accounts, including each AWS account root user. For more information about Organizations and SCPs, see [Service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in the *AWS Organizations User Guide*.
+ **Resource control policies (RCPs)** – RCPs are JSON policies that you can use to set the maximum available permissions for resources in your accounts without updating the IAM policies attached to each resource that you own. The RCP limits permissions for resources in member accounts and can impact the effective permissions for identities, including the AWS account root user, regardless of whether they belong to your organization. For more information about Organizations and RCPs, including a list of AWS services that support RCPs, see [Resource control policies (RCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) in the *AWS Organizations User Guide*.

## Identity-based policy examples for OpenSearch Serverless
<a name="security_iam_serverless_id-based-policy-examples"></a>

By default, users and roles don't have permission to create or modify OpenSearch Serverless resources. To grant users permission to perform actions on the resources that they need, an IAM administrator can create IAM policies.

To learn how to create an IAM identity-based policy by using these example JSON policy documents, see [Create IAM policies (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html) in the *IAM User Guide*.

For details about actions and resource types defined by Amazon OpenSearch Serverless, including the format of the ARNs for each of the resource types, see [Actions, resources, and condition keys for Amazon OpenSearch Serverless](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonopensearchserverless.html) in the *Service Authorization Reference*.

**Topics**
+ [Policy best practices](#security_iam_serverless-policy-best-practices)
+ [Using OpenSearch Serverless in the console](#security_iam_serverless_id-based-policy-examples-console)
+ [Administering OpenSearch Serverless collections](#security_iam_id-based-policy-examples-collection-admin)
+ [Viewing OpenSearch Serverless collections](#security_iam_id-based-policy-examples-view-collections)
+ [Using OpenSearch API operations](#security_iam_id-based-policy-examples-data-plane)
+ [ABAC for OpenSearch API operations](#security_iam_id-based-policy-examples-data-plane-abac)

### Policy best practices
<a name="security_iam_serverless-policy-best-practices"></a>

Identity-based policies are very powerful. They determine whether someone can create, access, or delete OpenSearch Serverless resources in your account. These actions can incur costs for your AWS account. When you create or edit identity-based policies, follow these guidelines and recommendations:
+ **Get started with AWS managed policies and move toward least-privilege permissions** – To get started granting permissions to your users and workloads, use the *AWS managed policies* that grant permissions for many common use cases. They are available in your AWS account. We recommend that you reduce permissions further by defining AWS customer managed policies that are specific to your use cases. For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) or [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html) in the *IAM User Guide*.
+ **Apply least-privilege permissions** – When you set permissions with IAM policies, grant only the permissions required to perform a task. You do this by defining the actions that can be taken on specific resources under specific conditions, also known as *least-privilege permissions*. For more information about using IAM to apply permissions, see [ Policies and permissions in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) in the *IAM User Guide*.
+ **Use conditions in IAM policies to further restrict access** – You can add a condition to your policies to limit access to actions and resources. For example, you can write a policy condition to specify that all requests must be sent using SSL. You can also use conditions to grant access to service actions if they are used through a specific AWS service, such as CloudFormation. For more information, see [ IAM JSON policy elements: Condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html) in the *IAM User Guide*.
+ **Use IAM Access Analyzer to validate your IAM policies to ensure secure and functional permissions** – IAM Access Analyzer validates new and existing policies so that the policies adhere to the IAM policy language (JSON) and IAM best practices. IAM Access Analyzer provides more than 100 policy checks and actionable recommendations to help you author secure and functional policies. For more information, see [Validate policies with IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*.
+ **Require multi-factor authentication (MFA)** – If you have a scenario that requires IAM users or a root user in your AWS account, turn on MFA for additional security. To require MFA when API operations are called, add MFA conditions to your policies. For more information, see [ Secure API access with MFA](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_configure-api-require.html) in the *IAM User Guide*.

For more information about best practices in IAM, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

### Using OpenSearch Serverless in the console
<a name="security_iam_serverless_id-based-policy-examples-console"></a>

To access OpenSearch Serverless within the OpenSearch Service console, you must have a minimum set of permissions. These permissions must allow you to list and view details about the OpenSearch Serverless resources in your AWS account. If you create an identity-based policy that is more restrictive than the minimum required permissions, the console won't function as intended for entities (such as IAM roles) with that policy.

You don't need to allow minimum console permissions for users that are making calls only to the AWS CLI or the AWS API. Instead, allow access to only the actions that match the API operation that you're trying to perform.

The following policy allows a user to access OpenSearch Serverless within the OpenSearch Service console:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Resource": "*",
            "Effect": "Allow",
            "Action": [
                "aoss:ListCollections",
                "aoss:BatchGetCollection",
                "aoss:ListAccessPolicies",
                "aoss:ListSecurityConfigs",
                "aoss:ListSecurityPolicies",
                "aoss:ListTagsForResource",
                "aoss:ListVpcEndpoints",
                "aoss:GetAccessPolicy",
                "aoss:GetAccountSettings",
                "aoss:GetSecurityConfig",
                "aoss:GetSecurityPolicy"
            ]
        }
    ]
}
```

------

### Administering OpenSearch Serverless collections
<a name="security_iam_id-based-policy-examples-collection-admin"></a>

This policy is an example of a "collection admin" policy that allows a user to manage and administer Amazon OpenSearch Serverless collections. The user can create, view, and delete collections.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Resource": "arn:aws:aoss:us-east-1:111122223333:collection/*",
            "Action": [
                "aoss:CreateCollection",
                "aoss:DeleteCollection",
                "aoss:UpdateCollection"
            ],
            "Effect": "Allow"
        },
        {
            "Resource": "*",
            "Action": [
                "aoss:BatchGetCollection",
                "aoss:ListCollections",
                "aoss:CreateAccessPolicy",
                "aoss:CreateSecurityPolicy"
            ],
            "Effect": "Allow"
        }
    ]
}
```

------

### Viewing OpenSearch Serverless collections
<a name="security_iam_id-based-policy-examples-view-collections"></a>

This example policy allows a user to view details for all Amazon OpenSearch Serverless collections in their account. The user can't modify the collections or any associated security policies.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Resource": "*",
            "Action": [
                "aoss:ListAccessPolicies",
                "aoss:ListCollections",
                "aoss:ListSecurityPolicies",
                "aoss:ListTagsForResource",
                "aoss:BatchGetCollection"
            ],
            "Effect": "Allow"
        }
    ]
}
```

------

### Using OpenSearch API operations
<a name="security_iam_id-based-policy-examples-data-plane"></a>

Data plane API operations consist of the functions you use in OpenSearch Serverless to derive real-time value from the service. Control plane API operations consist of the functions you use to set up the environment.

To access Amazon OpenSearch Serverless data plane APIs and OpenSearch Dashboards from the browser, you need to add two IAM permissions for collection resources. These permissions are `aoss:APIAccessAll` and `aoss:DashboardsAccessAll`. 

**Note**  
Starting May 10, 2023, OpenSearch Serverless requires these two new IAM permissions for collection resources. The `aoss:APIAccessAll` permission allows data plane access, and the `aoss:DashboardsAccessAll` permission allows access to OpenSearch Dashboards from the browser. Failure to add the two new IAM permissions results in a 403 error.

This example policy allows a user to access data plane APIs for a specified collection in their account, and to access OpenSearch Dashboards for all collections in their account.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
         {
            "Effect": "Allow",
            "Action": "aoss:APIAccessAll",
            "Resource": "arn:aws:aoss:us-east-1:111122223333:collection/collection-id"
        },
        {
            "Effect": "Allow",
            "Action": "aoss:DashboardsAccessAll",
            "Resource": "arn:aws:aoss:us-east-1:111122223333:dashboards/default"
        }
    ]
}
```

------

Both `aoss:APIAccessAll` and `aoss:DashboardsAccessAll` grant full IAM permissions to the collection resources, and the Dashboards permission additionally provides OpenSearch Dashboards access. Each permission works independently, so an explicit deny on `aoss:APIAccessAll` doesn't block `aoss:DashboardsAccessAll` access to the resources, including Dev Tools. The same is true for a deny on `aoss:DashboardsAccessAll`. OpenSearch Serverless supports the following global condition keys:
+ `aws:CalledVia`
+ `aws:CalledViaAWSService`
+ `aws:CalledViaFirst`
+ `aws:CalledViaLast`
+ `aws:CurrentTime`
+ `aws:EpochTime`
+ `aws:PrincipalAccount`
+ `aws:PrincipalArn`
+ `aws:PrincipalIsAWSService`
+ `aws:PrincipalOrgID`
+ `aws:PrincipalOrgPaths`
+ `aws:PrincipalType`
+ `aws:PrincipalServiceName`
+ `aws:PrincipalServiceNamesList`
+ `aws:ResourceAccount`
+ `aws:ResourceOrgID`
+ `aws:ResourceOrgPaths`
+ `aws:RequestedRegion`
+ `aws:ResourceTag`
+ `aws:SourceIp`
+ `aws:SourceVpce`
+ `aws:SourceVpc`
+ `aws:userid`
+ `aws:username`
+ `aws:VpcSourceIp`

The following is an example of using `aws:SourceIp` in the condition block in your principal's IAM policy for data plane calls:

```
"Condition": {
    "IpAddress": {
         "aws:SourceIp": "203.0.113.0"
    }
}
```

The following is an example of using `aws:SourceVpc` in the condition block in your principal's IAM policy for data plane calls:

```
"Condition": {
    "StringEquals": {
        "aws:SourceVpc": "vpc-0fdd2445d8EXAMPLE"
    }
}
```

Additionally, OpenSearch Serverless supports the following service-specific condition keys:
+ `aoss:CollectionId`
+ `aoss:collection`

The following is an example of using `aoss:collection` in the condition block in your principal's IAM policy for data plane calls:

```
"Condition": {
    "StringLike": {
         "aoss:collection": "log-*"
    }
}
```
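For intuition, `StringLike` behaves like shell-style wildcard matching, where `*` matches any run of characters. The following minimal Python sketch is an illustration only, not IAM's actual evaluator, and the helper name is hypothetical:

```python
from fnmatch import fnmatchcase

def matches_collection(pattern, collection_name):
    """Approximate IAM StringLike matching (illustration only):
    * matches any run of characters, case-sensitively."""
    return fnmatchcase(collection_name, pattern)

# The "log-*" condition above would match collections named like this:
print(matches_collection("log-*", "log-analytics"))  # True
print(matches_collection("log-*", "metrics"))        # False
```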

### ABAC for OpenSearch API operations
<a name="security_iam_id-based-policy-examples-data-plane-abac"></a>

Identity-based policies let you use tags to control access to Amazon OpenSearch Serverless data plane APIs. The following policy is an example to allow attached principals to access data plane APIs if the collection has the `team:devops` tag:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "aoss:APIAccessAll",
            "Resource": "arn:aws:aoss:us-east-1:111122223333:collection/collection-id",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/team": "devops"
                }
            }
        }
    ]
}
```

------
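To make the tag check concrete, the following Python sketch mimics how the allow statement above evaluates the `aws:ResourceTag/team` condition. The function is a hypothetical helper, not an IAM implementation:

```python
def abac_allows(action, resource_tags):
    """Mimic the allow statement above: aoss:APIAccessAll is allowed
    only when the collection carries the team=devops tag."""
    return action == "aoss:APIAccessAll" and resource_tags.get("team") == "devops"

print(abac_allows("aoss:APIAccessAll", {"team": "devops"}))   # True
print(abac_allows("aoss:APIAccessAll", {"team": "finance"}))  # False
```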

The following example policy denies the attached principals access to data plane APIs and OpenSearch Dashboards if the collection has the `environment:production` tag:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "aoss:APIAccessAll",
                "aoss:DashboardsAccessAll"
            ],
            "Resource": "arn:aws:aoss:us-east-1:111122223333:collection/collection-id",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/environment": "production"
                }
            }
        }
    ]
}
```

------

Amazon OpenSearch Serverless doesn't support the `aws:RequestTag` and `aws:TagKeys` global condition keys for data plane APIs.

# IAM Identity Center support for Amazon OpenSearch Serverless
<a name="serverless-iam-identity-center"></a>

## IAM Identity Center support for Amazon OpenSearch Serverless
<a name="serverless-iam-identity-support"></a>

You can use IAM Identity Center principals (users and groups) to access Amazon OpenSearch Serverless data through Amazon OpenSearch Applications. To enable IAM Identity Center support for Amazon OpenSearch Serverless, you must first enable IAM Identity Center. To learn how, see [What is IAM Identity Center?](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html).

**Note**  
To access Amazon OpenSearch Serverless collections using IAM Identity Center users or groups, you must use the OpenSearch UI (Applications) feature. Direct access to OpenSearch Serverless Dashboards using IAM Identity Center credentials is not supported. For more information, see [Getting started with the OpenSearch user interface](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/application.html).

After the IAM Identity Center instance is created, the account administrator needs to create an IAM Identity Center application for Amazon OpenSearch Serverless by calling the [CreateSecurityConfig](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_CreateSecurityConfig.html) API operation. The administrator can specify which attributes are used to authorize requests. The default attributes are `UserId` and `GroupId`.

The IAM Identity Center integration for Amazon OpenSearch Serverless uses the following AWS Identity and Access Management (IAM) permissions:
+ `aoss:CreateSecurityConfig` – Create an IAM Identity Center provider.
+ `aoss:ListSecurityConfigs` – List all IAM Identity Center providers in the current account.
+ `aoss:GetSecurityConfig` – View IAM Identity Center provider information.
+ `aoss:UpdateSecurityConfig` – Modify a given IAM Identity Center configuration.
+ `aoss:DeleteSecurityConfig` – Delete an IAM Identity Center provider.

The following identity-based access policy can be used to manage all IAM Identity Center configurations:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "aoss:CreateSecurityConfig",
                "aoss:DeleteSecurityConfig",
                "aoss:GetSecurityConfig",
                "aoss:UpdateSecurityConfig",
                "aoss:ListSecurityConfigs"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
```

------

**Note**  
The `Resource` element must be a wildcard.

## Creating an IAM Identity Center provider (console)
<a name="serverless-iam-console"></a>

You can create an IAM Identity Center provider to enable authentication with OpenSearch Applications. To enable IAM Identity Center authentication, perform the following steps:

1. Sign in to the [Amazon OpenSearch Service console](https://console.aws.amazon.com/aos/home).

1. On the left navigation panel, expand **Serverless** and choose **Authentication**.

1. Choose **IAM Identity Center authentication**.

1. Choose **Edit**.

1. Select the check box next to **Authenticate with IAM Identity Center**.

1. Select the **user and group** attribute keys from the dropdown menus. User attributes are used to authorize users based on `UserName`, `UserId`, or `Email`. Group attributes are used to authorize users based on `GroupName` or `GroupId`.

1. Select the **IAM Identity Center** instance.

1. Choose **Save**.

## Creating an IAM Identity Center provider (AWS CLI)
<a name="serverless-iam-identity-center-cli"></a>

To create an IAM Identity Center provider using the AWS Command Line Interface (AWS CLI), use the following command:

```
aws opensearchserverless create-security-config \
--region us-east-2 \
--name "iamidentitycenter-config" \
--description "description" \
--type "iamidentitycenter" \
--iam-identity-center-options '{
    "instanceArn": "arn:aws:sso:::instance/ssoins-99199c99e99ee999",
    "userAttribute": "UserName",                  
    "groupAttribute": "GroupId"
}'
```
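If you script this call, it can help to validate the attribute names locally before invoking the CLI. The following Python sketch builds the `--iam-identity-center-options` payload, assuming the attribute values offered in the console (`UserName`, `UserId`, `Email` for users; `GroupName`, `GroupId` for groups); the function name is hypothetical:

```python
import json

# Attribute names offered in the console (assumption for this sketch).
VALID_USER_ATTRIBUTES = {"UserName", "UserId", "Email"}
VALID_GROUP_ATTRIBUTES = {"GroupName", "GroupId"}

def identity_center_options(instance_arn, user_attribute, group_attribute):
    """Build the JSON passed to --iam-identity-center-options,
    rejecting attribute names outside the sets above."""
    if user_attribute not in VALID_USER_ATTRIBUTES:
        raise ValueError("unsupported user attribute: " + user_attribute)
    if group_attribute not in VALID_GROUP_ATTRIBUTES:
        raise ValueError("unsupported group attribute: " + group_attribute)
    return json.dumps({
        "instanceArn": instance_arn,
        "userAttribute": user_attribute,
        "groupAttribute": group_attribute,
    })
```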

After an IAM Identity Center provider is created, you can modify only its **user and group** attributes:

```
aws opensearchserverless update-security-config \
--region us-east-1 \
--id <id_from_list_security_configs> \
--config-version <config_version_from_get_security_config> \
--iam-identity-center-options-updates '{
    "userAttribute": "UserId",
    "groupAttribute": "GroupId"
}'
```

To view the IAM Identity Center provider using the AWS CLI, use the following command:

```
aws opensearchserverless list-security-configs --type iamidentitycenter
```

## Deleting an IAM Identity Center provider
<a name="serverless-iam-identity-center-deleting"></a>

IAM Identity Center offers two types of provider instances: one for your organization account and one for your member account. If you need to change your IAM Identity Center instance, delete your existing security configuration with the `DeleteSecurityConfig` API operation, and then create a new security configuration that uses the new instance. Use the following command to delete an IAM Identity Center provider:

```
aws opensearchserverless delete-security-config \
--region us-east-1 \
--id <id_from_list_security_configs>
```

## Granting IAM Identity Center access to collection data
<a name="serverless-iam-identity-center-collection-data"></a>

After your IAM Identity Center provider is enabled, you can update the collection data access policy to include IAM Identity Center principals. Specify IAM Identity Center principals in the following format:

```
[
   {
      "Rules":[
         ...
      ],
      "Principal":[
         "iamidentitycenter/<iamidentitycenter-instance-id>/user/<UserName>",
         "iamidentitycenter/<iamidentitycenter-instance-id>/group/<GroupId>"
      ]
   }
]
```
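When generating data access policies programmatically, a small helper can reduce formatting mistakes. This sketch follows the principal pattern above; the function name is hypothetical:

```python
def identity_center_principal(instance_id, kind, identifier):
    """Format an IAM Identity Center principal entry for a data
    access policy. kind is 'user' or 'group'."""
    if kind not in ("user", "group"):
        raise ValueError("kind must be 'user' or 'group'")
    return "iamidentitycenter/{}/{}/{}".format(instance_id, kind, identifier)

print(identity_center_principal("ssoins-99199c99e99ee999", "user", "jdoe"))
# iamidentitycenter/ssoins-99199c99e99ee999/user/jdoe
```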

**Note**  
Amazon OpenSearch Serverless supports only one IAM Identity Center instance for all customer collections, and supports up to 100 groups for a single user. If you exceed the number of allowed instances, you will experience inconsistency with your data access policy authorization processing and receive a `403` error message.

You can grant access to collections, indexes, or both. If you want different users to have different permissions, create multiple rules. For a list of available permissions, see [Identity and Access Management in Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ac.html). For information about how to format an access policy, see [Granting SAML identities access to collection data](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-saml.html#serverless-saml-policies).

# Encryption in Amazon OpenSearch Serverless
<a name="serverless-encryption"></a>

## Encryption at rest
<a name="serverless-encryption-at-rest"></a>

Each Amazon OpenSearch Serverless collection that you create is protected with encryption of data at rest, a security feature that helps prevent unauthorized access to your data. Encryption at rest uses AWS Key Management Service (AWS KMS) to store and manage your encryption keys. It uses the Advanced Encryption Standard algorithm with 256-bit keys (AES-256) to perform the encryption.

**Topics**
+ [Encryption policies](#serverless-encryption-policies)
+ [Considerations](#serverless-encryption-considerations)
+ [Permissions required](#serverless-encryption-permissions)
+ [Key policy for a customer managed key](#serverless-customer-cmk-policy)
+ [How OpenSearch Serverless uses grants in AWS KMS](#serverless-encryption-grants)
+ [Creating encryption policies (console)](#serverless-encryption-console)
+ [Creating encryption policies (AWS CLI)](#serverless-encryption-cli)
+ [Viewing encryption policies](#serverless-encryption-list)
+ [Updating encryption policies](#serverless-encryption-update)
+ [Deleting encryption policies](#serverless-encryption-delete)

### Encryption policies
<a name="serverless-encryption-policies"></a>

With encryption policies, you can manage many collections at scale by automatically assigning an encryption key to newly created collections that match a specific name or pattern.

When you create an encryption policy, you can either specify a *prefix*, which is a wildcard-based matching rule such as `MyCollection*`, or enter a single collection name. Then, when you create a collection that matches that name or prefix pattern, the policy and corresponding KMS key are automatically assigned to it.

When creating a collection, you can specify an AWS KMS key in two ways: through security policies or directly in the `CreateCollection` request. If you provide an AWS KMS key as part of the `CreateCollection` request, it takes precedence over any matching security policies. With this approach, you have the flexibility to override policy-based encryption settings for specific collections when needed.

![\[Encryption policy creation process with rules and collection matching to KMS key.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-encryption.png)


Encryption policies contain the following elements:
+ `Rules` – one or more collection matching rules, each with the following sub-elements:
  + `ResourceType` – Currently the only option is `collection`. Encryption policies apply to collection resources only.
  + `Resource` – One or more collection names or patterns that the policy will apply to, in the format `collection/<collection name|pattern>`.
+ `AWSOwnedKey` – Whether to use an AWS owned key.
+ `KmsARN` – If you set `AWSOwnedKey` to false, specify the Amazon Resource Name (ARN) of the KMS key to encrypt the associated collections with. If you include this parameter, OpenSearch Serverless ignores the `AWSOwnedKey` parameter.

The following sample policy will assign a customer managed key to any future collection named `autopartsinventory`, as well as collections that begin with the term "sales":

```
{
   "Rules":[
      {
         "ResourceType":"collection",
         "Resource":[
            "collection/autopartsinventory",
            "collection/sales*"
         ]
      }
   ],
   "AWSOwnedKey":false,
   "KmsARN":"arn:aws:kms:us-east-1:123456789012:key/93fd6da4-a317-4c17-bfe9-382b5d988b36"
}
```

Even if a policy matches a collection name, you can choose to override this automatic assignment during collection creation if the resource pattern contains a wildcard (`*`). If you choose to override automatic key assignment, OpenSearch Serverless creates an encryption policy for you named **auto-<*collection-name*>** and attaches it to the collection. The policy initially applies only to a single collection, but you can modify it to include additional collections.

If you modify policy rules to no longer match a collection, the associated KMS key won't be unassigned from that collection. The collection always remains encrypted with its initial encryption key. If you want to change the encryption key for a collection, you must recreate the collection.

If rules from multiple policies match a collection, the more specific rule is used. For example, if one policy contains a rule for `collection/log*`, and another for `collection/logSpecial`, the encryption key for the second policy is used because it's more specific.
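The "more specific rule wins" behavior can be approximated locally. This Python sketch illustrates the described precedence, not the service's actual algorithm: among matching patterns, it prefers the one with the fewest wildcards, then the most literal characters.

```python
from fnmatch import fnmatchcase
from typing import List, Optional

def most_specific_pattern(patterns: List[str], collection: str) -> Optional[str]:
    """Illustrative tie-break: among patterns that match the collection
    name, prefer fewer wildcards, then more literal characters."""
    matches = [p for p in patterns if fnmatchcase(collection, p)]
    if not matches:
        return None
    return max(matches, key=lambda p: (-p.count("*"), len(p.replace("*", ""))))

print(most_specific_pattern(["log*", "logSpecial"], "logSpecial"))  # logSpecial
```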

You can't use a name or a prefix in a policy if it already exists in another policy. OpenSearch Serverless displays an error if you try to configure identical resource patterns in different encryption policies.

### Considerations
<a name="serverless-encryption-considerations"></a>

Consider the following when you configure encryption for your collections:
+ Encryption at rest is *required* for all serverless collections.
+ You have the option to use a customer managed key or an AWS owned key. If you choose a customer managed key, we recommend that you enable [automatic key rotation](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html).
+ You can't change the encryption key for a collection after the collection is created. Carefully choose which AWS KMS key to use the first time you set up a collection.
+ A collection can only match a single encryption policy.
+ Collections with unique KMS keys can't share OpenSearch Compute Units (OCUs) with other collections. Each collection with a unique key requires its own 4 OCUs.
+ If you update the KMS key in an encryption policy, the change doesn't affect existing matching collections with KMS keys already assigned.
+ OpenSearch Serverless doesn't explicitly check user permissions on customer managed keys. If a user has permissions to access a collection through a data access policy, they will be able to ingest and query the data that is encrypted with the associated key.

### Permissions required
<a name="serverless-encryption-permissions"></a>

Encryption at rest for OpenSearch Serverless uses the following AWS Identity and Access Management (IAM) permissions. You can specify IAM conditions to restrict users to specific collections.
+ `aoss:CreateSecurityPolicy` – Create an encryption policy.
+ `aoss:ListSecurityPolicies` – List all encryption policies and collections that they are attached to.
+ `aoss:GetSecurityPolicy` – See details of a specific encryption policy.
+ `aoss:UpdateSecurityPolicy` – Modify an encryption policy.
+ `aoss:DeleteSecurityPolicy` – Delete an encryption policy.

The following sample identity-based access policy provides the minimum permissions necessary for a user to manage encryption policies with the resource pattern `collection/application-logs`.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "aoss:CreateSecurityPolicy",
            "aoss:UpdateSecurityPolicy",
            "aoss:DeleteSecurityPolicy",
            "aoss:GetSecurityPolicy"
         ],
         "Resource":"*",
         "Condition":{
            "StringEquals":{
               "aoss:collection":"application-logs"
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "aoss:ListSecurityPolicies"
         ],
         "Resource":"*"
      }
   ]
}
```

------

### Key policy for a customer managed key
<a name="serverless-customer-cmk-policy"></a>

If you select a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) to protect a collection, OpenSearch Serverless gets permission to use the KMS key on behalf of the principal who makes the selection. That principal, a user or role, must have the permissions on the KMS key that OpenSearch Serverless requires. You can provide these permissions in a [key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) or an [IAM policy](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies.html).

OpenSearch Serverless makes `GenerateDataKey` and `Decrypt` KMS API calls during maintenance operations such as autoscaling and software updates. You might observe these calls outside your typical traffic patterns. These calls are part of normal service operations and don't indicate active user traffic. 

OpenSearch Serverless throws a `KMSKeyInaccessibleException` when it cannot access the KMS key that encrypts your data at rest. This occurs when you disable or delete the KMS key, or revoke the grants that allow OpenSearch Serverless to use the key.

At a minimum, OpenSearch Serverless requires the following permissions on a customer managed key:
+ [kms:DescribeKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_DescribeKey.html)
+ [kms:CreateGrant](https://docs.aws.amazon.com/kms/latest/APIReference/API_CreateGrant.html)

For example:

------
#### [ JSON ]

****  

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
        "Action": "kms:DescribeKey",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Dale"
        },
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "kms:ViaService": "aoss.us-east-1.amazonaws.com"
            }
        }
    },
    {
        "Action": "kms:CreateGrant",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::123456789012:user/Dale"
        },
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "kms:ViaService": "aoss.us-east-1.amazonaws.com"
            },
            "ForAllValues:StringEquals": {
                "kms:GrantOperations": [
                    "Decrypt",
                    "GenerateDataKey"
                ]
            },
            "Bool": {
                "kms:GrantIsForAWSResource": "true"
            }
        }
    }
  ]
}
```

------

OpenSearch Serverless creates a grant with the [kms:GenerateDataKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html) and [kms:Decrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html) permissions.

For more information, see [Using key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*.

### How OpenSearch Serverless uses grants in AWS KMS
<a name="serverless-encryption-grants"></a>

OpenSearch Serverless requires a [grant](https://docs.aws.amazon.com/kms/latest/developerguide/grants.html) in order to use a customer managed key.

When you create an encryption policy in your account with a new key, OpenSearch Serverless creates a grant on your behalf by sending a [CreateGrant](https://docs.aws.amazon.com/kms/latest/APIReference/API_CreateGrant.html) request to AWS KMS. Grants in AWS KMS are used to give OpenSearch Serverless access to a KMS key in a customer account.

OpenSearch Serverless requires the grant to use your customer managed key for the following internal operations:
+ Send [DescribeKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_DescribeKey.html) requests to AWS KMS to verify that the symmetric customer managed key ID provided is valid. 
+ Send [GenerateDataKey](https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html) requests to AWS KMS to create data keys with which to encrypt objects.
+ Send [Decrypt](https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html) requests to AWS KMS to decrypt the encrypted data keys so that they can be used to encrypt your data. 

You can revoke access to the grant, or remove the service's access to the customer managed key at any time. If you do, OpenSearch Serverless won't be able to access any of the data encrypted by the customer managed key, which affects all the operations that are dependent on that data, leading to `AccessDeniedException` errors and failures in the asynchronous workflows.

OpenSearch Serverless retires grants in an asynchronous workflow when a given customer managed key isn't associated with any security policies or collections.

### Creating encryption policies (console)
<a name="serverless-encryption-console"></a>

In an encryption policy, you specify a KMS key and a series of collection patterns that the policy applies to. Any new collection that matches one of the patterns defined in the policy is assigned the corresponding KMS key when you create the collection. We recommend that you create encryption policies *before* you start creating collections.

**To create an OpenSearch Serverless encryption policy**

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. On the left navigation panel, expand **Serverless** and choose **Encryption policies**.

1. Choose **Create encryption policy**.

1. Provide a name and description for the policy.

1. Under **Resources**, enter one or more resource patterns for this encryption policy. Any newly created collections in the current AWS account and Region that match one of the patterns are automatically assigned to this policy. For example, if you enter `ApplicationLogs` (with no wildcard), and later create a collection with that name, the policy and corresponding KMS key are assigned to that collection.

   You can also provide a prefix such as `Logs*`, which assigns the policy to any new collections with names beginning with `Logs`. By using wildcards, you can manage encryption settings for multiple collections at scale.

1. Under **Encryption**, choose a KMS key to use.

1. Choose **Create**.

#### Next step: Create collections
<a name="serverless-encryption-next"></a>

After you configure one or more encryption policies, you can start creating collections that match the rules defined in those policies. For instructions, see [Creating collections](serverless-create.md).

In the **Encryption** step of collection creation, OpenSearch Serverless informs you that the name that you entered matches the pattern defined in an encryption policy, and automatically assigns the corresponding KMS key to the collection. If the resource pattern contains a wildcard (`*`), you can choose to override the match and select your own key.

### Creating encryption policies (AWS CLI)
<a name="serverless-encryption-cli"></a>

To create an encryption policy using the OpenSearch Serverless API operations, you specify resource patterns and an encryption key in JSON format. The [CreateSecurityPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_CreateSecurityPolicy.html) request accepts both inline policies and .json files.

Encryption policies take the following format. This sample `my-policy.json` file matches any future collection named `autopartsinventory`, as well as any collections with names beginning with `sales`.

```
{
   "Rules":[
      {
         "ResourceType":"collection",
         "Resource":[
            "collection/autopartsinventory",
            "collection/sales*"
         ]
      }
   ],
   "AWSOwnedKey":false,
   "KmsARN":"arn:aws:kms:us-east-1:123456789012:key/93fd6da4-a317-4c17-bfe9-382b5d988b36"
}
```

To use a service-owned key, set `AWSOwnedKey` to `true`:

```
{
   "Rules":[
      {
         "ResourceType":"collection",
         "Resource":[
            "collection/autopartsinventory",
            "collection/sales*"
         ]
      }
   ],
   "AWSOwnedKey":true
}
```

The following request creates the encryption policy:

```
aws opensearchserverless create-security-policy \
    --name sales-inventory \
    --type encryption \
    --policy file://my-policy.json
```

Then, use the [CreateCollection](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_CreateCollection.html) API operation to create one or more collections that match one of the resource patterns.

### Viewing encryption policies
<a name="serverless-encryption-list"></a>

Before you create a collection, you might want to preview the existing encryption policies in your account to see which one has a resource pattern that matches your collection's name. The following [ListSecurityPolicies](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_ListSecurityPolicies.html) request lists all encryption policies in your account:

```
aws opensearchserverless list-security-policies --type encryption
```

The request returns information about all configured encryption policies. Use the contents of the `policy` element to view the pattern rules that are defined in the policy:

```
{
   "securityPolicyDetails": [ 
      { 
         "createdDate": 1663693217826,
         "description": "Sample encryption policy",
         "lastModifiedDate": 1663693217826,
         "name": "my-policy",
         "policy": "{\"Rules\":[{\"ResourceType\":\"collection\",\"Resource\":[\"collection/autopartsinventory\",\"collection/sales*\"]}],\"AWSOwnedKey\":true}",
         "policyVersion": "MTY2MzY5MzIxNzgyNl8x",
         "type": "encryption"
      }
   ]
}
```

To view detailed information about a specific policy, including the KMS key, use the [GetSecurityPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_GetSecurityPolicy.html) command.

### Updating encryption policies
<a name="serverless-encryption-update"></a>

If you update the KMS key in an encryption policy, the change only applies to the newly created collections that match the configured name or pattern. It doesn't affect existing collections that have KMS keys already assigned. 

The same applies to policy matching rules. If you add, modify, or delete a rule, the change only applies to newly created collections. Existing collections don't lose their assigned KMS key if you modify a policy's rules so that it no longer matches a collection's name.

To update an encryption policy in the OpenSearch Serverless console, choose **Encryption policies**, select the policy to modify, and choose **Edit**. Make your changes and choose **Save**.

To update an encryption policy using the OpenSearch Serverless API, use the [UpdateSecurityPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_UpdateSecurityPolicy.html) operation. The following request updates an encryption policy with a new policy JSON document:

```
aws opensearchserverless update-security-policy \
    --name sales-inventory \
    --type encryption \
    --policy-version 2 \
    --policy file://my-new-policy.json
```

### Deleting encryption policies
<a name="serverless-encryption-delete"></a>

When you delete an encryption policy, any collections that are currently using the KMS key defined in the policy are not affected. To delete a policy in the OpenSearch Serverless console, select the policy and choose **Delete**.

You can also use the [DeleteSecurityPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_DeleteSecurityPolicy.html) operation:

```
aws opensearchserverless delete-security-policy --name my-policy --type encryption
```

## Encryption in transit
<a name="serverless-encryption-in-transit"></a>

Within OpenSearch Serverless, all paths in a collection are encrypted in transit using Transport Layer Security (TLS) 1.2 with an industry-standard AES-256 cipher. Access to all OpenSearch APIs and Dashboards is also over TLS 1.2. TLS is a set of industry-standard cryptographic protocols for encrypting information exchanged over a network.
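On the client side, you can make sure your own tooling never negotiates below TLS 1.2. A minimal sketch using Python's standard `ssl` module (a generic client configuration, not a collection-specific setting):

```python
import ssl

# Refuse anything below TLS 1.2, matching the transport encryption
# the service enforces.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```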

# Network access for Amazon OpenSearch Serverless
<a name="serverless-network"></a>

The network settings for an Amazon OpenSearch Serverless collection determine whether the collection is accessible over the internet from public networks, or whether it must be accessed privately.

Private access can apply to one or both of the following:
+ OpenSearch Serverless-managed VPC endpoints
+ Supported AWS services such as Amazon Bedrock

You can configure network access separately for a collection's *OpenSearch* endpoint and its corresponding *OpenSearch Dashboards* endpoint.

Network access is the isolation mechanism for allowing access from different source networks. For example, if a collection's OpenSearch Dashboards endpoint is publicly accessible but the OpenSearch API endpoint isn't, a user can access the collection data only through Dashboards when connecting from a public network. If they try to call the OpenSearch APIs directly from a public network, they're blocked. You can use network settings for any such combination of source network and resource type. Amazon OpenSearch Serverless supports both IPv4 and IPv6 connectivity.

**Topics**
+ [Network policies](#serverless-network-policies)
+ [Considerations](#serverless-network-considerations)
+ [Permissions required to configure network policies](#serverless-network-permissions)
+ [Policy precedence](#serverless-network-precedence)
+ [Creating network policies (console)](#serverless-network-console)
+ [Creating network policies (AWS CLI)](#serverless-network-cli)
+ [Viewing network policies](#serverless-network-list)
+ [Updating network policies](#serverless-network-update)
+ [Deleting network policies](#serverless-network-delete)

## Network policies
<a name="serverless-network-policies"></a>

Network policies let you manage many collections at scale by automatically assigning network access settings to collections that match the rules defined in the policy.

In a network policy, you specify a series of *rules*. These rules define access permissions to collection endpoints and OpenSearch Dashboards endpoints. Each rule consists of an access type (public or private) and a resource type (collection and/or OpenSearch Dashboards endpoint). For each resource type (`collection` and `dashboard`), you specify which collections the rule applies to.

In the following sample policy, the first rule specifies VPC endpoint access to both the collection endpoint and the Dashboards endpoint for all collections whose names begin with `marketing`. It also specifies Amazon Bedrock access.

**Note**  
Private access to AWS services such as Amazon Bedrock *only* applies to the collection's OpenSearch endpoint, not to the OpenSearch Dashboards endpoint. Even if the `ResourceType` is `dashboard`, AWS services cannot be granted access to OpenSearch Dashboards.

The second rule specifies public access to the `finance` collection, but only for the collection endpoint (no Dashboards access).

```
[
   {
      "Description":"Marketing access",
      "Rules":[
         {
            "ResourceType":"collection",
            "Resource":[
               "collection/marketing*"
            ]
         },
         {
            "ResourceType":"dashboard",
            "Resource":[
               "collection/marketing*"
            ]
         }
      ],
      "AllowFromPublic":false,
      "SourceVPCEs":[
         "vpce-050f79086ee71ac05"
      ],
      "SourceServices":[
         "bedrock.amazonaws.com"
      ]
   },
   {
      "Description":"Sales access",
      "Rules":[
         {
            "ResourceType":"collection",
            "Resource":[
               "collection/finance"
            ]
         }
      ],
      "AllowFromPublic":true
   }
]
```

This policy provides public access only to OpenSearch Dashboards for collections whose names begin with `finance`. Any attempt to access the OpenSearch API directly from a public network will fail.

```
[
  {
    "Description": "Dashboards access",
    "Rules": [
      {
        "ResourceType": "dashboard",
        "Resource": [
          "collection/finance*"
        ]
      }
    ],
    "AllowFromPublic": true
  }
]
```

Network policies can apply to existing collections as well as future collections. For example, you can create a collection and then create a network policy with a rule that matches the collection name. You don't need to create network policies before you create collections.

## Considerations
<a name="serverless-network-considerations"></a>

Consider the following when you configure network access for your collections:
+ If you plan to configure VPC endpoint access for a collection, you must first create at least one [OpenSearch Serverless-managed VPC endpoint](serverless-vpc.md).
+ Private access to AWS services only applies to the collection's OpenSearch endpoint, not to the OpenSearch Dashboards endpoint. Even if the `ResourceType` is `dashboard`, AWS services cannot be granted access to OpenSearch Dashboards.
+ If a collection is accessible from public networks, it's also accessible from all OpenSearch Serverless-managed VPC endpoints and all AWS services.
+ Multiple network policies can apply to a single collection. For more information, see [Policy precedence](#serverless-network-precedence).

## Permissions required to configure network policies
<a name="serverless-network-permissions"></a>

Network access for OpenSearch Serverless uses the following AWS Identity and Access Management (IAM) permissions. You can specify IAM conditions to restrict users to network policies associated with specific collections.
+ `aoss:CreateSecurityPolicy` – Create a network access policy.
+ `aoss:ListSecurityPolicies` – List all network policies in the current account.
+ `aoss:GetSecurityPolicy` – View a network access policy specification.
+ `aoss:UpdateSecurityPolicy` – Modify a given network access policy, and change the VPC ID or public access designation.
+ `aoss:DeleteSecurityPolicy` – Delete a network access policy (after it's detached from all collections).

The following identity-based access policy allows a user to view all network policies, and update policies with the resource pattern `collection/application-logs`:

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aoss:UpdateSecurityPolicy"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aoss:collection": "application-logs"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "aoss:ListSecurityPolicies",
                "aoss:GetSecurityPolicy"
            ],
            "Resource": "*"
        }
    ]
}
```

------

**Note**  
In addition, OpenSearch Serverless requires the `aoss:APIAccessAll` and `aoss:DashboardsAccessAll` permissions for collection resources. For more information, see [Using OpenSearch API operations](security-iam-serverless.md#security_iam_id-based-policy-examples-data-plane).

## Policy precedence
<a name="serverless-network-precedence"></a>

There can be situations where network policy rules overlap, within or across policies. When this happens, a rule that specifies public access overrides a rule that specifies private access for any collections that are common to *both* rules.

For example, in the following policy, both rules assign network access to the `finance` collection, but one rule specifies VPC access while the other specifies public access. In this situation, public access overrides VPC access *only for the finance collection* (because it exists in both rules), so the finance collection will be accessible from public networks. The sales collection will have VPC access from the specified endpoint.

```
[
   {
      "Description":"Rule 1",
      "Rules":[
         {
            "ResourceType":"collection",
            "Resource":[
               "collection/sales",
               "collection/finance"
            ]
         }
      ],
      "AllowFromPublic":false,
      "SourceVPCEs":[
         "vpce-050f79086ee71ac05"
      ]
   },
   {
      "Description":"Rule 2",
      "Rules":[
         {
            "ResourceType":"collection",
            "Resource":[
               "collection/finance"
            ]
         }
      ],
      "AllowFromPublic":true
   }
]
```

If multiple VPC endpoints from different rules apply to a collection, the rules are additive and the collection will be accessible from all specified endpoints. If you set `AllowFromPublic` to `true` but also provide one or more `SourceVPCEs` or `SourceServices`, OpenSearch Serverless ignores the VPC endpoints and service identifiers, and the associated collections will have public access.
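The precedence rules above can be sketched as a small model. This is illustrative only (not an official API): it matches a collection name against every rule in a policy document, lets public access win, and accumulates VPC endpoints additively.

```python
import fnmatch

def effective_network_access(policy, collection):
    """Resolve effective network access for one collection across all
    matching rules: public access wins, and VPC endpoints are additive."""
    public = False
    vpces = set()
    for statement in policy:
        for rule in statement.get("Rules", []):
            if rule.get("ResourceType") != "collection":
                continue
            patterns = [r.split("/", 1)[1] for r in rule.get("Resource", [])]
            if any(fnmatch.fnmatch(collection, p) for p in patterns):
                if statement.get("AllowFromPublic"):
                    public = True  # public overrides private for this collection
                else:
                    vpces.update(statement.get("SourceVPCEs", []))
    # Note: a public collection is also reachable from all VPC endpoints.
    return {"public": public, "vpces": vpces}
```

Running this against the sample policy above reports the `finance` collection as public and the `sales` collection as reachable only from `vpce-050f79086ee71ac05`.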

## Creating network policies (console)
<a name="serverless-network-console"></a>

Network policies can apply to existing collections as well as future collections. We recommend that you create network policies before you start creating collections.

**To create an OpenSearch Serverless network policy**

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. On the left navigation panel, expand **Serverless** and choose **Network policies**.

1. Choose **Create network policy**.

1. Provide a name and description for the policy.

1. Provide one or more *rules*. These rules define access permissions for your OpenSearch Serverless collections and their OpenSearch Dashboards endpoints.

   Each rule contains the following elements:
   + **Access type** – Public or private.
   + **Resource type** – The collection endpoint, the OpenSearch Dashboards endpoint, or both.

   For each resource type that you select, you can choose existing collections to apply the policy settings to, and/or create one or more resource patterns. Resource patterns consist of a prefix and a wildcard (`*`), and define which collections the policy settings will apply to.

   For example, if you include a pattern called `Marketing*`, any new or existing collections whose names start with "Marketing" will have the network settings in this policy automatically applied to them. A single wildcard (`*`) applies the policy to all current and future collections.

   In addition, you can specify the name of a *future* collection without a wildcard, such as `Finance`. OpenSearch Serverless will apply the policy settings to any newly created collection with that exact name.

1. When you're satisfied with your policy configuration, choose **Create**.

## Creating network policies (AWS CLI)
<a name="serverless-network-cli"></a>

To create a network policy using the OpenSearch Serverless API operations, you specify rules in JSON format. The [CreateSecurityPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_CreateSecurityPolicy.html) request accepts both inline policies and .json files. All collections and patterns must take the form `collection/<collection name|pattern>`.

**Note**  
The resource type `dashboard` only allows permission to OpenSearch Dashboards, but in order for OpenSearch Dashboards to function, you must also allow collection access from the same sources. See the second policy below for an example.

To specify private access, include one or both of the following elements:
+ `SourceVPCEs` – Specify one or more OpenSearch Serverless–managed VPC endpoints.
+ `SourceServices` – Specify the identifier of one or more supported AWS services. Currently, the following service identifiers are supported:
  + `bedrock.amazonaws.com` – Amazon Bedrock

The following sample network policy provides private access (through a VPC endpoint and Amazon Bedrock) to the collection endpoints of collections whose names begin with `log`. Authenticated users can't sign in to OpenSearch Dashboards; they can only access the collection endpoint programmatically.

```
[
   {
      "Description":"Private access for log collections",
      "Rules":[
         {
            "ResourceType":"collection",
            "Resource":[
               "collection/log*"
            ]
         }
      ],
      "AllowFromPublic":false,
      "SourceVPCEs":[
         "vpce-050f79086ee71ac05"
      ],
      "SourceServices":[
         "bedrock.amazonaws.com"
      ]
   }
]
```

The following policy provides public access to the OpenSearch endpoint *and* OpenSearch Dashboards for a single collection named `finance`. If the collection doesn't exist, the network settings will be applied to the collection if and when it's created.

```
[
   {
      "Description":"Public access for finance collection",
      "Rules":[
         {
            "ResourceType":"dashboard",
            "Resource":[
               "collection/finance"
            ]
         },
         {
            "ResourceType":"collection",
            "Resource":[
               "collection/finance"
            ]
         }
      ],
      "AllowFromPublic":true
   }
]
```

The following request creates the above network policy:

```
aws opensearchserverless create-security-policy \
    --name sales-inventory \
    --type network \
    --policy "[{\"Description\":\"Public access for finance collection\",\"Rules\":[{\"ResourceType\":\"dashboard\",\"Resource\":[\"collection\/finance\"]},{\"ResourceType\":\"collection\",\"Resource\":[\"collection\/finance\"]}],\"AllowFromPublic\":true}]"
```

To provide the policy in a JSON file, use the format `--policy file://my-policy.json`.
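The CLI call above can also be issued from the AWS SDKs. The following boto3 sketch (function names are illustrative; it assumes boto3 is installed and AWS credentials and a Region are configured) builds the same policy document programmatically and submits it:

```python
import json

def make_network_policy_doc(collection, allow_public=True):
    """Build a network policy document like the CLI example above: one rule
    set covering a collection endpoint and its Dashboards endpoint."""
    return json.dumps([{
        "Description": f"Public access for {collection} collection",
        "Rules": [
            {"ResourceType": "dashboard",
             "Resource": [f"collection/{collection}"]},
            {"ResourceType": "collection",
             "Resource": [f"collection/{collection}"]},
        ],
        "AllowFromPublic": allow_public,
    }])

def create_network_policy(name, collection):
    """Submit the policy with boto3 (requires boto3 and AWS credentials)."""
    import boto3
    client = boto3.client("opensearchserverless")
    return client.create_security_policy(
        name=name,
        type="network",
        policy=make_network_policy_doc(collection),
    )
```

For example, `create_network_policy("sales-inventory", "finance")` creates the same policy as the CLI request above.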

## Viewing network policies
<a name="serverless-network-list"></a>

Before you create a collection, you might want to preview the existing network policies in your account to see which one has a resource pattern that matches your collection's name. The following [ListSecurityPolicies](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_ListSecurityPolicies.html) request lists all network policies in your account:

```
aws opensearchserverless list-security-policies --type network
```

The request returns information about all configured network policies. To view the pattern rules defined in a specific policy, find the policy information in the `securityPolicySummaries` element of the response. Note the policy's `name` and `type`, and use these properties in a [GetSecurityPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_GetSecurityPolicy.html) request to receive a response with the following policy details:

```
{
    "securityPolicyDetail": [
        {
            "type": "network",
            "name": "my-policy",
            "policyVersion": "MTY2MzY5MTY1MDA3Ml8x",
            "policy": "[{\"Description\":\"My network policy rule\",\"Rules\":[{\"ResourceType\":\"dashboard\",\"Resource\":[\"collection/*\"]}],\"AllowFromPublic\":true}]",
            "createdDate": 1663691650072,
            "lastModifiedDate": 1663691650072
        }
    ]
}
```

To view detailed information about a specific policy, use the [GetSecurityPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_GetSecurityPolicy.html) command.
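Because the policy rules come back as an embedded JSON string, you typically decode them before inspection. The following sketch (the helper name is illustrative) parses a response shaped like the sample above:

```python
import json

def extract_policy_rules(get_response):
    """Decode the embedded policy document from a GetSecurityPolicy
    response shaped like the sample shown above."""
    detail = get_response["securityPolicyDetail"]
    if isinstance(detail, list):  # the sample above shows a list
        detail = detail[0]
    return json.loads(detail["policy"])
```

With boto3, you could feed it the live response: `extract_policy_rules(client.get_security_policy(name="my-policy", type="network"))`.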

## Updating network policies
<a name="serverless-network-update"></a>

When you modify the VPC endpoints or public access designation for a network policy, all associated collections are affected. To update a network policy in the OpenSearch Serverless console, expand **Network policies**, select the policy to modify, and choose **Edit**. Make your changes and choose **Save**.

To update a network policy using the OpenSearch Serverless API, use the [UpdateSecurityPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_UpdateSecurityPolicy.html) command. You must include a policy version in the request. You can retrieve the policy version by using the `ListSecurityPolicies` or `GetSecurityPolicy` commands. Including the most recent policy version ensures that you don't inadvertently override a change made by someone else. 

The following request updates a network policy with a new policy JSON document:

```
aws opensearchserverless update-security-policy \
    --name sales-inventory \
    --type network \
    --policy-version MTY2MzY5MTY1MDA3Ml8x \
    --policy file://my-new-policy.json
```
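The read-modify-write flow (fetch the current version, then update with it) can be sketched in boto3 as follows. The function name is illustrative, and the client is passed in so you can supply `boto3.client("opensearchserverless")`:

```python
def update_network_policy(client, name, new_policy_doc):
    """Fetch the current policyVersion, then submit the update with it so
    a concurrent change isn't silently overwritten (optimistic locking)."""
    current = client.get_security_policy(name=name, type="network")
    detail = current["securityPolicyDetail"]
    if isinstance(detail, list):
        detail = detail[0]
    return client.update_security_policy(
        name=name,
        type="network",
        policyVersion=detail["policyVersion"],
        policy=new_policy_doc,
    )
```

For example: `update_network_policy(boto3.client("opensearchserverless"), "sales-inventory", open("my-new-policy.json").read())`.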

## Deleting network policies
<a name="serverless-network-delete"></a>

Before you can delete a network policy, you must detach it from all collections. To delete a policy in the OpenSearch Serverless console, select the policy and choose **Delete**.

You can also use the [DeleteSecurityPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_DeleteSecurityPolicy.html) command:

```
aws opensearchserverless delete-security-policy --name my-policy --type network
```

# FIPS compliance in Amazon OpenSearch Serverless
<a name="fips-compliance-opensearch-serverless"></a>

Amazon OpenSearch Serverless supports Federal Information Processing Standards (FIPS) 140-2, which is a U.S. and Canadian government standard that specifies security requirements for cryptographic modules that protect sensitive information. When you connect to FIPS-enabled endpoints with OpenSearch Serverless, cryptographic operations occur using FIPS-validated cryptographic libraries.

OpenSearch Serverless FIPS endpoints are available in AWS Regions where FIPS is supported. These endpoints use TLS 1.2 or later and FIPS-validated cryptographic algorithms for all communications. For more information, see [FIPS compliance](https://docs.aws.amazon.com/verified-access/latest/ug/fips-compliance.html) in the *AWS Verified Access User Guide*.

**Topics**
+ [Using FIPS endpoints with OpenSearch Serverless](#using-fips-endpoints-opensearch-serverless)
+ [Use FIPS endpoints with AWS SDKs](#using-fips-endpoints-aws-sdks)
+ [Configure security groups for VPC endpoints](#configuring-security-groups-vpc-endpoints)
+ [Use the FIPS VPC endpoint](#using-fips-vpc-endpoint)
+ [Verify FIPS compliance](#verifying-fips-compliance)
+ [Resolve FIPS endpoint connectivity issues in private hosted zones](serverless-fips-endpoint-issues.md)

## Using FIPS endpoints with OpenSearch Serverless
<a name="using-fips-endpoints-opensearch-serverless"></a>

In AWS Regions where FIPS is supported, OpenSearch Serverless collections are accessible through both standard and FIPS-compliant endpoints. For more information, see [FIPS compliance](https://docs.aws.amazon.com/verified-access/latest/ug/fips-compliance.html) in the *AWS Verified Access User Guide*.

In the following examples, replace *collection-id* and *region* with your collection ID and its AWS Region.
+ **Standard endpoint** – `https://collection-id.region.aoss.amazonaws.com`
+ **FIPS-compliant endpoint** – `https://collection-id.region.aoss-fips.amazonaws.com`

Similarly, OpenSearch Dashboards is accessible through both standard and FIPS-compliant endpoints:
+ **Standard Dashboards endpoint** – `https://collection-id.region.aoss.amazonaws.com/_dashboards`
+ **FIPS-compliant Dashboards endpoint** – `https://collection-id.region.aoss-fips.amazonaws.com/_dashboards`

**Note**  
In FIPS-enabled Regions, both standard and FIPS-compliant endpoints provide FIPS-compliant cryptography. The FIPS-specific endpoints help you meet compliance requirements that specifically mandate the use of endpoints with **FIPS** in the name.
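The endpoint patterns above can be captured in a small helper. This is a sketch (the function name is illustrative) assuming the host-name patterns shown in the list above:

```python
def collection_endpoint(collection_id, region, fips=False, dashboards=False):
    """Build a collection or Dashboards URL; fips=True selects the
    FIPS-compliant host name."""
    service = "aoss-fips" if fips else "aoss"
    url = f"https://{collection_id}.{region}.{service}.amazonaws.com"
    return (url + "/_dashboards") if dashboards else url
```

For example, `collection_endpoint("abc123", "us-east-1", fips=True)` yields the FIPS-compliant collection URL for that Region.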

## Use FIPS endpoints with AWS SDKs
<a name="using-fips-endpoints-aws-sdks"></a>

When using AWS SDKs, you can specify the FIPS endpoint when creating the client. In the following example, replace *collection-id* and the Region with your collection ID and its AWS Region.

```
# Python SDK example
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth
import boto3
host = 'collection-id.us-west-2.aoss-fips.amazonaws.com'  # host name only, no scheme
region = 'us-west-2'
service = 'aoss'
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, region, service)
client = OpenSearch(
    hosts = [{'host': host, 'port': 443}],
    http_auth = auth,
    use_ssl = True,
    verify_certs = True,
    connection_class = RequestsHttpConnection,
    pool_maxsize = 20
)
```

## Configure security groups for VPC endpoints
<a name="configuring-security-groups-vpc-endpoints"></a>

To ensure proper communication with your FIPS-compliant virtual private cloud (VPC) endpoint, create or modify a security group to allow inbound HTTPS traffic (TCP port 443) from the resources in your VPC that need to access OpenSearch Serverless. Then associate this security group with your VPC endpoint during creation or by modifying the endpoint after creation. For more information, see [Create a security group](https://docs.aws.amazon.com/vpc/latest/userguide/creating-security-groups.html) in the *Amazon VPC User Guide*.

## Use the FIPS VPC endpoint
<a name="using-fips-vpc-endpoint"></a>

After creating the FIPS-compliant VPC endpoint, you can use it to access OpenSearch Serverless from resources within your VPC. To use the endpoint for API operations, configure your SDK to use the Regional FIPS endpoint as described in the [Using FIPS endpoints with OpenSearch Serverless](#using-fips-endpoints-opensearch-serverless) section. For OpenSearch Dashboards access, use the collection-specific Dashboards URL, which will automatically route through the FIPS-compliant VPC endpoint when accessed from within your VPC. For more information, see [Using OpenSearch Dashboards with Amazon OpenSearch Service](dashboards.md).

## Verify FIPS compliance
<a name="verifying-fips-compliance"></a>

To verify that your connections to OpenSearch Serverless are using FIPS-compliant cryptography, use AWS CloudTrail to monitor API calls made to OpenSearch Serverless. Check that the `eventSource` field in CloudTrail logs displays `aoss-fips.amazonaws.com` for API calls.

For OpenSearch Dashboards access, you can use browser developer tools to inspect the TLS connection details and verify that FIPS-compliant cipher suites are being used. 
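The CloudTrail check can be automated with boto3's `lookup_events` operation. This is a sketch (the function name is illustrative; the client is passed in so you can supply `boto3.client("cloudtrail")` with credentials configured):

```python
def fips_api_event_names(cloudtrail, max_results=10):
    """Return recent event names whose eventSource is the FIPS endpoint,
    confirming that callers used FIPS-compliant endpoints."""
    response = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventSource",
                           "AttributeValue": "aoss-fips.amazonaws.com"}],
        MaxResults=max_results,
    )
    return [event["EventName"] for event in response.get("Events", [])]
```

An empty result over a period of active use suggests that callers are still hitting the standard (non-FIPS) endpoint.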

# Resolve FIPS endpoint connectivity issues in private hosted zones
<a name="serverless-fips-endpoint-issues"></a>

FIPS endpoints work with Amazon OpenSearch Serverless collections that have public access. For newly created VPC collections that use newly created VPC endpoints, FIPS endpoints function as expected. For other VPC collections, you might need to perform manual setup to ensure FIPS endpoints operate correctly.

**To configure FIPS private hosted zones in Amazon Route 53**

1. Open the Route 53 console at [https://console.aws.amazon.com/route53/](https://console.aws.amazon.com/route53/).

1. Review your hosted zones:

   1. Locate the hosted zones for the AWS Regions your collections are in.

   1. Verify the hosted zone naming patterns:
      + Non-FIPS format: `region.aoss.amazonaws.com`.
      + FIPS format: `region.aoss-fips.amazonaws.com`.

   1. Confirm the **Type** for all of your hosted zones is set to **Private hosted zone**.

1. If the FIPS private hosted zone is missing:

   1. Select the corresponding non-FIPS private hosted zone.

   1. Copy the **Associated VPCs** information. For example: `vpc-1234567890abcdef0 | us-east-2`.

   1. Find the wildcard domain record. For example: `*.us-east-2.aoss.amazonaws.com`.

   1. Copy the **Value/Route traffic to** information. For example: `uoc1c1qsw7poexampleewjeno1pte3rw.3ym756xh7yj.aoss.searchservices.aws`.

1. Create the FIPS private hosted zone:

   1. Create a new private hosted zone with the FIPS format. For example: `us-east-2.aoss-fips.amazonaws.com`.

   1. For **Associated VPCs**, enter the VPC information you copied from the non-FIPS private hosted zone.

1. Add a new record with the following settings:

   1. Record name: `*`

   1. Record type: CNAME

   1. Value: Enter the **Value/Route traffic to** information you copied earlier.
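The console steps above can also be scripted with the Route 53 API. The following boto3 sketch (function name and comment values are illustrative; it assumes boto3 and credentials are available, and the client is passed in) creates the FIPS private hosted zone and its wildcard CNAME:

```python
import time

def create_fips_hosted_zone(route53, region, vpc_id, vpc_region, cname_target):
    """Create the missing FIPS private hosted zone and add the wildcard
    CNAME copied from the non-FIPS zone (steps 4 and 5 above)."""
    zone = route53.create_hosted_zone(
        Name=f"{region}.aoss-fips.amazonaws.com",
        VPC={"VPCRegion": vpc_region, "VPCId": vpc_id},
        CallerReference=str(time.time()),  # must be unique per request
        HostedZoneConfig={"Comment": "FIPS zone for OpenSearch Serverless",
                          "PrivateZone": True},
    )
    zone_id = zone["HostedZone"]["Id"]
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": f"*.{region}.aoss-fips.amazonaws.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": cname_target}],
            },
        }]},
    )
    return zone_id
```

Pass `boto3.client("route53")` along with the VPC information and the **Value/Route traffic to** value you copied from the non-FIPS zone.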

## Common issues
<a name="serverless-fips-endpoint-common-problems"></a>

If you experience connectivity issues with your FIPS-compliant VPC endpoints, use the following information to help resolve the problem.
+ DNS resolution failures – You can't resolve the FIPS endpoint domain name within your VPC.
+ Connection timeouts – Your requests to the FIPS endpoint time out.
+ Access denied errors – Authentication or authorization fails when using FIPS endpoints.
+ Missing private hosted zone records for VPC-only collections.

**To troubleshoot FIPS endpoint connectivity**

1. Verify your Private Hosted Zone configuration:

   1. Confirm that a private hosted zone exists for the FIPS endpoint domain (`*.region.aoss-fips.amazonaws.com`).

   1. Verify that the private hosted zone is associated with the correct VPC.

      For more information, see [Private hosted zones](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) in the *Amazon Route 53 Developer Guide*, and [Manage DNS names](https://docs.aws.amazon.com/vpc/latest/privatelink/manage-dns-names.html) in the *AWS PrivateLink Guide*.

1. Test DNS resolution:

   1. Connect to an EC2 instance in your VPC.

   1. Run the following command:

      ```
      nslookup collection-id.region.aoss-fips.amazonaws.com
      ```

   1. Confirm that the response includes the private IP address of your VPC endpoint.

      For more information, see [Endpoint policies](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html#endpoint-dns-verification), and [DNS attributes](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-troubleshooting) in the *Amazon VPC User Guide*.

1. Check your security group settings:

   1. Verify that the security group attached to the VPC endpoint permits HTTPS traffic (port 443) from your resources.

   1. Confirm that security groups for your resources permit outbound traffic to the VPC endpoint.

   For more information, see [Endpoint policies](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html#vpc-endpoint-security-groups) in the *AWS PrivateLink Guide*, and [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html#SecurityGroupRules) in the *Amazon VPC User Guide*.

1. Review your network ACL configuration:

   1. Verify that network ACLs permit traffic between your resources and the VPC endpoint.

     For more information, see [Network ACLs](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-troubleshooting) in the *Amazon VPC User Guide*.

1. Review your endpoint policy:

   1. Check that the VPC endpoint policy permits the required actions on your OpenSearch Serverless resources.

     For more information, see [VPC endpoint permissions required](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-vpc.html#serverless-vpc-permissions), and [Endpoint policies](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html#vpc-endpoint-policies) in the *AWS PrivateLink Guide*.

**Tip**  
If you use custom DNS resolvers in your VPC, configure them to forward requests for `*.amazonaws.com` domains to the AWS servers.

# Data access control for Amazon OpenSearch Serverless
<a name="serverless-data-access"></a>

With data access control in Amazon OpenSearch Serverless, you can allow users to access collections and indexes, regardless of their access mechanism or network source. You can provide access to IAM roles and [SAML identities](serverless-saml.md).

You manage access permissions through *data access policies*, which apply to collections and index resources. Data access policies help you manage collections at scale by automatically assigning access permissions to collections and indexes that match a specific pattern. Multiple data access policies can apply to a single resource. Note that you must have a data access policy for your collection in order to access your OpenSearch Dashboards URL.

**Topics**
+ [Data access policies versus IAM policies](#serverless-data-access-vs-iam)
+ [IAM permissions required to configure data access policies](#serverless-data-access-permissions)
+ [Policy syntax](#serverless-data-access-syntax)
+ [Supported policy permissions](#serverless-data-supported-permissions)
+ [Sample datasets on OpenSearch Dashboards](#serverless-data-sample-index)
+ [Creating data access policies (console)](#serverless-data-access-console)
+ [Creating data access policies (AWS CLI)](#serverless-data-access-cli)
+ [Viewing data access policies](#serverless-data-access-list)
+ [Updating data access policies](#serverless-data-access-update)
+ [Deleting data access policies](#serverless-data-access-delete)
+ [Cross-account data access](#serverless-data-access-cross)

## Data access policies versus IAM policies
<a name="serverless-data-access-vs-iam"></a>

Data access policies are logically separate from AWS Identity and Access Management (IAM) policies. IAM permissions control access to the [serverless API operations](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/Welcome.html), such as `CreateCollection` and `ListAccessPolicies`. Data access policies control access to the [OpenSearch operations](#serverless-data-supported-permissions) that OpenSearch Serverless supports, such as `PUT <index>` or `GET _cat/indices`.

The IAM permissions that control access to data access policy API operations, such as `aoss:CreateAccessPolicy` and `aoss:GetAccessPolicy` (described in the next section), don't affect the permissions specified in a data access policy.

For example, suppose an IAM policy denies a user the ability to create data access policies for `collection-a`, but allows them to create data access policies for all collections (`*`):

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "aoss:CreateAccessPolicy"
            ],
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aoss:collection": "collection-a"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "aoss:CreateAccessPolicy"
            ],
            "Resource": "*"
        }
    ]
}
```

------

If the user creates a data access policy that allows certain permissions for *all* collections (`collection/*` or `index/*/*`), the policy will apply to all collections, including collection A.

**Important**  
Being granted permissions within a data access policy is not sufficient to access data in your OpenSearch Serverless collection. An associated principal must *also* be granted access to the IAM permissions `aoss:APIAccessAll` and `aoss:DashboardsAccessAll`. Both permissions grant full access to collection resources, while the Dashboards permission also provides access to OpenSearch Dashboards. If a principal doesn't have both of these IAM permissions, they will receive 403 errors when attempting to send requests to the collection. For more information, see [Using OpenSearch API operations](security-iam-serverless.md#security_iam_id-based-policy-examples-data-plane).

## IAM permissions required to configure data access policies
<a name="serverless-data-access-permissions"></a>

Data access control for OpenSearch Serverless uses the following IAM permissions. You can specify IAM conditions to restrict users to specific access policy names.
+ `aoss:CreateAccessPolicy` – Create an access policy.
+ `aoss:ListAccessPolicies` – List all access policies.
+ `aoss:GetAccessPolicy` – See details about a specific access policy.
+ `aoss:UpdateAccessPolicy` – Modify an access policy.
+ `aoss:DeleteAccessPolicy` – Delete an access policy.

The following identity-based access policy allows a user to view all access policies, and update policies that contain the resource pattern `collection/logs`.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Action": [
                "aoss:ListAccessPolicies",
                "aoss:GetAccessPolicy"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Action": [
                "aoss:UpdateAccessPolicy"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aoss:collection": [
                        "logs"
                    ]
                }
            }
        }
    ]
}
```

------

**Note**  
In addition, OpenSearch Serverless requires the `aoss:APIAccessAll` and `aoss:DashboardsAccessAll` permissions for collection resources. For more information, see [Using OpenSearch API operations](security-iam-serverless.md#security_iam_id-based-policy-examples-data-plane).

## Policy syntax
<a name="serverless-data-access-syntax"></a>

A data access policy includes a set of rules, each with the following elements:


| Element | Description | 
| --- | --- | 
| ResourceType | The type of resource (collection or index) that the permissions apply to. Alias and template permissions are at the collection level, while permissions for creating, modifying, and searching data are at the index level. For more information, see [Supported policy permissions](#serverless-data-supported-permissions). | 
| Resource | A list of resource names and/or patterns. Patterns are prefixes followed by a wildcard (`*`), which allow the associated permissions to apply to multiple resources. | 
| Permission | A list of permissions to grant for the specified resources. For a complete list of permissions and the API operations they allow, see [Supported OpenSearch API operations and permissions](serverless-genref.md#serverless-operations). | 
| Principal | A list of one or more principals to grant access to. Principals can be IAM role ARNs or SAML identities. These principals must be within the current AWS account. Data access policies don't directly support cross-account access, but you can include a role in your policy that a user from a different AWS account can assume in the collection-owning account. For more information, see [Cross-account data access](#serverless-data-access-cross). | 

The following example policy grants alias and template permissions to the collection called `autopartsinventory`, as well as to any collection whose name begins with the prefix `sales`. It also grants read and write permissions to all indexes within the `autopartsinventory` collection, and to any index in the `salesorders` collection whose name begins with the prefix `orders`.

```
[
   {
      "Description": "Rule 1",
      "Rules":[
         {
            "ResourceType":"collection",
            "Resource":[
               "collection/autopartsinventory",
               "collection/sales*"
            ],
            "Permission":[
               "aoss:CreateCollectionItems",
               "aoss:UpdateCollectionItems",
               "aoss:DescribeCollectionItems"
            ]
         },
         {
            "ResourceType":"index",
            "Resource":[
               "index/autopartsinventory/*",
               "index/salesorders/orders*"
            ],
            "Permission":[
               "aoss:*"
            ]
         }
      ],
      "Principal":[
         "arn:aws:iam::123456789012:user/Dale",
         "arn:aws:iam::123456789012:role/RegulatoryCompliance",
         "saml/123456789012/myprovider/user/Annie",
         "saml/123456789012/anotherprovider/group/Accounting"
      ]
   }
]
```

You can't explicitly deny access within a policy. Therefore, all policy permissions are additive. For example, if one policy grants a user `aoss:ReadDocument`, and another policy grants `aoss:WriteDocument`, the user will have *both* permissions. If a third policy grants the same user `aoss:*`, then the user can perform *all* actions on the associated index; more restrictive permissions don't override less restrictive ones.
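Because permissions are purely additive, evaluation reduces to a set union. The following minimal Python sketch (an illustration of the model, not service code) shows how grants from multiple policies combine:

```python
# Simplified illustration of the additive permission model: the effective
# permission set is the union of grants from every matching policy.
def effective_permissions(policy_grants):
    granted = set()
    for grants in policy_grants:
        granted.update(grants)
    return granted

def is_allowed(granted, action):
    # A wildcard grant allows every action; there is no explicit deny.
    return "aoss:*" in granted or action in granted

# One policy grants read, another grants write; the user holds both.
granted = effective_permissions([
    {"aoss:ReadDocument"},
    {"aoss:WriteDocument"},
])
print(is_allowed(granted, "aoss:ReadDocument"))   # True
print(is_allowed(granted, "aoss:DeleteIndex"))    # False

# A third policy granting aoss:* makes every action allowed.
granted = effective_permissions([granted, {"aoss:*"}])
print(is_allowed(granted, "aoss:DeleteIndex"))    # True
```

Because there is no deny, the only way to reduce a principal's access is to remove or narrow the grants themselves.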

## Supported policy permissions
<a name="serverless-data-supported-permissions"></a>

The following permissions are supported in data access policies. For the OpenSearch API operations that each permission allows, see [Supported OpenSearch API operations and permissions](serverless-genref.md#serverless-operations).

**Collection permissions**
+ `aoss:CreateCollectionItems`
+ `aoss:DeleteCollectionItems`
+ `aoss:UpdateCollectionItems`
+ `aoss:DescribeCollectionItems`
+ `aoss:*`

**Index permissions**
+ `aoss:ReadDocument`
+ `aoss:WriteDocument`
+ `aoss:CreateIndex`
+ `aoss:DeleteIndex`
+ `aoss:UpdateIndex`
+ `aoss:DescribeIndex`
+ `aoss:*`

## Sample datasets on OpenSearch Dashboards
<a name="serverless-data-sample-index"></a>

OpenSearch Dashboards provides [sample datasets](https://opensearch.org/docs/latest/dashboards/quickstart-dashboards/#adding-sample-data) that come with visualizations, dashboards, and other tools to help you explore Dashboards before you add your own data. To create indexes from this sample data, you need a data access policy that provides permissions to the dataset that you want to work with. The following policy uses a wildcard (`*`) to provide permissions to all three sample datasets.

```
[
  {
    "Rules": [
      {
        "Resource": [
          "index/<collection-name>/opensearch_dashboards_sample_data_*"
        ],
        "Permission": [
          "aoss:CreateIndex",
          "aoss:DescribeIndex",
          "aoss:ReadDocument"
        ],
        "ResourceType": "index"
      }
    ],
    "Principal": [
      "arn:aws:iam::<account-id>:user/<user>"
    ]
  }
]
```

## Creating data access policies (console)
<a name="serverless-data-access-console"></a>

You can create a data access policy using the visual editor, or in JSON format. Any new collection that matches one of the patterns defined in the policy is assigned the corresponding permissions when you create it.

**To create an OpenSearch Serverless data access policy**

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home ).

1. In the left navigation pane, expand **Serverless** and under **Security**, choose **Data access policies**.

1. Choose **Create access policy**.

1. Provide a name and description for the policy.

1. Provide a name for the first rule in your policy. For example, "Logs collection access".

1. Choose **Add principals** and select one or more IAM roles or [SAML users and groups](serverless-saml.md) to provide data access to.
**Note**  
To select principals from the dropdown menus, you must have the `iam:ListUsers` and `iam:ListRoles` permissions (for IAM principals) and the `aoss:ListSecurityConfigs` permission (for SAML identities).

1. Choose **Grant** and select the alias, template, and index permissions to grant the associated principals. For a full list of permissions and the access they allow, see [Supported OpenSearch API operations and permissions](serverless-genref.md#serverless-operations).

1. (Optional) Configure additional rules for the policy.

1. Choose **Create**. There might be about a minute of lag time between when you create the policy and when the permissions are enforced. If it takes more than 5 minutes, contact [Support](https://console.aws.amazon.com/support/home).

**Important**  
If your policy only includes index permissions (and no collection permissions), you might still see a message for matching collections stating `Collection cannot be accessed yet. Configure data access policies so that users can access the data within this collection`. You can ignore this warning. Allowed principals can still perform their assigned index-related operations on the collection.

## Creating data access policies (AWS CLI)
<a name="serverless-data-access-cli"></a>

To create a data access policy using the OpenSearch Serverless API, use the `CreateAccessPolicy` command. The command accepts both inline policies and .json files. Inline policies must be encoded as a [JSON escaped string](https://www.freeformatter.com/json-escape.html).

The following request creates a data access policy:

```
aws opensearchserverless create-access-policy \
    --name marketing \
    --type data \
    --policy "[{\"Rules\":[{\"ResourceType\":\"collection\",\"Resource\":[\"collection/autopartsinventory\",\"collection/sales*\"],\"Permission\":[\"aoss:UpdateCollectionItems\"]},{\"ResourceType\":\"index\",\"Resource\":[\"index/autopartsinventory/*\",\"index/salesorders/orders*\"],\"Permission\":[\"aoss:ReadDocument\",\"aoss:DescribeIndex\"]}],\"Principal\":[\"arn:aws:iam::123456789012:user/Shaheen\"]}]"
```

To provide the policy within a .json file, use the format `--policy file://my-policy.json`.
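One way to avoid escaping mistakes is to build the policy in code and let a JSON serializer produce the string. A minimal Python sketch (the collection, index, and principal values are placeholders); the resulting `policy_document` string is what you would pass as the `--policy` value:

```python
import json

# Build the data access policy as plain Python structures instead of
# hand-escaping a string. Resource names and the ARN are placeholders.
policy = [
    {
        "Rules": [
            {
                "ResourceType": "index",
                "Resource": ["index/autopartsinventory/*"],
                "Permission": ["aoss:ReadDocument", "aoss:DescribeIndex"],
            }
        ],
        "Principal": ["arn:aws:iam::123456789012:user/Shaheen"],
    }
]

# json.dumps yields the policy document string; it handles all quoting,
# so no manual escaping is needed.
policy_document = json.dumps(policy)

# Round-trip check: the serialized string parses back to the original.
assert json.loads(policy_document) == policy
print(policy_document)
```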

The principals included in the policy can now use the [OpenSearch operations](#serverless-data-supported-permissions) that they were granted access to.

## Viewing data access policies
<a name="serverless-data-access-list"></a>

Before you create a collection, you might want to preview the existing data access policies in your account to see which one has a resource pattern that matches your collection's name. The following [ListAccessPolicies](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_ListAccessPolicies.html) request lists all data access policies in your account:

```
aws opensearchserverless list-access-policies --type data
```

The request returns information about all configured data access policies. To view the pattern rules defined in a specific policy, find that policy in the `accessPolicySummaries` element of the response. Note the policy's `name` and `type`, and use these properties in a [GetAccessPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_GetAccessPolicy.html) request to receive a response with the following policy details: 

```
{
    "accessPolicyDetails": [
        {
            "type": "data",
            "name": "my-policy",
            "policyVersion": "MTY2NDA1NDE4MDg1OF8x",
            "description": "My policy",
            "policy": "[{\"Rules\":[{\"ResourceType\":\"collection\",\"Resource\":[\"collection/autopartsinventory\",\"collection/sales*\"],\"Permission\":[\"aoss:UpdateCollectionItems\"]},{\"ResourceType\":\"index\",\"Resource\":[\"index/autopartsinventory/*\",\"index/salesorders/orders*\"],\"Permission\":[\"aoss:ReadDocument\",\"aoss:DescribeIndex\"]}],\"Principal\":[\"arn:aws:iam::123456789012:user/Shaheen\"]}]",
            "createdDate": 1664054180858,
            "lastModifiedDate": 1664054180858
        }
    ]
}
```
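When scanning such a response programmatically, note that the `policy` field is itself a JSON-encoded string and must be parsed separately. The following sketch (using Python's `fnmatch` to approximate the prefix-wildcard matching; the sample detail is a trimmed stand-in for the response above) filters policy details for ones whose rules cover a given index:

```python
import fnmatch
import json

def policy_covers(policy_detail, resource):
    """Return True if any rule in the policy's document covers the resource.

    The 'policy' field in the API response is a JSON string, so it must
    be parsed before the rules can be inspected.
    """
    for statement in json.loads(policy_detail["policy"]):
        for rule in statement.get("Rules", []):
            if any(fnmatch.fnmatch(resource, pattern)
                   for pattern in rule.get("Resource", [])):
                return True
    return False

# Trimmed stand-in for the accessPolicyDetails element shown above.
details = [{
    "type": "data",
    "name": "my-policy",
    "policy": json.dumps([{
        "Rules": [{
            "ResourceType": "index",
            "Resource": ["index/autopartsinventory/*",
                         "index/salesorders/orders*"],
            "Permission": ["aoss:ReadDocument", "aoss:DescribeIndex"],
        }],
        "Principal": ["arn:aws:iam::123456789012:user/Shaheen"],
    }]),
}]

matching = [d["name"] for d in details
            if policy_covers(d, "index/salesorders/orders2023")]
print(matching)  # ['my-policy']
```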

You can include resource filters to limit the results to policies that contain specific collections or indexes:

```
aws opensearchserverless list-access-policies --type data --resource "index/autopartsinventory/*"
```

To view details about a specific policy, use the [GetAccessPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_GetAccessPolicy.html) command.

## Updating data access policies
<a name="serverless-data-access-update"></a>

When you update a data access policy, all associated collections are impacted. To update a data access policy in the OpenSearch Serverless console, choose **Data access control**, select the policy to modify, and choose **Edit**. Make your changes and choose **Save**.

To update a data access policy using the OpenSearch Serverless API, send an `UpdateAccessPolicy` request. You must include a policy version, which you can retrieve using the `ListAccessPolicies` or `GetAccessPolicy` commands. Including the most recent policy version ensures that you don't inadvertently override a change made by someone else.
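The version requirement acts like optimistic concurrency control: an update is rejected if the supplied version no longer matches the stored one. A simplified local model of the check (an illustration, not the service implementation):

```python
class VersionConflictError(Exception):
    """Raised when the supplied policy version is stale."""

class PolicyStore:
    """Toy model of the optimistic-concurrency check on policy updates."""

    def __init__(self, policy, version):
        self.policy = policy
        self.version = version

    def update(self, new_policy, expected_version):
        if expected_version != self.version:
            raise VersionConflictError("stale version; re-read and retry")
        self.policy = new_policy
        self.version = self.version + "-next"  # stand-in for a fresh token
        return self.version

store = PolicyStore(policy="old-rules", version="MTY2NDA1NDE4MDg1OF8x")

# Succeeds: the supplied version matches the stored one.
store.update("new-rules", "MTY2NDA1NDE4MDg1OF8x")

# Rejected: a second update with the original, now-stale version.
try:
    store.update("other-rules", "MTY2NDA1NDE4MDg1OF8x")
except VersionConflictError:
    print("update rejected")
```

On a conflict, the caller re-reads the policy with `GetAccessPolicy` to obtain the current version and retries.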

The following [UpdateAccessPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_UpdateAccessPolicy.html) request updates a data access policy with a new policy JSON document:

```
aws opensearchserverless update-access-policy \
    --name sales-inventory \
    --type data \
    --policy-version MTY2NDA1NDE4MDg1OF8x \
    --policy file://my-new-policy.json
```

There might be a few minutes of lag time between when you update the policy and when the new permissions are enforced.

## Deleting data access policies
<a name="serverless-data-access-delete"></a>

When you delete a data access policy, all associated collections lose the access that is defined in the policy. Make sure that your IAM and SAML users have the appropriate access to the collection before you delete a policy. To delete a policy in the OpenSearch Serverless console, select the policy and choose **Delete**.

You can also use the [DeleteAccessPolicy](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_DeleteAccessPolicy.html) command:

```
aws opensearchserverless delete-access-policy --name my-policy --type data
```

## Cross-account data access
<a name="serverless-data-access-cross"></a>

While you can't create a data access policy with cross-account identity or cross-account collections, you can still set up cross-account access with the assume role option. For example, if `account-a` owns a collection that `account-b` needs access to, the user from `account-b` can assume a role in `account-a`. The role must have the IAM permissions `aoss:APIAccessAll` and `aoss:DashboardsAccessAll`, and be included in the data access policy on `account-a`.
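As an illustration, the role in `account-a` might carry a trust policy like the following so that principals in `account-b` can assume it (the account ID is a placeholder; the role also needs the two `aoss` IAM permissions above and an entry in `account-a`'s data access policy):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<account-b-id>:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```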

# Data plane access through AWS PrivateLink
<a name="serverless-vpc"></a>

Amazon OpenSearch Serverless supports two types of AWS PrivateLink connections for control plane and data plane operations. Control plane operations include the creation and deletion of collections and the management of access policies. Data plane operations are for indexing and querying data within a collection. This page covers data plane VPC endpoints. For information about control plane AWS PrivateLink endpoints, see [Control plane access through AWS PrivateLink](serverless-vpc-cp.md).

You can use AWS PrivateLink to create a private connection between your VPC and Amazon OpenSearch Serverless. You can access OpenSearch Serverless as if it were in your VPC, without the use of an internet gateway, NAT device, VPN connection, or Direct Connect connection. Instances in your VPC don't need public IP addresses to access OpenSearch Serverless. For more information on VPC network access, see [Network connectivity patterns for Amazon OpenSearch Serverless](https://aws.amazon.com/blogs/big-data/network-connectivity-patterns-for-amazon-opensearch-serverless/).

You establish this private connection by creating an *interface endpoint*, powered by AWS PrivateLink. We create an endpoint network interface in each subnet that you specify for the interface endpoint. These are requester-managed network interfaces that serve as the entry point for traffic destined for OpenSearch Serverless.

For more information, see [Access AWS services through AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/privatelink-access-aws-services.html) in the *AWS PrivateLink Guide*.

**Topics**
+ [DNS resolution of collection endpoints](#vpc-endpoint-dnc)
+ [VPCs and network access policies](#vpc-endpoint-network)
+ [VPCs and endpoint policies](#vpc-endpoint-policy)
+ [Considerations](#vpc-endpoint-considerations)
+ [Permissions required](#serverless-vpc-permissions)
+ [Create an interface endpoint for OpenSearch Serverless](#serverless-vpc-create)
+ [Shared VPC setup for Amazon OpenSearch Serverless](#shared-vpc-setup)

## DNS resolution of collection endpoints
<a name="vpc-endpoint-dnc"></a>

When you create a data plane VPC endpoint through the OpenSearch Serverless console, the service creates a new Amazon Route 53 [private hosted zone](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html) and attaches it to the VPC. This private hosted zone contains a record that resolves the wildcard DNS name for OpenSearch Serverless collections (`*.us-east-1.aoss.amazonaws.com`) to the interface addresses used for the endpoint. You only need one OpenSearch Serverless VPC endpoint in a VPC to access all collections and Dashboards in that AWS Region. Every VPC with an endpoint for OpenSearch Serverless has its own private hosted zone attached.

The OpenSearch Serverless interface endpoint also creates a public Route 53 wildcard DNS record for all collections in the Region. The DNS name resolves to the OpenSearch Serverless public IP addresses. Clients in VPCs that don't have an OpenSearch Serverless VPC endpoint, as well as clients in public networks, can use the public Route 53 resolver and access the collections and Dashboards with those IP addresses. The IP address type (IPv4, IPv6, or Dualstack) of the VPC endpoint is determined by the subnets that you provide when you [create an interface endpoint for OpenSearch Serverless](#serverless-vpc-create).

**Note**  
OpenSearch Serverless creates an additional Amazon Route 53 private hosted zone (`<region>.opensearch.amazonaws.com`) for OpenSearch Service domain resolution. You can update your existing IPv4 VPC endpoint to Dualstack by using the [update-vpc-endpoint](https://docs.aws.amazon.com/cli/latest/reference/opensearchserverless/update-vpc-endpoint.html) command in the AWS CLI.

The DNS resolver address for a given VPC is the second IP address of the VPC CIDR. Any client in the VPC must use that resolver to get the VPC endpoint address for a collection. The resolver uses the private hosted zone that OpenSearch Serverless creates, and that one resolver is sufficient for all collections in any account. It's also possible to use the VPC resolver for some collection endpoints and the public resolver for others, although this isn't typically necessary.
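For example, you can compute that resolver address with Python's `ipaddress` module (a convenience sketch based on the plus-two rule above):

```python
import ipaddress

def vpc_dns_resolver(vpc_cidr):
    """Return the VPC's DNS resolver: the second IP address of its CIDR."""
    network = ipaddress.ip_network(vpc_cidr)
    return str(network.network_address + 2)

print(vpc_dns_resolver("10.0.0.0/16"))    # 10.0.0.2
print(vpc_dns_resolver("172.31.0.0/20"))  # 172.31.0.2
```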

## VPCs and network access policies
<a name="vpc-endpoint-network"></a>

To grant network permission to OpenSearch APIs and Dashboards for your collections, you can use OpenSearch Serverless [network access policies](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-network.html). You can control this network access either from your VPC endpoint(s) or the public internet. Since your network policy only controls traffic permissions, you must also set up a [data access policy](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-data-access.html) that specifies permission to operate on the data in a collection and its indices. Think of an OpenSearch Serverless VPC endpoint as an access point to the service, a network access policy as the network-level access point to collections and Dashboards, and a data access policy as the access point for fine-grained access control for any operation on data in the collection. 

Because you can specify multiple VPC endpoint IDs in a network policy, we recommend that you create a VPC endpoint for every VPC that needs to access a collection. These VPCs can belong to different AWS accounts than the account that owns the OpenSearch Serverless collection and network policy. We don't recommend creating a VPC-to-VPC peering or other proxying solution between two accounts so that one account's VPC can use another account's VPC endpoint. That approach is less secure and less cost-effective than giving each VPC its own endpoint, and the first VPC isn't easily visible to the admin of the other VPC, who set up access to that VPC's endpoint in the network policy.
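As a sketch of how these pieces fit together, a network policy that restricts both collection and Dashboards traffic to two VPC endpoints might look like the following (the collection name and endpoint IDs are placeholders):

```
[
   {
      "Rules":[
         {
            "ResourceType":"collection",
            "Resource":["collection/logs"]
         },
         {
            "ResourceType":"dashboard",
            "Resource":["collection/logs"]
         }
      ],
      "AllowFromPublic":false,
      "SourceVPCEs":[
         "vpce-abc123def4EXAMPLE",
         "vpce-987zyx654wEXAMPLE"
      ]
   }
]
```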

## VPCs and endpoint policies
<a name="vpc-endpoint-policy"></a>

 Amazon OpenSearch Serverless supports endpoint policies for VPCs. An endpoint policy is an IAM resource-based policy that you attach to a VPC endpoint to control which AWS principals can use the endpoint to access your AWS service. For more information, see [Control access to VPC endpoints using endpoint policies](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html). 

To use an endpoint policy, you must first create an interface endpoint. You can create an interface endpoint using either the OpenSearch Serverless console or the OpenSearch Serverless API. After you create your interface endpoint, you will need to add the endpoint policy to the endpoint. For more information, see [Create an interface endpoint for OpenSearch Serverless](#serverless-vpc-create).

**Note**  
You can't define an endpoint policy directly in the OpenSearch Service console. 

An endpoint policy does not override or replace other identity-based policies, resource-based policies, network policies, or data access policies you may have configured. For more information on updating endpoint policies, see [Control access to VPC endpoints using endpoint policies](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html).

By default, an endpoint policy grants full access to your VPC endpoint. 

```
{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "*",
            "Resource": "*"
        }
    ]
}
```

Although the default VPC endpoint policy grants full endpoint access, you can configure a VPC endpoint policy to allow access only to specific principals, such as AWS accounts, roles, and users. To do this, see the following example:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "123456789012",
                    "987654321098"
                ]
            },
            "Action": "*",
            "Resource": "*"
        }
    ]
}
```

------

You can specify an OpenSearch Serverless collection to be included as a conditional element in your VPC endpoint policy. To do this, see the following example:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aoss:collection": [
                        "coll-abc"
                    ]
                }
            }
        }
    ]
}
```

------

The `aoss:CollectionId` condition key is also supported:

```
"Condition": {
    "StringEquals": {
        "aoss:CollectionId": "collection-id"
    }
}
```

You can use SAML identities in your VPC endpoint policy to determine VPC endpoint access. To do so, you must use a wildcard (`*`) in the `Principal` element of the policy, as in the following example:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "saml:cn": [
                        "saml/111122223333/idp123/group/football",
                        "saml/111122223333/idp123/group/soccer",
                        "saml/111122223333/idp123/group/cricket"
                    ]
                }
            }
        }
    ]
}
```

------

Additionally, you can configure your endpoint policy to allow access based on principal tags, as in the following example:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalTag/Department": [
                        "Engineering"
                    ]
                }
            }
        }
    ]
}
```

------

For more information on using SAML authentication with Amazon OpenSearch Serverless, see [SAML authentication for Amazon OpenSearch Serverless](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-saml.html).

You can also include IAM and SAML users in the same VPC endpoint policy. To do this, see the following example:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "saml:cn": [
                        "saml/111122223333/idp123/group/football",
                        "saml/111122223333/idp123/group/soccer",
                        "saml/111122223333/idp123/group/cricket"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "111122223333"
                ]
            },
            "Action": "*",
            "Resource": "*"
        }
    ]
}
```

------

You can also access an Amazon OpenSearch Serverless collection from Amazon EC2 through interface VPC endpoints. For more information, see [Access an OpenSearch Serverless collection from Amazon EC2 (via interface VPC endpoints)](https://aws.amazon.com/blogs/big-data/network-connectivity-patterns-for-amazon-opensearch-serverless/).

## Considerations
<a name="vpc-endpoint-considerations"></a>

Before you set up an interface endpoint for OpenSearch Serverless, consider the following:
+ OpenSearch Serverless supports making calls to all supported [OpenSearch API operations](serverless-genref.md#serverless-operations) (not configuration API operations) through the interface endpoint.
+ After you create an interface endpoint for OpenSearch Serverless, you still need to include it in [network access policies](serverless-network.md) in order for it to access serverless collections.
+ By default, full access to OpenSearch Serverless is allowed through the interface endpoint. You can associate a security group with the endpoint network interfaces to control traffic to OpenSearch Serverless through the interface endpoint.
+ A single AWS account can have a maximum of 50 OpenSearch Serverless VPC endpoints.
+ If you enable public internet access to your collection's API or Dashboards in a network policy, your collection is accessible by any VPC and by the public internet.
+ If you're on premises, outside of the VPC, you can't directly use the DNS resolver for OpenSearch Serverless VPC endpoint resolution. If you need VPN access, the VPC needs a DNS proxy resolver that external clients can use. Route 53 provides an inbound endpoint option that you can use to resolve DNS queries to your VPC from your on-premises network or another VPC.
+ The private hosted zone that OpenSearch Serverless creates and attaches to the VPC is managed by the service, but it shows up in your Amazon Route 53 resources and is billed to your account.
+ For other considerations, see [Considerations](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#considerations-interface-endpoints) in the *AWS PrivateLink Guide*.

## Permissions required
<a name="serverless-vpc-permissions"></a>

VPC access for OpenSearch Serverless uses the following AWS Identity and Access Management (IAM) permissions. You can specify IAM conditions to restrict users to specific collections.
+ `aoss:CreateVpcEndpoint` – Create a VPC endpoint.
+ `aoss:ListVpcEndpoints` – List all VPC endpoints.
+ `aoss:BatchGetVpcEndpoint` – See details about a subset of VPC endpoints.
+ `aoss:UpdateVpcEndpoint` – Modify a VPC endpoint.
+ `aoss:DeleteVpcEndpoint` – Delete a VPC endpoint.

In addition, you need the following Amazon EC2 and Route 53 permissions in order to create a VPC endpoint.
+ `ec2:CreateTags`
+ `ec2:CreateVpcEndpoint`
+ `ec2:DeleteVpcEndPoints`
+ `ec2:DescribeSecurityGroups`
+ `ec2:DescribeSubnets`
+ `ec2:DescribeVpcEndpoints`
+ `ec2:DescribeVpcs`
+ `ec2:ModifyVpcEndPoint`
+ `route53:AssociateVPCWithHostedZone`
+ `route53:ChangeResourceRecordSets`
+ `route53:CreateHostedZone`
+ `route53:DeleteHostedZone`
+ `route53:GetChange`
+ `route53:GetHostedZone`
+ `route53:ListHostedZonesByName`
+ `route53:ListHostedZonesByVPC`
+ `route53:ListResourceRecordSets`
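For convenience, these permissions might be combined into a single identity-based policy like the following sketch (it lists exactly the permissions above; scope `Resource` down further where possible in production):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "aoss:CreateVpcEndpoint",
                "aoss:ListVpcEndpoints",
                "aoss:BatchGetVpcEndpoint",
                "aoss:UpdateVpcEndpoint",
                "aoss:DeleteVpcEndpoint",
                "ec2:CreateTags",
                "ec2:CreateVpcEndpoint",
                "ec2:DeleteVpcEndPoints",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcEndpoints",
                "ec2:DescribeVpcs",
                "ec2:ModifyVpcEndPoint",
                "route53:AssociateVPCWithHostedZone",
                "route53:ChangeResourceRecordSets",
                "route53:CreateHostedZone",
                "route53:DeleteHostedZone",
                "route53:GetChange",
                "route53:GetHostedZone",
                "route53:ListHostedZonesByName",
                "route53:ListHostedZonesByVPC",
                "route53:ListResourceRecordSets"
            ],
            "Resource": "*"
        }
    ]
}
```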

## Create an interface endpoint for OpenSearch Serverless
<a name="serverless-vpc-create"></a>

You can create an interface endpoint for OpenSearch Serverless using either the console or the OpenSearch Serverless API. 

**To create an interface endpoint for an OpenSearch Serverless collection**

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. In the left navigation pane, expand **Serverless** and choose **VPC endpoints**.

1. Choose **Create VPC endpoint**.

1. Provide a name for the endpoint.

1. For **VPC**, select the VPC that you'll access OpenSearch Serverless from.

1. For **Subnets**, select one or more subnets that you'll access OpenSearch Serverless from. The endpoint's IP address and DNS type depend on the subnet types:
   + Dualstack: if all subnets have both IPv4 and IPv6 address ranges
   + IPv6: if all subnets are IPv6-only
   + IPv4: if all subnets have only IPv4 address ranges

1. For **Security groups**, select the security groups to associate with the endpoint network interfaces. This is a critical step where you limit the ports, protocols, and sources for inbound traffic that you're authorizing into your endpoint. Make sure that the security group rules allow the resources that will use the VPC endpoint to communicate with the endpoint network interface.

1. Choose **Create endpoint**.

To create a VPC endpoint using the OpenSearch Serverless API, use the `CreateVpcEndpoint` command.

**Note**  
After you create an endpoint, note its ID (for example, `vpce-abc123def4EXAMPLE`). To provide the endpoint access to your collections, you must include this ID in one or more network access policies.

After you create an interface endpoint, you must provide it access to collections through network access policies. For more information, see [Network access for Amazon OpenSearch Serverless](serverless-network.md).

## Shared VPC setup for Amazon OpenSearch Serverless
<a name="shared-vpc-setup"></a>

You can use Amazon Virtual Private Cloud (VPC) to share VPC subnets with other AWS accounts in your organization, as well as share networking infrastructure such as a VPN between resources in multiple AWS accounts. 

Currently, Amazon OpenSearch Serverless doesn't support creating an AWS PrivateLink connection into a shared VPC unless you are an owner of that VPC. AWS PrivateLink also doesn't support sharing connections between AWS accounts. 

However, because of the flexible and modular architecture of OpenSearch Serverless, you can still set up a shared VPC. The OpenSearch Serverless networking infrastructure is separate from the infrastructure of individual collections, so you can create an AWS PrivateLink interface endpoint in the account where the VPC is located, and then use that endpoint's ID in the network policies of other accounts to restrict traffic so that it comes only from the shared VPC.

The following procedures refer to an *owner account* and a *consumer account*.

An owner account acts as a common networking account where you set up a VPC and share it with other accounts. Consumer accounts are those accounts that create and maintain their OpenSearch Serverless collections in the VPC shared with them by the owner account. 

**Prerequisites**  
Ensure the following requirements are met before setting up the shared VPC:
+ The intended owner account must have already set up a VPC, subnets, route table, and other required resources in Amazon Virtual Private Cloud. For more information, see the *[Amazon VPC User Guide](https://docs.aws.amazon.com/vpc/latest/userguide/)*.
+ The intended owner account and consumer accounts must belong to the same organization in AWS Organizations. For more information, see the *[AWS Organizations User Guide](https://docs.aws.amazon.com/organizations/latest/userguide/)*.

**To set up a shared VPC in an owner account (common networking account)**

1. Sign in to the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. Follow the steps in [Create an interface endpoint for OpenSearch Serverless](#serverless-vpc-create). As you do, select a VPC and subnets that are shared with the consumer accounts in your organization.

1. After you create the endpoint, make a note of the VPCe ID that is generated and provide it to the administrators who are to perform the setup task in consumer accounts.

   VPCe IDs are in the format `vpce-abc123def4EXAMPLE`.

**To set up a shared VPC in a consumer account**

1. Sign in to the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. Use the information in [Managing Amazon OpenSearch Serverless collections](serverless-manage.md) to create a collection, if you don't have one already.

1. Use the information in [Creating network policies (console)](serverless-network.md#serverless-network-console) to create a network policy. As you do, make the following selections.
**Note**  
You can also update an existing network policy for this purpose.

   1. For **Access type**, select **VPC (recommended)**.

   1. For **VPC endpoints for access**, choose the VPCe ID provided to you by the owner account, in the format `vpce-abc123def4EXAMPLE`.

   1. In the **Resource type** area, do the following:
      + Select the **Enable access to OpenSearch endpoint** box, and then select the collection name or collection pattern to use to enable access from that shared VPC.
      + Select the **Enable access to OpenSearch Dashboards** box, and then select the collection name or collection pattern to use to enable access from that shared VPC.

1. For a new policy, choose **Create**. For an existing policy, choose **Update**.
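For reference, a VPC-based network policy like the one created in these steps looks similar to the following JSON. The collection name and VPCe ID are placeholders:

```
[
   {
      "Description": "Allow access from the shared VPC",
      "Rules": [
         {
            "ResourceType": "collection",
            "Resource": ["collection/my-collection"]
         },
         {
            "ResourceType": "dashboard",
            "Resource": ["collection/my-collection"]
         }
      ],
      "SourceVPCEs": ["vpce-abc123def4EXAMPLE"]
   }
]
```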

# Control plane access through AWS PrivateLink
<a name="serverless-vpc-cp"></a>

Amazon OpenSearch Serverless supports two types of AWS PrivateLink connections for control plane and data plane operations. Control plane operations include the creation and deletion of collections and the management of access policies. Data plane operations are for indexing and querying data within a collection. This page covers the control plane AWS PrivateLink endpoint. For information about data plane VPC endpoints, see [Data plane access through AWS PrivateLink](serverless-vpc.md).

## Creating a control plane AWS PrivateLink endpoint
<a name="serverless-vpc-privatelink"></a>

You can improve the security posture of your VPC by configuring OpenSearch Serverless to use an interface VPC endpoint. Interface endpoints are powered by AWS PrivateLink. This technology enables you to privately access OpenSearch Serverless APIs without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

For more information about AWS PrivateLink and VPC endpoints, see [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html#concepts-vpc-endpoints) in the Amazon VPC User Guide.

### Considerations
<a name="serverless-vpc-cp-considerations"></a>
+ VPC endpoints are supported within the same Region only.
+ VPC endpoints only support Amazon-provided DNS through Amazon Route 53.
+ VPC endpoints support endpoint policies to control access to OpenSearch Serverless collections, policies, and VPC endpoints.
+ OpenSearch Serverless supports interface endpoints only. Gateway endpoints are not supported.

### Creating the VPC endpoint
<a name="serverless-vpc-cp-create"></a>

To create the control plane VPC endpoint for Amazon OpenSearch Serverless, use the [Access an AWS service using an interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#create-interface-endpoint) procedure in the *AWS PrivateLink Guide*. Create the following endpoint:
+ `com.amazonaws.region.aoss`

**To create a control plane VPC endpoint using the console**

1. Open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the navigation pane, choose **Endpoints**.

1. Choose **Create Endpoint**.

1. For **Service category**, choose **AWS services**.

1. For **Services**, choose `com.amazonaws.region.aoss`. For example, `com.amazonaws.us-east-1.aoss`.

1. For **VPC**, choose the VPC in which to create the endpoint.

1. For **Subnets**, choose the subnets (Availability Zones) in which to create the endpoint network interfaces.

1. For **Security groups**, choose the security groups to associate with the endpoint network interfaces. Ensure HTTPS (port 443) is allowed.

1. For **Policy**, choose **Full access** to allow all operations, or choose **Custom** to attach a custom policy.

1. Choose **Create endpoint**.
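Alternatively, you can create the same control plane endpoint with the AWS CLI. The VPC, subnet, and security group IDs below are placeholders to replace with your own values:

```
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.aoss \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0
```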

### Creating an endpoint policy
<a name="serverless-vpc-cp-endpoint-policy"></a>

You can attach an endpoint policy to your VPC endpoint that controls access to Amazon OpenSearch Serverless. The policy specifies the following information:
+ The principal that can perform actions.
+ The actions that can be performed.
+ The resources on which actions can be performed.

For more information, see [Controlling access to services with VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html) in the *Amazon VPC User Guide*.

**Example VPC endpoint policy for OpenSearch Serverless**  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "aoss:ListCollections",
        "aoss:BatchGetCollection"
      ],
      "Resource": "*"
    }
  ]
}
```

**Example Restrictive policy allowing only list operations**  

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "aoss:ListCollections",
      "Resource": "*"
    }
  ]
}
```

# SAML authentication for Amazon OpenSearch Serverless
<a name="serverless-saml"></a>

With SAML authentication for Amazon OpenSearch Serverless, you can use your existing identity provider to offer single sign-on (SSO) for the OpenSearch Dashboards endpoints of serverless collections.

SAML authentication lets you use third-party identity providers to sign in to OpenSearch Dashboards to index and search data. OpenSearch Serverless supports providers that use the SAML 2.0 standard, such as IAM Identity Center, Okta, Keycloak, Active Directory Federation Services (AD FS), and Auth0. You can configure IAM Identity Center to synchronize users and groups from other identity sources like Okta, OneLogin, and Microsoft Entra ID. For a list of identity sources supported by IAM Identity Center and steps to configure them, see [Getting started tutorials](https://docs.aws.amazon.com/singlesignon/latest/userguide/tutorials.html) in the *IAM Identity Center User Guide*.

**Note**  
SAML authentication is only for accessing OpenSearch Dashboards through a web browser. Authenticated users can only make requests to the OpenSearch API operations through **Dev Tools** in OpenSearch Dashboards. Your SAML credentials do *not* let you make direct HTTP requests to the OpenSearch API operations.

To set up SAML authentication, you first configure a SAML identity provider (IdP). You then include one or more users from that IdP in a [data access policy](serverless-data-access.md), which grants them certain permissions to collections and/or indexes. A user can then sign in to OpenSearch Dashboards and perform the actions that are allowed in the data access policy.

![\[SAML authentication flow with data access policy, OpenSearch interface, and JSON configuration.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/serverless-saml-flow.png)


**Topics**
+ [Considerations](#serverless-saml-considerations)
+ [Permissions required](#serverless-saml-permissions)
+ [Creating SAML providers (console)](#serverless-saml-creating)
+ [Accessing OpenSearch Dashboards](#serverless-saml-dashboards)
+ [Granting SAML identities access to collection data](#serverless-saml-policies)
+ [Creating SAML providers (AWS CLI)](#serverless-saml-creating-api)
+ [Viewing SAML providers](#serverless-saml-viewing)
+ [Updating SAML providers](#serverless-saml-updating)
+ [Deleting SAML providers](#serverless-saml-deleting)

## Considerations
<a name="serverless-saml-considerations"></a>

Consider the following when configuring SAML authentication:
+ Signed and encrypted requests are not supported.
+ Encrypted assertions are not supported.
+ IdP-initiated authentication and sign-out are not supported.
+ Service control policies (SCPs) aren't applicable or evaluated for non-IAM identities, such as SAML identities in Amazon OpenSearch Serverless, or SAML and basic internal user authentication in Amazon OpenSearch Service.

## Permissions required
<a name="serverless-saml-permissions"></a>

SAML authentication for OpenSearch Serverless uses the following AWS Identity and Access Management (IAM) permissions:
+ `aoss:CreateSecurityConfig` – Create a SAML provider.
+ `aoss:ListSecurityConfigs` – List all SAML providers in the current account.
+ `aoss:GetSecurityConfig` – View SAML provider information.
+ `aoss:UpdateSecurityConfig` – Modify a given SAML provider configuration, including the XML metadata.
+ `aoss:DeleteSecurityConfig` – Delete a SAML provider.

The following identity-based access policy allows a user to manage all IdP configurations:

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "aoss:CreateSecurityConfig",
                "aoss:DeleteSecurityConfig",
                "aoss:GetSecurityConfig",
                "aoss:UpdateSecurityConfig",
                "aoss:ListSecurityConfigs"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
```

------

Note that the `Resource` element must be a wildcard.

## Creating SAML providers (console)
<a name="serverless-saml-creating"></a>

These steps explain how to create a SAML provider. Doing so enables service provider (SP)-initiated SAML authentication for OpenSearch Dashboards. IdP-initiated authentication is not supported.

**To enable SAML authentication for OpenSearch Dashboards**

1. Sign in to the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. On the left navigation panel, expand **Serverless** and choose **SAML authentication**.

1. Choose **Add SAML provider**.

1. Provide a name and description for the provider.
**Note**  
The name that you specify is publicly accessible and will appear in a dropdown menu when users sign in to OpenSearch Dashboards. Make sure that the name is easily recognizable and doesn't reveal sensitive information about your identity provider.

1. Under **Configure your IdP**, copy the assertion consumer service (ACS) URL.

1. Use the ACS URL that you just copied to configure your identity provider. Terminology and steps vary by provider. Consult your provider's documentation.

   In Okta, for example, you create a "SAML 2.0 web application" and specify the ACS URL as the **Single Sign On URL**, **Recipient URL**, and **Destination URL**. For Auth0, you specify it in **Allowed Callback URLs**.

1. Provide the audience restriction if your IdP has a field for it. The audience restriction is a value within the SAML assertion that specifies who the assertion is intended for. With OpenSearch Serverless, you can do the following. Make sure to replace *111122223333* in the following examples with your own AWS account ID: 

   1. Use the default audience restriction `aws:opensearch:111122223333`.

   1. (Optional) Configure a custom audience restriction using the AWS CLI. For more information, see [Creating SAML providers (AWS CLI)](#serverless-saml-creating-api).

   The name of the audience restriction field varies by provider. For Okta it's **Audience URI (SP Entity ID)**. For IAM Identity Center it's **Application SAML audience**.

1. If you're using IAM Identity Center, you also need to specify the following [attribute mapping](https://docs.aws.amazon.com/singlesignon/latest/userguide/attributemappingsconcept.html): `Subject=${user:name}`, with a format of `unspecified`.

1. After you configure your identity provider, it generates an IdP metadata file. This XML file contains information about the provider, such as a TLS certificate, single sign-on endpoints, and the identity provider's entity ID.

   Copy the text in the IdP metadata file and paste it into the **Provide metadata from your IdP** field. Alternatively, choose **Import from XML file** and upload the file. The metadata file should look something like this:

   ```
   <?xml version="1.0" encoding="UTF-8"?>
   <md:EntityDescriptor entityID="entity-id" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
     <md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
       <md:KeyDescriptor use="signing">
         <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
           <ds:X509Data>
             <ds:X509Certificate>tls-certificate</ds:X509Certificate>
           </ds:X509Data>
         </ds:KeyInfo>
       </md:KeyDescriptor>
       <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified</md:NameIDFormat>
       <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
       <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="idp-sso-url"/>
       <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="idp-sso-url"/>
     </md:IDPSSODescriptor>
   </md:EntityDescriptor>
   ```

1. Keep the **Custom user ID attribute** field empty to use the `NameID` element of the SAML assertion for the username. If your assertion doesn't use this standard element and instead includes the username as a custom attribute, specify that attribute here. Attributes are case-sensitive. Only a single user attribute is supported.

   The following example shows an override attribute for `NameID` in the SAML assertion:

   ```
   <saml2:Attribute Name="UserId" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
     <saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" 
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
     xsi:type="xs:string">annie</saml2:AttributeValue>
   </saml2:Attribute>
   ```

1. (Optional) Specify a custom attribute in the **Group attribute** field, such as `role` or `group`. Only a single group attribute is supported. There's no default group attribute. If you don't specify one, your data access policies can only contain user principals.

   The following example shows a group attribute in the SAML assertion:

   ```
   <saml2:Attribute Name="department" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
       <saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema" 
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
       xsi:type="xs:string">finance</saml2:AttributeValue>
   </saml2:Attribute>
   ```

1. By default, OpenSearch Dashboards signs users out after 24 hours. You can configure this value to any number between 1 and 12 hours (60 and 720 minutes) by specifying the **OpenSearch Dashboards timeout**. If you try to set the timeout equal to or less than 15 minutes, your session will be reset to one hour.

1. Choose **Create SAML provider**.

## Accessing OpenSearch Dashboards
<a name="serverless-saml-dashboards"></a>

After you configure a SAML provider, all users and groups associated with that provider can navigate to the OpenSearch Dashboards endpoint. For every collection, the Dashboards URL has the format `collection-endpoint/_dashboards/`. 

If you have SAML enabled, selecting the link in the AWS Management Console directs you to the IdP selection page, where you can sign in using your SAML credentials. First, use the dropdown to select an identity provider:

![\[OpenSearch login page with dropdown menu for selecting SAML Identity Provider options.\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/images/idpList.png)


Then sign in using your IdP credentials. 

If you don't have SAML enabled, selecting the link in the AWS Management Console directs you to log in as an IAM user or role, with no option for SAML.

## Granting SAML identities access to collection data
<a name="serverless-saml-policies"></a>

After you create a SAML provider, you still need to grant the underlying users and groups access to the data within your collections. You grant access through [data access policies](serverless-data-access.md). Until you provide users access, they won't be able to read, write, or delete any data within your collections.

To grant access, create a data access policy and specify your SAML user and/or group IDs in the `Principal` statement:

```
[
   {
      "Rules":[
       ...  
      ],
      "Principal":[
         "saml/987654321098/myprovider/user/Shaheen",
         "saml/987654321098/myprovider/group/finance"
      ]
   }
]
```

You can grant access to collections, indexes, or both. If you want different users to have different permissions, create multiple rules. For a list of available permissions, see [Supported policy permissions](serverless-data-access.md#serverless-data-supported-permissions). For information about how to format an access policy, see [Policy syntax](serverless-data-access.md).
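For example, the following data access policy grants a SAML group read-only access to all indexes in a hypothetical collection named `my-collection`:

```
[
   {
      "Description": "Read access for the finance group",
      "Rules": [
         {
            "ResourceType": "index",
            "Resource": ["index/my-collection/*"],
            "Permission": [
               "aoss:DescribeIndex",
               "aoss:ReadDocument"
            ]
         }
      ],
      "Principal": [
         "saml/987654321098/myprovider/group/finance"
      ]
   }
]
```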

## Creating SAML providers (AWS CLI)
<a name="serverless-saml-creating-api"></a>

To create a SAML provider using the OpenSearch Serverless API, send a [CreateSecurityConfig](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_CreateSecurityConfig.html) request:

```
aws opensearchserverless create-security-config \
    --name myprovider \
    --type saml \
    --saml-options file://saml-auth0.json
```

Specify `saml-options`, including the metadata XML, as a key-value map within a .json file. The metadata XML must be encoded as a [JSON escaped string](https://www.freeformatter.com/json-escape.html).

```
{
   "sessionTimeout": 70,
   "groupAttribute": "department",
   "userAttribute": "userid",
   "openSearchServerlessEntityId": "aws:opensearch:111122223333:app1",
   "metadata": "EntityDescriptor xmlns=\"urn:oasis:names:tc:SAML:2.0:metadata\" ... ... ... IDPSSODescriptor\r\n\/EntityDescriptor"
}
```

**Note**  
To configure a custom audience restriction, specify it in the optional `openSearchServerlessEntityId` field, as shown in the preceding example.

## Viewing SAML providers
<a name="serverless-saml-viewing"></a>

The following [ListSecurityConfigs](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_ListSecurityConfigs.html) request lists all SAML providers in your account:

```
aws opensearchserverless list-security-configs --type saml
```

The request returns information about all existing SAML providers, including the full IdP metadata that your identity provider generates:

```
{
   "securityConfigDetails": [ 
      { 
         "configVersion": "MTY2NDA1MjY4NDQ5M18x",
         "createdDate": 1664054180858,
         "description": "Example SAML provider",
         "id": "saml/111122223333/myprovider",
         "lastModifiedDate": 1664054180858,
         "samlOptions": { 
            "groupAttribute": "department",
            "metadata": "EntityDescriptorxmlns=\"urn:oasis:names:tc:SAML:2.0:metadata\" ...... ...IDPSSODescriptor\r\n/EntityDescriptor",
            "sessionTimeout": 120,
            "openSearchServerlessEntityId": "aws:opensearch:111122223333:app1",
            "userAttribute": "userid"
         }
      }
   ]
}
```

To view details about a specific provider, including the `configVersion` for future updates, send a `GetSecurityConfig` request.
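For example, the following command retrieves the details of a hypothetical provider named `myprovider`:

```
aws opensearchserverless get-security-config --id saml/111122223333/myprovider
```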

## Updating SAML providers
<a name="serverless-saml-updating"></a>

To update a SAML provider using the OpenSearch Serverless console, choose **SAML authentication**, select your identity provider, and choose **Edit**. You can modify all fields, including the metadata and custom attributes.

To update a provider through the OpenSearch Serverless API, send an [UpdateSecurityConfig](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_UpdateSecurityConfig.html) request and include the identifier of the policy to be updated. You must also include a configuration version, which you can retrieve using the `ListSecurityConfigs` or `GetSecurityConfig` commands. Including the most recent version ensures that you don't inadvertently override a change made by someone else.

The following request updates the SAML options for a provider:

```
aws opensearchserverless update-security-config \
    --id saml/123456789012/myprovider \
    --type saml \
    --saml-options file://saml-auth0.json \
    --config-version MTY2NDA1MjY4NDQ5M18x
```

Specify your SAML configuration options as a key-value map within a .json file.

**Important**  
**Updates to SAML options are *not* incremental**. If you don't specify a value for a parameter in the `SAMLOptions` object when you make an update, the existing values will be overridden with empty values. For example, if the current configuration contains a value for `userAttribute`, and then you make an update and don't include this value, the value is removed from the configuration. Make sure you know what the existing values are before you make an update by calling the `GetSecurityConfig` operation.

## Deleting SAML providers
<a name="serverless-saml-deleting"></a>

When you delete a SAML provider, any references to associated users and groups in your data access policies are no longer functional. To avoid confusion, we suggest that you remove all references to the provider from your data access policies before you delete it.

To delete a SAML provider using the OpenSearch Serverless console, choose **Authentication**, select the provider, and choose **Delete**.

To delete a provider through the OpenSearch Serverless API, send a [DeleteSecurityConfig](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_DeleteSecurityConfig.html) request:

```
aws opensearchserverless delete-security-config --id saml/123456789012/myprovider
```

# Compliance validation for Amazon OpenSearch Serverless
<a name="serverless-compliance-validation"></a>

Third-party auditors assess the security and compliance of Amazon OpenSearch Serverless as part of multiple AWS compliance programs. These programs include SOC, PCI, and HIPAA.

To learn whether an AWS service is within the scope of specific compliance programs, see [AWS services in Scope by Compliance Program](https://aws.amazon.com/compliance/services-in-scope/) and choose the compliance program that you are interested in. For general information, see [AWS Compliance Programs](https://aws.amazon.com/compliance/programs/).

You can download third-party audit reports using AWS Artifact. For more information, see [Downloading Reports in AWS Artifact](https://docs.aws.amazon.com/artifact/latest/ug/downloading-documents.html).

Your compliance responsibility when using AWS services is determined by the sensitivity of your data, your company's compliance objectives, and applicable laws and regulations. For more information about your compliance responsibility when using AWS services, see [AWS Security Documentation](https://docs.aws.amazon.com/security/).

# Tagging Amazon OpenSearch Serverless collections
<a name="tag-collection"></a>

Tags let you assign arbitrary information to an Amazon OpenSearch Serverless collection so you can categorize and filter on that information. A *tag* is a metadata label that you assign or that AWS assigns to an AWS resource. 

Each tag consists of a *key* and a *value*. For tags that you assign, you define the key and value. For example, you might define the key as `stage` and the value for one resource as `test`.

With tags, you can identify and organize your AWS resources. Many AWS services support tagging, so you can assign the same tag to resources from different services to indicate that the resources are related. For example, you could assign the same tag to an OpenSearch Serverless collection that you assign to an Amazon OpenSearch Service domain.

In OpenSearch Serverless, the primary resource is a collection. You can use the OpenSearch Service console, the AWS CLI, the OpenSearch Serverless API operations, or the AWS SDKs to add, manage, and remove tags from a collection.

## Permissions required
<a name="collection-tag-permissions"></a>

OpenSearch Serverless uses the following AWS Identity and Access Management (IAM) permissions for tagging collections:
+ `aoss:TagResource`
+ `aoss:ListTagsForResource`
+ `aoss:UntagResource`

# Tagging collections (console)
<a name="tag-collection-console"></a>

The console is the simplest way to tag a collection.

**To create a tag (console)**

1. Sign in to the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. Expand **Serverless** in the left navigation pane and choose **Collections**.

1. Select the collection that you want to add tags to, and go to the **Tags** tab.

1. Choose **Manage** and **Add new tag**.

1. Enter a tag key and an optional value.

1. Choose **Save**.

To delete a tag, follow the same steps and choose **Remove** on the **Manage tags** page.

For more information about using the console to work with tags, see [Tag Editor](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/tag-editor.html) in the *AWS Management Console Getting Started Guide*.

# Tagging collections (AWS CLI)
<a name="tag-collection-cli"></a>

To tag a collection using the AWS CLI, send a [TagResource](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_TagResource.html) request: 

```
aws opensearchserverless tag-resource \
  --resource-arn arn:aws:aoss:us-east-1:123456789012:collection/my-collection \
  --tags Key=service,Value=aoss Key=source,Value=logs
```

View the existing tags for a collection with the [ListTagsForResource](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_ListTagsForResource.html) command:

```
aws opensearchserverless list-tags-for-resource \
  --resource-arn arn:aws:aoss:us-east-1:123456789012:collection/my-collection
```

Remove tags from a collection using the [UntagResource](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/API_UntagResource.html) command:

```
aws opensearchserverless untag-resource \
  --resource-arn arn:aws:aoss:us-east-1:123456789012:collection/my-collection \
  --tag-keys service
```

# Supported operations and plugins in Amazon OpenSearch Serverless
<a name="serverless-genref"></a>

Amazon OpenSearch Serverless supports a variety of OpenSearch plugins, as well as a subset of the indexing, search, and metadata [API operations](https://opensearch.org/docs/latest/opensearch/rest-api/index/) available in OpenSearch. You can include the permissions in the left column of the following table in [data access policies](serverless-data-access.md) to limit access to certain operations.

**Topics**
+ [Supported OpenSearch API operations and permissions](#serverless-operations)
+ [Supported OpenSearch plugins](#serverless-plugins)

## Supported OpenSearch API operations and permissions
<a name="serverless-operations"></a>

The following table lists the API operations that OpenSearch Serverless supports, along with their corresponding data access policy permissions:


| Data access policy permission | OpenSearch API operations | Description and caveats | 
| --- | --- | --- | 
|  `aoss:CreateIndex`  | PUT <index> |  Create indexes. For more information, see [Create index](https://opensearch.org/docs/latest/api-reference/index-apis/create-index/).  This permission also applies to creating indexes with the sample data on OpenSearch Dashboards.   | 
|  `aoss:DescribeIndex`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Describe indexes. For more information, see the following resources: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  | 
|  `aoss:WriteDocument`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Write and update documents. For more information, see the following resources: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  Some operations are only allowed for collections of type `SEARCH`. For more information, see [Choosing a collection type](serverless-overview.md#serverless-usecase).   | 
|  `aoss:ReadDocument`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  | Read documents. For more information, see the following resources:[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html) | 
|  `aoss:DeleteIndex`  | DELETE <target> | Delete indexes. For more information, see [Delete index](https://opensearch.org/docs/latest/api-reference/index-apis/delete-index/). | 
|  `aoss:UpdateIndex`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Update index settings. For more information, see the following resources: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  | 
|  `aoss:CreateCollectionItems`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html) | 
|  `aoss:DescribeCollectionItems`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Describes how to work with aliases, index and framework templates, and pipelines. For more information, see the following resources: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  | 
|  `aoss:UpdateCollectionItems`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  | Update aliases, index templates, and framework templates. For more information, see the following resources: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html) The ML Commons Client and OpenSearch Serverless services manage dependent policies.  | 
|  `aoss:DeleteCollectionItems`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Delete aliases, index and framework templates, and pipelines. For more information, see the following resources: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  | 
|  `aoss:DescribeMLResource`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Provides permission to use GET and search APIs to retrieve information about models and connectors.  | 
|  `aoss:CreateMLResource`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Provides permission to create ML resources.  | 
|  `aoss:UpdateMLResource`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Provides permission to update existing ML resources.  | 
|  `aoss:DeleteMLResource`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Provides permission to delete ML resources.  | 
|  `aoss:ExecuteMLResource`  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-genref.html)  |  Provides permission to run models.  | 

## Supported OpenSearch plugins
<a name="serverless-plugins"></a>

OpenSearch Serverless collections come prepackaged with the following plugins from the OpenSearch community. Serverless automatically deploys and manages plugins for you.

**Analysis plugins**
+  [ICU Analysis](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/analysis-icu) 
+  [Japanese (kuromoji) Analysis](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/analysis-kuromoji)
+  [Korean (Nori) Analysis](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/analysis-nori) 
+  [Phonetic Analysis](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/analysis-phonetic) 
+  [Smart Chinese Analysis](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/analysis-smartcn) 
+  [Stempel Polish Analysis](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/analysis-stempel)
+  [Ukrainian Analysis](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/analysis-ukrainian)

**Mapper plugins**
+  [Mapper Size](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/mapper-size) 
+  [Mapper Murmur3](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/mapper-murmur3) 
+  [Mapper Annotated Text](https://github.com/opensearch-project/OpenSearch/tree/main/plugins/mapper-annotated-text)

**Scripting plugins**
+  [Painless](https://opensearch.org/docs/latest/api-reference/script-apis/exec-script/)
+  [Expression](https://opensearch.org/docs/latest/data-prepper/pipelines/expression-syntax/) 
+  [Mustache](https://mustache.github.io/mustache.5.html)

In addition, OpenSearch Serverless includes all plugins that ship as modules. 

# Monitoring Amazon OpenSearch Serverless
<a name="serverless-monitoring"></a>

Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon OpenSearch Serverless and your other AWS solutions. AWS provides the following monitoring tools to watch OpenSearch Serverless, report when something is wrong, and take automatic actions when appropriate:
+ *Amazon CloudWatch* monitors your AWS resources and the applications that you run on AWS in real time. You can collect and track metrics, create customized dashboards, and set alarms that notify you or take actions when a specified metric reaches a threshold that you specify. 

  For example, you can have CloudWatch track CPU usage or other metrics of your Amazon EC2 instances and automatically launch new instances when needed. For more information, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/).
+ *AWS CloudTrail* captures API calls and related events made by or on behalf of your AWS account. It delivers the log files to an Amazon S3 bucket that you specify. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred. For more information, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/).
+ *Amazon EventBridge* delivers a near real-time stream of system events that describe changes in your OpenSearch Service domains. You can create rules that watch for certain events, and trigger automated actions in other AWS services when these events occur. For more information, see the [Amazon EventBridge User Guide](https://docs.aws.amazon.com/eventbridge/latest/userguide/).

# Monitoring OpenSearch Serverless with Amazon CloudWatch
<a name="monitoring-cloudwatch"></a>

You can monitor Amazon OpenSearch Serverless using CloudWatch, which collects raw data and processes it into readable, near real-time metrics. These statistics are kept for 15 months, so that you can access historical information and gain a better perspective on how your web application or service is performing. 

You can also set alarms that watch for certain thresholds, and send notifications or take actions when those thresholds are met. For more information, see the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/).

OpenSearch Serverless reports the following metrics in the `AWS/AOSS` namespace.


| Metric | Description | 
| --- | --- | 
| ActiveCollection |  Indicates whether a collection is active. A value of 1 means that the collection is in an `ACTIVE` state. This value is emitted upon successful creation of a collection and remains 1 until you delete the collection. The metric can't have a value of 0. **Relevant statistics**: Max **Dimensions**: `ClientId`, `CollectionId`, `CollectionName` **Frequency**: 60 seconds  | 
| DeletedDocuments |  The total number of deleted documents. **Relevant statistics**: Average, Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName`, `IndexId`, `IndexName` **Frequency**: 60 seconds  | 
| IndexingOCU |  The number of OpenSearch Compute Units (OCUs) used to ingest collection data. This metric applies at the account level. Represents usage only for collections that are not part of any collection group. **Relevant statistics**: Sum **Dimensions**: `ClientId` **Frequency**: 60 seconds  | 
| IndexingOCU |  The number of OpenSearch Compute Units (OCUs) used to ingest collection data. This metric applies at the collection group level. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionGroupId`, `CollectionGroupName` **Frequency**: 60 seconds  | 
| IngestionDataRate |  The indexing rate in GiB per second to a collection or index. This metric only applies to bulk indexing requests. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName`, `IndexId`, `IndexName` **Frequency**: 60 seconds  | 
| IngestionDocumentErrors |  The total number of document errors during ingestion for a collection or index. After a successful bulk indexing request, writers process the request and emit errors for all failed documents within the request. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName`, `IndexId`, `IndexName` **Frequency**: 60 seconds  | 
| IngestionDocumentRate |  The rate per second at which documents are being ingested to a collection or index. This metric only applies to bulk indexing requests. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName`, `IndexId`, `IndexName` **Frequency**: 60 seconds  | 
| IngestionRequestErrors |  The total number of bulk indexing request errors to a collection. OpenSearch Serverless emits this metric when a bulk indexing request fails for any reason, such as an authentication or availability issue. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName` **Frequency**: 60 seconds  | 
| IngestionRequestLatency |  The latency, in seconds, for bulk write operations to a collection. **Relevant statistics**: Minimum, Maximum, Average **Dimensions**: `ClientId`, `CollectionId`, `CollectionName` **Frequency**: 60 seconds  | 
| IngestionRequestRate |  The total number of bulk write operations received by a collection. **Relevant statistics**: Minimum, Maximum, Average **Dimensions**: `ClientId`, `CollectionId`, `CollectionName` **Frequency**: 60 seconds  | 
| IngestionRequestSuccess |  The total number of successful indexing operations to a collection. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName` **Frequency**: 60 seconds  | 
| SearchableDocuments |  The total number of searchable documents in a collection or index. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName`, `IndexId`, `IndexName` **Frequency**: 60 seconds  | 
| SearchRequestErrors |  The total number of query errors per minute for a collection. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName` **Frequency**: 60 seconds  | 
| SearchRequestLatency |  The average time, in milliseconds, that it takes to complete a search operation against a collection. **Relevant statistics**: Minimum, Maximum, Average **Dimensions**: `ClientId`, `CollectionId`, `CollectionName` **Frequency**: 60 seconds  | 
| SearchOCU |  The number of OpenSearch Compute Units (OCUs) used to search collection data. This metric applies at the account level. Represents usage only for collections that are not part of any collection group. **Relevant statistics**: Sum **Dimensions**: `ClientId` **Frequency**: 60 seconds  | 
| SearchOCU |  The number of OpenSearch Compute Units (OCUs) used to search collection data. This metric applies at the collection group level. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionGroupId`, `CollectionGroupName` **Frequency**: 60 seconds  | 
| SearchRequestRate |  The total number of search requests per minute to a collection. **Relevant statistics**: Average, Maximum, Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName` **Frequency**: 60 seconds  | 
| StorageUsedInS3 |  The amount, in bytes, of Amazon S3 storage used. OpenSearch Serverless stores indexed data in Amazon S3. You must set the period to one minute to get an accurate value. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName`, `IndexId`, `IndexName` **Frequency**: 60 seconds  | 
| VectorIndexBuildAccelerationOCU |  The number of OpenSearch Compute Units (OCUs) used to accelerate vector indexing. This metric applies at the collection level. **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId` **Frequency**: 60 seconds  | 
| 2xx, 3xx, 4xx, 5xx |  The number of requests to the collection that resulted in the given HTTP response code (2*xx*, 3*xx*, 4*xx*, 5*xx*). **Relevant statistics**: Sum **Dimensions**: `ClientId`, `CollectionId`, `CollectionName` **Frequency**: 60 seconds  | 
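As a sketch of how these metrics might be consumed, the following Python snippet builds the parameters for a hypothetical CloudWatch alarm on `SearchRequestLatency` in the `AWS/AOSS` namespace. The collection ID, alarm name, and threshold are illustrative, and the boto3 call itself is shown but commented out.

```python
# Sketch: building a CloudWatch alarm definition for an OpenSearch
# Serverless metric. The collection ID, alarm name, and threshold are
# hypothetical; uncomment the boto3 call to actually create the alarm.

def build_latency_alarm(collection_id: str, collection_name: str,
                        threshold_ms: float) -> dict:
    """Return put_metric_alarm kwargs for SearchRequestLatency."""
    return {
        "AlarmName": f"{collection_name}-search-latency",  # illustrative name
        "Namespace": "AWS/AOSS",                           # Serverless namespace
        "MetricName": "SearchRequestLatency",
        "Statistic": "Average",                            # a relevant statistic
        "Period": 60,                                      # metrics emit every 60 seconds
        "EvaluationPeriods": 3,
        "Threshold": threshold_ms,
        "ComparisonOperator": "GreaterThanThreshold",
        "Dimensions": [
            {"Name": "CollectionId", "Value": collection_id},
            {"Name": "CollectionName", "Value": collection_name},
        ],
    }

params = build_latency_alarm("aab9texampletu45xh77", "test-collection", 500.0)
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**params)
```

The 60-second period matches the emission frequency listed in the table above; a longer period would average across multiple data points.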

# Logging OpenSearch Serverless API calls using AWS CloudTrail
<a name="logging-using-cloudtrail"></a>

Amazon OpenSearch Serverless is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service in OpenSearch Serverless. 

CloudTrail captures all API calls for OpenSearch Serverless as events. The calls captured include calls from the Serverless section of the OpenSearch Service console and code calls to the OpenSearch Serverless API operations.

If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for OpenSearch Serverless. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in **Event history**. 

Using the information collected by CloudTrail, you can determine the request that was made to OpenSearch Serverless, the IP address from which the request was made, who made the request, when it was made, and additional details.

To learn more about CloudTrail, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html).

## OpenSearch Serverless information in CloudTrail
<a name="service-name-info-in-cloudtrail"></a>

CloudTrail is enabled on your AWS account when you create the account. When activity occurs in OpenSearch Serverless, that activity is recorded in a CloudTrail event along with other AWS service events in **Event history**. You can view, search, and download recent events in your AWS account. For more information, see [Viewing events with CloudTrail Event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html).

For an ongoing record of events in your AWS account, including events for OpenSearch Serverless, create a trail. A *trail* enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a trail in the console, the trail applies to all AWS Regions. 

The trail logs events from all Regions in the AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see the following:
+ [Overview for creating a trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html)
+ [CloudTrail supported services and integrations](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html)
+ [Configuring Amazon SNS notifications for CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/configure-sns-notifications-for-cloudtrail.html)
+ [Receiving CloudTrail log files from multiple regions](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html) and [Receiving CloudTrail log files from multiple accounts](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html)

All OpenSearch Serverless actions are logged by CloudTrail and are documented in the [OpenSearch Serverless API reference](https://docs.aws.amazon.com/opensearch-service/latest/ServerlessAPIReference/Welcome.html). For example, calls to the `CreateCollection`, `ListCollections`, and `DeleteCollection` actions generate entries in the CloudTrail log files.
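As a sketch, the parameters for a CloudTrail `LookupEvents` call that retrieves recent `CreateCollection` management events from **Event history** could look like the following. The seven-day lookback window is illustrative; the boto3 call is shown but commented out.

```python
from datetime import datetime, timedelta, timezone

# Sketch: parameters for a CloudTrail LookupEvents call that retrieves
# recent CreateCollection management events. The lookback window is
# illustrative; uncomment the boto3 call to run it against your account.

lookup_params = {
    "LookupAttributes": [
        {"AttributeKey": "EventName", "AttributeValue": "CreateCollection"},
    ],
    "StartTime": datetime.now(timezone.utc) - timedelta(days=7),
    "EndTime": datetime.now(timezone.utc),
}
# import boto3
# for event in boto3.client("cloudtrail").lookup_events(**lookup_params)["Events"]:
#     print(event["EventName"], event["EventTime"])
```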

Every event or log entry contains information about who generated the request. The identity information helps you determine:
+ Whether the request was made with root or AWS Identity and Access Management (IAM) user credentials.
+ Whether the request was made with temporary security credentials for a role or federated user.
+ Whether the request was made by another AWS service.

For more information, see the [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html).
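The checks above can be sketched as a small classifier over the `userIdentity` element. The trimmed record below is hypothetical; real entries carry many more fields, as the examples later in this section show.

```python
import json

# Sketch: classifying who made a request from a CloudTrail log entry's
# userIdentity element. The record below is a trimmed, hypothetical example.

def describe_caller(record: dict) -> str:
    ident = record.get("userIdentity", {})
    itype = ident.get("type")
    if itype == "Root":
        return "root credentials"
    if itype == "IAMUser":
        return f"IAM user {ident.get('userName')}"
    if itype == "AssumedRole":
        # Temporary credentials: the issuing role lives in sessionContext
        issuer = ident.get("sessionContext", {}).get("sessionIssuer", {})
        return f"temporary credentials for role {issuer.get('userName')}"
    if itype == "AWSService":
        return f"AWS service {ident.get('invokedBy')}"
    return "unknown caller"

record = json.loads("""{
  "eventName": "CreateCollection",
  "userIdentity": {
    "type": "AssumedRole",
    "sessionContext": {"sessionIssuer": {"type": "Role", "userName": "Admin"}}
  }
}""")
print(describe_caller(record))  # temporary credentials for role Admin
```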

## OpenSearch Serverless data events in CloudTrail
<a name="cloudtrail-data-events"></a>

[Data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) provide information about the resource operations performed on or in a resource (for example, searching or indexing documents in an OpenSearch Serverless collection). These are also known as data plane operations. Data events are often high-volume activities. By default, CloudTrail doesn't log data events. The CloudTrail **Event history** doesn't record data events.

Additional charges apply for data events. For more information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

You can log data events for the `AWS::AOSS::Collection` resource types by using the CloudTrail console, AWS CLI, or CloudTrail API operations. For more information about how to log data events, see [Logging data events with the AWS Management Console](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events-console) and [Logging data events with the AWS Command Line Interface](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#creating-data-event-selectors-with-the-AWS-CLI) in the *AWS CloudTrail User Guide*.

You can configure advanced event selectors to filter on the `eventName`, `readOnly`, and `resources.ARN` fields to log only those events that are important to you. For more information about these fields, see [https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html) in the *AWS CloudTrail API Reference*.
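As a sketch of how those fields fit together, the following builds an advanced event selector that logs only read-only data events on one hypothetical collection ARN. The selector name is illustrative, and the CloudTrail `PutEventSelectors` call is shown but commented out.

```python
# Sketch: an advanced event selector that logs only read-only data events
# on one hypothetical collection ARN. Uncomment the boto3 call to apply
# it to a trail in your account.

collection_arn = "arn:aws:aoss:us-east-1:111122223333:collection/aab9texampletu45xh77"

selector = {
    "Name": "aoss-read-only-data-events",       # illustrative selector name
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::AOSS::Collection"]},
        {"Field": "readOnly", "Equals": ["true"]},
        {"Field": "resources.ARN", "Equals": [collection_arn]},
    ],
}
# import boto3
# boto3.client("cloudtrail").put_event_selectors(
#     TrailName="my-trail", AdvancedEventSelectors=[selector])
```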

## Understanding OpenSearch Serverless data event entries
<a name="understanding-data-event-entries"></a>

In the following example:
+ The `requestParameters` field contains details about the API call made to the collection. It includes the base request path (without query parameters).
+ The `responseElements` field includes a status code that indicates the outcome of your request when modifying resources. This status code helps you track whether your changes were processed successfully or require attention.
+ OpenSearch Serverless logs CloudTrail data events only for requests that have successfully completed IAM authentication.

**Example**  

```
 {
      "eventVersion": "1.11",
      "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROA123456789EXAMPLE",
        "arn": "arn:aws:sts::111122223333:assumed-role/Admin/user-role",
        "accountId": "111122223333",
        "accessKeyId": "access-key",
        "userName": "",
        "sessionContext": {
          "sessionIssuer": {
            "type": "Role",
            "principalId": "AROA123456789EXAMPLE",
            "arn": "arn:aws:iam::111122223333:role/Admin",
            "accountId": "111122223333",
            "userName": "Admin"
          },
          "attributes": {
            "creationDate": "2025-08-15T22:57:38Z",
            "mfaAuthenticated": "false"
          },
          "sourceIdentity": "",
          "ec2RoleDelivery": "",
          "assumedRoot": ""
        },
        "identityProvider": "",
        "credentialId": ""
      },
      "eventTime": "2025-08-15T22:58:00Z",
      "eventSource": "aoss.amazonaws.com",
      "eventName": "Search",
      "awsRegion": "us-east-1",
      "sourceIPAddress": "AWS Internal",
      "userAgent": "python-requests/2.32.3",
      "requestParameters": {
        "pathPrefix": "/_search"
      },
      "responseElements": null,
      "requestID": "2cfee788-EXAM-PLE1-8617-4018cEXAMPLE",
      "eventID": "48d43617-EXAM-PLE1-9d9c-f7EXAMPLE",
      "readOnly": true,
      "resources": [
        {
          "type": "AWS::AOSS::Collection",
          "ARN": "arn:aws:aoss:us-east-1:111122223333:collection/aab9texampletu45xh77"
        }
      ],
      "eventType": "AwsApiCall",
      "managementEvent": false,
      "recipientAccountId": "111122223333",
      "eventCategory": "Data"
    }
```

## Understanding OpenSearch Serverless management event entries
<a name="understanding-service-name-entries"></a>

A trail is a configuration that enables delivery of events as log files to an Amazon S3 bucket that you specify. CloudTrail log files contain one or more log entries. 

An event represents a single request from any source. It includes information about the requested action, the date and time of the action, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so they don't appear in any specific order. 

The following example displays a CloudTrail log entry that demonstrates the `CreateCollection` action.

```
{
   "eventVersion":"1.08",
   "userIdentity":{
      "type":"AssumedRole",
      "principalId":"AIDACKCEVSQ6C2EXAMPLE",
      "arn":"arn:aws:iam::123456789012:user/test-user",
      "accountId":"123456789012",
      "accessKeyId":"access-key",
      "sessionContext":{
         "sessionIssuer":{
            "type":"Role",
            "principalId":"AIDACKCEVSQ6C2EXAMPLE",
            "arn":"arn:aws:iam::123456789012:role/Admin",
            "accountId":"123456789012",
            "userName":"Admin"
         },
         "webIdFederationData":{
            
         },
         "attributes":{
            "creationDate":"2022-04-08T14:11:34Z",
            "mfaAuthenticated":"false"
         }
      }
   },
   "eventTime":"2022-04-08T14:11:49Z",
   "eventSource":"aoss.amazonaws.com",
   "eventName":"CreateCollection",
   "awsRegion":"us-east-1",
   "sourceIPAddress":"AWS Internal",
   "userAgent":"aws-cli/2.1.30 Python/3.8.8 Linux/5.4.176-103.347.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off command/aoss.create-collection",
   "errorCode":"HttpFailureException",
   "errorMessage":"An unknown error occurred",
   "requestParameters":{
      "accountId":"123456789012",
      "name":"test-collection",
      "description":"A sample collection",
      "clientToken":"d3a227d2-a2a7-49a6-8fb2-e5c8303c0718"
   },
   "responseElements": null,
   "requestID":"12345678-1234-1234-1234-987654321098",
   "eventID":"12345678-1234-1234-1234-987654321098",
   "readOnly":false,
   "eventType":"AwsApiCall",
   "managementEvent":true,
   "recipientAccountId":"123456789012",
   "eventCategory":"Management",
   "tlsDetails":{
      "clientProvidedHostHeader":"user.aoss-sample.us-east-1.amazonaws.com"
   }
}
```

# Monitoring OpenSearch Serverless events using Amazon EventBridge
<a name="serverless-monitoring-events"></a>

Amazon OpenSearch Serverless integrates with Amazon EventBridge to notify you of certain events that affect your collections. Events from AWS services are delivered to EventBridge in near real time. The same events are also sent to [Amazon CloudWatch Events](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatchEvents.html), the predecessor of Amazon EventBridge. You can write rules to indicate which events are of interest to you, and what automated actions to take when an event matches a rule. Examples of actions that you can automatically activate include the following:
+ Invoking an AWS Lambda function
+ Invoking an Amazon EC2 Run Command
+ Relaying the event to Amazon Kinesis Data Streams
+ Activating an AWS Step Functions state machine
+ Notifying an Amazon SNS topic or an Amazon SQS queue

For more information, see [Get started with Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-get-started.html) in the *Amazon EventBridge User Guide*.

## Setting up notifications
<a name="monitoring-events-notifications"></a>

You can use [AWS User Notifications](https://docs.aws.amazon.com/notifications/latest/userguide/what-is-service.html) to receive notifications when an OpenSearch Serverless event occurs. An event is an indicator of a change in your OpenSearch Serverless environment, such as when you reach the maximum limit of your OCU usage. Amazon EventBridge receives the event and routes a notification to the AWS Management Console Notifications Center and your chosen delivery channels. You receive a notification when an event matches a rule that you specify.
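As a sketch, an EventBridge event pattern that matches the OCU events described below might look like the following, paired with a tiny local matcher for top-level equality fields (real EventBridge matching supports far more operators). Rule and target setup via boto3 is out of scope here.

```python
# Sketch: an EventBridge event pattern matching OpenSearch Serverless OCU
# events, plus a naive local matcher that only handles top-level equality.
# Real EventBridge pattern matching is richer than this.

pattern = {
    "source": ["aws.aoss"],
    "detail-type": [
        "OCU Utilization Approaching Max Limit",
        "OCU Utilization Reached Max Limit",
    ],
}

def matches(pattern: dict, event: dict) -> bool:
    """Each pattern field lists the values it accepts."""
    return all(event.get(field) in allowed for field, allowed in pattern.items())

event = {
    "source": "aws.aoss",
    "detail-type": "OCU Utilization Reached Max Limit",
    "detail": {"description": "Your search OCU usage has reached the configured maximum limit."},
}
print(matches(pattern, event))  # True
```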

## OpenSearch Compute Units (OCU) events
<a name="monitoring-events-ocu"></a>

OpenSearch Serverless sends events to EventBridge when one of the following OCU-related events occurs.

### OCU usage approaching maximum limit
<a name="monitoring-events-ocu-approaching-max"></a>

OpenSearch Serverless sends this event when your search or index OCU usage reaches 75% of your capacity limit. Your OCU usage is calculated based on your configured capacity limit and your current OCU consumption.

**Example**

The following is an example event of this type (search OCU):

```
{
  "version": "0",
  "id": "01234567-0123-0123-0123-012345678901",
  "detail-type": "OCU Utilization Approaching Max Limit",
  "source": "aws.aoss",
  "account": "123456789012",
  "time": "2016-11-01T13:12:22Z",
  "region": "us-east-1",
  "resources": ["arn:aws:es:us-east-1:123456789012:domain/test-domain"],
  "detail": {
    "eventTime" : 1678943345789,
    "description": "Your search OCU usage is at 75% and is approaching the configured maximum limit."
  }
}
```

The following is an example event of this type (index OCU):

```
{
  "version": "0",
  "id": "01234567-0123-0123-0123-012345678901",
  "detail-type": "OCU Utilization Approaching Max Limit",
  "source": "aws.aoss",
  "account": "123456789012",
  "time": "2016-11-01T13:12:22Z",
  "region": "us-east-1",
  "resources": ["arn:aws:es:us-east-1:123456789012:domain/test-domain"],
  "detail": {
    "eventTime" : 1678943345789,
    "description": "Your indexing OCU usage is at 75% and is approaching the configured maximum limit."
  }
}
```

### OCU usage reached maximum limit
<a name="monitoring-events-ocu-reached-max"></a>

OpenSearch Serverless sends this event when your search or index OCU usage reaches 100% of your capacity limit. Your OCU usage is calculated based on your configured capacity limit and your current OCU consumption.

**Example**

The following is an example event of this type (search OCU):

```
{
  "version": "0",
  "id": "01234567-0123-0123-0123-012345678901",
  "detail-type": "OCU Utilization Reached Max Limit",
  "source": "aws.aoss",
  "account": "123456789012",
  "time": "2016-11-01T13:12:22Z",
  "region": "us-east-1",
  "resources": ["arn:aws:es:us-east-1:123456789012:domain/test-domain"],
  "detail": {
    "eventTime" : 1678943345789,
    "description": "Your search OCU usage has reached the configured maximum limit."
  }
}
```

The following is an example event of this type (index OCU):

```
{
  "version": "0",
  "id": "01234567-0123-0123-0123-012345678901",
  "detail-type": "OCU Utilization Reached Max Limit",
  "source": "aws.aoss",
  "account": "123456789012",
  "time": "2016-11-01T13:12:22Z",
  "region": "us-east-1",
  "resources": ["arn:aws:es:us-east-1:123456789012:domain/test-domain"],
  "detail": {
    "eventTime" : 1678943345789,
    "description": "Your indexing OCU usage has reached the configured maximum limit."
  }
}
```