

# Machine learning for Amazon OpenSearch Service
<a name="ml"></a>

ML Commons is an OpenSearch plugin that provides a set of common machine learning (ML) algorithms through transport and REST API calls. The plugin chooses the right nodes and resources for each ML request and monitors ML tasks to ensure uptime. This allows you to use existing open-source ML algorithms and reduces the effort required to develop new ML features. For more information about the plugin, see [Machine learning](https://opensearch.org/docs/latest/ml-commons-plugin/index/) in the OpenSearch documentation. This chapter covers how to use the plugin with Amazon OpenSearch Service.

**Topics**
+ [Amazon OpenSearch Service ML connectors for AWS services](ml-amazon-connector.md)
+ [Amazon OpenSearch Service ML connectors for third-party platforms](ml-external-connector.md)
+ [Using CloudFormation to set up remote inference for semantic search](cfn-template.md)
+ [Unsupported ML Commons settings](#sm)
+ [OpenSearch Service flow framework templates](ml-workflow-framework.md)

# Amazon OpenSearch Service ML connectors for AWS services
<a name="ml-amazon-connector"></a>

When you use Amazon OpenSearch Service machine learning (ML) connectors with another AWS service, you need to set up an IAM role that securely connects OpenSearch Service to that service. You can set up connectors to AWS services such as Amazon SageMaker AI and Amazon Bedrock. In this tutorial, we cover how to create a connector from OpenSearch Service to SageMaker Runtime. For more information about connectors, see [Supported connectors](https://opensearch.org/docs/latest/ml-commons-plugin/remote-models/connectors/#supported-connectors).

**Topics**
+ [Prerequisites](#connector-sagemaker-prereq)
+ [Create an OpenSearch Service connector](#connector-sagemaker-create)

## Prerequisites
<a name="connector-sagemaker-prereq"></a>

To create a connector, you must have an Amazon SageMaker AI model endpoint and an IAM role that grants OpenSearch Service access to it.

### Set up an Amazon SageMaker AI domain
<a name="connector-sagemaker"></a>

See [Deploy a Model in Amazon SageMaker AI](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-deployment.html) in the *Amazon SageMaker AI Developer Guide* to deploy your machine learning model. Note the endpoint URL for your model, which you need in order to create an AI connector.

### Create an IAM role
<a name="connector-sagemaker-iam"></a>

Set up an IAM role to delegate SageMaker Runtime permissions to OpenSearch Service. To create a new role, see [Creating an IAM role (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html#roles-creatingrole-user-console) in the *IAM User Guide*. Optionally, you can use an existing role as long as it has the same set of privileges. If you create a new role instead of using an existing one, replace `opensearch-sagemaker-role` in this tutorial with the name of your own role.

1. Attach the following IAM policy to your new role to allow OpenSearch Service to access your SageMaker AI endpoint. To attach a policy to a role, see [Adding IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console). 

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {   
               "Action": [
                   "sagemaker:InvokeEndpointAsync",
                   "sagemaker:InvokeEndpoint"
               ],
               "Effect": "Allow",
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Follow the instructions in [Modifying a role trust policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-managingrole-editing-console.html#roles-managingrole_edit-trust-policy) to edit the trust relationship of the role. In the following policy, replace *service-principal* with one of the following service principals for OpenSearch Service or OpenSearch Serverless:  
**For OpenSearch Service**  
`opensearchservice.amazonaws.com`  
**For OpenSearch Serverless**  
`ml.opensearchservice.amazonaws.com`

------
#### [ JSON ]


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Action": [
                   "sts:AssumeRole"
               ],
               "Effect": "Allow",
               "Principal": {
                   "Service": [
                    "service-principal"
                   ]
               }
           }
       ]
   }
   ```

------

   We recommend that you use the `aws:SourceAccount` and `aws:SourceArn` condition keys to limit access to a specific domain. The `SourceAccount` is the AWS account ID that belongs to the owner of the domain, and the `SourceArn` is the ARN of the domain. For example, you can add the following condition block to the trust policy: 

   ```
   "Condition": {
       "StringEquals": {
           "aws:SourceAccount": "account-id"
       },
       "ArnLike": {
           "aws:SourceArn": "arn:aws:es:region:account-id:domain/domain-name"
       }
   }
   ```

### Configure permissions
<a name="connector-sagemaker-permissions"></a>

In order to create the connector, you need permission to pass the IAM role to OpenSearch Service. You also need access to the `es:ESHttpPost` action. To grant both of these permissions, attach the following policy to the IAM role whose credentials are being used to sign the request:

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/opensearch-sagemaker-role"
        },
        {
            "Effect": "Allow",
            "Action": "es:ESHttpPost",
            "Resource": "arn:aws:es:us-east-1:111122223333:domain/domain-name/*"
        }
    ]
}
```

------

If your user or role doesn't have `iam:PassRole` permissions to pass your role, you might encounter an authorization error when you try to create the connector in the next step.

### Map the ML role in OpenSearch Dashboards (if using fine-grained access control)
<a name="connector-sagemaker-fgac"></a>

Fine-grained access control introduces an additional step when setting up a connector. Even if you use HTTP basic authentication for all other purposes, you need to map the `ml_full_access` role to your IAM role that has `iam:PassRole` permissions to pass `opensearch-sagemaker-role`.

1. Navigate to the OpenSearch Dashboards plugin for your OpenSearch Service domain. You can find the Dashboards endpoint on your domain dashboard on the OpenSearch Service console. 

1. From the main menu choose **Security**, **Roles**, and select the **ml_full_access** role.

1. Choose **Mapped users**, **Manage mapping**. 

1. Under **Backend roles**, add the ARN of the role that has permissions to pass `opensearch-sagemaker-role`.

   ```
   arn:aws:iam::account-id:role/role-name
   ```

1. Select **Map** and confirm the user or role shows up under **Mapped users**.
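
If you automate your domain setup, you can make the same mapping through the Security plugin's role-mappings REST API instead of the Dashboards UI. The following sketch only builds the request body; send it as a signed `PUT` request, for example with the same pattern as the sample Python client later in this chapter. The ARN is a placeholder:

```python
import json

# Placeholder: the ARN of the role that can pass opensearch-sagemaker-role.
backend_role_arn = "arn:aws:iam::account-id:role/role-name"

# A PUT to this path replaces the backend-role mapping for ml_full_access.
path = "_plugins/_security/api/rolesmapping/ml_full_access"
mapping = {"backend_roles": [backend_role_arn]}
body = json.dumps(mapping)
```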

## Create an OpenSearch Service connector
<a name="connector-sagemaker-create"></a>

To create a connector, send a `POST` request to the OpenSearch Service domain endpoint. You can use curl, the sample Python client, Postman, or another method to send a signed request. Note that you can't use a `POST` request in the OpenSearch Dashboards console. The request takes the following format:

```
POST domain-endpoint/_plugins/_ml/connectors/_create
{
   "name": "sagemaker: embedding",
   "description": "Test connector for Sagemaker embedding model",
   "version": 1,
   "protocol": "aws_sigv4",
   "credential": {
      "roleArn": "arn:aws:iam::account-id:role/opensearch-sagemaker-role"
   },
   "parameters": {
      "region": "region",
      "service_name": "sagemaker"
   },
   "actions": [
      {
         "action_type": "predict",
         "method": "POST",
         "headers": {
            "content-type": "application/json"
         },
         "url": "https://runtime.sagemaker.region.amazonaws.com/endpoints/endpoint-id/invocations",
         "request_body": "{ \"inputs\": { \"question\": \"${parameters.question}\", \"context\": \"${parameters.context}\" } }"
      }
   ]
}
```

If your domain resides within a virtual private cloud (VPC), your computer must be connected to the VPC for the request to successfully create the AI connector. Accessing a VPC varies by network configuration, but usually involves connecting to a VPN or corporate network. To check that you can reach your OpenSearch Service domain, navigate to `https://your-vpc-domain.region.es.amazonaws.com` in a web browser and verify that you receive the default JSON response.
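
The browser check can also be scripted. The following is a minimal sketch (the endpoint shown in the comment is a placeholder) that reports whether the domain answers with parseable JSON:

```python
import json
import urllib.request

def check_domain(url: str, timeout: int = 5) -> bool:
    """Return True if the URL responds with parseable JSON (the default
    domain response), and False on any connection or parse failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            json.loads(resp.read())
        return True
    except Exception:
        return False

# Placeholder VPC domain endpoint; replace with your own:
# check_domain('https://your-vpc-domain.region.es.amazonaws.com')
```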

### Sample Python client
<a name="connector-sagemaker-python"></a>

The Python client is simpler to automate than an HTTP request and offers better reusability. To create the AI connector with the Python client, save the following sample code to a Python file. The client requires the [AWS SDK for Python (Boto3)](https://aws.amazon.com/sdk-for-python/), [requests](https://requests.readthedocs.io/en/latest/), and [requests-aws4auth](https://pypi.org/project/requests-aws4auth/) packages. 

```
import boto3
import requests 
from requests_aws4auth import AWS4Auth

host = 'domain-endpoint/'
region = 'region'
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

# Create the connector
path = '_plugins/_ml/connectors/_create'
url = host + path

payload = {
   "name": "sagemaker: embedding",
   "description": "Test connector for Sagemaker embedding model",
   "version": 1,
   "protocol": "aws_sigv4",
   "credential": {
      "roleArn": "arn:aws:iam::account-id:role/opensearch-sagemaker-role"
   },
   "parameters": {
      "region": "region",
      "service_name": "sagemaker"
   },
   "actions": [
      {
         "action_type": "predict",
         "method": "POST",
         "headers": {
            "content-type": "application/json"
         },
         "url": "https://runtime.sagemaker.region.amazonaws.com/endpoints/endpoint-id/invocations",
         "request_body": "{ \"inputs\": { \"question\": \"${parameters.question}\", \"context\": \"${parameters.context}\" } }"
      }
   ]
}
headers = {"Content-Type": "application/json"}

r = requests.post(url, auth=awsauth, json=payload, headers=headers)
print(r.status_code)
print(r.text)
```
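
The `_create` response includes a `connector_id`. To use the endpoint from OpenSearch, you then register a remote model against that connector and deploy it with the ML Commons model APIs. The following is a sketch of the two follow-up request bodies, to send with the same signed-request pattern as above; `connector-id` and `model-id` are placeholders taken from the previous responses:

```python
import json

connector_id = "connector-id"  # placeholder: returned by the _create call

# Step 1: register a remote model that points at the connector.
register_path = "_plugins/_ml/models/_register"
register_payload = {
    "name": "sagemaker-embedding",
    "function_name": "remote",
    "description": "Remote model backed by the SageMaker connector",
    "connector_id": connector_id,
}

# Step 2: deploy the model ID returned by the register call.
model_id = "model-id"  # placeholder: returned by the register call
deploy_path = f"_plugins/_ml/models/{model_id}/_deploy"

body = json.dumps(register_payload)
```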

# Amazon OpenSearch Service ML connectors for third-party platforms
<a name="ml-external-connector"></a>

In this tutorial, we cover how to create a connector from OpenSearch Service to Cohere. For more information about connectors, see [Supported connectors](https://opensearch.org/docs/latest/ml-commons-plugin/remote-models/connectors/#supported-connectors).

When you use an Amazon OpenSearch Service machine learning (ML) connector with an external remote model, you need to store your specific authorization credentials in AWS Secrets Manager. This could be an API key, or a username and password combination. This means you also need to create an IAM role that allows OpenSearch Service access to read from Secrets Manager. 

**Topics**
+ [Prerequisites](#connector-external-prereq)
+ [Create an OpenSearch Service connector](#connector-external-create)

## Prerequisites
<a name="connector-external-prereq"></a>

To create a connector for Cohere or any external provider with OpenSearch Service, you must have an IAM role that grants OpenSearch Service access to AWS Secrets Manager, where you store your credentials. You must also store your credentials in Secrets Manager.

### Create an IAM role
<a name="connector-external-iam"></a>

Set up an IAM role to delegate Secrets Manager permissions to OpenSearch Service. You can also use an existing role that has the `SecretsManagerReadWrite` managed policy attached. To create a new role, see [Creating an IAM role (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html#roles-creatingrole-user-console) in the *IAM User Guide*. If you create a new role instead of using an existing one, replace `opensearch-secretmanager-role` in this tutorial with the name of your own role.

1. Attach the following IAM policy to your new role to allow OpenSearch Service to access your Secrets Manager values. To attach a policy to a role, see [Adding IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console). 

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {   
               "Action": [
                   "secretsmanager:GetSecretValue"
               ],
               "Effect": "Allow",
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Follow the instructions in [Modifying a role trust policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/roles-managingrole-editing-console.html#roles-managingrole_edit-trust-policy) to edit the trust relationship of the role. In the following policy, replace *service-principal* with one of the following service principals for OpenSearch Service or OpenSearch Serverless:  
**For OpenSearch Service**  
`opensearchservice.amazonaws.com`  
**For OpenSearch Serverless**  
`ml.opensearchservice.amazonaws.com`

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Action": [
                   "sts:AssumeRole"
               ],
               "Effect": "Allow",
               "Principal": {
                   "Service": [
                    "service-principal"
                   ]
               }
           }
       ]
   }
   ```

------

   We recommend that you use the `aws:SourceAccount` and `aws:SourceArn` condition keys to limit access to a specific domain. The `SourceAccount` is the AWS account ID that belongs to the owner of the domain, and the `SourceArn` is the ARN of the domain. For example, you can add the following condition block to the trust policy: 

   ```
   "Condition": {
       "StringEquals": {
           "aws:SourceAccount": "account-id"
       },
       "ArnLike": {
           "aws:SourceArn": "arn:aws:es:region:account-id:domain/domain-name"
       }
   }
   ```

### Configure permissions
<a name="connector-external-permissions"></a>

In order to create the connector, you need permission to pass the IAM role to OpenSearch Service. You also need access to the `es:ESHttpPost` action. To grant both of these permissions, attach the following policy to the IAM role whose credentials are being used to sign the request:

------
#### [ JSON ]


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::111122223333:role/opensearch-secretmanager-role"
    },
    {
      "Effect": "Allow",
      "Action": "es:ESHttpPost",
      "Resource": "arn:aws:es:us-east-1:111122223333:domain/domain-name/*"
    }
  ]
}
```

------

If your user or role doesn't have `iam:PassRole` permissions to pass your role, you might encounter an authorization error when you try to create the connector in the next step.

### Set up AWS Secrets Manager
<a name="connector-external-sm"></a>

To store your authorization credentials in Secrets Manager, see [Create an AWS Secrets Manager secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) in the *AWS Secrets Manager User Guide*. 

After Secrets Manager accepts your key-value pair as a secret, you receive an ARN with the format: `arn:aws:secretsmanager:us-west-2:123456789012:secret:MySecret-a1b2c3`. Keep a record of this ARN, as you use it and your key when you create a connector in the next step.

### Map the ML role in OpenSearch Dashboards (if using fine-grained access control)
<a name="connector-external-fgac"></a>

Fine-grained access control introduces an additional step when setting up a connector. Even if you use HTTP basic authentication for all other purposes, you need to map the `ml_full_access` role to your IAM role that has `iam:PassRole` permissions to pass `opensearch-secretmanager-role`.

1. Navigate to the OpenSearch Dashboards plugin for your OpenSearch Service domain. You can find the Dashboards endpoint on your domain dashboard on the OpenSearch Service console. 

1. From the main menu choose **Security**, **Roles**, and select the **ml_full_access** role.

1. Choose **Mapped users**, **Manage mapping**. 

1. Under **Backend roles**, add the ARN of the role that has permissions to pass `opensearch-secretmanager-role`.

   ```
   arn:aws:iam::account-id:role/role-name
   ```

1. Select **Map** and confirm the user or role shows up under **Mapped users**.

## Create an OpenSearch Service connector
<a name="connector-external-create"></a>

To create a connector, send a `POST` request to the OpenSearch Service domain endpoint. You can use curl, the sample Python client, Postman, or another method to send a signed request. Note that you can't use a `POST` request in the OpenSearch Dashboards console. The request takes the following format:

```
POST domain-endpoint/_plugins/_ml/connectors/_create
{
    "name": "Cohere Connector: embedding",
    "description": "The connector to cohere embedding model",
    "version": 1,
    "protocol": "http",
    "credential": {
        "secretArn": "arn:aws:secretsmanager:region:account-id:secret:cohere-key-id",
        "roleArn": "arn:aws:iam::account-id:role/opensearch-secretmanager-role"
    },
    "actions": [
        {
            "action_type": "predict",
            "method": "POST",
            "url": "https://api.cohere.ai/v1/embed",
            "headers": {
                "Authorization": "Bearer ${credential.secretArn.cohere-key-used-in-secrets-manager}"
            },
            "request_body": "{ \"texts\": ${parameters.texts}, \"truncate\": \"END\" }"
        }
    ]
}
```

The request body for this request differs from that of an open-source connector request in two ways. In the `credential` field, you pass the ARN of the IAM role that permits OpenSearch Service to read from Secrets Manager, along with the ARN of the secret itself. In the `headers` field, you refer to the secret using its key and the `credential.secretArn` prefix, which indicates that the value comes from Secrets Manager. 

If your domain resides within a virtual private cloud (VPC), your computer must be connected to the VPC for the request to successfully create the AI connector. Accessing a VPC varies by network configuration, but usually involves connecting to a VPN or corporate network. To check that you can reach your OpenSearch Service domain, navigate to `https://your-vpc-domain.region.es.amazonaws.com` in a web browser and verify that you receive the default JSON response.

### Sample Python client
<a name="connector-external-python"></a>

The Python client is simpler to automate than an HTTP request and offers better reusability. To create the AI connector with the Python client, save the following sample code to a Python file. The client requires the [AWS SDK for Python (Boto3)](https://aws.amazon.com/sdk-for-python/), [requests](https://requests.readthedocs.io/en/latest/), and [requests-aws4auth](https://pypi.org/project/requests-aws4auth/) packages. 

```
import boto3
import requests 
from requests_aws4auth import AWS4Auth

host = 'domain-endpoint/'
region = 'region'
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

path = '_plugins/_ml/connectors/_create'
url = host + path

payload = {
    "name": "Cohere Connector: embedding",
    "description": "The connector to cohere embedding model",
    "version": 1,
    "protocol": "http",
    "credential": {
        "secretArn": "arn:aws:secretsmanager:region:account-id:secret:cohere-key-id",
        "roleArn": "arn:aws:iam::account-id:role/opensearch-secretmanager-role"
    },
    "actions": [
        {
            "action_type": "predict",
            "method": "POST",
            "url": "https://api.cohere.ai/v1/embed",
            "headers": {
                "Authorization": "Bearer ${credential.secretArn.cohere-key-used-in-secrets-manager}"
            },
            "request_body": "{ \"texts\": ${parameters.texts}, \"truncate\": \"END\" }"
        }
    ]
}

headers = {"Content-Type": "application/json"}

r = requests.post(url, auth=awsauth, json=payload, headers=headers)
print(r.status_code)
print(r.text)
```
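
After you register and deploy a remote model against this connector with the ML Commons model APIs, you can smoke-test it with a `_predict` call. The following is a sketch of the request, where `model-id` is a placeholder and the `texts` parameter name must match the placeholder used in the connector's `request_body` template:

```python
import json

model_id = "model-id"  # placeholder: returned when you register the model
predict_path = f"_plugins/_ml/models/{model_id}/_predict"

# "texts" matches ${parameters.texts} in the connector's request_body.
predict_payload = {"parameters": {"texts": ["hello world"]}}
body = json.dumps(predict_payload)
```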

# Using CloudFormation to set up remote inference for semantic search
<a name="cfn-template"></a>

Starting with OpenSearch version 2.9, you can use remote inference with [semantic search](https://opensearch.org/docs/latest/search-plugins/semantic-search/) to host your own machine learning (ML) models. Remote inference uses the [ML Commons plugin](https://opensearch.org/docs/latest/ml-commons-plugin/index/).

With remote inference, you can host your model inferences remotely on ML services, such as Amazon SageMaker AI and Amazon Bedrock, and connect them to Amazon OpenSearch Service with ML connectors. 

To ease the setup of remote inference, Amazon OpenSearch Service provides an [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) template in the console. CloudFormation is an AWS service where you can model, provision, and manage AWS and third-party resources by treating infrastructure as code. 

The OpenSearch CloudFormation template automates the model provisioning process for you, so that you can easily create a model in your OpenSearch Service domain and then use the model ID to ingest data and run neural search queries.

When you use neural sparse encoders with OpenSearch Service version 2.12 and onwards, we recommend that you use the tokenizer model locally instead of deploying remotely. For more information, see [Sparse encoding models](https://opensearch.org/docs/latest/ml-commons-plugin/pretrained-models/#sparse-encoding-models) in the OpenSearch documentation. 

**Topics**
+ [Available CloudFormation templates](#cfn-template-list)
+ [Prerequisites](#cfn-template-prereq)
+ [Amazon Bedrock templates](cfn-template-bedrock.md)
+ [Configuring Agentic Search with Bedrock Claude](cfn-template-agentic-search.md)
+ [MCP server integration templates](cfn-template-mcp-server.md)
+ [Amazon SageMaker templates](cfn-template-sm.md)
+ [Remote inference for semantic highlighting templates](#cfn-template-semantic-highlighting)

## Available CloudFormation templates
<a name="cfn-template-list"></a>

The following AWS CloudFormation machine learning (ML) templates are available for use:

[Amazon Bedrock templates](cfn-template-bedrock.md)

**Amazon Titan Text Embeddings Integration**  
Connects to Amazon Bedrock's hosted ML models, eliminates the need for separate model deployment, and uses predetermined Amazon Bedrock endpoints. For more information, see [Amazon Titan Text Embeddings](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-embedding-models.html) in the *Amazon Bedrock User Guide*.

**Cohere Embed Integration**  
Provides access to Cohere Embed models, and is optimized for specific text processing workflows. For more information, see [Embed](https://docs.cohere.com/docs/cohere-embed) on the *Cohere docs* website.

**Amazon Titan Multimodal Embeddings**  
Supports both text and image embeddings, and enables multimodal search capabilities. For more information, see [Amazon Titan Multimodal Embeddings](https://docs.aws.amazon.com/bedrock/latest/userguide/titan-multiemb-models.html) in the *Amazon Bedrock User Guide*.

[MCP server integration templates](cfn-template-mcp-server.md)

**MCP server integration**  
Deploys an [Amazon Bedrock AgentCore Runtime](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/what-is-bedrock-agentcore.html), provides an agent endpoint, handles inbound and outbound authentication, and supports OAuth for enterprise authentication.

[Amazon SageMaker templates](cfn-template-sm.md)

**Integration with text embedding models through Amazon SageMaker**  
Deploys text embedding models in Amazon SageMaker Runtime, creates IAM roles for model artifact access, and establishes ML connectors for semantic search.

**Integration with Sparse Encoders through SageMaker**  
Sets up sparse encoding models for neural search, creates AWS Lambda functions for connector management, and returns model IDs for immediate use.

## Prerequisites
<a name="cfn-template-prereq"></a>

To use a CloudFormation template with OpenSearch Service, complete the following prerequisites.

### Set up an OpenSearch Service domain
<a name="cfn-template-domain"></a>

Before you can use a CloudFormation template, you must set up an [Amazon OpenSearch Service domain](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/osis-get-started.html) with version 2.9 or later and fine-grained access control enabled. [Create an OpenSearch Service backend role](fgac.md#fgac-roles) to give the ML Commons plugin permission to create your connector for you. 

The CloudFormation template creates a Lambda IAM role for you with the default name `LambdaInvokeOpenSearchMLCommonsRole`, which you can override if you want to choose a different name. After the template creates this IAM role, you need to give the Lambda function permission to call your OpenSearch Service domain. To do so, [map the role](fgac.md#fgac-mapping) named `ml_full_access` to your OpenSearch Service backend role with the following steps:

1. Navigate to the OpenSearch Dashboards plugin for your OpenSearch Service domain. You can find the Dashboards endpoint on your domain dashboard on the OpenSearch Service console. 

1. From the main menu choose **Security**, **Roles**, and select the **ml_full_access** role.

1. Choose **Mapped users**, **Manage mapping**. 

1. Under **Backend roles**, add the ARN of the Lambda role that needs permission to call your domain.

   ```
   arn:aws:iam::account-id:role/role-name
   ```

1. Select **Map** and confirm the user or role shows up under **Mapped users**.

After you've mapped the role, navigate to the security configuration of your domain and add the Lambda IAM role to your OpenSearch Service access policy.
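
For example, an access policy statement that grants the Lambda IAM role access might look like the following. This is a sketch: the account ID, Region, and domain name are placeholders, and you should narrow the actions to what your function actually needs.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account-id:role/LambdaInvokeOpenSearchMLCommonsRole"
      },
      "Action": "es:ESHttp*",
      "Resource": "arn:aws:es:region:account-id:domain/domain-name/*"
    }
  ]
}
```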

### Enable permissions on your AWS account
<a name="cfn-template-permissions"></a>

Your AWS account must have permission to access CloudFormation and Lambda, along with whichever AWS service you choose for your template – either SageMaker Runtime or Amazon Bedrock. 

If you're using Amazon Bedrock, you must also register your model. See [Model access](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html) in the *Amazon Bedrock User Guide* to register your model. 

If you're using your own Amazon S3 bucket to provide model artifacts, you must add the CloudFormation IAM role to your S3 access policy. For more information, see [Adding and removing IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) in the *IAM User Guide*.

# Amazon Bedrock templates
<a name="cfn-template-bedrock"></a>

The Amazon Bedrock CloudFormation templates provision the AWS resources needed to create connectors between OpenSearch Service and Amazon Bedrock. 

First, the template creates an IAM role that allows the future Lambda function to access your OpenSearch Service domain. The template then creates the Lambda function, which directs the domain to create a connector using the ML Commons plugin. After OpenSearch Service creates the connector, the remote inference setup is finished and you can run semantic searches using the Amazon Bedrock API operations.

**Note**  
Since Amazon Bedrock hosts its own ML models, you don’t need to deploy a model to SageMaker Runtime. Instead, the template uses a predetermined endpoint for Amazon Bedrock and skips the endpoint provision steps.

**To use the Amazon Bedrock CloudFormation template**

1. Open the [Amazon OpenSearch Service console](https://console.aws.amazon.com/aos/home).

1. In the left navigation pane, choose **Integrations**.

1. Under **Integrate with Amazon Titan Text Embeddings model through Amazon Bedrock**, choose **Configure domain**, **Configure public domain**.

1. Follow the prompt to set up your model.

**Note**  
OpenSearch Service also provides a separate template to configure an Amazon VPC domain. If you use this template, you need to provide the Amazon VPC ID for the Lambda function.

In addition, OpenSearch Service provides the following Amazon Bedrock templates to connect to the Cohere model and the Amazon Titan Multimodal Embeddings model:
+ `Integration with Cohere Embed through Amazon Bedrock`
+ `Integrate with Amazon Bedrock Titan Multi-modal`

# Configuring Agentic Search with Bedrock Claude
<a name="cfn-template-agentic-search"></a>

Agentic search leverages autonomous agents to execute complex searches on your behalf by understanding user intent, orchestrating the right tools, generating optimized queries, and providing transparent summaries of their decisions through a natural language interface. These agents are powered by reasoning models, such as Bedrock Claude.

Follow the steps below to open and run a CloudFormation template that automatically configures Bedrock Claude models for agentic search, and then to configure and create your agents in the AI Search Flows plugin in OpenSearch Dashboards.

## Enabling Bedrock Claude Access
<a name="agentic-search-bedrock-access"></a>

1. **Prerequisite:** If your domain uses fine-grained access control, map `arn:aws:iam::your-account-id:role/LambdaInvokeOpenSearchMLCommonsRole` as a backend role to the `ml_full_access` role before running the template. This IAM role will be created automatically by CloudFormation if it doesn't already exist. For more information on how to configure the mapping, see [Map the ML role in OpenSearch Dashboards (if using fine-grained access control)](ml-external-connector.md#connector-external-fgac).

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home).

1. In the left navigation, choose **Integrations**.

1. Under **Integration with Bedrock Claude for Agentic Search**, choose **Configure domain**. Ensure your domain is on version 3.3 or greater.

1. In the CloudFormation template, enter your OpenSearch Service domain endpoint and select a model. The remaining fields are optional or pre-filled. Click **Create Stack** and wait for the provisioning to complete.

1. From the Amazon OpenSearch Service console, select **Domains**, and select your domain. Click the **OpenSearch Dashboards URL** to access OpenSearch Dashboards.

## Building agents and running Agentic Search
<a name="agentic-search-building-agents"></a>

1. From OpenSearch Dashboards, open the menu on the left-hand side. Select **OpenSearch Plugins** > **AI Search Flows** to access the plugin.

1. On the **Workflows** page, select the **New workflow** tab, and under the **Agentic Search** card, click **Create**.

1. Provide a unique name for your search configuration, and choose **Create**.

1. Under **Configure agent**, choose **Create new agent**. Select your newly created Bedrock Claude model, then choose **Create agent**. If the button is disabled, open **Advanced Settings** > **LLM Interface** and ensure a valid interface is selected. All models provisioned by the CloudFormation template are Bedrock Claude models, so select **Bedrock Claude** if it isn't already selected, then choose **Create agent**.

1. Under **Test flow**, try running agentic searches. Provide a natural language search query, and choose **Search**.

For complete documentation of the AI Search Flows plugin, see [Configuring Agentic Search](https://docs.opensearch.org/latest/vector-search/ai-search/building-agentic-search-flows/) in the OpenSearch documentation.

For more information about how Agentic Search works, see [Agentic Search](https://opensearch.org/docs/latest/vector-search/ai-search/agentic-search/) in the OpenSearch documentation.

# MCP server integration templates
<a name="cfn-template-mcp-server"></a>

With the Model Context Protocol (MCP) server templates, you can deploy an OpenSearch hosted MCP server on Amazon Bedrock AgentCore, reducing the integration complexity between AI agents and OpenSearch tools. For more information, see [What is Amazon Bedrock AgentCore?](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/what-is-bedrock-agentcore.html).

## Template features
<a name="template-mcp-server-features"></a>

This template includes the following key features for deploying and managing your MCP server.

**Managed MCP server deployment**  
Deploys **opensearch-mcp-server-py** using Amazon Bedrock AgentCore Runtime, and provides an agent endpoint that proxies requests to the underlying MCP server. For more information, see [opensearch-mcp-server-py](https://github.com/opensearch-project/opensearch-mcp-server-py) on *GitHub*.

**Authentication and security**  
Handles both inbound authentication (from users to MCP server) and outbound authentication (from MCP server to OpenSearch), and supports OAuth for enterprise authentication.

**Note**  
The MCP server template is only available in the following AWS Regions:  
US East (N. Virginia)
US West (Oregon)
Europe (Frankfurt)
Asia Pacific (Sydney)

## To use the MCP server template
<a name="template-mcp-server-procedure"></a>

Follow these steps to deploy the MCP server template and connect it to your OpenSearch domain.

1. Open the [Amazon OpenSearch Service console](https://console.aws.amazon.com//aos/home ). 

1. In the left navigation pane, choose **Integrations**.

1. Locate the **MCP server integration** template.

1. Choose **Configure domain**. Then, enter your OpenSearch domain endpoint.

The template creates an AgentCore Runtime and the following components, if the corresponding optional parameters are not specified:
+ An Amazon ECR repository
+ An Amazon Cognito user pool as the OAuth authorizer
+ An execution role used by the AgentCore Runtime

After you complete this procedure, you should follow these post-creation steps:

1. **For Amazon OpenSearch Service**: Map your execution role ARN to an OpenSearch backend role to control access to your domain.

   **For Amazon OpenSearch Serverless**: Create a data access policy that allows your execution role to access your collection.

1. Get an OAuth access token from your authorizer. Then use this token to access the MCP server at the URL listed in your CloudFormation stack output.

For more information, see [Policy actions for OpenSearch Serverless](security-iam-serverless.md#security-iam-serverless-id-based-policies-actions).
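For the OpenSearch Serverless case, the data access policy can be created with the AWS SDK for Python (Boto3). The following is a hedged sketch; the collection name and execution role ARN are hypothetical, and `aoss:*` grants broader permissions than you may want in production.

```python
import json

def data_access_policy(collection, principal_arn):
    """Build an OpenSearch Serverless data access policy document that
    lets the MCP server's execution role access the collection."""
    return [{
        "Rules": [
            {"ResourceType": "collection",
             "Resource": [f"collection/{collection}"],
             "Permission": ["aoss:*"]},
            {"ResourceType": "index",
             "Resource": [f"index/{collection}/*"],
             "Permission": ["aoss:*"]},
        ],
        "Principal": [principal_arn],
    }]

# Hypothetical names -- substitute your collection and execution role ARN.
policy = data_access_policy(
    "my-collection",
    "arn:aws:iam::123456789012:role/McpAgentCoreExecutionRole")

# Apply with Boto3:
# import boto3
# boto3.client("opensearchserverless").create_access_policy(
#     name="mcp-server-access", type="data", policy=json.dumps(policy))
print(json.dumps(policy))
```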

## Integration with AI agents
<a name="cfn-template-mcp-agent-integrations"></a>

After deployment, you can integrate the MCP server with any MCP compatible agent. For more information, see [Invoke your deployed MCP server](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/runtime-mcp.html#runtime-mcp-invoke-server) in the *Amazon Bedrock Developer Guide*. 

**Developer Integration**  
You can add the MCP server endpoint to your agent configuration. You can also use it with the Amazon Q Developer CLI, custom agents, or other MCP-compatible agents.

**Enterprise Deployment**  
Centrally hosted agents can connect to multiple services, with OpenSearch as one component. The MCP server supports OAuth and enterprise authentication systems, and scales to support multiple users and use cases.

### Example using the Strands Agents framework
<a name="strands-agent-integration-id"></a>

```
import os
import requests
from strands import Agent
from strands.tools.mcp import MCPClient
from mcp.client.streamable_http import streamablehttp_client

def get_bearer_token(discovery_url: str, client_id: str, client_secret: str):
    # Look up the token endpoint from the OAuth discovery document.
    response = requests.get(discovery_url)
    response.raise_for_status()
    token_endpoint = response.json()['token_endpoint']

    # Exchange the client credentials for a bearer access token.
    data = {
        'grant_type': 'client_credentials',
        'client_id': client_id,
        'client_secret': client_secret
    }
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded'
    }

    response = requests.post(token_endpoint, data=data, headers=headers)
    response.raise_for_status()
    return response.json()['access_token']

if __name__ == "__main__":
    discovery_url = os.environ["DISCOVERY_URL"]
    client_id = os.environ["CLIENT_ID"]
    client_secret = os.environ["CLIENT_SECRET"]
    mcp_url = os.environ["MCP_URL"]

    bearer_token = get_bearer_token(discovery_url, client_id, client_secret)

    opensearch_mcp_client = MCPClient(lambda: streamablehttp_client(mcp_url, {
        "authorization": f"Bearer {bearer_token}",
        "Content-Type": "application/json"
    }))

    with opensearch_mcp_client:
        tools = opensearch_mcp_client.list_tools_sync()
        agent = Agent(tools=tools)
        agent("list indices")
```

For more information, see [Hosting OpenSearch MCP Server with Amazon Bedrock AgentCore](https://opensearch.org/blog/hosting-opensearch-mcp-server-with-amazon-bedrock-agentcore/) on the *OpenSearch website*.

# Amazon SageMaker templates
<a name="cfn-template-sm"></a>

The Amazon SageMaker CloudFormation templates define multiple AWS resources in order to set up the neural plugin and semantic search for you. 

Begin by using the **Integration with text embedding models through Amazon SageMaker** template to deploy a text embedding model to a SageMaker Runtime endpoint. If you don't provide a model endpoint, CloudFormation creates an IAM role that allows SageMaker Runtime to download model artifacts from Amazon S3 and deploy them to the endpoint. If you provide an endpoint, CloudFormation creates an IAM role that allows the Lambda function to access the OpenSearch Service domain or, if the role already exists, updates and reuses the role. The endpoint serves the remote model that the ML connector uses with the ML Commons plugin. 

Then, use the **Integration with Sparse Encoders through Amazon SageMaker** template to create a Lambda function that sets up remote inference connectors for your domain. After the connector is created in OpenSearch Service, remote inference can run semantic search using the model hosted in SageMaker Runtime. The template returns the model ID in your domain so you can start searching.
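Once the template returns the model ID, you can reference it in a `neural` query clause to run semantic search. The following is a minimal sketch of the request body; the index, vector field, and model ID are hypothetical.

```python
import json

def neural_query(field, query_text, model_id, k=5):
    """Build a neural search request body that embeds query_text with the
    remote model and retrieves the k nearest neighbors."""
    return {"query": {"neural": {field: {
        "query_text": query_text,
        "model_id": model_id,
        "k": k,
    }}}}

# Hypothetical values -- use the vector field and model ID from your setup.
body = neural_query("passage_embedding", "wild west", "your-model-id")

# Send to the search endpoint of your k-NN index, for example:
# requests.get("https://your-domain-endpoint/my-nlp-index/_search",
#              json=body, auth=awsauth)
print(json.dumps(body))
```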

**To use the Amazon SageMaker AI CloudFormation templates**

1. Open the [Amazon OpenSearch Service console](https://console.aws.amazon.com//aos/home ).

1. In the left navigation pane, choose **Integrations**.

1. Under each of the Amazon SageMaker AI templates, choose **Configure domain**, then **Configure public domain**.

1. Follow the prompt in the CloudFormation console to provision your stack and set up a model.

**Note**  
OpenSearch Service also provides a separate template to configure a VPC domain. If you use this template, you need to provide the VPC ID for the Lambda function.

## Remote inference for semantic highlighting templates
<a name="cfn-template-semantic-highlighting"></a>

Semantic highlighting is an advanced search feature that enhances result relevance by analyzing the meaning and context of queries rather than relying solely on exact keyword matches. It uses machine learning models to evaluate semantic similarity between search queries and document content, identifying and highlighting the most contextually relevant sentences or passages within documents. Unlike traditional highlighting methods that focus on exact term matches, semantic highlighting assesses each sentence using contextual information from both the query and the surrounding text, so it can surface pertinent information even when the exact search terms aren't present in the highlighted passages. This approach is particularly valuable for AI-driven search implementations where users prioritize semantic meaning over literal word matching, because it highlights meaningful content spans rather than just keyword occurrences. For more information, see [Using semantic highlighting](https://docs.opensearch.org/latest/tutorials/vector-search/semantic-highlighting-tutorial/).
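After the template deploys the highlighting model, a search request enables semantic highlighting by setting the highlighter type to `semantic` and passing the model ID in the highlight options. The following is a hedged sketch of the request shape; the field name and model ID are placeholders.

```python
import json

def semantic_highlight_search(query_body, field, highlight_model_id):
    """Attach a semantic highlighter to an existing search request so the
    most contextually relevant sentences in `field` are highlighted."""
    return {
        **query_body,
        "highlight": {
            "fields": {field: {"type": "semantic"}},
            "options": {"model_id": highlight_model_id},
        },
    }

# Hypothetical match query and model ID from the CloudFormation stack output.
request = semantic_highlight_search(
    {"query": {"match": {"text": "treatments for neurodegenerative diseases"}}},
    "text",
    "your-highlight-model-id",
)
# requests.get("https://your-domain-endpoint/my-index/_search",
#              json=request, auth=awsauth)
print(json.dumps(request))
```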

Use the following procedure to open and run a CloudFormation template that automatically configures Amazon SageMaker models for semantic highlighting.

**To use the semantic highlighting CloudFormation template**

1. Open the Amazon OpenSearch Service console at [https://console.aws.amazon.com/aos/home](https://console.aws.amazon.com/aos/home ).

1. In the left navigation, choose **Integrations**.

1. Under **Enable Semantic Highlighting through Amazon SageMaker integration**, choose **Configure domain**, then **Configure public domain**.

1. Follow the prompt to set up your model.

**Note**  
OpenSearch Service also provides a separate template to configure a VPC domain. If you use this template, you need to provide the VPC ID for the Lambda function.

## Unsupported ML Commons settings
<a name="sm"></a>

Amazon OpenSearch Service doesn't support use of the following ML Commons settings: 
+ `plugins.ml_commons.allow_registering_model_via_url`
+ `plugins.ml_commons.allow_registering_model_via_local_file`

**Important**  
On *production clusters*, do not disable the cluster setting `plugins.ml_commons.only_run_on_ml_node` (that is, don't set it to `false`). The option to disable this safeguard exists to facilitate development; production clusters should use the connectors instead. For more information, see [Amazon OpenSearch Service ML connectors for AWS services](ml-amazon-connector.md).

For more information on ML Commons settings, see [ML Commons cluster settings](https://opensearch.org/docs/latest/ml-commons-plugin/cluster-settings/).

# OpenSearch Service flow framework templates
<a name="ml-workflow-framework"></a>

Amazon OpenSearch Service flow framework templates allow you to automate complex OpenSearch Service setup and preprocessing tasks by providing templates for common use cases. For example, you can use flow framework templates to automate machine learning setup tasks. Amazon OpenSearch Service flow framework templates provide a compact description of the setup process in a JSON or YAML document. These templates describe automated workflow configurations for conversational chat or query generation, AI connectors, tools, agents, and other components that prepare OpenSearch Service for backend use for generative models.

Amazon OpenSearch Service flow framework templates can be customized to meet your specific needs. For an example of a custom flow framework template, see [flow-framework](https://github.com/opensearch-project/flow-framework/blob/main/sample-templates/deploy-bedrock-claude-model.json). For templates provided by OpenSearch Service, see [workflow-templates](https://opensearch.org/docs/2.13/automating-configurations/workflow-templates/). For comprehensive documentation, including detailed steps, an API reference, and a reference of all available settings, see [Automating configurations](https://opensearch.org/docs/latest/automating-configurations/index/) in the open source OpenSearch documentation.

**Note**  
Flow-framework does not support backend role filtering for OpenSearch Service 2.17.

# Creating ML connectors in OpenSearch Service
<a name="ml-create"></a>

Amazon OpenSearch Service flow framework templates allow you to configure and install ML connectors by using the create connector API offered in ml-commons. You can use ML connectors to connect OpenSearch Service to other AWS services or third-party platforms. For more information, see [Creating connectors for third-party ML platforms](https://opensearch.org/docs/2.13/ml-commons-plugin/remote-models/connectors/). The Amazon OpenSearch Service flow framework API allows you to automate OpenSearch Service setup and preprocessing tasks and can be used to create ML connectors. 
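For reference, the ml-commons create connector API (`POST /_plugins/_ml/connectors/_create`) takes a body with the same connector fields that the flow framework wraps. The following is a minimal sketch; the endpoint name, role ARN, and request body format are placeholders that depend on your model's contract.

```python
import json

def connector_request(name, endpoint, role_arn):
    """Build a minimal ml-commons Create Connector request body
    (POST /_plugins/_ml/connectors/_create)."""
    return {
        "name": name,
        "description": "Connector to a SageMaker AI endpoint",
        "version": "1",
        "protocol": "aws_sigv4",
        "credential": {"roleArn": role_arn},
        "parameters": {"region": "us-west-2", "service_name": "sagemaker"},
        "actions": [{
            "action_type": "predict",
            "method": "POST",
            "headers": {"content-type": "application/json"},
            # Placeholder endpoint URL and pass-through body.
            "url": f"https://{endpoint}/endpoints/my-endpoint/invocations",
            "request_body": "${parameters.input}",
        }],
    }

# Hypothetical values -- substitute your runtime endpoint and IAM role.
body = connector_request(
    "sagemaker-embedding-connector",
    "runtime.sagemaker.us-west-2.amazonaws.com",
    "arn:aws:iam::123456789012:role/opensearch-sagemaker-role",
)
# requests.post(host + "_plugins/_ml/connectors/_create", json=body, auth=awsauth)
print(json.dumps(body)[:60])
```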

Before you can create a connector in OpenSearch Service, you must do the following:
+ Create an Amazon SageMaker AI domain.
+ Create an IAM role.
+ Configure pass role permission.
+ Map the flow-framework and ml-commons roles in OpenSearch Dashboards.

For more information on how to set up ML connectors for AWS services, see [Amazon OpenSearch Service ML connectors for AWS services](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ml-amazon-connector.html#connector-sagemaker-prereq). To learn more about using OpenSearch Service ML connectors with third-party platforms, see [Amazon OpenSearch Service ML connectors for third-party platforms](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ml-external-connector.html).

## Creating a connector through a flow-framework service
<a name="ml-workflow"></a>

To create a flow-framework template with a connector, send a `POST` request to your OpenSearch Service domain endpoint. You can use cURL, the sample Python client, Postman, or another method to send a signed request. The `POST` request takes the following format:

```
POST /_plugins/_flow_framework/workflow 
{
  "name": "Deploy Claude Model",
  "description": "Deploy a model using a connector to Claude",
  "use_case": "PROVISION",
  "version": {
    "template": "1.0.0",
    "compatibility": [
      "2.12.0",
      "3.0.0"
    ]
  },
  "workflows": {
    "provision": {
      "nodes": [
        {
          "id": "create_claude_connector",
          "type": "create_connector",
          "user_inputs": {
            "name": "Claude Instant Runtime Connector",
            "version": "1",
            "protocol": "aws_sigv4",
            "description": "The connector to Bedrock service for Claude model",
            "actions": [
              {
                "headers": {
                  "x-amz-content-sha256": "required",
                  "content-type": "application/json"
                },
                "method": "POST",
                "request_body": "{ \"prompt\":\"${parameters.prompt}\", \"max_tokens_to_sample\":${parameters.max_tokens_to_sample}, \"temperature\":${parameters.temperature},  \"anthropic_version\":\"${parameters.anthropic_version}\" }",
                "action_type": "predict",
                "url": "https://bedrock-runtime.us-west-2.amazonaws.com/model/anthropic.claude-instant-v1/invoke"
              }
            ],
            "credential": {
                "roleArn": "arn:aws:iam::account-id:role/opensearch-secretmanager-role" 
             },
            "parameters": {
              "endpoint": "bedrock-runtime.us-west-2.amazonaws.com",
              "content_type": "application/json",
              "auth": "Sig_V4",
              "max_tokens_to_sample": "8000",
              "service_name": "bedrock",
              "temperature": "0.0001",
              "response_filter": "$.completion",
              "region": "us-west-2",
              "anthropic_version": "bedrock-2023-05-31"
            }
          }
        }
      ]
    }
  }
}
```

If your domain resides within a virtual private cloud (Amazon VPC), you must be connected to the Amazon VPC for the request to successfully create the AI connector. Accessing an Amazon VPC varies by network configuration, but usually involves connecting to a VPN or corporate network. To check that you can reach your OpenSearch Service domain, navigate to `https://your-vpc-domain.region.es.amazonaws.com` in a web browser and verify that you receive the default JSON response. (Replace the *placeholder text* with your own values.)
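The reachability check can also be scripted instead of using a browser. The following is a small sketch that recognizes the shape of the default JSON response; the endpoint in the comment is a placeholder.

```python
import json

def looks_like_opensearch(body_text):
    """Return True if the response body resembles the default OpenSearch
    JSON response (a cluster name plus a version block)."""
    try:
        body = json.loads(body_text)
    except ValueError:
        return False
    return "cluster_name" in body and "version" in body

# From inside the VPC (replace the placeholder with your domain endpoint):
# r = requests.get("https://your-vpc-domain.region.es.amazonaws.com")
# print(looks_like_opensearch(r.text))
print(looks_like_opensearch('{"cluster_name": "demo", "version": {"number": "2.13.0"}}'))
```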

### Sample Python client
<a name="ml-python-sample"></a>

The Python client is simpler to automate than a raw HTTP request and offers better reusability. To create the AI connector with the Python client, save the following sample code to a Python file. The client requires the [AWS SDK for Python (Boto3)](https://aws.amazon.com/sdk-for-python/), [Requests: HTTP for Humans](https://pypi.org/project/requests/), and [requests-aws4auth 1.2.3](https://pypi.org/project/requests-aws4auth/) packages.

```
import boto3
import requests 
from requests_aws4auth import AWS4Auth

host = 'https://domain-endpoint/'  # include the scheme and trailing slash
region = 'region'                  # for example, us-west-2
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

path = '_plugins/_flow_framework/workflow'
url = host + path

payload = {
  "name": "Deploy Claude Model",
  "description": "Deploy a model using a connector to Claude",
  "use_case": "PROVISION",
  "version": {
    "template": "1.0.0",
    "compatibility": [
      "2.12.0",
      "3.0.0"
    ]
  },
  "workflows": {
    "provision": {
      "nodes": [
        {
          "id": "create_claude_connector",
          "type": "create_connector",
          "user_inputs": {
            "name": "Claude Instant Runtime Connector",
            "version": "1",
            "protocol": "aws_sigv4",
            "description": "The connector to Bedrock service for Claude model",
            "actions": [
              {
                "headers": {
                  "x-amz-content-sha256": "required",
                  "content-type": "application/json"
                },
                "method": "POST",
                "request_body": "{ \"prompt\":\"${parameters.prompt}\", \"max_tokens_to_sample\":${parameters.max_tokens_to_sample}, \"temperature\":${parameters.temperature},  \"anthropic_version\":\"${parameters.anthropic_version}\" }",
                "action_type": "predict",
                "url": "https://bedrock-runtime.us-west-2.amazonaws.com/model/anthropic.claude-instant-v1/invoke"
              }
            ],
            "credential": {
                "roleArn": "arn:aws:iam::account-id:role/opensearch-secretmanager-role" 
             },
            "parameters": {
              "endpoint": "bedrock-runtime.us-west-2.amazonaws.com",
              "content_type": "application/json",
              "auth": "Sig_V4",
              "max_tokens_to_sample": "8000",
              "service_name": "bedrock",
              "temperature": "0.0001",
              "response_filter": "$.completion",
              "region": "us-west-2",
              "anthropic_version": "bedrock-2023-05-31"
            }
          }
        }
      ]
    }
  }
}

headers = {"Content-Type": "application/json"}

r = requests.post(url, auth=awsauth, json=payload, headers=headers)
print(r.status_code)
print(r.text)
```
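The create call returns a `workflow_id`. With the flow framework API you can then provision the workflow (`POST .../_provision`) and poll its status (`GET .../_status`). The following is a hedged sketch extending the client above; the workflow ID shown is hypothetical.

```python
def workflow_urls(host, workflow_id):
    """Build the provision and status URLs for a created workflow."""
    base = host + "_plugins/_flow_framework/workflow/" + workflow_id
    return base + "/_provision", base + "/_status"

# After the create call above returns a workflow ID:
# workflow_id = r.json()["workflow_id"]
provision_url, status_url = workflow_urls("domain-endpoint/", "8xL8bowB8y25Tqfenm50")

# requests.post(provision_url, auth=awsauth)
# requests.get(status_url, auth=awsauth).json()["state"]
print(status_url)
```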

#### Pre-defined workflow templates
<a name="ml-predefined"></a>

Amazon OpenSearch Service provides several workflow templates for some common machine learning (ML) use cases. Using a template simplifies complex setups and provides many default values for use cases like semantic or conversational search. You can specify a workflow template when you call the Create Workflow API.
+ To use an OpenSearch Service provided workflow template, specify the template use case as the `use_case` query parameter. 
+ To use a custom workflow template, provide the complete template in the request body. For examples of custom templates in JSON and YAML format, see the [sample templates](https://github.com/opensearch-project/flow-framework/tree/main/sample-templates) in the flow-framework repository.
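For example, creating a workflow from a provided template is a single request in which the `use_case` query parameter names the template and the body supplies the required parameters. The following is a minimal sketch; the role ARN is a placeholder.

```python
def create_workflow_request(host, use_case, defaults):
    """Build the URL and body for creating a workflow from an
    OpenSearch-provided template via the use_case query parameter."""
    url = host + "_plugins/_flow_framework/workflow?use_case=" + use_case
    return url, defaults

# Placeholder role ARN; the body keys override the template's defaults.
url, body = create_workflow_request(
    "domain-endpoint/",
    "bedrock_titan_embedding_model_deploy",
    {"create_connector.credential.roleArn":
         "arn:aws:iam::123456789012:role/opensearch-bedrock-role"},
)
# requests.post(url, auth=awsauth, json=body)
print(url)
```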

#### Template Use Cases
<a name="templates"></a>

This table provides an overview of the different templates available, a description of the templates, and the required parameters.


| Template use case | Description | Required Parameters | 
| --- | --- | --- | 
| `bedrock_titan_embedding_model_deploy` | Creates and deploys an Amazon Bedrock embedding model (by default, `titan-embed-text-v1`). | `create_connector.credential.roleArn` | 
| `bedrock_titan_multimodal_model_deploy` | Creates and deploys an Amazon Bedrock multimodal embedding model (by default, `titan-embed-image-v1`). | `create_connector.credential.roleArn` | 
| `cohere_embedding_model_deploy` | Creates and deploys a Cohere embedding model (by default, embed-english-v3.0). | `create_connector.credential.roleArn`, `create_connector.credential.secretArn` | 
| `cohere_chat_model_deploy` | Creates and deploys a Cohere chat model (by default, Cohere Command). | `create_connector.credential.roleArn`, `create_connector.credential.secretArn` | 
| `open_ai_embedding_model_deploy` | Creates and deploys an OpenAI embedding model (by default, text-embedding-ada-002). | `create_connector.credential.roleArn`, `create_connector.credential.secretArn` | 
| `openai_chat_model_deploy` | Creates and deploys an OpenAI chat model (by default, gpt-3.5-turbo). | `create_connector.credential.roleArn`, `create_connector.credential.secretArn` | 
| `semantic_search_with_cohere_embedding` | Configures semantic search and deploys a Cohere embedding model. You must provide the API key for the Cohere model. | `create_connector.credential.roleArn`, `create_connector.credential.secretArn` | 
| `semantic_search_with_cohere_embedding_query_enricher` | Configures semantic search and deploys a Cohere embedding model. Adds a `query_enricher` search processor that sets a default model ID for neural queries. You must provide the API key for the Cohere model. | `create_connector.credential.roleArn`, `create_connector.credential.secretArn` | 
| `multimodal_search_with_bedrock_titan` | Deploys an Amazon Bedrock multimodal model and configures an ingestion pipeline with a `text_image_embedding` processor and a k-NN index for multimodal search. You must provide your AWS credentials. | `create_connector.credential.roleArn` | 

**Note**  
For all templates that require a secret ARN, the default is to store the secret with a key name of "key" in AWS Secrets Manager.

## Default templates with pretrained models
<a name="ml-pretrained-default"></a>

Amazon OpenSearch Service offers two additional default workflow templates that aren't available in open source OpenSearch.


| Template use case | Description | 
| --- | --- | 
| `semantic_search_with_local_model` | Configures [semantic search](https://opensearch.org/docs/2.14/search-plugins/semantic-search/) and deploys a pretrained model (`msmarco-distilbert-base-tas-b`). Adds a [neural_query_enricher](https://opensearch.org/docs/2.14/search-plugins/search-pipelines/neural-query-enricher/) search processor that sets a default model ID for neural queries and creates a linked k-NN index called `my-nlp-index`. | 
| `hybrid_search_with_local_model` | Configures [hybrid search](https://opensearch.org/docs/2.14/search-plugins/hybrid-search/) and deploys a pretrained model (`msmarco-distilbert-base-tas-b`). Adds a [neural_query_enricher](https://opensearch.org/docs/2.14/search-plugins/search-pipelines/neural-query-enricher/) search processor that sets a default model ID for neural queries and creates a linked k-NN index called `my-nlp-index`. | 

# Configure permissions
<a name="flow-framework-permissions"></a>

If you create a new domain with version 2.13 or later, permissions are already in place. If you enable the flow framework on a preexisting OpenSearch Service domain with version 2.11 or earlier that you then upgrade to version 2.13 or later, you must define the `flow_framework_manager` role. Non-admin users must be mapped to this role in order to use the flow framework on domains using fine-grained access control. To manually create the `flow_framework_manager` role, perform the following steps:

1. In OpenSearch Dashboards, go to **Security** and choose **Permissions**.

1. Choose **Create action group** and configure the following groups:     
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/opensearch-service/latest/developerguide/flow-framework-permissions.html)

1. Choose **Roles** and **Create role**.

1. Name the role **flow\_framework\_manager**.

1. For **Cluster permissions,** select `flow_framework_full_access` and `flow_framework_read_access`.

1. For **Index**, type `*`.

1. For **Index permissions**, select `indices:admin/aliases/get`, `indices:admin/mappings/get`, and `indices_monitor`.

1. Choose **Create**.

1. After you create the role, [map it](fgac.md#fgac-mapping) to any user or backend role that will manage flow framework indexes.
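The Dashboards steps above can also be expressed as Security REST API request bodies. The following is a hedged sketch; the user name is a placeholder, and the `PUT` calls replace any existing role or mapping of the same name.

```python
import json

# The role from steps 4-7, expressed as a Security API request body.
role_body = {
    "cluster_permissions": [
        "flow_framework_full_access",
        "flow_framework_read_access",
    ],
    "index_permissions": [{
        "index_patterns": ["*"],
        "allowed_actions": [
            "indices:admin/aliases/get",
            "indices:admin/mappings/get",
            "indices_monitor",
        ],
    }],
}
# The mapping from step 9; "a-user" is a placeholder (or use "backend_roles").
mapping_body = {"users": ["a-user"]}

# Apply with an admin-authenticated HTTP client:
# requests.put(host + "_plugins/_security/api/roles/flow_framework_manager",
#              auth=admin_auth, json=role_body)
# requests.put(host + "_plugins/_security/api/rolesmapping/flow_framework_manager",
#              auth=admin_auth, json=mapping_body)
print(json.dumps(role_body["cluster_permissions"]))
```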