

# Amazon MSK multi-VPC private connectivity in a single Region
<a name="aws-access-mult-vpc"></a>

Multi-VPC private connectivity (powered by [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html)) for Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters is a feature that enables you to quickly connect Kafka clients hosted in different Virtual Private Clouds (VPCs) and AWS accounts to an Amazon MSK cluster.

Multi-VPC private connectivity is a managed solution that simplifies the networking infrastructure for multi-VPC and cross-account connectivity. Clients can connect to the Amazon MSK cluster over PrivateLink while keeping all traffic within the AWS network. Multi-VPC private connectivity for Amazon MSK clusters is available in all AWS Regions where Amazon MSK is available.

**Topics**
+ [What is multi-VPC private connectivity?](#mvpc-what-is)
+ [Benefits of multi-VPC private connectivity](#mvpc-benefits)
+ [Requirements and limitations for multi-VPC private connectivity](#mvpc-requirements)
+ [Get started using multi-VPC private connectivity](mvpc-getting-started.md)
+ [Update the authorization schemes on a cluster](mvpc-cross-account-update-authschemes.md)
+ [Reject a managed VPC connection to an Amazon MSK cluster](mvpc-cross-account-reject-connection.md)
+ [Delete a managed VPC connection to an Amazon MSK cluster](mvpc-cross-account-delete-connection.md)
+ [Permissions for multi-VPC private connectivity](mvpc-cross-account-permissions.md)

## What is multi-VPC private connectivity?
<a name="mvpc-what-is"></a>

Multi-VPC private connectivity for Amazon MSK is a connectivity option that enables you to connect Apache Kafka clients that are hosted in different Virtual Private Clouds (VPCs) and AWS accounts to an MSK cluster.

Amazon MSK simplifies cross-account access with [cluster policies](mvpc-cluster-owner-action-policy.md). These policies allow the cluster owner to grant permissions for other AWS accounts to establish private connectivity to the MSK cluster.

## Benefits of multi-VPC private connectivity
<a name="mvpc-benefits"></a>

Multi-VPC private connectivity has several advantages over [other connectivity solutions](https://docs.aws.amazon.com/msk/latest/developerguide/aws-access.html):
+ It automates operational management of the AWS PrivateLink connectivity solution.
+ It allows overlapping IPs across connecting VPCs, eliminating the need to maintain non-overlapping IPs, complex peering, and routing tables associated with other VPC connectivity solutions.

You use a cluster policy for your MSK cluster to define which AWS accounts have permissions to set up cross-account private connectivity to your MSK cluster. The cross-account admin can delegate permissions to appropriate roles or users. When used with IAM client authentication, you can also use the cluster policy to define Kafka data plane permissions on a granular basis for the connecting clients.

## Requirements and limitations for multi-VPC private connectivity
<a name="mvpc-requirements"></a>

Note these MSK cluster requirements for running multi-VPC private connectivity:
+ Multi-VPC private connectivity is supported only on Apache Kafka 2.7.1 or higher. Make sure that any clients that you use with the MSK cluster are running Apache Kafka versions that are compatible with the cluster.
+ Multi-VPC private connectivity supports the IAM, TLS, and SASL/SCRAM auth types. Unauthenticated clusters can't use multi-VPC private connectivity.
+ If you are using the SASL/SCRAM or mTLS access-control methods, you must set Apache Kafka ACLs for your cluster. First, set the Apache Kafka ACLs for your cluster. Then, update the cluster's configuration to set the property `allow.everyone.if.no.acl.found` to false for the cluster. For information about how to update the configuration of a cluster, see [Broker configuration operations](msk-configuration-operations.md). If you are using IAM access control and want to apply or update authorization policies, see [IAM access control](iam-access-control.md). For information about Apache Kafka ACLs, see [Apache Kafka ACLs](msk-acls.md).
+ Multi-VPC private connectivity doesn’t support the t3.small instance type.
+ Multi-VPC private connectivity isn’t supported across AWS Regions; it is supported only between AWS accounts within the same Region.
+ To set up multi-VPC private connectivity, you must have the same number of client subnets as cluster subnets. You must also make sure that the [Availability Zone IDs](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html) are the same for the client subnets and the cluster subnets.
+ Amazon MSK doesn't support multi-VPC private connectivity to Zookeeper nodes.
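
Because the client and cluster subnets must align by Availability Zone ID, it can help to confirm the IDs before you begin. The following AWS CLI sketch lists the zone ID of each client subnet and the subnets used by the cluster; the subnet IDs are placeholders, and the cluster ARN is the placeholder value used in the examples in this guide:

```
# List the Availability Zone ID of each client subnet (replace the subnet IDs).
aws ec2 describe-subnets \
  --subnet-ids subnet-0123456789abcdef0 subnet-0abcdef1234567890 \
  --query "Subnets[].[SubnetId,AvailabilityZoneId]" \
  --output table

# List the subnets of the MSK cluster (replace the cluster ARN), then check
# their zone IDs with the same describe-subnets command.
aws kafka describe-cluster \
  --cluster-arn "arn:aws:kafka:us-east-1:111122223333:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2" \
  --query "ClusterInfo.BrokerNodeGroupInfo.ClientSubnets"
```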

# Get started using multi-VPC private connectivity
<a name="mvpc-getting-started"></a>

**Topics**
+ [Step 1: On the MSK cluster in Account A, turn on multi-VPC connectivity for IAM auth scheme on the cluster](mvpc-cluster-owner-action-turn-on.md)
+ [Step 2: Attach a cluster policy to the MSK cluster](mvpc-cluster-owner-action-policy.md)
+ [Step 3: Cross-account user actions to configure client-managed VPC connections](mvpc-cross-account-user-action.md)

This tutorial uses a common use case as an example of how you can use multi-VPC connectivity to privately connect an Apache Kafka client to an MSK cluster from inside AWS, but outside the cluster's VPC. This process requires the cross-account user to create an MSK managed VPC connection and configuration for each client, including the required client permissions. The process also requires the MSK cluster owner to turn on PrivateLink connectivity on the MSK cluster and select authentication schemes to control access to the cluster.

In different parts of this tutorial, we choose options that apply to this example. This doesn't mean that they're the only options that work for setting up an MSK cluster or client instances.

The network configuration for this use case is as follows:
+ A cross-account user (Kafka client) and an MSK cluster are in the same AWS network/Region, but in different accounts:
  + MSK cluster in Account A
  + Kafka client in Account B
+ The cross-account user will connect privately to the MSK cluster using IAM auth scheme.

This tutorial assumes that there is a provisioned MSK cluster created with Apache Kafka version 2.7.1 or higher. The MSK cluster must be in an ACTIVE state before beginning the configuration process. To avoid potential data loss or downtime, clients that will use multi-VPC private connection to connect to the cluster should use Apache Kafka versions that are compatible with the cluster.

The following diagram illustrates the architecture of Amazon MSK multi-VPC connectivity connected to a client in a different AWS account.

![\[Multi-vpc network diagram in a single Region\]](http://docs.aws.amazon.com/msk/latest/developerguide/images/mvpc-network.png)


# Step 1: On the MSK cluster in Account A, turn on multi-VPC connectivity for IAM auth scheme on the cluster
<a name="mvpc-cluster-owner-action-turn-on"></a>

The MSK cluster owner needs to configure the MSK cluster after the cluster is created and in the ACTIVE state.

The cluster owner turns on multi-VPC private connectivity on the ACTIVE cluster for any auth schemes that will be active on the cluster. This can be done using the [UpdateSecurity API](https://docs.aws.amazon.com/msk/1.0/apireference/clusters-clusterarn-security.html) or MSK console. The IAM, SASL/SCRAM, and TLS auth schemes support multi-VPC private connectivity. Multi-VPC private connectivity can’t be enabled for unauthenticated clusters.

For this use case, you’ll configure the cluster to use the IAM auth scheme.

**Note**  
If you are configuring your MSK cluster to use the SASL/SCRAM auth scheme, the Apache Kafka ACLs property `allow.everyone.if.no.acl.found=false` is mandatory. See [Apache Kafka ACLs](https://docs.aws.amazon.com/msk/latest/developerguide/msk-acls.html).

When you update multi-VPC private connectivity settings, Amazon MSK starts a rolling reboot of broker nodes that updates the broker configurations. This can take 30 minutes or more to complete. You can’t make other updates to the cluster while connectivity is being updated.

**Turn on multi-VPC for selected auth schemes on the cluster in Account A using the console**

1. Open the Amazon MSK console at [https://console.aws.amazon.com/msk/](https://console.aws.amazon.com/msk/) for the account where the cluster is located.

1. In the navigation pane, under **MSK Clusters**, choose **Clusters** to display the list of clusters in the account.

1. Select the cluster to configure for multi-VPC private connectivity. The cluster must be in an ACTIVE state.

1. Select the cluster **Properties** tab, and then go to **Network** settings.

1. Select the **Edit** drop down menu and select **Turn on multi-VPC connectivity**.

1. Select one or more authentication types you want turned on for this cluster. For this use case, select **IAM role-based authentication**.

1. Select **Save changes**.

**Example - UpdateConnectivity API that turns on Multi-VPC private connectivity auth schemes on a cluster**  
As an alternative to the MSK console, you can use the [UpdateConnectivity API](https://docs.aws.amazon.com/msk/1.0/apireference/clusters-clusterarn-connectivity.html) to turn on multi-VPC private connectivity and configure auth schemes on an ACTIVE cluster. The following example shows the IAM auth scheme turned on for the cluster.  

```
{
  "currentVersion": "K3T4TT2Z381HKD",
  "connectivityInfo": {
    "vpcConnectivity": {
      "clientAuthentication": {
        "sasl": {
          "iam": {
            "enabled": true
          }
        }
      }
    }
  }
}
```

Amazon MSK creates the networking infrastructure required for private connectivity. Amazon MSK also creates a new set of bootstrap broker endpoints for each auth type that requires private connectivity. Note that the plaintext auth scheme does not support multi-VPC private connectivity.
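
As a sketch, the same change can be made with the AWS CLI; the cluster ARN and `currentVersion` value below are the placeholder values from the example above:

```
# Turn on multi-VPC private connectivity for the IAM auth scheme.
# The cluster ARN and current version are placeholders; get the real values
# from describe-cluster-v2 before running the command.
aws kafka update-connectivity \
  --cluster-arn "arn:aws:kafka:us-east-1:111122223333:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2" \
  --current-version "K3T4TT2Z381HKD" \
  --connectivity-info '{"vpcConnectivity":{"clientAuthentication":{"sasl":{"iam":{"enabled":true}}}}}'
```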

# Step 2: Attach a cluster policy to the MSK cluster
<a name="mvpc-cluster-owner-action-policy"></a>

The cluster owner can attach a cluster policy (also known as a [resource-based policy](https://docs.aws.amazon.com/msk/latest/developerguide/security_iam_service-with-iam.html#security_iam_service-with-iam-resource-based-policies)) to the MSK cluster where you will turn on multi-VPC private connectivity. The cluster policy gives the clients permission to access the cluster from another account. Before you can edit the cluster policy, you need the account ID(s) for the accounts that should have permission to access the MSK cluster. See [How Amazon MSK works with IAM](https://docs.aws.amazon.com/msk/latest/developerguide/security_iam_service-with-iam.html).

The cluster owner must attach a cluster policy to the MSK cluster in Account A that authorizes the cross-account user in Account B to perform the following actions on the cluster:
+ CreateVpcConnection
+ GetBootstrapBrokers
+ DescribeCluster
+ DescribeClusterV2

**Example**  
For reference, the following is an example of the JSON for a basic cluster policy, similar to the default policy shown in the MSK console IAM policy editor. It grants permissions for cluster, topic, and group-level access.

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "123456789012"
      },
      "Action": [
        "kafka:CreateVpcConnection",
        "kafka:GetBootstrapBrokers",
        "kafka:DescribeCluster",
        "kafka:DescribeClusterV2",
        "kafka-cluster:*"
      ],
      "Resource": "arn:aws:kafka:us-east-1:111122223333:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "123456789012"
      },
      "Action": "kafka-cluster:*",
      "Resource": "arn:aws:kafka:us-east-1:111122223333:topic/testing/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "123456789012"
      },
      "Action": "kafka-cluster:*",
      "Resource": "arn:aws:kafka:us-east-1:111122223333:group/testing/*"
    }
  ]
}
```

**Attach a cluster policy to the MSK cluster**

1. In the Amazon MSK console, under **MSK Clusters**, choose **Clusters**.

1. Scroll down to **Security settings** and select **Edit cluster policy**.

1. In the console, on the **Edit Cluster Policy** screen, select **Basic policy for multi-VPC connectivity**.

1. In the **Account ID** field, enter the account ID for each account that should have permission to access this cluster. As you type the ID, it is automatically copied into the displayed policy JSON syntax. In our example cluster policy, the account ID is *123456789012*.

1. Select **Save changes**.

For information about cluster policy APIs, see [Amazon MSK resource-based policies](https://docs.aws.amazon.com/msk/latest/developerguide/security_iam_service-with-iam.html#security_iam_service-with-iam-resource-based-policies).
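
As an alternative to the console, a cluster policy can be attached with the AWS CLI. The following sketch assumes the example policy above is saved as `cluster-policy.json` (a hypothetical file name) and uses the placeholder cluster ARN from the example:

```
# Attach (create or update) the cluster policy.
aws kafka put-cluster-policy \
  --cluster-arn "arn:aws:kafka:us-east-1:111122223333:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2" \
  --policy file://cluster-policy.json

# Confirm the policy that is now attached to the cluster.
aws kafka get-cluster-policy \
  --cluster-arn "arn:aws:kafka:us-east-1:111122223333:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2"
```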

# Step 3: Cross-account user actions to configure client-managed VPC connections
<a name="mvpc-cross-account-user-action"></a>

To set up multi-VPC private connectivity between a client in a different account from the MSK cluster, the cross-account user creates a managed VPC connection for the client. Multiple clients can be connected to the MSK cluster by repeating this procedure. For the purposes of this use case, you’ll configure just one client.

Clients can use the supported auth schemes IAM, SASL/SCRAM, or TLS. Each managed VPC connection can have only one auth scheme associated with it. The client auth scheme must be configured on the MSK cluster where the client will connect.

 For this use case, configure the client auth scheme so that the client in Account B uses the IAM auth scheme.

**Prerequisites**

This process requires the following items:
+ The previously created cluster policy that grants the client in Account B permission to perform actions on the MSK cluster in Account A.
+ An identity policy attached to the client in Account B that grants permissions for the `kafka:CreateVpcConnection`, `ec2:CreateTags`, `ec2:CreateVPCEndpoint`, and `ec2:DescribeVpcAttribute` actions.

**Example**  
For reference, the following is an example of the JSON for a basic client identity policy.

```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka:CreateVpcConnection",
        "ec2:CreateTags",
        "ec2:CreateVPCEndpoint",
        "ec2:DescribeVpcAttribute"
      ],
      "Resource": "*"
    }
  ]
}
```

**To create a managed VPC connection for a client in Account B**

1. From the cluster administrator, get the **Cluster ARN** of the MSK cluster in Account A that you want the client in Account B to connect to. Make note of the cluster ARN to use later.

1. In the MSK console for the client Account B, choose **Managed VPC connections**, and then choose **Create connection**.

1. In the **Connection settings** pane, paste the cluster ARN into the cluster ARN text field, and then choose **Verify**.

1. Select the **Authentication type** for the client in Account B. For this use case, choose IAM when creating the client VPC connection.

1. Choose the **VPC** for the client.

1. Choose at least two Availability **Zones** and associated **Subnets**. You can get the Availability Zone IDs from the cluster details in the AWS Management Console, by using the [DescribeCluster](https://docs.aws.amazon.com/msk/1.0/apireference/clusters-clusterarn.html#DescribeCluster) API, or by using the [describe-cluster](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/kafka/describe-cluster.html) AWS CLI command. The zone IDs that you specify for the client subnets must match those of the cluster subnets. If the values for a subnet are missing, first create a subnet with the same zone ID as your MSK cluster.

1. Choose a **Security group** for this VPC connection. You can use the default security group. For more information on configuring a security group, see [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html).

1. Select **Create connection**.

1. Get the list of new bootstrap broker strings for private connectivity. In the cross-account user’s MSK console, the bootstrap broker strings appear under **Cluster connection string** (**Cluster** details > **Managed VPC connection**). From the client Account B, the list of bootstrap brokers can also be viewed by calling the [GetBootstrapBrokers](https://docs.aws.amazon.com/msk/1.0/apireference/clusters-clusterarn-bootstrap-brokers.html#GetBootstrapBrokers) API.

1. Update the security groups associated with the VPC connections as follows:

   1. Set **inbound rules** for the PrivateLink VPC to allow all traffic for the IP range from the Account B network.

   1. [Optional] Set **Outbound rules** connectivity to the MSK cluster. Choose the **Security Group** in the VPC console, **Edit Outbound Rules**, and add a rule for **Custom TCP Traffic** for port ranges 14001-14100. The multi-VPC network load balancer is listening on the 14001-14100 port ranges. See [Network Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html).

1. Configure the client in Account B to use the new bootstrap brokers for multi-VPC private connectivity to connect to the MSK cluster in Account A. See [Produce and consume data](https://docs.aws.amazon.com/msk/latest/developerguide/produce-consume.html).
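
For reference, the connection-creation and bootstrap-broker steps above can also be performed from the AWS CLI in client Account B. The following sketch uses placeholder VPC, subnet, and security-group IDs, the placeholder cluster ARN from the earlier example, and the `SASL_IAM` authentication value for the IAM auth scheme:

```
# Create a managed VPC connection to the cluster in Account A using IAM auth.
aws kafka create-vpc-connection \
  --target-cluster-arn "arn:aws:kafka:us-east-1:111122223333:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2" \
  --authentication SASL_IAM \
  --vpc-id vpc-0123456789abcdef0 \
  --client-subnets subnet-0123456789abcdef0 subnet-0abcdef1234567890 \
  --security-groups sg-0123456789abcdef0

# Get the bootstrap broker strings, including the multi-VPC private
# connectivity endpoints, for the cluster.
aws kafka get-bootstrap-brokers \
  --cluster-arn "arn:aws:kafka:us-east-1:111122223333:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2"
```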

After authorization is complete, Amazon MSK creates a managed VPC connection for each specified VPC and auth scheme. The chosen security group is associated with each connection. This managed VPC connection is configured by Amazon MSK to connect privately to the brokers. You can use the new set of bootstrap brokers to connect privately to the Amazon MSK cluster.
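
To use the new bootstrap brokers with the IAM auth scheme, the Kafka client typically needs the MSK IAM authentication settings in its configuration. The following `client.properties` sketch assumes the [Amazon MSK Library for AWS IAM](https://github.com/aws/aws-msk-iam-auth) is on the client's classpath:

```
# client.properties for IAM auth over the multi-VPC private endpoints
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
```

Pass the multi-VPC bootstrap broker string to the client tools (for example, with `--bootstrap-server`) along with this configuration file.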

# Update the authorization schemes on a cluster
<a name="mvpc-cross-account-update-authschemes"></a>

Multi-VPC private connectivity supports several authorization schemes: SASL/SCRAM, IAM, and TLS. The cluster owner can turn private connectivity on or off for one or more auth schemes. The cluster must be in the ACTIVE state to perform this action.

**To turn on an auth scheme using the Amazon MSK console**

1. Open the Amazon MSK console at [AWS Management Console](https://console.aws.amazon.com/msk) for the cluster that you want to edit.

1. In the navigation pane, under **MSK Clusters**, choose **Clusters** to display the list of clusters in the account.

1. Select the cluster that you want to edit. The cluster must be in an ACTIVE state.

1. Select the cluster **Properties** tab, and then go to **Network settings**.

1. Select the **Edit** dropdown menu and select **Turn on multi-VPC connectivity** to turn on a new auth scheme.

1. Select one or more authentication types that you want turned on for this cluster.

1. Select **Turn on selection**.

When you turn on a new auth scheme, you should also create new managed VPC connections for the new auth scheme and update your clients to use the bootstrap brokers specific to the new auth scheme.

**To turn off an auth scheme using the Amazon MSK console**
**Note**  
When you turn off multi-VPC private connectivity for an auth scheme, all connectivity-related infrastructure, including the managed VPC connections, is deleted.

When you turn off multi-VPC private connectivity for an auth scheme, existing VPC connections on the client side change to the INACTIVE state, and the PrivateLink infrastructure on the cluster side, including the managed VPC connections, is removed. The cross-account user can only delete the inactive VPC connection. If private connectivity is turned on again on the cluster, the cross-account user needs to create a new connection to the cluster.

1. Open the Amazon MSK console at [AWS Management Console](https://console.aws.amazon.com/msk).

1. In the navigation pane, under **MSK Clusters**, choose **Clusters** to display the list of clusters in the account.

1. Select the cluster you want to edit. The cluster must be in an ACTIVE state.

1. Select the cluster **Properties** tab, then go to **Network settings**.

1. Select the **Edit** drop down menu and select **Turn off multi-VPC connectivity** (to turn off an auth scheme).

1. Select one or more authentication types you want turned off for this cluster.

1. Select **Turn off selection**.

**Example: Turn an auth scheme on or off with the API**  
As an alternative to the MSK console, you can use the [UpdateConnectivity API](https://docs.aws.amazon.com/msk/1.0/apireference/clusters-clusterarn-connectivity.html) to turn multi-VPC private connectivity on or off and configure auth schemes on an ACTIVE cluster. The following example shows the SASL/SCRAM and IAM auth schemes turned on, and the TLS auth scheme turned off, for the cluster.  
When you turn on a new auth scheme, you should also create new managed VPC connections for the new auth scheme and update your clients to use the bootstrap brokers specific to the new auth scheme.  
When you turn off multi-VPC private connectivity for an auth scheme, existing VPC connections on the client side change to the INACTIVE state, and the PrivateLink infrastructure on the cluster side, including the managed VPC connections, is removed. The cross-account user can only delete the inactive VPC connection. If private connectivity is turned on again on the cluster, the cross-account user needs to create a new connection to the cluster.  

```
Request:
{
  "currentVersion": "string",
  "connectivityInfo": {
    "publicAccess": {
      "type": "string"
    },
    "vpcConnectivity": {
      "clientAuthentication": {
        "sasl": {
          "scram": {
            "enabled": true
          },
          "iam": {
            "enabled": true
          }
        },
        "tls": {
          "enabled": false
        }
      }
    }
  }
}

Response:
{
  "clusterArn": "string",
  "clusterOperationArn": "string"
}
```

# Reject a managed VPC connection to an Amazon MSK cluster
<a name="mvpc-cross-account-reject-connection"></a>

From the Amazon MSK console on the cluster admin account, you can reject a client VPC connection. The client VPC connection must be in the AVAILABLE state to be rejected. You might want to reject a managed VPC connection from a client that is no longer authorized to connect to your cluster. To prevent a client from creating new managed VPC connections, deny access to the client in the cluster policy. A rejected connection still incurs cost until it's deleted by the connection owner. See [Delete a managed VPC connection to an Amazon MSK cluster](https://docs.aws.amazon.com/msk/latest/developerguide/mvpc-cross-account-delete-connection.html).

**To reject a client VPC connection using the MSK console**

1. Open the Amazon MSK console at [AWS Management Console](https://console.aws.amazon.com/msk).

1. In the navigation pane, select **Clusters** and scroll to the **Network settings > Client VPC connections** list.

1. Select the connection that you want to reject and select **Reject client VPC connection**.

1. Confirm that you want to reject the selected client VPC connection.

To reject a managed VPC connection using the API, use the `RejectClientVpcConnection` API.
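
A rejection can be sketched with the AWS CLI as follows; both ARNs are placeholders (the VPC connection ARN is shown in the cluster's client VPC connections list):

```
# Reject an AVAILABLE client VPC connection from the cluster admin account.
aws kafka reject-client-vpc-connection \
  --cluster-arn "arn:aws:kafka:us-east-1:111122223333:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2" \
  --vpc-connection-arn "arn:aws:kafka:us-east-1:123456789012:vpc-connection/example-connection"
```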

# Delete a managed VPC connection to an Amazon MSK cluster
<a name="mvpc-cross-account-delete-connection"></a>

The cross-account user can delete a managed VPC connection for an MSK cluster from the client account console. Because the cluster owner doesn’t own the managed VPC connection, the connection can’t be deleted from the cluster admin account. After a VPC connection is deleted, it no longer incurs cost.

**To delete a managed VPC connection using the MSK console**

1. From the client account, open the Amazon MSK console at [AWS Management Console](https://console.aws.amazon.com/msk).

1. In the navigation pane, select **Managed VPC connections**.

1. From the connection list, select the connection that you want to delete.

1. Confirm that you want to delete the VPC connection.

To delete a managed VPC connection using the API, use the `DeleteVpcConnection` API.
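
From the client account, the deletion can be sketched with the AWS CLI; the VPC connection ARN below is a placeholder:

```
# Delete a managed VPC connection from the client (connection owner) account.
aws kafka delete-vpc-connection \
  --arn "arn:aws:kafka:us-east-1:123456789012:vpc-connection/example-connection"
```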

# Permissions for multi-VPC private connectivity
<a name="mvpc-cross-account-permissions"></a>

This section summarizes the permissions needed for clients and clusters using the multi-VPC private connectivity feature. Multi-VPC private connectivity requires the client admin to create permissions on each client that will have a managed VPC connection to the MSK cluster. It also requires the MSK cluster admin to enable PrivateLink connectivity on the MSK cluster and select authentication schemes to control access to the cluster. 

**Cluster auth type and topic access permissions**  
Turn on the multi-VPC private connectivity feature for the auth schemes that are enabled for your MSK cluster. See [Requirements and limitations for multi-VPC private connectivity](aws-access-mult-vpc.md#mvpc-requirements). If you are configuring your MSK cluster to use the SASL/SCRAM auth scheme, the Apache Kafka ACLs property `allow.everyone.if.no.acl.found=false` is mandatory. After you set the [Apache Kafka ACLs](msk-acls.md) for your cluster, update the cluster's configuration to set the property `allow.everyone.if.no.acl.found` to false. For information about how to update the configuration of a cluster, see [Broker configuration operations](msk-configuration-operations.md).

**Cross-account cluster policy permissions**  
If a Kafka client is in an AWS account that is different from the MSK cluster's account, attach a cluster policy to the MSK cluster that authorizes the client root user for cross-account connectivity. You can edit the multi-VPC cluster policy using the IAM policy editor in the MSK console (cluster **Security settings** > **Edit cluster policy**), or use the following APIs to manage the cluster policy:

**PutClusterPolicy**  
Attaches the cluster policy to the cluster. You can use this API to create or update the specified MSK cluster policy. If you’re updating the policy, the `currentVersion` field is required in the request payload.

**GetClusterPolicy**  
Retrieves the JSON text of the cluster policy document attached to the cluster.

**DeleteClusterPolicy**  
Deletes the cluster policy.
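
As a sketch, the three policy APIs map to AWS CLI commands as follows; the cluster ARN is the placeholder from the example policy below, and `cluster-policy.json` is a hypothetical file containing the policy document:

```
# Create or update the cluster policy. When updating an existing policy,
# also pass the policy's current version.
aws kafka put-cluster-policy \
  --cluster-arn "arn:aws:kafka:us-east-1:123456789012:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2" \
  --policy file://cluster-policy.json

# Retrieve the JSON text of the attached cluster policy.
aws kafka get-cluster-policy \
  --cluster-arn "arn:aws:kafka:us-east-1:123456789012:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2"

# Delete the cluster policy.
aws kafka delete-cluster-policy \
  --cluster-arn "arn:aws:kafka:us-east-1:123456789012:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2"
```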

The following is an example of the JSON for a basic cluster policy, similar to the one shown in the MSK console IAM policy editor. It grants permissions for cluster, topic, and group-level access.


```
{
    "Version":"2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": [
                "123456789012"
            ]
        },
        "Action": [
            "kafka-cluster:*",
            "kafka:CreateVpcConnection",
            "kafka:GetBootstrapBrokers",
            "kafka:DescribeCluster",
            "kafka:DescribeClusterV2"
        ],
        "Resource": [
            "arn:aws:kafka:us-east-1:123456789012:cluster/testing/de8982fa-8222-4e87-8b20-9bf3cdfa1521-2",
            "arn:aws:kafka:us-east-1:123456789012:topic/testing/*",
            "arn:aws:kafka:us-east-1:123456789012:group/testing/*"
        ]
    }]
}
```


**Client permissions for multi-VPC private connectivity to an MSK cluster**  
To set up multi-VPC private connectivity between a Kafka client and an MSK cluster, the client requires an attached identity policy that grants permissions for the `kafka:CreateVpcConnection`, `ec2:CreateTags`, `ec2:CreateVPCEndpoint`, and `ec2:DescribeVpcAttribute` actions on the client. For reference, the following is an example of the JSON for a basic client identity policy.


```
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kafka:CreateVpcConnection",
        "ec2:CreateTags",
        "ec2:CreateVPCEndpoint",
        "ec2:DescribeVpcAttribute"
      ],
      "Resource": "*"
    }
  ]
}
```
