


# Continuous Deployment with Argo CD
<a name="argocd"></a>

Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. With Argo CD, you can automate the deployment and lifecycle management of your applications across multiple clusters and environments. Argo CD supports multiple source types including Git repositories, Helm registries (HTTP and OCI), and OCI images—providing flexibility for organizations with different security and compliance requirements.

With EKS Capabilities, Argo CD is fully managed by AWS, eliminating the need to install, maintain, and scale Argo CD controllers and their dependencies on your clusters.

## How Argo CD Works
<a name="_how_argo_cd_works"></a>

Argo CD follows the GitOps pattern, where your application source (Git repository, Helm registry, or OCI image) is the source of truth for defining the desired application state. When you create an Argo CD `Application` resource, you specify the source containing your application manifests and the target Kubernetes cluster and namespace. Argo CD continuously monitors both the source and the live state in the cluster, automatically synchronizing any changes to ensure the cluster state matches the desired state.

**Note**  
With the EKS Capability for Argo CD, the Argo CD software runs in the AWS control plane, not on your worker nodes. This means your worker nodes don’t need direct access to Git repositories or Helm registries—the capability handles source access from the AWS account.

Argo CD provides three primary resource types:
+  **Application**: Defines a deployment from a source (Git repository, Helm registry, or OCI image) to a target cluster
+  **ApplicationSet**: Generates multiple Applications from templates for multi-cluster deployments
+  **AppProject**: Provides logical grouping and access control for Applications

 **Example: Creating an Argo CD Application** 

The following example shows how to create an Argo CD `Application` resource:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    name: in-cluster
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

**Note**  
Use `destination.name` with the cluster name you used when registering the cluster (like `in-cluster` for the local cluster). The `destination.server` field also works with EKS cluster ARNs, but using cluster names is recommended for better readability.
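
For multi-cluster deployments, an `ApplicationSet` generates Applications from a template. The following sketch uses the list generator; the cluster names `staging` and `production` are placeholders for names you used when registering clusters:

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - cluster: staging      # placeholder registered cluster name
      - cluster: production   # placeholder registered cluster name
  template:
    metadata:
      name: 'guestbook-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/argoproj/argocd-example-apps.git
        targetRevision: HEAD
        path: guestbook
      destination:
        name: '{{cluster}}'
        namespace: guestbook
```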

## Benefits of Argo CD
<a name="_benefits_of_argo_cd"></a>

Argo CD implements a GitOps workflow where you define your application configurations in Git repositories and Argo CD automatically syncs your applications to match the desired state. This Git-centric approach provides a complete audit trail of all changes, enables easy rollbacks, and integrates naturally with your existing code review and approval processes. Argo CD automatically detects and reconciles drift between the desired state in Git and the actual state in your clusters, ensuring your deployments remain consistent with your declared configuration.

With Argo CD, you can deploy and manage applications across multiple clusters from a single Argo CD instance, simplifying operations in multi-cluster and multi-region environments. The Argo CD UI provides visualization and monitoring capabilities, allowing you to view the deployment status, health, and history of your applications. The UI integrates with AWS Identity Center (formerly AWS SSO) for seamless authentication and authorization, enabling you to control access using your existing identity management infrastructure.

As part of EKS Managed Capabilities, Argo CD is fully managed by AWS, eliminating the need to install, configure, and maintain Argo CD infrastructure. AWS handles scaling, patching, and operational management, allowing your teams to focus on application delivery rather than tool maintenance.

## Integration with AWS Identity Center
<a name="integration_with_shared_aws_identity_center"></a>

EKS Managed Capabilities provides direct integration between Argo CD and AWS Identity Center, enabling seamless authentication and authorization for your users. When you enable the Argo CD capability, you can configure AWS Identity Center integration to map Identity Center groups and users to Argo CD RBAC roles, allowing you to control who can access and manage applications in Argo CD.

## Integration with Other EKS Managed Capabilities
<a name="_integration_with_other_eks_managed_capabilities"></a>

Argo CD integrates with other EKS Managed Capabilities.
+  **AWS Controllers for Kubernetes (ACK)**: Use Argo CD to manage the deployment of ACK resources across multiple clusters, enabling GitOps workflows for your AWS infrastructure.
+  **kro (Kube Resource Orchestrator)**: Use Argo CD to deploy kro compositions across multiple clusters, enabling consistent resource composition across your Kubernetes estate.

## Getting Started with Argo CD
<a name="_getting_started_with_argo_cd"></a>

To get started with the EKS Capability for Argo CD:

1. Create and configure an IAM Capability Role with the necessary permissions for Argo CD to access your sources and manage applications.

1.  [Create an Argo CD capability resource](create-argocd-capability.md) on your EKS cluster through the AWS Console, AWS CLI, or your preferred infrastructure as code tool.

1. Configure repository access and register clusters for application deployment.

1. Create Application resources to deploy your applications from your declarative sources.

# Create an Argo CD capability
<a name="create-argocd-capability"></a>

This topic explains how to create an Argo CD capability on your Amazon EKS cluster.

## Prerequisites
<a name="_prerequisites"></a>

Before creating an Argo CD capability, ensure you have:
+ An existing Amazon EKS cluster running a supported Kubernetes version (all versions in standard and extended support are supported)
+  **AWS Identity Center configured** - Required for Argo CD authentication (local users are not supported)
+ An IAM Capability Role with permissions for Argo CD
+ Sufficient IAM permissions to create capability resources on EKS clusters
+  `kubectl` configured to communicate with your cluster
+ (Optional) The Argo CD CLI installed for easier cluster and repository management
+ (For CLI/eksctl) The appropriate CLI tool installed and configured

For instructions on creating the IAM Capability Role, see [Amazon EKS capability IAM role](capability-role.md). For Identity Center setup, see [Getting started with AWS Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html).

**Important**  
The IAM Capability Role you provide determines which AWS resources Argo CD can access. This includes Git repository access via CodeConnections and secrets in Secrets Manager. For guidance on creating an appropriate role with least-privilege permissions, see [Amazon EKS capability IAM role](capability-role.md) and [Security considerations for EKS Capabilities](capabilities-security.md).
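
As an illustration of least-privilege scoping, the following policy sketch grants read access only to secrets whose names begin with `argocd/` (the prefix, Region, and account ID are placeholder assumptions; adjust them to your own naming convention):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:region-code:111122223333:secret:argocd/*"
    }
  ]
}
```

Scoping the `Resource` element to a prefix limits Argo CD to the repository credentials you intend it to read, rather than all secrets in the account.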

## Choose your tool
<a name="_choose_your_tool"></a>

You can create an Argo CD capability using the AWS Management Console, AWS CLI, or eksctl:
+  [Create an Argo CD capability using the Console](argocd-create-console.md) - Use the Console for a guided experience
+  [Create an Argo CD capability using the AWS CLI](argocd-create-cli.md) - Use the AWS CLI for scripting and automation
+  [Create an Argo CD capability using eksctl](argocd-create-eksctl.md) - Use eksctl for a Kubernetes-native experience

## What happens when you create an Argo CD capability
<a name="_what_happens_when_you_create_an_argo_cd_capability"></a>

When you create an Argo CD capability:

1. EKS creates the Argo CD capability service in the AWS control plane

1. Custom Resource Definitions (CRDs) are installed in your cluster

1. An access entry is automatically created for your IAM Capability Role with capability-specific access entry policies that grant baseline Kubernetes permissions (see [Security considerations for EKS Capabilities](capabilities-security.md))

1. Argo CD begins watching for its custom resources (Applications, ApplicationSets, AppProjects)

1. The capability status changes from `CREATING` to `ACTIVE` 

1. The Argo CD UI becomes accessible through its URL

Once active, you can create Argo CD Applications in your cluster to deploy from your declarative sources.

**Note**  
The automatically created access entry does not grant permissions to deploy applications to clusters. To deploy applications, you must configure additional Kubernetes RBAC permissions for each target cluster. See [Register target clusters](argocd-register-clusters.md) for details on registering clusters and configuring access.

## Next steps
<a name="_next_steps"></a>

After creating the Argo CD capability:
+  [Argo CD concepts](argocd-concepts.md) - Learn about GitOps principles, sync policies, and multi-cluster patterns
+  [Working with Argo CD](working-with-argocd.md) - Configure repository access, register target clusters, and create Applications
+  [Argo CD considerations](argocd-considerations.md) - Explore multi-cluster architecture patterns and advanced configuration

# Create an Argo CD capability using the Console
<a name="argocd-create-console"></a>

This topic describes how to create an Argo CD capability using the AWS Management Console.

## Prerequisites
<a name="_prerequisites"></a>
+  **AWS Identity Center configured** – Argo CD requires AWS Identity Center for authentication. Local users are not supported. If you don’t have AWS Identity Center set up, see [Getting started with AWS Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) to create an Identity Center instance, and [Add users](https://docs.aws.amazon.com/singlesignon/latest/userguide/addusers.html) and [Add groups](https://docs.aws.amazon.com/singlesignon/latest/userguide/addgroups.html) to create users and groups for Argo CD access.

## Create the Argo CD capability
<a name="_create_the_argo_cd_capability"></a>

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name to open the cluster detail page.

1. Choose the **Capabilities** tab.

1. In the left navigation, choose **Argo CD**.

1. Choose **Create Argo CD capability**.

1. For **IAM Capability Role**:
   + If you already have an IAM Capability Role, select it from the dropdown
   + If you need to create a role, choose **Create Argo CD role** 

     This opens the IAM console in a new tab with a pre-populated trust policy and full read access to Secrets Manager. No other permissions are added by default, but you can add them if needed. If you plan to use CodeCommit repositories or other AWS services, add the appropriate permissions before creating the role.

     After creating the role, return to the EKS console and the role will be automatically selected.
**Note**  
If you plan to use the optional integrations with AWS Secrets Manager or AWS CodeConnections, you’ll need to add permissions to the role. For IAM policy examples and configuration guidance, see [Manage application secrets with AWS Secrets Manager](integration-secrets-manager.md) and [Connect to Git repositories with AWS CodeConnections](integration-codeconnections.md).

1. Configure AWS Identity Center integration:

   1. Select **Enable AWS Identity Center integration**.

   1. Choose your Identity Center instance from the dropdown.

   1. Configure role mappings for RBAC by assigning users or groups to Argo CD roles (ADMIN, EDITOR, or VIEWER).

1. Choose **Create**.

The capability creation process begins.

## Verify the capability is active
<a name="_verify_the_capability_is_active"></a>

1. On the **Capabilities** tab, view the Argo CD capability status.

1. Wait for the status to change from `CREATING` to `ACTIVE`.

1. Once active, the capability is ready to use.

For information about capability statuses and troubleshooting, see [Working with capability resources](working-with-capabilities.md).

## Access the Argo CD UI
<a name="_access_the_argo_cd_ui"></a>

After the capability is active, you can access the Argo CD UI:

1. On the Argo CD capability page, choose **Open Argo CD UI**.

1. The Argo CD UI opens in a new browser tab.

1. You can now create Applications and manage deployments through the UI.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Configure repositories, register clusters, and create Applications
+  [Argo CD considerations](argocd-considerations.md) - Multi-cluster architecture and advanced configuration
+  [Working with capability resources](working-with-capabilities.md) - Manage your Argo CD capability resource

# Create an Argo CD capability using the AWS CLI
<a name="argocd-create-cli"></a>

This topic describes how to create an Argo CD capability using the AWS CLI.

## Prerequisites
<a name="_prerequisites"></a>
+  **AWS CLI** – Version `2.12.3` or later. To check your version, run `aws --version`. For more information, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.
+  **`kubectl`** – A command line tool for working with Kubernetes clusters. For more information, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+  **AWS Identity Center configured** – Argo CD requires AWS Identity Center for authentication. Local users are not supported. If you don’t have AWS Identity Center set up, see [Getting started with AWS Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) to create an Identity Center instance, and [Add users](https://docs.aws.amazon.com/singlesignon/latest/userguide/addusers.html) and [Add groups](https://docs.aws.amazon.com/singlesignon/latest/userguide/addgroups.html) to create users and groups for Argo CD access.

## Step 1: Create an IAM Capability Role
<a name="_step_1_create_an_iam_capability_role"></a>

Create a trust policy file:

```
cat > argocd-trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
```

Create the IAM role:

```
aws iam create-role \
  --role-name ArgoCDCapabilityRole \
  --assume-role-policy-document file://argocd-trust-policy.json
```

**Note**  
If you plan to use the optional integrations with AWS Secrets Manager or AWS CodeConnections, you’ll need to add permissions to the role. For IAM policy examples and configuration guidance, see [Manage application secrets with AWS Secrets Manager](integration-secrets-manager.md) and [Connect to Git repositories with AWS CodeConnections](integration-codeconnections.md).

## Step 2: Create the Argo CD capability
<a name="_step_2_create_the_argo_cd_capability"></a>

Create the Argo CD capability resource on your cluster.

First, set environment variables for your Identity Center configuration:

```
# Get your Identity Center instance ARN (replace region-code if your IDC instance is in a different region)
export IDC_INSTANCE_ARN=$(aws sso-admin list-instances --region region-code --query 'Instances[0].InstanceArn' --output text)

# Get a user ID for RBAC mapping (replace your-username and region-code as needed)
export IDC_USER_ID=$(aws identitystore list-users \
  --region region-code \
  --identity-store-id $(aws sso-admin list-instances --region region-code --query 'Instances[0].IdentityStoreId' --output text) \
  --query 'Users[?UserName==`your-username`].UserId' --output text)

echo "IDC_INSTANCE_ARN=$IDC_INSTANCE_ARN"
echo "IDC_USER_ID=$IDC_USER_ID"
```

Create the capability with Identity Center integration. Replace *region-code* with the AWS Region where your cluster is located, *my-cluster* with your cluster name, and *idc-region-code* with the Region where your IAM Identity Center instance is configured:

```
aws eks create-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-argocd \
  --type ARGOCD \
  --role-arn arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):role/ArgoCDCapabilityRole \
  --delete-propagation-policy RETAIN \
  --configuration '{
    "argoCd": {
      "awsIdc": {
        "idcInstanceArn": "'$IDC_INSTANCE_ARN'",
        "idcRegion": "idc-region-code"
      },
      "rbacRoleMappings": [{
        "role": "ADMIN",
        "identities": [{
          "id": "'$IDC_USER_ID'",
          "type": "SSO_USER"
        }]
      }]
    }
  }'
```

The command returns immediately, but the capability takes some time to become active as EKS creates the required capability infrastructure and components. During creation, EKS installs the Kubernetes Custom Resource Definitions for this capability in your cluster.

**Note**  
If you receive an error that the cluster doesn’t exist or you don’t have permissions, verify that:
+ The cluster name is correct
+ Your AWS CLI is configured for the correct region
+ You have the required IAM permissions

## Step 3: Verify the capability is active
<a name="_step_3_verify_the_capability_is_active"></a>

Wait for the capability to become active. Replace *region-code* with the AWS Region where your cluster is located and *my-cluster* with your cluster name.

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-argocd \
  --query 'capability.status' \
  --output text
```

The capability is ready when the status shows `ACTIVE`. Don’t continue to the next step until the status is `ACTIVE`.

You can also view the full capability details:

```
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-argocd
```

## Step 4: Verify custom resources are available
<a name="_step_4_verify_custom_resources_are_available"></a>

After the capability is active, verify that Argo CD custom resources are available in your cluster:

```
kubectl api-resources | grep argoproj.io
```

You should see `Application` and `ApplicationSet` resource types listed.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Configure repositories, register clusters, and create Applications
+  [Argo CD considerations](argocd-considerations.md) - Multi-cluster architecture and advanced configuration
+  [Working with capability resources](working-with-capabilities.md) - Manage your Argo CD capability resource

# Create an Argo CD capability using eksctl
<a name="argocd-create-eksctl"></a>

This topic describes how to create an Argo CD capability using eksctl.

**Note**  
The following steps require eksctl version `0.220.0` or later. To check your version, run `eksctl version`.

## Step 1: Create an IAM Capability Role
<a name="_step_1_create_an_iam_capability_role"></a>

Create a trust policy file:

```
cat > argocd-trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "capabilities.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF
```

Create the IAM role:

```
aws iam create-role \
  --role-name ArgoCDCapabilityRole \
  --assume-role-policy-document file://argocd-trust-policy.json
```

**Note**  
For this basic setup, no additional IAM policies are needed. If you plan to use Secrets Manager for repository credentials or CodeConnections, you’ll need to add permissions to the role. For IAM policy examples and configuration guidance, see [Manage application secrets with AWS Secrets Manager](integration-secrets-manager.md) and [Connect to Git repositories with AWS CodeConnections](integration-codeconnections.md).

## Step 2: Get your AWS Identity Center configuration
<a name="step_2_get_your_shared_aws_identity_center_configuration"></a>

Get your Identity Center instance ARN and user ID for RBAC configuration:

```
# Get your Identity Center instance ARN
aws sso-admin list-instances --query 'Instances[0].InstanceArn' --output text

# Get a user ID for admin access (replace 'your-username' with your Identity Center username)
aws identitystore list-users \
  --identity-store-id $(aws sso-admin list-instances --query 'Instances[0].IdentityStoreId' --output text) \
  --query 'Users[?UserName==`your-username`].UserId' --output text
```

Note these values - you’ll need them in the next step.

## Step 3: Create an eksctl configuration file
<a name="_step_3_create_an_eksctl_configuration_file"></a>

Create a file named `argocd-capability.yaml` with the following content. Replace the placeholder values with your cluster’s name, cluster’s region, IAM role ARN, Identity Center instance ARN, Identity Center region, and user ID:

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster
  region: cluster-region-code

capabilities:
  - name: my-argocd
    type: ARGOCD
    roleArn: arn:aws:iam::111122223333:role/ArgoCDCapabilityRole
    deletePropagationPolicy: RETAIN
    configuration:
      argocd:
        awsIdc:
          idcInstanceArn: arn:aws:sso:::instance/ssoins-123abc
          idcRegion: idc-region-code
        rbacRoleMappings:
          - role: ADMIN
            identities:
              - id: 38414300-1041-708a-01af-5422d6091e34
                type: SSO_USER
```

**Note**  
You can add multiple users or groups to the RBAC mappings. For groups, use `type: SSO_GROUP` and provide the group ID. Available roles are `ADMIN`, `EDITOR`, and `VIEWER`.
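
For example, the mappings in the file above could be extended with a group granted read-only access (the group ID below is a placeholder):

```
rbacRoleMappings:
  - role: ADMIN
    identities:
      - id: 38414300-1041-708a-01af-5422d6091e34
        type: SSO_USER
  - role: VIEWER
    identities:
      - id: 48515411-2152-819b-12bf-6533e7102f45   # placeholder group ID
        type: SSO_GROUP
```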

## Step 4: Create the Argo CD capability
<a name="_step_4_create_the_argo_cd_capability"></a>

Apply the configuration file:

```
eksctl create capability -f argocd-capability.yaml
```

The command returns immediately, but the capability takes some time to become active.

## Step 5: Verify the capability is active
<a name="_step_5_verify_the_capability_is_active"></a>

Check the capability status. Replace *region-code* with the AWS Region that your cluster is in and replace *my-cluster* with the name of your cluster.

```
eksctl get capability \
  --region region-code \
  --cluster my-cluster \
  --name my-argocd
```

The capability is ready when the status shows `ACTIVE`.

## Step 6: Verify custom resources are available
<a name="_step_6_verify_custom_resources_are_available"></a>

After the capability is active, verify that Argo CD custom resources are available in your cluster:

```
kubectl api-resources | grep argoproj.io
```

You should see `Application` and `ApplicationSet` resource types listed.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Learn how to create and manage Argo CD Applications
+  [Argo CD considerations](argocd-considerations.md) - Configure SSO and multi-cluster access
+  [Working with capability resources](working-with-capabilities.md) - Manage your Argo CD capability resource

# Argo CD concepts
<a name="argocd-concepts"></a>

Argo CD implements GitOps by treating Git as the single source of truth for your application deployments. This topic walks through a practical example, then explains the core concepts you need to understand when working with the EKS Capability for Argo CD.

## Getting started with Argo CD
<a name="_getting_started_with_argo_cd"></a>

After creating the Argo CD capability (see [Create an Argo CD capability](create-argocd-capability.md)), you can start deploying applications. This example walks through registering a cluster and creating an Application.

### Step 1: Set up
<a name="_step_1_set_up"></a>

 **Register your cluster** (required)

Register the cluster where you want to deploy applications. For this example, we’ll register the same cluster where Argo CD is running (you can use the name `in-cluster` for compatibility with most Argo CD examples):

```
# Get your cluster ARN
CLUSTER_ARN=$(aws eks describe-cluster \
  --name my-cluster \
  --query 'cluster.arn' \
  --output text)

# Register the cluster using Argo CD CLI
argocd cluster add $CLUSTER_ARN \
  --aws-cluster-name $CLUSTER_ARN \
  --name in-cluster \
  --project default
```

**Note**  
For information about configuring the Argo CD CLI to work with the Argo CD capability in EKS, see [Using the Argo CD CLI with the managed capability](argocd-comparison.md#argocd-cli-configuration).

Alternatively, register the cluster using a Kubernetes Secret (see [Register target clusters](argocd-register-clusters.md) for details).
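
As a sketch of the Secret-based approach, a standard Argo CD cluster Secret carries the `argocd.argoproj.io/secret-type: cluster` label. The ARN, account ID, and cluster name below are placeholders, and the exact `config` fields supported by the managed capability are described in [Register target clusters](argocd-register-clusters.md):

```
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: in-cluster
  server: arn:aws:eks:region-code:111122223333:cluster/my-cluster
  config: |
    {
      "awsAuthConfig": {
        "clusterName": "my-cluster"
      }
    }
```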

 **Configure repository access** (optional)

This example uses a public GitHub repository, so no repository configuration is required. For private repositories, configure access using AWS Secrets Manager, CodeConnections, or Kubernetes Secrets (see [Configure repository access](argocd-configure-repositories.md) for details).

For AWS services (ECR for Helm charts, CodeConnections, and CodeCommit), you can reference them directly in Application resources without creating a Repository. The Capability Role must have the required IAM permissions. See [Configure repository access](argocd-configure-repositories.md) for details.
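
For instance, an Application can reference a Helm chart in an OCI registry directly in its `source` block, with no Repository resource required (the registry path, chart name, and version below are illustrative):

```
spec:
  project: default
  source:
    repoURL: public.ecr.aws/my-registry   # illustrative OCI registry path
    chart: my-chart                       # illustrative chart name
    targetRevision: 1.2.3                 # chart version
  destination:
    name: in-cluster
    namespace: my-app
```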

### Step 2: Create an Application
<a name="_step_2_create_an_application"></a>

Create this Application manifest in `my-app.yaml`:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    name: in-cluster
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
```

Apply the Application:

```
kubectl apply -f my-app.yaml
```

After applying this Application, Argo CD:

1. Syncs the application from Git to your cluster (initial deployment)

1. Monitors the Git repository for changes

1. Automatically syncs subsequent changes to your cluster

1. Detects and corrects any drift from the desired state

1. Provides health status and sync history in the UI

View the application status:

```
kubectl get application guestbook -n argocd
```

You can also view the application using the Argo CD CLI or the Argo CD UI (accessible from the EKS console under your cluster’s Capabilities tab).

**Note**  
When using the Argo CD CLI with the managed capability, specify applications with the namespace prefix: `argocd app get argocd/guestbook`.

**Note**  
Use the cluster name in `destination.name` (the name you used when registering the cluster). The managed capability does not support the local in-cluster default (`kubernetes.default.svc`).

## Core concepts
<a name="_core_concepts"></a>

### GitOps principles and source types
<a name="_gitops_principles_and_source_types"></a>

Argo CD implements GitOps, where your application source is the single source of truth for deployments:
+  **Declarative** - Desired state is declared using YAML manifests, Helm charts, or Kustomize overlays
+  **Versioned** - Every change is tracked with complete audit trail
+  **Automated** - Argo CD continuously monitors sources and automatically syncs changes
+  **Self-healing** - Detects and corrects drift between desired and actual cluster state

 **Supported source types**:
+  **Git repositories** - GitHub, GitLab, Bitbucket, CodeCommit (HTTPS, SSH, or CodeConnections)
+  **Helm registries** - HTTP registries (like `https://aws.github.io/eks-charts`) and OCI registries (like `public.ecr.aws`)
+  **OCI images** - Container images containing manifests or Helm charts (like `oci://registry-1.docker.io/user/my-app`)

This flexibility allows organizations to choose sources that meet their security and compliance requirements. For example, organizations that restrict Git access from clusters can use ECR for Helm charts or OCI images.

For more information, see [Application Sources](https://argo-cd.readthedocs.io/en/stable/user-guide/application-sources/) in the Argo CD documentation.

### Sync and reconciliation
<a name="_sync_and_reconciliation"></a>

Argo CD continuously monitors your sources and clusters to detect and correct differences:

1. Polls sources for changes (default: every 6 minutes)

1. Compares desired state with cluster state

1. Marks applications as `Synced` or `OutOfSync` 

1. Syncs changes automatically (if configured) or waits for manual approval

1. Monitors resource health after sync

 **Sync waves** control resource creation order using annotations:

```
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"  # Default if not specified
```

Resources are applied in wave order (lower numbers first, including negative numbers like `-1`). Wave `0` is the default if not specified. This allows you to create dependencies like namespaces (wave `-1`) before deployments (wave `0`) before services (wave `1`).
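
For example, the ordering described above can be expressed by annotating each resource (a sketch showing only the relevant fields):

```
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "-1"  # applied first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "0"   # applied after the namespace
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "1"   # applied last
```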

 **Self-healing** automatically reverts manual changes:

```
spec:
  syncPolicy:
    automated:
      selfHeal: true
```

**Note**  
The managed capability uses annotation-based resource tracking (not label-based) for better compatibility with Kubernetes conventions and other tools.

For detailed information about sync phases, hooks, and advanced patterns, see the [Argo CD sync documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/).

### Application health
<a name="_application_health"></a>

Argo CD monitors the health of all resources in your application:

 **Health statuses**:
+  **Healthy** - All resources running as expected
+  **Progressing** - Resources being created or updated
+  **Degraded** - Some resources not healthy (pods crashing, jobs failing)
+  **Suspended** - Application intentionally paused
+  **Missing** - Resources defined in Git not present in cluster

Argo CD has built-in health checks for common Kubernetes resources (Deployments, StatefulSets, Jobs, etc.) and supports custom health checks for CRDs.

Application health is determined by all its resources - if any resource is `Degraded`, the application is `Degraded`.

For more information, see [Resource Health](https://argo-cd.readthedocs.io/en/stable/operator-manual/health/) in the Argo CD documentation.

### Multi-cluster patterns
<a name="_multi_cluster_patterns"></a>

Argo CD supports two main deployment patterns:

 **Hub-and-spoke** - Run Argo CD on a dedicated management cluster that deploys to multiple workload clusters:
+ Centralized control and visibility
+ Consistent policies across all clusters
+ One Argo CD instance to manage
+ Clear separation between control plane and workloads

 **Per-cluster** - Run Argo CD on each cluster, managing only that cluster’s applications:
+ Cluster separation (one failure doesn’t affect others)
+ Simpler networking (no cross-cluster communication)
+ Easier initial setup (no cluster registration)

Choose hub-and-spoke for platform teams managing many clusters, or per-cluster for independent teams or when clusters must be fully isolated.
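
In a hub-and-spoke setup, the `ApplicationSet` cluster generator pairs naturally with registered clusters: it produces one Application per cluster known to Argo CD. A sketch (the repository URL and path are illustrative):

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: monitoring-agent
  namespace: argocd
spec:
  generators:
  - clusters: {}   # one Application per registered cluster
  template:
    metadata:
      name: 'monitoring-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/platform-config.git  # illustrative
        targetRevision: HEAD
        path: monitoring
      destination:
        name: '{{name}}'
        namespace: monitoring
```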

For detailed multi-cluster configuration, see [Argo CD considerations](argocd-considerations.md).

### Projects
<a name="_projects"></a>

Projects provide logical grouping and access control for Applications:
+  **Source restrictions** - Limit which Git repositories can be used
+  **Destination restrictions** - Limit which clusters and namespaces can be targeted
+  **Resource restrictions** - Limit which Kubernetes resource types can be deployed
+  **RBAC integration** - Map projects to AWS Identity Center user and group IDs

Applications belong to a single project. If not specified, they use the `default` project, which has no restrictions by default. For production use, edit the `default` project to restrict access and create new projects with appropriate restrictions.
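
For example, the `default` project can be locked down by replacing its unrestricted wildcards with empty allow lists. This is a hardening sketch; adjust the lists to what your environment actually needs:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: argocd
spec:
  description: Locked-down default project
  sourceNamespaces:
  - argocd
  sourceRepos: []              # Deny all repositories
  destinations: []             # Deny all destinations
  clusterResourceWhitelist: [] # Deny all cluster-scoped resources
```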

For project configuration and RBAC patterns, see [Configure Argo CD permissions](argocd-permissions.md).

### Sync options
<a name="_sync_options"></a>

Fine-tune sync behavior with common options:
+  `CreateNamespace=true` - Automatically create destination namespace
+  `ServerSideApply=true` - Use server-side apply for better conflict resolution
+  `SkipDryRunOnMissingResource=true` - Skip dry run when CRDs don’t exist yet (useful for kro instances)

```
spec:
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
    - ServerSideApply=true
    - SkipDryRunOnMissingResource=true
```

For a complete list of sync options, see the [Argo CD sync options documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/).

## Next steps
<a name="_next_steps"></a>
+  [Configure repository access](argocd-configure-repositories.md) - Configure Git repository access
+  [Register target clusters](argocd-register-clusters.md) - Register target clusters for deployment
+  [Create Applications](argocd-create-application.md) - Create your first Application
+  [Argo CD considerations](argocd-considerations.md) - EKS-specific patterns, Identity Center integration, and multi-cluster configuration
+  [Argo CD Documentation](https://argo-cd.readthedocs.io/en/stable/) - Comprehensive Argo CD documentation including sync hooks, health checks, and advanced patterns

# Configure Argo CD permissions
<a name="argocd-permissions"></a>

The Argo CD managed capability integrates with AWS Identity Center for authentication and uses built-in RBAC roles for authorization. This topic explains how to configure permissions for users and teams.

## How permissions work with Argo CD
<a name="_how_permissions_work_with_argo_cd"></a>

The Argo CD capability uses AWS Identity Center for authentication and provides three built-in RBAC roles for authorization.

When a user accesses Argo CD:

1. They authenticate using AWS Identity Center (which can federate to your corporate identity provider)

1.  AWS Identity Center provides user and group information to Argo CD

1. Argo CD maps users and groups to RBAC roles based on your configuration

1. Users see only the applications and resources they have permission to access

## Built-in RBAC roles
<a name="_built_in_rbac_roles"></a>

The Argo CD capability provides three built-in roles that you map to AWS Identity Center users and groups. These are **globally scoped roles** that control access to Argo CD resources like projects, clusters, and repositories.

**Important**  
Global roles control access to Argo CD itself, not to project-scoped resources like Applications. EDITOR and VIEWER users cannot see or manage Applications by default—they need project roles to access project-scoped resources. See [Project roles and project-scoped access](#project-roles) for details on granting access to Applications and other project-scoped resources.

 **ADMIN** 

Full access to all Argo CD resources and settings:
+ Create, update, and delete Applications and ApplicationSets in any project
+ Manage Argo CD configuration
+ Register and manage deployment target clusters
+ Configure repository access
+ Create and manage projects
+ View all application status and history
+ List and access all clusters and repositories

 **EDITOR** 

Can update projects and configure project roles, but cannot change global Argo CD settings:
+ Update existing projects (cannot create or delete projects)
+ Configure project roles and permissions
+ View GPG keys and certificates
+ Cannot change global Argo CD configuration
+ Cannot manage clusters or repositories directly
+ Cannot see or manage Applications without project roles

 **VIEWER** 

Read-only access to Argo CD resources:
+ View project configurations
+ List all projects (including projects the user is not assigned to)
+ View GPG keys and certificates
+ Cannot list clusters or repositories
+ Cannot make any changes
+ Cannot see or manage Applications without project roles

**Note**  
To grant EDITOR or VIEWER users access to Applications, an ADMIN or EDITOR must create project roles that map Identity Center groups to specific permissions within a project.

## Project roles and project-scoped access
<a name="project-roles"></a>

Global roles (ADMIN, EDITOR, VIEWER) control access to Argo CD itself. Project roles control access to resources and capabilities within a specific project, including:
+  **Resources**: Applications, ApplicationSets, repository credentials, cluster credentials
+  **Capabilities**: Log access, exec access to application pods

 **Understanding the two-level permission model**:
+  **Global scope**: Built-in roles determine what users can do with projects, clusters, repositories, and Argo CD settings
+  **Project scope**: Project roles determine what users can do with resources and capabilities within a specific project

This means:
+ ADMIN users can access all project resources and capabilities without additional configuration
+ EDITOR and VIEWER users must be granted project roles to access project resources and capabilities
+ EDITOR users can create project roles to grant themselves and others access within projects they can update

 **Example workflow**:

1. An ADMIN maps an Identity Center group to the EDITOR role globally

1. An ADMIN creates a project for a team

1. The EDITOR configures project roles within that project to grant team members access to project-scoped resources

1. Team members (who may have VIEWER global role) can now see and manage Applications in that project based on their project role permissions

For details on configuring project roles, see [Project-based access control](#_project_based_access_control).

## Configure role mappings
<a name="_configure_role_mappings"></a>

Map AWS Identity Center users and groups to Argo CD roles when creating or updating the capability.

 **Example role mapping**:

```
{
  "rbacRoleMapping": {
    "ADMIN": ["AdminGroup", "alice@example.com"],
    "EDITOR": ["DeveloperGroup", "DevOpsTeam"],
    "VIEWER": ["ReadOnlyGroup", "bob@example.com"]
  }
}
```

**Note**  
Role names are case-sensitive and must be uppercase (ADMIN, EDITOR, VIEWER).

**Important**  
EKS Capabilities integration with AWS Identity Center supports up to 1,000 identities per Argo CD capability. An identity can be a user or a group.

 **Update role mappings**:

```
aws eks update-capability \
  --region us-east-1 \
  --cluster-name my-cluster \
  --capability-name my-capability \
  --role-arn "arn:aws:iam::111122223333:role/EKSCapabilityRole" \
  --configuration '{
    "argoCd": {
      "rbacRoleMappings": {
        "addOrUpdateRoleMappings": [
          {
            "role": "ADMIN",
            "identities": [
              { "id": "686103e0-f051-7068-b225-e6392b959d9e", "type": "SSO_USER" }
            ]
          }
        ]
      }
    }
  }'
```

## Admin account usage
<a name="_admin_account_usage"></a>

The admin account is designed for initial setup and administrative tasks like registering clusters and configuring repositories.

 **When admin account is appropriate**:
+ Initial capability setup and configuration
+ Solo development or quick demonstrations
+ Administrative tasks (cluster registration, repository configuration, project creation)

 **Best practices for admin account**:
+ Don’t commit account tokens to version control
+ Rotate tokens immediately if exposed
+ Limit account token usage to setup and administrative tasks
+ Set short expiration times (maximum 12 hours)
+ Note that at most five account tokens can exist at any given time

 **When to use project-based access instead**:
+ Shared development environments with multiple users
+ Any environment that resembles production
+ When you need audit trails of who performed actions
+ When you need to enforce resource restrictions or access boundaries

For production environments and multi-user scenarios, use project-based access control with dedicated RBAC roles mapped to AWS Identity Center groups.

## Project-based access control
<a name="_project_based_access_control"></a>

Use Argo CD Projects (AppProject) to provide fine-grained access control and resource isolation for teams.

**Important**  
Before assigning users or groups to project-specific roles, you must first map them to a global Argo CD role (ADMIN, EDITOR, or VIEWER) in the capability configuration. Users cannot access Argo CD without a global role mapping, even if they’re assigned to project roles.  
Consider mapping users to the VIEWER role globally, then grant additional permissions through project-specific roles. This provides baseline access while allowing fine-grained control at the project level.

Projects provide:
+  **Source restrictions**: Limit which Git repositories can be used
+  **Destination restrictions**: Limit which clusters and namespaces can be targeted
+  **Resource restrictions**: Limit which Kubernetes resource types can be deployed
+  **RBAC integration**: Map projects to AWS Identity Center groups or Argo CD roles

 **Example project for team isolation**:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Team A applications

  # Required: Specify which namespaces this project watches for Applications
  sourceNamespaces:
  - argocd

  # Source restrictions
  sourceRepos:
  - https://github.com/myorg/team-a-apps

  # Destination restrictions
  destinations:
  - namespace: team-a-*
    server: arn:aws:eks:us-west-2:111122223333:cluster/production

  # Resource restrictions
  clusterResourceWhitelist:
  - group: ''
    kind: Namespace
  namespaceResourceWhitelist:
  - group: 'apps'
    kind: Deployment
  - group: ''
    kind: Service
  - group: ''
    kind: ConfigMap
```

### Source namespaces
<a name="_source_namespaces"></a>

When using the EKS Argo CD capability, the `spec.sourceNamespaces` field is required in AppProject definitions. This field specifies which namespace can contain Applications or ApplicationSets that reference this project.

**Important**  
The EKS Argo CD capability only supports a single namespace for Applications and ApplicationSets—the namespace you specified when creating the capability (typically `argocd`). This differs from open source Argo CD which supports multiple namespaces.

 **AppProject configuration** 

All AppProjects must include the capability’s configured namespace in `sourceNamespaces`:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a-project
  namespace: argocd
spec:
  description: Applications for Team A

  # Required: Specify the capability's configured namespace (configuration.argoCd.namespace)
  sourceNamespaces:
    - argocd  # Must match your capability's namespace configuration

  # Source repositories this project can deploy from
  sourceRepos:
    - 'https://github.com/my-org/team-a-*'

  # Destination restrictions
  destinations:
    - namespace: 'team-a-*'
      server: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
```

**Note**  
If you omit the capability’s namespace from `sourceNamespaces`, Applications or ApplicationSets in that namespace cannot reference this project, resulting in deployment failures.

 **Assign users to projects**:

Project roles grant EDITOR and VIEWER users access to project resources (Applications, ApplicationSets, repository and cluster credentials) and capabilities (logs, exec). Without project roles, these users cannot access these resources even if they have global role access.

ADMIN users have access to all Applications without needing project roles.

 **Example: Granting Application access to team members** 

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  # ... project configuration ...

  sourceNamespaces:
  - argocd

  # Project roles grant Application-level access
  roles:
  - name: developer
    description: Team A developers - can manage Applications
    policies:
    - p, proj:team-a:developer, applications, *, team-a/*, allow
    - p, proj:team-a:developer, clusters, get, *, allow  # See cluster names in UI
    groups:
    - 686103e0-f051-7068-b225-e6392b959d9e  # Identity Center group ID

  - name: viewer
    description: Team A viewers - read-only Application access
    policies:
    - p, proj:team-a:viewer, applications, get, team-a/*, allow
    - p, proj:team-a:viewer, clusters, get, *, allow  # See cluster names in UI
    groups:
    - 786203e0-f051-7068-b225-e6392b959d9f  # Identity Center group ID
```

**Note**  
Include `clusters, get, *, allow` in project roles to allow users to see cluster names in the UI. Without this permission, the destination cluster displays as "unknown".

 **Understanding project role policies**:

The policy format is: `p, proj:<project>:<role>, <resource>, <action>, <object>, <allow/deny>` 

 **Resource policies**:
+  `applications, *, team-a/*, allow` - Full access to all Applications in the team-a project
+  `applications, get, team-a/*, allow` - Read-only access to Applications
+  `applications, sync, team-a/*, allow` - Can sync Applications but not create/delete
+  `applications, delete, team-a/*, allow` - Can delete Applications (use with caution)
+  `applicationsets, *, team-a/*, allow` - Full access to ApplicationSets
+  `repositories, *, *, allow` - Access to repository credentials
+  `clusters, *, *, allow` - Access to cluster credentials

 **Capability policies**:
+  `logs, *, team-a/*, allow` - Access to application logs
+  `exec, *, team-a/*, allow` - Exec access to application pods

**Note**  
EDITOR users can create project roles to grant themselves and others permissions within projects they can update. This allows team leads to control access to project-scoped resources for their team without requiring ADMIN intervention.

**Note**  
Use Identity Center group IDs (not group names) in the `groups` field. You can also use Identity Center user IDs for individual user access. Find these IDs in the AWS Identity Center console or using the AWS CLI.

## Common permission patterns
<a name="_common_permission_patterns"></a>

 **Pattern 1: Admin team with full access** 

```
{
  "rbacRoleMapping": {
    "ADMIN": ["PlatformTeam", "SRETeam"]
  }
}
```

ADMIN users can see and manage all project-scoped resources without additional configuration.

 **Pattern 2: Team leads manage projects, developers access via project roles** 

```
{
  "rbacRoleMapping": {
    "ADMIN": ["PlatformTeam"],
    "EDITOR": ["TeamLeads"],
    "VIEWER": ["AllDevelopers"]
  }
}
```

1. ADMIN creates projects for each team

1. Team leads (EDITOR) configure project roles to grant their developers access to project resources (Applications, ApplicationSets, credentials) and capabilities (logs, exec)

1. Developers (VIEWER) can only access resources and capabilities allowed by their project roles

 **Pattern 3: Team-based access with project roles** 

1. ADMIN creates projects and maps team leads to EDITOR role globally

1. Team leads (EDITOR) assign team members to project roles within their projects

1. Team members only need VIEWER global role—project roles provide access to project resources and capabilities

```
{
  "rbacRoleMapping": {
    "ADMIN": ["PlatformTeam"],
    "EDITOR": ["TeamLeads"],
    "VIEWER": ["AllDevelopers"]
  }
}
```

## Best practices
<a name="_best_practices"></a>

 **Use groups instead of individual users**: Map AWS Identity Center groups to Argo CD roles rather than individual users for easier management.

 **Start with least privilege**: Begin with VIEWER access and grant EDITOR or ADMIN as needed.

 **Use projects for team isolation**: Create separate AppProjects for different teams or environments to enforce boundaries.

 **Leverage Identity Center federation**: Configure AWS Identity Center to federate with your corporate identity provider for centralized user management.

 **Regular access reviews**: Periodically review role mappings and project assignments to ensure appropriate access levels.

 **Limit cluster access**: Remember that Argo CD RBAC controls access to Argo CD resources and operations; it is separate from Kubernetes RBAC. Users with Argo CD access can deploy applications to any cluster that Argo CD has access to. Limit which clusters Argo CD can access and use project destination restrictions to control where applications can be deployed.

## AWS service permissions
<a name="shared_aws_service_permissions"></a>

To use AWS services directly in Application resources (without creating Repository resources), attach the required IAM permissions to the Capability Role.

 **ECR for Helm charts**:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```

 **CodeCommit repositories**:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull"
      ],
      "Resource": "arn:aws:codecommit:region:account-id:repository-name"
    }
  ]
}
```

 **CodeConnections (GitHub, GitLab, Bitbucket)**:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codeconnections:UseConnection"
      ],
      "Resource": "arn:aws:codeconnections:region:account-id:connection/connection-id"
    }
  ]
}
```

See [Configure repository access](argocd-configure-repositories.md) for details on using these integrations.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Learn how to create applications and manage deployments
+  [Argo CD concepts](argocd-concepts.md) - Understand Argo CD concepts including Projects
+  [Security considerations for EKS Capabilities](capabilities-security.md) - Review security best practices for capabilities

# Working with Argo CD
<a name="working-with-argocd"></a>

With Argo CD, you define applications in Git repositories and Argo CD automatically syncs them to your Kubernetes clusters. This enables declarative, version-controlled application deployment with automated drift detection.

## Prerequisites
<a name="_prerequisites"></a>

Before working with Argo CD, you need:
+ An EKS cluster with the Argo CD capability created (see [Create an Argo CD capability](create-argocd-capability.md))
+ A Git repository containing Kubernetes manifests
+  `kubectl` configured to communicate with your cluster

## Common tasks
<a name="_common_tasks"></a>

The following topics guide you through common Argo CD tasks:

 ** [Configure repository access](argocd-configure-repositories.md) ** - Configure Argo CD to access your Git repositories using AWS Secrets Manager, AWS CodeConnections, or Kubernetes Secrets.

 ** [Register target clusters](argocd-register-clusters.md) ** - Register target clusters where Argo CD will deploy applications.

 ** [Working with Argo CD Projects](argocd-projects.md) ** - Organize applications and enforce security boundaries using Projects for multi-tenant environments.

 ** [Create Applications](argocd-create-application.md) ** - Create Applications that deploy from Git repositories with automated or manual sync policies.

 ** [Use ApplicationSets](argocd-applicationsets.md) ** - Use ApplicationSets to deploy applications across multiple environments or clusters using templates and generators.

## Access the Argo CD UI
<a name="_access_the_argo_cd_ui"></a>

Access the Argo CD UI through the EKS console:

1. Open the Amazon EKS console

1. Select your cluster

1. Choose the **Capabilities** tab

1. Choose **Argo CD** 

1. Choose **Open Argo CD UI** 

The UI provides visual application topology, sync status and history, resource health and events, manual sync controls, and application management.

## Upstream documentation
<a name="_upstream_documentation"></a>

For detailed information about Argo CD features:
+  [Argo CD Documentation](https://argo-cd.readthedocs.io/) - Complete user guide
+  [Application Spec](https://argo-cd.readthedocs.io/en/stable/user-guide/application-specification/) - Full Application API reference
+  [ApplicationSet Guide](https://argo-cd.readthedocs.io/en/stable/user-guide/application-set/) - ApplicationSet patterns and examples
+  [Argo CD GitHub](https://github.com/argoproj/argo-cd) - Source code and examples

# Configure repository access
<a name="argocd-configure-repositories"></a>

Before deploying applications, configure Argo CD to access your Git repositories and Helm chart registries. Argo CD supports multiple authentication methods for GitHub, GitLab, Bitbucket, AWS CodeCommit, and AWS ECR.

**Note**  
For direct AWS service integrations (ECR Helm charts, CodeCommit repositories, and CodeConnections), you can reference them directly in Application resources without creating Repository configurations. The Capability Role must have the required IAM permissions. See [Configure Argo CD permissions](argocd-permissions.md) for details.

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS cluster with the Argo CD capability created
+ Git repositories containing Kubernetes manifests
+  `kubectl` configured to communicate with your cluster

**Note**  
AWS CodeConnections can connect to Git servers located in AWS Cloud or on-premises. For more information, see [AWS CodeConnections](https://docs.aws.amazon.com/codeconnections/latest/userguide/welcome.html).

## Authentication methods
<a name="_authentication_methods"></a>


| Method | Use Case | IAM Permissions Required | 
| --- | --- | --- | 
|   **Direct integration with AWS services**   |  |  | 
|  CodeCommit  |  Direct integration with AWS CodeCommit Git repositories. No Repository configuration needed.  |   `codecommit:GitPull`   | 
|  CodeConnections  |  Connect to GitHub, GitLab, or Bitbucket with managed authentication. Requires connection setup.  |   `codeconnections:UseConnection`   | 
|  ECR OCI Artifacts  |  Direct integration with AWS ECR for OCI Helm charts and manifest images. No Repository configuration needed.  |   `arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly`   | 
|   **Repository configuration with credentials**   |  |  | 
|   AWS Secrets Manager (Username/Token)  |  Store personal access tokens or passwords. Enables credential rotation without Kubernetes access.  |   `arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess`   | 
|   AWS Secrets Manager (SSH Key)  |  Use SSH key authentication. Enables credential rotation without Kubernetes access.  |   `arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess`   | 
|   AWS Secrets Manager (GitHub App)  |  GitHub App authentication with private key. Enables credential rotation without Kubernetes access.  |   `arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess`   | 
|  Kubernetes Secret  |  Standard Argo CD method using in-cluster secrets  |  None (permissions handled by EKS Access Entry with Kubernetes RBAC)  | 

## Direct access to AWS services
<a name="direct_access_to_shared_aws_services"></a>

For AWS services, you can reference them directly in Application resources without creating Repository configurations. The Capability Role must have the required IAM permissions.

### CodeCommit repositories
<a name="_codecommit_repositories"></a>

Reference CodeCommit repositories directly in Applications:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  source:
    repoURL: https://git-codecommit.region.amazonaws.com/v1/repos/repository-name
    targetRevision: main
    path: kubernetes/manifests
```

Required Capability Role permissions:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codecommit:GitPull",
      "Resource": "arn:aws:codecommit:region:account-id:repository-name"
    }
  ]
}
```

### CodeConnections
<a name="_codeconnections"></a>

Reference GitHub, GitLab, or Bitbucket repositories through CodeConnections. The repository URL is derived from the CodeConnections connection ARN:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  source:
    repoURL: https://codeconnections.region.amazonaws.com/git-http/account-id/region/connection-id/owner/repository.git
    targetRevision: main
    path: kubernetes/manifests
```

Required Capability Role permissions:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codeconnections:UseConnection",
      "Resource": "arn:aws:codeconnections:region:account-id:connection/connection-id"
    }
  ]
}
```

### ECR Helm charts
<a name="_ecr_helm_charts"></a>

ECR stores Helm charts as OCI artifacts. Argo CD supports two ways to reference them:

 **Helm format** (recommended for Helm charts):

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-helm
  namespace: argocd
spec:
  source:
    repoURL: account-id.dkr.ecr.region.amazonaws.com/repository-name
    targetRevision: chart-version
    chart: chart-name
    helm:
      valueFiles:
        - values.yaml
```

**Note**  
Do not include the `oci://` prefix when using Helm format. Use the `chart` field to specify the chart name.

 **OCI format** (for OCI artifacts with Kubernetes manifests):

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-oci
  namespace: argocd
spec:
  source:
    repoURL: oci://account-id.dkr.ecr.region.amazonaws.com/repository-name
    targetRevision: artifact-version
    path: path-to-manifests
```

**Note**  
Include the `oci://` prefix when using OCI format. Use the `path` field instead of `chart`.

Required Capability Role permissions - attach the managed policy:

```
arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly
```

This policy includes the necessary ECR permissions: `ecr:GetAuthorizationToken`, `ecr:BatchGetImage`, and `ecr:GetDownloadUrlForLayer`.
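
For reference, charts are typically published to ECR with the Helm CLI before Argo CD can deploy them. The commands below are a sketch; they assume an existing ECR repository named `my-chart` and use placeholder account and Region values:

```
# Authenticate the Helm client to ECR
aws ecr get-login-password --region us-west-2 | \
  helm registry login --username AWS --password-stdin 111122223333.dkr.ecr.us-west-2.amazonaws.com

# Package the chart and push it as an OCI artifact
helm package ./my-chart
helm push my-chart-0.1.0.tgz oci://111122223333.dkr.ecr.us-west-2.amazonaws.com
```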

## Using AWS Secrets Manager
<a name="using_shared_aws_secrets_manager"></a>

Store repository credentials in Secrets Manager and reference them in Argo CD Repository configurations. Using Secrets Manager enables automated credential rotation without requiring Kubernetes RBAC access—credentials can be rotated using IAM permissions to Secrets Manager, and Argo CD automatically reads the updated values.

**Note**  
For credential reuse across multiple repositories (for example, all repositories under a GitHub organization), use repository credential templates with `argocd.argoproj.io/secret-type: repo-creds`. This provides better UX than creating individual repository secrets. For more information, see [Repository Credentials](https://argo-cd.readthedocs.io/en/stable/operator-manual/argocd-repo-creds-yaml/) in the Argo CD documentation.

### Username and token authentication
<a name="_username_and_token_authentication"></a>

For HTTPS repositories with personal access tokens or passwords:

 **Create the secret in Secrets Manager**:

```
aws secretsmanager create-secret \
  --name argocd/my-repo \
  --description "GitHub credentials for Argo CD" \
  --secret-string '{"username":"your-username","token":"your-personal-access-token"}'
```

 **Optional TLS client certificate fields** (for private Git servers):

```
aws secretsmanager create-secret \
  --name argocd/my-private-repo \
  --secret-string '{
    "username":"your-username",
    "token":"your-token",
    "tlsClientCertData":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCi4uLgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t",
    "tlsClientCertKey":"LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCi4uLgotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0t"
  }'
```

**Note**  
The `tlsClientCertData` and `tlsClientCertKey` values must be base64 encoded.

 **Create a Repository Secret referencing Secrets Manager**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/your-org/your-repo
  secretArn: arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/my-repo-AbCdEf
  project: default
```
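
Because the Repository Secret stores only the secret ARN, rotating the token is purely a Secrets Manager operation and requires no change in the cluster. For example, assuming the secret created above:

```
aws secretsmanager put-secret-value \
  --secret-id argocd/my-repo \
  --secret-string '{"username":"your-username","token":"your-new-personal-access-token"}'
```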

### SSH key authentication
<a name="_ssh_key_authentication"></a>

For SSH-based Git access, store the private key as plaintext (not JSON):

 **Create the secret with SSH private key**:

```
aws secretsmanager create-secret \
  --name argocd/my-repo-ssh \
  --description "SSH key for Argo CD" \
  --secret-string "-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
...
-----END OPENSSH PRIVATE KEY-----"
```

 **Create a Repository Secret for SSH**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo-ssh
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:your-org/your-repo.git
  secretArn: arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/my-repo-ssh-AbCdEf
  project: default
```

### GitHub App authentication
<a name="_github_app_authentication"></a>

For GitHub App authentication with a private key:

 **Create the secret with GitHub App credentials**:

```
aws secretsmanager create-secret \
  --name argocd/github-app \
  --description "GitHub App credentials for Argo CD" \
  --secret-string '{
    "githubAppPrivateKeySecret":"LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQouLi4KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0=",
    "githubAppID":"123456",
    "githubAppInstallationID":"12345678"
  }'
```

**Note**  
The `githubAppPrivateKeySecret` value must be base64 encoded.

 **Optional field for GitHub Enterprise**:

```
aws secretsmanager create-secret \
  --name argocd/github-enterprise-app \
  --secret-string '{
    "githubAppPrivateKeySecret":"LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQouLi4KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0=",
    "githubAppID":"123456",
    "githubAppInstallationID":"12345678",
    "githubAppEnterpriseBaseUrl":"https://github.example.com/api/v3"
  }'
```

 **Create a Repository Secret for GitHub App**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo-github-app
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/your-org/your-repo
  secretArn: arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/github-app-AbCdEf
  project: default
```

### Repository credential templates
<a name="_repository_credential_templates"></a>

For credential reuse across multiple repositories (for example, all repositories under a GitHub organization or user), use repository credential templates with `argocd.argoproj.io/secret-type: repo-creds`. This provides better UX than creating individual repository secrets for each repository.

 **Create a repository credential template**:

```
apiVersion: v1
kind: Secret
metadata:
  name: github-org-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
stringData:
  type: git
  url: https://github.com/your-org
  secretArn: arn:aws:secretsmanager:us-west-2:111122223333:secret:argocd/github-org-AbCdEf
```

This credential template applies to all repositories matching the URL prefix `https://github.com/your-org`. You can then reference any repository under this organization in Applications without creating additional secrets.
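
For example, with the credential template above in place, an Application can reference any repository under the organization directly (the repository, path, and cluster values here are hypothetical):

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: service-a
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/service-a   # Matches the repo-creds URL prefix
    targetRevision: main
    path: manifests
  destination:
    server: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
    namespace: service-a
```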

For more information, see [Repository Credentials](https://argo-cd.readthedocs.io/en/stable/operator-manual/argocd-repo-creds-yaml/) in the Argo CD documentation.

**Important**  
Ensure your IAM Capability Role has the managed policy `arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess` attached, or equivalent permissions including `secretsmanager:GetSecretValue` and KMS decrypt permissions. See [Argo CD considerations](argocd-considerations.md) for IAM policy configuration.

## Using AWS CodeConnections
<a name="using_shared_aws_codeconnections"></a>

For CodeConnections integration, see [Connect to Git repositories with AWS CodeConnections](integration-codeconnections.md).

CodeConnections provides managed authentication for GitHub, GitLab, and Bitbucket without storing credentials.

## Using Kubernetes Secrets
<a name="_using_kubernetes_secrets"></a>

Store credentials directly in Kubernetes using the standard Argo CD method.

 **For HTTPS with personal access token**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/your-org/your-repo
  username: your-username
  password: your-personal-access-token
```

 **For SSH**:

```
apiVersion: v1
kind: Secret
metadata:
  name: my-repo-ssh
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:your-org/your-repo.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ... your private key ...
    -----END OPENSSH PRIVATE KEY-----
```

## CodeCommit repositories
<a name="_codecommit_repositories_2"></a>

For AWS CodeCommit, grant your IAM Capability Role CodeCommit permissions (`codecommit:GitPull`).
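
A minimal identity-policy statement for this might look like the following sketch, where the repository ARN is a placeholder:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "codecommit:GitPull",
      "Resource": "arn:aws:codecommit:us-west-2:111122223333:my-repo"
    }
  ]
}
```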

Configure the repository:

```
apiVersion: v1
kind: Secret
metadata:
  name: codecommit-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/my-repo
  project: default
```

For detailed IAM policy configuration, see [Argo CD considerations](argocd-considerations.md).

## Verify repository connection
<a name="_verify_repository_connection"></a>

Check connection status through the Argo CD UI under Settings → Repositories. The UI shows connection status and any authentication errors.

Repository Secrets do not include status information.
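
You can still list the configured repository Secrets from the CLI (this shows the configuration, not the connection status):

```
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=repository
```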

## Additional resources
<a name="_additional_resources"></a>
+  [Register target clusters](argocd-register-clusters.md) - Register target clusters for deployments
+  [Create Applications](argocd-create-application.md) - Create your first Application
+  [Argo CD considerations](argocd-considerations.md) - IAM permissions and security configuration
+  [Private Repositories](https://argo-cd.readthedocs.io/en/stable/user-guide/private-repositories/) - Upstream repository configuration reference

# Register target clusters
<a name="argocd-register-clusters"></a>

Register clusters to enable Argo CD to deploy applications to them. You can register the same cluster where Argo CD is running (the local cluster) or remote clusters in different accounts or Regions. A registered cluster remains in an Unknown connection state until you create an Application that targets it. To create an Argo CD Application after your cluster is registered, see [Create Applications](argocd-create-application.md).

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS cluster with the Argo CD capability created
+  `kubectl` configured to communicate with your cluster
+ For remote clusters: appropriate IAM permissions and access entries

## Register the local cluster
<a name="_register_the_local_cluster"></a>

To deploy applications to the same cluster where Argo CD is running, register it as a deployment target.

**Important**  
The Argo CD capability does not automatically register the local cluster. You must explicitly register it to deploy applications to the same cluster. You can use the cluster name `in-cluster` for compatibility with most Argo CD examples online.

**Note**  
An EKS Access Entry is automatically created for the local cluster with the Argo CD Capability Role, but no Kubernetes RBAC permissions are granted by default. This follows the principle of least privilege—you must explicitly configure the permissions Argo CD needs based on your use case. For example, if you only use this cluster as an Argo CD hub to manage remote clusters, it doesn’t need any local deployment permissions. See the Access Entry RBAC requirements section below for configuration options.

 **Using the Argo CD CLI**:

```
argocd cluster add <cluster-context-name> \
  --aws-cluster-name arn:aws:eks:us-west-2:111122223333:cluster/my-cluster \
  --name local-cluster
```

 **Using a Kubernetes Secret**:

```
apiVersion: v1
kind: Secret
metadata:
  name: local-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: local-cluster
  server: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
  project: default
```

Apply the configuration:

```
kubectl apply -f local-cluster.yaml
```

**Note**  
Use the EKS cluster ARN in the `server` field, not the Kubernetes API server URL. The managed capability requires ARNs to identify clusters. The default `kubernetes.default.svc` is not supported.

## Register remote clusters
<a name="_register_remote_clusters"></a>

To deploy to remote clusters:

 **Step 1: Create the access entry on the remote cluster** 

Replace *region-code* with the AWS Region that your remote cluster is in, replace *remote-cluster* with the name of your remote cluster, and replace the ARN with your Argo CD capability role ARN.

```
aws eks create-access-entry \
  --region region-code \
  --cluster-name remote-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --type STANDARD
```

 **Step 2: Associate an access policy with Kubernetes RBAC permissions** 

The Access Entry requires Kubernetes RBAC permissions for Argo CD to deploy applications. For getting started quickly, you can use the `AmazonEKSClusterAdminPolicy`:

```
aws eks associate-access-policy \
  --region region-code \
  --cluster-name remote-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```

**Important**  
The `AmazonEKSClusterAdminPolicy` provides full cluster-admin access (equivalent to `system:masters`). This is convenient for getting started but should not be used in production. For production environments, use more restrictive permissions by associating the Access Entry with custom Kubernetes groups and creating appropriate Role or ClusterRole bindings. See the production setup section below for least privilege configuration.

 **Step 3: Register the cluster in Argo CD** 

 **Using the Argo CD CLI**:

```
argocd cluster add <cluster-context-name> \
  --aws-cluster-name arn:aws:eks:us-west-2:111122223333:cluster/remote-cluster \
  --name remote-cluster
```

 **Using a Kubernetes Secret**:

```
apiVersion: v1
kind: Secret
metadata:
  name: remote-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: remote-cluster
  server: arn:aws:eks:us-west-2:111122223333:cluster/remote-cluster
  project: default
```

Apply the configuration:

```
kubectl apply -f remote-cluster.yaml
```

## Cross-account clusters
<a name="_cross_account_clusters"></a>

To deploy to clusters in different AWS accounts:

1. In the target account, create an Access Entry on the target EKS cluster using the Argo CD IAM Capability Role ARN from the source account as the principal

1. Associate an access policy with appropriate Kubernetes RBAC permissions

1. Register the cluster in Argo CD using its EKS cluster ARN

No additional IAM role creation or trust policy configuration is required—EKS Access Entries handle cross-account access.

The cluster ARN format includes the region, so cross-region deployments use the same process as same-region deployments.

## Verify cluster registration
<a name="_verify_cluster_registration"></a>

View registered clusters:

```
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster
```

Or check cluster status in the Argo CD UI under Settings → Clusters.

## Private clusters
<a name="_private_clusters"></a>

The Argo CD capability provides transparent access to fully private EKS clusters without requiring VPC peering or specialized networking configuration. AWS manages connectivity between the Argo CD capability and private remote clusters automatically, so you can register a private cluster using its ARN with no additional networking setup.

## Access Entry RBAC requirements
<a name="_access_entry_rbac_requirements"></a>

When you create an Argo CD capability, an EKS Access Entry is automatically created for the Capability Role, but no Kubernetes RBAC permissions are granted by default. This intentional design follows the principle of least privilege—different use cases require different permissions.

For example:
+ If you use the cluster only as an Argo CD hub to manage remote clusters, it doesn’t need local deployment permissions
+ If you deploy applications locally, it needs cluster-wide read access and write access to specific namespaces
+ If you need to create CRDs, it requires additional cluster-admin permissions

You must explicitly configure the permissions Argo CD needs based on your requirements.

### Minimum permissions for Argo CD
<a name="_minimum_permissions_for_argo_cd"></a>

Argo CD needs two types of permissions to function without errors:

 **Read permissions (cluster-wide)**: Argo CD must be able to read all resource types and Custom Resource Definitions (CRDs) across the cluster for:
+ Resource discovery and health checks
+ Detecting drift between desired and actual state
+ Validating resources before deployment

 **Write permissions (namespace-specific)**: Argo CD needs create, update, and delete permissions for resources defined in Applications:
+ Deploy application workloads (Deployments, Services, ConfigMaps, etc.)
+ Apply Custom Resources (CRDs specific to your applications)
+ Manage application lifecycle

### Quick setup
<a name="_quick_setup"></a>

For getting started quickly, testing, or development environments, use `AmazonEKSClusterAdminPolicy`:

```
aws eks associate-access-policy \
  --region region-code \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```

**Important**  
The `AmazonEKSClusterAdminPolicy` provides full cluster-admin access (equivalent to `system:masters`), including the ability to create CRDs, modify cluster-wide resources, and deploy to any namespace. This is convenient for development and POCs but should not be used in production. For production, use the least privilege setup below.

### Production setup with least privilege
<a name="_production_setup_with_least_privilege"></a>

For production environments, create custom Kubernetes RBAC that grants:
+ Cluster-wide read access to all resources (for discovery and health checks)
+ Namespace-specific write access (for deployments)

 **Step 1: Associate a namespace-scoped access policy for write access** 

```
aws eks associate-access-policy \
  --region region-code \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
  --access-scope type=namespace,namespaces=app-namespace
```

 **Step 2: Create ClusterRole for read access** 

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-read-all
rules:
# Read access to all resources for discovery and health checks
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
```

 **Step 3: Create Role for write access to application namespaces** 

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-deploy
  namespace: app-namespace
rules:
# Full access to deploy application resources
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```

 **Step 4: Bind roles to the Kubernetes group** 

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-read-all
subjects:
- kind: Group
  name: eks-access-entry:arn:aws:iam::111122223333:role/ArgoCDCapabilityRole
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: argocd-read-all
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-deploy
  namespace: app-namespace
subjects:
- kind: Group
  name: eks-access-entry:arn:aws:iam::111122223333:role/ArgoCDCapabilityRole
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: argocd-deploy
  apiGroup: rbac.authorization.k8s.io
```

**Note**  
The group name format for Access Entries is `eks-access-entry:` followed by the principal ARN. Repeat the RoleBinding for each namespace where Argo CD should deploy applications.

**Important**  
Argo CD must be able to read all resource types across the cluster for health checks and discovery, even if it only deploys to specific namespaces. Without cluster-wide read access, Argo CD will show errors when checking application health.

## Restrict cluster access with Projects
<a name="_restrict_cluster_access_with_projects"></a>

Use Projects to control which clusters and namespaces Applications can deploy to by configuring the allowed target clusters and namespaces in `spec.destinations`:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production
  namespace: argocd
spec:
  destinations:
  - server: arn:aws:eks:us-west-2:111122223333:cluster/prod-cluster
    namespace: '*'
  - server: arn:aws:eks:eu-west-1:111122223333:cluster/prod-eu-cluster
    namespace: '*'
  sourceRepos:
  - 'https://github.com/example/production-apps'
```

For details, see [Working with Argo CD Projects](argocd-projects.md).

## Additional resources
<a name="_additional_resources"></a>
+  [Working with Argo CD Projects](argocd-projects.md) - Organize applications and enforce security boundaries
+  [Create Applications](argocd-create-application.md) - Deploy your first application
+  [Use ApplicationSets](argocd-applicationsets.md) - Deploy to multiple clusters with ApplicationSets
+  [Argo CD considerations](argocd-considerations.md) - Multi-cluster patterns and cross-account setup
+  [Declarative Cluster Setup](https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#clusters) - Upstream cluster configuration reference

# Working with Argo CD Projects
<a name="argocd-projects"></a>

Argo CD Projects (AppProject) provide logical grouping and access control for Applications. Projects define which Git repositories, target clusters, and namespaces Applications can use, enabling multi-tenancy and security boundaries in shared Argo CD instances.

## When to use Projects
<a name="_when_to_use_projects"></a>

Use Projects to:
+ Separate applications by team, environment, or business unit
+ Restrict which repositories teams can deploy from
+ Limit which clusters and namespaces teams can deploy to
+ Enforce resource quotas and allowed resource types
+ Provide self-service application deployment with guardrails

## Default Project
<a name="_default_project"></a>

Every Argo CD capability includes a `default` project that allows access to all repositories, clusters, and namespaces. The default project is useful for initial testing, but for production use, create dedicated projects with explicit restrictions.

For details on the default project configuration and how to restrict it, see [The Default Project](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#the-default-project) in the Argo CD documentation.

## Create a Project
<a name="_create_a_project"></a>

Create a Project by applying an `AppProject` resource to your cluster.

 **Example: Team-specific Project** 

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Applications for Team A

  # Source repositories this project can deploy from
  sourceRepos:
    - 'https://github.com/my-org/team-a-*'
    - 'https://github.com/my-org/shared-libs'


  # Source namespaces (required for EKS capability)
  sourceNamespaces:
    - argocd
    - team-a-dev
    - team-a-prod

  # Destination clusters and namespaces
  destinations:
    - name: dev-cluster
      namespace: team-a-dev
    - name: prod-cluster
      namespace: team-a-prod

  # Allowed resource types
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace

  namespaceResourceWhitelist:
    - group: 'apps'
      kind: Deployment
    - group: ''
      kind: Service
    - group: ''
      kind: ConfigMap
```

Apply the Project:

```
kubectl apply -f team-a-project.yaml
```

## Project configuration
<a name="_project_configuration"></a>

### Source repositories
<a name="_source_repositories"></a>

Control which Git repositories Applications in this project can use:

```
spec:
  sourceRepos:
    - 'https://github.com/my-org/app-*'  # Wildcard pattern
    - 'https://github.com/my-org/infra'  # Specific repo
```

You can use wildcards and negation patterns (`!` prefix) to allow or deny specific repositories. For details, see [Managing Projects](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#managing-projects) in the Argo CD documentation.
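
For example, to allow every repository in an organization except one (repository names are placeholders):

```
spec:
  sourceRepos:
    - 'https://github.com/my-org/*'
    - '!https://github.com/my-org/legacy-repo'  # deny this repository
```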

### Source namespaces
<a name="_source_namespaces"></a>

When using the EKS Argo CD capability, the `spec.sourceNamespaces` field is **required** in your custom AppProject definition. This field specifies which namespaces can contain Applications or ApplicationSets that reference this project:

**Important**  
This is a required field for EKS Argo CD capability, which differs from OSS Argo CD where this field is optional.

#### Default AppProject behavior
<a name="_default_appproject_behavior"></a>

The `default` AppProject automatically includes the `argocd` namespace in `sourceNamespaces`. If you need to create Applications or ApplicationSets in additional namespaces, modify the `sourceNamespaces` field to add those namespaces:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: argocd
spec:
  sourceNamespaces:
    - argocd           # Already included by default
    - team-a-apps      # Add additional namespaces as needed
    - team-b-apps
```

#### Custom AppProject configuration
<a name="_custom_appproject_configuration"></a>

When creating a custom AppProject, you must manually include the `argocd` system namespace and any other namespaces where you plan to create Applications or ApplicationSets:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a-project
  namespace: argocd
spec:
  description: Applications for Team A

  # Required: Manually specify all namespaces
  sourceNamespaces:
    - argocd           # ArgoCD system namespace (required)
    - team-a-dev       # Custom namespace for dev Applications
    - team-a-prod      # Custom namespace for prod Applications

  # Source repositories this project can deploy from
  sourceRepos:
    - 'https://github.com/my-org/team-a-*'

  # Destination restrictions
  destinations:
    - namespace: 'team-a-*'
      server: arn:aws:eks:us-west-2:111122223333:cluster/my-cluster
```

**Note**  
If you omit a namespace from `sourceNamespaces`, Applications or ApplicationSets created in that namespace will not be able to reference this project, resulting in deployment failures.

### Destination restrictions
<a name="_destination_restrictions"></a>

Limit where Applications can deploy:

```
spec:
  destinations:
    - name: prod-cluster  # Specific cluster by name
      namespace: production
    - name: '*'  # Any cluster
      namespace: team-a-*  # Namespace pattern
```

**Important**  
Use specific cluster names and namespace patterns rather than wildcards for production Projects. This prevents accidental deployments to unauthorized clusters or namespaces.

You can use wildcards and negation patterns to control destinations. For details, see [Managing Projects](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#managing-projects) in the Argo CD documentation.

### Resource restrictions
<a name="_resource_restrictions"></a>

Control which Kubernetes resource types can be deployed:

 **Cluster-scoped resources**:

```
spec:
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace
    - group: 'rbac.authorization.k8s.io'
      kind: Role
```

 **Namespace-scoped resources**:

```
spec:
  namespaceResourceWhitelist:
    - group: 'apps'
      kind: Deployment
    - group: ''
      kind: Service
    - group: ''
      kind: ConfigMap
    - group: 's3.services.k8s.aws'
      kind: Bucket
```

Use blacklists to deny specific resources:

```
spec:
  namespaceResourceBlacklist:
    - group: ''
      kind: Secret  # Prevent direct Secret creation
```

## Assign Applications to Projects
<a name="_assign_applications_to_projects"></a>

When creating an Application, specify the project in the `spec.project` field:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: team-a  # Assign to team-a project
  source:
    repoURL: https://github.com/my-org/my-app
    path: manifests
  destination:
    name: prod-cluster
    namespace: team-a-prod
```

Applications without a specified project use the `default` project.

## Project roles and RBAC
<a name="_project_roles_and_rbac"></a>

Projects can define custom roles for fine-grained access control. Map project roles to AWS Identity Center users and groups in your capability configuration to control who can sync, update, or delete applications.

 **Example: Project with developer and admin roles** 

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  sourceRepos:
    - '*'
  destinations:
    - name: '*'
      namespace: 'team-a-*'

  roles:
    - name: developer
      description: Developers can sync applications
      policies:
        - p, proj:team-a:developer, applications, sync, team-a/*, allow
        - p, proj:team-a:developer, applications, get, team-a/*, allow
      groups:
        - team-a-developers

    - name: admin
      description: Admins have full access
      policies:
        - p, proj:team-a:admin, applications, *, team-a/*, allow
      groups:
        - team-a-admins
```

For details on project roles, JWT tokens for CI/CD pipelines, and RBAC configuration, see [Project Roles](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#project-roles) in the Argo CD documentation.

## Common patterns
<a name="_common_patterns"></a>

### Environment-based Projects
<a name="_environment_based_projects"></a>

Create separate projects for each environment:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production
  namespace: argocd
spec:
  sourceRepos:
    - 'https://github.com/my-org/*'
  destinations:
    - name: prod-cluster
      namespace: '*'
  # Strict resource controls for production
  clusterResourceWhitelist: []
  namespaceResourceWhitelist:
    - group: 'apps'
      kind: Deployment
    - group: ''
      kind: Service
```

### Team-based Projects
<a name="_team_based_projects"></a>

Isolate teams with dedicated projects:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform-team
  namespace: argocd
spec:
  sourceRepos:
    - 'https://github.com/my-org/platform-*'
  destinations:
    - name: '*'
      namespace: 'platform-*'
  # Platform team can manage cluster resources
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
```

### Multi-cluster Projects
<a name="_multi_cluster_projects"></a>

Deploy to multiple clusters with consistent policies:

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: global-app
  namespace: argocd
spec:
  sourceRepos:
    - 'https://github.com/my-org/global-app'
  destinations:
    - name: us-west-cluster
      namespace: app
    - name: eu-west-cluster
      namespace: app
    - name: ap-south-cluster
      namespace: app
```

## Best practices
<a name="_best_practices"></a>

 **Start with restrictive Projects**: Begin with narrow permissions and expand as needed rather than starting with broad access.

 **Use namespace patterns**: Leverage wildcards in namespace restrictions (like `team-a-*`) to allow flexibility while maintaining boundaries.

 **Separate production Projects**: Use dedicated Projects for production with stricter controls and manual sync policies.

 **Document Project purposes**: Use the `description` field to explain what each Project is for and who should use it.

 **Review Project permissions regularly**: Audit Projects periodically to ensure restrictions still align with team needs and security requirements.

## Additional resources
<a name="_additional_resources"></a>
+  [Configure Argo CD permissions](argocd-permissions.md) - Configure RBAC and Identity Center integration
+  [Create Applications](argocd-create-application.md) - Create Applications within Projects
+  [Use ApplicationSets](argocd-applicationsets.md) - Use ApplicationSets with Projects for multi-cluster deployments
+  [Argo CD Projects Documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/projects/) - Complete upstream reference

# Create Applications
<a name="argocd-create-application"></a>

Applications represent deployments in target clusters. Each Application defines a source (such as a Git repository) and a destination (cluster and namespace). When applied, Argo CD creates the resources defined by the source manifests in the target namespace of the destination cluster. Applications often deploy workloads, but they can manage any Kubernetes resource available in the destination cluster.

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS cluster with the Argo CD capability created
+ Repository access configured (see [Configure repository access](argocd-configure-repositories.md))
+ Target cluster registered (see [Register target clusters](argocd-register-clusters.md))
+  `kubectl` configured to communicate with your cluster

## Create a basic Application
<a name="_create_a_basic_application"></a>

Define an Application that deploys from a Git repository:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: guestbook
  destination:
    name: in-cluster
    namespace: default
```

**Note**  
Use `destination.name` with the cluster name you used when registering the cluster (like `in-cluster` for the local cluster). The `destination.server` field also works with EKS cluster ARNs, but using cluster names is recommended for better readability.

Apply the Application:

```
kubectl apply -f application.yaml
```

View the Application status:

```
kubectl get application guestbook -n argocd
```

## Source configuration
<a name="_source_configuration"></a>

 **Git repository**:

```
spec:
  source:
    repoURL: https://github.com/example/my-app
    targetRevision: main
    path: kubernetes/manifests
```

 **Specific Git tag or commit**:

```
spec:
  source:
    targetRevision: v1.2.0  # or commit SHA
```

 **Helm chart**:

```
spec:
  source:
    repoURL: https://github.com/example/helm-charts
    targetRevision: main
    path: charts/my-app
    helm:
      valueFiles:
      - values.yaml
      parameters:
      - name: image.tag
        value: v1.2.0
```

 **Helm chart with values from external Git repository** (multi-source pattern):

```
spec:
  sources:
  - repoURL: https://github.com/example/helm-charts
    targetRevision: main
    path: charts/my-app
    helm:
      valueFiles:
      - $values/environments/production/values.yaml
  - repoURL: https://github.com/example/config-repo
    targetRevision: main
    ref: values
```

For more information, see [Helm Value Files from External Git Repository](https://argo-cd.readthedocs.io/en/stable/user-guide/multiple_sources/#helm-value-files-from-external-git-repository) in the Argo CD documentation.

 **Helm chart from ECR**:

```
spec:
  source:
    repoURL: oci://account-id.dkr.ecr.region.amazonaws.com/repository-name
    targetRevision: chart-version
    chart: chart-name
```

If the Capability Role has the required ECR permissions, the repository is used directly and no Repository configuration is required. See [Configure repository access](argocd-configure-repositories.md) for details.

 **Git repository from CodeCommit**:

```
spec:
  source:
    repoURL: https://git-codecommit.region.amazonaws.com/v1/repos/repository-name
    targetRevision: main
    path: kubernetes/manifests
```

If the Capability Role has the required CodeCommit permissions, the repository is used directly and no Repository configuration is required. See [Configure repository access](argocd-configure-repositories.md) for details.

 **Git repository from CodeConnections**:

```
spec:
  source:
    repoURL: https://codeconnections.region.amazonaws.com/git-http/account-id/region/connection-id/owner/repository.git
    targetRevision: main
    path: kubernetes/manifests
```

The repository URL format is derived from the CodeConnections connection ARN. If the Capability Role has the required CodeConnections permissions and a connection is configured, the repository is used directly and no Repository configuration is required. See [Configure repository access](argocd-configure-repositories.md) for details.

 **Kustomize**:

```
spec:
  source:
    repoURL: https://github.com/example/kustomize-app
    targetRevision: main
    path: overlays/production
    kustomize:
      namePrefix: prod-
```

## Sync policies
<a name="_sync_policies"></a>

Control how Argo CD syncs applications.

 **Manual sync (default)**:

Applications require manual approval to sync:

```
spec:
  syncPolicy: {}  # No automated sync
```

Manually trigger sync:

```
kubectl patch application guestbook -n argocd \
  --type merge \
  --patch '{"operation": {"initiatedBy": {"username": "admin"}, "sync": {}}}'
```
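
If you use the Argo CD CLI, the equivalent sync can be triggered with:

```
argocd app sync guestbook
```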

 **Automatic sync**:

Applications automatically sync when Git changes are detected:

```
spec:
  syncPolicy:
    automated: {}
```

 **Self-healing**:

Automatically revert manual changes to the cluster:

```
spec:
  syncPolicy:
    automated:
      selfHeal: true
```

When enabled, Argo CD reverts any manual changes made directly to the cluster, ensuring Git remains the source of truth.

 **Pruning**:

Automatically delete resources removed from Git:

```
spec:
  syncPolicy:
    automated:
      prune: true
```

**Warning**  
Pruning will delete resources from your cluster. Use with caution in production environments.

 **Combined automated sync**:

```
spec:
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
```

 **Retry configuration**:

Configure retry behavior for failed syncs:

```
spec:
  syncPolicy:
    retry:
      limit: 5  # Number of failed sync attempts; unlimited if less than 0
      backoff:
        duration: 5s  # Amount to back off (default unit: seconds, also supports "2m", "1h")
        factor: 2  # Factor to multiply the base duration after each failed retry
        maxDuration: 3m  # Maximum amount of time allowed for the backoff strategy
```

This is particularly useful for resources that depend on CRDs being created first, or when working with kro instances where the CRD may not be immediately available.

## Sync options
<a name="_sync_options"></a>

Additional sync configuration:

 **Create namespace if it doesn’t exist**:

```
spec:
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
```

 **Skip dry run for missing resources**:

Useful when applying resources that depend on CRDs that don’t exist yet (like kro instances):

```
spec:
  syncPolicy:
    syncOptions:
    - SkipDryRunOnMissingResource=true
```

This sync option can also be applied to specific resources by adding the `argocd.argoproj.io/sync-options` annotation to the resource itself.
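As a sketch, the upstream Argo CD documentation applies the option per-resource with the `argocd.argoproj.io/sync-options` annotation (the custom resource kind and name here are illustrative):

```
apiVersion: example.com/v1alpha1
kind: MyCustomResource
metadata:
  name: example
  annotations:
    # Skip the dry run for this resource only, since its CRD
    # may not exist until an earlier sync wave creates it
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
```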

 **Validate resources before applying**:

```
spec:
  syncPolicy:
    syncOptions:
    - Validate=true
```

 **Apply out of sync only**:

```
spec:
  syncPolicy:
    syncOptions:
    - ApplyOutOfSyncOnly=true
```

## Advanced sync features
<a name="_advanced_sync_features"></a>

Argo CD supports advanced sync features for complex deployments:
+  **Sync waves** - Control resource creation order with `argocd.argoproj.io/sync-wave` annotations
+  **Sync hooks** - Run jobs before or after sync with `argocd.argoproj.io/hook` annotations (PreSync, PostSync, SyncFail)
+  **Resource health assessment** - Custom health checks for application-specific resources

For details, see [Sync Waves](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/) and [Resource Hooks](https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks/) in the Argo CD documentation.
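As a sketch of how these annotations combine, the following Job runs as a PreSync hook in an early sync wave, so it completes before the rest of the application is applied (the image and resource names are illustrative):

```
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    # Run before the main sync
    argocd.argoproj.io/hook: PreSync
    # Delete the Job after it succeeds
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
    # Negative wave runs before resources in wave 0 (the default)
    argocd.argoproj.io/sync-wave: "-1"
spec:
  template:
    spec:
      containers:
      - name: migrate
        image: example/db-migrate:latest
      restartPolicy: Never
```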

## Ignore differences
<a name="_ignore_differences"></a>

Prevent Argo CD from syncing specific fields that are managed by other controllers (like HPA managing replicas):

```
spec:
  ignoreDifferences:
  - group: apps
    kind: Deployment
    jsonPointers:
    - /spec/replicas
```

For details on ignore patterns and field exclusions, see [Diffing Customization](https://argo-cd.readthedocs.io/en/stable/user-guide/diffing/) in the Argo CD documentation.
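For matches that JSON pointers can't express, upstream Argo CD also supports `jqPathExpressions`. As a sketch (the container name is illustrative), this ignores changes to a single injected container rather than the whole pod template:

```
spec:
  ignoreDifferences:
  - group: apps
    kind: Deployment
    jqPathExpressions:
    # Ignore only the sidecar injected by another controller
    - .spec.template.spec.containers[] | select(.name == "injected-sidecar")
```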

## Multi-environment deployment
<a name="_multi_environment_deployment"></a>

Deploy the same application to multiple environments:

 **Development**:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app
    targetRevision: develop
    path: overlays/development
  destination:
    name: dev-cluster
    namespace: my-app
```

 **Production**:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app
    targetRevision: main
    path: overlays/production
  destination:
    name: prod-cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

## Monitor and manage Applications
<a name="_monitor_and_manage_applications"></a>

 **View Application status**:

```
kubectl get application my-app -n argocd
```

 **Access the Argo CD UI**:

Open the Argo CD UI through the EKS console to view application topology, sync status, resource health, and deployment history. See [Working with Argo CD](working-with-argocd.md) for UI access instructions.

 **Rollback Applications**:

Rollback to a previous revision using the Argo CD UI, the Argo CD CLI, or by updating the `targetRevision` in the Application spec to a previous Git commit or tag.

Using the Argo CD CLI:

```
argocd app rollback argocd/my-app <revision-id>
```

**Note**  
When using the Argo CD CLI with the managed capability, specify applications with the namespace prefix: `namespace/appname`.

For more information, see [argocd app rollback](https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd_app_rollback/) in the Argo CD documentation.

## Additional resources
<a name="_additional_resources"></a>
+  [Working with Argo CD Projects](argocd-projects.md) - Organize applications with Projects for multi-tenant environments
+  [Use ApplicationSets](argocd-applicationsets.md) - Deploy to multiple clusters with templates
+  [Application Specification](https://argo-cd.readthedocs.io/en/stable/user-guide/application-specification/) - Complete Application API reference
+  [Sync Options](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/) - Advanced sync configuration

# Use ApplicationSets
<a name="argocd-applicationsets"></a>

ApplicationSets generate multiple Applications from templates, enabling you to deploy the same application across multiple clusters, environments, or namespaces with a single resource definition.

## Prerequisites
<a name="_prerequisites"></a>
+ An EKS cluster with the Argo CD capability created
+ Repository access configured (see [Configure repository access](argocd-configure-repositories.md))
+  `kubectl` configured to communicate with your cluster

**Note**  
Multiple target clusters are not required for ApplicationSets. You can use generators other than the cluster generator (like list, git, or matrix generators) to deploy applications without remote clusters.

## How ApplicationSets work
<a name="_how_applicationsets_work"></a>

ApplicationSets use generators to produce parameters, then apply those parameters to an Application template. Each set of generated parameters creates one Application.

Common generators for EKS deployments:
+  **List generator** - Explicitly define clusters and parameters for each environment
+  **Cluster generator** - Automatically deploy to all registered clusters
+  **Git generator** - Generate Applications from repository structure
+  **Matrix generator** - Combine generators for multi-dimensional deployments
+  **Merge generator** - Merge parameters from multiple generators

For complete generator reference, see [ApplicationSet Documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/application-set/).

## List generator
<a name="_list_generator"></a>

Deploy to multiple clusters with explicit configuration:

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook-all-clusters
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - environment: dev
        replicas: "2"
      - environment: staging
        replicas: "3"
      - environment: prod
        replicas: "5"
  template:
    metadata:
      name: 'guestbook-{{environment}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/guestbook
        targetRevision: HEAD
        path: 'overlays/{{environment}}'
      destination:
        name: '{{environment}}-cluster'
        namespace: guestbook
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

**Note**  
Use `destination.name` with cluster names for better readability. The `destination.server` field also works with EKS cluster ARNs if needed.

This creates three Applications: `guestbook-dev`, `guestbook-staging`, and `guestbook-prod`.

## Cluster generator
<a name="_cluster_generator"></a>

Deploy to all registered clusters automatically:

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  generators:
  - clusters: {}
  template:
    metadata:
      name: '{{name}}-addons'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/cluster-addons
        targetRevision: HEAD
        path: addons
      destination:
        server: '{{server}}'
        namespace: kube-system
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

This automatically creates an Application for each registered cluster.

 **Filter clusters**:

Use `matchLabels` to include clusters that match specific labels, and `matchExpressions` for more complex selection, such as excluding clusters that carry an opt-out label:

```
spec:
  generators:
  - clusters:
      selector:
        matchLabels:
          environment: production
        matchExpressions:
        - key: skip-appset
          operator: DoesNotExist
```

## Git generators
<a name="_git_generators"></a>

Git generators create Applications based on repository structure:
+  **Directory generator** - Deploy each directory as a separate Application (useful for microservices)
+  **File generator** - Generate Applications from parameter files (useful for multi-tenant deployments)

 **Example: Microservices deployment** 

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservices
  namespace: argocd
spec:
  generators:
  - git:
      repoURL: https://github.com/example/microservices
      revision: HEAD
      directories:
      - path: services/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/microservices
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        name: my-cluster
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
        - CreateNamespace=true
```

For details on Git generators and file-based configuration, see [Git Generator](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators-Git/) in the Argo CD documentation.
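The file generator follows a similar shape. As a sketch (the repository layout and parameter keys are illustrative—the template parameters come from keys in each matched JSON file, such as a `tenant.name` field in `config.json`):

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: tenants
  namespace: argocd
spec:
  generators:
  - git:
      repoURL: https://github.com/example/tenants
      revision: HEAD
      files:
      # One Application per matched config file
      - path: "tenants/**/config.json"
  template:
    metadata:
      name: '{{tenant.name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/tenants
        targetRevision: HEAD
        path: '{{tenant.path}}'
      destination:
        name: my-cluster
        namespace: '{{tenant.name}}'
```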

## Matrix generator
<a name="_matrix_generator"></a>

Combine multiple generators to deploy across multiple dimensions (environments × clusters):

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: multi-env-multi-cluster
  namespace: argocd
spec:
  generators:
  - matrix:
      generators:
      - list:
          elements:
          - environment: dev
          - environment: staging
          - environment: prod
      - clusters:
          selector:
            matchLabels:
              region: us-west-2
  template:
    metadata:
      name: 'app-{{environment}}-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/app
        targetRevision: HEAD
        path: 'overlays/{{environment}}'
      destination:
        name: '{{name}}'
        namespace: 'app-{{environment}}'
```

For details on combining generators, see [Matrix Generator](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators-Matrix/) in the Argo CD documentation.

## Multi-region deployment
<a name="_multi_region_deployment"></a>

Deploy to clusters across multiple regions:

```
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: global-app
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - clusterName: prod-us-west
        region: us-west-2
      - clusterName: prod-us-east
        region: us-east-1
      - clusterName: prod-eu-west
        region: eu-west-1
  template:
    metadata:
      name: 'app-{{region}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/app
        targetRevision: HEAD
        path: kubernetes
        helm:
          parameters:
          - name: region
            value: '{{region}}'
      destination:
        name: '{{clusterName}}'
        namespace: app
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

## Manage ApplicationSets
<a name="_manage_applicationsets"></a>

 **View ApplicationSets and generated Applications**:

```
kubectl get applicationsets -n argocd
kubectl get applications -n argocd -l argocd.argoproj.io/application-set-name=<applicationset-name>
```

 **Update an ApplicationSet**:

Modify the ApplicationSet spec and reapply. Argo CD automatically updates all generated Applications:

```
kubectl apply -f applicationset.yaml
```

 **Delete an ApplicationSet**:

```
kubectl delete applicationset <name> -n argocd
```

**Warning**  
Deleting an ApplicationSet deletes all generated Applications. If those Applications have `prune: true`, their resources will also be deleted from target clusters.  
To preserve deployed resources when deleting an ApplicationSet, set `.syncPolicy.preserveResourcesOnDeletion` to `true` in the ApplicationSet spec. For more information, see [Application Pruning & Resource Deletion](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Application-Deletion/) in the Argo CD documentation.

**Important**  
Argo CD’s ApplicationSets feature has security considerations you should be aware of before using ApplicationSets. For more information, see [ApplicationSet Security](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Security/) in the Argo CD documentation.

## Additional resources
<a name="_additional_resources"></a>
+  [Working with Argo CD Projects](argocd-projects.md) - Organize ApplicationSets with Projects
+  [Create Applications](argocd-create-application.md) - Understand Application configuration
+  [ApplicationSet Documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/application-set/) - Complete generator reference and patterns
+  [Generator Reference](https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/Generators/) - Detailed generator specifications

# Argo CD considerations
<a name="argocd-considerations"></a>

This topic covers important considerations for using the EKS Capability for Argo CD, including planning, permissions, authentication, and multi-cluster deployment patterns.

## Planning
<a name="_planning"></a>

Before deploying Argo CD, consider the following:

 **Repository strategy**: Determine where your application manifests will be stored (CodeCommit, GitHub, GitLab, Bitbucket). Plan your repository structure and branching strategy for different environments.

 **RBAC strategy**: Plan which teams or users should have admin, editor, or viewer access. Map these to AWS Identity Center groups or Argo CD roles.

 **Multi-cluster architecture**: Determine if you’ll manage multiple clusters from a single Argo CD instance. Consider using a dedicated management cluster for Argo CD.

 **Application organization**: Plan how you’ll structure Applications and ApplicationSets. Consider using projects to organize applications by team or environment.

 **Sync policies**: Decide whether applications should sync automatically or require manual approval. Automated sync is common for development, manual for production.

## Permissions
<a name="_permissions"></a>

For detailed information about IAM Capability Roles, trust policies, and security best practices, see [Amazon EKS capability IAM role](capability-role.md) and [Security considerations for EKS Capabilities](capabilities-security.md).

### IAM Capability Role overview
<a name="_iam_capability_role_overview"></a>

When you create an Argo CD capability resource, you provide an IAM Capability Role. Unlike ACK, Argo CD primarily manages Kubernetes resources, not AWS resources directly. However, the IAM Capability Role is required for:
+ Accessing private Git repositories in CodeCommit
+ Integrating with AWS Identity Center for authentication
+ Accessing secrets in AWS Secrets Manager (if configured)
+ Cross-cluster deployments to other EKS clusters

### CodeCommit integration
<a name="_codecommit_integration"></a>

If you’re using CodeCommit repositories, attach a policy with read permissions:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull"
      ],
      "Resource": "*"
    }
  ]
}
```

**Important**  
For production use, restrict the `Resource` field to specific repository ARNs instead of using `"*"`.  
Example:  

```
"Resource": "arn:aws:codecommit:us-west-2:111122223333:my-app-repo"
```
This limits the Argo CD capability’s access to only the repositories it needs to manage.

### Secrets Manager integration
<a name="_secrets_manager_integration"></a>

If you’re storing repository credentials in Secrets Manager, attach the managed policy for read access:

```
arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess
```

This policy includes the necessary permissions: `secretsmanager:GetSecretValue`, `secretsmanager:DescribeSecret`, and KMS decrypt permissions.

### Basic setup
<a name="_basic_setup"></a>

For basic Argo CD functionality with public Git repositories, no additional IAM policies are required beyond the trust policy.

## Authentication
<a name="_authentication"></a>

### AWS Identity Center integration
<a name="shared_aws_identity_center_integration"></a>

The Argo CD managed capability integrates directly with AWS Identity Center (formerly AWS SSO), enabling you to use your existing identity provider for authentication.

When you configure AWS Identity Center integration:

1. Users access the Argo CD UI through the EKS console

1. They authenticate using AWS Identity Center (which can federate to your corporate identity provider)

1.  AWS Identity Center provides user and group information to Argo CD

1. Argo CD maps users and groups to RBAC roles based on your configuration

1. Users see only the applications and resources they have permission to access

### Simplifying access with Identity Center permission sets
<a name="_simplifying_access_with_identity_center_permission_sets"></a>

 AWS Identity Center provides two distinct authentication paths when working with Argo CD:

 **Argo CD API authentication**: Identity Center provides SSO authentication to the Argo CD UI and API. This is configured through the Argo CD capability’s RBAC role mappings.

 **EKS cluster access**: The Argo CD capability uses the customer-provided IAM role to authenticate with EKS clusters through access entries. These access entries can be manually configured to add or remove permissions.

You can use [Identity Center permission sets](https://docs.aws.amazon.com/singlesignon/latest/userguide/howtocreatepermissionset.html) to simplify identity management by allowing a single identity to access both Argo CD and EKS clusters. This reduces overhead by requiring you to manage only one identity across both systems, rather than maintaining separate credentials for Argo CD access and cluster access.

### RBAC role mappings
<a name="_rbac_role_mappings"></a>

Argo CD has built-in roles that you can map to AWS Identity Center users and groups:

 **ADMIN**: Full access to all applications and settings. Can create, update, and delete applications. Can manage Argo CD configuration.

 **EDITOR**: Can create and modify applications. Cannot change Argo CD settings or delete applications.

 **VIEWER**: Read-only access to applications. Can view application status and history. Cannot make changes.

**Note**  
Role names are case-sensitive and must be uppercase (ADMIN, EDITOR, VIEWER).

**Important**  
EKS Capabilities integration with AWS Identity Center supports up to 1,000 identities per Argo CD capability. An identity can be a user or a group.

## Multi-cluster deployments
<a name="_multi_cluster_deployments"></a>

The Argo CD managed capability supports multi-cluster deployments, enabling you to manage applications across development, staging, and production clusters from a single Argo CD instance.

### How multi-cluster works
<a name="_how_multi_cluster_works"></a>

When you register additional clusters with Argo CD:

1. You create cluster secrets that reference target EKS clusters by ARN

1. You create Applications or ApplicationSets that target different clusters

1. Argo CD connects to each cluster to deploy and watch resources

1. You view and manage all clusters from a single Argo CD UI

### Prerequisites for multi-cluster
<a name="_prerequisites_for_multi_cluster"></a>

Before registering additional clusters:
+ Create an Access Entry on the target cluster for the Argo CD capability role
+ Ensure network connectivity between the Argo CD capability and target clusters
+ Verify IAM permissions to access the target clusters

### Register a cluster
<a name="_register_a_cluster"></a>

Register clusters using Kubernetes Secrets in the `argocd` namespace.

Get the target cluster ARN. Replace *region-code* with the AWS Region that your target cluster is in and replace *target-cluster* with the name of your target cluster.

```
aws eks describe-cluster \
  --region region-code \
  --name target-cluster \
  --query 'cluster.arn' \
  --output text
```

Create a cluster secret using the cluster ARN:

```
apiVersion: v1
kind: Secret
metadata:
  name: target-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: target-cluster
  server: arn:aws:eks:us-west-2:111122223333:cluster/target-cluster
  project: default
```

**Important**  
Use the EKS cluster ARN in the `server` field, not the Kubernetes API server URL. The managed capability requires ARNs to identify target clusters.

Apply the secret:

```
kubectl apply -f cluster-secret.yaml
```

### Configure Access Entry on target cluster
<a name="_configure_access_entry_on_target_cluster"></a>

The target cluster must have an Access Entry that grants the Argo CD capability role permission to deploy applications. Replace *region-code* with the AWS Region that your target cluster is in, replace *target-cluster* with the name of your target cluster, and replace the ARN with your Argo CD capability role ARN.

```
aws eks create-access-entry \
  --region region-code \
  --cluster-name target-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --type STANDARD \
  --kubernetes-groups system:masters
```

**Note**  
For production use, consider using more restrictive Kubernetes groups instead of `system:masters`.
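One way to scope access down is to replace `system:masters` with an EKS access policy association. As a sketch (the role name is illustrative; choose the narrowest policy that covers what your Applications deploy):

```
aws eks associate-access-policy \
  --region region-code \
  --cluster-name target-cluster \
  --principal-arn arn:aws:iam::111122223333:role/ArgoCDCapabilityRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
  --access-scope type=cluster
```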

### Private cluster access
<a name="_private_cluster_access"></a>

The Argo CD managed capability can deploy to fully private EKS clusters without requiring VPC peering or specialized networking configuration. AWS manages connectivity between the Argo CD capability and private remote clusters automatically. Ensure your repository access controls and Argo CD RBAC policies are properly configured.

### Cross-account deployments
<a name="_cross_account_deployments"></a>

For cross-account deployments, add the Argo CD IAM Capability Role from the source account to the target cluster’s EKS Access Entry:

1. In the target account, create an Access Entry on the target EKS cluster

1. Use the Argo CD IAM Capability Role ARN from the source account as the principal

1. Configure appropriate Kubernetes RBAC permissions for the Access Entry

1. Register the target cluster in Argo CD using its EKS cluster ARN

No additional IAM role creation or trust policy configuration is required—EKS Access Entries handle cross-account access.

## Best practices
<a name="_best_practices"></a>

 **Use declarative sources as the source of truth**: Store all your application manifests in declarative sources (Git repositories, Helm registries, or OCI images), enabling version control, audit trails, and collaboration.

 **Implement proper RBAC**: Use AWS Identity Center integration to control who can access and manage applications in Argo CD. Argo CD supports fine-grained access control to resources within Applications (Deployments, Pods, ConfigMaps, Secrets).

 **Use ApplicationSets for multi-environment deployments**: Use ApplicationSets to deploy applications across multiple clusters or namespaces with different configurations.

## Lifecycle management
<a name="_lifecycle_management"></a>

### Application sync policies
<a name="_application_sync_policies"></a>

Control how Argo CD syncs applications:

 **Manual sync**: Applications require manual approval to sync changes. Recommended for **production** environments.

 **Automatic sync**: Applications automatically sync when Git changes are detected. Common for development and staging environments.

 **Self-healing**: Automatically revert manual changes made to the cluster. Ensures cluster state matches Git.

 **Pruning**: Automatically delete resources removed from Git. Use with caution as this can delete resources.

### Application health
<a name="_application_health"></a>

Argo CD continuously monitors application health:
+  **Healthy**: All resources are running as expected
+  **Progressing**: Resources are being created or updated
+  **Degraded**: Some resources are not healthy
+  **Suspended**: Application is paused
+  **Missing**: Resources are missing from the cluster

### Sync windows
<a name="_sync_windows"></a>

Configure sync windows to control when applications can be synced:
+ Allow syncs only during maintenance windows
+ Block syncs during business hours
+ Schedule automatic syncs for specific times
+ Use sync windows in situations where you need to make changes and stop any syncs (break-glass scenarios)
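Sync windows are configured on the AppProject that the applications belong to. As a sketch (the cron schedules are illustrative):

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: argocd
spec:
  syncWindows:
  # Allow syncs during a nightly maintenance window
  - kind: allow
    schedule: '0 22 * * *'
    duration: 2h
    applications:
    - '*'
  # Block syncs during business hours, but still permit manual syncs
  - kind: deny
    schedule: '0 9 * * 1-5'
    duration: 8h
    applications:
    - '*'
    manualSync: true
```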

## Webhook configuration for faster sync
<a name="_webhook_configuration_for_faster_sync"></a>

By default, Argo CD polls Git repositories every 6 minutes to detect changes. For more responsive deployments, configure Git webhooks to trigger immediate syncs when changes are pushed.

Webhooks provide several benefits:
+ Immediate sync response when code is pushed (seconds vs minutes)
+ Reduced polling overhead and improved system performance
+ More efficient use of API rate limits
+ Better user experience with faster feedback

### Webhook endpoint
<a name="_webhook_endpoint"></a>

The webhook URL follows the pattern `${serverUrl}/api/webhook`, where `serverUrl` is your Argo CD server URL.

For example, if your Argo CD server URL is `https://abc123.eks-capabilities.us-west-2.amazonaws.com`, the webhook URL is:

```
https://abc123.eks-capabilities.us-west-2.amazonaws.com/api/webhook
```

### Configure webhooks by Git provider
<a name="_configure_webhooks_by_git_provider"></a>

 **GitHub**: In your repository settings, add a webhook with the Argo CD webhook URL. Set the content type to `application/json` and select "Just the push event".

 **GitLab**: In your project settings, add a webhook with the Argo CD webhook URL. Enable "Push events" and optionally "Tag push events".

 **Bitbucket**: In your repository settings, add a webhook with the Argo CD webhook URL. Select "Repository push" as the trigger.

 **CodeCommit**: Create an Amazon EventBridge rule that triggers on CodeCommit repository state changes and sends notifications to the Argo CD webhook endpoint.

For detailed webhook configuration instructions, see [Argo CD Webhook Configuration](https://argo-cd.readthedocs.io/en/stable/operator-manual/webhook/).

**Note**  
Webhooks complement polling—they don’t replace it. Argo CD continues to poll repositories as a fallback mechanism in case webhook notifications are missed.

## Next steps
<a name="_next_steps"></a>
+  [Working with Argo CD](working-with-argocd.md) - Learn how to create and manage Argo CD Applications
+  [Troubleshoot issues with Argo CD capabilities](argocd-troubleshooting.md) - Troubleshoot Argo CD issues
+  [Working with capability resources](working-with-capabilities.md) - Manage your Argo CD capability resource

# Troubleshoot issues with Argo CD capabilities
<a name="argocd-troubleshooting"></a>

This topic provides troubleshooting guidance for the EKS Capability for Argo CD, including capability health checks, application sync issues, repository authentication, and multi-cluster deployments.

**Note**  
EKS Capabilities are fully managed and run outside your cluster. You don’t have access to Argo CD server logs or the `argocd` namespace. Troubleshooting focuses on capability health, application status, and configuration.

## Capability is ACTIVE but applications aren’t syncing
<a name="_capability_is_active_but_applications_arent_syncing"></a>

If your Argo CD capability shows `ACTIVE` status but applications aren’t syncing, check the capability health and application status.

 **Check capability health**:

You can view capability health and status issues in the EKS console or using the AWS CLI.

 **Console**:

1. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.

1. Select your cluster name.

1. Choose the **Observability** tab.

1. Choose **Monitor cluster**.

1. Choose the **Capabilities** tab to view health and status for all capabilities.

 **AWS CLI**:

```
# View capability status and health
aws eks describe-capability \
  --region region-code \
  --cluster-name my-cluster \
  --capability-name my-argocd

# Look for issues in the health section
```

 **Common causes**:
+  **Repository not configured**: Git repository not added to Argo CD
+  **Authentication failed**: SSH key, token, or CodeCommit credentials invalid
+  **Application not created**: No Application resources exist in the cluster
+  **Sync policy**: Manual sync required (auto-sync not enabled)
+  **IAM permissions**: Missing permissions for CodeCommit or Secrets Manager

 **Check application status**:

```
# List applications
kubectl get application -n argocd

# View sync status
kubectl get application my-app -n argocd -o jsonpath='{.status.sync.status}'

# View application health
kubectl get application my-app -n argocd -o jsonpath='{.status.health}'
```

 **Check application conditions**:

```
# Describe application to see detailed status
kubectl describe application my-app -n argocd

# View application conditions (errors and warnings)
kubectl get application my-app -n argocd -o jsonpath='{.status.conditions}'
```

## Applications stuck in "Progressing" state
<a name="_applications_stuck_in_progressing_state"></a>

If an application shows `Progressing` but never reaches `Healthy`, check the application’s resource status and events.

 **Check resource health**:

```
# View application resources
kubectl get application my-app -n argocd -o jsonpath='{.status.resources}'

# Check for unhealthy resources
kubectl describe application my-app -n argocd | grep -A 10 "Health Status"
```

 **Common causes**:
+  **Deployment not ready**: Pods failing to start or readiness probes failing
+  **Resource dependencies**: Resources waiting for other resources to be ready
+  **Image pull errors**: Container images not accessible
+  **Insufficient resources**: Cluster lacks CPU or memory for pods

 **Verify target cluster configuration** (for multi-cluster setups):

```
# List registered clusters
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster

# View cluster secret details
kubectl get secret cluster-secret-name -n argocd -o yaml
```

## Repository authentication failures
<a name="_repository_authentication_failures"></a>

If Argo CD cannot access your Git repositories, verify the authentication configuration.

 **For CodeCommit repositories**:

Verify the IAM Capability Role has CodeCommit permissions:

```
# View IAM policies
aws iam list-attached-role-policies --role-name my-argocd-capability-role
aws iam list-role-policies --role-name my-argocd-capability-role

# Get specific policy details
aws iam get-role-policy --role-name my-argocd-capability-role --policy-name policy-name
```

The role needs `codecommit:GitPull` permission for the repositories.

 **For private Git repositories**:

Verify repository credentials are correctly configured:

```
# Check repository secret exists
kubectl get secret -n argocd repo-secret-name -o yaml
```

Ensure the secret contains the correct authentication credentials (SSH key, token, or username/password).
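For reference, a declarative repository secret follows the upstream Argo CD format; as a sketch for HTTPS token authentication (the repository URL and credential values are illustrative):

```
apiVersion: v1
kind: Secret
metadata:
  name: repo-secret-name
  namespace: argocd
  labels:
    # This label marks the secret as a repository definition
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/example/private-repo
  username: git-user
  password: <personal-access-token>
```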

 **For repositories using Secrets Manager**:

```
# Verify IAM Capability Role has Secrets Manager permissions
aws iam list-attached-role-policies --role-name my-argocd-capability-role

# Test secret retrieval
aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:region-code:111122223333:secret:my-secret
```

## Multi-cluster deployment issues
<a name="_multi_cluster_deployment_issues"></a>

If applications aren’t deploying to remote clusters, verify the cluster registration and access configuration.

 **Check cluster registration**:

```
# List registered clusters
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster

# Verify cluster secret format
kubectl get secret CLUSTER_SECRET_NAME -n argocd -o yaml
```

Ensure the `server` field contains the EKS cluster ARN, not the Kubernetes API URL.

 **Verify target cluster Access Entry**:

On the target cluster, check that the Argo CD Capability Role has an Access Entry:

```
# List access entries (run on target cluster or use AWS CLI)
aws eks list-access-entries --cluster-name target-cluster

# Describe specific access entry
aws eks describe-access-entry \
  --cluster-name target-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-argocd-capability-role
```

 **Check IAM permissions for cross-account**:

For cross-account deployments, verify the Argo CD Capability Role has an Access Entry on the target cluster. The managed capability uses EKS Access Entries for cross-account access, not IAM role assumption.
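
If the Access Entry is missing, creating one for the Capability Role might look like the following sketch (the role name and access policy are illustrative; grant the narrowest access policy that covers your deployments):

```shell
# Create an access entry for the Argo CD Capability Role on the target cluster
aws eks create-access-entry \
  --cluster-name target-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-argocd-capability-role

# Associate an access policy so the role can manage cluster resources
aws eks associate-access-policy \
  --cluster-name target-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-argocd-capability-role \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```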

For more on multi-cluster configuration, see [Register target clusters](argocd-register-clusters.md).

## Next steps
<a name="_next_steps"></a>
+  [Argo CD considerations](argocd-considerations.md) - Argo CD considerations and best practices
+  [Working with Argo CD](working-with-argocd.md) - Create and manage Argo CD Applications
+  [Register target clusters](argocd-register-clusters.md) - Configure multi-cluster deployments
+  [Troubleshooting EKS Capabilities](capabilities-troubleshooting.md) - General capability troubleshooting guidance

# Comparing EKS Capability for Argo CD to self-managed Argo CD
<a name="argocd-comparison"></a>

The EKS Capability for Argo CD provides a fully managed Argo CD experience that runs in the AWS control plane rather than on your worker nodes. For a general comparison of EKS Capabilities vs self-managed solutions, see [EKS Capabilities considerations](capabilities-considerations.md). This topic focuses on Argo CD-specific differences, including authentication, multi-cluster management, and upstream feature support.

## Differences from upstream Argo CD
<a name="_differences_from_upstream_argo_cd"></a>

The EKS Capability for Argo CD is based on upstream Argo CD but differs in how it’s accessed, configured, and integrated with AWS services.

 **RBAC and authentication**: The capability comes with three RBAC roles (admin, editor, viewer) and uses AWS Identity Center for authentication instead of Argo CD’s built-in authentication. Configure role mappings through the capability’s `rbacRoleMapping` parameter to map Identity Center groups to Argo CD roles, not through Argo CD’s `argocd-rbac-cm` ConfigMap. The Argo CD UI is hosted with its own direct URL (find it in the EKS console under your cluster’s Capabilities tab), and API access uses AWS authentication and authorization through IAM.

 **Cluster configuration**: The capability does not automatically configure local cluster or hub-and-spoke topologies. You configure your deployment target clusters and EKS access entries. The capability supports only Amazon EKS clusters as deployment targets using EKS cluster ARNs (not Kubernetes API server URLs). The capability does not automatically add the local cluster (`kubernetes.default.svc`) as a deployment target—to deploy to the same cluster where the capability is created, explicitly register that cluster using its ARN.

 **Simplified remote cluster access**: The capability simplifies multi-cluster deployments by using EKS Access Entries to grant Argo CD access to remote clusters, eliminating the need to configure IAM Roles for Service Accounts (IRSA) or set up cross-account IAM role assumptions. The capability also provides transparent access to fully private EKS clusters without requiring VPC peering or specialized networking configuration—AWS manages connectivity between the Argo CD capability and private remote clusters automatically.

 **Direct AWS service integration**: The capability provides direct integration with AWS services through the Capability Role’s IAM permissions. You can reference CodeCommit repositories, ECR Helm charts, and CodeConnections directly in Application resources without creating Repository configurations. This simplifies authentication and eliminates the need to manage separate credentials for AWS services. See [Configure repository access](argocd-configure-repositories.md) for details.

 **Namespace support**: The capability requires you to specify a single namespace where Argo CD Application, ApplicationSet, and AppProject custom resources must be created.

**Note**  
This namespace restriction only applies to Argo CD’s own custom resources (Application, ApplicationSet, AppProject). Your application workloads can be deployed to any namespace in any target cluster. For example, if you create the capability with namespace `argocd`, all Application CRs must be created in the `argocd` namespace, but those Applications can deploy workloads to `default`, `production`, `staging`, or any other namespace.

**Note**  
The managed capability has specific requirements for CLI usage and AppProject configuration:
+ When using the Argo CD CLI, specify applications with the namespace prefix: `argocd app sync namespace/appname`
+ AppProject resources must specify `.spec.sourceNamespaces` to define which namespaces the project can watch for Applications (typically set to the namespace you specified when creating the capability)
+ Resource tracking annotations use the format: `namespace_appname:group/kind:namespace/name`
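
To illustrate the namespace rules above, the following sketch creates an Application CR in the `argocd` namespace that deploys workloads to a different namespace on a target cluster (the repository URL, cluster ARN, and names are placeholders):

```shell
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd        # Application CRs must live in the capability namespace
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/example-repo.git
    targetRevision: main
    path: manifests
  destination:
    # Target clusters are referenced by EKS cluster ARN
    server: arn:aws:eks:region-code:111122223333:cluster/target-cluster
    namespace: production   # workloads can deploy to any namespace
EOF
```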

 **Unsupported features**: The following features are not available in the managed capability:
+ Config Management Plugins (CMPs) for custom manifest generation
+ Custom Lua scripts for resource health assessment (built-in health checks for standard resources are supported)
+ The Notifications controller
+ Custom SSO providers (only AWS Identity Center is supported, including third-party federated identity through AWS Identity Center)
+ UI extensions and custom banners
+ Direct access to `argocd-cm`, `argocd-params`, and other configuration ConfigMaps
+ Modifying the sync timeout (fixed at 120 seconds)

 **Compatibility**: Applications and ApplicationSets work identically to upstream Argo CD with no changes to your manifests. The capability uses the same Kubernetes APIs and CRDs, so tools like `kubectl` work the same way. The capability fully supports:
+ Applications and ApplicationSets
+ GitOps workflows with automatic sync
+ Multi-cluster deployments
+ Sync policies (automated, prune, self-heal)
+ Sync waves and hooks
+ Health assessment for standard Kubernetes resources
+ Rollback capabilities
+ Git repository sources (HTTPS and SSH)
+ Helm, Kustomize, and plain YAML manifests
+ GitHub app credentials
+ Projects for multi-tenancy
+ Resource exclusions and inclusions

## Using the Argo CD CLI with the managed capability
<a name="argocd-cli-configuration"></a>

The Argo CD CLI works the same as upstream Argo CD for most operations, but authentication and cluster registration differ.

### Prerequisites
<a name="_prerequisites"></a>

Install the Argo CD CLI following the [upstream installation instructions](https://argo-cd.readthedocs.io/en/stable/cli_installation/).

### Configuration
<a name="_configuration"></a>

Configure the CLI using environment variables:

1. Get the Argo CD server URL from the EKS console (under your cluster’s **Capabilities** tab) or with the AWS CLI. Remove the `https://` prefix:

   ```
   export ARGOCD_SERVER=$(aws eks describe-capability \
     --cluster-name my-cluster \
     --capability-name my-argocd \
     --query 'capability.configuration.argoCd.serverUrl' \
     --output text \
     --region region-code | sed 's|^https://||')
   ```

1. Generate an account token from the Argo CD UI (**Settings** → **Accounts** → **admin** → **Generate New Token**), then set it as an environment variable:

   ```
   export ARGOCD_AUTH_TOKEN="your-token-here"
   ```

**Important**  
This configuration uses the admin account token for initial setup and development workflows. For production use cases, use project-scoped roles and tokens to follow the principle of least privilege. For more information about configuring project roles and RBAC, see [Configure Argo CD permissions](argocd-permissions.md).

1. Set the required gRPC option:

   ```
   export ARGOCD_OPTS="--grpc-web"
   ```

With these environment variables set, you can use the Argo CD CLI without the `argocd login` command.
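
For example, assuming the capability was created with namespace `argocd` and an Application named `my-app` exists, CLI operations use the `namespace/appname` form:

```shell
# List applications
argocd app list

# Get and sync an application, using the namespace prefix
argocd app get argocd/my-app
argocd app sync argocd/my-app
```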

### Key differences
<a name="_key_differences"></a>

The managed capability has the following CLI limitations:
+  `argocd admin` commands are not supported (they require direct pod access)
+  `argocd login` is not supported (use account or project tokens instead)
+  `argocd cluster add` requires the `--aws-cluster-name` flag with the EKS cluster ARN

### Example: Register a cluster
<a name="_example_register_a_cluster"></a>

Register an EKS cluster for application deployment:

```
# Get the cluster ARN
CLUSTER_ARN=$(aws eks describe-cluster \
  --name my-cluster \
  --query 'cluster.arn' \
  --output text)

# Register the cluster
argocd cluster add $CLUSTER_ARN \
  --aws-cluster-name $CLUSTER_ARN \
  --name in-cluster \
  --project default
```

For complete Argo CD CLI documentation, see the [Argo CD CLI reference](https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd/).

## Migration Path
<a name="_migration_path"></a>

You can migrate from self-managed Argo CD to the managed capability:

1. Review your current Argo CD configuration for unsupported features (Notifications controller, CMPs, custom health checks, UI extensions)

1. Scale your self-managed Argo CD controllers to zero replicas to prevent conflicts

1. Create an Argo CD capability resource on your cluster

1. Export your existing Applications, ApplicationSets, and AppProjects

1. Migrate repository credentials, cluster secrets, and repository credential templates (repocreds)

1. If using GPG keys, TLS certificates, or SSH known hosts, migrate these configurations as well

1. Update `destination.server` fields to use cluster names or EKS cluster ARNs

1. Apply them to the managed Argo CD instance

1. Verify applications are syncing correctly

1. Decommission your self-managed Argo CD installation

The managed capability uses the same Argo CD APIs and resource definitions, so your existing manifests work with minimal modification.
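
The export and re-apply steps above can be sketched as follows (the namespace and file name are illustrative):

```shell
# Export Argo CD resources from the self-managed installation
kubectl get applications.argoproj.io,applicationsets.argoproj.io,appprojects.argoproj.io \
  -n argocd -o yaml > argocd-backup.yaml

# Edit argocd-backup.yaml before applying: update destination.server fields to
# EKS cluster ARNs and strip server-generated fields (status, resourceVersion, uid).

# Apply to the managed Argo CD instance's namespace
kubectl apply -n argocd -f argocd-backup.yaml
```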

## Next steps
<a name="_next_steps"></a>
+  [Create an Argo CD capability](create-argocd-capability.md) - Create an Argo CD capability resource
+  [Working with Argo CD](working-with-argocd.md) - Deploy your first application
+  [Argo CD considerations](argocd-considerations.md) - Configure AWS Identity Center integration