


# Simplify compute management with AWS Fargate
<a name="fargate"></a>

This topic discusses using Amazon EKS to run Kubernetes Pods on AWS Fargate. Fargate is a technology that provides on-demand, right-sized compute capacity for [containers](https://aws.amazon.com/what-are-containers). With Fargate, you don’t have to provision, configure, or scale groups of virtual machines on your own to run containers. You also don’t need to choose server types, decide when to scale your node groups, or optimize cluster packing.

You can control which Pods start on Fargate and how they run with [Fargate profiles](fargate-profile.md). Fargate profiles are defined as part of your Amazon EKS cluster. Amazon EKS integrates Kubernetes with Fargate by using controllers that are built by AWS using the upstream, extensible model provided by Kubernetes. These controllers run as part of the Amazon EKS managed Kubernetes control plane and are responsible for scheduling native Kubernetes Pods onto Fargate. The Fargate controllers include a new scheduler that runs alongside the default Kubernetes scheduler in addition to several mutating and validating admission controllers. When you start a Pod that meets the criteria for running on Fargate, the Fargate controllers that are running in the cluster recognize, update, and schedule the Pod onto Fargate.

This topic describes the different components of Pods that run on Fargate, and calls out special considerations for using Fargate with Amazon EKS.

## AWS Fargate considerations
<a name="fargate-considerations"></a>

Here are some things to consider about using Fargate on Amazon EKS.
+ Each Pod that runs on Fargate has its own compute boundary. Pods don’t share the underlying kernel, CPU resources, memory resources, or elastic network interface with other Pods.
+ Network Load Balancers and Application Load Balancers (ALBs) can be used with Fargate with IP targets only. For more information, see [Create a network load balancer](network-load-balancing.md#network-load-balancer) and [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md).
+ Services that expose Fargate Pods run in target type `IP` mode only, not node IP mode. The recommended way to check connectivity from a service running on a managed node to a service running on Fargate is to connect via the service name.
+ Pods must match a Fargate profile at the time that they’re scheduled to run on Fargate. Pods that don’t match a Fargate profile might be stuck as `Pending`. If a matching Fargate profile exists, you can delete pending Pods that you have created to reschedule them onto Fargate.
+ Daemonsets aren’t supported on Fargate. If your application requires a daemon, reconfigure that daemon to run as a sidecar container in your Pods.
+ Privileged containers aren’t supported on Fargate.
+ Pods running on Fargate can’t specify `HostPort` or `HostNetwork` in the Pod manifest.
+ The default `nofile` and `nproc` soft limit is 1024 and the hard limit is 65535 for Fargate Pods.
+ GPUs aren’t currently available on Fargate.
+ Pods that run on Fargate are only supported on private subnets (with NAT gateway access to AWS services, but not a direct route to an Internet Gateway), so your cluster’s VPC must have private subnets available. For clusters without outbound internet access, see [Deploy private clusters with limited internet access](private-clusters.md).
+ You can use the [Adjust pod resources with Vertical Pod Autoscaler](vertical-pod-autoscaler.md) to set the initial correct size of CPU and memory for your Fargate Pods, and then use the [Scale pod deployments with Horizontal Pod Autoscaler](horizontal-pod-autoscaler.md) to scale those Pods. If you want the Vertical Pod Autoscaler to automatically re-deploy Pods to Fargate with larger CPU and memory combinations, set the mode for the Vertical Pod Autoscaler to either `Auto` or `Recreate` to ensure correct functionality. For more information, see the [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#quick-start) documentation on GitHub.
+ DNS resolution and DNS hostnames must be enabled for your VPC. For more information, see [Viewing and updating DNS support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating).
+ Amazon EKS Fargate adds defense-in-depth for Kubernetes applications by isolating each Pod within a Virtual Machine (VM). This VM boundary prevents access to host-based resources used by other Pods in the event of a container escape, which is a common method of attacking containerized applications to gain access to resources outside of the container.

  Using Amazon EKS doesn’t change your responsibilities under the [shared responsibility model](security.md). You should carefully consider the configuration of cluster security and governance controls. The safest way to isolate an application is always to run it in a separate cluster.
+ Fargate profiles support specifying subnets from VPC secondary CIDR blocks. You might want to specify a secondary CIDR block because the number of IP addresses available in a subnet is limited, which in turn limits the number of Pods that can be created in the cluster. By using different subnets for Pods, you can increase the number of available IP addresses. For more information, see [Adding IPv4 CIDR blocks to a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-resize).
+ The Amazon EC2 instance metadata service (IMDS) isn’t available to Pods that are deployed to Fargate nodes. If you have Pods that are deployed to Fargate that need IAM credentials, assign them to your Pods using [IAM roles for service accounts](iam-roles-for-service-accounts.md). If your Pods need access to other information available through IMDS, then you must hard code this information into your Pod spec. This includes the AWS Region or Availability Zone that a Pod is deployed to.
+ You can’t deploy Fargate Pods to AWS Outposts, AWS Wavelength, or AWS Local Zones.
+ Amazon EKS must periodically patch Fargate Pods to keep them secure. We attempt the updates in a way that reduces impact, but there are times when Pods must be deleted if they aren’t successfully evicted. There are some actions you can take to minimize disruption. For more information, see [Set actions for AWS Fargate OS patching events](fargate-pod-patching.md).
+ The [Amazon VPC CNI plugin for Amazon EKS](https://github.com/aws/amazon-vpc-cni-plugins) is installed on Fargate nodes. You can’t use [Alternate CNI plugins for Amazon EKS clusters](alternate-cni-plugins.md) with Fargate nodes.
+ A Pod running on Fargate automatically mounts an Amazon EFS file system, without needing manual driver installation steps. You can’t use dynamic persistent volume provisioning with Fargate nodes, but you can use static provisioning.
+ Amazon EKS doesn’t support Fargate Spot.
+ You can’t mount Amazon EBS volumes to Fargate Pods.
+ You can run the Amazon EBS CSI controller on Fargate nodes, but the Amazon EBS CSI node DaemonSet can only run on Amazon EC2 instances.
+ After a [Kubernetes Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) is marked `Completed` or `Failed`, the Pods that the Job creates normally continue to exist. This behavior allows you to view your logs and results, but with Fargate you will incur costs if you don’t clean up the Job afterwards.

  To automatically delete the related Pods after a Job completes or fails, you can specify a time period using the time-to-live (TTL) controller. The following example shows specifying `.spec.ttlSecondsAfterFinished` in your Job manifest.

  ```
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: busybox
  spec:
    template:
      spec:
        containers:
        - name: busybox
          image: busybox
          command: ["/bin/sh", "-c", "sleep 10"]
        restartPolicy: Never
    ttlSecondsAfterFinished: 60 # <-- TTL controller
  ```
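For the `Pending` consideration above, after you create a matching Fargate profile you can find and delete stuck Pods so that they reschedule onto Fargate. The following is a minimal sketch; the `pending_pods` helper and the namespace are hypothetical, and the sample input mimics default `kubectl get pods` output.

```shell
# Filter `kubectl get pods` output for Pods in the Pending phase.
# Column 3 of the default output is STATUS; NR > 1 skips the header row.
pending_pods() {
  awk 'NR > 1 && $3 == "Pending" { print $1 }'
}

# Example usage against a live cluster (hypothetical namespace):
# kubectl get pods -n my-namespace | pending_pods \
#   | xargs -r kubectl delete pod -n my-namespace
```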

## Fargate comparison table
<a name="_fargate_comparison_table"></a>


| Criteria |  AWS Fargate | 
| --- | --- | 
|  Can be deployed to [AWS Outposts](https://docs.aws.amazon.com/outposts/latest/userguide/what-is-outposts.html)   |  No  | 
|  Can be deployed to an [AWS Local Zone](local-zones.md)   |  No  | 
|  Can run containers that require Windows  |  No  | 
|  Can run containers that require Linux  |  Yes  | 
|  Can run workloads that require the Inferentia chip  |  No  | 
|  Can run workloads that require a GPU  |  No  | 
|  Can run workloads that require Arm processors  |  No  | 
|  Can run AWS [Bottlerocket](https://aws.amazon.com/bottlerocket/)   |  No  | 
|  Pods share a kernel runtime environment with other Pods  |  No – Each Pod has a dedicated kernel  | 
|  Pods share CPU, memory, storage, and network resources with other Pods.  |  No – Each Pod has dedicated resources and can be sized independently to maximize resource utilization.  | 
|  Pods can use more hardware and memory than requested in Pod specs  |  No – The Pod can be re-deployed using a larger vCPU and memory configuration though.  | 
|  Must deploy and manage Amazon EC2 instances  |  No  | 
|  Must secure, maintain, and patch the operating system of Amazon EC2 instances  |  No  | 
|  Can provide bootstrap arguments at deployment of a node, such as extra [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) arguments.  |  No  | 
|  Can assign IP addresses to Pods from a different CIDR block than the IP address assigned to the node.  |  No  | 
|  Can SSH into node  |  No – There’s no node host operating system to SSH to.  | 
|  Can deploy your own custom AMI to nodes  |  No  | 
|  Can deploy your own custom CNI to nodes  |  No  | 
|  Must update node AMI on your own  |  No  | 
|  Must update node Kubernetes version on your own  |  No – You don’t manage nodes.  | 
|  Can use Amazon EBS storage with Pods  |  No  | 
|  Can use Amazon EFS storage with Pods  |   [Yes](efs-csi.md)   | 
|  Can use Amazon FSx for Lustre storage with Pods  |  No  | 
|  Can use Network Load Balancer for services  |  Yes, with IP targets. For more information, see [Create a network load balancer](network-load-balancing.md#network-load-balancer).  | 
|  Pods can run in a public subnet  |  No  | 
|  Can assign different VPC security groups to individual Pods  |  Yes  | 
|  Can run Kubernetes DaemonSets  |  No  | 
|  Support `HostPort` and `HostNetwork` in the Pod manifest  |  No  | 
|   AWS Region availability  |   [Some Amazon EKS supported regions](https://docs.aws.amazon.com/general/latest/gr/eks.html)   | 
|  Can run containers on Amazon EC2 dedicated hosts  |  No  | 
|  Pricing  |  Cost of an individual Fargate memory and CPU configuration. Each Pod has its own cost. For more information, see [AWS Fargate pricing](https://aws.amazon.com/fargate/pricing/).  | 

# Get started with AWS Fargate for your cluster
<a name="fargate-getting-started"></a>

This topic describes how to get started running Pods on AWS Fargate with your Amazon EKS cluster.

If you restrict access to the public endpoint of your cluster using CIDR blocks, we recommend that you also enable private endpoint access. This way, Fargate Pods can communicate with the cluster. Without the private endpoint enabled, the CIDR blocks that you specify for public access must include the outbound sources from your VPC. For more information, see [Cluster API server endpoint](cluster-endpoint.md).

**Prerequisite**  
An existing cluster. If you don’t already have an Amazon EKS cluster, see [Get started with Amazon EKS](getting-started.md).

## Step 1: Ensure that existing nodes can communicate with Fargate Pods
<a name="fargate-gs-check-compatibility"></a>

If you’re working with a new cluster with no nodes, or a cluster with only managed node groups (see [Simplify node lifecycle with managed node groups](managed-node-groups.md)), you can skip to [Step 2: Create a Fargate Pod execution role](#fargate-sg-pod-execution-role).

Assume that you’re working with an existing cluster that already has nodes that are associated with it. Make sure that Pods on these nodes can communicate freely with the Pods that are running on Fargate. Pods that are running on Fargate are automatically configured to use the cluster security group for the cluster that they’re associated with. Ensure that any existing nodes in your cluster can send and receive traffic to and from the cluster security group. Managed node groups are automatically configured to use the cluster security group as well, so you don’t need to modify or check them for this compatibility (see [Simplify node lifecycle with managed node groups](managed-node-groups.md)).

For existing node groups that were created with `eksctl` or the Amazon EKS managed AWS CloudFormation templates, you can add the cluster security group to the nodes manually. Or, alternatively, you can modify the Auto Scaling group launch template for the node group to attach the cluster security group to the instances. For more information, see [Changing an instance’s security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#SG_Changing_Group_Membership) in the *Amazon VPC User Guide*.

You can check for a security group for your cluster in the AWS Management Console under the **Networking** section for the cluster. Or, you can do this using the following AWS CLI command. When using this command, replace `<my-cluster>` with the name of your cluster.

```
aws eks describe-cluster --name <my-cluster> --query cluster.resourcesVpcConfig.clusterSecurityGroupId
```
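The command above returns the group ID as a JSON-quoted string. The following sketch captures the bare ID and attaches it to an existing node; the instance and group IDs are hypothetical, and note that `aws ec2 modify-instance-attribute --groups` replaces the instance’s entire security group list, so include the groups that are already attached.

```shell
# Sample JSON-quoted output from the describe-cluster query (hypothetical ID)
sg_json='"sg-0123456789abcdef0"'

# Strip the quotes to get a bare security group ID
sg_id=$(printf '%s' "$sg_json" | tr -d '"')
echo "$sg_id"

# Attach it alongside the node's existing groups (hypothetical instance and group IDs):
# aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 \
#   --groups sg-existinggroupid "$sg_id"
```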

## Step 2: Create a Fargate Pod execution role
<a name="fargate-sg-pod-execution-role"></a>

When your cluster creates Pods on AWS Fargate, the components that run on the Fargate infrastructure must make calls to AWS APIs on your behalf. The Amazon EKS Pod execution role provides the IAM permissions to do this. To create an AWS Fargate Pod execution role, see [Amazon EKS Pod execution IAM role](pod-execution-role.md).

**Note**  
If you created your cluster with `eksctl` using the `--fargate` option, your cluster already has a Pod execution role that you can find in the IAM console with the pattern `eksctl-my-cluster-FargatePodExecutionRole-ABCDEFGHIJKL`. Similarly, if you use `eksctl` to create your Fargate profiles, `eksctl` creates your Pod execution role if one isn’t already created.
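If you create the role yourself with the AWS CLI instead, a minimal sketch looks like the following. It assumes the `eks-fargate-pods.amazonaws.com` service principal and the AWS managed policy `AmazonEKSFargatePodExecutionRolePolicy`; see [Amazon EKS Pod execution IAM role](pod-execution-role.md) for the authoritative steps, including any recommended condition keys.

```shell
# Trust policy that lets the Fargate service principal assume the role
cat > pod-execution-role-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks-fargate-pods.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the AWS managed policy (requires IAM permissions):
# aws iam create-role --role-name AmazonEKSFargatePodExecutionRole \
#   --assume-role-policy-document file://pod-execution-role-trust-policy.json
# aws iam attach-role-policy --role-name AmazonEKSFargatePodExecutionRole \
#   --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy
```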

## Step 3: Create a Fargate profile for your cluster
<a name="fargate-gs-create-profile"></a>

Before you can schedule Pods that are running on Fargate in your cluster, you must define a Fargate profile that specifies which Pods use Fargate when they’re launched. For more information, see [Define which Pods use AWS Fargate when launched](fargate-profile.md).

**Note**  
If you created your cluster with `eksctl` using the `--fargate` option, then a Fargate profile is already created for your cluster with selectors for all Pods in the `kube-system` and `default` namespaces. Use the following procedure to create Fargate profiles for any other namespaces you would like to use with Fargate.

You can create a Fargate profile using either of these tools:
+  [`eksctl`](#eksctl_fargate_profile_create) 
+  [AWS Management Console](#console_fargate_profile_create) 

### `eksctl`
<a name="eksctl_fargate_profile_create"></a>

This procedure requires `eksctl` version `0.215.0` or later. You can check your version with the following command:

```
eksctl version
```

For instructions on how to install or upgrade `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

**To create a Fargate profile with `eksctl`**

Create your Fargate profile with the following `eksctl` command, replacing every `<example value>` with your own values. You’re required to specify a namespace. However, the `--labels` option isn’t required.

```
eksctl create fargateprofile \
    --cluster <my-cluster> \
    --name <my-fargate-profile> \
    --namespace <my-kubernetes-namespace> \
    --labels <key=value>
```

You can use certain wildcards for `<my-kubernetes-namespace>` and `<key=value>` labels. For more information, see [Fargate profile wildcards](fargate-profile.md#fargate-profile-wildcards).

### AWS Management Console
<a name="console_fargate_profile_create"></a>

**To create a Fargate profile with the AWS Management Console**

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the cluster to create a Fargate profile for.

1. Choose the **Compute** tab.

1. Under **Fargate profiles**, choose **Add Fargate profile**.

1. On the **Configure Fargate profile** page, do the following:

   1. For **Name**, enter a name for your Fargate profile. The name must be unique.

   1. For **Pod execution role**, choose the Pod execution role to use with your Fargate profile. Only the IAM roles with the `eks-fargate-pods.amazonaws.com` service principal are shown. If you don’t see any roles listed, you must create one. For more information, see [Amazon EKS Pod execution IAM role](pod-execution-role.md).

   1. Modify the selected **Subnets** as needed.
**Note**  
Only private subnets are supported for Pods that are running on Fargate.

   1. For **Tags**, you can optionally tag your Fargate profile. These tags don’t propagate to other resources that are associated with the profile such as Pods.

   1. Choose **Next**.

1. On the **Configure Pod selection** page, do the following:

   1. For **Namespace**, enter a namespace to match for Pods.
      + You can use specific namespaces to match, such as `kube-system` or `default`.
      + You can use certain wildcards (for example, `prod-*`) to match multiple namespaces (for example, `prod-deployment` and `prod-test`). For more information, see [Fargate profile wildcards](fargate-profile.md#fargate-profile-wildcards).

   1. (Optional) Add Kubernetes labels to the selector that Pods in the specified namespace must match.
      + You can add the label `infrastructure: fargate` to the selector so that only Pods in the specified namespace that also have the `infrastructure: fargate` Kubernetes label match the selector.
      + You can use certain wildcards (for example, `key?: value?`) to match multiple labels (for example, `keya: valuea` and `keyb: valueb`). For more information, see [Fargate profile wildcards](fargate-profile.md#fargate-profile-wildcards).

   1. Choose **Next**.

1. On the **Review and create** page, review the information for your Fargate profile and choose **Create**.

## Step 4: Update CoreDNS
<a name="fargate-gs-coredns"></a>

By default, CoreDNS is configured to run on Amazon EC2 infrastructure on Amazon EKS clusters. If you want to *only* run your Pods on Fargate in your cluster, complete the following steps.

**Note**  
If you created your cluster with `eksctl` using the `--fargate` option, then you can skip to [Next steps](#fargate-gs-next-steps).

1. Create a Fargate profile for CoreDNS with the following command. Replace `<my-cluster>` with your cluster name, `<111122223333>` with your account ID, `<AmazonEKSFargatePodExecutionRole>` with the name of your Pod execution role, and `<000000000000000a>`, `<000000000000000b>`, and `<000000000000000c>` with the IDs of your private subnets. If you don’t have a Pod execution role, you must create one first (see [Step 2: Create a Fargate Pod execution role](#fargate-sg-pod-execution-role)).
**Important**  
The role ARN can’t include a [path](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-friendly-names) other than `/`. For example, if the name of your role is `development/apps/AmazonEKSFargatePodExecutionRole`, you need to change it to `AmazonEKSFargatePodExecutionRole` when specifying the ARN for the role. The format of the role ARN must be `arn:aws:iam::<111122223333>:role/<AmazonEKSFargatePodExecutionRole>`.

   ```
   aws eks create-fargate-profile \
       --fargate-profile-name coredns \
       --cluster-name <my-cluster> \
       --pod-execution-role-arn arn:aws:iam::<111122223333>:role/<AmazonEKSFargatePodExecutionRole> \
       --selectors namespace=kube-system,labels={k8s-app=kube-dns} \
       --subnets subnet-<000000000000000a> subnet-<000000000000000b> subnet-<000000000000000c>
   ```

1. Trigger a rollout of the `coredns` deployment.

   ```
   kubectl rollout restart -n kube-system deployment coredns
   ```
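To confirm that the restarted CoreDNS Pods landed on Fargate, check the node each Pod is scheduled on; Fargate nodes register with names that begin with `fargate-`. The following is a sketch, where the `node_type` helper is hypothetical and the sample input mimics `kubectl get pods -o wide` output (column 7 is NODE).

```shell
# Tag each Pod as running on a Fargate node or an EC2 node, based on the
# NODE column of `kubectl get pods -o wide` output; NR > 1 skips the header.
node_type() {
  awk 'NR > 1 { print $1, ($7 ~ /^fargate-/ ? "fargate" : "ec2") }'
}

# Example usage against a live cluster:
# kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide | node_type
```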

## Next steps
<a name="fargate-gs-next-steps"></a>
+ You can start migrating your existing applications to run on Fargate with the following workflow.

  1.  [Create a Fargate profile](fargate-profile.md#create-fargate-profile) that matches your application’s Kubernetes namespace and Kubernetes labels.

  1. Delete and re-create any existing Pods so that they’re scheduled on Fargate. Replace `<namespace>` and `<deployment-type>` with the values for your specific Pods.

     ```
     kubectl rollout restart -n <namespace> deployment <deployment-type>
     ```
+ Deploy the AWS Load Balancer Controller to allow Ingress objects for your Pods that are running on Fargate. For more information, see [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md).
+ You can use the [Adjust pod resources with Vertical Pod Autoscaler](vertical-pod-autoscaler.md) to set the initial correct size of CPU and memory for your Fargate Pods, and then use the [Scale pod deployments with Horizontal Pod Autoscaler](horizontal-pod-autoscaler.md) to scale those Pods. If you want the Vertical Pod Autoscaler to automatically re-deploy Pods to Fargate with higher CPU and memory combinations, set the Vertical Pod Autoscaler’s mode to either `Auto` or `Recreate` to ensure correct functionality. For more information, see the [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#quick-start) documentation on GitHub.
+ You can set up the [AWS Distro for OpenTelemetry](https://aws.amazon.com/otel) (ADOT) collector for application monitoring by following [these instructions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-EKS-otel.html).

# Define which Pods use AWS Fargate when launched
<a name="fargate-profile"></a>

Before you schedule Pods on Fargate in your cluster, you must define at least one Fargate profile that specifies which Pods use Fargate when launched.

As an administrator, you can use a Fargate profile to declare which Pods run on Fargate. You can do this through the profile’s selectors. You can add up to five selectors to each profile. Each selector must contain a namespace. The selector can also include labels. The label field consists of multiple optional key-value pairs. Pods that match a selector are scheduled on Fargate. Pods are matched using a namespace and the labels that are specified in the selector. If a namespace selector is defined without labels, Amazon EKS attempts to schedule all the Pods that run in that namespace onto Fargate using the profile. If a to-be-scheduled Pod matches any of the selectors in the Fargate profile, then that Pod is scheduled on Fargate.

If a Pod matches multiple Fargate profiles, you can specify which profile a Pod uses by adding the following Kubernetes label to the Pod specification: `eks.amazonaws.com/fargate-profile: my-fargate-profile`. The Pod must match a selector in that profile to be scheduled onto Fargate. Kubernetes affinity/anti-affinity rules do not apply and aren’t necessary with Amazon EKS Fargate Pods.
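For example, a Pod spec that pins its scheduling to a specific profile might carry the label like this (the Pod, namespace, and profile names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    # Pin this Pod to one specific Fargate profile
    eks.amazonaws.com/fargate-profile: my-fargate-profile
spec:
  containers:
  - name: my-app
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
```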

When you create a Fargate profile, you must specify a Pod execution role. This execution role is for the Amazon EKS components that run on the Fargate infrastructure using the profile. It’s added to the cluster’s Kubernetes [Role Based Access Control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (RBAC) for authorization. That way, the `kubelet` that runs on the Fargate infrastructure can register with your Amazon EKS cluster and appear in your cluster as a node. The Pod execution role also provides IAM permissions to the Fargate infrastructure to allow read access to Amazon ECR image repositories. For more information, see [Amazon EKS Pod execution IAM role](pod-execution-role.md).

Fargate profiles can’t be changed. However, you can create a new updated profile to replace an existing profile, and then delete the original.

**Note**  
Any Pods that are running using a Fargate profile are stopped and put into a pending state when the profile is deleted.

If any Fargate profiles in a cluster are in the `DELETING` status, you must wait until after the Fargate profile is deleted before you create other profiles in that cluster.

**Note**  
Fargate does not currently support Kubernetes [topologySpreadConstraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/).

Amazon EKS and Fargate spread Pods across each of the subnets that are defined in the Fargate profile. However, you might end up with an uneven spread. If you must have an even spread, use two Fargate profiles. Even spread is important in scenarios where you want to deploy two replicas and don’t want any downtime. We recommend that each profile has only one subnet.

## Fargate profile components
<a name="fargate-profile-components"></a>

The following components are contained in a Fargate profile.

 **Pod execution role**   
When your cluster creates Pods on AWS Fargate, the `kubelet` that’s running on the Fargate infrastructure must make calls to AWS APIs on your behalf. For example, it needs to make calls to pull container images from Amazon ECR. The Amazon EKS Pod execution role provides the IAM permissions to do this.  
When you create a Fargate profile, you must specify a Pod execution role to use with your Pods. This role is added to the cluster’s Kubernetes [Role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (RBAC) for authorization. This is so that the `kubelet` that’s running on the Fargate infrastructure can register with your Amazon EKS cluster and appear in your cluster as a node. For more information, see [Amazon EKS Pod execution IAM role](pod-execution-role.md).

 **Subnets**   
The IDs of subnets to launch Pods into that use this profile. At this time, Pods that are running on Fargate aren’t assigned public IP addresses. Therefore, only private subnets with no direct route to an Internet Gateway are accepted for this parameter.

 **Selectors**   
The selectors to match for Pods to use this Fargate profile. You can specify up to five selectors in a Fargate profile. The selectors have the following components:  
+  **Namespace** – You must specify a namespace for a selector. The selector only matches Pods that are created in this namespace. However, you can create multiple selectors to target multiple namespaces.
+  **Labels** – You can optionally specify Kubernetes labels to match for the selector. The selector only matches Pods that have all of the labels that are specified in the selector.

## Fargate profile wildcards
<a name="fargate-profile-wildcards"></a>

In addition to characters allowed by Kubernetes, you’re allowed to use `*` and `?` in the selector criteria for namespaces, label keys, and label values:
+  `*` represents none, one, or multiple characters. For example, `prod*` can represent `prod` and `prod-metrics`.
+  `?` represents a single character (for example, `value?` can represent `valuea`). However, it can’t represent `value` and `value-a`, because `?` can only represent exactly one character.

These wildcard characters can be used in any position and in combination (for example, `prod*`, `*dev`, and `frontend*?`). Other wildcards and forms of pattern matching, such as regular expressions, aren’t supported.

If there are multiple matching profiles for the namespace and labels in the Pod spec, Fargate picks the profile based on alphanumeric sorting by profile name. For example, if both profile A (with the name `beta-workload`) and profile B (with the name `prod-workload`) have matching selectors for the Pods to be launched, Fargate picks profile A (`beta-workload`) for the Pods. The Pods are labeled with profile A (for example, `eks.amazonaws.com/fargate-profile=beta-workload`).
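Both rules can be sketched in shell, assuming that shell glob patterns behave like Fargate’s `*` and `?` wildcards (an unquoted variable in a `case` pattern is matched as a glob):

```shell
# Return 0 if value ($1) matches pattern ($2) under glob semantics
matches() {
  case "$1" in
    $2) return 0 ;;
    *)  return 1 ;;
  esac
}

matches prod-metrics 'prod*' && echo 'prod* matches prod-metrics'
matches value 'value?' || echo 'value? does not match value'

# When several profiles match, the first name in alphanumeric order wins:
printf '%s\n' prod-workload beta-workload | sort | head -n 1
```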

If you want to migrate existing Fargate Pods to new profiles that use wildcards, there are two ways to do so:
+ Create a new profile with matching selectors, then delete the old profiles. Pods labeled with old profiles are rescheduled to new matching profiles.
+ If you want to migrate workloads but aren’t sure what Fargate labels are on each Fargate Pod, you can use the following method. Create a new profile with a name that sorts alphanumerically first among the profiles on the same cluster. Then, recycle the Fargate Pods that need to be migrated to new profiles.

## Create a Fargate profile
<a name="create-fargate-profile"></a>

This section describes how to create a Fargate profile. You also must have created a Pod execution role to use for your Fargate profile. For more information, see [Amazon EKS Pod execution IAM role](pod-execution-role.md). Pods that are running on Fargate are only supported on private subnets with [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) access to AWS services, but not a direct route to an Internet Gateway. Therefore, your cluster’s VPC must have private subnets available.

You can create a profile with the following:
+  [`eksctl`](#eksctl_create_a_fargate_profile) 
+  [AWS Management Console](#console_create_a_fargate_profile) 

### `eksctl`
<a name="eksctl_create_a_fargate_profile"></a>

**To create a Fargate profile with `eksctl`**

Create your Fargate profile with the following `eksctl` command, replacing every example value with your own values. You’re required to specify a namespace. However, the `--labels` option isn’t required.

```
eksctl create fargateprofile \
    --cluster my-cluster \
    --name my-fargate-profile \
    --namespace my-kubernetes-namespace \
    --labels key=value
```

You can use certain wildcards for `my-kubernetes-namespace` and `key=value` labels. For more information, see [Fargate profile wildcards](#fargate-profile-wildcards).

### AWS Management Console
<a name="console_create_a_fargate_profile"></a>

**To create a Fargate profile with the AWS Management Console**

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the cluster to create a Fargate profile for.

1. Choose the **Compute** tab.

1. Under **Fargate profiles**, choose **Add Fargate profile**.

1. On the **Configure Fargate profile** page, do the following:

   1. For **Name**, enter a unique name for your Fargate profile, such as `my-profile`.

   1. For **Pod execution role**, choose the Pod execution role to use with your Fargate profile. Only the IAM roles with the `eks-fargate-pods.amazonaws.com` service principal are shown. If you don’t see any roles listed, you must create one. For more information, see [Amazon EKS Pod execution IAM role](pod-execution-role.md).

   1. Modify the selected **Subnets** as needed.
**Note**  
Only private subnets are supported for Pods that are running on Fargate.

   1. For **Tags**, you can optionally tag your Fargate profile. These tags don’t propagate to other resources that are associated with the profile, such as Pods.

   1. Choose **Next**.

1. On the **Configure Pod selection** page, do the following:

   1. For **Namespace**, enter a namespace to match for Pods.
      + You can use specific namespaces to match, such as `kube-system` or `default`.
      + You can use certain wildcards (for example, `prod-*`) to match multiple namespaces (for example, `prod-deployment` and `prod-test`). For more information, see [Fargate profile wildcards](#fargate-profile-wildcards).

   1. (Optional) Add Kubernetes labels to the selector that Pods in the specified namespace must match.
      + You can add the label `infrastructure: fargate` to the selector so that only Pods in the specified namespace that also have the `infrastructure: fargate` Kubernetes label match the selector.
      + You can use certain wildcards (for example, `key?: value?`) to match multiple labels (for example, `keya: valuea` and `keyb: valueb`). For more information, see [Fargate profile wildcards](#fargate-profile-wildcards).

   1. Choose **Next**.

1. On the **Review and create** page, review the information for your Fargate profile and choose **Create**.
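
If you prefer the AWS CLI, an equivalent profile can be created with `aws eks create-fargate-profile`. In the following sketch, the names, subnet ID, account ID, and role ARN are placeholders; replace them with your own values.

```
aws eks create-fargate-profile \
    --cluster-name my-cluster \
    --fargate-profile-name my-profile \
    --pod-execution-role-arn arn:aws:iam::111122223333:role/AmazonEKSFargatePodExecutionRole \
    --subnets subnet-0123456789abcdef0 \
    --selectors namespace=my-kubernetes-namespace,labels={key=value}
```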

# Delete a Fargate profile
<a name="delete-fargate-profile"></a>

This topic describes how to delete a Fargate profile. When you delete a Fargate profile, any Pods that were scheduled onto Fargate with the profile are deleted. If those Pods match another Fargate profile, then they’re scheduled on Fargate with that profile. If they no longer match any Fargate profiles, then they aren’t scheduled onto Fargate and might remain as pending.

Only one Fargate profile in a cluster can be in the `DELETING` status at a time. Wait for a Fargate profile to finish deleting before you can delete any other profiles in that cluster.
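
To check whether a profile has finished deleting, you can query its status (the cluster and profile names below are placeholders). The call returns an error once the profile no longer exists.

```
aws eks describe-fargate-profile \
    --cluster-name my-cluster \
    --fargate-profile-name my-profile \
    --query fargateProfile.status
```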

You can delete a profile with any of the following tools:
+  [`eksctl`](#eksctl_delete_a_fargate_profile) 
+  [AWS Management Console](#console_delete_a_fargate_profile) 
+  [AWS CLI](#awscli_delete_a_fargate_profile) 

## `eksctl`
<a name="eksctl_delete_a_fargate_profile"></a>

 **Delete a Fargate profile with `eksctl`**

Use the following command to delete a profile from a cluster. Replace every *example value* with your own values.

```
eksctl delete fargateprofile --name my-profile --cluster my-cluster
```

## AWS Management Console
<a name="console_delete_a_fargate_profile"></a>

 **Delete a Fargate profile with the AWS Management Console**

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, choose **Clusters**. In the list of clusters, choose the cluster that you want to delete the Fargate profile from.

1. Choose the **Compute** tab.

1. Choose the Fargate profile to delete, and then choose **Delete**.

1. On the **Delete Fargate profile** page, enter the name of the profile, and then choose **Delete**.

## AWS CLI
<a name="awscli_delete_a_fargate_profile"></a>

 **Delete a Fargate profile with the AWS CLI**

Use the following command to delete a profile from a cluster. Replace every *example value* with your own values.

```
aws eks delete-fargate-profile --fargate-profile-name my-profile --cluster-name my-cluster
```

# Understand Fargate Pod configuration details
<a name="fargate-pod-configuration"></a>

This section describes some of the unique Pod configuration details for running Kubernetes Pods on AWS Fargate.

## Pod CPU and memory
<a name="fargate-cpu-and-memory"></a>

With Kubernetes, you can define requests, the minimum vCPU and memory resources that are allocated to each container in a Pod. Pods are scheduled by Kubernetes to ensure that at least the requested resources for each Pod are available on the compute resource. For more information, see [Managing compute resources for containers](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) in the Kubernetes documentation.

**Note**  
Since Amazon EKS Fargate runs only one Pod per node, the scenario of evicting Pods when node resources run low doesn’t occur. All Amazon EKS Fargate Pods run with guaranteed priority, so the requested CPU and memory must be equal to the limit for all of the containers. For more information, see [Configure Quality of Service for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) in the Kubernetes documentation.
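
For example, a container spec along these lines (the names and sizes are illustrative) satisfies the guaranteed quality of service requirement because its requests equal its limits:

```
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-container
      image: my-image
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: 500m
          memory: 1Gi
```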

When Pods are scheduled on Fargate, the vCPU and memory reservations within the Pod specification determine how much CPU and memory to provision for the Pod.
+ The maximum request out of any Init containers is used to determine the Init request vCPU and memory requirements.
+ Requests for all long-running containers are added up to determine the long-running request vCPU and memory requirements.
+ The larger of the previous two values is chosen for the vCPU and memory request to use for your Pod.
+ Fargate adds 256 MB to each Pod’s memory reservation for the required Kubernetes components (`kubelet`, `kube-proxy`, and `containerd`).

Fargate rounds up to the following compute configuration that most closely matches the sum of vCPU and memory requests in order to ensure Pods always have the resources that they need to run.

If you don’t specify a vCPU and memory combination, then the smallest available combination is used (.25 vCPU and 0.5 GB memory).

The following table shows the vCPU and memory combinations that are available for Pods running on Fargate.


| vCPU value | Memory value | 
| --- | --- | 
|  .25 vCPU  |  0.5 GB, 1 GB, 2 GB  | 
|  .5 vCPU  |  1 GB, 2 GB, 3 GB, 4 GB  | 
|  1 vCPU  |  2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB  | 
|  2 vCPU  |  Between 4 GB and 16 GB in 1-GB increments  | 
|  4 vCPU  |  Between 8 GB and 30 GB in 1-GB increments  | 
|  8 vCPU  |  Between 16 GB and 60 GB in 4-GB increments  | 
|  16 vCPU  |  Between 32 GB and 120 GB in 8-GB increments  | 

The additional memory reserved for the Kubernetes components can cause a Fargate task with more vCPUs than requested to be provisioned. For example, a request for 1 vCPU and 8 GB memory will have 256 MB added to its memory request, and will provision a Fargate task with 2 vCPUs and 9 GB memory, since no task with 1 vCPU and 9 GB memory is available.
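
The rounding in this example can be sketched with shell arithmetic. The tier boundaries come from the table above; this is an illustration of the sizing rules, not the exact provisioning algorithm.

```
# Pod requests 1 vCPU and 8 GB (8192 MB) of memory.
requested_mem_mb=8192
overhead_mb=256                                # added for kubelet, kube-proxy, and containerd
total_mb=$((requested_mem_mb + overhead_mb))   # 8448 MB
# Round memory up to the next whole GB.
provisioned_gb=$(( (total_mb + 1023) / 1024 )) # 9 GB
# 9 GB exceeds the 8 GB maximum of the 1 vCPU tier, so the 2 vCPU tier is used.
if [ "$provisioned_gb" -gt 8 ]; then provisioned_vcpu=2; else provisioned_vcpu=1; fi
echo "${provisioned_vcpu} vCPU, ${provisioned_gb} GB"   # 2 vCPU, 9 GB
```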

There is no correlation between the size of the Pod running on Fargate and the node size reported by Kubernetes with `kubectl get nodes`. The reported node size is often larger than the Pod’s capacity. You can verify Pod capacity with the following command. Replace *default* with your Pod’s namespace and *pod-name* with the name of your Pod.

```
kubectl describe pod --namespace default pod-name
```

An example output is as follows.

```
[...]
annotations:
    CapacityProvisioned: 0.25vCPU 0.5GB
[...]
```

The `CapacityProvisioned` annotation represents the enforced Pod capacity and it determines the cost of your Pod running on Fargate. For pricing information for the compute configurations, see [AWS Fargate Pricing](https://aws.amazon.com/fargate/pricing/).

## Fargate storage
<a name="fargate-storage"></a>

A Pod running on Fargate automatically mounts an Amazon EFS file system, without needing manual driver installation steps. You can’t use dynamic persistent volume provisioning with Fargate nodes, but you can use static provisioning. For more information, see [Amazon EFS CSI Driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/README.md) on GitHub.

When provisioned, each Pod running on Fargate receives a default 20 GiB of ephemeral storage. This type of storage is deleted after a Pod stops. New Pods launched onto Fargate have encryption of the ephemeral storage volume enabled by default. The ephemeral Pod storage is encrypted with an AES-256 encryption algorithm using AWS Fargate managed keys.

**Note**  
The default usable storage for Amazon EKS Pods that run on Fargate is less than 20 GiB. This is because some space is used by the `kubelet` and other Kubernetes modules that are loaded inside the Pod.

You can increase the total amount of ephemeral storage up to a maximum of 175 GiB. To configure the size with Kubernetes, specify the `ephemeral-storage` resource request for each container in a Pod. When Kubernetes schedules Pods, it ensures that the sum of the resource requests for each Pod is less than the capacity of the Fargate task. For more information, see [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) in the Kubernetes documentation.
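
For example, a container can request additional ephemeral storage with a resources block like the following (the 100 GiB size is illustrative):

```
    resources:
      requests:
        ephemeral-storage: 100Gi
      limits:
        ephemeral-storage: 100Gi
```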

Amazon EKS Fargate provisions more ephemeral storage than requested for the purposes of system use. For example, a request of 100 GiB will provision a Fargate task with 115 GiB ephemeral storage.

# Set actions for AWS Fargate OS patching events
<a name="fargate-pod-patching"></a>

Amazon EKS periodically patches the OS for AWS Fargate nodes to keep them secure. As part of the patching process, we recycle the nodes to install OS patches. Updates are attempted in a way that creates the least impact on your services. However, if Pods aren’t successfully evicted, there are times when they must be deleted. The following are actions that you can take to minimize potential disruptions:
+ Set appropriate Pod disruption budgets (PDBs) to control the number of Pods that are down simultaneously.
+ Create Amazon EventBridge rules to handle failed evictions before the Pods are deleted.
+ Manually restart your affected pods before the eviction date posted in the notification you receive.
+ Create a notification configuration in AWS User Notifications.

Amazon EKS works closely with the Kubernetes community to make bug fixes and security patches available as quickly as possible. All Fargate Pods start on the most recent Kubernetes patch version, which is available from Amazon EKS for the Kubernetes version of your cluster. If you have a Pod with an older patch version, Amazon EKS might recycle it to update it to the latest version. This ensures that your Pods are equipped with the latest security updates. That way, if there’s a critical [Common Vulnerabilities and Exposures](https://cve.mitre.org/) (CVE) issue, you’re kept up to date to reduce security risks.

When the AWS Fargate OS is updated, Amazon EKS will send you a notification that includes your affected resources and the date of upcoming pod evictions. If the provided eviction date is inconvenient, you have the option to manually restart your affected pods before the eviction date posted in the notification. Any pods created before the time at which you receive the notification are subject to eviction. Refer to the [Kubernetes Documentation](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_restart) for further instructions on how to manually restart your pods.
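
For example, Pods managed by a Deployment can be proactively restarted before the eviction date with the following command (the Deployment and namespace names are placeholders):

```
kubectl rollout restart deployment my-deployment --namespace my-namespace
```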

To limit the number of Pods that are down at one time when Pods are recycled, you can set Pod disruption budgets (PDBs). You can use PDBs to define minimum availability based on the requirements of each of your applications while still allowing updates to occur. Your PDB’s minimum availability must be less than 100%. For more information, see [Specifying a Disruption Budget for your Application](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) in the Kubernetes Documentation.
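
A minimal PDB along these lines (the names and the 50% figure are illustrative) keeps at least half of the matching Pods available while the rest are recycled:

```
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 50%
  selector:
    matchLabels:
      app: my-app
```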

Amazon EKS uses the [Eviction API](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#eviction-api) to safely drain the Pod while respecting the PDBs that you set for the application. Pods are evicted by Availability Zone to minimize impact. If the eviction succeeds, the new Pod gets the latest patch and no further action is required.

When the eviction for a Pod fails, Amazon EKS sends an event to your account with details about the Pods that failed eviction. You can act on the message before the scheduled termination time. The specific time varies based on the urgency of the patch. When it’s time, Amazon EKS attempts to evict the Pods again. However, this time a new event isn’t sent if the eviction fails. If the eviction fails again, your existing Pods are deleted periodically so that the new Pods can have the latest patch.

The following is a sample event received when the Pod eviction fails. It contains details about the cluster, Pod name, Pod namespace, Fargate profile, and the scheduled termination time.

```
{
    "version": "0",
    "id": "12345678-90ab-cdef-0123-4567890abcde",
    "detail-type": "EKS Fargate Pod Scheduled Termination",
    "source": "aws.eks",
    "account": "111122223333",
    "time": "2021-06-27T12:52:44Z",
    "region": "region-code",
    "resources": [
        "default/my-database-deployment"
    ],
    "detail": {
        "clusterName": "my-cluster",
        "fargateProfileName": "my-fargate-profile",
        "podName": "my-pod-name",
        "podNamespace": "default",
        "evictErrorMessage": "Cannot evict pod as it would violate the pod's disruption budget",
        "scheduledTerminationTime": "2021-06-30T12:52:44.832Z[UTC]"
    }
}
```

In addition, having multiple PDBs associated with a Pod can cause an eviction failure event. This event returns the following error message.

```
"evictErrorMessage": "This pod has multiple PodDisruptionBudget, which the eviction subresource does not support",
```

You can create a desired action based on this event. For example, you can adjust your Pod disruption budget (PDB) to control how the Pods are evicted. More specifically, suppose that you start with a PDB that specifies the target percentage of Pods that are available. Before your Pods are force terminated during an upgrade, you can adjust the PDB to a different percentage of Pods. To receive this event, you must create an Amazon EventBridge rule in the AWS account and AWS Region that the cluster belongs to. The rule must use the following **Custom pattern**. For more information, see [Creating Amazon EventBridge rules that react to events](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html) in the *Amazon EventBridge User Guide*.

```
{
  "source": ["aws.eks"],
  "detail-type": ["EKS Fargate Pod Scheduled Termination"]
}
```
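
For example, you can create the rule from the AWS CLI with the same pattern (the rule name is a placeholder):

```
aws events put-rule \
    --name eks-fargate-pod-termination \
    --event-pattern '{"source":["aws.eks"],"detail-type":["EKS Fargate Pod Scheduled Termination"]}'
```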

You can set a suitable target for the event to capture it. For a complete list of available targets, see [Amazon EventBridge targets](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-targets.html) in the *Amazon EventBridge User Guide*. You can also create a notification configuration in AWS User Notifications. When using the AWS Management Console to create the notification, under **Event Rules**, choose **Elastic Kubernetes Service (EKS)** for **AWS service name** and **EKS Fargate Pod Scheduled Termination** for **Event type**. For more information, see [Getting started with AWS User Notifications](https://docs.aws.amazon.com/notifications/latest/userguide/getting-started.html) in the *AWS User Notifications User Guide*.

See [FAQs: Fargate Pod eviction notice](https://repost.aws/knowledge-center/fargate-pod-eviction-notice) on *AWS re:Post* for frequently asked questions about EKS Pod evictions.

# Collect AWS Fargate app and usage metrics
<a name="monitoring-fargate-usage"></a>

You can collect system metrics and CloudWatch usage metrics for AWS Fargate.

## Application metrics
<a name="fargate-application-metrics"></a>

For applications running on Amazon EKS and AWS Fargate, you can use the AWS Distro for OpenTelemetry (ADOT). ADOT allows you to collect system metrics and send them to CloudWatch Container Insights dashboards. To get started with ADOT for applications running on Fargate, see [Using CloudWatch Container Insights with AWS Distro for OpenTelemetry](https://aws-otel.github.io/docs/getting-started/container-insights) in the ADOT documentation.

## Usage metrics
<a name="fargate-usage-metrics"></a>

You can use CloudWatch usage metrics to provide visibility into your account’s usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards.

 AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about Fargate service quotas, see [View and manage Amazon EKS and Fargate service quotas](service-quotas.md).

AWS Fargate publishes the following metrics in the `AWS/Usage` namespace.


| Metric | Description | 
| --- | --- | 
|   `ResourceCount`   |  The total number of the specified resource running on your account. The resource is defined by the dimensions associated with the metric.  | 

The following dimensions are used to refine the usage metrics that are published by AWS Fargate.


| Dimension | Description | 
| --- | --- | 
|   `Service`   |  The name of the AWS service containing the resource. For AWS Fargate usage metrics, the value for this dimension is `Fargate`.  | 
|   `Type`   |  The type of entity that’s being reported. Currently, the only valid value for AWS Fargate usage metrics is `Resource`.  | 
|   `Resource`   |  The type of resource that’s running. Currently, AWS Fargate returns information on your Fargate On-Demand usage. The resource value for Fargate On-Demand usage is `OnDemand`. Note: Fargate On-Demand usage combines Amazon EKS Pods running on Fargate, Amazon ECS tasks using the Fargate launch type, and Amazon ECS tasks using the `FARGATE` capacity provider.  | 
|   `Class`   |  The class of resource being tracked. Currently, AWS Fargate doesn’t use the class dimension.  | 

### Creating a CloudWatch alarm to monitor Fargate resource usage metrics
<a name="service-quota-alarm"></a>

 AWS Fargate provides CloudWatch usage metrics that correspond to the AWS service quotas for Fargate On-Demand resource usage. In the Service Quotas console, you can visualize your usage on a graph. You can also configure alarms that alert you when your usage approaches a service quota. For more information, see [Collect AWS Fargate app and usage metrics](#monitoring-fargate-usage).

Use the following steps to create a CloudWatch alarm based on the Fargate resource usage metrics.

1. Open the Service Quotas console at https://console.aws.amazon.com/servicequotas/.

1. In the left navigation pane, choose **AWS services**.

1. From the **AWS services** list, search for and select **AWS Fargate**.

1. In the **Service quotas** list, choose the Fargate usage quota you want to create an alarm for.

1. In the Amazon CloudWatch alarms section, choose **Create**.

1. For **Alarm threshold**, choose the percentage of your applied quota value that you want to set as the alarm value.

1. For **Alarm name**, enter a name for the alarm and then choose **Create**.
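
You can also create a comparable alarm from the AWS CLI. The following sketch (the alarm name, threshold, Region, account ID, and SNS topic are placeholders) alarms when Fargate On-Demand resource usage exceeds a fixed count:

```
aws cloudwatch put-metric-alarm \
    --alarm-name fargate-ondemand-usage \
    --namespace AWS/Usage \
    --metric-name ResourceCount \
    --dimensions Name=Service,Value=Fargate Name=Type,Value=Resource Name=Resource,Value=OnDemand \
    --statistic Maximum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 900 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:region-code:111122223333:my-topic
```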

# Start AWS Fargate logging for your cluster
<a name="fargate-logging"></a>

Amazon EKS on Fargate offers a built-in log router based on Fluent Bit. This means that you don’t explicitly run a Fluent Bit container as a sidecar, but Amazon runs it for you. All that you have to do is configure the log router. The configuration happens through a dedicated `ConfigMap` that must meet the following criteria:
+ Named `aws-logging` 
+ Created in a dedicated namespace called `aws-observability` 
+ Can’t exceed 5300 characters.

Once you’ve created the `ConfigMap`, Amazon EKS on Fargate automatically detects it and configures the log router with it. Fargate uses a version of AWS for Fluent Bit, an upstream-compliant distribution of Fluent Bit managed by AWS. For more information, see [AWS for Fluent Bit](https://github.com/aws/aws-for-fluent-bit) on GitHub.

The log router allows you to use the breadth of services at AWS for log analytics and storage. You can stream logs from Fargate directly to Amazon CloudWatch or Amazon OpenSearch Service. You can also stream logs to destinations such as [Amazon S3](https://aws.amazon.com/s3/), [Amazon Kinesis Data Streams](https://aws.amazon.com/kinesis/data-streams/), and partner tools through [Amazon Data Firehose](https://aws.amazon.com/kinesis/data-firehose/).

**Prerequisites**
+ An existing Fargate profile that specifies an existing Kubernetes namespace that you deploy Fargate Pods to. For more information, see [Step 3: Create a Fargate profile for your cluster](fargate-getting-started.md#fargate-gs-create-profile).
+ An existing Fargate Pod execution role. For more information, see [Step 2: Create a Fargate Pod execution role](fargate-getting-started.md#fargate-sg-pod-execution-role).

## Log router configuration
<a name="fargate-logging-log-router-configuration"></a>

**Important**  
For logs to be successfully published, there must be network access from the VPC that your cluster is in to the log destination. This mainly concerns users customizing egress rules for their VPC. For an example using CloudWatch, see [Using CloudWatch Logs with interface VPC endpoints](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html) in the *Amazon CloudWatch Logs User Guide*.

In the following steps, replace every *example value* with your own values.

1. Create a dedicated Kubernetes namespace named `aws-observability`.

   1. Save the following contents to a file named `aws-observability-namespace.yaml` on your computer. The value for `name` must be `aws-observability` and the `aws-observability: enabled` label is required.

      ```
      kind: Namespace
      apiVersion: v1
      metadata:
        name: aws-observability
        labels:
          aws-observability: enabled
      ```

   1. Create the namespace.

      ```
      kubectl apply -f aws-observability-namespace.yaml
      ```

1. Create a `ConfigMap` with a `Fluent Conf` data value to ship container logs to a destination. Fluent Conf is the configuration language of Fluent Bit, a fast and lightweight log processor, and is used to route container logs to a log destination of your choice. For more information, see [Configuration File](https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/classic-mode/configuration-file) in the Fluent Bit documentation.
**Important**  
The main sections included in a typical `Fluent Conf` are `Service`, `Input`, `Filter`, and `Output`. The Fargate log router, however, only accepts:  
+ The `Filter` and `Output` sections.
+ A `Parser` section.
If you provide any other sections, they are rejected.

   The Fargate log router manages the `Service` and `Input` sections. It has the following `Input` section, which can’t be modified and isn’t needed in your `ConfigMap`. However, you can get insights from it, such as the memory buffer limit and the tag applied for logs.

   ```
   [INPUT]
       Name tail
       Buffer_Max_Size 66KB
       DB /var/log/flb_kube.db
       Mem_Buf_Limit 45MB
       Path /var/log/containers/*.log
       Read_From_Head On
       Refresh_Interval 10
       Rotate_Wait 30
       Skip_Long_Lines On
       Tag kube.*
   ```

   When creating the `ConfigMap`, take into account the following rules that Fargate uses to validate fields:
   +  `[FILTER]`, `[OUTPUT]`, and `[PARSER]` must be specified under their corresponding keys. For example, `[FILTER]` must be under `filters.conf`. You can have one or more `[FILTER]`s under `filters.conf`. The `[OUTPUT]` and `[PARSER]` sections must likewise be under their corresponding keys. By specifying multiple `[OUTPUT]` sections, you can route your logs to several destinations at the same time.
   + Fargate validates the required keys for each section. `Name` and `match` are required for each `[FILTER]` and `[OUTPUT]`. `Name` and `format` are required for each `[PARSER]`. The keys are case-insensitive.
   + Environment variables such as `${ENV_VAR}` aren’t allowed in the `ConfigMap`.
   + Indentation must be consistent for directives and key-value pairs within each of `filters.conf`, `output.conf`, and `parsers.conf`. Key-value pairs must be indented more than directives.
   + Fargate validates against the following supported filters: `grep`, `parser`, `record_modifier`, `rewrite_tag`, `throttle`, `nest`, `modify`, and `kubernetes`.
   + Fargate validates against the following supported output: `es`, `firehose`, `kinesis_firehose`, `cloudwatch`, `cloudwatch_logs`, and `kinesis`.
   + At least one supported `Output` plugin has to be provided in the `ConfigMap` to enable logging. `Filter` and `Parser` aren’t required to enable logging.

     You can also run Fluent Bit on Amazon EC2 using the desired configuration to troubleshoot any issues that arise from validation. Create your `ConfigMap` using one of the following examples.
**Important**  
Amazon EKS Fargate logging doesn’t support dynamic configuration of a `ConfigMap`. Any changes to a `ConfigMap` are applied to new Pods only. Changes aren’t applied to existing Pods.

     Create a `ConfigMap` using the example for your desired log destination.
**Note**  
You can also use Amazon Kinesis Data Streams for your log destination. If you use Kinesis Data Streams, make sure that the pod execution role has been granted the `kinesis:PutRecords` permission. For more information, see Amazon Kinesis Data Streams [Permissions](https://docs.fluentbit.io/manual/pipeline/outputs/kinesis#permissions) in the *Fluent Bit: Official Manual*.  
**Example**  

------
#### [ CloudWatch ]

   You have two output options when using CloudWatch:
   +  [An output plugin written in C](https://docs.fluentbit.io/manual/v/1.5/pipeline/outputs/cloudwatch) 
   +  [An output plugin written in Golang](https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit) 

   The following example shows you how to use the `cloudwatch_logs` plugin to send logs to CloudWatch.

   1. Save the following contents to a file named `aws-logging-cloudwatch-configmap.yaml`. Replace *region-code* with the AWS Region that your cluster is in. The parameters under `[OUTPUT]` are required.

      ```
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: aws-logging
        namespace: aws-observability
      data:
        flb_log_cw: "false"  # Set to true to ship Fluent Bit process logs to CloudWatch.
        filters.conf: |
          [FILTER]
              Name parser
              Match *
              Key_name log
              Parser crio
          [FILTER]
              Name kubernetes
              Match kube.*
              Merge_Log On
              Keep_Log Off
              Buffer_Size 0
              Kube_Meta_Cache_TTL 300s
        output.conf: |
          [OUTPUT]
              Name cloudwatch_logs
              Match   kube.*
              region region-code
              log_group_name my-logs
              log_stream_prefix from-fluent-bit-
              log_retention_days 60
              auto_create_group true
        parsers.conf: |
          [PARSER]
              Name crio
              Format Regex
              Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
              Time_Key    time
              Time_Format %Y-%m-%dT%H:%M:%S.%L%z
      ```

   1. Apply the manifest to your cluster.

      ```
      kubectl apply -f aws-logging-cloudwatch-configmap.yaml
      ```

------
#### [ Amazon OpenSearch Service ]

   If you want to send logs to Amazon OpenSearch Service, you can use [es](https://docs.fluentbit.io/manual/v/1.5/pipeline/outputs/elasticsearch) output, which is a plugin written in C. The following example shows you how to use the plugin to send logs to OpenSearch.

   1. Save the following contents to a file named `aws-logging-opensearch-configmap.yaml`. Replace every *example value* with your own values.

      ```
      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: aws-logging
        namespace: aws-observability
      data:
        output.conf: |
          [OUTPUT]
            Name  es
            Match *
            Host  search-example-gjxdcilagiprbglqn42jsty66y.region-code.es.amazonaws.com
            Port  443
            Index example
            Type  example_type
            AWS_Auth On
            AWS_Region region-code
            tls   On
      ```

   1. Apply the manifest to your cluster.

      ```
      kubectl apply -f aws-logging-opensearch-configmap.yaml
      ```

------
#### [ Firehose ]

   You have two output options when sending logs to Firehose:
   +  [`kinesis_firehose`](https://docs.fluentbit.io/manual/pipeline/outputs/firehose) – An output plugin written in C.
   +  [firehose](https://github.com/aws/amazon-kinesis-firehose-for-fluent-bit) – An output plugin written in Golang.

     The following example shows you how to use the `kinesis_firehose` plugin to send logs to Firehose.

     1. Save the following contents to a file named `aws-logging-firehose-configmap.yaml`. Replace *region-code* with the AWS Region that your cluster is in.

        ```
        kind: ConfigMap
        apiVersion: v1
        metadata:
          name: aws-logging
          namespace: aws-observability
        data:
          output.conf: |
            [OUTPUT]
             Name  kinesis_firehose
             Match *
             region region-code
             delivery_stream my-stream-firehose
        ```

     1. Apply the manifest to your cluster.

        ```
        kubectl apply -f aws-logging-firehose-configmap.yaml
        ```

------

1. Set up permissions for the Fargate Pod execution role to send logs to your destination.

   1. Download the IAM policy for your destination to your computer.  
**Example**  

------
#### [ CloudWatch ]

      Download the CloudWatch IAM policy to your computer. You can also [view the policy](https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/cloudwatchlogs/permissions.json) on GitHub.

      ```
      curl -O https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/cloudwatchlogs/permissions.json
      ```

------
#### [ Amazon OpenSearch Service ]

      Download the OpenSearch IAM policy to your computer. You can also [view the policy](https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/amazon-elasticsearch/permissions.json) on GitHub.

      ```
      curl -O https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/amazon-elasticsearch/permissions.json
      ```

      Make sure that OpenSearch Dashboards' access control is configured properly. The `all_access` role in OpenSearch Dashboards needs to have the Fargate Pod execution role and the IAM role mapped to it. The same mapping must be done for the `security_manager` role. You can add the previous mappings by choosing **Menu**, then **Security**, then **Roles**, and then selecting the respective roles. For more information, see [How do I troubleshoot CloudWatch Logs so that it streams to my Amazon ES domain?](https://aws.amazon.com/tr/premiumsupport/knowledge-center/es-troubleshoot-cloudwatch-logs/).

------
#### [ Firehose ]

      Download the Firehose IAM policy to your computer. You can also [view the policy](https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/kinesis-firehose/permissions.json) on GitHub.

      ```
      curl -O https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/kinesis-firehose/permissions.json
      ```

------

   1. Create an IAM policy from the policy file that you downloaded.

      ```
      aws iam create-policy --policy-name eks-fargate-logging-policy --policy-document file://permissions.json
      ```

   1. Attach the IAM policy to the pod execution role specified for your Fargate profile with the following command. Replace *111122223333* with your account ID. Replace *AmazonEKSFargatePodExecutionRole* with your Pod execution role (for more information, see [Step 2: Create a Fargate Pod execution role](fargate-getting-started.md#fargate-sg-pod-execution-role)).

      ```
      aws iam attach-role-policy \
        --policy-arn arn:aws:iam::111122223333:policy/eks-fargate-logging-policy \
        --role-name AmazonEKSFargatePodExecutionRole
      ```

### Kubernetes filter support
<a name="fargate-logging-kubernetes-filter"></a>

The Fluent Bit Kubernetes filter allows you to add Kubernetes metadata to your log files. For more information about the filter, see [Kubernetes](https://docs.fluentbit.io/manual/pipeline/filters/kubernetes) in the Fluent Bit documentation. You can apply a filter using the API server endpoint.

```
filters.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Merge_Log           On
        Buffer_Size         0
        Kube_Meta_Cache_TTL 300s
```

**Important**  
 `Kube_URL`, `Kube_CA_File`, `Kube_Token_Command`, and `Kube_Token_File` are service-owned configuration parameters and must not be specified. Amazon EKS Fargate populates these values.
 `Kube_Meta_Cache_TTL` is how long Fluent Bit caches metadata before it queries the API server for the latest values. If `Kube_Meta_Cache_TTL` isn't specified, Amazon EKS Fargate applies a default value of 30 minutes to lessen the load on the API server.

### To ship Fluent Bit process logs to your account
<a name="ship-fluent-bit-process-logs"></a>

You can optionally ship Fluent Bit process logs to Amazon CloudWatch using the following `ConfigMap`. Shipping Fluent Bit process logs to CloudWatch requires additional log ingestion and storage costs. Replace *region-code* with the AWS Region that your cluster is in.

```
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  flb_log_cw: "true"  # Ships Fluent Bit process logs to CloudWatch.

  output.conf: |
    [OUTPUT]
        Name cloudwatch
        Match kube.*
        region region-code
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group true
```

The logs are in CloudWatch in the same AWS Region as the cluster. The log group name is `my-cluster-fluent-bit-logs`, and the Fluent Bit log stream name is `fluent-bit-pod-name-pod-namespace`.

**Note**  
Process logs are shipped only when the Fluent Bit process starts successfully. If Fluent Bit fails to start, the process logs are lost. You can ship process logs only to CloudWatch.
To debug log shipping issues, you can apply the previous `ConfigMap` to get the process logs. Fluent Bit failing to start is usually caused by your `ConfigMap` not being parsed or accepted by Fluent Bit at startup.

### To stop shipping Fluent Bit process logs
<a name="stop-fluent-bit-process-logs"></a>

Shipping Fluent Bit process logs to CloudWatch requires additional log ingestion and storage costs. To exclude process logs in an existing `ConfigMap` setup, do the following steps.

1. Locate the CloudWatch log group that was automatically created for your Amazon EKS cluster's Fluent Bit process logs when you enabled Fargate logging. It follows the format `my-cluster-fluent-bit-logs`.

1. Delete the existing CloudWatch log streams created for each Pod’s process logs in the CloudWatch log group.

1. Edit the `ConfigMap` and set `flb_log_cw: "false"`.

1. Restart any existing Pods in the cluster.
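
The steps above can be sketched as shell commands. This is a preview-mode sketch: with `DRY_RUN=echo`, each command is printed rather than executed, so you can review it first; set `DRY_RUN=""` to run the commands. The log stream name, deployment name, and namespace are example placeholders.

```shell
# Preview mode: prints each command instead of running it. Set DRY_RUN="" to execute.
DRY_RUN=echo
LOG_GROUP="my-cluster-fluent-bit-logs"

# Step 2: delete a process-log stream (repeat for each stream in the group).
$DRY_RUN aws logs delete-log-stream --log-group-name "$LOG_GROUP" \
  --log-stream-name "example-pod-stream"

# Step 3: set flb_log_cw to "false" without opening an editor.
$DRY_RUN kubectl patch configmap aws-logging -n aws-observability \
  --type merge -p '{"data":{"flb_log_cw":"false"}}'

# Step 4: restart existing workloads so replacement Pods pick up the change.
$DRY_RUN kubectl rollout restart deployment example-app -n example-namespace
```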

## Test application
<a name="fargate-logging-test-application"></a>

1. Deploy a sample Pod.

   1. Save the following contents to a file named `sample-app.yaml` on your computer.

      ```
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: sample-app
        namespace: same-namespace-as-your-fargate-profile
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
              - name: nginx
                image: nginx:latest
                ports:
                  - name: http
                    containerPort: 80
      ```

   1. Apply the manifest to the cluster.

      ```
      kubectl apply -f sample-app.yaml
      ```

1. View the NGINX logs using the destination(s) that you configured in the `ConfigMap`.
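
   For example, if you configured the CloudWatch destination shown earlier, you can tail the log group with the AWS CLI (version 2). This is a preview-mode sketch (set `DRY_RUN=""` to execute); the log group name and stream prefix come from the example `ConfigMap`.

   ```shell
   # Preview mode: prints the command instead of running it. Set DRY_RUN="" to execute.
   DRY_RUN=echo
   # Tail the last hour of NGINX logs from the example CloudWatch log group.
   $DRY_RUN aws logs tail fluent-bit-cloudwatch \
     --log-stream-name-prefix from-fluent-bit- --since 1h
   ```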

## Size considerations
<a name="fargate-logging-size-considerations"></a>

We suggest that you plan for up to 50 MB of memory for the log router. If you expect your application to generate logs at very high throughput, plan for up to 100 MB.

## Troubleshooting
<a name="fargate-logging-troubleshooting"></a>

To confirm whether the logging feature is enabled or disabled (for example, because of an invalid `ConfigMap`), and why, check your Pod events with `kubectl describe pod pod-name`. The output might include Pod events that clarify whether logging is enabled, such as the following example output.

```
[...]
Annotations:          CapacityProvisioned: 0.25vCPU 0.5GB
                      Logging: LoggingDisabled: LOGGING_CONFIGMAP_NOT_FOUND
[...]
Events:
  Type     Reason           Age        From                                                           Message
  ----     ------           ----       ----                                                           -------
  Warning  LoggingDisabled  <unknown>  fargate-scheduler                                              Disabled logging because aws-logging configmap was not found. configmap "aws-logging" not found
```

The Pod events are ephemeral, with a retention period that depends on your cluster's settings. The same `kubectl describe pod pod-name` output also includes the Pod's annotations, which indicate whether the logging feature is enabled or disabled and the reason.
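
To read just the logging annotation rather than the full `describe` output, you can use a JSONPath query. This is a sketch; it assumes the annotation key is `Logging`, as shown in the example output above, and uses preview mode (set `DRY_RUN=""` to execute).

```shell
# Preview mode: prints the command instead of running it. Set DRY_RUN="" to execute.
DRY_RUN=echo
# Print only the Pod's Logging annotation, which states whether logging
# is enabled or disabled and the reason.
$DRY_RUN kubectl get pod pod-name -o jsonpath='{.metadata.annotations.Logging}'
```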