

# AL2 on Amazon EC2
<a name="ec2"></a>

**Note**  
 AL2 is no longer the current version of Amazon Linux. AL2023 is the successor to AL2. For more information, see [Comparing AL2 and AL2023](https://docs.aws.amazon.com/linux/al2023/ug/compare-with-al2.html) and the list of [Package changes in AL2023](https://docs.aws.amazon.com/linux/al2023/release-notes/compare-packages.html) in the [AL2023 User Guide](https://docs.aws.amazon.com/linux/al2023/ug/). 

 

**Topics**
+ [Launch Amazon EC2 instance with AL2 AMI](#launch-ec2-instance)
+ [Find the latest AL2 AMI using Systems Manager](#find-latest-al2-using-systems-manager)
+ [Connect to an Amazon EC2 instance](#connect-to-amazon-linux-limits-ec2)
+ [AL2 AMI boot mode](#default-boot-mode-al2)
+ [Package repository](#package-repository)
+ [Using cloud-init on AL2](amazon-linux-cloud-init.md)
+ [Configure AL2 instances](configure-ec2-instance.md)
+ [User provided kernels](UserProvidedKernels.md)
+ [AL2 AMI release notifications](linux-ami-notifications.md)
+ [Configure the AL2 MATE desktop connection](amazon-linux-ami-mate.md)
+ [AL2 Tutorials](al2-tutorials.md)

## Launch Amazon EC2 instance with AL2 AMI
<a name="launch-ec2-instance"></a>

You can launch an Amazon EC2 instance with the AL2 AMI. For more information, see [Step 1: Launch an instance](https://docs.aws.amazon.com//AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance).

## Find the latest AL2 AMI using Systems Manager
<a name="find-latest-al2-using-systems-manager"></a>

Amazon EC2 provides AWS Systems Manager public parameters for public AMIs maintained by AWS that you can use when launching instances. For example, the EC2-provided parameter `/aws/service/ami-amazon-linux-latest/amzn2-ami-kernel-default-hvm-x86_64-gp2` is available in all Regions and always points to the latest version of the AL2 AMI in a given Region.

To find the latest AL2023 AMI using AWS Systems Manager, see [Get started with AL2023](https://docs.aws.amazon.com/linux/al2023/ug/get-started.html).

The Amazon EC2 AMI public parameters are available from the following path:

`/aws/service/ami-amazon-linux-latest`

You can view a list of all Amazon Linux AMIs in the current AWS Region by running the following AWS CLI command.

```
aws ssm get-parameters-by-path --path /aws/service/ami-amazon-linux-latest --query "Parameters[].Name"
```
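
If you need the concrete AMI ID behind one of these parameters, for example to pin it in a template, you can read the parameter value directly. The following is a sketch using the x86_64 gp2 default-kernel parameter shown above; substitute the parameter name for the variant you need.

```
aws ssm get-parameter \
    --name /aws/service/ami-amazon-linux-latest/amzn2-ami-kernel-default-hvm-x86_64-gp2 \
    --query "Parameter.Value" \
    --output text
```

The command prints a single AMI ID that is valid in the Region your AWS CLI is configured for.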

**To launch an instance using a public parameter**  
The following example uses the EC2-provided public parameter to launch an `m5.xlarge` instance using the latest AL2 AMI.

To specify the parameter in the command, use the following syntax: `resolve:ssm:public-parameter`, where `resolve:ssm` is the standard prefix and `public-parameter` is the path and name of the public parameter.

In this example, the `--count` and `--security-group` parameters are not included. For `--count`, the default is 1. If you have a default VPC and a default security group, they are used.

```
aws ec2 run-instances \
    --image-id resolve:ssm:/aws/service/ami-amazon-linux-latest/amzn2-ami-kernel-default-hvm-x86_64-gp2 \
    --instance-type m5.xlarge \
    --key-name MyKeyPair
```

For more information, see [Using public parameters](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-public-parameters.html) in the *AWS Systems Manager User Guide*.

**Understanding Amazon Linux 2 AMI names**  
Amazon Linux 2 AMI names use the following naming scheme:

`amzn2-ami-[minimal-][kernel-{5.10,default,4.14}]-hvm-{x86_64,aarch64}-{ebs,gp2}`
+ **Minimal** AMIs come with a minimized set of pre-installed packages to reduce image size.
+ **kernel-VERSION** determines the kernel version that is pre-installed on the respective AMI:
  + `kernel-5.10` selects Linux kernel version 5.10. *This is the recommended kernel version for AL2.*
  + `kernel-default` selects the recommended default kernel for AL2. It is an alias for `kernel-5.10`.
  + `kernel-4.14` selects Linux kernel version 4.14. *This is only provided for compatibility with older AMI releases. Do not use this version for new instance launches. Expect this AMI to become unsupported.*
  + A special set of AMI names exists without reference to a specific kernel. These AMIs are an alias for `kernel-4.14`. *These AMIs are only provided for compatibility with older AMI releases. Do not use this AMI name for new instance launches. Expect the kernel for these AMIs to be updated.*
+ **x86_64/aarch64** determines the CPU platform to run the AMI on. Select x86_64 for Intel and AMD based EC2 instances. Select aarch64 for EC2 Graviton instances.
+ **ebs/gp2** determines the EBS volume type used to serve the respective AMI. See [EBS Volume Types](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) for reference. *Always select gp2.*
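
As a sketch of how this naming scheme can be used in practice, the following query filters Amazon-owned AMIs by a name pattern and returns the most recent kernel-5.10, x86_64, gp2 image. The name pattern is built from the scheme above; adjust it for other variants.

```
aws ec2 describe-images \
    --owners amazon \
    --filters "Name=name,Values=amzn2-ami-kernel-5.10-hvm-*-x86_64-gp2" \
    --query "sort_by(Images, &CreationDate)[-1].{Name: Name, ImageId: ImageId}"
```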

## Connect to an Amazon EC2 instance
<a name="connect-to-amazon-linux-limits-ec2"></a>

There are several ways to connect to your Amazon Linux instance, including SSH, AWS Systems Manager Session Manager, and EC2 Instance Connect. For more information, see [Connect to your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html) in the *Amazon EC2 User Guide*.

**SSH users and sudo**  
Amazon Linux does not allow remote `root` secure shell (SSH) by default. Also, password authentication is disabled to prevent brute force attacks. To enable SSH logins to an Amazon Linux instance, you must provide your key pair to the instance at launch. You must also set the security group used to launch your instance to allow SSH access. By default, the only account that can log in remotely using SSH is `ec2-user`. This account also has **sudo** privileges. If you enable remote `root` login, be aware that it is less secure than relying on key pairs and a secondary user.
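
For example, an SSH connection from your local machine might look like the following; the key file name and public DNS name are placeholders for your own values.

```
# Restrict permissions on the private key, as required by ssh.
chmod 400 MyKeyPair.pem
# Log in as ec2-user with the key pair supplied at launch.
ssh -i MyKeyPair.pem ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com
```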

## AL2 AMI boot mode
<a name="default-boot-mode-al2"></a>

AL2 AMIs don't have a boot mode parameter set. Instances launched from AL2 AMIs follow the default boot mode value of the instance type. For more information, see [Boot modes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-boot.html) in the *Amazon EC2 User Guide*.
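
If you want to verify this for a specific AMI, you can inspect its boot mode parameter with the AWS CLI; for AL2 AMIs, the field is typically unset and the query returns `null`. The AMI ID below is a placeholder.

```
aws ec2 describe-images \
    --image-ids ami-0123456789abcdef0 \
    --query "Images[0].BootMode"
```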

## Package repository
<a name="package-repository"></a>

This information applies to AL2. For information about AL2023, see [Manage packages and operating system updates in AL2023](https://docs.aws.amazon.com/linux/al2023/ug/managing-repos-os-updates.html) in the *Amazon Linux 2023 User Guide*. 

AL2 and AL1 are designed to be used with online package repositories hosted in each Amazon EC2 AWS Region. The repositories are available in all Regions and are accessed using **yum** update tools. Hosting repositories in each Region enables us to deploy updates quickly and without any data transfer charges.

**Important**  
The last version of AL1 reached end of life (EOL) on December 31, 2023, and does not receive any security updates or bug fixes after January 1, 2024. For more information, see [Amazon Linux AMI end-of-life](https://aws.amazon.com//blogs/aws/update-on-amazon-linux-ami-end-of-life/).

If you don't need to preserve data or customizations for your instances, you can launch new instances using the current AL2 AMI. If you do need to preserve data or customizations for your instances, you can maintain those instances through the Amazon Linux package repositories. These repositories contain all the updated packages. You can choose to apply these updates to your running instances. Earlier versions of the AMI and update packages continue to be available for use, even as new versions are released.

**Note**  
To update and install packages without internet access on an Amazon EC2 instance, see [How can I update yum or install packages without internet access on my Amazon EC2 instances running AL1, AL2, or AL2023?](https://repost.aws/knowledge-center/ec2-al1-al2-update-yum-without-internet)

To install packages, use the following command:

```
[ec2-user ~]$ sudo yum install package
```

If you find that Amazon Linux doesn't contain an application that you need, you can install the application directly on your Amazon Linux instance. Amazon Linux uses RPM packages and **yum** for package management, and that is likely the most direct way to install new applications. First check whether the application is available in the central Amazon Linux repository, because many applications are available there. From there, you can add these applications to your Amazon Linux instance.

To upload your applications onto a running Amazon Linux instance, use **scp** or **sftp** and then configure the application by logging in to your instance. Your applications can also be uploaded during the instance launch by using the **PACKAGE_SETUP** action from the built-in cloud-init package. For more information, see [Using cloud-init on AL2](amazon-linux-cloud-init.md). 

### Security updates
<a name="security-updates"></a>

Security updates are provided using the package repositories. Both security updates and updated AMI security alerts are published in the [Amazon Linux Security Center](https://alas.aws.amazon.com). For more information about AWS security policies or to report a security problem, see [AWS Cloud Security](https://aws.amazon.com/security/).

AL1 and AL2 are configured to download and install critical or important security updates at launch time. Kernel updates are not included in this configuration.

In AL2023, this configuration has changed compared to AL1 and AL2. For more information about security updates for AL2023, see [Security updates and features](https://docs.aws.amazon.com/linux/al2023/ug/security-features.html) in the *Amazon Linux 2023 User Guide*.

We recommend that you make the necessary updates for your use case after launch. For example, you might want to apply all updates (not just security updates) at launch, or evaluate each update and apply only the ones applicable to your system. This is controlled using the following cloud-init setting: `repo_upgrade`. The following snippet of cloud-init configuration shows how you can change the settings in the user data text you pass to your instance initialization:

```
#cloud-config
repo_upgrade: security
```

 The possible values for `repo_upgrade` are as follows: 

`critical`  
Apply outstanding critical security updates.

`important`  
Apply outstanding critical and important security updates.

`medium`  
Apply outstanding critical, important, and medium security updates.

`low`  
Apply all outstanding security updates, including low-severity security updates.

`security`  
Apply outstanding critical or important updates that Amazon marks as security updates.

`bugfix`  
Apply updates that Amazon marks as bug fixes. Bug fixes are a larger set of updates, which include security updates and fixes for various other minor bugs.

`all`  
Apply all applicable available updates, regardless of their classification.

`none`  
Don't apply any updates to the instance on start up.

**Note**  
Amazon Linux does not mark any updates as `bugfix`. To apply non-security related updates from Amazon Linux use `repo_upgrade: all`.

The default setting for `repo_upgrade` is `security`. That is, if you don't specify a different value in your user data, by default, Amazon Linux performs the security upgrades at launch for any packages installed at that time. Amazon Linux also notifies you of any updates to the installed packages by listing the number of available updates upon login using the `/etc/motd` file. To install these updates, you need to run **sudo yum upgrade** on the instance. 
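
To see which security updates are outstanding before deciding how to apply them, you can query the update metadata with **yum**. This is a sketch; **yum updateinfo** lists the available advisories on an AL2 instance, and the `--security` flag restricts an update run to security fixes.

```
[ec2-user ~]$ yum updateinfo list security
[ec2-user ~]$ sudo yum update --security
```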

### Repository configuration
<a name="repository-config"></a>

For AL1 and AL2, AMIs are a snapshot of the packages available at the time the AMI was created, with the exception of security updates. Any packages not on the original AMI, but installed at runtime, will be the latest version available. To get the latest packages available for AL2, run **yum update -y**.

**Troubleshooting tip**  
If you get a `cannot allocate memory` error running **yum update** on nano instance types, such as `t3.nano`, you might need to allocate swap space to enable the update.
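
As a sketch of one way to work around this, the following commands create a temporary 1 GiB swap file, run the update, and then remove the swap file. The `/swapfile` path and size are illustrative.

```
[ec2-user ~]$ sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
[ec2-user ~]$ sudo chmod 600 /swapfile
[ec2-user ~]$ sudo mkswap /swapfile
[ec2-user ~]$ sudo swapon /swapfile
[ec2-user ~]$ sudo yum update
[ec2-user ~]$ sudo swapoff /swapfile && sudo rm /swapfile
```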

For AL2023, the repository configuration has changed compared to AL1 and AL2. For more information about the AL2023 repository, see [Managing packages and operating system updates](https://docs.aws.amazon.com/linux/al2023/ug/managing-repos-os-updates.html).

Amazon Linux versions before AL2023 were configured to deliver a continuous flow of updates that roll from one minor version of Amazon Linux to the next, also called *rolling releases*. As a best practice, we recommend that you update your AMI to the latest available AMI rather than launching old AMIs and applying updates.

In-place upgrades are not supported between major Amazon Linux versions, such as from AL1 to AL2 or from AL2 to AL2023. For more information, see [Amazon Linux availability](what-is-amazon-linux.md#amazon-linux-availability).

# Using cloud-init on AL2
<a name="amazon-linux-cloud-init"></a>

The cloud-init package is an open-source application built by Canonical that is used to bootstrap Linux images in a cloud computing environment, such as Amazon EC2. Amazon Linux contains a customized version of cloud-init. This allows you to specify actions that should happen to your instance at boot time. You can pass desired actions to cloud-init through the user data fields when launching an instance. This means you can use common AMIs for many use cases and configure them dynamically at startup. Amazon Linux also uses cloud-init to perform initial configuration of the ec2-user account.

 For more information, see the [cloud-init documentation](http://cloudinit.readthedocs.org/en/latest/). 

Amazon Linux uses the cloud-init actions found in `/etc/cloud/cloud.cfg.d` and `/etc/cloud/cloud.cfg`. You can create your own cloud-init action files in `/etc/cloud/cloud.cfg.d`. All files in this directory are read by cloud-init. They are read in lexical order, and later files overwrite values in earlier files.

The cloud-init package performs these (and other) common configuration tasks for instances at boot:
+ Set the default locale.
+ Set the hostname.
+ Parse and handle user data.
+ Generate host private SSH keys.
+ Add a user's public SSH keys to `.ssh/authorized_keys` for easy login and administration.
+ Prepare the repositories for package management.
+ Handle package actions defined in user data.
+ Run user scripts found in user data.
+ Mount instance store volumes, if applicable.
  + By default, the `ephemeral0` instance store volume is mounted at `/media/ephemeral0` if it is present and contains a valid file system; otherwise, it is not mounted.
  + By default, any swap volumes associated with the instance are mounted (only for `m1.small` and `c1.medium` instance types).
  + You can override the default instance store volume mount with the following cloud-init directive:

    ```
    #cloud-config
    mounts:
    - [ ephemeral0 ]
    ```

    For more control over mounts, see [Mounts](http://cloudinit.readthedocs.io/en/latest/topics/modules.html#mounts) in the cloud-init documentation.
  + Instance store volumes that support TRIM are not formatted when an instance launches, so you must partition and format them before you can mount them. For more information, see [Instance store volume TRIM support](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html#InstanceStoreTrimSupport). You can use the `disk_setup` module to partition and format your instance store volumes at boot. For more information, see [Disk Setup](http://cloudinit.readthedocs.io/en/latest/topics/modules.html#disk-setup) in the cloud-init documentation.
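
As a hypothetical sketch of those two modules together, the following cloud-config partitions a disk, creates a file system, and mounts it. The device name `/dev/nvme1n1` and the `/data` mount point are placeholders; instance store device names vary by instance type.

```
#cloud-config
disk_setup:
  /dev/nvme1n1:
    table_type: gpt
    layout: true
    overwrite: false
fs_setup:
  - device: /dev/nvme1n1
    partition: auto
    filesystem: ext4
mounts:
  - [ /dev/nvme1n1p1, /data ]
```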

## Supported user data formats
<a name="supported-user-data-formats"></a>

The cloud-init package supports user data handling of a variety of formats:
+ Gzip
  + If user data is gzip compressed, cloud-init decompresses the data and handles it appropriately.
+ MIME multipart
  + Using a MIME multipart file, you can specify more than one type of data. For example, you could specify both a user data script and a cloud config type. Each part of the multipart file can be handled by cloud-init if it is one of the supported formats.
+ Base64 decoding
  +  If user data is base64-encoded, cloud-init determines if it can understand the decoded data as one of the supported types. If it understands the decoded data, it decodes the data and handles it appropriately. If not, it returns the base64 data intact.
+ User data script
  + Begins with `#!` or `Content-Type: text/x-shellscript`.
  + The script is run by `/etc/init.d/cloud-init-user-scripts` during the first boot cycle. This occurs late in the boot process (after the initial configuration actions are performed).
+ Include file
  + Begins with `#include` or `Content-Type: text/x-include-url`.
  + This content is an include file. The file contains a list of URLs, one per line. Each of the URLs is read, and their content passed through this same set of rules. The content read from the URL can be gzip compressed, MIME-multi-part, or plaintext.
+ Cloud config data
  + Begins with `#cloud-config` or `Content-Type: text/cloud-config`.
  + This content is cloud config data.
+ Upstart job (not supported on AL2)
  + Begins with `#upstart-job` or `Content-Type: text/upstart-job`.
  + This content is stored in a file in `/etc/init`, and upstart consumes the content as it does with other upstart jobs.
+ Cloud boothook
  + Begins with `#cloud-boothook` or `Content-Type: text/cloud-boothook`.
  + This content is boothook data. It is stored in a file under `/var/lib/cloud` and then runs immediately.
  +  This is the earliest *hook* available. There is no mechanism provided for running it only one time. The boothook must take care of this itself. It is provided with the instance ID in the environment variable `INSTANCE_ID`. Use this variable to provide a once-per-instance set of boothook data.
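
For example, a minimal user data script, one of the formats listed above, might look like the following. The package choice is illustrative; the script runs once, late in the first boot cycle.

```
#!/bin/bash
# Hypothetical user data script: install and start the Apache web server.
yum update -y
yum install -y httpd
systemctl enable --now httpd
```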

# Configure AL2 instances
<a name="configure-ec2-instance"></a>

After you have successfully launched and logged into your AL2 instance, you can make changes to it. There are many different ways you can configure an instance to meet the needs of a specific application. The following are some common tasks to help get you started.

**Topics**
+ [Common configuration scenarios](#instance-configuration-scenarios)
+ [Manage software on your AL2 instance](managing-software.md)
+ [Processor state control for your Amazon EC2 AL2 instance](processor_state_control.md)
+ [I/O scheduler for AL2](io-scheduler.md)
+ [Change the hostname of your AL2 instance](set-hostname.md)
+ [Set up dynamic DNS on your AL2 instance](dynamic-dns.md)
+ [Configure your network interface using ec2-net-utils for AL2](ec2-net-utils.md)

## Common configuration scenarios
<a name="instance-configuration-scenarios"></a>

The base distribution of Amazon Linux contains the software packages and utilities that are required for basic server operations. However, many more software packages are available in various software repositories, and even more packages are available for you to build from source code. For more information on installing and building software from these locations, see [Manage software on your AL2 instance](managing-software.md).

Amazon Linux instances come pre-configured with an `ec2-user` account, but you may want to add other users that do not have super-user privileges. For more information on adding and removing users, see [Manage users on your Linux instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html) in the *Amazon EC2 User Guide*.

If you have your own network with a domain name registered to it, you can change the hostname of an instance to identify itself as part of that domain. You can also change the system prompt to show a more meaningful name without changing the hostname settings. For more information, see [Change the hostname of your AL2 instance](set-hostname.md). You can configure an instance to use a dynamic DNS service provider. For more information, see [Set up dynamic DNS on your AL2 instance](dynamic-dns.md).

When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: cloud-init directives and shell scripts. For more information, see [Run commands on your Linux instance at launch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) in the *Amazon EC2 User Guide*.

# Manage software on your AL2 instance
<a name="managing-software"></a>

The base distribution of Amazon Linux contains the software packages and utilities that are required for basic server operations.

This information applies to AL2. For information about AL2023, see [Manage packages and operating system updates in AL2023](https://docs.aws.amazon.com/linux/al2023/ug/managing-repos-os-updates.html) in the *Amazon Linux 2023 User Guide*.

It is important to keep software up to date. Many packages in a Linux distribution are updated frequently to fix bugs, add features, and protect against security exploits. For more information, see [Update instance software on your AL2 instance](install-updates.md).

By default, AL2 instances launch with the following repositories enabled:
+ `amzn2-core`
+ `amzn2extra-docker`

While there are many packages available in these repositories that are updated by AWS, there might be a package that you want to install that is contained in another repository. For more information, see [Add repositories on an AL2 instance](add-repositories.md). For help finding and installing packages in enabled repositories, see [Find and install software packages on an AL2 instance](find-install-software.md).

Not all software is available in software packages stored in repositories; some software must be compiled on an instance from its source code. For more information, see [Prepare to compile software on an AL2 instance](compile-software.md).

AL2 instances manage their software using the yum package manager. The yum package manager can install, remove, and update software, as well as manage all of the dependencies for each package.

**Topics**
+ [Update instance software on your AL2 instance](install-updates.md)
+ [Add repositories on an AL2 instance](add-repositories.md)
+ [Find and install software packages on an AL2 instance](find-install-software.md)
+ [Prepare to compile software on an AL2 instance](compile-software.md)

# Update instance software on your AL2 instance
<a name="install-updates"></a>

It is important to keep software up to date. Packages in a Linux distribution are updated frequently to fix bugs, add features, and protect against security exploits. When you first launch and connect to an Amazon Linux instance, you might see a message asking you to update software packages for security purposes. This section shows how to update an entire system, or just a single package.

This information applies to AL2. For information about AL2023, see [Manage packages and operating system updates in AL2023](https://docs.aws.amazon.com/linux/al2023/ug/managing-repos-os-updates.html) in the *Amazon Linux 2023 User Guide*.

For information about changes and updates to AL2, see [AL2 release notes](https://docs.aws.amazon.com/AL2/latest/relnotes/relnotes-al2.html).

For information about changes and updates to AL2023, see [AL2023 release notes](https://docs.aws.amazon.com/linux/al2023/release-notes/relnotes.html).

**Important**  
If you launched an EC2 instance that uses an Amazon Linux 2 AMI into an IPv6-only subnet, you must connect to the instance and run `sudo amazon-linux-https disable`. This lets your AL2 instance connect to the yum repository in S3 over IPv6 using the http patch service.

**To update all packages on an AL2 instance**

1. (Optional) Start a **screen** session in your shell window. Sometimes you might experience a network interruption that can disconnect the SSH connection to your instance. If this happens during a long software update, it can leave the instance in a recoverable, although confused state. A **screen** session allows you to continue running the update even if your connection is interrupted, and you can reconnect to the session later without problems.

   1. Execute the **screen** command to begin the session.

      ```
      [ec2-user ~]$ screen
      ```

   1. If your session is disconnected, log back into your instance and list the available screens.

      ```
      [ec2-user ~]$ screen -ls
      There is a screen on:
      	17793.pts-0.ip-12-34-56-78	(Detached)
      1 Socket in /var/run/screen/S-ec2-user.
      ```

   1. Reconnect to the screen using the **screen -r** command and the process ID from the previous command.

      ```
      [ec2-user ~]$ screen -r 17793
      ```

   1. When you are finished using **screen**, use the **exit** command to close the session.

      ```
      [ec2-user ~]$ exit
      [screen is terminating]
      ```

1. Run the **yum update** command. Optionally, you can add the `--security` flag to apply only security updates.

   ```
   [ec2-user ~]$ sudo yum update
   ```

1. Review the packages listed, enter **y**, and press Enter to accept the updates. Updating all of the packages on a system can take several minutes. The **yum** output shows the status of the update while it is running.

1. (Optional) [Reboot your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-reboot.html) to ensure that you are using the latest packages and libraries from your update; kernel updates are not loaded until a reboot occurs. Updates to any `glibc` libraries should also be followed by a reboot. For updates to packages that control services, it might be sufficient to restart the services to pick up the updates, but a system reboot ensures that all previous package and library updates are complete.

**To update a single package on an AL2 instance**

Use this procedure to update a single package (and its dependencies) and not the entire system.

1. Run the **yum update** command with the name of the package to update.

   ```
   [ec2-user ~]$ sudo yum update openssl
   ```

1. Review the package information listed, enter **y**, and press Enter to accept the update or updates. Sometimes there will be more than one package listed if there are package dependencies that must be resolved. The **yum** output shows the status of the update while it is running.

1. (Optional) [Reboot your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-reboot.html) to ensure that you are using the latest packages and libraries from your update; kernel updates are not loaded until a reboot occurs. Updates to any `glibc` libraries should also be followed by a reboot. For updates to packages that control services, it might be sufficient to restart the services to pick up the updates, but a system reboot ensures that all previous package and library updates are complete.

# Add repositories on an AL2 instance
<a name="add-repositories"></a>

This information applies to AL2. For information about AL2023, see [Deterministic upgrades through versioned repositories on AL2023](https://docs.aws.amazon.com/linux/al2023/ug/deterministic-upgrades.html) in the *Amazon Linux 2023 User Guide*.

By default, AL2 instances launch with the following repositories enabled:
+ `amzn2-core`
+ `amzn2extra-docker`

While there are many packages available in these repositories that are updated by Amazon Web Services, there might be a package that you want to install that is contained in another repository.

To install a package from a different repository with **yum**, you need to add the repository information to the `/etc/yum.conf` file or to its own `repository.repo` file in the `/etc/yum.repos.d` directory. You can do this manually, but most yum repositories provide their own `repository.repo` file at their repository URL.
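
If you add the repository information manually, a `.repo` file is a small INI-style fragment. The following is a hypothetical example; the repository ID, name, URLs, and GPG key location are placeholders for the values your repository provides.

```
[example-repo]
name=Example repository
baseurl=https://www.example.com/repo/
enabled=1
gpgcheck=1
gpgkey=https://www.example.com/repo/RPM-GPG-KEY-example
```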

**To determine what yum repositories are already installed**  
List the installed yum repositories with the following command:

```
[ec2-user ~]$ yum repolist all
```

The resulting output lists the installed repositories and reports the status of each. Enabled repositories display the number of packages they contain.

**To add a yum repository to /etc/yum.repos.d**

1. Find the location of the `.repo` file. This will vary depending on the repository you are adding. In this example, the `.repo` file is at `https://www.example.com/repository.repo`.

1. Add the repository with the **yum-config-manager** command.

   ```
   [ec2-user ~]$ sudo yum-config-manager --add-repo https://www.example.com/repository.repo
   Loaded plugins: priorities, update-motd, upgrade-helper
   adding repo from: https://www.example.com/repository.repo
   grabbing file https://www.example.com/repository.repo to /etc/yum.repos.d/repository.repo
   repository.repo                                      | 4.0 kB     00:00
   repo saved to /etc/yum.repos.d/repository.repo
   ```

After you install a repository, you must enable it as described in the next procedure.

**To enable a yum repository in /etc/yum.repos.d**  
Use the **yum-config-manager** command with the `--enable repository` flag. The following command enables the Extra Packages for Enterprise Linux (EPEL) repository from the Fedora project. By default, this repository is present in `/etc/yum.repos.d` on Amazon Linux AMI instances, but it is not enabled.

```
[ec2-user ~]$ sudo yum-config-manager --enable epel
```

For more information, and to download the latest version of this package, see [https://fedoraproject.org/wiki/EPEL](https://fedoraproject.org/wiki/EPEL).

# Find and install software packages on an AL2 instance
<a name="find-install-software"></a>

You can use a package management tool to find and install software packages. In Amazon Linux 2, the default software package management tool is YUM. In AL2023, the default software package management tool is DNF. For more information, see [Package management tool](https://docs.aws.amazon.com/linux/al2023/ug/package-management.html) in the *Amazon Linux 2023 User Guide*.

## Find software packages on an AL2 instance
<a name="find-software"></a>

You can use the **yum search** command to search the descriptions of packages that are available in your configured repositories. This is especially helpful if you don't know the exact name of the package you want to install. Append the search keyword to the command; for multiple-word searches, wrap the search query in quotation marks.

```
[ec2-user ~]$ yum search "find"
```

The following is example output.

```
Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
============================== N/S matched: find ===============================
findutils.x86_64 : The GNU versions of find utilities (find and xargs)
gedit-plugin-findinfiles.x86_64 : gedit findinfiles plugin
ocaml-findlib-devel.x86_64 : Development files for ocaml-findlib
perl-File-Find-Rule.noarch : Perl module implementing an alternative interface to File::Find
robotfindskitten.x86_64 : A game/zen simulation. You are robot. Your job is to find kitten.
mlocate.x86_64 : An utility for finding files by name
ocaml-findlib.x86_64 : Objective CAML package manager and build helper
perl-Devel-Cycle.noarch : Find memory cycles in objects
perl-Devel-EnforceEncapsulation.noarch : Find access violations to blessed objects
perl-File-Find-Rule-Perl.noarch : Common rules for searching for Perl things
perl-File-HomeDir.noarch : Find your home and other directories on any platform
perl-IPC-Cmd.noarch : Finding and running system commands made easy
perl-Perl-MinimumVersion.noarch : Find a minimum required version of perl for Perl code
texlive-xesearch.noarch : A string finder for XeTeX
valgrind.x86_64 : Tool for finding memory management bugs in programs
valgrind.i686 : Tool for finding memory management bugs in programs
```

Multiple word search queries in quotation marks only return results that match the exact query. If you don't see the expected package, simplify your search to one keyword and then scan the results. You can also try keyword synonyms to broaden your search.

For more information about packages for AL2, see the following:
+ [AL2 Extras Library](al2-extras.md)
+ [Package repository](ec2.md#package-repository)
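Once you have identified a likely package name, you can inspect its version, source repository, and full description before installing it. A quick sketch, using `findutils` from the search results above as an example package:

```
[ec2-user ~]$ yum info findutils
```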

## Install software packages on an AL2 instance
<a name="install-software"></a>

In AL2, the yum package management tool searches all of your enabled repositories for different software packages and handles any dependencies in the software installation process. For information about installing software packages in AL2023, see [Managing packages and operating system updates](https://docs.aws.amazon.com/linux/al2023/ug/managing-repos-os-updates.html) in the *Amazon Linux 2023 User Guide*.

**To install a package from a repository**  
Use the **yum install *package*** command, replacing *package* with the name of the software to install. For example, to install the **links** text-based web browser, enter the following command.

```
[ec2-user ~]$ sudo yum install links
```

**To install RPM package files that you have downloaded**  
You can also use **yum install** to install RPM package files that you have downloaded from the internet. To do this, append the path name of an RPM file to the installation command instead of a repository package name.

```
[ec2-user ~]$ sudo yum install my-package.rpm
```

**To list installed packages**  
To view a list of installed packages on your instance, use the following command.

```
[ec2-user ~]$ yum list installed
```
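To check for one specific package rather than scanning the full list, you can query the RPM database directly. A sketch, using the `links` package from the earlier example:

```
[ec2-user ~]$ rpm -q links
```

The command prints the installed package version, or reports that the package is not installed.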

# Prepare to compile software on an AL2 instance
<a name="compile-software"></a>

Not all of the open-source software available on the internet is pre-compiled and offered for download from a package repository. You might eventually discover a software package that you need to compile yourself, from its source code. For your system to be able to compile software in AL2 and Amazon Linux, you need to install several development tools, such as **make**, **gcc**, and **autoconf**.

Because software compilation is not a task that every Amazon EC2 instance requires, these tools are not installed by default, but they are available in a package group called "Development Tools" that is easily added to an instance with the **yum groupinstall** command.

```
[ec2-user ~]$ sudo yum groupinstall "Development Tools"
```

Software source code packages are often available for download (from websites such as [https://github.com/](https://github.com/) and [https://sourceforge.net/](https://sourceforge.net/)) as a compressed archive file, called a tarball. These tarballs usually have the `.tar.gz` file extension. You can decompress these archives with the **tar** command.

```
[ec2-user ~]$ tar -xzf software.tar.gz
```

After you have decompressed and unarchived the source code package, you should look for a `README` or `INSTALL` file in the source code directory that can provide you with further instructions for compiling and installing the source code. 
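Many autoconf-based projects follow the same three-step build pattern. This is a sketch only — always follow the project's own `README` or `INSTALL` instructions first, and note that the directory name `software-1.0` is an example:

```
[ec2-user ~]$ cd software-1.0        # directory created by extracting the tarball
[ec2-user ~]$ ./configure            # detect the toolchain and generate Makefiles
[ec2-user ~]$ make                   # compile the software
[ec2-user ~]$ sudo make install      # install, typically under /usr/local
```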

**To retrieve source code for Amazon Linux packages**  
Amazon Web Services provides the source code for maintained packages. You can download the source code for any installed packages with the **yumdownloader --source** command.

Run the **yumdownloader --source *package*** command to download the source code for *package*. For example, to download the source code for the `htop` package, enter the following command.

```
[ec2-user ~]$ yumdownloader --source htop

Loaded plugins: priorities, update-motd, upgrade-helper
Enabling amzn-updates-source repository
Enabling amzn-main-source repository
amzn-main-source                                                                                              | 1.9 kB  00:00:00     
amzn-updates-source                                                                                           | 1.9 kB  00:00:00     
(1/2): amzn-updates-source/latest/primary_db                                                                  |  52 kB  00:00:00     
(2/2): amzn-main-source/latest/primary_db                                                                     | 734 kB  00:00:00     
htop-1.0.1-2.3.amzn1.src.rpm
```

The source RPM is saved in the directory from which you ran the command.
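If you want to inspect the source files without setting up a full RPM build environment, you can unpack the source RPM with standard tools. A sketch for the example `htop` source RPM shown above; the exact file name depends on the package version:

```
[ec2-user ~]$ rpm2cpio htop-1.0.1-2.3.amzn1.src.rpm | cpio -idmv
```

This extracts the contents of the source RPM into the current directory.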

# Processor state control for your Amazon EC2 AL2 instance
<a name="processor_state_control"></a>

C-states control the sleep levels that a core can enter when it is idle. C-states are numbered starting with C0 (the shallowest state where the core is totally awake and executing instructions) and go to C6 (the deepest idle state where a core is powered off).

P-states control the desired performance (in CPU frequency) from a core. P-states are numbered starting from P0 (the highest performance setting where the core is allowed to use Intel Turbo Boost Technology to increase frequency if possible), and they go from P1 (the P-state that requests the maximum baseline frequency) to P15 (the lowest possible frequency).

You might want to change the C-state or P-state settings to increase processor performance consistency, reduce latency, or tune your instance for a specific workload. The default C-state and P-state settings provide maximum performance, which is optimal for most workloads. However, if your application would benefit from reduced latency at the cost of higher single- or dual-core frequencies, or from consistent performance at lower frequencies as opposed to bursty Turbo Boost frequencies, consider experimenting with the C-state or P-state settings that are available to these instances.

For information about Amazon EC2 instance types that provide the ability for the operating system to control processor C-states and P-states, see [Processor state control for your Amazon EC2 instance](https://docs.aws.amazon.com//AWSEC2/latest/UserGuide/processor_state_control.html) in the *Amazon EC2 User Guide*.

The following sections describe the different processor state configurations and how to monitor the effects of your configuration. These procedures were written for, and apply to, Amazon Linux; however, they might also work for other Linux distributions with Linux kernel version 3.9 or newer.

**Note**  
The examples on this page use the following:  
The **turbostat** utility to display processor frequency and C-state information. The **turbostat** utility is available on Amazon Linux by default.
The **stress** command to simulate a workload. To install **stress**, first enable the EPEL repository by running **sudo amazon-linux-extras install epel**, and then run **sudo yum install -y stress**.
If the output does not display the C-state information, include the **--debug** option in the command (**sudo turbostat --debug stress *<options>***).

**Topics**
+ [

## Highest performance with maximum Turbo Boost frequency
](#turbo-perf)
+ [

## High performance and low latency by limiting deeper C-states
](#c-states)
+ [

## Baseline performance with the lowest variability
](#baseline-perf)

## Highest performance with maximum Turbo Boost frequency
<a name="turbo-perf"></a>

This is the default processor state control configuration for the Amazon Linux AMI, and it is recommended for most workloads. This configuration provides the highest performance with lower variability. Allowing inactive cores to enter deeper sleep states provides the thermal headroom required for single or dual core processes to reach their maximum Turbo Boost potential.

The following example shows a `c4.8xlarge` instance with two cores actively performing work reaching their maximum processor Turbo Boost frequency.

```
[ec2-user ~]$ sudo turbostat stress -c 2 -t 10
stress: info: [30680] dispatching hogs: 2 cpu, 0 io, 0 vm, 0 hdd
stress: info: [30680] successful run completed in 10s
pk cor CPU    %c0  GHz  TSC SMI    %c1    %c3    %c6    %c7   %pc2   %pc3   %pc6   %pc7  Pkg_W RAM_W PKG_% RAM_%
             5.54 3.44 2.90   0   9.18   0.00  85.28   0.00   0.00   0.00   0.00   0.00  94.04 32.70 54.18  0.00
 0   0   0   0.12 3.26 2.90   0   3.61   0.00  96.27   0.00   0.00   0.00   0.00   0.00  48.12 18.88 26.02  0.00
 0   0  18   0.12 3.26 2.90   0   3.61
 0   1   1   0.12 3.26 2.90   0   4.11   0.00  95.77   0.00
 0   1  19   0.13 3.27 2.90   0   4.11
 0   2   2   0.13 3.28 2.90   0   4.45   0.00  95.42   0.00
 0   2  20   0.11 3.27 2.90   0   4.47
 0   3   3   0.05 3.42 2.90   0  99.91   0.00   0.05   0.00
 0   3  21  97.84 3.45 2.90   0   2.11
...
 1   1  10   0.06 3.33 2.90   0  99.88   0.01   0.06   0.00
 1   1  28  97.61 3.44 2.90   0   2.32
...
10.002556 sec
```

In this example, vCPUs 21 and 28 are running at their maximum Turbo Boost frequency because the other cores have entered the `C6` sleep state to save power and provide both power and thermal headroom for the working cores. vCPUs 3 and 10 (each sharing a processor core with vCPUs 21 and 28) are in the `C1` state, waiting for instruction.

In the following example, all 18 cores are actively performing work, so there is no headroom for maximum Turbo Boost, but they are all running at the "all core Turbo Boost" speed of 3.2 GHz.

```
[ec2-user ~]$ sudo turbostat stress -c 36 -t 10
stress: info: [30685] dispatching hogs: 36 cpu, 0 io, 0 vm, 0 hdd
stress: info: [30685] successful run completed in 10s
pk cor CPU    %c0  GHz  TSC SMI    %c1    %c3    %c6    %c7   %pc2   %pc3   %pc6   %pc7  Pkg_W RAM_W PKG_% RAM_%
            99.27 3.20 2.90   0   0.26   0.00   0.47   0.00   0.00   0.00   0.00   0.00 228.59 31.33 199.26  0.00
 0   0   0  99.08 3.20 2.90   0   0.27   0.01   0.64   0.00   0.00   0.00   0.00   0.00 114.69 18.55 99.32  0.00
 0   0  18  98.74 3.20 2.90   0   0.62
 0   1   1  99.14 3.20 2.90   0   0.09   0.00   0.76   0.00
 0   1  19  98.75 3.20 2.90   0   0.49
 0   2   2  99.07 3.20 2.90   0   0.10   0.02   0.81   0.00
 0   2  20  98.73 3.20 2.90   0   0.44
 0   3   3  99.02 3.20 2.90   0   0.24   0.00   0.74   0.00
 0   3  21  99.13 3.20 2.90   0   0.13
 0   4   4  99.26 3.20 2.90   0   0.09   0.00   0.65   0.00
 0   4  22  98.68 3.20 2.90   0   0.67
 0   5   5  99.19 3.20 2.90   0   0.08   0.00   0.73   0.00
 0   5  23  98.58 3.20 2.90   0   0.69
 0   6   6  99.01 3.20 2.90   0   0.11   0.00   0.89   0.00
 0   6  24  98.72 3.20 2.90   0   0.39
...
```

## High performance and low latency by limiting deeper C-states
<a name="c-states"></a>

C-states control the sleep levels that a core may enter when it is inactive. You may want to control C-states to tune your system for latency versus performance. Putting cores to sleep takes time, and although a sleeping core allows more headroom for another core to boost to a higher frequency, it takes time for that sleeping core to wake back up and perform work. For example, if a core that is assigned to handle network packet interrupts is asleep, there may be a delay in servicing that interrupt. You can configure the system to not use deeper C-states, which reduces the processor reaction latency, but that in turn also reduces the headroom available to other cores for Turbo Boost.

A common scenario for disabling deeper sleep states is a Redis database application, which stores the database in system memory for the fastest possible query response time.
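Before limiting C-states, you can check which idle states the kernel currently exposes through the cpuidle sysfs interface. A quick diagnostic sketch; the exact states listed vary by instance type and kernel:

```
[ec2-user ~]$ cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
```

A typical listing includes entries such as `POLL`, `C1`, and `C6`.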

**To limit deeper sleep states on AL2**

1. Open the `/etc/default/grub` file with your editor of choice.

   ```
   [ec2-user ~]$ sudo vim /etc/default/grub
   ```

1. Edit the `GRUB_CMDLINE_LINUX_DEFAULT` line and add the `intel_idle.max_cstate=1` and `processor.max_cstate=1` options to set `C1` as the deepest C-state for idle cores.

   ```
   GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 intel_idle.max_cstate=1 processor.max_cstate=1"
   GRUB_TIMEOUT=0
   ```

   The `intel_idle.max_cstate=1` option configures the C-state limit for Intel-based instances, and the `processor.max_cstate=1` option configures the C-state limit for AMD-based instances. It is safe to add both options to your configuration. This allows a single configuration to set the desired behavior on both Intel and AMD.

1. Save the file and exit your editor.

1.  Run the following command to rebuild the boot configuration.

   ```
   [ec2-user ~]$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
   ```

1. Reboot your instance to enable the new kernel option.

   ```
   [ec2-user ~]$ sudo reboot
   ```

**To limit deeper sleep states on Amazon Linux AMI**

1. Open the `/boot/grub/grub.conf` file with your editor of choice.

   ```
   [ec2-user ~]$ sudo vim /boot/grub/grub.conf
   ```

1. Edit the `kernel` line of the first entry and add the `intel_idle.max_cstate=1` and `processor.max_cstate=1` options to set `C1` as the deepest C-state for idle cores.

   ```
   # created by imagebuilder
   default=0
   timeout=1
   hiddenmenu
   
   title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64)
   root (hd0,0)
   kernel /boot/vmlinuz-3.14.26-24.46.amzn1.x86_64 root=LABEL=/ console=ttyS0 intel_idle.max_cstate=1  processor.max_cstate=1
   initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img
   ```

   The `intel_idle.max_cstate=1` option configures the C-state limit for Intel-based instances, and the `processor.max_cstate=1` option configures the C-state limit for AMD-based instances. It is safe to add both options to your configuration. This allows a single configuration to set the desired behavior on both Intel and AMD.

1. Save the file and exit your editor.

1. Reboot your instance to enable the new kernel option.

   ```
   [ec2-user ~]$ sudo reboot
   ```

The following example shows a `c4.8xlarge` instance with two cores actively performing work at the "all core Turbo Boost" core frequency.

```
[ec2-user ~]$ sudo turbostat stress -c 2 -t 10
stress: info: [5322] dispatching hogs: 2 cpu, 0 io, 0 vm, 0 hdd
stress: info: [5322] successful run completed in 10s
pk cor CPU    %c0  GHz  TSC SMI    %c1    %c3    %c6    %c7   %pc2   %pc3   %pc6   %pc7  Pkg_W RAM_W PKG_% RAM_%
             5.56 3.20 2.90   0  94.44   0.00   0.00   0.00   0.00   0.00   0.00   0.00 131.90 31.11 199.47  0.00
 0   0   0   0.03 2.08 2.90   0  99.97   0.00   0.00   0.00   0.00   0.00   0.00   0.00  67.23 17.11 99.76  0.00
 0   0  18   0.01 1.93 2.90   0  99.99
 0   1   1   0.02 1.96 2.90   0  99.98   0.00   0.00   0.00
 0   1  19  99.70 3.20 2.90   0   0.30
...
 1   1  10   0.02 1.97 2.90   0  99.98   0.00   0.00   0.00
 1   1  28  99.67 3.20 2.90   0   0.33
 1   2  11   0.04 2.63 2.90   0  99.96   0.00   0.00   0.00
 1   2  29   0.02 2.11 2.90   0  99.98
...
```

In this example, the cores for vCPUs 19 and 28 are running at 3.2 GHz, and the other cores are in the `C1` C-state, awaiting instruction. Although the working cores are not reaching their maximum Turbo Boost frequency, the inactive cores will be much faster to respond to new requests than they would be in the deeper `C6` C-state.

## Baseline performance with the lowest variability
<a name="baseline-perf"></a>

You can reduce the variability of processor frequency with P-states. P-states control the desired performance (in CPU frequency) from a core. Most workloads perform better in P0, which requests Turbo Boost. But you may want to tune your system for consistent performance rather than bursty performance that can happen when Turbo Boost frequencies are enabled. 

Intel Advanced Vector Extensions (AVX or AVX2) workloads can perform well at lower frequencies, and AVX instructions can use more power. Running the processor at a lower frequency, by disabling Turbo Boost, can reduce the amount of power used and keep the speed more consistent. For more information about optimizing your instance configuration and workload for AVX, see the [Intel website ](https://www.intel.com/content/www/us/en/developer/articles/technical/the-intel-advanced-vector-extensions-512-feature-on-intel-xeon-scalable.html?wapkw=advanced%20vector%20extensions).

CPU idle drivers control C-states. Newer CPU generations require updated CPU idle drivers that correspond to the kernel version as follows:
+ Linux kernel versions 6.1 and higher – Supports Intel Granite Rapids (for example, R8i)
+ Linux kernel versions 5.10 and higher – Supports AMD Milan (for example, M6a)
+ Linux kernel versions 5.6 and higher – Supports Intel Icelake (for example, M6i)

To detect if a running system's kernel recognizes the CPU, run the following command.

```
if [ -d /sys/devices/system/cpu/cpu0/cpuidle ]; then echo "C-state control enabled"; else echo "Kernel cpuidle driver does not recognize this CPU generation"; fi
```

If the output of this command indicates a lack of support, we recommend that you upgrade the kernel.

This section describes how to limit deeper sleep states and disable Turbo Boost (by requesting the `P1` P-state) to provide low-latency and the lowest processor speed variability for these types of workloads.

**To limit deeper sleep states and disable Turbo Boost on AL2**

1. Open the `/etc/default/grub` file with your editor of choice.

   ```
   [ec2-user ~]$ sudo vim /etc/default/grub
   ```

1. Edit the `GRUB_CMDLINE_LINUX_DEFAULT` line and add the `intel_idle.max_cstate=1` and `processor.max_cstate=1` options to set `C1` as the deepest C-state for idle cores.

   ```
   GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 intel_idle.max_cstate=1 processor.max_cstate=1"
   GRUB_TIMEOUT=0
   ```

   The `intel_idle.max_cstate=1` option configures the C-state limit for Intel-based instances, and the `processor.max_cstate=1` option configures the C-state limit for AMD-based instances. It is safe to add both options to your configuration. This allows a single configuration to set the desired behavior on both Intel and AMD.

1. Save the file and exit your editor.

1.  Run the following command to rebuild the boot configuration.

   ```
   [ec2-user ~]$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
   ```

1. Reboot your instance to enable the new kernel option.

   ```
   [ec2-user ~]$ sudo reboot
   ```

1. When you need the low processor speed variability that the `P1` P-state provides, run the following command to disable Turbo Boost.

   ```
   [ec2-user ~]$ sudo sh -c "echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo"
   ```

1. When your workload is finished, you can re-enable Turbo Boost with the following command.

   ```
   [ec2-user ~]$ sudo sh -c "echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo"
   ```
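You can confirm the current setting at any time by reading the same sysfs file: `1` means Turbo Boost is disabled, `0` means it is enabled.

```
[ec2-user ~]$ cat /sys/devices/system/cpu/intel_pstate/no_turbo
```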

**To limit deeper sleep states and disable Turbo Boost on Amazon Linux AMI**

1. Open the `/boot/grub/grub.conf` file with your editor of choice.

   ```
   [ec2-user ~]$ sudo vim /boot/grub/grub.conf
   ```

1. Edit the `kernel` line of the first entry and add the `intel_idle.max_cstate=1` and `processor.max_cstate=1` options to set `C1` as the deepest C-state for idle cores.

   ```
   # created by imagebuilder
   default=0
   timeout=1
   hiddenmenu
   
   title Amazon Linux 2014.09 (3.14.26-24.46.amzn1.x86_64)
   root (hd0,0)
   kernel /boot/vmlinuz-3.14.26-24.46.amzn1.x86_64 root=LABEL=/ console=ttyS0 intel_idle.max_cstate=1 processor.max_cstate=1
   initrd /boot/initramfs-3.14.26-24.46.amzn1.x86_64.img
   ```

   The `intel_idle.max_cstate=1` option configures the C-state limit for Intel-based instances, and the `processor.max_cstate=1` option configures the C-state limit for AMD-based instances. It is safe to add both options to your configuration. This allows a single configuration to set the desired behavior on both Intel and AMD.

1. Save the file and exit your editor.

1. Reboot your instance to enable the new kernel option.

   ```
   [ec2-user ~]$ sudo reboot
   ```

1. When you need the low processor speed variability that the `P1` P-state provides, run the following command to disable Turbo Boost.

   ```
   [ec2-user ~]$ sudo sh -c "echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo"
   ```

1. When your workload is finished, you can re-enable Turbo Boost with the following command.

   ```
   [ec2-user ~]$ sudo sh -c "echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo"
   ```

The following example shows a `c4.8xlarge` instance with two vCPUs actively performing work at the baseline core frequency, with no Turbo Boost.

```
[ec2-user ~]$ sudo turbostat stress -c 2 -t 10
stress: info: [5389] dispatching hogs: 2 cpu, 0 io, 0 vm, 0 hdd
stress: info: [5389] successful run completed in 10s
pk cor CPU    %c0  GHz  TSC SMI    %c1    %c3    %c6    %c7   %pc2   %pc3   %pc6   %pc7  Pkg_W RAM_W PKG_% RAM_%
             5.59 2.90 2.90   0  94.41   0.00   0.00   0.00   0.00   0.00   0.00   0.00 128.48 33.54 200.00  0.00
 0   0   0   0.04 2.90 2.90   0  99.96   0.00   0.00   0.00   0.00   0.00   0.00   0.00  65.33 19.02 100.00  0.00
 0   0  18   0.04 2.90 2.90   0  99.96
 0   1   1   0.05 2.90 2.90   0  99.95   0.00   0.00   0.00
 0   1  19   0.04 2.90 2.90   0  99.96
 0   2   2   0.04 2.90 2.90   0  99.96   0.00   0.00   0.00
 0   2  20   0.04 2.90 2.90   0  99.96
 0   3   3   0.05 2.90 2.90   0  99.95   0.00   0.00   0.00
 0   3  21  99.95 2.90 2.90   0   0.05
...
 1   1  28  99.92 2.90 2.90   0   0.08
 1   2  11   0.06 2.90 2.90   0  99.94   0.00   0.00   0.00
 1   2  29   0.05 2.90 2.90   0  99.95
```

The cores for vCPUs 21 and 28 are actively performing work at the baseline processor speed of 2.9 GHz, and all inactive cores are also running at the baseline speed in the `C1` C-state, ready to accept instructions.

# I/O scheduler for AL2
<a name="io-scheduler"></a>

The I/O scheduler is a part of the Linux operating system that sorts and merges I/O requests and determines the order in which they are processed.

I/O schedulers are particularly beneficial for devices such as magnetic hard drives, where seek time can be expensive and where it is optimal to merge co-located requests. I/O schedulers have less of an effect with solid state devices and virtualized environments. This is because for solid state devices, sequential and random access don't differ, and for virtualized environments, the host provides its own layer of scheduling.

This topic discusses the Amazon Linux I/O scheduler. For more information about the I/O scheduler used by other Linux distributions, refer to their respective documentation.

**Topics**
+ [

## Supported schedulers
](#supported-schedulers)
+ [

## Default scheduler
](#default-schedulers)
+ [

## Change the scheduler
](#change-scheduler)

## Supported schedulers
<a name="supported-schedulers"></a>

Amazon Linux supports the following I/O schedulers:
+ `deadline` — The *Deadline* I/O scheduler sorts I/O requests and handles them in the most efficient order. It guarantees a start time for each I/O request. It also gives I/O requests that have been pending for too long a higher priority.
+ `cfq` — The *Completely Fair Queueing* (CFQ) I/O scheduler attempts to fairly allocate I/O resources between processes. It sorts and inserts I/O requests into per-process queues.
+ `noop` — The *No Operation* (noop) I/O scheduler inserts all I/O requests into a FIFO queue and then merges them into a single request. This scheduler does not do any request sorting.

## Default scheduler
<a name="default-schedulers"></a>

No Operation (noop) is the default I/O scheduler for Amazon Linux. This scheduler is used for the following reasons:
+ Many instance types use virtualized devices where the underlying host performs scheduling for the instance.
+ Solid state devices are used in many instance types where the benefits of an I/O scheduler have less effect.
+ It is the least invasive I/O scheduler, and it can be customized if needed.

## Change the scheduler
<a name="change-scheduler"></a>

Changing the I/O scheduler can increase or decrease performance based on whether the scheduler results in more or fewer I/O requests being completed in a given time. This is largely dependent on your workload, the generation of the instance type that's being used, and the type of device being accessed. If you change the I/O scheduler being used, we recommend that you use a tool, such as **iotop**, to measure I/O performance and to determine whether the change is beneficial for your use case.

You can view the I/O scheduler for a device using the following command, which uses `nvme0n1` as an example. Replace `nvme0n1` in the following command with the device listed in `/sys/block` on your instance.

```
$ cat /sys/block/nvme0n1/queue/scheduler
```

To set the I/O scheduler for the device, use the following command, replacing *deadline* with the scheduler that you want to use (`cfq`, `deadline`, or `noop`). Writing to this file requires root permissions.

```
$ sudo sh -c 'echo deadline > /sys/block/nvme0n1/queue/scheduler'
```

For example, to set the I/O scheduler for an *xvda* device from `noop` to `cfq`, use the following command.

```
$ sudo sh -c 'echo cfq > /sys/block/xvda/queue/scheduler'
```
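A scheduler set by writing to `/sys` does not persist across reboots. One common way to make the change permanent — a sketch, specific to grub2-based AL2 and to kernels that still honor the legacy `elevator` parameter — is to add the parameter to `GRUB_CMDLINE_LINUX_DEFAULT` in `/etc/default/grub` and rebuild the boot configuration:

```
# After appending, for example, elevator=deadline to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
$ sudo reboot
```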

# Change the hostname of your AL2 instance
<a name="set-hostname"></a>

When you launch an instance into a VPC, Amazon EC2 assigns a guest OS hostname. The type of hostname that Amazon EC2 assigns depends on your subnet settings. For more information about EC2 hostnames, see [Amazon EC2 instance hostname types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-naming.html) in the *Amazon EC2 User Guide*.

A typical Amazon EC2 private DNS name for an EC2 instance configured to use IP-based naming with an IPv4 address looks something like this: `ip-12-34-56-78.us-west-2.compute.internal`, where the name consists of the internal domain, the service (in this case, `compute`), the Region, and a form of the private IPv4 address. Part of this hostname is displayed at the shell prompt when you log in to your instance (for example, `ip-12-34-56-78`). Each time you stop and restart your Amazon EC2 instance (unless you are using an Elastic IP address), the public IPv4 address changes, and so does your public DNS name, system hostname, and shell prompt.

**Important**  
This information applies to Amazon Linux. For information about other distributions, see their specific documentation.

## Change the system hostname
<a name="set-hostname-system"></a>

If you have a public DNS name registered for the IP address of your instance (such as `webserver.mydomain.com`), you can set the system hostname so your instance identifies itself as a part of that domain. This also changes the shell prompt so that it displays the first portion of this name instead of the hostname supplied by AWS (for example, `ip-12-34-56-78`). If you do not have a public DNS name registered, you can still change the hostname, but the process is a little different.

In order for your hostname update to persist, you must verify that the `preserve_hostname` cloud-init setting is set to `true`. You can run the following command to edit or add this setting:

```
sudo vi /etc/cloud/cloud.cfg
```

If the `preserve_hostname` setting is not listed, add the following line of text to the end of the file: 

```
preserve_hostname: true
```
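If you are scripting the change rather than editing the file by hand, you can append the setting instead. A sketch only; it does not check whether `preserve_hostname` is already present in the file:

```
[ec2-user ~]$ echo "preserve_hostname: true" | sudo tee -a /etc/cloud/cloud.cfg
```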

**To change the system hostname to a public DNS name**

Follow this procedure if you already have a public DNS name registered.

1. 
   + For AL2: Use the **hostnamectl** command to set your hostname to reflect the fully qualified domain name (such as **webserver.mydomain.com**).

     ```
     [ec2-user ~]$ sudo hostnamectl set-hostname webserver.mydomain.com
     ```
   + For Amazon Linux AMI: On your instance, open the `/etc/sysconfig/network` configuration file in your favorite text editor and change the `HOSTNAME` entry to reflect the fully qualified domain name (such as **webserver.mydomain.com**).

     ```
     HOSTNAME=webserver.mydomain.com
     ```

1. Reboot the instance to pick up the new hostname.

   ```
   [ec2-user ~]$ sudo reboot
   ```

   Alternatively, you can reboot using the Amazon EC2 console (on the **Instances** page, select the instance and choose **Instance state**, **Reboot instance**).

1. Log in to your instance and verify that the hostname has been updated. Your prompt should show the new hostname (up to the first ".") and the **hostname** command should show the fully qualified domain name.

   ```
   [ec2-user@webserver ~]$ hostname
   webserver.mydomain.com
   ```

**To change the system hostname without a public DNS name**

1. 
   + For AL2: Use the **hostnamectl** command to set your hostname to reflect the desired system hostname (such as **webserver**).

     ```
     [ec2-user ~]$ sudo hostnamectl set-hostname webserver.localdomain
     ```
   + For Amazon Linux AMI: On your instance, open the `/etc/sysconfig/network` configuration file in your favorite text editor and change the `HOSTNAME` entry to reflect the desired system hostname (such as **webserver**).

     ```
     HOSTNAME=webserver.localdomain
     ```

1. Open the `/etc/hosts` file in your favorite text editor and change the entry beginning with **127.0.0.1** to match the example below, substituting your own hostname.

   ```
   127.0.0.1 webserver.localdomain webserver localhost4 localhost4.localdomain4
   ```

1. Reboot the instance to pick up the new hostname.

   ```
   [ec2-user ~]$ sudo reboot
   ```

   Alternatively, you can reboot using the Amazon EC2 console (on the **Instances** page, select the instance and choose **Instance state**, **Reboot instance**).

1. Log in to your instance and verify that the hostname has been updated. Your prompt should show the new hostname (up to the first ".") and the **hostname** command should show the fully qualified domain name.

   ```
   [ec2-user@webserver ~]$ hostname
   webserver.localdomain
   ```

You can also implement more programmatic solutions, such as specifying user data to configure your instance. If your instance is part of an Auto Scaling group, you can use lifecycle hooks to define user data. For more information, see [Run commands on your Linux instance at launch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) and [Lifecycle hook for instance launch](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-autoscaling-lifecyclehook.html#aws-resource-autoscaling-lifecyclehook--examples--Lifecycle_hook_for_instance_launch) in the *AWS CloudFormation User Guide*.
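As a sketch of the user data approach, a minimal script supplied at launch could set the hostname before the first login. The name `webserver.localdomain` is an example:

```
#!/bin/bash
# Example EC2 user data for AL2: set the system hostname at first boot.
hostnamectl set-hostname webserver.localdomain
```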

## Change the shell prompt without affecting the hostname
<a name="set-hostname-shell"></a>

If you do not want to modify the hostname for your instance, but you would like to have a more useful system name (such as **webserver**) displayed than the private name supplied by AWS (for example, `ip-12-34-56-78`), you can edit the shell prompt configuration files to display your system nickname instead of the hostname.

**To change the shell prompt to a host nickname**

1. Create a file in `/etc/profile.d` that sets the environment variable called `NICKNAME` to the value you want in the shell prompt. For example, to set the system nickname to **webserver**, run the following command.

   ```
   [ec2-user ~]$ sudo sh -c 'echo "export NICKNAME=webserver" > /etc/profile.d/prompt.sh'
   ```

1. Open the `/etc/bashrc` (Red Hat) or `/etc/bash.bashrc` (Debian/Ubuntu) file in your favorite text editor (such as **vim** or **nano**). You need to use **sudo** with the editor command because `/etc/bashrc` and `/etc/bash.bashrc` are owned by `root`.

1. Edit the file and change the shell prompt variable (`PS1`) to display your nickname instead of the hostname. Find the following line that sets the shell prompt in `/etc/bashrc` or `/etc/bash.bashrc` (several surrounding lines are shown below for context; look for the line that starts with `[ "$PS1"`):

   ```
     # Turn on checkwinsize
     shopt -s checkwinsize
     [ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "
     # You might want to have e.g. tty in prompt (e.g. more virtual machines)
     # and console windows
   ```

   Change `\h` (the prompt escape sequence for the hostname) in that line to `$NICKNAME` so that the prompt displays the value of the `NICKNAME` variable.

   ```
     # Turn on checkwinsize
     shopt -s checkwinsize
     [ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@$NICKNAME \W]\\$ "
     # You might want to have e.g. tty in prompt (e.g. more virtual machines)
     # and console windows
   ```

1. (Optional) To set the title on shell windows to the new nickname, complete the following steps.

   1. Create a file named `/etc/sysconfig/bash-prompt-xterm`.

      ```
      [ec2-user ~]$ sudo touch /etc/sysconfig/bash-prompt-xterm
      ```

   1. Make the file executable using the following command.

      ```
      [ec2-user ~]$ sudo chmod +x /etc/sysconfig/bash-prompt-xterm
      ```

   1. Open the `/etc/sysconfig/bash-prompt-xterm` file in your favorite text editor (such as **vim** or **nano**). You need to use **sudo** with the editor command because `/etc/sysconfig/bash-prompt-xterm` is owned by `root`.

   1. Add the following line to the file.

      ```
      echo -ne "\033]0;${USER}@${NICKNAME}:${PWD/#$HOME/~}\007"
      ```

1. Log out and then log back in to pick up the new nickname value.
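
You can preview the resulting prompt string before logging out (a quick sketch, assuming bash; `webserver` is the example nickname from above):

```shell
# Simulate the change in a throwaway shell: export the nickname, then
# build the prompt string exactly as the edited /etc/bashrc line does.
export NICKNAME=webserver
PS1="[\u@$NICKNAME \W]\\$ "
printf '%s\n' "$PS1"    # prints: [\u@webserver \W]\$
```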

## Change the hostname on other Linux distributions
<a name="set-hostname-other-linux"></a>

The procedures on this page are intended for use with Amazon Linux only. For more information about other Linux distributions, see their specific documentation and the following article:
+ [How do I assign a static hostname to a private Amazon EC2 instance running RHEL 7 or CentOS 7?](https://aws.amazon.com/premiumsupport/knowledge-center/linux-static-hostname-rhel7-centos7/)

# Set up dynamic DNS on your AL2 instance
<a name="dynamic-dns"></a>

When you launch an EC2 instance, it is assigned a public IP address and a public Domain Name System (DNS) name that you can use to reach it from the internet. Because there are so many hosts in the Amazon Web Services domain, these public names must be quite long for each name to remain unique. A typical Amazon EC2 public DNS name looks something like this: `ec2-12-34-56-78.us-west-2.compute.amazonaws.com`, where the name consists of the Amazon Web Services domain, the service (in this case, `compute`), the AWS Region, and a form of the public IP address.

Dynamic DNS services provide custom DNS host names within their domain area that can be easy to remember and that can also be more relevant to your host's use case. Some of these services are also free of charge. You can use a dynamic DNS provider with Amazon EC2 and configure the instance to update the IP address associated with a public DNS name each time the instance starts. There are many different providers to choose from, and the specific details of choosing a provider and registering a name with them are outside the scope of this guide.<a name="procedure-dynamic-dns"></a>

**To use dynamic DNS with Amazon EC2**

1. Sign up with a dynamic DNS service provider and register a public DNS name with their service. This procedure uses the free service from [noip.com/free](https://www.noip.com/free) as an example.

1. Configure the dynamic DNS update client. After you have a dynamic DNS service provider and a public DNS name registered with their service, point the DNS name to the IP address for your instance. Many providers (including [noip.com](https://noip.com)) allow you to do this manually from your account page on their website, but many also support software update clients. If an update client is running on your EC2 instance, your dynamic DNS record is updated each time the IP address changes, as happens after a shutdown and restart. In this example, you install the noip2 client, which works with the service provided by [noip.com](https://noip.com).

   1. Enable the Extra Packages for Enterprise Linux (EPEL) repository to gain access to the `noip2` client.
**Note**  
AL2 instances have the GPG keys and repository information for the EPEL repository installed by default. For more information, and to download the latest version of this package, see [https://fedoraproject.org/wiki/EPEL](https://fedoraproject.org/wiki/EPEL).

      ```
      [ec2-user ~]$ sudo amazon-linux-extras install epel -y
      ```

   1. Install the `noip` package.

      ```
      [ec2-user ~]$ sudo yum install -y noip
      ```

   1. Create the configuration file. Enter the login and password information when prompted and answer the subsequent questions to configure the client.

      ```
      [ec2-user ~]$ sudo noip2 -C
      ```

1. Enable the noip service.

   ```
   [ec2-user ~]$ sudo systemctl enable noip.service
   ```

1. Start the noip service.

   ```
   [ec2-user ~]$ sudo systemctl start noip.service
   ```

   This command starts the client, which reads the configuration file (`/etc/no-ip2.conf`) that you created earlier and updates the IP address for the public DNS name that you chose.

1. Verify that the update client has set the correct IP address for your dynamic DNS name. Allow a few minutes for the DNS records to update, and then try to connect to your instance using SSH with the public DNS name that you configured in this procedure.
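
   On the instance, you can script this check. The following is a sketch with hypothetical addresses hard-coded so that it is safe to run anywhere; on a real instance, populate the variables with `dig +short your-name.ddns.net` and the instance metadata service as noted in the comments.

   ```shell
   # Compare the address the DNS name resolves to with the instance's
   # public IP. Both values below are hypothetical placeholders; on the
   # instance, set them with:
   #   dns_ip=$(dig +short your-name.ddns.net)
   #   public_ip=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
   dns_ip="12.34.56.78"
   public_ip="12.34.56.78"
   [ "$dns_ip" = "$public_ip" ] && echo "record is current" || echo "record is stale"
   ```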

# Configure your network interface using ec2-net-utils for AL2
<a name="ec2-net-utils"></a>

AL2 AMIs may include additional scripts installed by AWS, known as ec2-net-utils. These scripts optionally automate the configuration of your network interfaces and are available for AL2 only.

**Note**  
For Amazon Linux 2023, the `amazon-ec2-net-utils` package generates interface-specific configurations in the `/run/systemd/network` directory. For more information, see [Networking service](https://docs.aws.amazon.com/linux/al2023/ug/networking-service.html) in the *Amazon Linux 2023 User Guide*.

Use the following command to install the package on AL2 if it's not already installed, or update it if it's installed and additional updates are available:

```
$ sudo yum install ec2-net-utils
```

The following components are part of ec2-net-utils:

udev rules (`/etc/udev/rules.d`)  
Identifies network interfaces when they are attached, detached, or reattached to a running instance, and ensures that the hotplug script runs (`53-ec2-network-interfaces.rules`). Maps the MAC address to a device name (`75-persistent-net-generator.rules`, which generates `70-persistent-net.rules`).

hotplug script  
Generates an interface configuration file suitable for use with DHCP (`/etc/sysconfig/network-scripts/ifcfg-eth`*N*). Also generates a route configuration file (`/etc/sysconfig/network-scripts/route-eth`*N*).

DHCP script  
Whenever the network interface receives a new DHCP lease, this script queries the instance metadata for Elastic IP addresses. For each Elastic IP address, it adds a rule to the routing policy database to ensure that outbound traffic from that address uses the correct network interface. It also adds each private IP address to the network interface as a secondary address.

**ec2ifup** eth*N* (`/usr/sbin/`)  
Extends the functionality of the standard **ifup**. After this script rewrites the configuration files `ifcfg-eth`*N* and `route-eth`*N*, it runs **ifup**.

**ec2ifdown** eth*N* (`/usr/sbin/`)  
Extends the functionality of the standard **ifdown**. After this script removes any rules for the network interface from the routing policy database, it runs **ifdown**.

**ec2ifscan** (`/usr/sbin/`)  
Checks for network interfaces that have not been configured and configures them.  
This script isn't available in the initial release of ec2-net-utils.

To list any configuration files that were generated by ec2-net-utils, use the following command:

```
$ ls -l /etc/sysconfig/network-scripts/*-eth?
```

To disable the automation, you can add `EC2SYNC=no` to the corresponding `ifcfg-eth`*N* file. For example, use the following command to disable the automation for the eth1 interface:

```
$ sudo sed -i -e 's/^EC2SYNC=yes/EC2SYNC=no/' /etc/sysconfig/network-scripts/ifcfg-eth1
```
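
If you want to see what the substitution does before touching the real file, you can try it on a throwaway copy (a sketch; the file contents below are hypothetical):

```shell
# Create a throwaway copy with hypothetical contents, apply the same
# substitution, and confirm the result.
printf 'DEVICE=eth1\nEC2SYNC=yes\n' > /tmp/ifcfg-eth1-test
sed -i -e 's/^EC2SYNC=yes/EC2SYNC=no/' /tmp/ifcfg-eth1-test
grep '^EC2SYNC' /tmp/ifcfg-eth1-test    # prints: EC2SYNC=no
```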

To disable the automation completely, you can remove the package using the following command:

```
$ sudo yum remove ec2-net-utils
```

# User provided kernels
<a name="UserProvidedKernels"></a>

If you need a custom kernel on your Amazon EC2 instances, you can start with an AMI that is close to what you want, compile the custom kernel on your instance, and update the bootloader to point to the new kernel. This process varies depending on the virtualization type that your AMI uses. For more information, see [Linux AMI virtualization types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/virtualization_types.html) in the *Amazon EC2 User Guide*.

**Topics**
+ [

## HVM AMIs (GRUB)
](#HVM_instances)
+ [

## Paravirtual AMIs (PV-GRUB)
](#Paravirtual_instances)

## HVM AMIs (GRUB)
<a name="HVM_instances"></a>

HVM instance volumes are treated like actual physical disks. The boot process is similar to that of a bare metal operating system with a partitioned disk and bootloader, which enables it to work with all currently supported Linux distributions. The most common bootloader is GRUB or GRUB2.

By default, GRUB does not send its output to the instance console because it creates an extra boot delay. For more information, see [Instance console output](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshoot-unreachable-instance.html#instance-console-console-output) in the *Amazon EC2 User Guide*. If you are installing a custom kernel, you should consider enabling GRUB output.

You don't need to specify a fallback kernel, but we recommend that you have a fallback when you test a new kernel. GRUB can fall back to another kernel in the event that the new kernel fails. Having a fallback kernel enables the instance to boot even if the new kernel isn't found.

The legacy GRUB for Amazon Linux uses `/boot/grub/menu.lst`. GRUB2 for AL2 uses `/etc/default/grub`. For more information about updating the default kernel in the bootloader, see the documentation for your Linux distribution.

## Paravirtual AMIs (PV-GRUB)
<a name="Paravirtual_instances"></a>

AMIs that use paravirtual (PV) virtualization use a system called *PV-GRUB* during the boot process. PV-GRUB is a paravirtual bootloader that runs a patched version of GNU GRUB 0.97. When you start an instance, PV-GRUB starts the boot process and then chain loads the kernel specified by your image's `menu.lst` file.

PV-GRUB understands standard `grub.conf` or `menu.lst` commands, which allows it to work with all currently supported Linux distributions. Older distributions such as Ubuntu 10.04 LTS, Oracle Enterprise Linux, or CentOS 5.x require a special "ec2" or "xen" kernel package, while newer distributions include the required drivers in the default kernel package.

Most modern paravirtual AMIs use a PV-GRUB AKI by default, including all of the paravirtual Linux AMIs available in the Amazon EC2 Launch Wizard Quick Start menu. As long as the kernel you want to use is compatible with your distribution, there are no additional steps that you need to take to use a different kernel on your instance. The best way to run a custom kernel on your instance is to start with an AMI that is close to what you want, compile the custom kernel on your instance, and modify the `menu.lst` file to boot with that kernel.

You can verify that the kernel image for an AMI is a PV-GRUB AKI. Run the following [describe-images](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-images.html) command (substituting your kernel image ID) and check whether the `Name` field starts with `pv-grub`:

```
aws ec2 describe-images --filters Name=image-id,Values=aki-880531cd
```

**Topics**
+ [

### Limitations of PV-GRUB
](#pv-grub-limitations)
+ [

### Configure GRUB for paravirtual AMIs
](#configuringGRUB)
+ [

### Amazon PV-GRUB Kernel Image IDs
](#AmazonKernelImageIDs)
+ [

### Update PV-GRUB
](#UpdatingPV-GRUB)

### Limitations of PV-GRUB
<a name="pv-grub-limitations"></a>

PV-GRUB has the following limitations:
+ You can't use the 64-bit version of PV-GRUB to start a 32-bit kernel or vice versa.
+ You can't specify an Amazon ramdisk image (ARI) when using a PV-GRUB AKI.
+ AWS has tested and verified that PV-GRUB works with these file system formats: EXT2, EXT3, EXT4, JFS, XFS, and ReiserFS. Other file system formats might not work.
+ PV-GRUB can boot kernels compressed using the gzip, bzip2, lzo, and xz compression formats.
+ Cluster AMIs don't support or need PV-GRUB, because they use full hardware virtualization (HVM). While paravirtual instances use PV-GRUB to boot, HVM instance volumes are treated like actual disks, and the boot process is similar to the boot process of a bare metal operating system with a partitioned disk and bootloader. 
+ PV-GRUB versions 1.03 and earlier don't support GPT partitioning; they support MBR partitioning only.
+ If you plan to use a logical volume manager (LVM) with Amazon Elastic Block Store (Amazon EBS) volumes, you need a separate boot partition outside of the LVM. Then you can create logical volumes with the LVM.

### Configure GRUB for paravirtual AMIs
<a name="configuringGRUB"></a>

To boot PV-GRUB, a GRUB `menu.lst` file must exist in the image; the most common location for this file is `/boot/grub/menu.lst`.

The following is an example of a `menu.lst` configuration file for booting an AMI with a PV-GRUB AKI. In this example, there are two kernel entries to choose from: Amazon Linux 2018.03 (the original kernel for this AMI), and Vanilla Linux 4.16.4 (a newer version of the Vanilla Linux kernel from [https://www.kernel.org/](https://www.kernel.org/)). The Vanilla entry was copied from the original entry for this AMI, and the `kernel` and `initrd` paths were updated to the new locations. The `default 0` parameter points the bootloader to the first entry it sees (in this case, the Vanilla entry), and the `fallback 1` parameter points the bootloader to the next entry if there is a problem booting the first.

```
default 0
fallback 1
timeout 0
hiddenmenu

title Vanilla Linux 4.16.4
root (hd0)
kernel /boot/vmlinuz-4.16.4 root=LABEL=/ console=hvc0
initrd /boot/initrd.img-4.16.4

title Amazon Linux 2018.03 (4.14.26-46.32.amzn1.x86_64)
root (hd0)
kernel /boot/vmlinuz-4.14.26-46.32.amzn1.x86_64 root=LABEL=/ console=hvc0
initrd /boot/initramfs-4.14.26-46.32.amzn1.x86_64.img
```

You don't need to specify a fallback kernel in your `menu.lst` file, but we recommend that you have a fallback when you test a new kernel. PV-GRUB can fall back to another kernel in the event that the new kernel fails. Having a fallback kernel allows the instance to boot even if the new kernel isn't found. 

PV-GRUB checks the following locations for `menu.lst`, using the first one it finds:
+  `(hd0)/boot/grub` 
+  `(hd0,0)/boot/grub` 
+  `(hd0,0)/grub` 
+  `(hd0,1)/boot/grub` 
+  `(hd0,1)/grub` 
+  `(hd0,2)/boot/grub` 
+  `(hd0,2)/grub` 
+  `(hd0,3)/boot/grub` 
+  `(hd0,3)/grub` 

Note that PV-GRUB 1.03 and earlier only check one of the first two locations in this list.

### Amazon PV-GRUB Kernel Image IDs
<a name="AmazonKernelImageIDs"></a>

PV-GRUB AKIs are available in all Amazon EC2 Regions, excluding Asia Pacific (Osaka). There are AKIs for both 32-bit and 64-bit architecture types. Most modern AMIs use a PV-GRUB AKI by default.

We recommend that you always use the latest version of the PV-GRUB AKI, as not all versions of the PV-GRUB AKI are compatible with all instance types. Use the following [describe-images](https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-images.html) command to get a list of the PV-GRUB AKIs for the current region:

```
aws ec2 describe-images --owners amazon --filters Name=name,Values=pv-grub-*.gz
```

PV-GRUB is the only AKI available in the `ap-southeast-2` Region. You should verify that any AMI you want to copy to this Region is using a version of PV-GRUB that is available in this Region.

The following are the current AKI IDs for each Region. Register new AMIs using an hd0 AKI.

**Note**  
We continue to provide hd00 AKIs for backward compatibility in Regions where they were previously available.


**ap-northeast-1, Asia Pacific (Tokyo)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-f975a998  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-7077ab11  |  pv-grub-hd0\_1.05-x86\_64.gz  | 


**ap-southeast-1, Asia Pacific (Singapore)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-17a40074  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-73a50110  |  pv-grub-hd0\_1.05-x86\_64.gz  | 


**ap-southeast-2, Asia Pacific (Sydney)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-ba5665d9  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-66506305  |  pv-grub-hd0\_1.05-x86\_64.gz  | 


**eu-central-1, Europe (Frankfurt)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-1419e57b  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-931fe3fc  |  pv-grub-hd0\_1.05-x86\_64.gz  | 


**eu-west-1, Europe (Ireland)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-1c9fd86f  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-dc9ed9af  |  pv-grub-hd0\_1.05-x86\_64.gz  | 


**sa-east-1, South America (São Paulo)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-7cd34110  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-912fbcfd  |  pv-grub-hd0\_1.05-x86\_64.gz  | 


**us-east-1, US East (N. Virginia)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-04206613  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-5c21674b  |  pv-grub-hd0\_1.05-x86\_64.gz  | 


**us-gov-west-1, AWS GovCloud (US-West)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-5ee9573f  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-9ee55bff  |  pv-grub-hd0\_1.05-x86\_64.gz  | 


**us-west-1, US West (N. California)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-43cf8123  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-59cc8239  |  pv-grub-hd0\_1.05-x86\_64.gz  | 


**us-west-2, US West (Oregon)**  

| Image ID | Image Name | 
| --- | --- | 
|  aki-7a69931a  |  pv-grub-hd0\_1.05-i386.gz  | 
|  aki-70cb0e10  |  pv-grub-hd0\_1.05-x86\_64.gz  | 

### Update PV-GRUB
<a name="UpdatingPV-GRUB"></a>

We recommend that you always use the latest version of the PV-GRUB AKI, as not all versions of the PV-GRUB AKI are compatible with all instance types. Also, older versions of PV-GRUB are not available in all Regions, so if you copy an AMI that uses an older version to a Region that does not support that version, you cannot boot instances launched from that AMI until you update the kernel image. Use the following procedures to check your instance's version of PV-GRUB and update it if necessary.

**To check your PV-GRUB version**

1. Find the kernel ID for your instance.

   ```
   aws ec2 describe-instance-attribute --instance-id instance_id --attribute kernel --region region
   
   {
       "InstanceId": "instance_id", 
       "KernelId": "aki-70cb0e10"
   }
   ```

   The kernel ID for this instance is `aki-70cb0e10`.

1. View the version information of that kernel ID.

   ```
   aws ec2 describe-images --image-ids aki-70cb0e10 --region region
   
   {
       "Images": [
           {
               "VirtualizationType": "paravirtual", 
               "Name": "pv-grub-hd0_1.05-x86_64.gz", 
               ...
               "Description": "PV-GRUB release 1.05, 64-bit"
           }
       ]
   }
   ```

   This kernel image is PV-GRUB 1.05. If your PV-GRUB version is not the newest version (as shown in [Amazon PV-GRUB Kernel Image IDs](#AmazonKernelImageIDs)), you should update it using the following procedure.

**To update your PV-GRUB version**

If your instance is using an older version of PV-GRUB, you should update it to the latest version.

1. Identify the latest PV-GRUB AKI for your Region and processor architecture from [Amazon PV-GRUB Kernel Image IDs](#AmazonKernelImageIDs).

1. Stop your instance. Your instance must be stopped to modify the kernel image used.

   ```
   aws ec2 stop-instances --instance-ids instance_id --region region
   ```

1. Modify the kernel image used for your instance.

   ```
   aws ec2 modify-instance-attribute --instance-id instance_id --kernel kernel_id --region region
   ```

1. Restart your instance.

   ```
   aws ec2 start-instances --instance-ids instance_id --region region 
   ```

# AL2 AMI release notifications
<a name="linux-ami-notifications"></a>

To be notified when new Amazon Linux AMIs are released, you can subscribe using Amazon SNS.

For information about subscribing to notifications for AL2023, see [Receiving notifications on new updates](https://docs.aws.amazon.com/linux/al2023/ug/receive-update-notification.html) in the *Amazon Linux 2023 User Guide*.

**Note**  
Standard support for AL1 ended on December 31, 2020. The AL1 maintenance support phase ended December 31, 2023. For more information about the AL1 EOL and maintenance support, see the blog post [Update on Amazon Linux AMI end-of-life](https://aws.amazon.com/blogs/aws/update-on-amazon-linux-ami-end-of-life/).

**To subscribe to Amazon Linux notifications**

1. Open the Amazon SNS console at [https://console.aws.amazon.com/sns/v3/home](https://console.aws.amazon.com/sns/v3/home).

1. In the navigation bar, change the Region to **US East (N. Virginia)**, if necessary. You must select the Region in which the SNS notification that you are subscribing to was created.

1. In the navigation pane, choose **Subscriptions**, **Create subscription**.

1. For the **Create subscription** dialog box, do the following:

   1. [AL2] For **Topic ARN**, copy and paste the following Amazon Resource Name (ARN): **arn:aws:sns:us-east-1:137112412989:amazon-linux-2-ami-updates**.

   1. [Amazon Linux] For **Topic ARN**, copy and paste the following Amazon Resource Name (ARN): **arn:aws:sns:us-east-1:137112412989:amazon-linux-ami-updates**.

   1. For **Protocol**, choose **Email**.

   1. For **Endpoint**, enter an email address that you can use to receive the notifications.

   1. Choose **Create subscription**.

1. You receive a confirmation email with the subject line "AWS Notification - Subscription Confirmation". Open the email and choose **Confirm subscription** to complete your subscription.

Whenever AMIs are released, we send notifications to the subscribers of the corresponding topic. To stop receiving these notifications, use the following procedure to unsubscribe.

**To unsubscribe from Amazon Linux notifications**

1. Open the Amazon SNS console at [https://console.aws.amazon.com/sns/v3/home](https://console.aws.amazon.com/sns/v3/home).

1. In the navigation bar, change the Region to **US East (N. Virginia)**, if necessary. You must use the Region in which the SNS notification was created.

1. In the navigation pane, choose **Subscriptions**, select the subscription, and choose **Actions**, **Delete subscriptions**.

1. When prompted for confirmation, choose **Delete**.

**Amazon Linux AMI SNS message format**  
The schema for the SNS message is as follows. 

```
{
    "description": "Validates output from AMI Release SNS message",
    "type": "object",
    "properties": {
        "v1": {
            "type": "object",
            "properties": {
                "ReleaseVersion": {
                    "description": "Major release (ex. 2018.03)",
                    "type": "string"
                },
                "ImageVersion": {
                    "description": "Full release (ex. 2018.03.0.20180412)",
                    "type": "string"
                },
                "ReleaseNotes": {
                    "description": "Human-readable string with extra information",
                    "type": "string"
                },
                "Regions": {
                    "type": "object",
                    "description": "Each key will be a region name (ex. us-east-1)",
                    "additionalProperties": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "Name": {
                                    "description": "AMI Name (ex. amzn-ami-hvm-2018.03.0.20180412-x86_64-gp2)",
                                    "type": "string"
                                },
                                "ImageId": {
                                    "description": "AMI ID (ex. ami-467ca739)",
                                    "type": "string"
                                }
                            },
                            "required": [
                                "Name",
                                "ImageId"
                            ]
                        }
                    }
                }
            },
            "required": [
                "ReleaseVersion",
                "ImageVersion",
                "ReleaseNotes",
                "Regions"
            ]
        }
    },
    "required": [
        "v1"
    ]
}
```
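
A notification conforming to this schema can be inspected with ordinary text tools. The following sketch saves a sample message (values taken from the schema's examples) and pulls out the AMI IDs with `grep` and `sed`; a production consumer should use a real JSON parser instead.

```shell
# Save a sample message and extract the AMI IDs. The message body is a
# hypothetical example built from the schema's documented field values.
cat > /tmp/sns-sample.json <<'EOF'
{"v1":{"ReleaseVersion":"2018.03","ImageVersion":"2018.03.0.20180412","ReleaseNotes":"example","Regions":{"us-east-1":[{"Name":"amzn-ami-hvm-2018.03.0.20180412-x86_64-gp2","ImageId":"ami-467ca739"}]}}}
EOF
grep -o '"ImageId":"[^"]*"' /tmp/sns-sample.json | sed 's/"ImageId":"//; s/"$//'
# prints: ami-467ca739
```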

# Configure the AL2 MATE desktop connection
<a name="amazon-linux-ami-mate"></a>

The [MATE desktop environment](https://mate-desktop.org/) is pre-installed and pre-configured in AMIs with the following description:

"`.NET Core x.x, Mono x.xx, PowerShell x.x, and MATE DE pre-installed to run your .NET applications on Amazon Linux 2 with Long Term Support (LTS).`"

The environment provides an intuitive graphical user interface for administering AL2 instances with minimal use of the command line. The interface uses graphical representations, such as icons, windows, toolbars, folders, wallpapers, and desktop widgets. Built-in, GUI-based tools are available to perform common tasks. For example, there are tools for adding and removing software, applying updates, organizing files, launching programs, and monitoring system health.

**Important**  
`xrdp` is the remote desktop software bundled in the AMI. By default, `xrdp` uses a self-signed TLS certificate to encrypt remote desktop sessions. Neither AWS nor the `xrdp` maintainers recommend using self-signed certificates in production. Instead, obtain a certificate from an appropriate certificate authority (CA) and install it on your instances. For more information about TLS configuration, see [TLS security layer](https://github.com/neutrinolabs/xrdp/wiki/TLS-security-layer) on the `xrdp` wiki.

**Note**  
If you prefer to use a virtual network computing (VNC) service instead of xrdp, see the [How do I install a GUI on my Amazon EC2 instance running AL2](https://repost.aws/knowledge-center/ec2-linux-2-install-gui) AWS Knowledge Center article.

## Prerequisite
<a name="al2-mate-configure-prerequisite"></a>

To run the commands shown in this topic, you must install the AWS Command Line Interface (AWS CLI) or AWS Tools for Windows PowerShell, and configure your AWS profile.

**Options**

1. Install the AWS CLI – For more information, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html) and [Configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) in the *AWS Command Line Interface User Guide*.

1. Install the Tools for Windows PowerShell – For more information, see [Installing the AWS Tools for Windows PowerShell](https://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html) and [Shared credentials](https://docs.aws.amazon.com/powershell/latest/userguide/shared-credentials-in-aws-powershell.html) in the *AWS Tools for PowerShell User Guide*.

**Tip**  
As an alternative to doing a full installation of the AWS CLI, you can use [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) for a browser-based, pre-authenticated shell that launches directly from the AWS Management Console. Check [supported AWS Regions](https://docs.aws.amazon.com/cloudshell/latest/userguide/supported-aws-regions.html) to make sure it's available in the Region you are working in.

## Configure the RDP connection
<a name="al2-mate-configure-connection"></a>

Follow these steps to set up a Remote Desktop Protocol (RDP) connection from your local machine to an AL2 instance running the MATE desktop environment.

1. To get the ID of an AL2 AMI that includes MATE in the AMI name, use the [describe-images](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-images.html) command from your local command line tool. If you have not installed the command line tools, you can run the following query directly from an AWS CloudShell session. For information about how to launch a shell session from CloudShell, see [Getting started with AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/getting-started.html). Alternatively, from the Amazon EC2 console, you can find the MATE-included AMI by launching an instance and then entering `MATE` in the AMI search bar. The AL2 Quick Start AMI with MATE pre-installed appears in the search results.

   ```
   aws ec2 describe-images --filters "Name=name,Values=amzn2*MATE*" --query "Images[*].[ImageId,Name,Description]"
   [
       [
           "ami-0123example0abc12",
           "amzn2-x86_64-MATEDE_DOTNET-2020.12.04",
           ".NET Core 5.0, Mono 6.12, PowerShell 7.1, and MATE DE pre-installed to run your .NET applications on Amazon Linux 2 with Long Term Support (LTS)."
       ],
       [
           "ami-0456example0def34",
           "amzn2-x86_64-MATEDE_DOTNET-2020.04.14",
           "Amazon Linux 2 with .Net Core, PowerShell, Mono, and MATE Desktop Environment"
       ]
   ]
   ```

   Choose the AMI that is appropriate for your use.

1. Launch an EC2 instance with the AMI that you located in the previous step. Configure the security group to allow for inbound TCP traffic to port 3389. For more information about configuring security groups, see [Security groups for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html). This configuration enables you to use an RDP client to connect to the instance.

1. Connect to the instance using [SSH](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-linux-inst-ssh.html).

1. Update the software and kernel on the instance.

   ```
   [ec2-user ~]$ sudo yum update
   ```

   After the update completes, reboot the instance to ensure that it is using the latest packages and libraries from the update; kernel updates are not loaded until a reboot occurs.

   ```
   [ec2-user ~]$ sudo reboot
   ```

1. Reconnect to the instance and run the following command to set the password for `ec2-user`.

   ```
   [ec2-user ~]$ sudo passwd ec2-user
   ```

1. Install the certificate and key.

   If you already have a certificate and key, copy them to the `/etc/xrdp/` directory as follows:
   + Certificate — `/etc/xrdp/cert.pem`
   + Key — `/etc/xrdp/key.pem`

   If you do not have a certificate and key, use the following command to generate them in the `/etc/xrdp` directory.

   ```
   [ec2-user ~]$ sudo openssl req -x509 -sha384 -newkey rsa:3072 -nodes -keyout /etc/xrdp/key.pem -out /etc/xrdp/cert.pem -days 365
   ```
**Note**  
This command generates a certificate that is valid for 365 days.
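
To confirm that the certificate and key are in place and well formed, you can inspect them with **openssl**. This is a quick sanity check, assuming the files were written to `/etc/xrdp/` as described above.

```
# Print the certificate's subject and validity window
sudo openssl x509 -in /etc/xrdp/cert.pem -noout -subject -dates

# Confirm that the private key parses correctly
sudo openssl rsa -in /etc/xrdp/key.pem -check -noout
```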

1. Open an RDP client on the computer from which you will connect to the instance (for example, Remote Desktop Connection on a computer running Microsoft Windows). Enter `ec2-user` as the user name and enter the password that you set in the previous step.

**To disable `xrdp` on your Amazon EC2 instance**  
You can disable `xrdp` at any time by running one of the following commands on your Linux instance. These commands do not affect your ability to use the MATE desktop through an X11 server.

```
[ec2-user ~]$ sudo systemctl disable xrdp
```

```
[ec2-user ~]$ sudo systemctl stop xrdp
```

**To enable `xrdp` on your Amazon EC2 instance**  
To re-enable `xrdp` so that you can connect to your AL2 instance running the MATE desktop environment, run one of the following commands on your Linux instance.

```
[ec2-user ~]$ sudo systemctl enable xrdp
```

```
[ec2-user ~]$ sudo systemctl start xrdp
```
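
To see the current state of the `xrdp` service after enabling or disabling it, you can query **systemd**. This is a quick sketch; `is-active` reports whether the service is running now, and `is-enabled` reports whether it starts at boot.

```
[ec2-user ~]$ sudo systemctl is-active xrdp
[ec2-user ~]$ sudo systemctl is-enabled xrdp
```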

# AL2 Tutorials
<a name="al2-tutorials"></a>

 The following tutorials show you how to perform common tasks using Amazon EC2 instances running AL2. For video tutorials, see [AWS Instructional videos and labs](https://www.aws.training/). 

For AL2023 instructions, see [Tutorials](https://docs.aws.amazon.com/linux/al2023/ug/tutorials-al2023.html) in the *Amazon Linux 2023 User Guide*.

**Topics**
+ [

# Tutorial: Install a LAMP server on AL2
](ec2-lamp-amazon-linux-2.md)
+ [

# Tutorial: Configure SSL/TLS on AL2
](SSL-on-amazon-linux-2.md)
+ [

# Tutorial: Host a WordPress blog on AL2
](hosting-wordpress.md)

# Tutorial: Install a LAMP server on AL2
<a name="ec2-lamp-amazon-linux-2"></a>

The following procedures help you install an Apache web server with PHP and [MariaDB](https://mariadb.org/about/) (a community-developed fork of MySQL) support on your AL2 instance (sometimes called a LAMP web server or LAMP stack). You can use this server to host a static website or deploy a dynamic PHP application that reads and writes information to a database.

**Important**  
If you are trying to set up a LAMP web server on a different distribution, such as Ubuntu or Red Hat Enterprise Linux, this tutorial will not work. For AL2023, see [Install a LAMP server on AL2023](https://docs.aws.amazon.com//linux/al2023/ug/ec2-lamp-amazon-linux-2023.html). For Ubuntu, see the following Ubuntu community documentation: [ApacheMySQLPHP](https://help.ubuntu.com/community/ApacheMySQLPHP). For other distributions, see their specific documentation.

**Option: Complete this tutorial using automation**  
To complete this tutorial using AWS Systems Manager Automation instead of the following tasks, run the [AWSDocs-InstallALAMPServer-AL2](https://console.aws.amazon.com/systems-manager/automation/execute/AWSDocs-InstallALAMPServer-AL2) Automation document.

**Topics**
+ [

## Step 1: Prepare the LAMP server
](#prepare-lamp-server)
+ [

## Step 2: Test your LAMP server
](#test-lamp-server)
+ [

## Step 3: Secure the database server
](#secure-mariadb-lamp-server)
+ [

## Step 4: (Optional) Install phpMyAdmin
](#install-phpmyadmin-lamp-server)
+ [

## Troubleshoot
](#lamp-troubleshooting)
+ [

## Related topics
](#lamp-more-info)

## Step 1: Prepare the LAMP server
<a name="prepare-lamp-server"></a>

**Prerequisites**
+ This tutorial assumes that you have already launched a new instance using AL2, with a public DNS name that is reachable from the internet. For more information, see [Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html) in the *Amazon EC2 User Guide*. You must also have configured your security group to allow SSH (port 22), HTTP (port 80), and HTTPS (port 443) connections. For more information about these prerequisites, see [Security group rules](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules.html) in the *Amazon EC2 User Guide*.
+ The following procedure installs the latest PHP version available on AL2, currently `php8.2`. If you plan to use PHP applications other than those described in this tutorial, you should check their compatibility with `php8.2`.<a name="install_apache-2"></a>

**To prepare the LAMP server**

1. [Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html) to your instance.

1. To ensure that all of your software packages are up to date, perform a quick software update on your instance. This process may take a few minutes, but it is important to make sure that you have the latest security updates and bug fixes.

   The `-y` option installs the updates without asking for confirmation. If you would like to examine the updates before installing, you can omit this option.

   ```
   [ec2-user ~]$ sudo yum update -y
   ```

1. Install the `mariadb10.5` Amazon Linux Extras repository to get the latest version of the MariaDB package.

   ```
   [ec2-user ~]$ sudo amazon-linux-extras install mariadb10.5
   ```

   If you receive an error stating `sudo: amazon-linux-extras: command not found`, then your instance was not launched with an Amazon Linux 2 AMI (perhaps you are using the Amazon Linux AMI instead). You can view your version of Amazon Linux using the following command.

   ```
   cat /etc/system-release
   ```

1. Install the `php8.2` Amazon Linux Extras repository to get the latest version of the PHP package for AL2.

   ```
   [ec2-user ~]$ sudo amazon-linux-extras install php8.2
   ```

1. Now that your instance is current, install the Apache web server. The MariaDB and PHP packages were already installed by the **amazon-linux-extras** commands in the previous steps. The yum install command installs a software package and all of its related dependencies at the same time.

   ```
   [ec2-user ~]$ sudo yum install -y httpd
   ```

   You can view the current versions of these packages using the following command:

   ```
   yum info package_name
   ```

1. Start the Apache web server.

   ```
   [ec2-user ~]$ sudo systemctl start httpd
   ```

1.  Use the **systemctl** command to configure the Apache web server to start at each system boot. 

   ```
   [ec2-user ~]$ sudo systemctl enable httpd
   ```

   You can verify that **httpd** is on by running the following command:

   ```
   [ec2-user ~]$ sudo systemctl is-enabled httpd
   ```

1. Add a security rule to allow inbound HTTP (port 80) connections to your instance if you have not already done so. By default, a **launch-wizard-*N*** security group was set up for your instance during initialization. This group contains a single rule to allow SSH connections. 

   1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

   1. Choose **Instances** and select your instance.

   1. On the **Security** tab, view the inbound rules. You should see the following rule:

      ```
      Port range   Protocol     Source
      22           tcp          0.0.0.0/0
      ```
**Warning**  
Using `0.0.0.0/0` allows all IPv4 addresses to access your instance using SSH. This is acceptable for a short time in a test environment, but it's unsafe for production environments. In production, you authorize only a specific IP address or range of addresses to access your instance.

   1. Choose the link for the security group. Using the procedures in [Add rules to a security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/working-with-security-groups.html#adding-security-group-rule), add a new inbound security rule with the following values:
      + **Type**: HTTP
      + **Protocol**: TCP
      + **Port Range**: 80
      + **Source**: Custom

1. Test your web server. In a web browser, type the public DNS address (or the public IP address) of your instance. If there is no content in `/var/www/html`, you should see the Apache test page. You can get the public DNS for your instance using the Amazon EC2 console (check the **Public DNS** column; if this column is hidden, choose **Show/Hide Columns** (the gear-shaped icon) and choose **Public DNS**).

   Verify that the security group for the instance contains a rule to allow HTTP traffic on port 80. For more information, see [Add rules to security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/working-with-security-groups.html#adding-security-group-rule).
**Important**  
If you are not using Amazon Linux, you may also need to configure the firewall on your instance to allow these connections. For more information about how to configure the firewall, see the documentation for your specific distribution.  
![\[The test of the server shows the Apache test page.\]](http://docs.aws.amazon.com/linux/al2/ug/images/apache_test_page_al2_2.4.png)

Apache **httpd** serves files that are kept in a directory called the Apache document root. The Amazon Linux Apache document root is `/var/www/html`, which by default is owned by root.

To allow the `ec2-user` account to manipulate files in this directory, you must modify the ownership and permissions of the directory. There are many ways to accomplish this task. In this tutorial, you add `ec2-user` to the `apache` group, give the `apache` group ownership of the `/var/www` directory, and assign write permissions to the group.<a name="setting-file-permissions-2"></a>

**To set file permissions**

1. Add your user (in this case, `ec2-user`) to the `apache` group.

   ```
   [ec2-user ~]$ sudo usermod -a -G apache ec2-user
   ```

1. Log out and then log back in again to pick up the new group, and then verify your membership.

   1. Log out (use the **exit** command or close the terminal window):

      ```
      [ec2-user ~]$ exit
      ```

   1. To verify your membership in the `apache` group, reconnect to your instance, and then run the following command:

      ```
      [ec2-user ~]$ groups
      ec2-user adm wheel apache systemd-journal
      ```

1. Change the group ownership of `/var/www` and its contents to the `apache` group.

   ```
   [ec2-user ~]$ sudo chown -R ec2-user:apache /var/www
   ```

1. To add group write permissions and to set the group ID on future subdirectories, change the directory permissions of `/var/www` and its subdirectories.

   ```
   [ec2-user ~]$ sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
   ```

1. To add group write permissions, recursively change the file permissions of `/var/www` and its subdirectories:

   ```
   [ec2-user ~]$ find /var/www -type f -exec sudo chmod 0664 {} \;
   ```

Now, `ec2-user` (and any future members of the `apache` group) can add, delete, and edit files in the Apache document root, enabling you to add content, such as a static website or a PHP application.
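
To verify the result, you can check the directory mode and confirm that `ec2-user` can write without `sudo`. This is a quick check; a `drwxrwsr-x` listing reflects mode 2775 (group write plus the setgid bit).

```
# Should show mode drwxrwsr-x with group apache
ls -ld /var/www/html

# Should succeed without sudo for members of the apache group
touch /var/www/html/test-file && rm /var/www/html/test-file
```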

**To secure your web server (Optional)**  
A web server running the HTTP protocol provides no transport security for the data that it sends or receives. When you connect to an HTTP server using a web browser, the URLs that you visit, the content of webpages that you receive, and the contents (including passwords) of any HTML forms that you submit are all visible to eavesdroppers anywhere along the network pathway. The best practice for securing your web server is to install support for HTTPS (HTTP Secure), which protects your data with SSL/TLS encryption.

For information about enabling HTTPS on your server, see [Tutorial: Configure SSL/TLS on AL2](SSL-on-amazon-linux-2.md).

## Step 2: Test your LAMP server
<a name="test-lamp-server"></a>

If your server is installed and running, and your file permissions are set correctly, your `ec2-user` account should be able to create a PHP file in the `/var/www/html` directory that is available from the internet.

**To test your LAMP server**

1. Create a PHP file in the Apache document root.

   ```
   [ec2-user ~]$ echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
   ```

   If you get a "Permission denied" error when trying to run this command, try logging out and logging back in again to pick up the proper group permissions that you configured in [To set file permissions](#setting-file-permissions-2).

1. In a web browser, type the URL of the file that you just created. This URL is the public DNS address of your instance followed by a forward slash and the file name. For example:

   ```
   http://my.public.dns.amazonaws.com/phpinfo.php
   ```

   You should see the PHP information page:  
![\[Test of the LAMP server shows the PHP information page.\]](http://docs.aws.amazon.com/linux/al2/ug/images/phpinfo7.2.10.png)

   If you do not see this page, verify that the `/var/www/html/phpinfo.php` file was created properly in the previous step. You can also verify that all of the required packages were installed with the following command.

   ```
   [ec2-user ~]$ sudo yum list installed httpd mariadb-server php-mysqlnd
   ```

   If any of the required packages are not listed in your output, install them with the **sudo yum install *package*** command. Also verify that the `php8.2` and `mariadb10.5` extras are enabled in the output of the **amazon-linux-extras** command.

1. Delete the `phpinfo.php` file. Although this can be useful information, it should not be broadcast to the internet for security reasons.

   ```
   [ec2-user ~]$ rm /var/www/html/phpinfo.php
   ```

You should now have a fully functional LAMP web server. If you add content to the Apache document root at `/var/www/html`, you should be able to view that content at the public DNS address for your instance. 
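
As a quick end-to-end check, you can place a minimal page in the document root and fetch it through Apache from the instance itself. This is a sketch; the `index.html` content shown here is only a placeholder.

```
[ec2-user ~]$ echo '<html><body><h1>Hello from my LAMP server</h1></body></html>' > /var/www/html/index.html
[ec2-user ~]$ curl -s http://localhost/ | grep Hello
```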

## Step 3: Secure the database server
<a name="secure-mariadb-lamp-server"></a>

The default installation of the MariaDB server has several features that are great for testing and development, but they should be disabled or removed for production servers. The **mysql_secure_installation** command walks you through the process of setting a root password and removing the insecure features from your installation. Even if you are not planning on using the MariaDB server, we recommend performing this procedure.<a name="securing-maria-db"></a>

**To secure the MariaDB server**

1. Start the MariaDB server.

   ```
   [ec2-user ~]$ sudo systemctl start mariadb
   ```

1. Run **mysql_secure_installation**.

   ```
   [ec2-user ~]$ sudo mysql_secure_installation
   ```

   1. When prompted, type a password for the root account.

      1. Type the current root password. By default, the root account does not have a password set. Press Enter.

      1. Type **Y** to set a password, and type a secure password twice. For more information about creating a secure password, see [https://identitysafe.norton.com/password-generator/](https://identitysafe.norton.com/password-generator/). Make sure to store this password in a safe place.

         Setting a root password for MariaDB is only the most basic measure for securing your database. When you build or install a database-driven application, you typically create a database service user for that application and avoid using the root account for anything but database administration. 

   1. Type **Y** to remove the anonymous user accounts.

   1. Type **Y** to disable the remote root login.

   1. Type **Y** to remove the test database.

   1. Type **Y** to reload the privilege tables and save your changes.

1. (Optional) If you do not plan to use the MariaDB server right away, stop it. You can restart it when you need it again.

   ```
   [ec2-user ~]$ sudo systemctl stop mariadb
   ```

1. (Optional) If you want the MariaDB server to start at every boot, type the following command.

   ```
   [ec2-user ~]$ sudo systemctl enable mariadb
   ```
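
After securing the server, you can confirm that the new root password works by connecting and printing the server version. This is a quick sketch; you are prompted for the password that you set during **mysql_secure_installation**.

```
[ec2-user ~]$ mysql -u root -p -e "SELECT VERSION();"
```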

## Step 4: (Optional) Install phpMyAdmin
<a name="install-phpmyadmin-lamp-server"></a>

[phpMyAdmin](https://www.phpmyadmin.net/) is a web-based database management tool that you can use to view and edit the MySQL databases on your EC2 instance. Follow the steps below to install and configure `phpMyAdmin` on your Amazon Linux instance.

**Important**  
We do not recommend using `phpMyAdmin` to access a LAMP server unless you have enabled SSL/TLS in Apache; otherwise, your database administrator password and other data are transmitted insecurely across the internet. For security recommendations from the developers, see [Securing your phpMyAdmin installation](https://docs.phpmyadmin.net/en/latest/setup.html#securing-your-phpmyadmin-installation). For general information about securing a web server on an EC2 instance, see [Tutorial: Configure SSL/TLS on AL2](SSL-on-amazon-linux-2.md).

**To install phpMyAdmin**

1. Install the required dependencies.

   ```
   [ec2-user ~]$ sudo yum install php-mbstring php-xml -y
   ```

1. Restart Apache.

   ```
   [ec2-user ~]$ sudo systemctl restart httpd
   ```

1. Restart `php-fpm`.

   ```
   [ec2-user ~]$ sudo systemctl restart php-fpm
   ```

1. Navigate to the Apache document root at `/var/www/html`.

   ```
   [ec2-user ~]$ cd /var/www/html
   ```

1. Select a source package for the latest phpMyAdmin release from [https://www.phpmyadmin.net/downloads](https://www.phpmyadmin.net/downloads). To download the file directly to your instance, copy the link and paste it into a **wget** command, as in this example:

   ```
   [ec2-user html]$ wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
   ```

1. Create a `phpMyAdmin` folder and extract the package into it with the following command.

   ```
   [ec2-user html]$ mkdir phpMyAdmin && tar -xvzf phpMyAdmin-latest-all-languages.tar.gz -C phpMyAdmin --strip-components 1
   ```

1. Delete the *phpMyAdmin-latest-all-languages.tar.gz* tarball.

   ```
   [ec2-user html]$ rm phpMyAdmin-latest-all-languages.tar.gz
   ```

1.  (Optional) If the MariaDB server is not running, start it now.

   ```
   [ec2-user ~]$ sudo systemctl start mariadb
   ```

1. In a web browser, type the URL of your phpMyAdmin installation. This URL is the public DNS address (or the public IP address) of your instance followed by a forward slash and the name of your installation directory. For example:

   ```
   http://my.public.dns.amazonaws.com/phpMyAdmin
   ```

   You should see the phpMyAdmin login page:  
![\[Result of typing the URL of your phpMyAdmin installation is the phpMyAdmin login screen.\]](http://docs.aws.amazon.com/linux/al2/ug/images/phpmyadmin_login.png)

1. Log in to your phpMyAdmin installation with the `root` user name and the MySQL root password you created earlier.

   Your installation must still be configured before you put it into service. We suggest that you begin by manually creating the configuration file, as follows:

   1. To start with a minimal configuration file, use your favorite text editor to create a new file, and then copy the contents of `config.sample.inc.php` into it.

   1. Save the file as `config.inc.php` in the phpMyAdmin directory that contains `index.php`.

   1. Refer to post-file creation instructions in the [Using the Setup script](https://docs.phpmyadmin.net/en/latest/setup.html#using-the-setup-script) section of the phpMyAdmin installation instructions for any additional setup.

    For information about using phpMyAdmin, see the [phpMyAdmin User Guide](http://docs.phpmyadmin.net/en/latest/user.html).
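
One setting commonly added to `config.inc.php` is `$cfg['blowfish_secret']`, which phpMyAdmin uses to encrypt the cookie-based login. A minimal sketch, assuming you paste the generated value into the file yourself; the command only produces a suitably random string.

```
# Generate a random secret for $cfg['blowfish_secret']
[ec2-user ~]$ openssl rand -base64 32
```

Copy the output into the `$cfg['blowfish_secret'] = '...';` line of your `config.inc.php` file.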

## Troubleshoot
<a name="lamp-troubleshooting"></a>

This section offers suggestions for resolving common problems you may encounter while setting up a new LAMP server. 

### I can't connect to my server using a web browser
<a name="is_apache_on"></a>

Perform the following checks to see if your Apache web server is running and accessible.
+ **Is the web server running?**

  You can verify that **httpd** is running with the following command:

  ```
  [ec2-user ~]$ sudo systemctl is-active httpd
  ```

  If the **httpd** process is not running, repeat the steps described in [To prepare the LAMP server](#install_apache-2).
+ **Is the firewall correctly configured?**

  Verify that the security group for the instance contains a rule to allow HTTP traffic on port 80. For more information, see [Add rules to security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/working-with-security-groups.html#adding-security-group-rule).
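
If the security group rule looks correct, you can rule out a server-side problem by fetching the page locally on the instance, which bypasses the firewall entirely. A quick sketch; an HTTP status line here means **httpd** is serving, while a "connection refused" error points at the server process itself.

```
[ec2-user ~]$ curl -sI http://localhost/ | head -n 1
```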

### I can't connect to my server using HTTPS
<a name="is-https-enabled"></a>

Perform the following checks to see if your Apache web server is configured to support HTTPS.
+ **Is the web server correctly configured?**

  After you install Apache, the server is configured for HTTP traffic. To support HTTPS, enable TLS on the server and install an SSL certificate. For information, see [Tutorial: Configure SSL/TLS on AL2](SSL-on-amazon-linux-2.md).
+ **Is the firewall correctly configured?**

  Verify that the security group for the instance contains a rule to allow HTTPS traffic on port 443. For more information, see [Add rules to a security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/working-with-security-groups.html#adding-security-group-rule).

## Related topics
<a name="lamp-more-info"></a>

For more information about transferring files to your instance or installing a WordPress blog on your web server, see the following documentation:
+ [Transfer files to your Linux instance using WinSCP](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html#Transfer_WinSCP).
+ [Transfer files to Linux instances using an SCP client](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-linux-inst-ssh.html#linux-file-transfer-scp).
+ [Tutorial: Host a WordPress blog on AL2](hosting-wordpress.md)

For more information about the commands and software used in this tutorial, see the following webpages:
+ Apache web server: [http://httpd.apache.org/](http://httpd.apache.org/)
+ MariaDB database server: [https://mariadb.org/](https://mariadb.org/)
+ PHP programming language: [http://php.net/](http://php.net/)
+ The `chmod` command: [https://en.wikipedia.org/wiki/Chmod](https://en.wikipedia.org/wiki/Chmod)
+ The `chown` command: [https://en.wikipedia.org/wiki/Chown](https://en.wikipedia.org/wiki/Chown)

For more information about registering a domain name for your web server, or transferring an existing domain name to this host, see [Creating and Migrating Domains and Subdomains to Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/creating-migrating.html) in the *Amazon Route 53 Developer Guide*.

# Tutorial: Configure SSL/TLS on AL2
<a name="SSL-on-amazon-linux-2"></a>

Secure Sockets Layer/Transport Layer Security (SSL/TLS) creates an encrypted channel between a web server and web client that protects data in transit from eavesdropping. This tutorial explains how to manually add SSL/TLS support on an EC2 instance running AL2 with the Apache web server. This tutorial assumes that you are not using a load balancer. If you are using Elastic Load Balancing, you can instead configure SSL offload on the load balancer, using a certificate from [AWS Certificate Manager](https://aws.amazon.com/certificate-manager/).

For historical reasons, web encryption is often referred to simply as SSL. While web browsers still support SSL, its successor protocol TLS is less vulnerable to attack. AL2 disables server-side support for all versions of SSL by default. [Security standards bodies](https://www.ssl.com/article/deprecating-early-tls/) consider TLS 1.0 to be unsafe. TLS 1.0 and TLS 1.1 were formally [deprecated](https://datatracker.ietf.org/doc/rfc8996/) in March 2021. This tutorial contains guidance based exclusively on enabling TLS 1.2. TLS 1.3 was finalized in 2018 and is available in AL2 as long as the underlying TLS library (OpenSSL in this tutorial) is supported and enabled. [Clients must support TLS 1.2 or later by June 28, 2023](https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-endpoints/). For more information about the updated encryption standards, see [RFC 7568](https://tools.ietf.org/html/rfc7568) and [RFC 8446](https://tools.ietf.org/html/rfc8446).

This tutorial refers to modern web encryption simply as TLS.
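
Once a server is configured, you can check which protocol versions it accepts from any machine with the **openssl s_client** tool. This is a sketch; replace `example.com` with your own host, and expect the second handshake to fail on a server that disables legacy protocols.

```
$ openssl s_client -connect example.com:443 -tls1_2 </dev/null 2>/dev/null | grep Protocol
$ openssl s_client -connect example.com:443 -tls1 </dev/null 2>/dev/null | grep Protocol
```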

**Important**  
These procedures are intended for use with AL2. We also assume that you are starting with a new Amazon EC2 instance. If you are trying to set up an EC2 instance running a different distribution, or an instance running an old version of AL2, some procedures in this tutorial might not work. For Ubuntu, see the following community documentation: [Open SSL on Ubuntu](https://help.ubuntu.com/community/OpenSSL). For Red Hat Enterprise Linux, see the following: [Setting up the Apache HTTP Web Server](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/setting-apache-http-server_deploying-different-types-of-servers). For other distributions, see their specific documentation.

**Note**  
Alternatively, you can use AWS Certificate Manager (ACM) for AWS Nitro enclaves, which is an enclave application that allows you to use public and private SSL/TLS certificates with your web applications and servers running on Amazon EC2 instances with AWS Nitro Enclaves. Nitro Enclaves is an Amazon EC2 capability that enables creation of isolated compute environments to protect and securely process highly sensitive data, such as SSL/TLS certificates and private keys.  
ACM for Nitro Enclaves works with **nginx** running on your Amazon EC2 Linux instance to create private keys, to distribute certificates and private keys, and to manage certificate renewals.  
To use ACM for Nitro Enclaves, you must use an enclave-enabled Linux instance.  
For more information, see [ What is AWS Nitro Enclaves?](https://docs.aws.amazon.com/enclaves/latest/user/nitro-enclave.html) and [AWS Certificate Manager for Nitro Enclaves](https://docs.aws.amazon.com/enclaves/latest/user/nitro-enclave-refapp.html) in the *AWS Nitro Enclaves User Guide*.

**Topics**
+ [

## Prerequisites
](#ssl_prereq)
+ [

## Step 1: Enable TLS on the server
](#ssl_enable)
+ [

## Step 2: Obtain a CA-signed certificate
](#ssl_certificate)
+ [

## Step 3: Test and harden the security configuration
](#ssl_test)
+ [

## Troubleshoot
](#troubleshooting)

## Prerequisites
<a name="ssl_prereq"></a>

Before you begin this tutorial, complete the following steps:
+ Launch an Amazon EBS backed AL2 instance. For more information, see [Launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html) in the *Amazon EC2 User Guide*.
+ Configure your security groups to allow your instance to accept connections on the following TCP ports: 
  + SSH (port 22)
  + HTTP (port 80)
  + HTTPS (port 443)

  For more information, see [Security group rules](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules.html) in the *Amazon EC2 User Guide*.
+ Install the Apache web server. For step-by-step instructions, see [Tutorial: Install a LAMP Web Server on AL2](ec2-lamp-amazon-linux-2.md). Only the httpd package and its dependencies are needed, so you can ignore the instructions involving PHP and MariaDB.
+ To identify and authenticate websites, the TLS public key infrastructure (PKI) relies on the Domain Name System (DNS). To use your EC2 instance to host a public website, you need to register a domain name for your web server or transfer an existing domain name to your Amazon EC2 host. Numerous third-party domain registration and DNS hosting services are available for this, or you can use [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html). 

## Step 1: Enable TLS on the server
<a name="ssl_enable"></a>

**Option: Complete this tutorial using automation**  
To complete this tutorial using AWS Systems Manager Automation instead of the following tasks, run the [automation document](https://console.aws.amazon.com/systems-manager/documents/AWSDocs-Configure-SSL-TLS-AL2/).

This procedure takes you through the process of setting up TLS on AL2 with a self-signed digital certificate. 

**Note**  
A self-signed certificate is acceptable for testing but not production. If you expose your self-signed certificate to the internet, visitors to your site are greeted by security warnings. 

**To enable TLS on a server**

1. [Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html) to your instance and confirm that Apache is running.

   ```
   [ec2-user ~]$ sudo systemctl is-enabled httpd
   ```

   If the returned value is not "enabled," start Apache and set it to start each time the system boots.

   ```
   [ec2-user ~]$ sudo systemctl start httpd && sudo systemctl enable httpd
   ```

1. To ensure that all of your software packages are up to date, perform a quick software update on your instance. This process may take a few minutes, but it is important to make sure that you have the latest security updates and bug fixes.
**Note**  
The `-y` option installs the updates without asking for confirmation. If you would like to examine the updates before installing, you can omit this option.

   ```
   [ec2-user ~]$ sudo yum update -y
   ```

1. Now that your instance is current, add TLS support by installing the Apache module `mod_ssl`.

   ```
   [ec2-user ~]$ sudo yum install -y mod_ssl
   ```

   Your instance now has the following files that you use to configure your secure server and create a certificate for testing:
   +  `/etc/httpd/conf.d/ssl.conf` 

     The configuration file for `mod_ssl`. It contains *directives* telling Apache where to find encryption keys and certificates, the TLS protocol versions to allow, and the encryption ciphers to accept. 
   + `/etc/pki/tls/certs/make-dummy-cert`

     A script to generate a self-signed X.509 certificate and private key for your server host. This certificate is useful for testing that Apache is properly set up to use TLS. Because it offers no proof of identity, it should not be used in production. If used in production, it triggers warnings in web browsers.

1. Run the script to generate a self-signed dummy certificate and key for testing.

   ```
   [ec2-user ~]$ cd /etc/pki/tls/certs
   sudo ./make-dummy-cert localhost.crt
   ```

   This generates a new file `localhost.crt` in the `/etc/pki/tls/certs/` directory. The specified file name matches the default that is assigned in the **SSLCertificateFile** directive in `/etc/httpd/conf.d/ssl.conf`. 

   This file contains both a self-signed certificate and the certificate's private key. Apache requires the certificate and key to be in PEM format, which consists of Base64-encoded ASCII characters framed by "BEGIN" and "END" lines, as in the following abbreviated example.

   ```
   -----BEGIN PRIVATE KEY-----
   MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQD2KKx/8Zk94m1q
   3gQMZF9ZN66Ls19+3tHAgQ5Fpo9KJDhzLjOOCI8u1PTcGmAah5kEitCEc0wzmNeo
   BCl0wYR6G0rGaKtK9Dn7CuIjvubtUysVyQoMVPQ97ldeakHWeRMiEJFXg6kZZ0vr
   GvwnKoMh3DlK44D9dX7IDua2PlYx5+eroA+1Lqf32ZSaAO0bBIMIYTHigwbHMZoT
   ...
   56tE7THvH7vOEf4/iUOsIrEzaMaJ0mqkmY1A70qQGQKBgBF3H1qNRNHuyMcPODFs
   27hDzPDinrquSEvoZIggkDMlh2irTiipJ/GhkvTpoQlv0fK/VXw8vSgeaBuhwJvS
   LXU9HvYq0U6O4FgD3nAyB9hI0BE13r1HjUvbjT7moH+RhnNz6eqqdscCS09VtRAo
   4QQvAqOa8UheYeoXLdWcHaLP
   -----END PRIVATE KEY-----                    
   
   -----BEGIN CERTIFICATE-----
   MIIEazCCA1OgAwIBAgICWxQwDQYJKoZIhvcNAQELBQAwgbExCzAJBgNVBAYTAi0t
   MRIwEAYDVQQIDAlTb21lU3RhdGUxETAPBgNVBAcMCFNvbWVDaXR5MRkwFwYDVQQK
   DBBTb21lT3JnYW5pemF0aW9uMR8wHQYDVQQLDBZTb21lT3JnYW5pemF0aW9uYWxV
   bml0MRkwFwYDVQQDDBBpcC0xNzItMzEtMjAtMjM2MSQwIgYJKoZIhvcNAQkBFhVy
   ...
   z5rRUE/XzxRLBZOoWZpNWTXJkQ3uFYH6s/sBwtHpKKZMzOvDedREjNKAvk4ws6F0
   CuIjvubtUysVyQoMVPQ97ldeakHWeRMiEJFXg6kZZ0vrGvwnKoMh3DlK44D9dlU3
   WanXWehT6FiSZvB4sTEXXJN2jdw8g+sHGnZ8zCOsclknYhHrCVD2vnBlZJKSZvak
   3ZazhBxtQSukFMOnWPP2a0DMMFGYUHOd0BQE8sBJxg==
   -----END CERTIFICATE-----
   ```

   The file names and extensions are a convenience and have no effect on function. For example, you can call a certificate `cert.crt`, `cert.pem`, or any other file name, so long as the related directive in the `ssl.conf` file uses the same name.
**Note**  
When you replace the default TLS files with your own customized files, be sure that they are in PEM format. 

1. Open the `/etc/httpd/conf.d/ssl.conf` file using your favorite text editor (such as **vim** or **nano**) as root user and comment out the following line, because the self-signed dummy certificate also contains the key. If you do not comment out this line before you complete the next step, the Apache service fails to start.

   ```
   SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
   ```
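
   If you prefer a scripted edit, you can comment out the directive with **sed**. The following is a sketch that works on a throwaway sample file; point **sed** at `/etc/httpd/conf.d/ssl.conf` (with `sudo`) to apply the change for real.

   ```
   # Demo file standing in for /etc/httpd/conf.d/ssl.conf (for illustration only)
   printf 'SSLCertificateKeyFile /etc/pki/tls/private/localhost.key\n' > /tmp/ssl.conf.demo

   # Prefix the directive with '#' to comment it out
   sed -i 's|^SSLCertificateKeyFile|#SSLCertificateKeyFile|' /tmp/ssl.conf.demo
   cat /tmp/ssl.conf.demo
   ```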

1. Restart Apache.

   ```
   [ec2-user ~]$ sudo systemctl restart httpd
   ```
**Note**  
Make sure that TCP port 443 is accessible on your EC2 instance, as previously described.

1. Your Apache web server should now support HTTPS (secure HTTP) over port 443. Test it by entering the IP address or fully qualified domain name of your EC2 instance into a browser URL bar with the prefix **https://**.

   Because you are connecting to a site with a self-signed, untrusted host certificate, your browser may display a series of security warnings. Override the warnings and proceed to the site. 

   If the default Apache test page opens, it means that you have successfully configured TLS on your server. All data passing between the browser and server is now encrypted.
**Note**  
To prevent site visitors from encountering warning screens, you must obtain a trusted, CA-signed certificate that not only encrypts, but also publicly authenticates you as the owner of the site. 

## Step 2: Obtain a CA-signed certificate
<a name="ssl_certificate"></a>

You can use the following process to obtain a CA-signed certificate:
+ Generate a certificate signing request (CSR) from a private key
+ Submit the CSR to a certificate authority (CA)
+ Obtain a signed host certificate
+ Configure Apache to use the certificate

A self-signed TLS X.509 host certificate is cryptographically identical to a CA-signed certificate. The difference is social, not mathematical. A CA promises, at a minimum, to validate a domain's ownership before issuing a certificate to an applicant. Each web browser contains a list of CAs trusted by the browser vendor to do this. An X.509 certificate consists primarily of a public key that corresponds to your private server key, and a signature by the CA that is cryptographically tied to the public key. When a browser connects to a web server over HTTPS, the server presents a certificate for the browser to check against its list of trusted CAs. If the signer is on the list, or accessible through a *chain of trust* consisting of other trusted signers, the browser negotiates a fast encrypted data channel with the server and loads the page. 

Certificates generally cost money because of the labor involved in validating the requests, so it pays to shop around. A few CAs offer basic-level certificates free of charge. The most notable of these CAs is the [Let's Encrypt](https://letsencrypt.org/) project, which also supports the automation of the certificate creation and renewal process. For more information about using a Let's Encrypt certificate, see [Get Certbot](https://eff-certbot.readthedocs.io/en/stable/install.html).

If you plan to offer commercial-grade services, [AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) is a good option.

Underlying the host certificate is the key. As of 2019, [government](http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r4.pdf) and [industry](https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.6.5.pdf) groups recommend using a minimum key (modulus) size of 2048 bits for RSA keys intended to protect documents through 2030. The default modulus size generated by OpenSSL in AL2 is 2048 bits, which is suitable for use in a CA-signed certificate. In the following procedure, an optional step is provided for those who want a customized key, for example, one with a larger modulus or one that uses a different encryption algorithm.

**Important**  
These instructions for acquiring a CA-signed host certificate do not work unless you own a registered and hosted DNS domain.

**To obtain a CA-signed certificate**

1. [Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html) to your instance and navigate to `/etc/pki/tls/private/`. This is the directory where you store the server's private key for TLS. If you prefer to use an existing host key to generate the CSR, skip to Step 3.

1. (Optional) Generate a new private key. Here are some examples of key configurations. Any of the resulting keys works with your web server, but they vary in the degree and type of security that they implement.
   + **Example 1:** Create a default RSA host key. The resulting file, **custom.key**, is a 2048-bit RSA private key.

     ```
     [ec2-user ~]$ sudo openssl genrsa -out custom.key
     ```
   + **Example 2:** Create a stronger RSA key with a bigger modulus. The resulting file, **custom.key**, is a 4096-bit RSA private key.

     ```
     [ec2-user ~]$ sudo openssl genrsa -out custom.key 4096
     ```
   + **Example 3:** Create a 4096-bit encrypted RSA key with password protection. The resulting file, **custom.key**, is a 4096-bit RSA private key encrypted with the AES-128 cipher.
**Important**  
Encrypting the key provides greater security, but because an encrypted key requires a password, services that depend on it cannot be auto-started. Each time you use this key, you must supply the password (in the following example, "abcde12345") over an SSH connection.

     ```
     [ec2-user ~]$ sudo openssl genrsa -aes128 -passout pass:abcde12345 -out custom.key 4096
     ```
   + **Example 4:** Create a key using a non-RSA cipher. RSA cryptography can be relatively slow because of the size of its public keys, which are based on the product of two large prime numbers. However, it is possible to create keys for TLS that use non-RSA ciphers. Keys based on the mathematics of elliptic curves are smaller and computationally faster when delivering an equivalent level of security.

     ```
     [ec2-user ~]$ sudo openssl ecparam -name prime256v1 -out custom.key -genkey
     ```

     The result is a 256-bit elliptic curve private key using prime256v1, a "named curve" that OpenSSL supports. Its cryptographic strength is slightly greater than a 2048-bit RSA key, [according to NIST](http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-57pt1r4.pdf).
**Note**  
Not all CAs provide the same level of support for elliptic-curve-based keys as for RSA keys.
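
   You can inspect any generated key to confirm its type and parameters. The following sketch creates a throwaway prime256v1 key in `/tmp` and prints its properties; substitute your real `custom.key` to check an existing key.

   ```
   # Generate a demo EC key (stand-in for custom.key) and inspect it
   openssl ecparam -name prime256v1 -genkey -noout -out /tmp/demo-ec.key
   openssl ec -in /tmp/demo-ec.key -noout -text
   ```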

   Make sure that the new private key has highly restrictive ownership and permissions (owner=root, group=root, read/write for owner only). The commands would be as shown in the following example.

   ```
   [ec2-user ~]$ sudo chown root:root custom.key
   [ec2-user ~]$ sudo chmod 600 custom.key
   [ec2-user ~]$ ls -al custom.key
   ```

   The preceding commands yield the following result.

   ```
   -rw------- root root custom.key
   ```

    After you have created and configured a satisfactory key, you can create a CSR. 

1. Create a CSR using your preferred key. The following example uses **custom.key**.

   ```
   [ec2-user ~]$ sudo openssl req -new -key custom.key -out csr.pem
   ```

   OpenSSL opens a dialog and prompts you for the information shown in the following table. All of the fields except **Common Name** are optional for a basic, domain-validated host certificate.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/linux/al2/ug/SSL-on-amazon-linux-2.html)

   Finally, OpenSSL prompts you for an optional challenge password. This password applies only to the CSR and to transactions between you and your CA, so follow the CA's recommendations about this and the other optional field, optional company name. The CSR challenge password has no effect on server operation.

   The resulting file **csr.pem** contains your public key, a digital signature of your public key, and the metadata that you entered.
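
   Before submitting a CSR, you can verify its contents. The following sketch builds a throwaway key and CSR non-interactively (the `-subj` value is a placeholder) and then prints the subject; run the last command against your real `csr.pem`.

   ```
   # Create a demo key and CSR without prompts (placeholder common name)
   openssl genrsa -out /tmp/demo.key 2048
   openssl req -new -key /tmp/demo.key -subj '/CN=www.example.com' -out /tmp/demo-csr.pem

   # Print the subject recorded in the CSR
   openssl req -in /tmp/demo-csr.pem -noout -subject
   ```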

1. Submit the CSR to a CA. This usually consists of opening your CSR file in a text editor and copying the contents into a web form. At this time, you may be asked to supply one or more subject alternate names (SANs) to be placed on the certificate. If **www.example.com** is the common name, then **example.com** would be a good SAN, and vice versa. A visitor to your site entering either of these names would see an error-free connection. If your CA web form allows it, include the common name in the list of SANs. Some CAs include it automatically.

   After your request has been approved, you receive a new host certificate signed by the CA. You might also be instructed to download an *intermediate certificate* file that contains additional certificates needed to complete the CA's chain of trust. 
**Note**  
Your CA might send you files in multiple formats intended for various purposes. For this tutorial, you should only use a certificate file in PEM format, which is usually (but not always) marked with a `.pem` or `.crt` file extension. If you are uncertain which file to use, open the files with a text editor and find the one containing one or more blocks beginning with the following line.  

   ```
   -----BEGIN CERTIFICATE-----
   ```
The file should also end with the following line.  

   ```
   -----END CERTIFICATE-----
   ```
You can also test the file at the command line as shown in the following.  

   ```
   [ec2-user certs]$ openssl x509 -in certificate.crt -text
   ```
Verify that these lines appear in the file. Do not use files ending with `.p7b`, `.p7c`, or similar file extensions.
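
   If your CA delivered a PKCS#7 bundle (`.p7b` or `.p7c`), you can usually extract PEM certificates from it with OpenSSL rather than requesting a new file. The following sketch builds a demo bundle from a self-signed certificate first; with a real bundle, only the final command is needed.

   ```
   # Build a demo self-signed certificate and wrap it in a PKCS#7 bundle
   openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-p7.key \
       -subj '/CN=demo' -days 1 -out /tmp/demo-p7.crt
   openssl crl2pkcs7 -nocrl -certfile /tmp/demo-p7.crt -out /tmp/demo.p7b

   # Extract the PEM certificate(s) from the bundle
   openssl pkcs7 -print_certs -in /tmp/demo.p7b -out /tmp/demo-extracted.crt
   ```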

1. Place the new CA-signed certificate and any intermediate certificates in the `/etc/pki/tls/certs` directory.
**Note**  
There are several ways to upload your new certificate to your EC2 instance, but the most straightforward and informative way is to open a text editor (for example, vi, nano, or notepad) on both your local computer and your instance, and then copy and paste the file contents between them. You need root [sudo] permissions when performing these operations on the EC2 instance. This way, you can see immediately if there are any permission or path problems. Be careful, however, not to add any additional lines while copying the contents, or to change them in any way. 

   From inside the `/etc/pki/tls/certs` directory, check that the file ownership, group, and permission settings match the highly restrictive AL2 defaults (owner=root, group=root, read/write for owner only). The following example shows the commands to use. 

   ```
   [ec2-user certs]$ sudo chown root:root custom.crt
   [ec2-user certs]$ sudo chmod 600 custom.crt
   [ec2-user certs]$ ls -al custom.crt
   ```

   These commands should yield the following result. 

   ```
   -rw------- root root custom.crt
   ```

   The permissions for the intermediate certificate file are less stringent (owner=root, group=root, owner can write, group can read, world can read). The following example shows the commands to use. 

   ```
   [ec2-user certs]$ sudo chown root:root intermediate.crt
   [ec2-user certs]$ sudo chmod 644 intermediate.crt
   [ec2-user certs]$ ls -al intermediate.crt
   ```

   These commands should yield the following result.

   ```
   -rw-r--r-- root root intermediate.crt
   ```

1. Place the private key that you used to create the CSR in the `/etc/pki/tls/private/` directory. 
**Note**  
There are several ways to upload your custom key to your EC2 instance, but the most straightforward and informative way is to open a text editor (for example, vi, nano, or notepad) on both your local computer and your instance, and then copy and paste the file contents between them. You need root [sudo] permissions when performing these operations on the EC2 instance. This way, you can see immediately if there are any permission or path problems. Be careful, however, not to add any additional lines while copying the contents, or to change them in any way.

   From inside the `/etc/pki/tls/private` directory, use the following commands to verify that the file ownership, group, and permission settings match the highly restrictive AL2 defaults (owner=root, group=root, read/write for owner only).

   ```
   [ec2-user private]$ sudo chown root:root custom.key
   [ec2-user private]$ sudo chmod 600 custom.key
   [ec2-user private]$ ls -al custom.key
   ```

   These commands should yield the following result.

   ```
   -rw------- root root custom.key
   ```
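
   A common final check is to confirm that the certificate and the private key actually belong together by comparing their modulus digests; the two digests must be identical. The sketch below generates a throwaway matching pair; substitute `custom.crt` and `custom.key` to check your real files.

   ```
   # Generate a matching demo certificate and key
   openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/pair.key \
       -subj '/CN=demo' -days 1 -out /tmp/pair.crt

   # The two digests must match
   openssl x509 -noout -modulus -in /tmp/pair.crt | openssl md5
   openssl rsa  -noout -modulus -in /tmp/pair.key | openssl md5
   ```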

1. Edit `/etc/httpd/conf.d/ssl.conf` to reflect your new certificate and key files.

   1. Provide the path and file name of the CA-signed host certificate in Apache's `SSLCertificateFile` directive:

      ```
      SSLCertificateFile /etc/pki/tls/certs/custom.crt
      ```

   1. If you received an intermediate certificate file (`intermediate.crt` in this example), provide its path and file name using Apache's `SSLCACertificateFile` directive:

      ```
      SSLCACertificateFile /etc/pki/tls/certs/intermediate.crt
      ```
**Note**  
Some CAs combine the host certificate and the intermediate certificates in a single file, making the `SSLCACertificateFile` directive unnecessary. Consult the instructions provided by your CA.

   1. Provide the path and file name of the private key (`custom.key` in this example) in Apache's `SSLCertificateKeyFile` directive:

      ```
      SSLCertificateKeyFile /etc/pki/tls/private/custom.key
      ```

1. Save `/etc/httpd/conf.d/ssl.conf` and restart Apache. Optionally, check the file for syntax errors first with `sudo apachectl configtest`.

   ```
   [ec2-user ~]$ sudo systemctl restart httpd
   ```

1. Test your server by entering your domain name into a browser URL bar with the prefix `https://`. Your browser should load the test page over HTTPS without generating errors.

## Step 3: Test and harden the security configuration
<a name="ssl_test"></a>

After your TLS is operational and exposed to the public, you should test how secure it really is. This is easy to do using online services such as [Qualys SSL Labs](https://www.ssllabs.com/ssltest/analyze.html), which performs a free and thorough analysis of your security setup. Based on the results, you may decide to harden the default security configuration by controlling which protocols you accept, which ciphers you prefer, and which you exclude. For more information, see [how Qualys formulates its scores](https://github.com/ssllabs/research/wiki/SSL-Server-Rating-Guide).

**Important**  
Real-world testing is crucial to the security of your server. Small configuration errors may lead to serious security breaches and loss of data. Because recommended security practices change constantly in response to research and emerging threats, periodic security audits are essential to good server administration. 

On the [Qualys SSL Labs](https://www.ssllabs.com/ssltest/analyze.html) site, enter the fully qualified domain name of your server, in the form **www.example.com**. After about two minutes, you receive a grade (from A to F) for your site and a detailed breakdown of the findings. The following table summarizes the report for a domain with settings identical to the default Apache configuration on AL2, and with a default Certbot certificate. 


|  |  | 
| --- |--- |
| Overall rating | B | 
| Certificate | 100% | 
| Protocol support | 95% | 
| Key exchange | 70% | 
| Cipher strength | 90% | 

Though the overview shows that the configuration is mostly sound, the detailed report flags several potential problems, listed here in order of severity:

✗ **The RC4 cipher is supported for use by certain older browsers.** A cipher is the mathematical core of an encryption algorithm. RC4, a fast cipher used to encrypt TLS data-streams, is known to have several [serious weaknesses](http://www.imperva.com/docs/hii_attacking_ssl_when_using_rc4.pdf). Unless you have very good reasons to support legacy browsers, you should disable this.

✗ **Old TLS versions are supported.** The configuration supports TLS 1.0 (already deprecated) and TLS 1.1 (on a path to deprecation). Only TLS 1.2 has been recommended since 2018.

✗ **Forward secrecy is not fully supported.** [Forward secrecy](https://en.wikipedia.org/wiki/Forward_secrecy) is a feature of algorithms that encrypt using temporary (ephemeral) session keys derived from the private key. This means in practice that attackers cannot decrypt HTTPS data even if they possess a web server's long-term private key.

**To correct and future-proof the TLS configuration**

1. Open the configuration file `/etc/httpd/conf.d/ssl.conf` in a text editor and comment out the following line by entering "#" at the beginning of the line.

   ```
   #SSLProtocol all -SSLv3
   ```

1. Add the following directive:

   ```
   #SSLProtocol all -SSLv3
   SSLProtocol -SSLv2 -SSLv3 -TLSv1 -TLSv1.1 +TLSv1.2
   ```

   This directive explicitly disables SSL versions 2 and 3, as well as TLS versions 1.0 and 1.1. The server now refuses to accept encrypted connections with clients using anything except TLS 1.2. The verbose wording in the directive conveys more clearly, to a human reader, what the server is configured to do.
**Note**  
Disabling TLS versions 1.0 and 1.1 in this manner blocks a small percentage of outdated web browsers from accessing your site.

**To modify the list of allowed ciphers**

1. In the configuration file `/etc/httpd/conf.d/ssl.conf`, find the section with the **SSLCipherSuite** directive and comment out the existing line by entering "#" at the beginning of the line.

   ```
   #SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
   ```

1. Specify explicit cipher suites and a cipher order that prioritizes forward secrecy and avoids insecure ciphers. The `SSLCipherSuite` directive used here is based on output from the [Mozilla SSL Configuration Generator](https://mozilla.github.io/server-side-tls/ssl-config-generator/), which tailors a TLS configuration to the specific software running on your server. First determine your Apache and OpenSSL versions by using the output from the following commands.

   ```
   [ec2-user ~]$ yum list installed | grep httpd
   
   [ec2-user ~]$ yum list installed | grep openssl
   ```

   For example, if the returned information is Apache 2.4.34 and OpenSSL 1.0.2, enter those versions into the generator. If you choose the "modern" compatibility model, this creates an `SSLCipherSuite` directive that aggressively enforces security but still works for most browsers. If your software doesn't support the modern configuration, you can update your software or choose the "intermediate" configuration instead.

   ```
   SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:
   ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:
   ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
   ```

   The selected ciphers have *ECDHE* in their names, an abbreviation for *Elliptic Curve Diffie-Hellman Ephemeral*. The term *ephemeral* indicates forward secrecy. As a by-product, these ciphers do not support RC4.

   We recommend that you use an explicit list of ciphers instead of relying on defaults or terse directives whose content isn't visible.
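
   You can preview exactly which cipher suites a given string enables on your OpenSSL build before committing it to `ssl.conf`. The cipher names below are a short illustrative subset, not a recommended production list.

   ```
   # Expand a cipher string into the individual suites it enables
   openssl ciphers -v 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'
   ```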

   Copy the generated directive into `/etc/httpd/conf.d/ssl.conf`.
**Note**  
Though shown here on several lines for readability, the directive must be on a single line when copied to `/etc/httpd/conf.d/ssl.conf`, with only a colon (no spaces) between cipher names.

1. Finally, uncomment the following line by removing the "#" at the beginning of the line.

   ```
   #SSLHonorCipherOrder on
   ```

   This directive forces the server to prefer high-ranking ciphers, including (in this case) those that support forward secrecy. With this directive turned on, the server tries to establish a strong secure connection before falling back to allowed ciphers with lesser security.

After completing both of these procedures, save the changes to `/etc/httpd/conf.d/ssl.conf` and restart Apache.

If you test the domain again on [Qualys SSL Labs](https://www.ssllabs.com/ssltest/analyze.html), you should see that the RC4 vulnerability and other warnings are gone and the summary looks something like the following.


|  |  | 
| --- |--- |
| Overall rating | A | 
| Certificate | 100% | 
| Protocol support | 100% | 
| Key exchange | 90% | 
| Cipher strength | 90% | 

Each update to OpenSSL introduces new ciphers and removes support for old ones. Keep your EC2 AL2 instance up-to-date, watch for security announcements from [OpenSSL](https://www.openssl.org/), and be alert to reports of new security exploits in the technical press.

## Troubleshoot
<a name="troubleshooting"></a>
+ **My Apache web server doesn't start unless I enter a password**

  This is expected behavior if you installed an encrypted, password-protected, private server key.

  You can remove the encryption and password requirement from the key. Assuming that you have a private encrypted RSA key called `custom.key` in the default directory, and that the password on it is **abcde12345**, run the following commands on your EC2 instance to generate an unencrypted version of the key.

  ```
  [ec2-user ~]$ cd /etc/pki/tls/private/
  [ec2-user private]$ sudo cp custom.key custom.key.bak
  [ec2-user private]$ sudo openssl rsa -in custom.key -passin pass:abcde12345 -out custom.key.nocrypt 
  [ec2-user private]$ sudo mv custom.key.nocrypt custom.key
  [ec2-user private]$ sudo chown root:root custom.key
  [ec2-user private]$ sudo chmod 600 custom.key
  [ec2-user private]$ sudo systemctl restart httpd
  ```

  Apache should now start without prompting you for a password.
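
  You can confirm that a key is no longer password-protected: an unencrypted PEM key contains no `ENCRYPTED` marker and loads without a passphrase prompt. The following sketch demonstrates the full cycle on a throwaway key (the password is a placeholder); run the last command against your real `custom.key`.

  ```
  # Create an encrypted demo key, then strip the passphrase (placeholder password)
  openssl genrsa -aes128 -passout pass:abcde12345 -out /tmp/enc.key 2048
  openssl rsa -in /tmp/enc.key -passin pass:abcde12345 -out /tmp/plain.key

  # Loads without prompting and reports the key as consistent
  openssl rsa -in /tmp/plain.key -noout -check
  ```
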
+ **I get errors when I run sudo yum install -y mod_ssl.**

  When you are installing the required packages for SSL, you may see errors similar to the following.

  ```
  Error: httpd24-tools conflicts with httpd-tools-2.2.34-1.16.amzn1.x86_64
  Error: httpd24 conflicts with httpd-2.2.34-1.16.amzn1.x86_64
  ```

  This typically means that your EC2 instance is not running AL2. This tutorial only supports instances freshly created from an official AL2 AMI.

# Tutorial: Host a WordPress blog on AL2
<a name="hosting-wordpress"></a>

The following procedures will help you install, configure, and secure a WordPress blog on your AL2 instance. This tutorial is a good introduction to using Amazon EC2 in that you have full control over a web server that hosts your WordPress blog, which is not typical with a traditional hosting service.

You are responsible for updating the software packages and maintaining security patches for your server. For a more automated WordPress installation that does not require direct interaction with the web server configuration, the CloudFormation service provides a WordPress template that can also get you started quickly. For more information, see [Get started](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/GettingStarted.Walkthrough.html) in the *AWS CloudFormation User Guide*. If you need a high-availability solution with a decoupled database, see [Deploying a high-availability WordPress website](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-hawordpress-tutorial.html) in the *AWS Elastic Beanstalk Developer Guide*.

**Important**  
These procedures are intended for use with AL2. For more information about other distributions, see their specific documentation. Many steps in this tutorial do not work on Ubuntu instances. For help installing WordPress on an Ubuntu instance, see [WordPress](https://help.ubuntu.com/community/WordPress) in the Ubuntu documentation. You can also use [CodeDeploy](https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-wordpress-launch-instance.html) to accomplish this task on Amazon Linux, macOS, or Unix systems.

**Topics**
+ [

## Prerequisites
](#hosting-wordpress-prereqs)
+ [

## Install WordPress
](#install-wordpress)
+ [

## Next steps
](#wordpress-next-steps)
+ [

## Help! My public DNS name changed and now my blog is broken
](#wordpress-troubleshooting)

## Prerequisites
<a name="hosting-wordpress-prereqs"></a>

This tutorial assumes that you have launched an AL2 instance with a functional web server with PHP and database (either MySQL or MariaDB) support by following all of the steps in [Tutorial: Install a LAMP server on AL2](ec2-lamp-amazon-linux-2.md). This tutorial also has steps for configuring a security group to allow `HTTP` and `HTTPS` traffic, as well as several steps to ensure that file permissions are set properly for your web server. For information about adding rules to your security group, see [Add rules to a security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/working-with-security-groups.html#adding-security-group-rule).

We strongly recommend that you associate an Elastic IP address (EIP) to the instance you are using to host a WordPress blog. This prevents the public DNS address for your instance from changing and breaking your installation. If you own a domain name and you want to use it for your blog, you can update the DNS record for the domain name to point to your EIP address (for help with this, contact your domain name registrar). You can have one EIP address associated with a running instance at no charge. For more information, see [Elastic IP addresses](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) in the *Amazon EC2 User Guide*.

If you don't already have a domain name for your blog, you can register a domain name with Route 53 and associate your instance's EIP address with your domain name. For more information, see [Registering domain names using Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/registrar.html) in the *Amazon Route 53 Developer Guide*.

## Install WordPress
<a name="install-wordpress"></a>

**Option: Complete this tutorial using automation**  
To complete this tutorial using AWS Systems Manager Automation instead of the following tasks, run the [automation document](https://console.aws.amazon.com/systems-manager/documents/AWSDocs-HostingAWordPressBlog/).

Connect to your instance, and download the WordPress installation package.

**To download and unzip the WordPress installation package**

1. Download the latest WordPress installation package with the **wget** command. The following command should always download the latest release.

   ```
   [ec2-user ~]$ wget https://wordpress.org/latest.tar.gz
   ```

1. Unzip and unarchive the installation package. The installation folder is unzipped to a folder called `wordpress`.

   ```
   [ec2-user ~]$ tar -xzf latest.tar.gz
   ```<a name="create_user_and_database"></a>

**To create a database user and database for your WordPress installation**

Your WordPress installation needs to store information, such as blog posts and user comments, in a database. This procedure helps you create your blog's database and a user that is authorized to read and save information to it. 

1. Start the database server.

   ```
   [ec2-user ~]$ sudo systemctl start mariadb
   ```

1. Log in to the database server as the `root` user. Enter your database `root` password when prompted; this may be different than your `root` system password, or it might even be empty if you have not secured your database server.

   If you have not secured your database server yet, it is important that you do so. For more information, see [To secure the MariaDB server](ec2-lamp-amazon-linux-2.md#securing-maria-db) (AL2).

   ```
   [ec2-user ~]$ mysql -u root -p
   ```

1. <a name="create_database_user"></a>Create a user and password for your MySQL database. Your WordPress installation uses these values to communicate with your MySQL database. 

   Make sure that you create a strong password for your user. Do not use the single quote character (') in your password, because this will break the following command. Do not reuse an existing password, and make sure to store this password in a safe place.

   Enter the following command, substituting a unique user name and password.

   ```
   CREATE USER 'wordpress-user'@'localhost' IDENTIFIED BY 'your_strong_password';
   ```

1. <a name="create_database"></a>Create your database. Give your database a descriptive, meaningful name, such as `wordpress-db`.
**Note**  
The punctuation marks surrounding the database name in the following command are called backticks. The backtick (`` ` ``) key is usually located above the `Tab` key on a standard keyboard. Backticks are not always required, but they allow you to use otherwise illegal characters, such as hyphens, in database names.

   ```
   CREATE DATABASE `wordpress-db`;
   ```

1. Grant full privileges for your database to the WordPress user that you created earlier.

   ```
   GRANT ALL PRIVILEGES ON `wordpress-db`.* TO 'wordpress-user'@'localhost';
   ```

1. Flush the database privileges to pick up all of your changes.

   ```
   FLUSH PRIVILEGES;
   ```

1. Exit the `mysql` client.

   ```
   exit
   ```

**To create and edit the wp-config.php file**

The WordPress installation folder contains a sample configuration file called `wp-config-sample.php`. In this procedure, you copy this file and edit it to fit your specific configuration.

1. Copy the `wp-config-sample.php` file to a file called `wp-config.php`. This creates a new configuration file and keeps the original sample file intact as a backup.

   ```
   [ec2-user ~]$ cp wordpress/wp-config-sample.php wordpress/wp-config.php
   ```

1. Edit the `wp-config.php` file with your favorite text editor (such as **nano** or **vim**) and enter values for your installation. If you do not have a favorite text editor, `nano` is suitable for beginners.

   ```
   [ec2-user ~]$ nano wordpress/wp-config.php
   ```

   1. Find the line that defines `DB_NAME` and change `database_name_here` to the database name that you created in [Step 4](#create_database) of [To create a database user and database for your WordPress installation](#create_user_and_database).

      ```
      define('DB_NAME', 'wordpress-db');
      ```

   1. Find the line that defines `DB_USER` and change `username_here` to the database user that you created in [Step 3](#create_database_user) of [To create a database user and database for your WordPress installation](#create_user_and_database).

      ```
      define('DB_USER', 'wordpress-user');
      ```

   1. Find the line that defines `DB_PASSWORD` and change `password_here` to the strong password that you created in [Step 3](#create_database_user) of [To create a database user and database for your WordPress installation](#create_user_and_database).

      ```
      define('DB_PASSWORD', 'your_strong_password');
      ```
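
      As an alternative to hand-editing, the three database settings can be substituted with **sed**. The following is a sketch on a sample fragment with placeholder values; point **sed** at `wordpress/wp-config.php` and use your own names and password for the real file.

      ```
      # Demo fragment standing in for wp-config.php (placeholders as shipped by WordPress)
      printf "define('DB_NAME', 'database_name_here');\ndefine('DB_USER', 'username_here');\ndefine('DB_PASSWORD', 'password_here');\n" > /tmp/wp-config-demo.php

      # Replace the three placeholders (example values)
      sed -i "s/database_name_here/wordpress-db/; s/username_here/wordpress-user/; s/password_here/your_strong_password/" /tmp/wp-config-demo.php
      cat /tmp/wp-config-demo.php
      ```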

   1. Find the section called `Authentication Unique Keys and Salts`. These `KEY` and `SALT` values provide a layer of encryption to the browser cookies that WordPress users store on their local machines. Basically, adding long, random values here makes your site more secure. Visit [https://api.wordpress.org/secret-key/1.1/salt/](https://api.wordpress.org/secret-key/1.1/salt/) to randomly generate a set of key values that you can copy and paste into your `wp-config.php` file. To paste text into a PuTTY terminal, place the cursor where you want to paste the text and right-click your mouse inside the PuTTY terminal.

      For more information about security keys, go to [https://wordpress.org/support/article/editing-wp-config-php/#security-keys](https://wordpress.org/support/article/editing-wp-config-php/#security-keys).
**Note**  
The values below are for example purposes only; do not use these values for your installation.

      ```
      define('AUTH_KEY',         ' #U$$+[RXN8:b^-L 0(WU_+ c+WFkI~c]o]-bHw+)/Aj[wTwSiZ<Qb[mghEXcRh-');
      define('SECURE_AUTH_KEY',  'Zsz._P=l/|y.Lq)XjlkwS1y5NJ76E6EJ.AV0pCKZZB,*~*r ?6OP$eJT@;+(ndLg');
      define('LOGGED_IN_KEY',    'ju}qwre3V*+8f_zOWf?{LlGsQ]Ye@2Jh^,8x>)Y |;(^[Iw]Pi+LG#A4R?7N`YB3');
      define('NONCE_KEY',        'P(g62HeZxEes|LnI^i=H,[XwK9I&[2s|:?0N}VJM%?;v2v]v+;+^9eXUahg@::Cj');
      define('AUTH_SALT',        'C$DpB4Hj[JK:?{ql`sRVa:{:7yShy(9A@5wg+`JJVb1fk%_-Bx*M4(qc[Qg%JT!h');
      define('SECURE_AUTH_SALT', 'd!uRu#}+q#{f$Z?Z9uFPG.${+S{n~1M&%@~gL>U>NV<zpD-@2-Es7Q1O-bp28EKv');
      define('LOGGED_IN_SALT',   ';j{00P*owZf)kVD+FVLn-~ >.|Y%Ug4#I^*LVd9QeZ^&XmK|e(76miC+&W&+^0P/');
      define('NONCE_SALT',       '-97r*V/cgxLmp?Zy4zUU4r99QQ_rGs2LTd%P;|_e1tS)8_B/,.6[=UK<J_y9?JWG');
      ```

   1. Save the file and exit your text editor.
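If you prefer to make these edits non-interactively, the three placeholder values can also be replaced with `sed`. The following is a self-contained sketch that runs against a sample file so it can be tested anywhere; on your instance you would target `wordpress/wp-config.php` instead, substituting your own database name, user, and password.

```
# Demo file standing in for wordpress/wp-config.php so the sketch is self-contained.
cat > wp-config-demo.php <<'EOF'
define('DB_NAME', 'database_name_here');
define('DB_USER', 'username_here');
define('DB_PASSWORD', 'password_here');
EOF

# Replace all three placeholders in one pass.
sed -i "s/database_name_here/wordpress-db/; s/username_here/wordpress-user/; s/password_here/your_strong_password/" wp-config-demo.php

cat wp-config-demo.php
```

Keep in mind that `sed` treats certain characters in the replacement text, such as `&` and `/`, specially, so a password containing them needs escaping or a different delimiter.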

**To install your WordPress files under the Apache document root**
+ Now that you've unzipped the installation folder, created a MySQL database and user, and customized the WordPress configuration file, you are ready to copy your installation files to your web server document root so you can run the installation script that completes your installation. The location of these files depends on whether you want your WordPress blog to be available at the actual root of your web server (for example, `my.public.dns.amazonaws.com`) or in a subdirectory or folder under the root (for example, `my.public.dns.amazonaws.com/blog`).
  + If you want WordPress to run at your document root, copy the contents of the `wordpress` installation directory (but not the directory itself) as follows:

    ```
    [ec2-user ~]$ cp -r wordpress/* /var/www/html/
    ```
  + If you want WordPress to run in an alternative directory under the document root, first create that directory, and then copy the files to it. In this example, WordPress will run from the directory `blog`:

    ```
    [ec2-user ~]$ mkdir /var/www/html/blog
    [ec2-user ~]$ cp -r wordpress/* /var/www/html/blog/
    ```
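The trailing `/*` in these commands matters: it copies the contents of the `wordpress` directory rather than the directory itself. The following self-contained sketch illustrates the difference using throwaway directories (the names are for demonstration only):

```
# 'src/*' copies the directory's contents; 'src' would copy the directory itself.
work=$(mktemp -d)
mkdir -p "$work/src" "$work/dest"
touch "$work/src/index.php"

cp -r "$work"/src/* "$work/dest/"
ls "$work/dest"    # index.php appears directly in dest, with no src/ subdirectory

rm -rf "$work"
```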

**Important**  
For security purposes, if you are not moving on to the next procedure immediately, stop the Apache web server (`httpd`) now. After you move your installation under the Apache document root, the WordPress installation script is unprotected and an attacker could gain access to your blog if the Apache web server were running. To stop the Apache web server, enter the command **sudo systemctl stop httpd**. If you are moving on to the next procedure, you do not need to stop the Apache web server.

**To allow WordPress to use permalinks**

WordPress permalinks need to use Apache `.htaccess` files to work properly, but this is not enabled by default on Amazon Linux. Use this procedure to allow all overrides in the Apache document root.
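For reference, WordPress writes the permalink rewrite rules to an `.htaccess` file in its installation directory when you save your permalink settings; you don't create this file by hand. A typical generated block looks like the following:

```
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```

With `AllowOverride None` in effect, Apache ignores this file entirely, which is why the following procedure changes that directive to `AllowOverride All`.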

1. Open the `httpd.conf` file with your favorite text editor (such as **nano** or **vim**). If you do not have a favorite text editor, `nano` is suitable for beginners.

   ```
   [ec2-user ~]$ sudo vim /etc/httpd/conf/httpd.conf
   ```

1. Find the section that starts with `<Directory "/var/www/html">`.

   ```
   <Directory "/var/www/html">
       #
       # Possible values for the Options directive are "None", "All",
       # or any combination of:
       #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
       #
       # Note that "MultiViews" must be named *explicitly* --- "Options All"
       # doesn't give it to you.
       #
       # The Options directive is both complicated and important.  Please see
       # http://httpd.apache.org/docs/2.4/mod/core.html#options
       # for more information.
       #
       Options Indexes FollowSymLinks
   
       #
       # AllowOverride controls what directives may be placed in .htaccess files.
       # It can be "All", "None", or any combination of the keywords:
       #   Options FileInfo AuthConfig Limit
       #
       AllowOverride None
   
       #
       # Controls who can get stuff from this server.
       #
       Require all granted
   </Directory>
   ```

1. Change the `AllowOverride None` line in the above section to read `AllowOverride All`.
**Note**  
There are multiple `AllowOverride` lines in this file; be sure you change the line in the `<Directory "/var/www/html">` section.

   ```
   AllowOverride All
   ```

1. Save the file and exit your text editor.

**To install the PHP graphics drawing library on AL2**  
The GD library for PHP enables you to modify images. Install this library if you need to crop the header image for your blog. The version of phpMyAdmin that you install might require a specific minimum version of this library (for example, version 7.2).

Use the following command to install the PHP graphics drawing library on AL2. For example, if you installed php7.2 from amazon-linux-extras as part of installing the LAMP stack, this command installs version 7.2 of the PHP graphics drawing library.

```
[ec2-user ~]$ sudo yum install php-gd
```

To verify the installed version, use the following command:

```
[ec2-user ~]$ sudo yum list installed php-gd
```

The following is example output:

```
php-gd.x86_64                     7.2.30-1.amzn2             @amzn2extra-php7.2
```

**To fix file permissions for the Apache web server**

Some of the available features in WordPress require write access to the Apache document root (such as uploading media through the Administration screens). If you have not already done so, apply the following group memberships and permissions (as described in greater detail in the [Tutorial: Install a LAMP server on AL2](ec2-lamp-amazon-linux-2.md)).

1. Grant file ownership of `/var/www` and its contents to the `apache` user.

   ```
   [ec2-user ~]$ sudo chown -R apache /var/www
   ```

1. Grant group ownership of `/var/www` and its contents to the `apache` group.

   ```
   [ec2-user ~]$ sudo chgrp -R apache /var/www
   ```

1. Change the directory permissions of `/var/www` and its subdirectories to add group write permissions and to set the group ID on future subdirectories.

   ```
   [ec2-user ~]$ sudo chmod 2775 /var/www
   [ec2-user ~]$ find /var/www -type d -exec sudo chmod 2775 {} \;
   ```

1. Recursively change the file permissions of `/var/www` and its subdirectories.

   ```
   [ec2-user ~]$ find /var/www -type f -exec sudo chmod 0644 {} \;
   ```
**Note**  
If you intend to also use WordPress as an FTP server, you'll need more permissive group settings here. Review the recommended [steps and security settings in WordPress](https://wordpress.org/support/article/changing-file-permissions/) to accomplish this.

1. Restart the Apache web server to pick up the new group and permissions.

   ```
   [ec2-user ~]$ sudo systemctl restart httpd
   ```
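The leading `2` in mode `2775` is the setgid bit: it causes new files and subdirectories created under `/var/www` (for example, by WordPress media uploads) to inherit the directory's `apache` group. The following self-contained sketch demonstrates the inheritance using a hypothetical temporary directory rather than `/var/www`:

```
# Create a throwaway directory with the setgid bit set (mode 2775).
demo=$(mktemp -d)
chmod 2775 "$demo"

# On Linux, a new subdirectory inherits the parent's group and setgid bit.
umask 022
mkdir "$demo/uploads"
stat -c '%A' "$demo/uploads"    # drwxr-sr-x: the 's' shows the setgid bit was inherited

rm -rf "$demo"
```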

**To run the WordPress installation script with AL2**

You are ready to install WordPress. The commands that you use depend on the operating system. The commands in this procedure are for use with AL2.

1. Use the **systemctl** command to ensure that the `httpd` and database services start at every system boot.

   ```
   [ec2-user ~]$ sudo systemctl enable httpd && sudo systemctl enable mariadb
   ```

1. Verify that the database server is running.

   ```
   [ec2-user ~]$ sudo systemctl status mariadb
   ```

   If the database service is not running, start it.

   ```
   [ec2-user ~]$ sudo systemctl start mariadb
   ```

1. Verify that your Apache web server (`httpd`) is running.

   ```
   [ec2-user ~]$ sudo systemctl status httpd
   ```

   If the `httpd` service is not running, start it.

   ```
   [ec2-user ~]$ sudo systemctl start httpd
   ```

1. In a web browser, type the URL of your WordPress blog (either the public DNS address for your instance, or that address followed by the `blog` folder). You should see the WordPress installation script. Provide the information required by the WordPress installation. Choose **Install WordPress** to complete the installation. For more information, see [Step 5: Run the Install Script](https://wordpress.org/support/article/how-to-install-wordpress/#step-5-run-the-install-script) on the WordPress website.

## Next steps
<a name="wordpress-next-steps"></a>

After you have tested your WordPress blog, consider updating its configuration.

**Use a custom domain name**  
If you have a domain name associated with your EC2 instance's Elastic IP address, you can configure your blog to use that name instead of the EC2 public DNS address. For more information, see [Changing The Site URL](https://wordpress.org/support/article/changing-the-site-url/) on the WordPress website.

**Configure your blog**  
You can configure your blog to use different [themes](https://wordpress.org/themes/) and [plugins](https://wordpress.org/plugins/) to offer a more personalized experience for your readers. However, sometimes the installation process can backfire, causing you to lose your entire blog. We strongly recommend that you create a backup Amazon Machine Image (AMI) of your instance before attempting to install any themes or plugins so you can restore your blog if anything goes wrong during installation. For more information, see [Create your own AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html#creating-an-ami).

**Increase capacity**  
If your WordPress blog becomes popular and you need more compute power or storage, consider the following steps:
+ Expand the storage space on your instance. For more information, see [Amazon EBS Elastic Volumes](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-modify-volume.html) in the *Amazon EBS User Guide*.
+ Move your MySQL database to [Amazon RDS](https://aws.amazon.com/rds) to take advantage of the service's ability to scale easily.

**Improve network performance of your internet traffic**  
If you expect your blog to drive traffic from users located around the world, consider [AWS Global Accelerator](https://aws.amazon.com/global-accelerator). Global Accelerator helps you achieve lower latency by improving internet traffic performance between your users’ client devices and your WordPress application running on AWS. Global Accelerator uses the [AWS global network](https://aws.amazon.com/about-aws/global-infrastructure/global_network/) to direct traffic to a healthy application endpoint in the AWS Region that is closest to the client.

**Learn more about WordPress**  
For information about WordPress, see the WordPress Codex help documentation at [http://codex.wordpress.org/](http://codex.wordpress.org/).

For more information about troubleshooting your installation, see [Common installation problems](https://wordpress.org/support/article/how-to-install-wordpress/#common-installation-problems).

For information about making your WordPress blog more secure, see [Hardening WordPress](https://wordpress.org/support/article/hardening-wordpress/).

For information about keeping your WordPress blog up-to-date, see [Updating WordPress](https://wordpress.org/support/article/updating-wordpress/).

## Help! My public DNS name changed and now my blog is broken
<a name="wordpress-troubleshooting"></a>

Your WordPress installation is automatically configured using the public DNS address for your EC2 instance. If you stop and restart the instance, the public DNS address changes (unless it is associated with an Elastic IP address) and your blog will not work anymore because it references resources at an address that no longer exists (or is assigned to another EC2 instance). A more detailed description of the problem and several possible solutions are outlined in [Changing the Site URL](https://wordpress.org/support/article/changing-the-site-url/).

If this has happened to your WordPress installation, you might be able to recover your blog with the procedure below, which uses the **wp-cli** command line interface for WordPress.

**To change your WordPress site URL with wp-cli**

1. Connect to your EC2 instance with SSH. 

1. Note the old site URL and the new site URL for your instance. The old site URL is likely the public DNS name for your EC2 instance when you installed WordPress. The new site URL is the current public DNS name for your EC2 instance. If you are not sure of your old site URL, you can use **curl** to find it with the following command.

   ```
   [ec2-user ~]$ curl localhost | grep wp-content
   ```

   You should see references to your old public DNS name in the output, which will look similar to the following:

   ```
   <script type='text/javascript' src='http://ec2-52-8-139-223.us-west-1.compute.amazonaws.com/wp-content/themes/twentyfifteen/js/functions.js?ver=20150330'></script>
   ```

1. Download **wp-cli** with the following command.

   ```
   [ec2-user ~]$ curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
   ```

1. Search and replace the old site URL in your WordPress installation with the following command. Substitute the old and new site URLs for your EC2 instance and the path to your WordPress installation (usually `/var/www/html` or `/var/www/html/blog`). To preview the changes without modifying the database, you can first run the command with the `--dry-run` flag added.

   ```
   [ec2-user ~]$ php wp-cli.phar search-replace 'old_site_url' 'new_site_url' --path=/path/to/wordpress/installation --skip-columns=guid
   ```

1. In a web browser, enter the new site URL of your WordPress blog to verify that the site is working properly again. If it is not, see [Changing the Site URL](https://wordpress.org/support/article/changing-the-site-url/) and [Common installation problems](https://wordpress.org/support/article/how-to-install-wordpress/#common-installation-problems) for more information.