Create a ROSA classic cluster that uses AWS PrivateLink
ROSA classic clusters can be deployed in a few different ways: public, private, or private with AWS PrivateLink. For more information about ROSA classic, see ROSA architecture. For both public and private cluster configurations, the OpenShift cluster has access to the internet, and privacy is set on the application workloads at the application layer.
If you require both the cluster and the application workloads to be private, you can configure AWS PrivateLink with ROSA classic. AWS PrivateLink is a highly available, scalable technology that ROSA uses to create a private connection between the ROSA service and cluster resources in the AWS customer account. With AWS PrivateLink, the Red Hat site reliability engineering (SRE) team can access the cluster for support and remediation purposes by using a private subnet connected to the cluster’s AWS PrivateLink endpoint.
For more information about AWS PrivateLink, see What is AWS PrivateLink?
Topics
- Prerequisites
- Create Amazon VPC architecture
- Create a ROSA classic cluster using the ROSA CLI and AWS PrivateLink
- Configure AWS PrivateLink DNS forwarding
- Configure an identity provider and grant cluster access
- Grant user access to a cluster
- Configure cluster-admin permissions
- Configure dedicated-admin permissions
- Access a cluster through the Red Hat Hybrid Cloud Console
- Deploy an application from the Developer Catalog
- Revoke cluster-admin permissions from a user
- Revoke dedicated-admin permissions from a user
- Revoke user access to a cluster
- Delete a cluster and AWS STS resources
Prerequisites
Complete the prerequisite actions listed in Set up to use ROSA.
Create Amazon VPC architecture
The following procedure creates Amazon VPC architecture that can be used to host a cluster. All cluster resources are hosted in the private subnet. The public subnet routes outbound traffic from the private subnet through a NAT gateway to the public internet. This example uses the CIDR block 10.0.0.0/16 for the Amazon VPC. However, you can choose a different CIDR block. For more information, see VPC sizing.
Important
If Amazon VPC requirements are not met, cluster creation fails.
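You can build this VPC with whichever tooling you prefer. As a rough sketch only, the following AWS CLI commands outline a Single-AZ version of the layout described above; every ID, the Availability Zone, and the subnet CIDR ranges are placeholders, and you still need to enable DNS support and DNS hostnames on the VPC and verify the result against the ROSA Amazon VPC requirements.
  # Create the VPC (all IDs below are placeholders; use the values returned by each command).
  aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query Vpc.VpcId --output text
  # Create a public and a private subnet in the same Availability Zone.
  aws ec2 create-subnet --vpc-id <VPC_ID> --cidr-block 10.0.0.0/17 --availability-zone <AZ> --query Subnet.SubnetId --output text
  aws ec2 create-subnet --vpc-id <VPC_ID> --cidr-block 10.0.128.0/17 --availability-zone <AZ> --query Subnet.SubnetId --output text
  # Attach an internet gateway for the public subnet.
  aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text
  aws ec2 attach-internet-gateway --vpc-id <VPC_ID> --internet-gateway-id <IGW_ID>
  # Create a NAT gateway in the public subnet for outbound traffic from the private subnet.
  aws ec2 allocate-address --domain vpc --query AllocationId --output text
  aws ec2 create-nat-gateway --subnet-id <PUBLIC_SUBNET_ID> --allocation-id <ALLOCATION_ID> --query NatGateway.NatGatewayId --output text
  # Route the public subnet to the internet gateway and the private subnet to the NAT gateway.
  aws ec2 create-route-table --vpc-id <VPC_ID> --query RouteTable.RouteTableId --output text
  aws ec2 create-route --route-table-id <PUBLIC_RT_ID> --destination-cidr-block 0.0.0.0/0 --gateway-id <IGW_ID>
  aws ec2 associate-route-table --route-table-id <PUBLIC_RT_ID> --subnet-id <PUBLIC_SUBNET_ID>
  aws ec2 create-route-table --vpc-id <VPC_ID> --query RouteTable.RouteTableId --output text
  aws ec2 create-route --route-table-id <PRIVATE_RT_ID> --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <NAT_GATEWAY_ID>
  aws ec2 associate-route-table --route-table-id <PRIVATE_RT_ID> --subnet-id <PRIVATE_SUBNET_ID>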
Create a ROSA classic cluster using the ROSA CLI and AWS PrivateLink
You can use the ROSA CLI and AWS PrivateLink to create a cluster with a single Availability Zone (Single-AZ) or multiple Availability Zones (Multi-AZ). In either case, the machine CIDR value that you specify must match your VPC’s CIDR value.
The following procedure uses the rosa create cluster command to create a ROSA classic cluster. To create a Multi-AZ cluster, specify --multi-az in the command, and then select the private subnet IDs that you want to use when prompted.
Note
If you use a firewall, you must configure it so that ROSA can access the sites that it requires to function. For more information, see AWS firewall prerequisites.
- Create the required IAM account roles and policies using --mode auto or --mode manual.
  - rosa create account-roles --classic --mode auto
  - rosa create account-roles --classic --mode manual

Note
If your offline access token has expired, the ROSA CLI outputs an error message stating that your authorization token needs to be updated. For steps to troubleshoot, see Troubleshoot ROSA CLI expired offline access tokens.
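Optionally, you can confirm that the account-wide roles now exist before moving on. This is a hedged check; it assumes that your version of the ROSA CLI provides the rosa list account-roles subcommand.
  rosa list account-roles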
- Create a cluster by running one of the following commands.
  - Single-AZ
    rosa create cluster --private-link --cluster-name=<CLUSTER_NAME> --machine-cidr=10.0.0.0/16 --subnet-ids=<PRIVATE_SUBNET_ID>
  - Multi-AZ
    rosa create cluster --private-link --multi-az --cluster-name=<CLUSTER_NAME> --machine-cidr=10.0.0.0/16
Note
To create a cluster that uses AWS PrivateLink with AWS Security Token Service (AWS STS) short-lived credentials, append --sts --mode auto or --sts --mode manual to the end of the rosa create cluster command.
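For example, combining the flags shown above, a Single-AZ AWS PrivateLink cluster that uses STS could be created with a single command similar to the following; the cluster name and subnet ID are placeholders.
  rosa create cluster --private-link --sts --mode auto --cluster-name=<CLUSTER_NAME> --machine-cidr=10.0.0.0/16 --subnet-ids=<PRIVATE_SUBNET_ID>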
- Create the cluster operator IAM roles by following the interactive prompts.
  rosa create operator-roles --interactive -c <CLUSTER_NAME>
- Create the OpenID Connect (OIDC) provider the cluster operators use to authenticate.
  rosa create oidc-provider --interactive -c <CLUSTER_NAME>
- Check the status of your cluster.
  rosa describe cluster -c <CLUSTER_NAME>

Note
It may take up to 40 minutes for the cluster State field to show the ready status. If provisioning fails or the status doesn’t show as ready after 40 minutes, see Troubleshooting. To contact AWS Support or Red Hat support for assistance, see Getting ROSA support.
- Track the progress of the cluster creation by watching the OpenShift installer logs.
  rosa logs install -c <CLUSTER_NAME> --watch
Configure AWS PrivateLink DNS forwarding
Clusters that use AWS PrivateLink create a public hosted zone and a private hosted zone in Route 53. Records within the Route 53 private hosted zone are resolvable only from within the VPC that the zone is associated with.
The Let’s Encrypt DNS-01 validation requires a public zone so that valid and publicly trusted certificates can be issued for the domain. The validation records are deleted after Let’s Encrypt validation is complete. The zone is still required for issuing and renewing these certificates, which typically must be renewed every 60 days. Although these zones usually appear empty, a public zone serves a critical role in the validation process.
For more information about AWS private hosted zones, see Working with private zones. For more information about public hosted zones, see Working with public hosted zones.
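If you want to confirm that both hosted zones exist for your cluster domain, one option is to list them with the AWS CLI. This is a sketch under the assumption that your credentials can read Route 53 and that the domain placeholder matches the value shown by rosa describe cluster.
  aws route53 list-hosted-zones-by-name --dns-name <CLUSTER_NAME>.<RANDOM_STRING>.p1.openshiftapps.com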
Configure a Route 53 Resolver inbound endpoint
- To allow records such as api.<cluster_domain> and *.apps.<cluster_domain> to resolve outside of the VPC, configure a Route 53 Resolver inbound endpoint.

Note
When you configure an inbound endpoint, you must specify a minimum of two IP addresses for redundancy. We recommend that you specify IP addresses in at least two Availability Zones. You can optionally specify additional IP addresses in those or other Availability Zones.
- When you configure the inbound endpoint, select the VPC and private subnets that were used when you created the cluster.
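If you prefer the AWS CLI over the console for this step, the following sketch shows one way to create the inbound endpoint. The name, request ID, security group, and subnet IDs are placeholders, and the security group is assumed to allow inbound DNS traffic (TCP and UDP port 53) from your network.
  # Create a Route 53 Resolver inbound endpoint with one IP address in each private subnet.
  aws route53resolver create-resolver-endpoint \
    --name <INBOUND_ENDPOINT_NAME> \
    --creator-request-id <UNIQUE_REQUEST_ID> \
    --direction INBOUND \
    --security-group-ids <SECURITY_GROUP_ID> \
    --ip-addresses SubnetId=<PRIVATE_SUBNET_ID_1> SubnetId=<PRIVATE_SUBNET_ID_2>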
Configure DNS forwarding for the cluster
After the Route 53 Resolver inbound endpoint is associated and operational, configure DNS forwarding so that DNS queries for the cluster domain can be handled by the designated servers on your network.
- Configure your corporate network to forward DNS queries for the top-level cluster domain, such as drow-pl-01.htno.p1.openshiftapps.com, to the IP addresses of the inbound endpoint.
- If you’re forwarding DNS queries from one VPC to another VPC, follow the instructions in Managing forwarding rules.
- If you’re configuring your remote network DNS server, see your specific DNS server documentation to configure selective DNS forwarding for the installed cluster domain.
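For the VPC-to-VPC case, a Route 53 Resolver forwarding rule is one way to implement the forwarding. The sketch below assumes that you already have an outbound Resolver endpoint in the VPC that originates the queries; the rule name, IDs, cluster domain, and target IP addresses (the inbound endpoint IPs) are placeholders.
  # Forward queries for the cluster domain to the inbound endpoint IP addresses.
  aws route53resolver create-resolver-rule \
    --creator-request-id <UNIQUE_REQUEST_ID> \
    --name <FORWARDING_RULE_NAME> \
    --rule-type FORWARD \
    --domain-name <CLUSTER_DOMAIN> \
    --resolver-endpoint-id <OUTBOUND_ENDPOINT_ID> \
    --target-ips Ip=<INBOUND_ENDPOINT_IP_1>,Port=53 Ip=<INBOUND_ENDPOINT_IP_2>,Port=53
  # Associate the rule with the VPC that should use it.
  aws route53resolver associate-resolver-rule --resolver-rule-id <RESOLVER_RULE_ID> --vpc-id <VPC_ID>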
Configure an identity provider and grant cluster access
ROSA includes a built-in OAuth server. After your ROSA cluster is created, you must configure OAuth to use an identity provider. You can then add users to your configured identity provider to grant them access to your cluster. You can grant these users cluster-admin or dedicated-admin permissions as required.
You can configure different identity provider types for your cluster. The supported types include GitHub, GitHub Enterprise, GitLab, Google, LDAP, OpenID Connect, and HTPasswd identity providers.
Important
The HTPasswd identity provider is included only to enable a single, static administrator user to be created. HTPasswd isn’t supported as a general-use identity provider for ROSA.
The following procedure configures a GitHub identity provider as an example. For instructions on how to configure each of the supported identity provider types, see Configuring identity providers for AWS STS.
- Navigate to github.com and log in to your GitHub account.
- If you don’t have a GitHub organization to use for identity provisioning for your ROSA cluster, create one. For more information, see the steps in the GitHub documentation.
- Using the ROSA CLI’s interactive mode, configure an identity provider for your cluster by running the following command.
  rosa create idp --cluster=<CLUSTER_NAME> --interactive
- Follow the configuration prompts in the output to restrict cluster access to members of your GitHub organization.
  I: Interactive mode enabled.
  Any optional fields can be left empty and a default will be selected.
  ? Type of identity provider: github
  ? Identity provider name: github-1
  ? Restrict to members of: organizations
  ? GitHub organizations: <GITHUB_ORG_NAME>
  ? To use GitHub as an identity provider, you must first register the application:
    - Open the following URL:
      https://github.com/organizations/<GITHUB_ORG_NAME>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<CLUSTER_NAME>/<RANDOM_STRING>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<CLUSTER_NAME>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<CLUSTER_NAME>/<RANDOM_STRING>.p1.openshiftapps.com
    - Click on 'Register application'
  ...
- Open the URL in the output, replacing <GITHUB_ORG_NAME> with the name of your GitHub organization.
- On the GitHub web page, choose Register application to register a new OAuth application in your GitHub organization.
- Use the information from the GitHub OAuth page to populate the remaining rosa create idp interactive prompts, replacing <GITHUB_CLIENT_ID> and <GITHUB_CLIENT_SECRET> with the credentials from your GitHub OAuth application.
  ...
  ? Client ID: <GITHUB_CLIENT_ID>
  ? Client Secret: [? for help] <GITHUB_CLIENT_SECRET>
  ? GitHub Enterprise Hostname (optional):
  ? Mapping method: claim
  I: Configuring IDP for cluster '<CLUSTER_NAME>'
  I: Identity Provider 'github-1' has been created.
  It will take up to 1 minute for this configuration to be enabled.
  To add cluster administrators, see 'rosa grant user --help'.
  To login into the console, open https://console-openshift-console.apps.<CLUSTER_NAME>.<RANDOM_STRING>.p1.openshiftapps.com and click on github-1.
Note
It might take around two minutes for the identity provider configuration to become active. If you configured a cluster-admin user, you can run the oc get pods -n openshift-authentication --watch command to watch the OAuth pods redeploy with the updated configuration.
- Verify the identity provider has been configured correctly.
  rosa list idps --cluster=<CLUSTER_NAME>
Grant user access to a cluster
You can grant a user access to your cluster by adding them to the configured identity provider.
The following procedure adds a user to a GitHub organization that’s configured for identity provisioning to the cluster.
- Navigate to github.com and log in to your GitHub account.
- Invite users that require cluster access to your GitHub organization. For more information, see Inviting users to join your organization in the GitHub documentation.
Configure cluster-admin permissions
- Grant the cluster-admin permissions using the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.
  rosa grant user cluster-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
- Verify the user is listed as a member of the cluster-admins group.
  rosa list users --cluster=<CLUSTER_NAME>
Configure dedicated-admin permissions
- Grant the dedicated-admin permissions with the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.
  rosa grant user dedicated-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
- Verify the user is listed as a member of the dedicated-admins group.
  rosa list users --cluster=<CLUSTER_NAME>
Access a cluster through the Red Hat Hybrid Cloud Console
After you have created a cluster administrator user or added a user to your configured identity provider, you can log in to your cluster through the Red Hat Hybrid Cloud Console.
- Obtain the console URL for your cluster using the following command. Replace <CLUSTER_NAME> with the name of your cluster.
  rosa describe cluster -c <CLUSTER_NAME> | grep Console
- Navigate to the console URL in the output and log in.
  - If you created a cluster-admin user, log in using the provided credentials.
  - If you configured an identity provider for your cluster, choose the identity provider name in the Log in with… dialog and complete any authorization requests presented by your provider.
Deploy an application from the Developer Catalog
From the Red Hat Hybrid Cloud Console, you can deploy a Developer Catalog test application and expose it with a route.
- Navigate to Red Hat Hybrid Cloud Console and choose the cluster that you want to deploy the app into.
- On the cluster’s page, choose Open console.
- In the Administrator perspective, choose Home > Projects > Create Project.
- Enter a name for your project and optionally add a Display Name and Description.
- Choose Create to create the project.
- Switch to the Developer perspective and choose +Add. Make sure that the selected project is the one that was just created.
- In the Developer Catalog dialog, choose All services.
- In the Developer Catalog page, choose Languages > JavaScript from the menu.
- Choose Node.js, and then choose Create Application to open the Create Source-to-Image Application page.

Note
You might need to choose Clear All Filters to display the Node.js option.
- In the Git section, choose Try Sample.
- In the Name field, add a unique name.
- Choose Create.

Note
The new application takes several minutes to deploy.
- When the deployment is complete, choose the route URL for the application. A new tab in the browser opens with a message that’s similar to the following.
  Welcome to your Node.js application on OpenShift
- (Optional) Delete the application and clean up resources.
  - In the Administrator perspective, choose Home > Projects.
  - Open the action menu for your project and choose Delete Project.
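If you’d rather use the CLI than click through the console, the following oc commands perform a roughly equivalent test deployment. This is a sketch only: it assumes you’re already logged in to the cluster with oc, and the project name, application name, and sample repository URL (a commonly used public Node.js sample, not taken from this guide) are assumptions.
  # Create a project for the test application.
  oc new-project <PROJECT_NAME>
  # Build and deploy a sample Node.js application from source.
  oc new-app nodejs~https://github.com/sclorg/nodejs-ex --name=<APP_NAME>
  # Expose the application with a route and print its URL.
  oc expose service/<APP_NAME>
  oc get route <APP_NAME>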
Revoke cluster-admin permissions from a user
- Revoke the cluster-admin permissions using the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.
  rosa revoke user cluster-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
- Verify that the user isn’t listed as a member of the cluster-admins group.
  rosa list users --cluster=<CLUSTER_NAME>
Revoke dedicated-admin permissions from a user
- Revoke the dedicated-admin permissions with the following command. Replace <IDP_USER_NAME> and <CLUSTER_NAME> with your user and cluster name.
  rosa revoke user dedicated-admin --user=<IDP_USER_NAME> --cluster=<CLUSTER_NAME>
- Verify that the user isn’t listed as a member of the dedicated-admins group.
  rosa list users --cluster=<CLUSTER_NAME>
Revoke user access to a cluster
You can revoke cluster access for an identity provider user by removing them from the configured identity provider.
You can configure different types of identity providers for your cluster. The following procedure revokes cluster access for a member of a GitHub organization.
- Navigate to github.com and log in to your GitHub account.
- Remove the user from your GitHub organization. For more information, see Removing a member from your organization in the GitHub documentation.
Delete a cluster and AWS STS resources
You can use the ROSA CLI to delete a cluster that uses AWS Security Token Service (AWS STS). You can also use the ROSA CLI to delete the IAM roles and OIDC provider created by ROSA. To delete the IAM policies created by ROSA, you can use the IAM console.
Important
IAM roles and policies created by ROSA might be used by other ROSA clusters in the same account.
- Delete the cluster and watch the logs. Replace <CLUSTER_NAME> with the name or ID of your cluster.
  rosa delete cluster --cluster=<CLUSTER_NAME> --watch

Important
You must wait for the cluster to delete completely before you remove the IAM roles, policies, and OIDC provider. The account IAM roles are required to delete the resources created by the installer. The operator IAM roles are required to clean up the resources created by the OpenShift operators. The operators use the OIDC provider to authenticate.
- Delete the OIDC provider that the cluster operators use to authenticate by running the following command.
  rosa delete oidc-provider -c <CLUSTER_ID> --mode auto
- Delete the cluster-specific operator IAM roles.
  rosa delete operator-roles -c <CLUSTER_ID> --mode auto
- Delete the account IAM roles using the following command. Replace <PREFIX> with the prefix of the account IAM roles to delete. If you didn’t specify a custom prefix when creating the account IAM roles, specify the default ManagedOpenShift prefix.
  rosa delete account-roles --prefix <PREFIX> --mode auto
- Delete the IAM policies created by ROSA.
  - Log in to the IAM console.
  - On the left menu under Access management, choose Policies.
  - Select the policy that you want to delete and choose Actions > Delete.
  - Enter the policy name and choose Delete.
  - Repeat this step to delete each of the IAM policies for the cluster.
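If you’d rather script the policy cleanup than use the IAM console, a hedged AWS CLI sketch follows. The prefix is a placeholder, and each policy must already be detached from all IAM entities and have no non-default versions before aws iam delete-policy succeeds.
  # List customer managed policies whose names start with the account-role prefix (placeholder).
  aws iam list-policies --scope Local --query "Policies[?starts_with(PolicyName, '<PREFIX>')].[PolicyName,Arn]" --output table
  # Delete a policy by ARN after confirming that no other ROSA clusters use it.
  aws iam delete-policy --policy-arn <POLICY_ARN>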