Path routing pattern
Routing by paths is the mechanism of grouping multiple or all APIs under the same hostname and using the request URI to isolate services; for example, api.example.com/service-a or api.example.com/service-b.
Typical use case
Most teams opt for this method because they want a simple architecture: a developer has to remember only one URL, such as api.example.com, to interact with the HTTP API. API documentation is often easier to digest because it is typically kept together instead of being split across different portals or PDFs.
Path-based routing is considered a simple mechanism for sharing an HTTP API. However, it involves operational overhead such as configuration, authorization, integrations, and additional latency due to multiple hops. It also requires mature change management processes to ensure that a misconfiguration doesn't disrupt all services.
On AWS, there are multiple ways to share an API and route effectively to the correct service. The following sections discuss three approaches: HTTP service reverse proxy, API Gateway, and Amazon CloudFront. None of the suggested approaches for unifying API services relies on the downstream services running on AWS; the services could run anywhere, on any technology, as long as they're HTTP-compatible.
HTTP service reverse proxy
You can use an HTTP server such as NGINX to create an HTTP service reverse proxy.
The following configuration for NGINX dynamically maps an HTTP request for api.example.com/my-service/ to my-service.internal.api.example.com.
server {
    listen 80;

    # Capture the first path segment as the service name ($1)
    # and the remainder of the URI as the resource path ($2).
    location ~ ^/([\w-]+)/(.*) {
        proxy_pass $scheme://$1.internal.api.example.com/$2;
    }
}
The following diagram illustrates the HTTP service reverse proxy method.
This approach might be sufficient for use cases that don't need additional configuration before requests are processed, leaving the downstream APIs to collect their own metrics and logs.
To prepare for production, you will want to add observability at every level of your stack, additional configuration, or scripts that customize your API ingress point to allow for more advanced features such as rate limiting or usage tokens.
Pros
The ultimate aim of the HTTP service reverse proxy method is to create a scalable and manageable approach to unifying APIs into a single domain so that it appears coherent to any API consumer. This approach also enables your service teams to deploy and manage their own APIs, with minimal overhead after deployment. AWS managed services for tracing, such as AWS X-Ray, can still be applied to the downstream services behind the proxy.
Cons
The major downside of this approach is the extensive testing and management of infrastructure components that are required, although this might not be an issue if you have site reliability engineering (SRE) teams in place.
There is a cost tipping point with this method. At low to medium volumes, it is more expensive than some of the other methods discussed in this guide. At high volumes (around 100K transactions per second or more), it is very cost-effective.
API Gateway
You can use Amazon API Gateway to host api.example.com and then proxy requests to the nested service; for example, billing.internal.api.example.com.
You probably don't want to get too granular by mapping every path in every service in the root or core API gateway. Instead, opt for wildcard paths such as /billing/* to forward requests to the billing service. By not mapping every path in the root or core API gateway, you gain more flexibility over your APIs, because you don't have to update the root API gateway with every API change.
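As an illustration of the wildcard approach, the following Python (boto3) sketch creates an HTTP API with a single greedy route, ANY /billing/{proxy+}, that proxies to a billing service. The API name and the hostnames (billing.internal.api.example.com) are assumptions for the example, not part of this guidance.

import boto3

apigw = boto3.client("apigatewayv2")

# Create the root HTTP API that will eventually host api.example.com.
api = apigw.create_api(Name="core-api", ProtocolType="HTTP")

# HTTP proxy integration to the downstream billing service (hypothetical hostname).
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="HTTP_PROXY",
    IntegrationMethod="ANY",
    IntegrationUri="https://billing.internal.api.example.com/{proxy}",
    PayloadFormatVersion="1.0",
)

# A single greedy route forwards everything under /billing/ to the integration,
# so the root API doesn't need updating when the billing API adds new paths.
apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="ANY /billing/{proxy+}",
    Target="integrations/" + integration["IntegrationId"],
)

You would still need a stage (for example, the $default stage with auto-deploy) and a custom domain name mapping for api.example.com before this serves traffic; those steps are omitted from the sketch.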
Pros
For control over more complex workflows, such as changing request attributes, REST APIs expose the Apache Velocity Template Language (VTL) to allow you to modify the request and response. REST APIs can provide additional benefits such as these:
- Auth N/Z with AWS Identity and Access Management (IAM), Amazon Cognito, or AWS Lambda authorizers (see the sketch after this list)
- Usage tokens for bucketing consumers into different tiers (see Throttle API requests for better throughput in the API Gateway documentation)
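As a minimal illustration of the Lambda authorizer option, the following Python sketch implements a token-based (TOKEN) authorizer for a REST API. The hard-coded token comparison and the principal ID are placeholders for the example; a real authorizer would validate a JWT or look up the token in a datastore.

# Minimal TOKEN authorizer sketch for API Gateway REST APIs.
# The hard-coded "allow-me" check is a placeholder, not a real validation scheme.

def lambda_handler(event, context):
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "allow-me" else "Deny"

    return {
        "principalId": "example-consumer",  # hypothetical consumer identity
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }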
Cons
At high volumes, cost might be an issue for some users.
CloudFront
You can use the dynamic origin selection feature in Amazon CloudFront, together with a Lambda@Edge function, to conditionally choose which origin (service) a request is forwarded to, while exposing all of the services under a single hostname such as api.example.com.
Typical use case
The routing logic lives as code within the Lambda@Edge function, so it supports highly customizable routing mechanisms such as A/B testing, canary releases, feature flagging, and path rewriting. This is illustrated in the following diagram.
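The following Python sketch shows one way such a Lambda@Edge origin-request function could route by the first path segment. The internal.api.example.com naming convention and the path handling are assumptions for illustration; it also assumes the distribution already has a default origin and the function is attached as an origin-request trigger.

# Lambda@Edge origin-request handler sketch: selects a custom origin
# based on the first path segment of the request URI.

def lambda_handler(event, context):
    request = event["Records"][0]["cf"]["request"]

    # /billing/invoices -> service "billing", remaining URI "/invoices"
    segments = request["uri"].lstrip("/").split("/", 1)
    service = segments[0]
    request["uri"] = "/" + (segments[1] if len(segments) > 1 else "")

    domain = f"{service}.internal.api.example.com"  # hypothetical naming convention

    # Point the request at the selected custom origin.
    request["origin"] = {
        "custom": {
            "domainName": domain,
            "port": 443,
            "protocol": "https",
            "path": "",
            "sslProtocols": ["TLSv1.2"],
            "readTimeout": 30,
            "keepaliveTimeout": 5,
            "customHeaders": {},
        }
    }
    request["headers"]["host"] = [{"key": "Host", "value": domain}]

    return request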
Pros
If you require caching of API responses, this method is a good way to unify a collection of services behind a single endpoint, and it is cost-effective at doing so.
Also, CloudFront supports field-level encryption as well as integration with AWS WAF for basic rate limiting and basic ACLs.
Cons
This method supports a maximum of 250 origins (services) that can be unified. This limit is sufficient for most deployments, but it might cause issues with a large number of APIs as you grow your portfolio of services.
Updating Lambda@Edge functions currently takes a few minutes, and CloudFront can take up to 30 minutes to propagate changes to all points of presence. This blocks further updates until propagation completes.