Control the cache key with a policy
With a CloudFront cache policy, you can specify the HTTP headers, cookies, and query strings that CloudFront includes in the cache key for objects that are cached at CloudFront edge locations. The cache key is the unique identifier for every object in the cache, and it determines whether a viewer's HTTP request results in a cache hit.
A cache hit occurs when a viewer request generates the same cache key as a prior request, and the object for that cache key is in the edge location's cache and valid. When there's a cache hit, the object is served to the viewer from a CloudFront edge location, which has the following benefits:
- Reduced load on your origin server
- Reduced latency for the viewer
Including fewer values in the cache key increases the likelihood of a cache hit. This can improve the performance of your website or application because it raises the cache hit ratio (the proportion of viewer requests that result in a cache hit). For more information, see Understand the cache key.
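For example, the following sketch uses the AWS SDK for Python (boto3) to create a cache policy whose cache key contains no headers, no cookies, and only a single query string. The policy name and the category query string are hypothetical; adapt them to the values that your origin actually varies its responses on.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Create a cache policy that keeps the cache key small: no headers,
# no cookies, and only the hypothetical "category" query string.
response = cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "example-small-cache-key",  # illustrative name
        "Comment": "Cache key contains only the category query string",
        "MinTTL": 0,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": False,
            "HeadersConfig": {"HeaderBehavior": "none"},
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {
                "QueryStringBehavior": "whitelist",
                "QueryStrings": {"Quantity": 1, "Items": ["category"]},
            },
        },
    }
)

cache_policy_id = response["CachePolicy"]["Id"]
print("Created cache policy:", cache_policy_id)
```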
To control the cache key, you use a CloudFront cache policy. You attach a cache policy to one or more cache behaviors in a CloudFront distribution.
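The following sketch, again with boto3, attaches a cache policy to the default cache behavior of an existing distribution. The distribution ID and cache policy ID are placeholders.

```python
import boto3

cloudfront = boto3.client("cloudfront")

distribution_id = "E1EXAMPLE12345"          # hypothetical distribution ID
cache_policy_id = "658327ea-f89d-example"   # ID returned by create_cache_policy

# Read the current distribution configuration and its ETag.
config_response = cloudfront.get_distribution_config(Id=distribution_id)
distribution_config = config_response["DistributionConfig"]
etag = config_response["ETag"]

# Attach the cache policy to the default cache behavior.
distribution_config["DefaultCacheBehavior"]["CachePolicyId"] = cache_policy_id

# Write the modified configuration back. IfMatch must carry the ETag
# from the read so the update applies to the version that was fetched.
cloudfront.update_distribution(
    Id=distribution_id,
    DistributionConfig=distribution_config,
    IfMatch=etag,
)
```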
You can also use the cache policy to specify time to live (TTL) settings for objects in the CloudFront cache, and enable CloudFront to request and cache compressed objects.
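As a rough illustration, the sketch below uses boto3 to read an existing cache policy, raise its TTLs, and allow CloudFront to request and cache gzip- and Brotli-compressed objects. The policy ID and TTL values are hypothetical.

```python
import boto3

cloudfront = boto3.client("cloudfront")

cache_policy_id = "658327ea-f89d-example"   # hypothetical cache policy ID

# Fetch the current policy configuration and its ETag.
policy_response = cloudfront.get_cache_policy(Id=cache_policy_id)
policy_config = policy_response["CachePolicy"]["CachePolicyConfig"]
etag = policy_response["ETag"]

# Adjust the TTLs (in seconds) and enable support for compressed objects.
policy_config["MinTTL"] = 1
policy_config["DefaultTTL"] = 86400        # one day
policy_config["MaxTTL"] = 31536000         # one year
params = policy_config["ParametersInCacheKeyAndForwardedToOrigin"]
params["EnableAcceptEncodingGzip"] = True
params["EnableAcceptEncodingBrotli"] = True

# UpdateCachePolicy replaces the whole configuration, so pass it back in full.
cloudfront.update_cache_policy(
    Id=cache_policy_id,
    CachePolicyConfig=policy_config,
    IfMatch=etag,
)
```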
Note
Cache settings don't affect gRPC requests because gRPC traffic can't be cached. For more information, see Using gRPC with CloudFront distributions.