HTTP caching is used to store a copy of a resource so that when this resource is requested again, the new request is intercepted and the cached copy is served instead. It’s possible to find extensive literature on caching on the internet, so this blog post is meant to be a brief introduction and summary. At the end of the article, you’ll find a list of useful links to check out if you’d like to dive deeper.
So, you may be wondering why we even cache.
According to Jeff Posnick and Ilya Grigorik, “fetching resources over the network is both slow and expensive,” for reasons outlined in their Prevent unnecessary network requests with the HTTP Cache blog post.
But this can be addressed with caching. As the MDN Web Docs puts it: “The performance of web sites and applications can be significantly improved by reusing previously fetched resources. Web caches reduce latency and network traffic and thus lessen the time needed to display a representation of a resource. By making use of HTTP caching, Web sites become more responsive.”
In other words, we cache to improve the speed of websites and web applications.
More specifically, the browser’s HTTP cache helps you work against unnecessary network requests: By caching, you’re storing a copy of a given resource, which will be served back when requested. When the browser fires a request to the server, the request is first routed to the browser cache to see if there’s already a valid cached response that fits the request. If there’s a match, the response is read from the cache, thereby eliminating an unnecessary network request.
As Posnick and Grigorik write, “It’s not necessarily the most powerful or flexible approach, and you have limited control over the lifetime of cached responses, but it’s effective, it’s supported in all browsers, and it doesn’t require much work.”
The HTTP cache’s entire strategy is controlled by directives: request headers and response headers. This post only covers response headers, for two reasons: The first is that a browser almost always takes care of setting request headers when it makes requests. The second is that the setup that matters most is that of the response headers your web server adds to its responses.
You can think of the following headers as freshness checkers:
- `Cache-Control` allows you to specify both a time period in which the browser should cache a certain response, and the logic to cache said response.
- `Expires` — as the name suggests — has the timestamp after which the response is considered expired. It’ll be overwritten by `Cache-Control`, if set.
- `Last-Modified` is a validator flag to determine if a resource has changed since the last request. The origin server appends a timestamp to the resource, and the timestamp corresponds to the last time said resource was modified.
- `ETag` works similarly to how the `Last-Modified` timestamp does, but instead of a timestamp, this identifier is a tag whose format the server decides (usually it’s a hash). For any new version of a resource, a new `ETag` needs to be generated.
As you can see, the above headers take different approaches to determining whether the requested file can be served from the cache.
`Cache-Control` and `Expires` rely on a time limit — for example, let’s say we set `Cache-Control: max-age=8640000`. This tells the browser to keep your asset in its cache for up to 100 days (100 × 24 × 60 × 60 = 8,640,000 seconds), and during this time span, this cached asset will be served when the request is sent.
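The freshness check behind `max-age` can be sketched in a few lines. `isFresh` is a hypothetical name for illustration; a real browser cache tracks age more carefully (e.g. via the `Age` and `Date` headers), but the core comparison looks like this:

```javascript
// Sketch of the max-age freshness check: a stored response is fresh
// while its age stays below the max-age limit.
function isFresh(storedAtMs, maxAgeSeconds, nowMs = Date.now()) {
  const ageSeconds = (nowMs - storedAtMs) / 1000;
  return ageSeconds < maxAgeSeconds;
}
```

With `max-age=8640000`, a copy stored 99 days ago is still fresh, while one stored 101 days ago is stale and triggers a new request.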
`ETag` and `Last-Modified` apply more granular control by attaching a validator (a hash or timestamp) to the resource, which means that whenever the asset on the server gets a new tag following a modification, revalidation of the cached copy will fail, and the newly modified asset will be served when requested.
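The revalidation step works via a conditional request: the browser sends its cached tag in the `If-None-Match` request header, and the server compares it against the current one. Here’s a minimal server-side sketch (`revalidate` is a hypothetical name, not a real API):

```javascript
// Sketch of ETag revalidation: if the tag the browser cached still matches
// the current one, answer 304 Not Modified (no body needed) and let the
// browser reuse its copy; otherwise serve the modified asset with a new tag.
function revalidate(ifNoneMatch, currentETag) {
  if (ifNoneMatch === currentETag) {
    return { status: 304 };
  }
  return { status: 200, etag: currentETag };
}
```

A 304 response saves the body transfer entirely, which is why validators pair so well with short or zero `max-age` values.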
What if instead, you want to make sure you aren’t served a cached response? For example, imagine you’re upgrading PSPDFKit. You want the `pspdfkit.css` file you’re using to be new, and if you end up using a cached version, it could result in unexpected behavior.
In a development environment, you can rely on browser settings (e.g. the Chrome network tab has an option to disable the cache). Meanwhile, for production, you can force the browser to reach the network by appending a random query parameter at the end of the URL, like what’s mentioned in this Stack Overflow answer.
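The query parameter trick can be sketched as a tiny helper (`bustCache` is a hypothetical name for illustration): because the resulting URL differs from anything in the cache, the browser has no matching entry and must hit the network.

```javascript
// Append a unique query parameter so the URL never matches a cached entry.
function bustCache(url) {
  const separator = url.includes('?') ? '&' : '?';
  return `${url}${separator}nocache=${Date.now()}`;
}
```

This is fine as a one-off workaround, but it defeats caching entirely, so it shouldn’t be the default way you serve assets.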
In PSPDFKit for Web Standalone, we let webpack take care of caching static files for us: webpack adds a content hash to the output file name, so a modified file gets a new name — and it relies on the same mechanism as the query parameter trick above, since a changed URL forces the browser to fetch the asset from the network.
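In webpack, this hashing is enabled through the `[contenthash]` substitution in the output file name. A minimal config sketch (not our actual build configuration):

```javascript
// webpack.config.js (fragment): [contenthash] embeds a hash of the file's
// contents in the output name, so every modification produces a new file
// name — and thus a new URL that bypasses any cached copy.
module.exports = {
  output: {
    filename: '[name].[contenthash].js',
  },
};
```

With this in place, unchanged files keep their names (and stay cached), while changed files are fetched fresh automatically.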
For more information on caching static resources, please see our guide about caching on the web.
Overall, when trying to improve loading performance, the HTTP cache is proven to be an effective strategy — the fastest requests are the ones that are never sent.