PSPDFKit Document Engine uses two different kinds of caches to improve performance and reduce the load on the asset backend.
When PSPDFKit Document Engine needs to access a PDF file via the asset storage backend, the document is cached on the local file system to speed up future access. You can configure the size of this cache in the Document Engine configuration.
When a PSPDFKit for Web client requests a document, PSPDFKit Document Engine renders the individual pages and provides them to the client. These pages are cached in memory to speed up future rendering.
Both of these caches are local to a single PSPDFKit Document Engine node and are not shared across multiple nodes. To share cached data between nodes, you can use Redis.
PSPDFKit Document Engine supports using Redis as a shared cache for rendered pages. This allows multiple PSPDFKit Document Engine nodes to share their work: Only one node has to render a document, and the others can access the cached results. Please be aware that this only works with PSPDFKit for Web.
Check out the configuration overview for more information on how to configure PSPDFKit Document Engine to use Redis as an additional cache.
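As a sketch of what this can look like when running Document Engine in Docker, the Redis cache is enabled via environment variables. The variable names and image tag below are assumptions for illustration; consult the configuration overview for the exact option names supported by your version:

```shell
# Illustrative only: REDIS_ENABLED, REDIS_HOST, and REDIS_PORT are assumed
# option names; verify them against the configuration overview.
docker run \
  -e REDIS_ENABLED=true \
  -e REDIS_HOST=redis.internal \
  -e REDIS_PORT=6379 \
  pspdfkit/document-engine
```

Here, redis.internal is a placeholder hostname for your Redis instance.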
Please note that you also have to manage and configure Redis yourself. This includes setting a memory limit and choosing an eviction policy; we recommend the allkeys-lru eviction policy for most use cases. Check out Using Redis as an LRU Cache for more information.
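A minimal redis.conf fragment implementing that recommendation looks like the following; the 2gb limit is an arbitrary example value to adjust for your deployment:

```shell
# redis.conf: cap Redis memory usage and, once the limit is reached,
# evict the least-recently-used keys across the whole keyspace.
maxmemory 2gb
maxmemory-policy allkeys-lru
```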
When moving from a trial to a production license, we also recommend deleting all cached data, as this will prevent trial overlays from showing up in production.
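One way to clear the cached data is to flush the Redis database that Document Engine uses. This assumes the cache lives in its own Redis database so that no unrelated keys are removed; the hostname and database number below are placeholders:

```shell
# Remove all keys from database 0 on the (placeholder) host redis.internal.
# FLUSHDB only clears the selected database, unlike FLUSHALL.
redis-cli -h redis.internal -n 0 FLUSHDB
```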
PSPDFKit Document Engine is compatible with any Redis version newer than 2.2.0.
PSPDFKit Document Engine only supports single-node Redis deployments and Redis Sentinel deployments. PSPDFKit Document Engine cannot be configured to use multi-cluster Redis setups directly.
Refer to the documentation on the prerendering endpoint to learn how you can render and cache documents in advance, speeding up loading even more.
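As a rough sketch, triggering prerendering for a document could look like the request below. The URL path, port, document ID, and authorization scheme are all assumptions for illustration; check the prerendering endpoint documentation for the actual request format:

```shell
# Hypothetical example: hostname, port, document ID (abc123), and the
# API token are placeholders, and the path is an assumption.
curl -X POST 'http://document-engine.internal:5000/api/documents/abc123/prerender' \
  -H 'Authorization: Token token=<secret-api-token>' \
  -H 'Content-Type: application/json'
```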