This is a reference page for all internal metrics exported by PSPDFKit Server.
Metrics follow the DogStatsD protocol format, a variant of the popular StatsD protocol. Server sends a metric update to a compatible monitoring agent either when an event happens (e.g. an HTTP response is sent) or when a measurement is collected periodically (e.g. memory usage is sampled). The agent aggregates metrics in fixed time windows and forwards them to the monitoring system, where they are persisted for further analysis. How a metric is aggregated depends on its type and on the agent implementation (e.g. Telegraf might perform different aggregations than the CloudWatch agent does). Refer to our Integration guide to learn how to export Server metrics in different environments and deployment settings.
Server exports three types of metrics:
- Counters — Each metric update carries a value that increases a counter by that value. An example is the file system cache hits metric, where a counter increment is sent every time an item is found in the cache.
- Gauges — A metric update carries the most recent value of a particular measurement. This is a very common type for metrics gathered periodically, e.g. memory usage.
- Timings — Each metric update carries the duration of a particular event, e.g. the handling of an HTTP request. Agents usually aggregate timings by calculating statistics over the measurements falling into the time window, e.g. count, minimum, maximum, mean, and percentiles. Timings are often used when you need to both count events and measure their duration.
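To make the three types concrete: a DogStatsD metric update is a plain-text datagram of the form `name:value|type[|#tag:value,...]`, sent to the agent over UDP. The following sketch is illustrative only — the metric names, tags, and agent address are assumptions, not actual Server metrics:

```python
import socket

def dogstatsd_datagram(name, value, metric_type, tags=None):
    """Build a DogStatsD datagram: <name>:<value>|<type>[|#tag:val,...]."""
    line = f"{name}:{value}|{metric_type}"
    if tags:
        line += "|#" + ",".join(f"{k}:{v}" for k, v in tags.items())
    return line

# Counter: increment by 1 on every cache hit ("c" = counter).
counter = dogstatsd_datagram("cache.fs_hit", 1, "c", {"host": "web-1"})
# Gauge: the most recent memory reading ("g" = gauge).
gauge = dogstatsd_datagram("memory.used", 524288, "g")
# Timing: duration of one HTTP request in milliseconds ("ms" = timing).
timing = dogstatsd_datagram("http.request", 42, "ms", {"method": "GET"})

# Datagrams go to the agent over UDP (DogStatsD listens on port 8125 by default).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for datagram in (counter, gauge, timing):
    sock.sendto(datagram.encode(), ("127.0.0.1", 8125))
```

Because UDP is fire-and-forget, emitting a metric never blocks the sender — which is why StatsD-style protocols are cheap enough to call on every request.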
Apart from the metric name and value, each metric update includes a set of tags that allow you to group and filter measurements belonging to the same metric when analyzing them.
This is a list of common tags attached to every metric exported by Server:
| Tag | Description |
| --- | --- |
| `host` | The hostname of the Server container. |
| `node` | The unique ID of the Server node in the cluster. |
| `family` | This is always set to |
This is the duration it takes PSPDFKit Server to process the HTTP request and respond.
| Tag | Description |
| --- | --- |
| `status` | The HTTP response status. |
| `method` | The HTTP request method. |
When analyzing HTTP performance metrics, make sure to separate the measurements by request group. By definition, long polling requests take a long time to complete, because the client keeps the connection open so that Server can send a response as soon as it has data ready. In most situations, you're most likely interested in the metrics for regular (non-long-polling) requests.
These measurements concern the performance of SQL queries made by Server against PostgreSQL:
- `pg_client.query` tells you how long it took to actually execute the query.
- `pg_client.queue` tells you how long the query waited for a connection to become available from the pool.
- `pg_client.decode` tells you how long it took to decode the query results.
- `pg_client.result_size` tells you how many rows were returned per query.
To get insight into the total time it takes to complete a database query, take the sum of the `pg_client.queue`, `pg_client.query`, and `pg_client.decode` timings.
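As a concrete example of this summation — the timing values below are hypothetical, not real Server output:

```python
# Hypothetical per-query timings in milliseconds, as reported by the
# pg_client.queue, pg_client.query, and pg_client.decode metrics.
timings_ms = {
    "pg_client.queue": 2,   # waiting for a pooled connection
    "pg_client.query": 12,  # executing the SQL statement
    "pg_client.decode": 1,  # decoding the result rows
}

# The total time to complete the database query is the sum of all three.
total_ms = sum(timings_ms.values())
print(total_ms)  # 15
```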
| Tag | Description |
| --- | --- |
| `command` | The SQL command that was executed. One of |
| `error_code` | The PostgreSQL error code; only set when the query results in an error. |
| `severity` | The PostgreSQL error severity; only set when the query results in an error. |
File System Cache
These measurements are related to the Server file system cache used for document source files:
- `cache.fs_hit` and `cache.fs_miss` measurements count cache hits and misses.
- `cache.fs_size` reports the current size of the file system cache. The cache size is limited by the
- `cache.fs_free` tells you how long it took to clear a full cache.
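Once the agent has aggregated the hit and miss counters over a time window, a cache hit ratio can be derived from them. A minimal sketch, with made-up counter values:

```python
def hit_ratio(hits: int, misses: int) -> float:
    """Fraction of cache lookups served from the cache (0.0 when idle)."""
    total = hits + misses
    return hits / total if total else 0.0

# Hypothetical counter values aggregated over one time window:
# 90 hits and 10 misses give a 90% hit ratio.
print(hit_ratio(90, 10))  # 0.9
```

A consistently low hit ratio suggests the cache size limit is too small for the working set of document source files.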
These measurements are related to the in-memory cache for PDF metadata.
- `cache.fs_miss` measurements count cache hits and misses.
These measurements are related to the optional Redis cache used for caching rendering results between multiple PSPDFKit Server instances.
- `cache.redis_hit` indicates how long it took to fetch an item from Redis when there was a cache hit.
- `cache.redis_miss` indicates how long the request to Redis took when there was a cache miss.
- `cache.redis_set` indicates how long it took to store an item in Redis.
- `cache.redis_error` indicates how long a Redis operation that errored out took.
| Tag | Description |
| --- | --- |
| `op` | The Redis operation that was performed. Only set for the |
These measurements concern the time it takes the Server to fetch documents from remote URLs.
- `remote_doc.response_start` tells you the time between when Server sent the request and when the first byte of data was received.
- `remote_doc.response_end` tells you how long the actual data transfer took after the remote server started responding.
To get the total remote document response time, sum up both metrics.
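As a hypothetical illustration: `remote_doc.response_start` behaves like a time-to-first-byte measurement and `remote_doc.response_end` like a transfer time, so the total fetch time is their sum. The values below are made up:

```python
# Hypothetical timings (in ms) for a single remote document fetch.
response_start_ms = 120  # request sent -> first byte received (time to first byte)
response_end_ms = 380    # first byte -> transfer complete (transfer time)

# The total remote document response time is the sum of both metrics.
total_ms = response_start_ms + response_end_ms
print(total_ms)  # 500
```

Keeping the two phases separate is useful for diagnosis: a large first value points at remote server latency, while a large second value points at transfer bandwidth or document size.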
The duration of Office document conversion.
These measurements concern all Server operations that involve working with PDFs, including rendering, extracting content, and preparing PDFs for download.
- `pspdfkit.queue` tells you how long an operation had to wait until a worker was available to execute it.
- `pspdfkit.exec` tells you how long the operation actually took.
| Tag | Description |
| --- | --- |
| `request` | The PDF operation performed. |
How long it took the signing service to respond to the signing request.
These metrics track the duration of Instant sync phases. For operations that fail, Server emits only
| Tag | Description |
| --- | --- |
| `result` | Only set for |
The total amount of memory allocated by the Server process. Note that the amount of memory taken by the Server container is usually larger than this number, since there are also other processes running inside the container.
Compute Resources Utilization
- `vm_scheduler_wall_time.active` tells you how much time the Erlang VM spent being active in the last interval.
- `vm_scheduler_wall_time.total` tells you the total uptime of the Erlang VM.

Dividing the active time by the total time gives the utilization of the compute resources assigned to Server. In other words, it tells you what percentage of the time Server was busy doing work.
Note that, as with memory, this only concerns the Server process — the CPU utilization of the container may be different, as there are other processes running inside it as well.
| Tag | Description |
| --- | --- |
| `scheduler_number` | The internal Server scheduler number. |
Server starts as many schedulers as there are logical CPU cores available. In most cases, you’ll want to take the average of metrics described here across schedulers.
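The calculation can be sketched as follows — per-scheduler utilization is active time divided by total time, and the average across schedulers approximates overall Server CPU utilization. The sample values are made up for illustration:

```python
# Hypothetical vm_scheduler_wall_time samples for a 4-core Server node:
# scheduler_number -> (active, total) time over the last interval.
samples = {
    1: (250, 1000),
    2: (500, 1000),
    3: (750, 1000),
    4: (500, 1000),
}

# Per-scheduler utilization: fraction of the interval spent doing work.
utilization = {n: active / total for n, (active, total) in samples.items()}

# The average across schedulers approximates Server's overall CPU utilization.
avg = sum(utilization.values()) / len(utilization)
print(avg)  # 0.5
```

In this sketch the node spent half of the interval doing work; a value approaching 1.0 across all schedulers would indicate the Server process is CPU-bound.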