Data is smaller and disks are bigger than you'd think.

When planning a Prometheus deployment you may have gotten the impression that Prometheus isn't designed for more than two weeks of storage, and that if you want more you have to use a clustered storage system such as Thanos, Cortex, or M3DB. This is, however, a common misconception. The default retention of 15 days of metrics is historical, dating from two storage generations back when things were far less efficient than they are today.

Prometheus itself doesn't have any technical limit on how much disk space it can access. The Prometheus TSDB uses mmap to open files, so you only need enough memory to cover what you actively use. The RAM cost of longer retention, and thus of having a higher number of blocks on disk, can basically be ignored as of Prometheus 2.15. So the real question is how much SSD storage is available.

So how much can you store on one machine? As a data point, on AWS a gp2 EBS volume has a 16TB limit. Assuming a conservative 2 bytes per sample, a whole year of data on such a volume would cover a beefy Prometheus ingesting around 250k samples/s. You can of course attach more than one EBS volume to an EC2 instance, so longer retention is a matter of adding volumes. If you run your own machines this is also viable, as 4TB and 8TB SSDs are readily available.
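
To make that figure concrete, here's the back-of-the-envelope arithmetic as a small Python sketch, using the 16TB volume size and 2 bytes per sample assumed above:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # ~31.5 million seconds
BYTES_PER_SAMPLE = 2                   # conservative estimate from above
DISK_BYTES = 16e12                     # 16TB gp2 EBS volume limit

# How high an ingestion rate can a year of retention sustain on 16TB?
samples_per_second = DISK_BYTES / (BYTES_PER_SAMPLE * SECONDS_PER_YEAR)
print(f"{samples_per_second:,.0f} samples/s")  # ~254,000 samples/s
```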

So at least a year of retention is viable for a beefy Prometheus. This is all on the high end, so the numbers may seem a little daunting at first glance. If, on the other hand, you have only a small Prometheus ingesting 10k samples/s, you would need only around 650GB for a year of retention, which is easily achievable.
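
Running the same arithmetic in the other direction for the small server:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60
BYTES_PER_SAMPLE = 2
INGESTION_RATE = 10_000  # samples/s

disk_bytes = INGESTION_RATE * BYTES_PER_SAMPLE * SECONDS_PER_YEAR
print(f"{disk_bytes / 1e9:,.0f} GB")  # ~631GB, or ~650GB with a little headroom
```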

But you might ask, what if that machine fails or the data becomes corrupted on disk? One option is to run two identical Prometheus servers, which you'd likely want to do for alerting reliability anyway, and if one fails copy over the data from the other. Another option is that, as of Prometheus 2.1, it's easy to take backups of Prometheus blocks via snapshots, and then copy the backups to S3 or similar. You could also use the Thanos sidecar to automatically ship completed blocks to S3, rather than having to implement all that yourself.
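
As a rough sketch of the snapshot option: the TSDB admin API exposes a snapshot endpoint, which Prometheus only serves if it was started with --web.enable-admin-api. The localhost address below is an assumption; point it at your own server:

```python
import json
import urllib.request

# Ask Prometheus to snapshot all current blocks into its snapshots/ directory.
# Requires Prometheus to be running with --web.enable-admin-api.
req = urllib.request.Request(
    "http://localhost:9090/api/v1/admin/tsdb/snapshot",
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    name = json.load(resp)["data"]["name"]

# The snapshot is a directory of hardlinked blocks under the storage path,
# i.e. <storage.tsdb.path>/snapshots/<name>, ready to copy to S3 or similar.
print(f"snapshot created: {name}")
```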

You don't have to use a clustered storage system to have long retention in Prometheus. Retention measured in years is viable, if you have the disk space.

Need help with Prometheus storage? Contact us.