Recent versions of Prometheus added an experimental remote write feature. Let's take a look.


Prometheus's local storage isn't intended as a long term data store, but rather as more of an ephemeral cache. This is because storing effectively unbounded amounts of time series data would require a distributed storage system, whose reliability characteristics are not what you want from a monitoring system. Instead the intention is that a separate system will handle durable storage and allow for seamless querying from Prometheus.

The remote write path is one half of this. It allows every sample that's ingested from scrapes, or calculated from rules, to be sent out in real time to another system. This could be long term storage, or an adaptor that sends to something like Kafka for further processing.

So let's give it a spin. This is currently all experimental, so we're using the simplest thing that will work - which is to say Protocol Buffers over HTTP. This is very likely to change in the future.


I'm going to assume you already have a working Go environment:

go get -d -u github.com/prometheus/prometheus/{cmd/prometheus,documentation/examples/remote_storage/example_write_adapter}
cd ${GOPATH}/src/github.com/prometheus/prometheus/documentation/examples/remote_storage/example_write_adapter
go run server.go &

This will run the demo write adapter which prints out the samples it is sent.

Now we just need to run a Prometheus that is pointing at it:

cd ${GOPATH}/src/github.com/prometheus/prometheus/cmd/prometheus
cat <<EOF > prometheus.yml
global:
  scrape_interval: 5s
remote_write:
  - url: "http://localhost:1234/receive"
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
EOF
go build
./prometheus

After a few seconds you'll see the samples in the output of the adapter.

This is just a simple example, but you're free to expand it in whatever way you like. You can also use relabelling to restrict what time series are sent out.
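For instance, a `write_relabel_configs` stanza on the remote write configuration applies relabelling to samples just before they are sent. A hypothetical example that drops Prometheus's own Go runtime metrics from what goes out (the regex here is illustrative) might look like:

```yaml
remote_write:
  - url: "http://localhost:1234/receive"
    write_relabel_configs:
      # Drop Go runtime metrics before they leave Prometheus.
      - source_labels: [__name__]
        regex: 'go_.*'
        action: drop
```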


It is planned that the existing experimental Graphite/OpenTSDB/InfluxDB write support will be removed in favour of this new generic interface.


Wondering how to take advantage of remote write? Contact us.