Code available on GitHub here: DanieleSalatti/Prometeo
What Is Prometheus
Prometheus is an open-source tool for event monitoring and alerting. It includes a time-series database that records real-time metrics, a flexible query language (PromQL), and alerting functions. It scrapes metrics over HTTP using a pull model, which has some advantages in terms of scaling compared to a push model (more on this in a future post).
- Docker on a PC or laptop
- A Raspberry Pi (or ARMv6/v7 boards)
You can get a Raspberry Pi from here if you don't have one already:
The links above are for the Raspberry Pi only. You'll also need a power source, an SD card, a micro-HDMI to HDMI adapter, and optionally an enclosure. Sometimes it's worth getting it all as a bundle, so here are a couple of links:
I have the 8GB kit from the last link and can vouch for them.
Preparing the Docker Image
Well, actually, before we build our Docker image we need a way to test our Prometheus instance when we run it locally. That's pretty simple to do, as all we need is to set up a NodeExporter instance. NodeExporter is a simple local service that exposes an HTTP endpoint (usually on port 9100) and publishes a set of system/hardware metrics, allowing Prometheus to scrape it.
I am going to set up my NodeExporter on my Raspberry Pi(s), but you can set it up locally or on any instance/VPS you want to monitor.
Set up a NodeExporter Instance
Let's start by installing a NodeExporter:
pi@televisione:~ $ sudo apt-get install prometheus-node-exporter
Once the installation process finishes, we can test our endpoint:
pi@televisione:~ $ curl localhost:9100/metrics | less
You should see a bunch of metrics being emitted. Done.
Create a New Docker Image
Thankfully, there is already a Prometheus image on Docker Hub, so all we need to do is change the config file to suit our needs.
Extract default config file from Docker image:
```shell
$ docker create --name prom_empty prom/prometheus
$ docker cp prom_empty:/etc/prometheus/prometheus.yml ./prometheus.yml
```
Use your favorite editor and open ./prometheus.yml. You'll see a block similar to this:
```yaml
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']
```
We are going to add our endpoints in the scrape_configs section. So let's add our test endpoint (comments removed for readability):
```yaml
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'televisione'
    static_configs:
      - targets: ['192.168.1.224:9100']
```
The second job name is called "televisione" because that Raspberry Pi is attached to my smart TV (TV = televisione in Italian). Feel free to change that name. Now we need to create our Dockerfile, telling Docker to copy our config file over for Prometheus to use. Here's the content:
```dockerfile
FROM prom/prometheus
COPY ./prometheus.yml /etc/prometheus/prometheus.yml
```
And finally we need to build our image:
$ docker build -t prometheus/cluster-local .
Once the command succeeds, we can run Prometheus:
$ docker run -p 9091:9090 --restart=always --name prometheus-local -d prometheus/cluster-local
Note that I'm mapping port 9091 on the host to port 9090 in the container. That's because I'm going to run another Prometheus instance on that Raspberry Pi as part of a separate tool that I'll discuss in a follow-up post. You can leave it as 9090:9090 if you prefer.
When you want to stop the container, type in: docker rm -f prometheus-local. Remember that if you make any changes to the config file, you will need to build the image again, then stop the container and run it again as above.
Now you can go to http://localhost:9091/ and see Prometheus up and running. Click on Status -> Targets in the top menu to see a list of targets. There should be only two at the moment:
- The local Prometheus instance and
- The test NodeExporter we set up earlier
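With both targets up, you can try the query language from the Graph tab. As a quick taste of PromQL, here's a sketch of a common query (it assumes the node_exporter metric node_cpu_seconds_total; on older node_exporter versions the metric is named node_cpu instead):

```promql
# Per-core CPU usage rate over the last 5 minutes, excluding idle time
rate(node_cpu_seconds_total{mode!="idle"}[5m])
```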
Deploying to the Raspberry Pi
Install Docker on your Raspberry Pis. The instructions differ depending on the OS; I'll cover Raspbian and Ubuntu Server, since those are what I run.
Install Docker on Ubuntu Server
Do not use the install script here; do it properly via the repository instead. The official instructions are here, and a shortened version follows below.
In essence what we have to do is install a few dependencies first, then add the Docker repository to our repository list, and finally install Docker.
```shell
$ sudo apt-get update

$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```
Remember to always validate any key you get from the Internet. You can do so by searching for the last 8 characters of the fingerprint:
```shell
$ sudo apt-key fingerprint 0EBFCD88

pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <email@example.com>
sub   rsa4096 2017-02-22 [S]
```
If the fingerprint looks like this one you are good to go:
9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
I have installed 64-bit Ubuntu on my Raspberry Pi 4s (8 GB RAM), so to add the repository I need to type this:
```shell
$ sudo add-apt-repository \
    "deb [arch=arm64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
```
If you have a different architecture, replace arm64 with the appropriate value (for example, amd64 on a regular x86-64 PC).
```shell
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
```
Install Docker on Raspbian
Unfortunately with Raspbian we cannot use the repository, so we need to use the convenience script:
```shell
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
```
Now that we have Docker running, we can copy our config file and Dockerfile over to the Raspberry Pi and start our container. For convenience, I uploaded everything we need to this GitHub repository: https://github.com/DanieleSalatti/Prometeo
If you do check that out, remember to change your config files. It would probably be a good idea to create a fork, so you can track your own changes there.
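For example, if you end up monitoring several Pis, your own scrape_configs might grow into something like this (just a sketch; the 'raspberry-pis' job name and the IP addresses are placeholders for your own hosts):

```yaml
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # A single job can scrape several node_exporter instances
  - job_name: 'raspberry-pis'
    static_configs:
      - targets: ['192.168.1.224:9100', '192.168.1.225:9100']
```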
Once everything is moved over to the Pi, all that's needed is to build and run our image again:
```shell
$ docker build -t prometheus/cluster-local .
$ docker run -p 9091:9090 --restart=always --name prometheus-local -d prometheus/cluster-local
```
Named Docker volumes aren't the most convenient way to persist data here, so I added a little script to the GitHub repository that starts our Prometheus image in a slightly different way, using a bind mount. The script is called run.sh. Let's quickly take a look:
```shell
#!/bin/bash

mkdir -p /media/usb-ssd-1/prometeo  # creates a folder for your data

ID=$(id -u)  # saves your user id in the ID variable

docker run -p 9091:9090 --restart=always \
  --user $ID \
  --volume "/media/usb-ssd-1/prometeo:/prometheus" \
  --name prometheus-local -d prometheus/cluster-local
```
As you can see, at the top of the file we create a new folder if it doesn't already exist (the -p flag takes care of that). After that we start our image with one additional parameter: --volume "/media/usb-ssd-1/prometeo:/prometheus". That lets us mount a folder on the host at a path inside the container, so Prometheus data survives container restarts and rebuilds. We also run the image as our own user, to ensure we have the correct read/write permissions on the folder we just created. Be sure to change the run.sh file to suit your needs before attempting to start it.
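If you want to see how those two pieces behave before touching run.sh, here's a tiny stand-alone sketch (it uses a throwaway directory under /tmp instead of the real mount point):

```shell
#!/bin/bash
# -p makes mkdir succeed even if the directory already exists,
# so a script using it is safe to run repeatedly
mkdir -p /tmp/prometeo-demo
mkdir -p /tmp/prometeo-demo  # second call is a harmless no-op

# id -u prints our numeric user id; run.sh passes it to `docker run --user`
# so the files Prometheus writes to the bind mount belong to us
echo "container would run as uid $(id -u)"
```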
Little bonus: in the GitHub repository I also included a second little script to run a Grafana image. No additional config is needed since Grafana can be configured from the UI itself. Just like the previous script it will attempt to create a folder and start the image with a volume mapped to that folder for data persistence. Be sure to tweak it as needed.
Oh, and be sure to import this dashboard once you start playing with your data. It's pretty well done.
What About Alerting?
I'm not going to look at alerting for now, but given that this will be a series, remember to tune in and check for updates. Better yet, subscribe :)
Alright, that's it for now.