How to Set Up Service Monitoring for Python Projects with Grafana and Prometheus
In this article, we will explore how to set up service monitoring for Python projects with Prometheus and Grafana using Docker containers. This approach provides an efficient way to observe specific events in a project, such as database calls, API interactions, and resource usage. With service monitoring in place, you can quickly detect unusual behavior and find useful clues when troubleshooting issues.
Real Case Scenario
We set up a temporary service to redirect requests from specific websites until Google completed indexing for those web pages. We monitored the redirect counts using service monitoring to track progress regularly. Once the number of redirects decreased, it indicated that the traffic had migrated to the target website. At this point, the service was no longer necessary and could be shut down.
For event-driven architectures, it is important to monitor queue workloads and instance resource usage to detect potential bottlenecks early. In fact, every service benefits from monitoring to gain insight into its performance.
Setting up Docker containers
We will be running all of our services on Docker containers locally. In larger companies, there is often a global service for Prometheus and Grafana that includes all microservice monitoring projects. This means that you may not even need to write any deployment pipelines for service monitoring tools.
To start, we need to create a docker-compose file that includes the required services.
docker-compose.yml

```yaml
version: "3.3"
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ${PWD}/prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    hostname: grafana
    image: grafana/grafana
    ports:
      - "3000:3000"
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - prometheus
    ports:
      - "8000:8000"
    command: ["python3", "app/main.py"]
```
The configuration above outlines the services needed to run a Python application alongside a Prometheus monitoring system. The `prometheus.yml` file is a key part of this configuration, as it tells Prometheus how to pull data (metrics) from the `app` service. This file contains the scrape instructions for the custom metrics that are unique to each project.

Note that without the `prometheus.yml` file mounted into the Docker container, the custom metrics generated by the project will not be visible. Therefore, create a new `prometheus.yml` file at the root level of your project to ensure that your metrics are properly collected and displayed.
prometheus.yml

```yaml
global:
  scrape_interval: 15s # how often Prometheus pulls data from exporters
  evaluation_interval: 30s # time between evaluations of Prometheus' alerting rules
scrape_configs:
  - job_name: app # your project name
    static_configs:
      - targets:
          - app:8000
```
Now, Prometheus will pull data from our project.
The other settings in the compose file are self-explanatory and less critical than the `prometheus` configuration described above.
Create a new Python project
Let's create a simple Python app that tracks time spent and requests made. First, create a new folder named `app` at the root level of the project. Don't forget to include an `__init__.py` file, which will mark it as a Python package.

Next, create a file named `main.py`, which will hold the main program logic. Add the following code to the file:
app/main.py

```python
from prometheus_client import start_http_server, Summary
import random
import time

# Create a metric to track time spent and requests made.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

# Decorate function with metric.
@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
```
Here, we are using a Python package named `prometheus_client` to interact with Prometheus. It makes it easy to create the different types of metrics that our project requires.

The code above is taken from the official `prometheus_client` documentation; it creates a new metric named `request_processing_seconds` that measures the time spent processing each request. We'll cover other types of metrics later in this post.
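Before wiring up Grafana, it helps to see what the client actually exposes: a Summary publishes several series, not just one. You can preview the exposition text without running a server via `generate_latest`. A minimal sketch, using a throwaway registry so it doesn't interfere with the default one:

```python
from prometheus_client import CollectorRegistry, Summary, generate_latest

# A separate registry keeps this sketch self-contained.
registry = CollectorRegistry()
REQUEST_TIME = Summary('request_processing_seconds',
                       'Time spent processing request',
                       registry=registry)

# Record two observations by hand instead of timing a function.
REQUEST_TIME.observe(0.25)
REQUEST_TIME.observe(0.75)

# The exposition text contains _count and _sum series for the Summary.
text = generate_latest(registry).decode()
print(text)
```

The output is the same plain-text format Prometheus scrapes from `/metrics`, with a running count of observations and their sum.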
Now, let's create a `Dockerfile` and a `requirements.txt` to build our project.
Dockerfile

```dockerfile
FROM python:3.9-slim-buster

WORKDIR /app

RUN apt update
RUN pip3 install --upgrade pip

COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

COPY app app

ENV PYTHONUNBUFFERED=1
ENV PYTHONPATH=/app

CMD ["python3", "app/main.py"]
```
requirements.txt

```
prometheus-client
```
Now, start the services to see everything in action:

```shell
docker-compose up -d
```
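Once the stack is up, you can confirm that Prometheus discovered the `app` scrape target through its HTTP API at http://localhost:9090/api/v1/targets. A minimal sketch of reading that response; the payload below is a trimmed, hypothetical example that follows the documented shape of the v1 targets endpoint:

```python
import json

# Trimmed, hypothetical example of the JSON that Prometheus's
# /api/v1/targets endpoint returns (shape per the HTTP API docs).
payload = json.loads("""
{
  "status": "success",
  "data": {
    "activeTargets": [
      {"labels": {"job": "app", "instance": "app:8000"}, "health": "up"}
    ]
  }
}
""")

def target_health(payload):
    """Map each scrape target instance to its health status."""
    return {
        t["labels"]["instance"]: t["health"]
        for t in payload["data"]["activeTargets"]
    }

print(target_health(payload))  # {'app:8000': 'up'}
```

If the `app` target shows `"health": "down"`, double-check that `prometheus.yml` is mounted and that the target port matches `start_http_server(8000)`.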
Setting up Grafana
In this section, we will use Prometheus as a data source to display metrics in Grafana charts.
To begin, navigate to http://localhost:3000 to access the Grafana login page. Use `admin` for both the username and the password. After logging in, you will be prompted to set a new password; because we are testing locally, we can keep it the same as the current one.
Upon successful login, you will see the default Grafana dashboard. From there, select Data Sources.
Next, select Prometheus as a data source:
Then it will ask for the URL that the `prometheus` service is running on, which is the Docker service name we created: http://prometheus:9090.

Finally, click the Save & Test button to check the data source:
Great! Now our Grafana is ready to illustrate the metrics that come from Prometheus.
Let's now navigate to http://localhost:3000/dashboards to create a new dashboard and add a new panel. Click New Dashboard and then New Panel to get started:
Next, select Code inside the Query panel and type `request_processing_seconds`. You will see your custom metric offered with three different suffixes (`_count`, `_sum`, and `_created`): the client exposes a Summary as several series, so Prometheus stores different views of your data by default.
Select one of the options and click Run query to see it in the chart:
Finally, we can see the metrics of our project illustrated by Grafana very nicely.
Other Metric Types
There are many metric types available, depending on the project's requirements. If we need to count specific events, such as record updates in the database, we can use `Counter()`. If we have a message queue, such as Kafka or RabbitMQ, we can use `Gauge()` to show the number of items waiting in the queue.
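As a sketch of the queue case: unlike a counter, a `Gauge` can move up and down as items are enqueued and consumed. The metric name `queue_items` below is hypothetical, and the example uses its own registry plus `get_sample_value` (the helper the client provides for testing) to read the current value:

```python
from prometheus_client import CollectorRegistry, Gauge

# A separate registry keeps this sketch self-contained.
registry = CollectorRegistry()

# Hypothetical metric for a queue-backed worker.
QUEUE_ITEMS = Gauge('queue_items', 'Items waiting in the queue',
                    registry=registry)

QUEUE_ITEMS.inc(5)   # five messages arrived
QUEUE_ITEMS.dec(2)   # two were consumed
QUEUE_ITEMS.set(10)  # or set the depth directly from the broker

value = registry.get_sample_value('queue_items')
print(value)  # 10.0
```

In a real worker you would call `set()` with the queue depth reported by Kafka or RabbitMQ on each poll.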
To add another metric in `main.py` and connect Prometheus with Grafana, follow the same steps as before.
```python
from prometheus_client import start_http_server, Summary, Counter
import random
import time

# Create a metric to track time spent and requests made.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')
UPDATE_COUNT = Counter('update_count', 'Number of updates')

# Decorate function with metric.
@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
        UPDATE_COUNT.inc(random.randint(1, 100))
```
Here, we added a `Counter()` to track the number of database updates. Note that the client exports counters with a `_total` suffix, so you will query this one as `update_count_total` in Grafana. Don't forget to rebuild the Docker image for all services:
```shell
docker-compose up -d --build
```
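A handy way to sanity-check a metric like `UPDATE_COUNT` without spinning up the whole stack is the client's `get_sample_value` testing helper. A small sketch, again using a throwaway registry, which also shows the `_total` suffix that counters are exported with:

```python
from prometheus_client import CollectorRegistry, Counter

# A separate registry keeps this sketch self-contained.
registry = CollectorRegistry()
UPDATE_COUNT = Counter('update_count', 'Number of updates',
                       registry=registry)

UPDATE_COUNT.inc(3)
UPDATE_COUNT.inc(4)

# Counters are read back under their exported name, with _total appended.
value = registry.get_sample_value('update_count_total')
print(value)  # 7.0
```

The same pattern works in unit tests: call the instrumented code, then assert on the sample value instead of scraping a live endpoint.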
Source Code
Conclusion
Setting up service monitoring with Prometheus and Grafana using Docker containers is a great way to efficiently track specific events and analyze the performance of your project. By implementing service monitoring, you can easily detect unusual behavior and troubleshoot issues. We hope this article has provided useful insights into how to set up service monitoring for projects with Prometheus and Grafana.