How to set up Prometheus with Django REST Framework and Docker - django

I want to monitor my database using Prometheus, Django REST Framework, and Docker, all on my local machine. The error occurs at the URL http://127.0.0.1:9000/metrics (http://127.0.0.1:9000 is the base of my API), and I don't know what the problem is. My configuration is below.
my requirements.txt
django-prometheus
my docker file: docker-compose-monitoring.yml
version: '2'
services:
  prometheus:
    image: prom/prometheus:v2.14.0
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
  grafana:
    image: grafana/grafana:6.5.2
    ports:
      - 3060:3060
my folder and file prometheus/prometheus.yml
global:
  scrape_interval: 15s
rule_files:
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - 127.0.0.1:9090
  - job_name: monitoring_api
    static_configs:
      - targets:
          - 127.0.0.1:9000
my file settings.py
INSTALLED_APPS = [
    ...
    'django_prometheus',
]
MIDDLEWARE = [
    'django_prometheus.middleware.PrometheusBeforeMiddleware',
    ...
    'django_prometheus.middleware.PrometheusAfterMiddleware',
]
my models.py
from django_prometheus.models import ExportModelOperationsMixin

class MyModel(ExportModelOperationsMixin('mymodel'), models.Model):
    """all my fields in here"""
my urls.py
url('', include('django_prometheus.urls')),
The application runs fine, and 127.0.0.1:9090/metrics works, but Prometheus is only monitoring itself, and I need it to monitor my API as well. I think the problem is in prometheus.yml, because I don't know how to point it at my table or my API. Please help me.
bye.

You need to change your Prometheus config and add a Python image to docker-compose, like this:
Prometheus config (prometheus.yml):
global:
  scrape_interval: 15s      # how often Prometheus pulls data from exporters etc.
  evaluation_interval: 30s  # time between each evaluation of Prometheus' alerting rules
scrape_configs:
  - job_name: django_project  # your project name
    metrics_path: /metrics
    static_configs:
      - targets:
          - web:8000
docker-compose file for Prometheus and Django (you can also include a Grafana image; I have installed Grafana locally):
version: '3.7'
services:
  web:
    build:
      context: .  # path to your Dockerfile (here, the root dir)
    command: sh -c "python3 manage.py migrate &&
             gunicorn webapp.route.wsgi:application --pythonpath webapp --bind 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml  # prometheus.yml in the root dir
Dockerfile:
FROM python:3.8
COPY ./webapp/django /app
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
For the django-prometheus settings in Django, see https://pypi.org/project/django-prometheus/
Hit your Django app's APIs.
Hit the localhost:8000/metrics endpoint.
Hit localhost:9090/, search for the required metric in the dropdown, and click Execute; it will show the result in the console and draw a graph.
To show the graph in Grafana, hit localhost:3000 and create a new dashboard.
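To check that the Django side is actually exporting before blaming the scrape config, here is a minimal sketch, assuming the compose stack above is up and the app answers on localhost:8000:
import urllib.request

# Fetch the raw Prometheus exposition text from the django-prometheus endpoint
body = urllib.request.urlopen("http://localhost:8000/metrics").read().decode()

# Print the django_http_* series exported by the middleware
for line in body.splitlines():
    if line.startswith("django_http"):
        print(line)
If these series show up here but not in Prometheus, the problem is the scrape target, not the Django setup.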

Related

How to deploy django based website running on a docker compose to my local home network?

I have the following setup:
docker-compose.yml
# Compose file format version
version: "3.9"
# services, i.e. the containers
services:
  # web service
  web:
    # re-run with the --build flag whenever a node package is added
    build: .
    # add additional commands for webpack to watch for changes and bundle for production
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - type: bind
        source: .
        target: /code
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      - "DJANGO_SECRET_KEY=django-insecure-m#x2vcrd_2un!9b4la%^)ou&hcib&nc9fvqn0s23z%i1e5))6&"
      - "DJANGO_DEBUG=True"
    expose:
      - 8000
  db:
    image: postgres:13
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    # unsure of what this environment means
    environment:
      - "POSTGRES_HOST_AUTH_METHOD=trust"
      # - "POSTGRES_USER=postgres"
# volumes set-up
volumes:
  postgres_data:
and a settings file with:
ALLOWED_HOSTS = ['0.0.0.0', 'localhost', '127.0.0.1']  # 127.0.0.1 is my localhost address
My host's IP is 192.168.0.214.
Can you please help me deploy the Django site on my host's local network?
Do I have to set up something on my router?
Or could you direct me towards resources (for understanding networking) that would help me understand this?
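For reference, a minimal sketch of the settings change that usually makes the dev server reachable from the LAN, assuming the host keeps the IP 192.168.0.214 and the server stays bound to 0.0.0.0:8000 as in the compose file above:
# settings.py
# Allow requests addressed to the host's LAN IP as well as localhost;
# combined with runserver on 0.0.0.0:8000 and the published port, other
# devices on the same network can then browse to http://192.168.0.214:8000.
ALLOWED_HOSTS = ['0.0.0.0', 'localhost', '127.0.0.1', '192.168.0.214']
No router changes should be needed for access from within the same network.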

Django + ElasticSearch + Docker - Connection Timeout no matter what hostname I use

I've been having issues connecting to my Elasticsearch container since day 1.
First I used elasticsearch as the hostname, then I tried the container name web_elasticsearch_1, and finally I set a static IP address on the container and passed that in my configuration file.
PyPI packages:
django==3.2.9
elasticsearch==7.15.1
elasticsearch-dsl==7.4.0
docker-compose.yml
version: "3.3"
services:
  web:
    build:
      context: .
      dockerfile: local/Dockerfile
    image: project32439/python
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - local/python.env
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    networks:
      default:
        ipv4_address: 172.18.0.10
settings.py
# Elasticsearch
ELASTICSEARCH_HOST = "172.18.0.10"
ELASTICSEARCH_PORT = 9200
service.py
from django.conf import settings
from elasticsearch import Elasticsearch, RequestsHttpConnection

es = Elasticsearch(
    hosts=[{"host": settings.ELASTICSEARCH_HOST, "port": settings.ELASTICSEARCH_PORT}],
    use_ssl=False,
    verify_certs=False,
    connection_class=RequestsHttpConnection,
)
traceback
HTTPConnectionPool(host='172.18.0.10', port=9200): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f1973ebd6d0>, 'Connection to 172.18.0.10 timed out. (connect timeout=5)'))
By default Docker Compose uses a bridge network to provision inter-container communication. You can read more about this network at the Debian Wiki.
What matters for you is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file. So update your file:
version: "3.3"
services:
  web:
    build:
      context: .
      dockerfile: local/Dockerfile
    image: project32439/python
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    env_file:
      - local/python.env
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
And now you can connect with elasticsearch:9200 instead of 172.18.0.10 from your web container. For more info see this article.
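The matching change on the Django side would then look like this sketch; service.py can stay exactly as it is, since it already reads the host and port from settings:
# settings.py
# Use the compose service name as the hostname instead of a hard-coded IP.
ELASTICSEARCH_HOST = "elasticsearch"
ELASTICSEARCH_PORT = 9200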

Building elasticsearch indexes from another container

I have a Django project that uses django-elasticsearch-dsl. The project is dockerized, so Elasticsearch and the web project live in separate containers.
Now my goal is to recreate and repopulate the indices by running
python manage.py search_index --rebuild
In order to do that, I try to run the command from the container of the web service the following way:
docker-compose exec web /bin/bash
> python manage.py search_index --rebuild
Not surprisingly, I get an error
Failed to establish a new connection: [Errno 111] Connection refused
apparently because Python tried to connect to Elasticsearch using localhost:9200.
So the question is, how do I tell the management command the host where Elasticsearch lives?
Here's my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    restart: "no"
    command: ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
    env_file: &envfile
      - .env
    environment:
      - DEBUG=True
    ports:
      - "${DJANGO_PORT}:8000"
    networks:
      - deploy_network
    depends_on:
      - elasticsearch
      - db
  elasticsearch:
    image: 'elasticsearch:2.4.6'
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - deploy_network
  db:
    image: "postgres"
    container_name: "postgres"
    restart: "no"
    env_file: *envfile
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
networks:
  deploy_network:
    driver: bridge
UPDATE:
In the Django project's settings I set up the elasticsearch-dsl host:
# settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'localhost:9200'
    }
}
Since your Django project and Elasticsearch run in two separate containers, setting ELASTICSEARCH_DSL's host to 'localhost:9200' won't work; in that case localhost refers to localhost inside the Django container.
So you need to set it like this:
# settings.py
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'elasticsearch:9200'
    }
}
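To confirm the hostname resolves before rebuilding the indices, a quick check from inside the web container; this is a sketch, assuming a Python elasticsearch client compatible with the 2.4.6 server in the compose file:
from elasticsearch import Elasticsearch

# The compose service name doubles as the hostname on the shared network.
es = Elasticsearch(hosts=["elasticsearch:9200"])
print(es.ping())  # True once the connection works
After that, python manage.py search_index --rebuild should reach the right host.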

django-redis connection error inside docker

django views.py
import os

import jwt
import redis
from access import utils
from django.http import JsonResponse

redis_url = os.environ['REDIS_URI']
# from_url parses the redis:// URI; passing the URI as the host argument would fail
R = redis.StrictRedis.from_url(redis_url)

def set(request):
    R.set('foo', 'bar')
    return JsonResponse({"code": 200, "msg": "success"})
docker-compose
version: "3"
services:
  rango:
    container_name: rango
    build: ./
    command: python backend/manage.py runserver 0.0.0.0:8000
    # command: npm start --prefix frontend/rango-frontend/
    working_dir: /usr/src/rango
    environment:
      REDIS_URI: redis://redis_db:6379
    ports:
      - "8000:8000"
    tty: true
    links:
      - elasticsearch
      - node
      - redis
  # elastic search
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    ports:
      - "9200:9200"
  # node
  node:
    image: node:10.13.0
  # redis
  redis:
    image: redis
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - "6379:6379"
Here I am connecting to Redis from Django inside Docker.
It is giving me a connection refused exception.
Please have a look at my code and the screenshot shared below.
By default, Docker Compose makes containers discoverable with a hostname identical to the service name. Your Redis container is thus discoverable via the hostname redis. However, your Django container is using the hostname redis_db.
Update your docker-compose.yml and change REDIS_URI to reference the correct hostname:
REDIS_URI: redis://redis:6379
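A quick way to verify the fix from inside the rango container, as a sketch assuming the corrected REDIS_URI is in the environment:
import os

import redis

# from_url parses the scheme, hostname, and port out of the URI
r = redis.StrictRedis.from_url(os.environ["REDIS_URI"])
print(r.ping())  # True once "redis" resolves to the redis service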

Browserify doesn't work in Docker container

I'm setting up my project, and I configured Browserify to handle my front-end assets and refresh the browser.
For the back end I'm using Django, so I made a proxy between both so they work at the same time:
gulpfile.js
// Start a server with BrowserSync to preview the site in
function server(done) {
  browser.init({
    // server: PATHS.dist, //port: PORT
    proxy: 'localhost:8000',
    notify: false
  });
  done();
}
But it doesn't work when I bring the project up with docker-compose; it simply doesn't show me anything:
sass_1 | [BS] Proxying: http://localhost:8000
sass_1 | [BS] Access URLs:
sass_1 | -----------------------------------
sass_1 | Local: http://localhost:3000
sass_1 | External: http://172.18.0.7:3000
sass_1 | -----------------------------------
sass_1 | UI: http://localhost:3001
sass_1 | UI External: http://172.18.0.7:3001
sass_1 | -----------------------------------
sass_1 | [BS] Couldn't open browser (if you are using BrowserSync in a headless environment, you might want to set the open option to false)
It works fine when I run it on my computer without Docker, but with Docker it can't open my browser and I can't get onto port 3000.
I had the same problem with django-debug-toolbar, and I solved it by adding the internal IPs taken from the Docker configuration.
I tried changing the port inside the gulpfile to the gateway IP (the solution for the earlier bug), but it doesn't work.
My docker-compose file is:
version: '2'
services:
  web:
    build: .
    image: uzman
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "3000:3000"
      - "8000:8000"
    volumes:
      - .:/code
    depends_on:
      - npm
      - bower
      - sass
      - migration
      - db
      - redis
      - elasticsearch
  db:
    image: postgres:latest
    volumes:
      - .:/tmp/data/
  npm:
    image: uzman
    command: npm install
    volumes:
      - ./uzman/static:/code/uzman/static
    working_dir: /code/uzman/static
  bower:
    image: uzman
    command: bower install --allow-root
    volumes:
      - ./uzman/static:/code/uzman/static
    working_dir: /code/uzman/static
  elasticsearch:
    image: elasticsearch:latest
    command: elasticsearch -Enetwork.host=0.0.0.0
    ports:
      - "9200:9200"
      - "9300:9300"
  redis:
    image: redis:latest
  sass:
    image: uzman
    command: npm start
    volumes:
      - ./uzman/static:/code/uzman/static
    working_dir: /code/uzman/static
  migration:
    image: uzman
    command: python manage.py migrate --noinput
    volumes:
      - .:/code
Can anyone help me with this?
The problem is that Browserify isn't running at the same time, which is why you can't reach it.
So expose the ports for Browserify (3000 and the BrowserSync UI on 3001):
ports:
  - "3000:3000"
  - "3001:3001"
  - "8000:8000"
And execute npm inside the running web container:
docker exec -it uzman_web_1 bash -c "cd /code/uzman/static; exec npm start"
That's all.