How to connect a Redis server with Django when dockerizing it?

Currently I am working on a Django app and trying to work with a Redis server. I have added all the configuration settings for the Redis server in settings.py; my settings look like this:
redis_host = os.environ.get('REDIS_HOST', 'my-ip-of-reddis-server')
# Channel layer definitions
# http://channels.readthedocs.org/en/latest/deploying.html#setting-up-a-channel-backend
CHANNEL_LAYERS = {
    "default": {
        # This example app uses the Redis channel layer implementation asgi_redis
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [(redis_host, 6380)],
        },
        "ROUTING": "multichat.routing.channel_routing",
    },
}
It works fine when I run
python manage.py runserver or python manage.py runworker
but when I dockerize this Django app, it does not connect to the Redis server. It gives the following error:
redis.exceptions.ConnectionError: Error -2 connecting to redis:6380. Name or service not known.
2018-01-30 06:00:47,704 - ERROR - server - Error trying to receive messages: Error -2 connecting to redis:6380. Name or service not known.
My Dockerfile is this:
# FROM directive instructing base image to build upon
FROM python:3-onbuild
RUN apt-get update
ENV PYTHONUNBUFFERED 1
ENV REDIS_HOST "redis"
# COPY startup script into known file location in container
COPY start.sh /start.sh
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
#RUN python manage.py runserver
RUN daphne -b 0.0.0.0 -p 8000 --ws-protocol "graphql-ws" --proxy-headers multichat.asgi:channel_layer
# CMD specifies the command to execute to start the server running.
CMD ["/start.sh"]
# done!
And I have also tried this:
# FROM directive instructing base image to build upon
FROM python:3-onbuild
RUN apt-get update
ENV PYTHONUNBUFFERED 1
ENV REDIS_HOST "redis"
# COPY startup script into known file location in container
COPY start.sh /start.sh
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
RUN python manage.py runserver
RUN daphne -b 0.0.0.0 -p 8000 --ws-protocol "graphql-ws" --proxy-headers multichat.asgi:channel_layer
# CMD specifies the command to execute to start the server running.
CMD ["/start.sh"]
# done!
But this does not make a connection when I containerize my Django app.
Can anybody please tell me where I am wrong? How can I dockerize my Django app so that it connects to the Redis server whose settings are placed in settings.py?
Any help or suggestion will be highly appreciated.
Thanks.
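For reference, the hostname redis in the error only resolves if a container with that name is reachable on the same Docker network. A minimal docker-compose sketch of that setup (the service names and file layout here are assumptions, not from the question) could look like:
# docker-compose.yml -- hypothetical sketch; service names are assumptions
version: "3"
services:
  redis:
    image: redis:alpine      # the service name "redis" is what REDIS_HOST resolves to
    ports:
      - "6379:6379"
  web:
    build: .                 # builds from the Dockerfile shown above
    environment:
      - REDIS_HOST=redis
    ports:
      - "8000:8000"
    depends_on:
      - redis
Note that a stock Redis image listens on 6379, while the CHANNEL_LAYERS config above points at 6380, so the port would have to be aligned as well.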

Related

How to get to postgres database at localhost from Django in Docker container [duplicate]

This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
(40 answers)
Closed last year.
I have a Django app in a Docker container that must access a Postgres database on my localhost. The Dockerfile works fine when accessing a database residing on an external host, but it can't find the database on my host.
It is a well-known problem and there is a lot of documentation, but it didn't work in my case. This question resembles another question, but that did not solve my problem. I actually describe the correct solution to this problem, as @Zeitounator pointed out, but it still did not work at first. It was thanks to @Zeitounator that I realised two parts of the problem must be solved: the Docker side and the PostgreSQL side. I did not find that solution in any of the answers I read. I did, however, read about the same frustration: getting a solution that did not work.
It all comes down to which address I pass to the HOST key in the database dictionary in the Django settings.py:
print('***', os.environ['POSTGRES_DB'], os.environ['POSTGRES_USER'],
      os.environ['POSTGRES_HOST'], os.environ['POSTGRES_PORT'])

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # os.environ['ENGINE'],
        'NAME': 'demo',
        'USER': os.environ['POSTGRES_USER'],
        'PASSWORD': os.environ['POSTGRES_PASSWORD'],
        'HOST': os.environ['POSTGRES_HOST'],
    }
}
And running the Dockerfile:
docker build -t dd .
docker run --name demo -p 8000:8000 --rm dd
When POSTGRES_HOST points to my external server 192.168.178.100 it works great. When running python manage.py runserver it finds the host and the database. The server starts and waits for commands. When pointing to 127.0.0.1 it fails (which actually is great too: the container really isolates).
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
But since I can connect to my external server, I should be able to connect to the localhost IP as well, yet that fails just the same (I forgot to mention that the database really is running). When I use host.docker.internal it doesn't work either when running:
docker run --name demo -p 8000:8000 --rm --add-host host.docker.internal:host-gateway dd
It replaces host.docker.internal by 172.17.0.1. The only solution that works so far is:
# set POSTGRES_HOST at 127.0.0.1
docker run --name demo -p 8000:8000 --rm --network host dd
but that seems like something I don't want. As far as I understand the documentation, it makes the full host network stack available to the container, which defeats the idea of a container in the first place. And second: it doesn't work in docker-compose, even though I specify:
extra_hosts:
  - "host.docker.internal:host-gateway"
network_mode: host leads to an error; docker-compose refuses to run.
Is there a way to access a service running on my PC from a Docker container that also runs on my PC, both from a Dockerfile and from docker-compose, and that can also be deployed?
I'm using Ubuntu 21.04 (latest version), Docker version 20.10.7, build 20.10.7-0ubuntu5.1, PostgreSQL v14. My Dockerfile:
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
RUN mkdir /app
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver", "127.0.0.1:8000"]
There are two parts to solving this problem: the Docker part and the PostgreSQL part, as @Zeitounator pointed out. I was not aware of the Postgres part; thanks to that comment I could resolve this issue. And it works for the Dockerfile as well as for docker-compose where this Dockerfile is used.
One has to change two Postgres configuration files, both in /etc/postgresql/<version>/main:
postgresql.conf
Change the listen address. Initially it shows:
listen_addresses = 'localhost'        # what IP address(es) to listen on;
                                      # comma-separated list of addresses;
                                      # defaults to 'localhost'; use '*' for all
                                      # (change requires restart)
I changed 'localhost' to '*'. It can be more specific, in this case the Docker gateway address, but as a proof of concept this worked.
pg_hba.conf
In my case host.docker.internal resolves to 172.17.0.1. This seems to be the default Docker gateway address, as I noticed in most discussions regarding this subject.
Add two lines at the very end of the file:
host all all 172.17.0.1/0 md5
host all all ::ffff:ac11:1/0 md5
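With the Postgres side opened up like this, the Docker-side command from the question can then point the app at host.docker.internal. A sketch (the environment variable names match the settings.py above; the exact values are assumptions):
# hypothetical invocation; image name and values are assumptions
docker run --name demo -p 8000:8000 --rm \
    --add-host host.docker.internal:host-gateway \
    -e POSTGRES_HOST=host.docker.internal \
    -e POSTGRES_PORT=5432 \
    dd
In docker-compose, the extra_hosts entry shown earlier plays the same role as --add-host.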

Running app with `eb local run` works, but can't actually connect to app

I've got a very straightforward Flask application running in a single Docker container. When I run the app using eb local run it "works" in that the Docker image is built, and I eventually see the log output from Flask letting me know it's ready for requests. But when I actually attempt to query the running application, the requests fail immediately with errors saying 'the site can't be reached'. It seems like the app is running in the container, but somehow the ports aren't exposed correctly? Also, when I run it using docker run ... it works completely and I'm able to query the application.
Command I'm using:
eb local run --port 5000 --envvars APP_ENV=LOCAL
My Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": 5000,
      "HostPort": 5000
    }
  ]
}
My Dockerfile:
FROM python:3
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "application.py" ]
My .elasticbeanstalk/config.yml:
branch-defaults:
  python3:
    environment: fs-service-prod
environment-defaults:
  fs-service-prod:
    branch: null
    repository: null
global:
  application_name: followspot-service
  default_ec2_keyname: null
  default_platform: arn:aws:elasticbeanstalk:us-east-1::platform/Docker running on 64bit Amazon Linux/2.12.11
  default_region: us-east-1
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: git
  workspace_type: Application
Output of eb local status:
Platform: 64bit Amazon Linux 2018.03 v2.12.11 running Docker 18.06.1-ce
Container name: 6739a687fa2d18f1c683926f024c88bed9f5c6c7
Container ip: 127.0.0.1
Container running: True
Exposed host port(s): 5000
Full local URL(s): 127.0.0.1:5000
Thanks so much for any help you can give me and let me know if there's a good way to go about getting more helpful info.
Figured it out! Turns out this was an issue with how I was starting my Flask app. Because Flask by default listens on 127.0.0.1, while Docker needs the app to listen on 0.0.0.0 so that published ports can reach it, I needed to change how the app is started in my Dockerfile.
Instead of:
ENTRYPOINT [ "python" ]
CMD [ "application.py" ]
I had to change it to:
ENV FLASK_APP application.py
ENTRYPOINT ["python", "-m", "flask", "run", "--host=0.0.0.0"]
Then everything worked as expected.
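An equivalent alternative, assuming the Flask app object lives in application.py (this sketch is an assumption, not part of the original answer), is to bind to 0.0.0.0 in code and keep the original ENTRYPOINT/CMD:
# application.py -- hypothetical sketch; module and route are assumptions
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ok"

if __name__ == "__main__":
    # Bind to all interfaces so the port published by Docker/EB can reach the app
    app.run(host="0.0.0.0", port=5000)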

Django - How to check if server is running in ASGI or in WSGI mode?

We are running the same Django project in WSGI mode for handling HTTP requests and in ASGI mode for handling WebSockets. For WSGI mode we are using the gunicorn3 server:
gunicorn3 --pythonpath . -b 0.0.0.0:8000 chat_bot.wsgi:application
For ASGI mode we are using daphne server:
daphne --root-path . -b 0.0.0.0 -p 8001 chat_bot.asgi:application
How can I programmatically detect which mode is currently running: Gunicorn+WSGI or Daphne+ASGI?
One possibility:
Inside of your wsgi.py file you can set an environment variable to one value you won't be setting anywhere else:
os.environ.setdefault('SERVER_GATEWAY_INTERFACE', 'Web')
And then inside of asgi.py set it to a different value:
os.environ.setdefault('SERVER_GATEWAY_INTERFACE', 'Asynchronous')
Then in other parts of your code, just check against the environment variable:
if os.environ.get('SERVER_GATEWAY_INTERFACE') == 'Web':
    ...  # WSGI, do something
else:
    ...  # ASGI, do something else
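Put together, a minimal sketch of the two entry point files (assuming a project package named chat_bot as in the commands above; a Channels setup may build the ASGI application differently, and get_asgi_application requires Django 3.0+):
# chat_bot/wsgi.py -- sketch
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'chat_bot.settings')
os.environ.setdefault('SERVER_GATEWAY_INTERFACE', 'Web')
application = get_wsgi_application()

# chat_bot/asgi.py -- sketch
import os
from django.core.asgi import get_asgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'chat_bot.settings')
os.environ.setdefault('SERVER_GATEWAY_INTERFACE', 'Asynchronous')
application = get_asgi_application()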

How to connect to redis with dokku and flask?

I wanted to use Redis with dokku and Flask. The first issue was installing a current version of dokku; I am using the latest version from the repo now.
The second problem shows up in the Flask debugger:
redis.exceptions.ConnectionError
ConnectionError: Error 111 connecting to None:6379. Connection refused.
I set the Redis URL and port in Flask:
app.config['REDIS_URL'] = 'IP:32768'
-----> Checking status of Redis
remote: Found image redis/landing
remote: Checking status...stopped.
remote: Launching redis/landing...COMMAND: docker run -v /home/dokku/.redis/volume-landing:/var/lib/redis -p 6379 -d redis/landing /bin/start_redis.sh
-----> Setting config vars
REDIS_URL: redis://IP:6379
REDIS_IP: IP
REDIS_PORT: 6379
Any idea? Should REDIS_URL be set in a different way?
This code works OK on localhost:
https://github.com/kwikiel/bounce
(with ['REDIS_IP'] = '172.17.0.13' set to 127.0.0.1)
The problem appears when I try to connect to Redis on dokku.
Steps to use Redis with Flask and dokku:
Install the Redis plugin:
cd /var/lib/dokku/plugins
git clone https://github.com/ohardy/dokku-redis redis
dokku plugins-install
Link your Redis container to the application container:
dokku redis:create [name of app container]
You will receive info about the environment variables that you will have to set, for example:
Host: 172.17.0.91
Public port: 32771
Then set these settings in Flask (or another framework):
app.config['REDIS_URL'] = 'redis://172.17.0.91:6379'
app.config['REDIS_IP'] = '172.17.0.91'
app.config['REDIS_PORT'] = '6379'
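For completeness, a minimal sketch of how these config values might be used to build the client (the redis-py calls below are an assumption; the answer itself only lists the settings):
# hypothetical sketch -- connecting Flask to Redis using the config above
import redis
from flask import Flask

app = Flask(__name__)
app.config['REDIS_IP'] = '172.17.0.91'
app.config['REDIS_PORT'] = '6379'

r = redis.StrictRedis(host=app.config['REDIS_IP'],
                      port=int(app.config['REDIS_PORT']),
                      db=0)

@app.route('/ping')
def ping():
    # Round-trip a value through Redis to confirm the connection works
    r.set('ping', 'pong')
    return r.get('ping')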
Complete example of a Redis database used with a Flask app (A/B testing in Flask):
https://github.com/kwikiel/bounce

Why do I receive permission denied in a Docker deployment?

I have created an application in Elastic Beanstalk to host a Play Framework 2 app there, using instructions from this project.
I have packaged the project exactly as Docker needs, but when I upload the final zip to the application I receive a permission denied error in this flow:
Environment update is starting.
Deploying new version to instance(s).
Successfully pulled dockerfile/java:latest
Successfully built aws_beanstalk/staging-app
Docker container quit unexpectedly after launch: Docker container quit unexpectedly on Fri Sep 12 23:32:44 UTC 2014: 2014/09/12 23:32:39 exec: "bin/my-sample-project": permission denied. Check snapshot logs for details.
I have spent hours on this without any success.
This is the content of my root Dockerfile:
FROM dockerfile/java
MAINTAINER Cristi Boariu <myemail>
EXPOSE 9000
ADD files /
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon", "."]
USER daemon
ENTRYPOINT ["bin/mytweetalerts"]
CMD []
Any hint on how to solve this issue?
Here's what I did to solve this same issue, though I'm not sure which part specifically solved it.
My Dockerfile looks like:
FROM dockerfile/java
MAINTAINER yourNameHere
EXPOSE 9000 9443
ADD files /
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon", "."]
# Make sure myApp is executable
RUN ["chmod", "+x", "bin/myApp"]
USER daemon
# If running a t1.micro or other memory limited instance
# be sure to limit play memory. This assumes play 2.3.x
ENTRYPOINT ["bin/myApp", "-mem", "512", "-J-server"]
CMD []
See https://www.playframework.com/documentation/2.3.x/ProductionConfiguration for info on setting jvm memory.
My Dockerrun.aws.json (also required) looks like:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
Finally, my Play application lives in files/opt/docker with the run script in docker/bin. All this is zipped up and sent to EB.
Add a chmod command to make your file executable:
RUN ["chmod", "+x", "bin/myApp"]
So your Dockerfile will be:
FROM dockerfile/java
MAINTAINER Cristi Boariu <myemail>
EXPOSE 9000
ADD files /
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon", "."]
USER daemon
RUN ["chmod", "+x", "bin/myApp"]
ENTRYPOINT ["bin/mytweetalerts"]
CMD []
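To check the fix before uploading the zip to EB, one option (the image tag is just an example) is to build the image locally and list the permissions of the startup script:
# hypothetical local check; "myplayapp" is an example tag
docker build -t myplayapp .
docker run --rm --entrypoint ls myplayapp -l /opt/docker/bin
# the start script should show the executable bit, e.g. -rwxr-xr-x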