Deploying with docker - django

I am new to deploying with Docker. I am running my Django app on my computer inside a Docker container, and it runs successfully at localhost:8080. I then pulled the code onto a remote server and ran docker-compose up, and the app runs successfully there as well. My question is: how can I reach the app using the server's IP address? For example, if the IP address is 123.45.67.89, I would expect the app to be reachable at 123.45.67.89:8080, but it isn't. How can I access the app running in a container on the remote server?
P.S. I have not used nginx, should I use it?
docker-compose.yml

The answer to this question greatly depends on where you are hosting your production application, and what type of services it provides you out of the box.
In general, production servers usually have some reverse proxy or application load balancer sitting in front of the containerized application(s).
Since you are starting with docker, and since I am assuming this is a personal or small scale app, I can recommend the following:
If you are flexible in terms of hosting providers, try Digital Ocean. They are very developer friendly, and cost effective, at least until a certain scale point.
Use the automated docker nginx-proxy. This tool lets you add a couple of lines to your docker-compose.yml file, and magically get a configured nginx proxy, without knowing anything about nginx.
I am using this approach to deploy multiple personal websites to a single, low cost server.
An example docker-compose.yml might look like this:
services:
  nginx:
    image: nginxproxy/nginx-proxy
    ports: ["${PORT:-80}:80"]
    restart: always
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      DEFAULT_HOST: www.yoursite.com
  app:
    depends_on: [nginx]
    restart: always
    image: your/image
    environment:
      VIRTUAL_HOST: myapp.localhost,www.yoursite.com
which basically tells nginx-proxy to serve your app on both http://myapp.localhost and http://www.yoursite.com.
Of course, you will need to point your domain's DNS to your DigitalOcean IP.

Technically, it should work the way you did it, but maybe the port 8080 is not open to the outside world.
You could change the port mapping in your docker-compose.yml file:
ports:
  - "80:8080"
You can then access your app at 123.45.67.89, without specifying a port, since 80 is the default HTTP port. If it doesn't work, double-check the IP address and your firewall rules.
However, using Nginx is almost always a good idea, since the local development server you are using is not production-ready (feature- and security-wise). I won't explain how to set up Nginx here because it's a little off topic and there are plenty of resources available, but you should seriously consider it when deploying on a remote server.

Related

Plausible analytics on a server with a webapp

I have Django hosted with Nginx on DigitalOcean. Now I want to install Plausible Analytics. How do I do this? And how do I change the Nginx config so that I can reach the Plausible dashboard at mydomain/plausible, for example?
Set up Plausible, either by running the software directly or in a Docker container - let's say it listens on port 8080.
Then, in your nginx.conf, you should already have a server block for your domain.
Within that, add a location block for the path you want Plausible served on, with a proxy_pass directive that forwards requests to localhost:8080 (see the sketch below).
Monitor access.log and error.log to debug any issues that come up.
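A rough sketch of what that location block could look like (the /plausible/ path, port 8080, and the proxy headers are assumptions - adjust them to your setup):

# inside the existing server block for your domain
location /plausible/ {
    # Plausible assumed to be listening on localhost:8080
    proxy_pass http://127.0.0.1:8080/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

After editing, reload with nginx -s reload (or systemctl reload nginx) and watch access.log / error.log as mentioned above.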

NGINX Docker on Server with pre-existing NGINX on Ubuntu Server

I am currently running into an issue with one of my projects, which will run in Docker on my Ubuntu server with an NGINX Docker container handling the reverse proxy for the Django project. The issue is that I already have other Django projects running on that particular Ubuntu server, so port 80 is already taken by an NGINX server block running directly on the host.
Is there a workaround to run my Docker NGINX alongside the Ubuntu NGINX and have my Docker image act as an "add-on" site? The Django sites already hosted there are client websites, so I would prefer not to interfere with them if I don't have to.
My project needs HTTPS because it serves data to a React Native app running on Android API 28, which has a security rule that blocks non-HTTPS connections in the app. If anyone else has run into an issue like this, I would gladly appreciate advice on how to tackle it.
I have tried running NGINX in Docker on port 81 instead of port 80 and that works perfectly, but I don't think there is a way to make a secure connection to port 81, is there?
Thanks in advance.
You can't just change the default HTTP ports for a public endpoint - browsers use 80 and 443 by default. If you change those, your users would have to connect to your.server.com:81 or something similar. Nobody would do that for a public server, but it can be an option for a private one.
I think a reasonable way out of this is to use the host's NGINX to proxy requests into Docker's NGINX (if it makes sense to keep the latter at all). You can handle HTTPS termination on the host's NGINX and pass plain HTTP into the Docker one.
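For illustration, a minimal sketch of the host-side config for that approach (the domain and certificate paths are placeholders, and port 81 assumes the Docker NGINX is published there, as in your experiment):

# host NGINX: terminate HTTPS here and forward plain HTTP to the Dockerized NGINX
server {
    listen 443 ssl;
    server_name newproject.example.com;

    # placeholder certificate paths (e.g. from Let's Encrypt)
    ssl_certificate     /etc/letsencrypt/live/newproject.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/newproject.example.com/privkey.pem;

    location / {
        # Docker NGINX published on 127.0.0.1:81
        proxy_pass http://127.0.0.1:81;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

The existing client sites keep their server blocks on ports 80/443 untouched; this just adds one more virtual host on the host's NGINX.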
Another adequate option is to use another server, so that everything works with no dirty hacking involved.

Using Traefik with EC2, how does frontend rule change url

I am new to using a proxy, so I didn't quite follow why this isn't working. Not sure if this is a Traefik question, or a simple "I don't know how routing works" question.
I followed the Traefik tutorial as shown on their website here: https://docs.traefik.io/
Their docker-compose.yml looks like this:
version: '3'
services:
reverse-proxy:
image: traefik # The official Traefik docker image
command: --api --docker # Enables the web UI and tells Traefik to listen to docker
ports:
- "80:80" # The HTTP port
- "8080:8080" # The Web UI (enabled by --api)
volumes:
- /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
whoami:
image: containous/whoami # A container that exposes an API to show its IP address
labels:
- "traefik.frontend.rule=Host:whoami.docker.localhost"
So now I wanted to run this same yml file on my EC2 instance. I made a change to the last line so that it looks like this instead:
- "traefik.frontend.rule=Host:whoami.docker.<ec2-XXX>.<region>.compute.amazonaws.com"
So I assumed that if I visited http://whoami.docker.<ec2-XXX>.<region>.compute.amazonaws.com, I would see my whoami app's response. However, I get a response from my ISP saying that the website does not exist. If I access http://<ec2-XXX>.<region>.compute.amazonaws.com:8080 I can see my Traefik console fine.
I think it has got to do with web addresses, and that you can only have two labels before the site name, like x.y.website.com, and the URL I am using to access my EC2 instance already uses up those two slots. I am unsure what to search for.
Do I need to register a site/ buy a domain first?
How would I connect this site to my ec2 instance?
Was I correct as to why http://whoami.docker.<ec2-XXX>.<region>.compute.amazonaws.com was not working?

Error 99 connecting to localhost:6379. Cannot assign requested address

Setup:
I have a virtual machine, and inside it three containers are running (an nginx proxy, a very minimalistic flask app and redis). Flask should be serving on port 5000 and redis on 6379.
Each of these containers is up and running just fine as a standalone service, but they are also available together via docker-compose.
In the flask app, my aim is to connect to redis and query for some keys.
The nginx container exposes port 80, flask port 5000 and redis port 6379.
In the flask app I have a function that tries to create a redis client
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
Running the flask app, I get an error saying the port cannot be used:
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I am at a loss as to what could be causing this problem, and any ideas would be appreciated.
In the flask app I have a function that tries to create a redis client
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
When your flask process runs in a container, localhost refers to the network interface of the container itself. It does not resolve to the network interface of your docker host.
So you need to replace localhost with the IP address of the container running redis.
In the context of a docker-compose.yml file, this is easy as docker-compose will make service names resolve to the correct container IP address:
version: "3"
services:
my_flask_service:
image: ...
my_redis_service:
image: ...
then in your flask app, use:
db = redis.Redis(host='my_redis_service', port=6379, decode_responses=True)
I had this same problem, except the service I wanted my container to access was remote and mapped via ssh tunnel to my Docker host. In other words, there was no docker-compose service for my code to find. I solved the problem by explicitly telling redis to look for my local host as a string:
pyredis.Redis(host='docker.for.mac.localhost', port=6379)
For anyone using plain docker to run a container: you can add --network=host to the command, as in docker run --network=host, to make the container use the host's network.
You can also use a host network for a swarm service, by passing --network host to the docker service create command.
Make sure you don't publish any ports while doing this (e.g. -p 80:8000).
I am not sure if Docker Compose supports this.
N.B. this is only supported on Linux.
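For example, a minimal sketch (the image name is a placeholder):

# Linux only: the container shares the host's network stack,
# so localhost:6379 inside the container reaches Redis on the host
docker run --network=host my-flask-app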

Vagrant to Docker communication via Docker IP address

Here's my situation. We are slowly migrating our VMs from Vagrant to Docker, but we are mostly still Docker newbies. Some of our newer systems' development environments have already been moved to Docker. We have test code that runs on an older Vagrant VM and used to communicate with another Vagrant VM running a Django RESTful API application in order to run integration tests. This Django API is now in a Docker container, so we need the test code running in a Vagrant VM to be able to make requests to the API running in Docker. Both the Docker container and the Vagrant VM are running side by side on a host machine (macOS). We are using Docker Compose to initialize the Docker container; the main compose yaml file is shown below.
services:
  django-api:
    ports:
      - "8080:8080"
    build:
      context: ..
      dockerfile: docker/Dockerfile.bapi
    extends:
      file: docker-compose-base.yml
      service: django-api
    depends_on:
      - db
    volumes:
      - ../:/workspace
    command: ["tail", "-f", "/dev/null"]
    env_file:
      - ${HOME}/.pam_environment
    environment:
      - DATABASE_URL=postgres://postgres:password@db
      - PGHOST=db
      - PGPORT=5432
      - PGUSER=postgres
      - PGPASSWORD=password
      - CLOUDAMQP_URL=amqp://rabbitmq
  db:
    ports:
      - "5432"
    extends:
      file: docker-compose-base.yml
      service: db
    volumes:
      - ./docker-entrypoint-initdb.d/init-postgress-db.sh:/docker-entrypoint-initdb.d/init-postgress-db.sh
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: django-api-dev
I would like the tests that are running on the Vagrant VM to still be able to communicate with the Django application that's now running in Docker, similar to the way they could communicate with the API when it was running in a Vagrant VM. I have tried several different types of network configurations within the docker-compose file, but alas, networking is not my strong suit and I'm really just shooting in the dark here.
Is there a way to configure my docker container and/or my vagrant so that they can talk to each other? I need to expose my docker container's IP address so that my vagrant can access it.
Any help/tips/guidance here would be greatly appreciated!
In your Vagrantfile, make sure you have a private, host-only network. I usually use one with a fixed IP:
config.vm.network "private_network", ip: "192.168.33.100"
Now both VMs will get a static IP on the host-only network. When you run docker-compose up -d in your Django VM, it will map port 8080 to the container's port 8080, so you can use 192.168.33.100:8080 from the other VM to test the APIs.
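For example, from the VM that runs the tests you could verify connectivity with something like this (the /api/health endpoint is just a placeholder):

# run inside the test VM; hits the Django API published by docker-compose in the other VM
curl http://192.168.33.100:8080/api/health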
"I would like the tests that are running on the vagrant to still be able to communicate with the django application that's now running on docker, similar to the way it could communicate with the api when it was running in a vagrant."
As you say, you are using docker-compose, so exposing ports will serve the purpose you are looking for. In the yml file where the django application is defined, create a port mapping that binds a port on the host to the port in the container. You can do this by including:
ports:
  - "<host_port_where_you_want_to_access>:<container_port_where_application_is_running>"
"Is there a way to configure my docker container and/or my vagrant so that they can talk to each other? I need to expose my docker container's IP address so that my vagrant can access it."
It is. If both containers are on the same network (when services are declared in the same compose file, all services are on the same network by default), then one container can talk to another by using its service name.
Example: in the yml file given in the question, django-api can reach db at http://db:xxxx/ where xxxx can be any port inside the container. xxxx does not need to be mapped to the host or exposed.
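For instance, assuming the psql client is available in the django-api image, you could verify this from inside that container (credentials taken from the compose file above):

# "db" resolves to the db container's IP on the compose network; no published port required
docker-compose exec django-api psql -h db -p 5432 -U postgres -d django-api-dev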