Vagrant to Docker communication via Docker IP address - django

Here's my situation. We are slowly migrating our VMs from Vagrant to Docker, but we are mostly still Docker newbies. Some of our newer systems' development environments have already been moved to Docker. We have test code that runs on an older Vagrant VM and used to communicate with another Vagrant VM running a Django RESTful API application in order to run integration tests. This Django API is now in a Docker container. So now we need the test code that runs in a Vagrant VM to be able to make requests to the API running in Docker. Both the Docker container and the Vagrant VM are running side by side on a host machine (macOS). We are using Docker Compose to initialize the Docker container; the main compose YAML file is shown below.
services:
  django-api:
    ports:
      - "8080:8080"
    build:
      context: ..
      dockerfile: docker/Dockerfile.bapi
    extends:
      file: docker-compose-base.yml
      service: django-api
    depends_on:
      - db
    volumes:
      - ../:/workspace
    command: ["tail", "-f", "/dev/null"]
    env_file:
      - ${HOME}/.pam_environment
    environment:
      - DATABASE_URL=postgres://postgres:password@db
      - PGHOST=db
      - PGPORT=5432
      - PGUSER=postgres
      - PGPASSWORD=password
      - CLOUDAMQP_URL=amqp://rabbitmq
  db:
    ports:
      - "5432"
    extends:
      file: docker-compose-base.yml
      service: db
    volumes:
      - ./docker-entrypoint-initdb.d/init-postgress-db.sh:/docker-entrypoint-initdb.d/init-postgress-db.sh
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: django-api-dev
I would like the tests that are running on the Vagrant VM to still be able to communicate with the Django application that's now running in Docker, similar to the way they could communicate with the API when it was running in a Vagrant VM. I have tried several different types of network configurations within the Docker Compose file, but alas, networking is not my strong suit and I'm really just shooting in the dark here.
Is there a way to configure my Docker container and/or my Vagrant VM so that they can talk to each other? I need to expose my Docker container's IP address so that my Vagrant VM can access it.
Any help/tips/guidance here would be greatly appreciated!

In your Vagrantfile, make sure you have a private, host-only network. I usually use them with a fixed IP:
config.vm.network "private_network", ip: "192.168.33.100"
Now both VMs will get a static IP on the host-only network. When you run docker-compose up -d in your Django VM, it will map port 8080 on the VM to port 8080 in the container, so you can use 192.168.33.100:8080 from the other VM for testing the APIs.
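A quick sanity check from the test VM would be something like the following (the path is just a placeholder for whatever your API actually serves):
# from the test VM: reach the API through the Django VM's host-only IP
curl -i http://192.168.33.100:8080/api/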

I would like the tests that are running on the vagrant to still be
able to communicate with the django application that's now running on
docker, similar to the way it could communicate with the api when it
was running in a vagrant.
As you say, you are using docker-compose, so exposing ports will serve the purpose you are looking for. In the yml file where the Django application is defined, create a port mapping that binds a port on the host to the port in the container. You can do this by including:
ports:
  - "<host_port_where_you_want_to_access>:<container_port_where_application_is_running>"
Is there a way to configure my docker container and/or my vagrant so
that they can talk to each other? I need to expose my docker
container's IP address so that my vagrant can access it.
It is. If both containers are on the same network (when services are declared in the same compose file, all services are on the same network by default), then one container can talk to another by using its service name.
Example: in the yml file given in the question, django-api can reach db at db:xxxx, where xxxx is whatever port the service listens on inside its container. xxxx does not need to be mapped to the host or exposed.
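If you want to confirm that this service-name DNS actually resolves, one quick check (assuming getent is available in the image) is:
docker-compose exec django-api getent hosts db
which should print the internal IP address that the db service name maps to on the Compose network.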

Docker-compose - Cannot connect to Postgres

It looks like a common issue: can't connect to Postgres from a Django app in Docker Compose.
Actually, I have tried several solutions from the web, but I'm probably missing something I cannot see.
The error I got is:
django.db.utils.OperationalError: could not translate host name "db" to address: Try again
Where the "db" should be the name of the docker-compose service and which must setup in the .env.
My docker-compose.yml:
version: '3.3'
services:
  web:
    build: .
    container_name: drf_app
    volumes:
      - ./src:/drf
    links:
      - db:db
    ports:
      - 9090:8080
    env_file:
      - /.env
    depends_on:
      - db
  db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
volumes:
  postgres_data:
My .env:
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=mydb
SQL_USER=myuser
SQL_PASSWORD=mypass
SQL_HOST=db #this one should match the service name
SQL_PORT=5432
As far as I know, web and db should automatically see each other on the same network, but this doesn't happen.
Inspecting the IP address with ifconfig in each container: the Django app has 172.17.0.2 and the db has 172.19.0.2. They are not able to ping each other.
The result of docker ps command:
400879d47887 postgres:13-alpine "docker-entrypoint.s…" 38 minutes ago Up 38 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp backend_db_1
I really cannot figure out the issue, so am I missing something?
I am writing this to save anyone in the future from the same issue.
After countless tries, I started thinking that nothing was wrong from the pure docker perspective: I was right.
SOLUTION: My only suspect was the execution inside a virtual machine, and indeed, running the same Docker images on the host worked like a charm!
The networking issue was related to the VM (VirtualBox, Ubuntu 20.04).
I do not know if there is a way to make docker-compose work inside a VM, so any suggestion is appreciated.
You said in a comment:
The command I run is the following: docker run -it --entrypoint /bin/sh backend_web
Docker Compose creates several Docker resources, including a default network. If you separately docker run a container it doesn't see any of these resources. The docker run container is on the "default bridge network" and can't use present-day Docker networking capabilities, but the docker-compose container is on a separate "user-defined bridge network" named backend_default. That's why you're seeing a couple of the symptoms you do: the two networks have separate IPv4 CIDR ranges, and Docker's container DNS resolution only happens for the current network.
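To illustrate the point, a one-off container can be attached to that Compose-created network explicitly, for example:
docker run -it --network backend_default --entrypoint /bin/sh backend_web
Inside that shell the db hostname now resolves, because the container sits on the same user-defined bridge network as the Compose services.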
There's no reason to start a container with an interactive shell and then start your application from inside it (any more than you would normally run python and then manually call main() from its REPL). Just start the entire application:
docker-compose up -d
If you do happen to need an interactive shell to debug your container setup or to run some manual tasks like database migrations, you can use docker-compose run for this. This honors most, but not all, of the settings in the docker-compose.yml file. In particular you can't docker-compose run an interactive shell and start your application from it, since it ignores ports:.
# Typical use: run database migrations
docker-compose run web \
./manage.py migrate
# For debugging: run an interactive shell
docker-compose run web bash

Deploying with docker

I am new to deploying with Docker. Currently I am running my Django app on my computer inside a Docker container, and it is running successfully on localhost:8080. Then I pulled the code to a remote server and ran docker-compose up, and the app is running successfully there as well. What I want to ask is: how can I see the app using the IP address of the server? For example, if the IP address is 123.45.67.89, I think the app should be reachable at 123.45.67.89:8080, but it does not respond there. How can I access the app running in a container on the remote server?
P.S. I have not used nginx, should I use it?
docker-compose.yml
The answer to this question greatly depends on where you are hosting your production application, and what type of services it provides you out of the box.
In general, production servers usually have some reverse proxy or application load balancer sitting in front of the containerized application(s).
Since you are starting with docker, and since I am assuming this is a personal or small scale app, I can recommend the following:
If you are flexible in terms of hosting providers, try Digital Ocean. They are very developer friendly, and cost effective, at least until a certain scale point.
Use the automated docker nginx-proxy. This tool lets you add a couple of lines to your docker-compose.yml file, and magically get a configured nginx proxy, without knowing anything about nginx.
I am using this approach to deploy multiple personal websites to a single, low cost server.
An example docker-compose.yml might look like this:
services:
  nginx:
    image: nginxproxy/nginx-proxy
    ports: ["${PORT:-80}:80"]
    restart: always
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      DEFAULT_HOST: www.yoursite.com
  app:
    depends_on: [nginx]
    restart: always
    image: your/image
    environment:
      VIRTUAL_HOST: myapp.localhost,www.yoursite.com
which basically tells the nginx-proxy to serve your app on both http://myapp.localhost and http://www.yoursite.com.
Of course, you will need to point your domain's DNS to your DigitalOcean IP.
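Before the DNS records propagate, you can check that the proxy routing works by sending the expected Host header straight to the server's IP (the hostname is taken from the example above, the IP is a placeholder):
curl -H "Host: www.yoursite.com" http://<your-droplet-ip>/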
Technically, it should work the way you did it, but maybe the port 8080 is not open to the outside world.
You could change the port mapping in your docker-compose.yml file:
ports:
  - "80:8080"
You can then access your app from 123.45.67.89, without any port specified since 80 is the default. If it doesn't work, double check the ip address and your firewall rules.
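A quick way to narrow it down on the server itself (sketch):
docker ps                  # the PORTS column should show 0.0.0.0:80->8080/tcp
curl -I http://localhost   # if this answers but the public IP does not, suspect a firewall / security-group rule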
However, using Nginx is almost always a good idea, since the local web server you are using is not production ready (feature- and security-wise). I won't explain how to implement Nginx here because it's a bit off topic and there are plenty of resources available, but you should seriously consider it when deploying to a remote server.

AWS ECR, ECS based deployment: nginx: [emerg] host not found in upstream "app:9000" in /etc/nginx/conf.d/default.conf:2

I am struggling to configure AWS ECS Task definition to run simple PHP-FPM and Nginx based containers.
My "app" container is running at app:9000 port and Nginx is in 80 port. Nginx is forwarding the request to app container through a fastcgi_pass app-upstream;
All of these are running perfectly in local. The same configuration is running perfectly in DigitalOcean Docker Instance but fails in AWS ECS.
I am guessing some task-definition configuration issue, but I can't find it.
Error Logs:
I am getting this log from the Nginx container:
nginx: [emerg] host not found in upstream "app:9000" in /etc/nginx/conf.d/default.conf:2
and this log from the App (PHP-FPM) container.
ECS
I've created a simple cluster with T2 Small instance, which is running perfectly.
Dockerfile
In this GitHub repo, I've added the Dockerfile for the app and Nginx images, the docker-compose.yml file and the task-defination.json file for reference, in case there is any mistake in the Docker files.
Source code in Github repo: https://github.com/arifulhb/docker-ecr-sample
Your issue is related to a wrong upstream path in the Nginx configuration.
Do the following things to investigate; also, avoid using custom container names in docker-compose files unless they are specifically needed.
Quick resolution:
Remove container names from the docker-compose files.
The service key name (for example test_app) mentioned in the docker-compose file is automatically treated as the container name, so use that.
After making the above changes, the correct upstream path should be test_app:9000.
Proper and recommended way of building docker-compose files:
Create a custom Docker network, say with the name "intranet".
Mention this "intranet" network in each service you create in your docker-compose file.
Follow the steps mentioned in the quick resolution.
How does this help you? You get the ability to inspect the network you created, figure out whether your containers are properly connected, and identify the names used for connections.
Command: docker network inspect <network_name>
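For example, if you declare the network as external in your compose file, you create it once up front and can then inspect it at any time (using the "intranet" name from above):
docker network create intranet
docker network inspect intranet   # lists the attached containers with their names and IPs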
NOTE: Docker treats container names as host names by default for internal connections.
When using multiple containers, the container name is very important for providing internal connectivity.
As I see from your Docker Compose file, the container name should match the name used in the nginx conf.
version: '3'
services:
  test_app:
    container_name: app # not test_app_1
    volumes:
      - test-app-data:/var/www/app
  test_nginx:
    image: "xxxxxx.dkr.ecr.xx-xx-1.amazonaws.com/test-nginx"
    build:
      context: ./docker/nginx
      dockerfile: Dockerfile
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - test-app-data:/var/www/app
      - test-nginx-log:/var/log/nginx
    external_links:
      - app # not test_app_1
    depends_on:
      - test_app
volumes:
  test-app-data:
  test-nginx-log:
I got the same problem too. My "blog" container runs on port 8000 (blog:8000) and the "nginx" container on port 80. The "nginx" container forwards requests to the "blog" container.
It was because I had not set "Links" in "NETWORK SETTINGS" for the "nginx" container at all.
So I put the name of the back container, "blog", in the front container "nginx" ("Links" in "NETWORK SETTINGS" for the "nginx" container).
Then both the "blog" and "nginx" containers ran properly.
So in your case, put the name of the back container, "app", in the front container "nginx" ("Links" in "NETWORK SETTINGS" for the "nginx" container). It will work. I used Adiii's solution.
Don't forget to put "CMD-SHELL, curl -f http://localhost:9000/ || exit 1" in "Command" under "HEALTHCHECK" for the "app" container.
Moreover, don't forget to put "app" in "Container name" and select "HEALTHY" in "Condition" under "STARTUP DEPENDENCY ORDERING" for the "nginx" container.

Unable to connect to Docker container: Connection Refused

I have a WAR file deployed as a Docker container on a Linux EC2 instance. But when I try to hit http://ec2-elastic-ip:8080/AppName, I don't get any response.
I have all the security group inbound rules set up for both HTTP and HTTPS, so that's not the problem.
Debugging
I tried debugging by SSHing into the Linux instance. I tried the command curl localhost:8080; this is the response:
curl: (7) Failed to connect to localhost port 8080: Connection refused
I tried with 127.0.0.1:8080, but got the same response.
The next thing I did was list the Docker containers with docker ps. I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<ID> <ecr>.amazonaws.com/<my>-registry:2019-05-16.12-17-02 "catalina.sh run" 24 minutes ago Up 24 minutes 0.0.0.0:32772->8080/tcp ecs-app-24-name
Now, I connected to this container using docker exec -it <name> /bin/bash and checked the Tomcat logs, which clearly show that my application WAR is there and Tomcat has started.
I even tried checking docker-machine ip default, but this gave me an error:
Docker machine "default" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
Now I am stuck and unable to debug further. The result I am expecting is to access the app through the URL above.
What to do? Is there something I am doing wrong?
Also, to mention, the entire infrastructure is managed through Terraform. I first create the base image, copy the WAR to webapps using the Dockerfile, push the registry image, and finally do a terraform apply to apply any changes.
Make sure that Apache is listening on all IP addresses inside the Docker container, not just localhost; the bind address should be 0.0.0.0.
If a service running inside Docker listens only on localhost, it can be reached only from inside that container, not from the host.
You can also try starting Apache on port 8080 and binding the container's port 8080 to port 8080 on the host:
docker run -p 8080:8080 apache
Currently your app is running on a random host port, i.e. 32772 (see the docker ps output). You should be able to access your app at http://ec2-ip:32772 once you allow port 32772 in the security group.
In order to make it work on host port 8080, you need to bind/publish the host port during docker run:
$ docker run -p 8080:8080 ......
If you are on ECS, ideally you should use an ALB and target group with your service.
However, if you are not using an ALB etc., you can try giving a static hostPort in the task definition, "hostPort": 8080 (I haven't tried this). If it works fine, you will need to change the deployment strategy to "minimum healthy percentage = 0", or else you might face port conflict issues.
If the application needs a network port, you must EXPOSE it in the Dockerfile.
EXPOSE <port> [<port>/<protocol>...]
In case you need that port to be mapped to a specific port on the network, you must define that when you spin up the new container.
docker run -p 8080:8080/tcp my_app
If you run each image separately, you must bind the port every time.
If you don't want to do this every time, you can use docker-compose and add the ports directive to it.
ports:
  - "8080:8080/tcp"
Supposing you added EXPOSE in the Dockerfile, the full docker-compose.yml would look like this:
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:8080"
  my_app:
    image: my_app

Error 99 connecting to localhost:6379. Cannot assign requested address

Setup:
I have a virtual machine, and inside it three containers are running (an nginx proxy, a very minimalistic flask app and redis). Flask should be serving on port 5000 and redis on 6379.
Each of these containers is up and running just fine as a standalone service, and they are also available via Docker Compose as a service.
In the flask app, my aim is to connect to redis and query for some keys.
The nginx container exposes port 80, flask port 5000 and redis port 6379.
In the flask app I have a function that tries to create a redis client
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
Running the flask app I am getting an error that the port cannot be used
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I lack clarity on what could be causing this problem, and any ideas would be appreciated.
In the flask app I have a function that tries to create a redis client
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
When your flask process runs in a container, localhost refers to the network interface of the container itself. It does not resolve to the network interface of your docker host.
So you need to replace localhost with the IP address of the container running redis.
In the context of a docker-compose.yml file, this is easy as docker-compose will make service names resolve to the correct container IP address:
version: "3"
services:
my_flask_service:
image: ...
my_redis_service:
image: ...
then in your flask app, use:
db = redis.Redis(host='my_redis_service', port=6379, decode_responses=True)
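As a quick check that the name actually resolves and Redis answers (this assumes the redis Python package is installed in the flask image, and uses the service names from the example above):
docker-compose exec my_flask_service python -c "import redis; print(redis.Redis(host='my_redis_service', port=6379).ping())"
This should print True once the two services are up on the same Compose network.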
I had this same problem, except the service I wanted my container to access was remote and mapped via ssh tunnel to my Docker host. In other words, there was no docker-compose service for my code to find. I solved the problem by explicitly telling redis to look for my local host as a string:
pyredis.Redis(host='docker.for.mac.localhost', port=6379)
For anyone using only docker to run a container: you can add --network=host to the command, like docker run --network=host, to make Docker use the host's network while running the container.
You can also use a host network for a swarm service by passing --network host to the docker service create command.
Make sure you don't publish any ports while doing this, like -p 80:8000.
I am not sure if Docker Compose supports this.
N.B. this is only supported on Linux.
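As a concrete sketch (the image name is just a placeholder), on a Linux host:
docker run --network=host my_flask_image
The container then shares the host's network namespace, so localhost:6379 inside the container reaches whatever is listening on the host's port 6379 (for example a Redis container that publishes that port), without any -p mappings.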