Can't connect to elasticache redis from docker container running in EC2 - amazon-web-services

As part of my CI process, I create a docker-machine EC2 instance and run two Docker containers inside it via docker-compose. The server container's test script attempts to connect to an AWS ElastiCache Redis instance in the same VPC as the EC2 instance. When the test script runs I get the following error:
1) Storage
check cache connection
should return seeded value in redis:
Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/usr/src/app/test/scripts/test.js)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)
Update: I can connect via redis-cli from the EC2 itself:
redis-cli -c -h ***.cache.amazonaws.com -p 6379 ping
> PONG
It looks like I can't connect to my Redis instance because my Docker container is using an IP that is not within the same VPC as my ElastiCache instance. How can I set up my Docker config to use the same IP as the host machine while building my containers from remote images? Any help would be appreciated.
Relevant section of my docker-compose.yml:
version: '3.8'
services:
  server:
    build:
      context: ./
      dockerfile: Dockerfile
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    ports:
      - "8080:8080"
      - "6379:6379"
    env_file: ./.env
Server container Dockerfile:
FROM node:12-alpine
# create app dir
WORKDIR /usr/src/app
# install dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 8080 6379
CMD ["npm", "run", "dev"]
Elasticache redis SG inbound rules:
EC2 SG inbound rules:

I solved the problem through extensive trial and error. The major hint that pointed me in the right direction was found in the Docker docs:
By default, the container is assigned an IP address for every Docker network it connects to. The IP address is assigned from the pool assigned to the network...
ElastiCache instances are only accessible internally, from within their VPC. With my config, the Docker container and the EC2 instance were running on two different IP addresses, but only the EC2 IP was whitelisted to connect to ElastiCache.
I had to bind the Docker container to the host EC2 instance's network in my docker-compose.yml by setting the container's network_mode to "host":
version: '3.8'
services:
  server:
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    ports:
      - "8080:8080"
      - "6379:6379"
    network_mode: "host"
    env_file: ./.env
...
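As a side note on that design choice: with network_mode: "host" the container shares the EC2 instance's network stack, so Docker ignores the ports mapping (Compose typically prints a warning about it). A trimmed sketch of the same service, assuming the same image and env file, could look like:
version: '3.8'
services:
  server:
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    # host networking: the container uses the EC2 instance's IP directly,
    # so it can reach ElastiCache exactly as the host can and the app is
    # reachable on the host's port 8080 without any port mappings.
    network_mode: "host"
    env_file: ./.env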

Related

Set ports on a Docker compose file for Amazon ECS

I am following this tutorial to deploy my app to AWS using docker compose.
If I use docker compose up I get this error:
published port can't be set to a distinct value than container port: incompatible attribute
This is the docker-compose.yml:
version: "3"
services:
  www:
    image: my_image_path:latest
    ports:
      - "8001:80"
    volumes:
      - ./www:/var/www/html/
    links:
      - db
    networks:
      - default
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:4.8
    links:
      - db:db
    ports:
      - 8000:80
    environment:
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test
I have two services whose containers listen on port 80, so I cannot just use 80:80 for both of them.
Any ideas?
You need to change one of your Docker images to listen on another port. Docker Compose deploys to AWS Fargate here, and there are some restrictions in Fargate that prevent your configuration from working:
Multiple containers in a single Fargate task have to listen on distinct ports.
The published port can't be different from the port the container is listening on. If you need to change the port clients connect to, that can be done in the ALB/Target Group settings instead of the container settings.
Since one of your images is phpmyadmin, I suggest simply adding an environment variable to that image, APACHE_PORT: 8000, which changes the port the Apache web server in that container listens on to 8000. Then you can set the port mapping on that container to 8000:8000 and the port mapping on your www container to 80:80.
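As a rough sketch of what that could look like in the compose file (APACHE_PORT and the image references are taken from the question and answer above; treat this as an outline rather than a tested config, with volumes/links omitted):
version: "3"
services:
  www:
    image: my_image_path:latest
    ports:
      # Fargate: published port must equal the container port
      - "80:80"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:4.8
    environment:
      # make the bundled Apache listen on 8000 instead of 80
      APACHE_PORT: 8000
    ports:
      - "8000:8000"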

Unable to access chronograf UI running on EC2 instance port 8888 in a docker container

I started Chronograf on an EC2 instance on port 8888. My EC2 instance is running Ubuntu. When I ssh into the instance and do
curl <my_ec2_public_DNS>:8888
I get a valid response.
Now when I try to go to the URL http://<my_ec2_public_DNS>:8888 from a browser on my computer, my request always times out. I am able to ping <my_ec2_public_DNS> from my computer.
My EC2 security inbound rules are as follows:
Please help.
Edit 1: I am running Chronograf in a Docker container.
chronograf:
  image: chronograf:latest
  volumes:
    - ./my_dir/data/:/var/lib/chronograf/
  ports:
    - "8888:8888"
  depends_on:
    - influxdb

How to connect to ElastiCache instance (Redis) from Django App running on local machine

I'm struggling to find a way to properly set up the connection between my Django app running on my local machine and my ElastiCache instance.
Let me summarize the situation.
CONFIG:
I have a Django app deployed on an AWS EC2 instance and run with a docker-compose.yml file. I'm using ElastiCache (Redis) for my cache.
MY ISSUE:
I can successfully connect to my ElastiCache instance from my EC2 instance. I can use Redis, create keys, etc. Everything works perfectly.
I am able to connect to ElastiCache from my Django App when I run it with my docker-compose.yml file within my EC2 Instance.
I can also use Redis on my ElastiCache from my local machine by creating a sort of bridge with my EC2 instance using this command:
ssh -f -N -L 6379:{my_elasticache_amazon_url}:6379 ec2-user@{my_ec2_instance_url}
Then I run the following command and have access to redis:
redis-cli -h 127.0.0.1 -p 6379
127.0.0.1:6379>
But I can't access ElastiCache from my Django app running on my local machine! I need to set up this connection for dev and test purposes, before deploying to the EC2 instance.
WHERE I AM:
I tried to connect directly to ElastiCache using the URL in the Django app, but access is not allowed since the security group in AWS is set to only accept connections from the EC2 instance.
redis.exceptions.ConnectionError: Error 111 connecting to {my_elasticache_url}:6379. Connection refused.
I tried to put 127.0.0.1 & localhost as the connection URL, since I made a link between my local machine and the EC2 instance with the previous command, but it's not working; I get the same error:
redis.exceptions.ConnectionError: Error 111 connecting to 127.0.0.1:6379. Connection refused.
I already tried to set network_mode: "host" in my docker-compose file but it's not working since I have some port binding.
RESOURCES:
Code in my Django app that connects to Redis:
import redis
import os
r = redis.Redis(
    host=os.environ.get('CLUSTER_HOST', default="127.0.0.1"), port=6379, db=0)
my_key = r.get('my_key')
Port Listening command on my local machine:
lsof -nP +c 15 | grep LISTEN
ssh 22744 username 7u IPv6 0x254006fa8c767523 0t0 TCP [::1]:6379 (LISTEN)
ssh 22744 username 8u IPv4 0x254006fa9cfbe91b 0t0 TCP 127.0.0.1:6379 (LISTEN)
Here's my docker-compose.yml file:
version: '3.9'

x-database-variables: &database-variables
  POSTGRES_DB: ${POSTGRES_DB}
  POSTGRES_USER: ${POSTGRES_USER}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  ALLOWED_HOSTS: ${ALLOWED_HOSTS}

x-app-variables: &app-variables
  <<: *database-variables
  POSTGRES_HOST: ${POSTGRES_HOST}
  SPOTIPY_CLIENT_ID: ${SPOTIPY_CLIENT_ID}
  SPOTIPY_CLIENT_SECRET: ${SPOTIPY_CLIENT_SECRET}
  SECRET_KEY: ${SECRET_KEY}
  CLUSTER_HOST: ${CLUSTER_HOST}
  DEBUG: 1

services:
  website:
    build:
      context: .
    restart: always
    volumes:
      - static-data:/vol/web
    environment: *app-variables
    depends_on:
      - postgres
  postgres:
    image: postgres
    restart: always
    environment: *database-variables
    volumes:
      - db-data:/var/lib/postgresql/data
  proxy:
    build:
      context: ./proxy
    restart: always
    depends_on:
      - website
    ports:
      - 80:80
      - 443:443
      - 6379:6379
    volumes:
      - static-data:/vol/static
      - ./files/templates:/var/www/html
      - ./proxy/default.conf:/etc/nginx/conf.d/default.conf
      - ./etc/letsencrypt:/etc/letsencrypt

volumes:
  static-data:
  db-data:
Inbound Rules of the Security Group of my ElastiCache Instance:
Inbound Rules of the Security Group of my EC2 Instance:
I found a solution using AWS VPN, as explained in this tutorial.
You just have to set up a VPN network linked to the ElastiCache instance; you can then set permissions to allow specific users to connect to it (or you can make it public). After that, you just have to connect to the VPN from your local machine, as explained at the end of the documentation.
You can then use Redis the same way I do on my EC2 instance.
From your machine:
redis-cli -c -h your-elasticache-name.l5ljap.0001.use2.cache.amazonaws.com -p 6379
From your Django App:
import redis
r = redis.Redis(
    host="your-elasticache-name.l5ljap.0001.use2.cache.amazonaws.com", port=6379, db=0)
Tip: Don't forget to add a route that has Internet access, otherwise you won't be able to reach the Internet from your machine while connected to the VPN (which can be annoying when your app makes API requests, for example).

How to run prometheus on docker-compose and scrape django server running locally?

I am trying to set up Prometheus to monitor my Django application using django-prometheus and Docker Compose. I've been following some guides online, but unlike all the guides I've seen, I want to run Django locally for now (simply python manage.py runserver) and run Prometheus with docker-compose (and later add Grafana). I want to test this locally first; later I will deploy it to Kubernetes, but that is another episode.
My issue is getting the locally running Django server to communicate on the same network as the Prometheus container, because I get this error on the /targets page of the Prometheus dashboard:
Get "http://127.0.0.1:5000/metrics": dial tcp 127.0.0.1:5000: connect: connection refused
These are my docker-compose file and prometheus configuration:
docker-compose.yml
version: '3.6'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
prometheus.yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - localhost:9090
  - job_name: django-app
    static_configs:
      - targets:
          - localhost:8000
          - 127.0.0.1:8000
If you run the Django app outside of a container (and outside of Docker Compose), then, when it runs, it will bind to one of the host's ports.
You need to get the Docker Compose prometheus service to bind to the host's network too.
You should be able to do this using network_mode: host under the prometheus service.
Then Prometheus will be able to access the Django app on the host port that it's using, and Prometheus will be accessible as localhost:9090 (without needing the ports section).
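A minimal sketch of that change, assuming Django is running on the host at port 8000 via python manage.py runserver:
version: '3.6'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    # share the host's network stack: Prometheus can scrape the locally
    # running Django server at localhost:8000, and the dashboard stays
    # reachable at localhost:9090 without a ports section.
    network_mode: host
With that in place, the django-app target in prometheus.yaml can stay as localhost:8000.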

AWS EC2 Instance can't be reached even though docker containers running

I'm trying to get my Laravel app running on EC2 with Docker containers. I have two containers: one for the app and one for nginx. I created the EC2 instance with docker-machine and I've also built the Docker images successfully.
Running docker-compose up also completes successfully. If I run docker ps I see the two containers running.
So with the two containers running, I would expect to go to http://ec2-ip-addy-here.compute-1.amazonaws.com/ and see the app. My hunch is that something isn't set up correctly on the AWS side, maybe the VPC? I'm a novice with AWS, so I don't know what to look for. Any ideas?
I'm following this guide https://hackernoon.com/stop-deploying-laravel-manually-steal-this-docker-configuration-instead-da9ecf24cd2e
I'm also using the Laradock nginx Dockerfile and my own Dockerfile for the app.
EDIT:
It could be the networks that are created by docker-compose. I say that because I just checked and the network name is prefixed with the service name. When I run docker network ls I see a network called php-fpm_backend. Here's my docker-compose.yml file:
version: '3'

networks:
  backend:
    driver: bridge

services:
  ### PHP-FPM ##############################################
  php-fpm:
    image: php-fpm
    container_name: php-fpm
    build:
      context: ../
      dockerfile: ./laradock/php-fpm/Dockerfile-Prod
      args:
        - LARADOCK_PHP_VERSION=7.2
        - INSTALL_PGSQL=true
        - INSTALL_PG_CLIENT=true
        - INSTALL_POSTGIS=true
    expose:
      - "9000"
    networks:
      - backend

  ### NGINX Server #########################################
  nginx:
    image: nginx
    container_name: nginx
    build:
      context: ../
      dockerfile: ./laradock/nginx/Dockerfile-Prod
      args:
        - http_proxy
        - https_proxy
        - no_proxy
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - php-fpm
    networks:
      - backend
I figured this out. It was as I thought: I had to add a new security group with port 80/443 access for HTTP and HTTPS.