communicating from another docker container in ECS cluster - amazon-web-services

Let's take this part of docker-compose.yml
eurekaserver:
  image: XXXXX
  ports:
    - "8761:8761"
zuulserver:
  image: XXXXX
  ports:
    - "5555:5555"
    - "5556:5556"
  environment:
    PROFILE: "default"
    SERVER_PORT: "5555"
    DEBUG_PORT: "5556"
    EUREKASERVER_URI: "http://eurekaserver:8761/eureka/"
    EUREKASERVER_PORT: "8761"
On my local machine I can communicate from zuulserver to eurekaserver; I verified it with nc -z eurekaserver $EUREKASERVER_PORT. But when I run the same command from inside the zuulserver container in the ECS cluster, after deploying the images with ecs-cli compose --file docker-compose.yml up, I get
nc: getaddrinfo: Name does not resolve
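When debugging cases like this, it helps to separate a DNS failure from a refused connection. A minimal sketch, assuming getent and nc are available in the container image (the hostname and port are the ones from the compose file above):

```shell
# Distinguish "name does not resolve" (DNS) from "connection refused" (port).
probe() {
  local host=$1 port=$2
  if ! getent hosts "$host" > /dev/null 2>&1; then
    echo "name does not resolve"
  elif nc -z -w 2 "$host" "$port" 2>/dev/null; then
    echo "port open"
  else
    echo "port closed or filtered"
  fi
}

probe eurekaserver 8761
```

Inside the ECS task this reproduces the reported symptom ("name does not resolve"), which tells you the Compose service name simply is not registered in DNS there, while locally Compose's embedded DNS resolves it.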

Related

Calling Service A from B gives "connection refused" when using the ECS CLI with Docker Compose. What might be the reason?

I have set up an ECS cluster and I'm using the ECS CLI to create services/tasks from my existing docker-compose file.
This is what my compose file looks like:
version: "3"
services:
  service-a:
    image: 537403345265.dkr.ecr.us-west-1.amazonaws.com/service-a:latest
    environment:
      - SPRING_PROFILES_ACTIVE=dev
    ports:
      - "8100:8100"
  service-b:
    image: 537403345265.dkr.ecr.us-west-1.amazonaws.com/service-b:latest
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - SERVICE_A_BASE=service-a:8100
    ports:
      - "8101:8101"
    depends_on:
      - service-a
and my ecs-params.yml:
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  os_family: Linux
  task_size:
    mem_limit: 2GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-xx"
        - "subnet-xy"
      security_groups:
        - "sg-xz"
      assign_public_ip: ENABLED
where I'm using SERVICE_A_BASE as the base URL for calling Service A from Service B.
The same compose file works fine locally, but it does not work inside my ECS cluster.
I have set inbound rules to allow ports 8100 and 8101 in my security group.
What might be wrong, or is there another way of doing this?
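One thing worth checking for the awsvpc case: when ecs-cli places both services in a single task definition, the containers share the task's elastic network interface and can reach each other on localhost instead of the Compose service name. A sketch of that assumption applied to the compose file above (verify the task layout in your own setup against the AWS docs):

```yaml
# Sketch: in awsvpc mode, same-task containers share a network namespace,
# so "localhost" replaces the Compose DNS name "service-a".
service-b:
  image: 537403345265.dkr.ecr.us-west-1.amazonaws.com/service-b:latest
  environment:
    - SPRING_PROFILES_ACTIVE=dev
    - SERVICE_A_BASE=localhost:8100   # was service-a:8100
  ports:
    - "8101:8101"
  depends_on:
    - service-a
```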

Cannot connect to docker-compose services in AWS CodeBuild

I am trying to run Cypress e2e tests by spinning up docker-compose services and testing them with Cypress. Whatever I do, I am not able to connect to another service, either from outside the services or from inside one of them.
I tried running curl 127.0.0.1 to check whether the service is reachable, but it returns "Failed to connect to 127.0.0.1 port 80: Connection refused".
My CodeBuild environment has privileged set to true.
This is my buildspec:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Test Execution starting...
      - AWS_ACCOUNT_ID=$AWS_ACCOUNT_ID
      - AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
      - docker-compose --file=deployment/docker.yml up -d main webserver
      - docker-compose --file=deployment/docker.yml run cypress
  post_build:
    commands:
      - docker-compose --file=deployment/docker.yml down
My docker-compose file is very simple. It pulls images that have already been built.
version: '3'
services:
  main:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/main:${ENVIRONMENT}
    build:
      context: ../main
    container_name: main
    ports:
      - "3000:3000"
    networks:
      - network
  webserver:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/webserver:${ENVIRONMENT}
    build:
      context: ../frontend
    container_name: webserver
    ports:
      - "80:80"
    depends_on:
      - main
    networks:
      - network
  cypress:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/cypress:${ENVIRONMENT}
    build:
      context: ../cypress
    command: "--browser chrome"
    depends_on:
      - webserver
      - main
    environment:
      - CYPRESS_baseUrl=http://webserver:80
    networks:
      - network
networks:
  network:
    driver: bridge
I had a similar issue: my Cypress instance wasn't able to connect to the service under test.
I solved it by setting network_mode to "host" and the base URL to "localhost" in the cypress service.
In your case, it would be something similar to this:
cypress:
  image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/cypress:${ENVIRONMENT}
  build:
    context: ../cypress
  command: "--browser chrome"
  depends_on:
    - webserver
    - main
  environment:
    - CYPRESS_baseUrl=http://localhost:80
  network_mode: "host"

Service to service Name does not resolve ECS Fargate

I tried to deploy the following project to AWS Fargate with the Docker Compose CLI.
docker-compose.yml
version: "3"
x-aws-vpc: vpc-04db4c9a69a101123
services:
  redis:
    image: redis:alpine
    container_name: redis
    networks:
      - default
  nettool:
    image: praqma/network-multitool:latest
    container_name: nettool
    depends_on:
      - redis
    command: >
      /bin/bash -c "sleep 30 && ping redis"
    networks:
      - default
networks:
  default:
    external: true
    name: sg-076b1c7d0cfaac123
My AWS context has administrator permissions. I deploy the project with docker compose up. The deployment uses a load balancer and asked for two subnets, which I supplied.
Running ping redis from the nettool service gives Name does not resolve.
I hope here is somebody who can help me!

Can't connect to elasticache redis from docker container running in EC2

As a part of my CI process, I am creating a docker-machine EC2 instance and running 2 docker containers inside of it via docker-compose. The server container test script attempts to connect to an AWS elasticache redis instance within the same VPC as the EC2. When the test script is run I get the following error:
1) Storage
check cache connection
should return seeded value in redis:
Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/usr/src/app/test/scripts/test.js)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)
Update: I can connect via redis-cli from the EC2 itself:
redis-cli -c -h ***.cache.amazonaws.com -p 6379 ping
> PONG
It looks like I can't connect to my Redis instance because my docker container is using an IP that is not within the same VPC as my Elasticache instance. How can I set up my docker config to use the same IP as the host machine while building my containers from remote images? Any help would be appreciated.
Relevant section of my docker-compose.yml:
version: '3.8'
services:
  server:
    build:
      context: ./
      dockerfile: Dockerfile
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    ports:
      - "8080:8080"
      - "6379:6379"
    env_file: ./.env
Server container Dockerfile:
FROM node:12-alpine
# create app dir
WORKDIR /usr/src/app
# install dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 8080 6379
CMD ["npm", "run", "dev"]
Elasticache redis SG inbound rules and EC2 SG inbound rules: (screenshots not included)
I solved the problem through extensive trial and error. The major hint that pointed me in the right direction was found in the Docker docs:
By default, the container is assigned an IP address for every Docker network it connects to. The IP address is assigned from the pool assigned to the network...
Elasticache instances are only accessible internally from their respective VPC. Based on my config, the docker container and the ec2 instance were running on 2 different IP addresses but only the EC2 IP was whitelisted to connect to Elasticache.
I had to bind the docker container to the host EC2 instance's network in my docker-compose.yml by setting the container's network_mode to "host":
version: '3.8'
services:
  server:
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    ports:
      - "8080:8080"
      - "6379:6379"
    network_mode: "host"
    env_file: ./.env
...
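One side note on the fix above (a general Docker behavior, not specific to this setup): when network_mode is "host", Docker ignores the ports: mapping, so it can be dropped entirely; the container binds directly on the EC2 host's interfaces. A trimmed sketch:

```yaml
# Sketch: host networking makes the ports: section a no-op, so omit it.
version: '3.8'
services:
  server:
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    network_mode: "host"   # container shares the EC2 host's network stack
    env_file: ./.env
```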

AWS EC2 Instance can't be reached even though docker containers running

I'm trying to get my Laravel app running on EC2 with Docker containers. I have two containers: one for the app and one for nginx. I created the EC2 instance with docker-machine and built the Docker images successfully.
Running docker-compose up also succeeds, and docker ps shows the two containers running.
With both containers running, I would expect to go to http://ec2-ip-addy-here.compute-1.amazonaws.com/ and see the app. My hunch is that something isn't set up correctly on the AWS side, maybe the VPC? I'm a novice with AWS so I don't know what to look for. Any ideas?
I'm following this guide https://hackernoon.com/stop-deploying-laravel-manually-steal-this-docker-configuration-instead-da9ecf24cd2e
I'm also using the laradock nginx dockerfile and my own dockerfile for the app
EDIT:
It could be the networks created by docker-compose. I say that because I just checked, and the network name gets a prefix: when I run docker network ls I see a network called php-fpm_backend. Here's my docker-compose.yml file:
version: '3'
networks:
  backend:
    driver: bridge
services:
  ### PHP-FPM ##############################################
  php-fpm:
    image: php-fpm
    container_name: php-fpm
    build:
      context: ../
      dockerfile: ./laradock/php-fpm/Dockerfile-Prod
      args:
        - LARADOCK_PHP_VERSION=7.2
        - INSTALL_PGSQL=true
        - INSTALL_PG_CLIENT=true
        - INSTALL_POSTGIS=true
    expose:
      - "9000"
    networks:
      - backend
  ### NGINX Server #########################################
  nginx:
    image: nginx
    container_name: nginx
    build:
      context: ../
      dockerfile: ./laradock/nginx/Dockerfile-Prod
      args:
        - http_proxy
        - https_proxy
        - no_proxy
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - php-fpm
    networks:
      - backend
I figured this out. It was as I thought: I had to add a new security group allowing inbound access on ports 80/443 for HTTP and HTTPS.
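The same change can be scripted with the AWS CLI's authorize-security-group-ingress command. A sketch with a placeholder group ID, printed as a dry run (drop the echo to actually apply the rules):

```shell
# Dry run: print the calls that would open HTTP/HTTPS inbound on the
# instance's security group. sg-0123456789abcdef0 is a placeholder.
sg_id="sg-0123456789abcdef0"
for port in 80 443; do
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$sg_id" --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```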