I tried to deploy the following project to AWS Fargate with the Docker Compose CLI.
docker-compose.yml
version: "3"
x-aws-vpc: vpc-04db4c9a69a101123
services:
  redis:
    image: redis:alpine
    container_name: redis
    networks:
      - default
  nettool:
    image: praqma/network-multitool:latest
    container_name: nettool
    depends_on:
      - redis
    command: >
      /bin/bash -c "sleep 30 && ping redis"
    networks:
      - default
networks:
  default:
    external: true
    name: sg-076b1c7d0cfaac123
My AWS context has administrator permissions. I deployed the project with docker compose up. The deployment uses a load balancer and asked for two subnets, which I provided.
Running ping redis in the nettool service fails with "Name does not resolve".
I hope somebody here can help me!
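For what it's worth, as far as I understand the Compose ECS integration registers service names in an AWS Cloud Map namespace, and ICMP can be blocked by the security group even when DNS and TCP work, so probing the Redis TCP port is a more reliable check than ping. A sketch of an alternative nettool command (same image and service names as above; the 30-second delay is kept on the assumption that Redis needs time to register):

```yaml
nettool:
  image: praqma/network-multitool:latest
  depends_on:
    - redis
  command: >
    /bin/sh -c "sleep 30 && nslookup redis && nc -zv redis 6379"
```

If nslookup fails as well, it is worth checking that a Cloud Map namespace was actually created for the stack, and that the security group mapped to the default network allows traffic between its own members.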
Related
I'm trying to deploy a MERN-stack app with Docker Compose on AWS ECS.
I finally got the cluster active with 3 services (Node back end, React front end, and MongoDB) using the Docker Compose interface with
docker context use myecscontext
docker compose up
Now how do I view the front end via a URL?
I want to be proficient in deploying websites for freelance work; is this the way it's done?
I noticed there is a way to define an IP address in docker-compose.yaml. Would that be the way? Basically like this:
x-aws-loadbalancer:
  IpAddressType: "ipv4"
  VpcId: "vpc-123456"
  LoadBalancerArn: "arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/app/myloadbalancer/123abcd456"
  DNSName: "myloadbalancer-123456.us-east-1.elb.amazonaws.com"
where the DNSName is the url?
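A note on the snippet above: in the Docker Compose ECS integration, x-aws-loadbalancer is documented as a top-level extension whose value is the ARN (or name) of an existing load balancer, given as a plain string rather than a map of properties, something like:

```yaml
x-aws-loadbalancer: "arn:aws:elasticloadbalancing:us-east-1:1234567890:loadbalancer/app/myloadbalancer/123abcd456"
```

The URL you browse is not set in the compose file at all; it is the DNS name the EC2 console shows for that load balancer.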
*edit: I found the DNS name on the load balancer page in the AWS console, but it won't load. This is my docker-compose.yaml; maybe something is wrong here:
version: "3.7"
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    image: anthonyss09/thebikeshop:bike-server
    container_name: myapp-node-server
    logging:
      driver: awslogs
      options:
        awslogs-region: us-east-1
        awslogs-group: bike-server-container
    command: /usr/src/app/node_modules/.bin/nodemon server.js
    volumes:
      - my-data:/user/src/app
    ports:
      - "5000:5000"
    depends_on:
      - mongo
    env_file: ./server/.env
    environment:
      - NODE_ENV=development
    networks:
      - app-network
  mongo:
    image: anthonyss09/thebikeshop:bike-mongo
    volumes:
      - my-data:/data/db
    ports:
      - "27017:27017"
    networks:
      - app-network
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: anthonyss09/thebikeshop:bike-client
    container_name: myapp-react-client
    logging:
      driver: awslogs
      options:
        awslogs-region: us-east-1
        awslogs-group: bike-client-container
    command: npm run build
    volumes:
      - my-data:/user/app
    depends_on:
      - server
    ports:
      - "80:80"
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  my-data:
  node_modules:
  web-root:
    driver: local
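One thing that stands out in the file above: the client service runs command: npm run build, which compiles the bundle and then exits, so nothing stays listening on the published port 80 and the load balancer's health checks would fail. A minimal sketch of a fix, assuming the build output lands in a build/ directory and using the serve package (an assumption on my part, not something from the original Dockerfile):

```yaml
client:
  image: anthonyss09/thebikeshop:bike-client
  command: npx serve -s build -l 80
  ports:
    - "80:80"
```

Any static file server works here; the point is that the container's main process must keep running and listening on the port the load balancer targets.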
I have set up an ECS cluster and I'm using the ECS CLI to create services/tasks from my existing docker-compose file.
This is what my compose file looks like:
version: "3"
services:
  service-a:
    image: 537403345265.dkr.ecr.us-west-1.amazonaws.com/service-a:latest
    environment:
      - SPRING_PROFILES_ACTIVE=dev
    ports:
      - "8100:8100"
  service-b:
    image: 537403345265.dkr.ecr.us-west-1.amazonaws.com/service-b:latest
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - SERVICE_A_BASE=service-a:8100
    ports:
      - "8101:8101"
    depends_on:
      - service-a
and my ecs-params.yml:
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  os_family: Linux
  task_size:
    mem_limit: 2GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-xx"
        - "subnet-xy"
      security_groups:
        - "sg-xz"
      assign_public_ip: ENABLED
where I'm using SERVICE_A_BASE as the base URL for calling Service A from Service B.
The same compose file works fine locally, but it does not work inside my ECS cluster.
I have set inbound rules to allow ports 8100 and 8101 in my security group.
What might be wrong, or is there another way of doing this?
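One detail that may explain this: ecs-cli compose turns the whole compose file into a single task definition, and with ecs_network_mode: awsvpc all containers in that task share one network namespace. Compose service names are not registered in DNS on ECS, which is why service-a:8100 resolves locally but not in the cluster; within the shared namespace the containers can instead reach each other over localhost. A sketch of the change, assuming both services really do land in the same task:

```yaml
service-b:
  image: 537403345265.dkr.ecr.us-west-1.amazonaws.com/service-b:latest
  environment:
    - SPRING_PROFILES_ACTIVE=dev
    # localhost works here because the task's containers share one ENI in awsvpc mode
    - SERVICE_A_BASE=localhost:8100
```

If the services are split into separate tasks or ECS services, ECS Service Discovery (Cloud Map) would be needed to get name resolution instead.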
I am trying to run Cypress e2e tests by spinning up docker-compose services and running Cypress against them. Whatever I do, I am not able to connect to another service, either from outside a service or from inside one.
I tried running curl 127.0.0.1 to check whether the service is reachable, but it returns "Failed to connect to 127.0.0.1 port 80: Connection refused".
My CodeBuild environment has privileged set to true.
This is my buildspec:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Test Execution starting...
      - AWS_ACCOUNT_ID=$AWS_ACCOUNT_ID
      - AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
      - docker-compose --file=deployment/docker.yml up -d main webserver
      - docker-compose --file=deployment/docker.yml run cypress
  post_build:
    commands:
      - docker-compose --file=deployment/docker.yml down
My docker-compose file is very simple. It gets images that have already been built.
version: '3'
services:
  main:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/main:${ENVIRONMENT}
    build:
      context: ../main
    container_name: main
    ports:
      - "3000:3000"
    networks:
      - network
  webserver:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/webserver:${ENVIRONMENT}
    build:
      context: ../frontend
    container_name: webserver
    ports:
      - "80:80"
    depends_on:
      - main
    networks:
      - network
  cypress:
    image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/cypress:${ENVIRONMENT}
    build:
      context: ../cypress
    command: "--browser chrome"
    depends_on:
      - webserver
      - main
    environment:
      - CYPRESS_baseUrl=http://webserver:80
    networks:
      - network
networks:
  network:
    driver: bridge
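One possible reading of the curl failure: docker-compose run attaches the one-off cypress container to the project network, where 127.0.0.1 is the cypress container itself, not the webserver; the service is reachable by its compose name (http://webserver) instead. Separately, up -d returns before the webserver is actually accepting connections, so a small retry helper in the buildspec can rule out a startup race. A sketch (the function name and attempt budget are my own, not from the original buildspec):

```shell
# retry: run a command until it succeeds or the attempt budget runs out
retry() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i+1))
    # give up once we have used all attempts
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}
# usage before running the tests (hypothetical URL):
# retry 30 curl -sf http://webserver:80
```

The helper returns 0 as soon as the probed command succeeds and 1 after the budget is exhausted, so it can guard the docker-compose run step.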
I had a similar issue. My Cypress instance wasn't able to connect to the service under test.
I solved it by setting network_mode to host and the base URL to localhost in the cypress service.
In your case, it would be something similar to this:
cypress:
  image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/cypress:${ENVIRONMENT}
  build:
    context: ../cypress
  command: "--browser chrome"
  depends_on:
    - webserver
    - main
  environment:
    - CYPRESS_baseUrl=http://localhost:80
  network_mode: "host"
Let's take this part of docker-compose.yml
eurekaserver:
  image: XXXXX
  ports:
    - "8761:8761"
zuulserver:
  image: XXXXX
  ports:
    - "5555:5555"
    - "5556:5556"
  environment:
    PROFILE: "default"
    SERVER_PORT: "5555"
    DEBUG_PORT: "5556"
    EUREKASERVER_URI: "http://eurekaserver:8761/eureka/"
    EUREKASERVER_PORT: "8761"
On my local machine I can communicate from zuulserver to eurekaserver; I checked it via nc -z eurekaserver $EUREKASERVER_PORT. But when I run the same command from inside the zuulserver container in the ECS cluster, after deploying the images via ecs-cli compose --file docker-compose.yml up, I get:
nc: getaddrinfo: Name does not resolve
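For context, ecs-cli compose builds a single task definition out of the compose file, and compose service names are not published in DNS on ECS, which matches the getaddrinfo error. If the task runs with awsvpc network mode, the containers share one network namespace and can talk over localhost; a sketch of the corresponding change (assuming both services are in the same task):

```yaml
zuulserver:
  image: XXXXX
  environment:
    # on ECS the compose service name "eurekaserver" is not resolvable;
    # within one awsvpc task, localhost reaches the sibling container
    EUREKASERVER_URI: "http://localhost:8761/eureka/"
    EUREKASERVER_PORT: "8761"
```

With the default bridge mode, container links in the task definition would be needed instead to make the name resolvable.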
I'm trying to get my Laravel app running on EC2 with Docker containers. I have two containers, one for the app and one for nginx. I created the EC2 instance with docker-machine and I've also built the Docker images successfully.
Running docker-compose up also succeeds. If I run docker ps, I see the two containers running.
So with the two containers running, I would expect to go to http://ec2-ip-addy-here.compute-1.amazonaws.com/ and see the app. My hunch is that something isn't set up correctly on the AWS side, maybe the VPC? I'm a novice with AWS, so I don't know what to look for. Any ideas?
I'm following this guide: https://hackernoon.com/stop-deploying-laravel-manually-steal-this-docker-configuration-instead-da9ecf24cd2e
I'm also using the laradock nginx Dockerfile and my own Dockerfile for the app.
EDIT:
It could be the networks that are created with docker-compose. I say that because I just checked and the network name is prefixed with the compose project name: when I run docker network ls I see a network called php-fpm_backend. Here's my docker-compose.yml file:
version: '3'
networks:
  backend:
    driver: bridge
services:
  ### PHP-FPM ##############################################
  php-fpm:
    image: php-fpm
    container_name: php-fpm
    build:
      context: ../
      dockerfile: ./laradock/php-fpm/Dockerfile-Prod
      args:
        - LARADOCK_PHP_VERSION=7.2
        - INSTALL_PGSQL=true
        - INSTALL_PG_CLIENT=true
        - INSTALL_POSTGIS=true
    expose:
      - "9000"
    networks:
      - backend
  ### NGINX Server #########################################
  nginx:
    image: nginx
    container_name: nginx
    build:
      context: ../
      dockerfile: ./laradock/nginx/Dockerfile-Prod
      args:
        - http_proxy
        - https_proxy
        - no_proxy
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - php-fpm
    networks:
      - backend
I figured this out. It was as I thought: I had to add a new security group with port 80/443 access for HTTP and HTTPS.
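For anyone hitting the same wall: since the instance was created with docker-machine, the ports can also be opened at creation time; if I remember the flag correctly, the amazonec2 driver accepts a repeated --amazonec2-open-port option. A sketch, with a hypothetical machine name:

```shell
docker-machine create --driver amazonec2 \
  --amazonec2-open-port 80 \
  --amazonec2-open-port 443 \
  laravel-prod
```

This bakes the HTTP/HTTPS ingress rules into the security group docker-machine creates, instead of editing the group afterwards in the console.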