How can I get traefik to work on my cloud architecture? - amazon-web-services

Okay, so I spent a day on my EC2 instance with Traefik and Docker set up, but it doesn't seem to work as described in the docs. I can get the whoami example running, but that doesn't really illustrate what I'm looking for.
For my example I have three AWS API Gateway endpoints, and I need to point them at my EC2 IP address, where my Traefik frontend setup routes them on to some backend, though I'm still uncertain what kind of backend to use.
I can't seem to find a good YAML example that clearly illustrates something to suit my purpose and needs.
Can anyone point me in the right direction? Any good Docker YAML examples or configuration setup for my example below? Thanks!

I took this article as a guide to provision a Docker installation with Traefik.
EDIT: Before this, create a docker network called proxy.
$ docker network create proxy
version: '3'

networks:
  proxy:
    external: true
  internal:
    external: false

volumes:
  vol-db: # named volume used by the db service below

services:
  reverse-proxy:
    image: traefik:latest # the official Traefik docker image
    command: --api --docker --acme.email="your-email" # enables the web UI and tells Traefik to listen to docker
    restart: always
    labels:
      - traefik.frontend.rule=Host:traefik.your-server.net
      - traefik.port=8080
    networks:
      - proxy
    ports:
      - "80:80"     # the HTTP port
      - "8080:8080" # the web UI (enabled by --api)
      - "443:443"   # the HTTPS port
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/traefik.toml:/etc/traefik/traefik.toml
      - $PWD/acme.json:/acme.json
  db:
    image: mariadb:10.3
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: r00tPassw0rd
    volumes:
      - vol-db:/var/lib/mysql
    networks:
      - internal # no need to expose this via Traefik, so keep it on the internal network only
    labels:
      - traefik.enable=false
  api-1:
    image: your-api-image
    restart: always
    networks:
      - internal
      - proxy
    labels:
      - "traefik.docker.network=proxy"
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:api1.yourdomain.com"
      - "traefik.port=80"
      - "traefik.protocol=http"
  api-2:
    image: your-api-2-image
    restart: always
    networks:
      - internal
      - proxy
    labels:
      - "traefik.docker.network=proxy"
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:api2.yourdomain.com"
      - "traefik.port=80"
      - "traefik.protocol=http"
Note: use this if you want to enable SSL as well. Please note that this might not work on a local server, as Let's Encrypt cannot complete the challenge for the SSL setup.
Create a blank file acme.json and set its permissions to 0600:
touch acme.json
chmod 0600 acme.json
After setting everything up:
docker-compose config # this is optional, to validate the file
and then:
docker-compose up
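Once the stack is up, you can verify the host-based routing even before DNS is in place by sending a Host header straight at the instance (the domain below stands in for whatever you put in the traefik.frontend.rule label):

$ curl -H "Host: api1.yourdomain.com" http://<your-ec2-public-ip>/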
I have posted my traefik.toml here
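For reference, a minimal Traefik v1 traefik.toml consistent with the flags above could look like this; treat it as a sketch (it assumes Let's Encrypt with the HTTP challenge, and the email is a placeholder):

defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]
email = "your-email"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
  [acme.httpChallenge]
  entryPoint = "http"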
I hope this helps.
Let me know if you face any issues.
Regards,
Kushal.

Related

No connection in a multicontainer docker environment

I am trying to deploy an application in Docker running on 64-bit Amazon Linux 2. I am using a pipeline which publishes images to a private repository on Docker Hub. Elastic Beanstalk uses docker-compose to run the containers, but so far I've had no success in accessing the application. I am not using a Dockerrun.aws.json file, as v3 does not support any container configuration, and as far as I know it's not needed for docker-compose.
My docker-compose file contains several services, one of which is a RabbitMQ message broker.
version: '3.9'
services:
  Some.API:
    image: ...
    container_name: some-api
    networks:
      - my-network
    ports:
      - "9002:80"
  Another.API:
    image: ...
    container_name: another-api
    networks:
      - my-network
    ports:
      - "9003:80"
  rabbitmQ:
    image: rabbitmq:3-management-alpine
    container_name: rabbit-mq
    labels:
      NAME: rabbitmq
    volumes:
      - ./rabbitconfig/rabbitmq-isolated.conf:/etc/rabbitmq/rabbitmq.config
    networks:
      - my-network
    ports:
      - "4369:4369"
      - "5671:5671"
      - "5672:5672"
      - "25672:25672"
      - "15671:15671"
      - "15672:15672"
  front-end:
    image: ...
    container_name: front-end
    networks:
      - my-network
    ports:
      - "9001:80"
networks:
  my-network:
    driver: bridge
Once the current version of the application is successfully deployed to Beanstalk, I see that there is no successful communication on the bridge network.
In the eb-stdouterr.log I see that there are errors while establishing the connection between the APIs and the message broker:
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable.
The APIs are .NET Core applications, which use Beanstalk's environment variables to determine the name of the broker service. In the Configuration/Software/Environment properties section there is the following entry:
RABBIT_HOSTNAME | rabbitmq
which should ensure that the services use the proper host name.
Yet I get exceptions. Any advice?
It turned out that I needed to reference the automatically generated .env file in docker-compose.yml, like so:
front-end:
  image: ...
  container_name: front-end
  networks:
    - my-network
  ports:
    - "9001:80"
  env_file: # <-- add these
    - .env  # <-- two lines
for each service. Only after doing this were the Environment properties from AWS Beanstalk passed to the containers.

Docker: compose file is incompatible with Amazon ECS

I am trying to deploy my Docker image to AWS ECS. I have created the ECR repository and completed all the required steps up to pushing the image to ECR.
My docker-compose.yaml looks like this:
version: '3'
services:
  djangoapp:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest # uri after pushing the image
    build: .
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    networks:
      - nginx_network
      - database1_network
    depends_on:
      - database1
  nginx:
    image: nginx:1.13
    ports:
      - 80:5000
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    depends_on:
      - djangoapp
    networks:
      - nginx_network
  database1:
    image: postgres:10
    env_file:
      - config/db/database1_env
    networks:
      - database1_network
    volumes:
      - database1_volume:/var/lib/postgresql/data
networks:
  nginx_network:
    driver: bridge
  database1_network:
    driver: bridge
volumes:
  database1_volume:
  static_volume: # <-- declare the static volume
  media_volume: # <-- declare the media volume
I am trying to run the command:
docker ecs compose -n abc up
And I get the following error:
WARN[0000] services.build: unsupported attribute
WARN[0000] services.volumes: unsupported attribute
ERRO[0000] published port can't be set to a distinct value than container port: incompatible attribute
WARN[0000] services.volumes: unsupported attribute
WARN[0000] services.env_file: unsupported attribute
WARN[0000] services.volumes: unsupported attribute
WARN[0000] networks.driver: unsupported attribute
WARN[0000] networks.driver: unsupported attribute
compose file is incompatible with Amazon ECS
I am using the latest version of Docker, i.e. 19.03.08, and the latest aws-cli/2.0.39.
I was facing the same trouble, and I was able to get around it by removing all unsupported attributes in docker-compose.yaml, keeping only image.
In your djangoapp service, under the image attribute, you set the value to a URI with the comment "uri after pushing the image". Presumably this is the URI of a locally built Docker image of djangoapp, which was pushed to an ECR repository.
Since you already built and pushed the djangoapp image to ECR, just leave the image attribute and comment out all the other attributes listed in the error messages from docker-compose.yaml:
build
volumes
...
In my case that helped.
The list of supported attributes: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-parameters.html
Docker decided not to allow this. This is what they said in a Slack convo: "We decided to not support this as this would only apply to ingress traffic, not service-to-service, which would be both confusing and inconsistent with local development".
I'm not sure what you're trying to achieve, but it looks like you want to have a django service backed by a postgres database, and you want to use nginx as a reverse proxy to forward requests to django?
If you are trying to do that, then just use a single network for your cluster and get rid of the driver option(s) for the networks. Removing those driver options will get rid of the networks.driver errors. The networks in Docker Compose map to EC2 security groups here; they're not like the private/bridged/NAT networks you would use in VM management tools such as VMware or Hyper-V. Putting all of your services on the same network will allow them to communicate with each other, if that's what you're looking for. They will also be able to communicate out to the internet, but only those that have ports set will be reachable directly from the internet.
Regarding ports, only symmetrical port mapping is supported in ECS. This means you can't map one external port to a different internal port; they have to be the same. So your nginx configuration must use either 80 or 5000, not both. Fixing that will get rid of your third error.
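For example, if nginx listens on port 80 inside the container, the mapping has to be symmetric (a sketch):

ports:
  - "80:80" # not 80:5000; ECS requires host and container ports to match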
As @Maksym specified, you don't need the build option under your djangoapp service. You already have the image built and are just deploying it.
For your volumes, you're trying to mount a host path with .:/opt/services/djangoapp/src. When you use docker compose to deploy to ECS, it uses CloudFormation to deploy your stack, and each of your services runs "serverless", so there isn't a host to mount a path from. In your case, since you're building the djangoapp image yourself, just update your Dockerfile to copy the desired contents to the /opt/services/djangoapp/src folder as part of the image build. Do the same with your nginx service: create your own nginx image that includes the files you want in /etc/nginx/conf.d, push it to ECR, and then use that image in your compose YAML. One of those files should be a configuration that reverse-proxies port 80 to djangoapp:5000.
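As a sketch of that custom nginx image (the file names and the djangoapp:5000 upstream are assumptions based on the compose file above):

# Dockerfile for the custom nginx image: bake the config in instead of bind-mounting it
FROM nginx:1.13
COPY conf.d/ /etc/nginx/conf.d/

# conf.d/default.conf: reverse-proxy port 80 to the djangoapp service
server {
    listen 80;
    location / {
        proxy_pass http://djangoapp:5000;
        proxy_set_header Host $host;
    }
}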
For your other errors, I'm not sure. The configuration for env_file looks fine, as do your named volume mappings like static_volume:/opt/services/djangoapp/static. I don't know if updating the version heading at the top of your file to 3.7 will help.
In the end, your file should look similar to this:
version: '3.7'
services:
  djangoapp:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/abc:latest # uri after pushing the image
    volumes:
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    networks:
      - app_network
    depends_on:
      - database1
  nginx:
    image: xxxxx.dkr.ecr.ca-central-1.amazonaws.com/my-nginx:latest
    ports:
      - 80
    volumes:
      - static_volume:/opt/services/djangoapp/static # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media # <-- bind the media volume
    depends_on:
      - djangoapp
    networks:
      - app_network
  database1:
    image: postgres:10
    env_file:
      - config/db/database1_env
    networks:
      - app_network
    volumes:
      - database1_volume:/var/lib/postgresql/data
networks:
  app_network:
    name: app_network
volumes:
  database1_volume:
    name: app_database1_volume
  static_volume:
    name: app_static_volume
  media_volume:
    name: app_media_volume
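With a compose file like that, the deployment itself would then be along these lines; this is a sketch, and the exact CLI differs between Docker versions (newer releases replace the docker ecs plugin with an ECS context, and the context name here is arbitrary):

docker context create ecs myecs
docker --context myecs compose --project-name abc up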

How to fix "unable to prepare context: unable to evaluate symlinks in Dockerfile path" error in circleci

I'm setting up CircleCI to automatically build/deploy to AWS ECR & ECS.
But the build fails because no Dockerfile is found.
Maybe this is because I set up docker-compose for multiple Docker images, but I don't know how to resolve this issue.
Is there no way to use a Dockerfile instead of docker-compose?
front: React
backend: Golang
ci-tool: circle-ci
db: mysql
article
 ├ .circleci
 ├ client
 ├ api
 └ docker-compose.yml
Here is my .circleci/config.yml:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.0.0
  aws-ecs: circleci/aws-ecs@0.0.8
workflows:
  build_and_push_image:
    jobs:
      - aws-ecr/build-and-push-image:
          region: AWS_REGION
          account-url: AWS_ECR_ACCOUNT_URL
          repo: 'article-ecr-jpskgc'
          tag: '${CIRCLE_SHA1}'
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'article-task-jpskgc'
          cluster-name: 'article-cluster-jpskgc'
          service-name: 'article-service-jpskgc'
          container-image-name-updates: 'container=article-container-jpskgc,tag=${CIRCLE_SHA1}'
Here is the source code on GitHub:
https://github.com/jpskgc/article
I expect the build/deploy via CircleCI to ECR/ECS to succeed, but it actually fails.
This is the error log on CircleCI:
Build docker image
Exit code: 1

#!/bin/bash -eo pipefail
docker build \
  \
  -f Dockerfile \
  -t $AWS_ECR_ACCOUNT_URL/article-ecr-jpskgc:${CIRCLE_SHA1} \
  .

unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/circleci/project/Dockerfile: no such file or directory
Exited with code 1
You must use a Dockerfile; check out the documentation for the orb you are using and please read through it here. Also, docker-compose ≠ docker, so I will confirm that one cannot be used as a substitute for the other.
Given your docker-compose.yml, I have a few suggestions for your general setup and CI.
For reference here is the docker-compose.yml in question:
version: '3'
services:
  db:
    image: mysql
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: article
      MYSQL_USER: docker
      MYSQL_PASSWORD: docker
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./api
    volumes:
      - ./api:/app
    ports:
      - 2345:2345
    depends_on:
      - db
    tty: true
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    ports:
      - 3000:3000
From the above we have the various components, just as you have stated:
MySQL Database
Nginx Loadbalancer
Client App
API Server
Here are my recommendations for each component:
MySQL Database
Since you are deploying to AWS, I recommend deploying a MySQL instance on the free tier; please follow this documentation: https://aws.amazon.com/rds/free. With this you can remove the database from CI, which is recommended, as ECS is not the ideal service on which to run a MySQL server.
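If you prefer to script that rather than use the console, the RDS instance can also be created with the AWS CLI; a sketch (the identifier, credentials, and sizes are placeholders, and VPC/security-group options are omitted):

aws rds create-db-instance \
  --db-instance-identifier article-db \
  --db-instance-class db.t2.micro \
  --engine mysql \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password 'change-me'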
Nginx Loadbalancer
Because you are using ECS, this is not required: AWS handles the load balancing for you, so a self-managed Nginx load balancer is redundant.
Client App
Because this is a React application, you shouldn't deploy it to ECS; that is not cost-effective. You would rather deploy it to Amazon S3 as a static site. There are many resources on how to do this. You may follow this guide, though you may have to make a few changes based on the structure of your repository.
This will reduce your overall cost, and it makes more sense than an entire Docker container running just to serve static files.
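The usual flow is to build the static bundle and sync it to an S3 bucket configured for static website hosting; a sketch (the bucket name is a placeholder, and the build/ output directory assumes create-react-app):

cd client
npm run build
aws s3 sync build/ s3://your-bucket-name --delete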
API Server
This is the only thing that should be running in ECS, and all you need to do is point to the correct Dockerfile in your configuration for it to be built and pushed successfully.
You may therefore edit your CircleCI config as follows, assuming we are using the same Dockerfile from your docker-compose.yml:
build_and_push_image:
  jobs:
    - aws-ecr/build-and-push-image:
        region: AWS_REGION
        dockerfile: Dockerfile.dev
        path: ./api
        account-url: AWS_ECR_ACCOUNT_URL
        repo: 'article-ecr-jpskgc'
        tag: '${CIRCLE_SHA1}'
Things to Note
My answer does not include:
How to load-balance your API service; please follow these docs on how to do so: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
Details on setting up the MySQL server; it is assumed you will follow the AWS documentation provided above.
Things you must do:
Point your client app to the API server; this will probably require a code change from what I've seen.
I want to stress, yet again, that you must load-balance your API server according to these docs.
You do not need to edit your docker-compose.yml.

Selenium Grid Setup using Docker Compose on AWS ECS

Context:
I am trying to set up a Selenium grid to run my UI tests on CI. The CI is Jenkins 2.0 and it runs on AWS ECS. When I create a Selenium grid using the docker-compose and invoke the tests on my Mac (OS Sierra), it works perfectly.
When run on AWS ECS, it shows me: java.awt.AWTError: Can't connect to X11 window server using '99.0' as the value of the DISPLAY variable.
The test code itself is in a container, and using a bridge network I have added that container to the same network as the grid.
The docker-compose file looks something like this:
version: '3'
services:
  chromenode:
    image: selenium/node-chrome:3.4.0
    volumes:
      - /dev/shm:/dev/shm
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: chromenode
    hostname: chromenode
    depends_on:
      - seleniumhub
    ports:
      - "5900:5900"
    environment:
      - "HUB_PORT_4444_TCP_ADDR=seleniumhub"
      - "HUB_PORT_4444_TCP_PORT=4444"
    networks:
      - grid_network
  seleniumhub:
    image: selenium/hub:3.4.0
    ports:
      - "4444:4444"
    container_name: seleniumhub
    hostname: seleniumhub
    networks:
      - grid_network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  testservice:
    build:
      context: .
      dockerfile: DockerfileTest
    networks:
      - grid_network
networks:
  grid_network:
    driver: bridge
Please let me know if more info is required.
unset DISPLAY: this helped me to solve the problem.
This helps in most cases (e.g. starting application servers or other Java-based tools) and avoids having to modify all that many command lines.
It can also be convenient to add it to the .bash_profile of a dedicated app-server/tools user.
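For example, assuming the tests run as a dedicated user with a Bash login shell:

# one-off, in the current shell:
unset DISPLAY
# or persist it for the user:
echo 'unset DISPLAY' >> ~/.bash_profile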
Can you please try this:
- no_proxy=""
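Presumably that line belongs in the environment section of the node container, i.e. (a sketch based on the compose file above):

environment:
  - "HUB_PORT_4444_TCP_ADDR=seleniumhub"
  - "HUB_PORT_4444_TCP_PORT=4444"
  - no_proxy=""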

Docker example for frontend and backend application

I am learning how to use Docker, and I am in the process of setting up a simple app with a frontend and a backend using CentOS + PHP + MySQL.
I have my machine:
"example"
On the machine "example" I have configured two Docker containers:
frontend:
  build: ./frontend
  volumes:
    - ./frontend:/var/www/html
    - ./infrastructure/logs/frontend/httpd:/var/logs/httpd
  ports:
    - "80"
  links:
    - api
api:
  build: ./api
  volumes:
    - ./api:/var/www/html
    - ./infrastructure/logs/api/httpd:/var/logs/httpd
  ports:
    - "80"
  links:
    - mysql:container_mysql
The issue I am facing is that when I access a container, I need to specify the randomly assigned port number for either the FRONTEND (32771) or the BACKEND (32772).
Is this normal, or is there a way to create hostnames for the API and frontend of the application?
How does this work on deployment to AWS?
Thanks in advance.
If you're running Docker 1.9 or 1.10 and use the 2.0 format for your docker-compose.yml, you can directly access other services through either their "service" name or their "container" name. See my answer on this question, which has a basic example to illustrate this: https://stackoverflow.com/a/36245209/1811501
Because the connection between services goes through the private container-to-container network, you don't need to use the randomly assigned ports; if a service publishes/exposes port 80, you can simply connect through port 80.
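As a sketch, the compose file from the question rewritten in the 2.x format looks like this (the mysql service is omitted for brevity); the links entries go away, and each service can reach the others by service name on the default network:

version: '2'
services:
  frontend:
    build: ./frontend
    volumes:
      - ./frontend:/var/www/html
      - ./infrastructure/logs/frontend/httpd:/var/logs/httpd
    ports:
      - "80"
  api:
    build: ./api
    volumes:
      - ./api:/var/www/html
      - ./infrastructure/logs/api/httpd:/var/logs/httpd
    ports:
      - "80"

From inside the frontend container, the API is then reachable at http://api/ (port 80, via the service name), regardless of which random port Docker publishes on the host.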