Docker example for frontend and backend application - amazon-web-services

I am learning how to use Docker, and I am in the process of setting up a simple app with a frontend and backend using CentOS + PHP + MySQL.
I have my machine:
"example"
On machine "example" I have configured two Docker containers:
frontend:
  build: ./frontend
  volumes:
    - ./frontend:/var/www/html
    - ./infrastructure/logs/frontend/httpd:/var/logs/httpd
  ports:
    - "80"
  links:
    - api
api:
  build: ./api
  volumes:
    - ./api:/var/www/html
    - ./infrastructure/logs/api/httpd:/var/logs/httpd
  ports:
    - "80"
  links:
    - mysql:container_mysql
The issue I am facing is that when I access a Docker container, I need to specify a port number for either the frontend (32771) or the backend (32772).
Is this normal or is there a way to create hostnames for the API and Frontend of the application?
How does this work on deployment to AWS?
Thanks in advance.

If you're running Docker 1.9 or 1.10, and use the 2.0 format for your docker-compose.yml, you can directly access other services through either their "service" name or "container" name. See my answer on this question, which has a basic example to illustrate this: https://stackoverflow.com/a/36245209/1811501
Because the connection between services goes through the private container-to-container network, you don't need to use the randomly assigned ports; if a service publishes/exposes port 80, you can simply connect through port 80.
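A minimal sketch of what that looks like for the question's two services (the 2.x file format and the published frontend port are assumptions):

```yaml
version: "2"
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"   # publish only the entry point to the host
  api:
    build: ./api
    # no "links" and no published port needed: frontend reaches this
    # service as http://api/ over the default compose network
```

With this layout only the frontend's port is published; the API stays reachable by its service name from inside the network.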

Related

How does the Client in a docker image know the ip address of a server which is in another docker image?

I am using a React client, Django for the backend and Postgres for the db. I am preparing Docker images of the client, server and the db.
My docker-compose.yml looks like this:
version: '3'
services:
  client:
    build: ./client
    stdin_open: true
    ports:
      - "3000:3000"
    depends_on:
      - server
  server:
    build: ./server
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: "postgres:12-alpine"
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: bLah6laH614h
Because the Docker images can be deployed anywhere separately, I am unsure how to access the server from the client code, and how the server finds the db's IP. I am new to React, Django and Docker. Please help!
Using your docker-compose.yml file configuration as a basis, four things will happen, as per the docs:
A network called myapp_default is created (let's say that your project folder is named myapp).
A container is created using db’s configuration. It joins the network myapp_default under the name db.
A container is created using server’s configuration. It joins the network myapp_default under the name server.
A container is created using client’s configuration. It joins the network myapp_default under the name client.
Now to send an HTTP request from client to server you should use this URL:
http://server:8000
because of item 3 and because the server configured port is 8000.
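The same naming applies between server and db. A minimal sketch of the server side (the settings below are assumptions built from the compose file, not taken from the question's repo): the Django database host is the service name db, not an IP address.

```python
# Django settings sketch (hypothetical file): the Postgres host is the
# compose service name "db", which Docker's DNS resolves on myapp_default.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",          # default database of the postgres image
        "USER": "postgres",          # default user of the postgres image
        "PASSWORD": "bLah6laH614h",  # POSTGRES_PASSWORD from the compose file
        "HOST": "db",                # service name, per item 2 above
        "PORT": "5432",
    }
}
```

No port mapping to the host is required for this container-to-container connection; the "5432:5432" publication only matters for tools running outside the compose network.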

Using Traefik with EC2, how does frontend rule change url

I am new to using a proxy, so I didn't quite follow why this isn't working. Not sure if this is a Traefik question, or a simple "I don't know how routing works" question.
I followed the Traefik tutorial as shown on their website here: https://docs.traefik.io/
Their docker-compose.yml looks like this:
version: '3'
services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
  whoami:
    image: containous/whoami # A container that exposes an API to show its IP address
    labels:
      - "traefik.frontend.rule=Host:whoami.docker.localhost"
So now I wanted to run this same yml file on my ec2 instance. I make a change to the last line so that it looks like this instead:
- "traefik.frontend.rule=Host:whoami.docker.<ec2-XXX>.<region>.compute.amazonaws.com"
So I assumed that if I visited http://whoami.docker.<ec2-XXX>.<region>.compute.amazonaws.com, I would see my whoami app's response. However, I get a response from my ISP that the website does not exist. If I access http://<ec2-XXX>.<region>.compute.amazonaws.com:8080 I can see my Traefik console fine.
I think it's got to do with web addresses, and that you can only have a limited number of labels in front of the domain, like x.y.website.com, and the URL I am using to access my EC2 is already using those slots. I am unsure what to search for.
Do I need to register a site/ buy a domain first?
How would I connect this site to my ec2 instance?
Was I correct as to why http://whoami.docker.<ec2-XXX>.<region>.compute.amazonaws.com was not working?

How can I get traefik to work on my cloud architecture?

Okay, so I spent a day on my EC2 with Traefik and Docker set up, but it doesn't seem to be working as described in the docs. I can get the whoami example running, but that doesn't really illustrate what I'm looking for.
For my example I have three AWS API Gateway endpoints, and I need to point them to my EC2 IP address, with routing handled by my Traefik frontend setup and then passed on to some backend, though I'm still uncertain what kind of backend to use.
I can't seem to find a good YAML example that clearly illustrates something to suit my purpose and needs.
Can anyone point me in the right direction? Any good example Docker YAML files or configuration for my case? Thanks!
I took this article as a guide for provisioning a Docker installation with Traefik.
EDIT: Before this, create a docker network called proxy.
$ docker network create proxy
version: '3'

networks:
  proxy:
    external: true
  internal:
    external: false

services:
  reverse-proxy:
    image: traefik:latest # The official Traefik docker image
    command: --api --docker --acme.email="your-email" # Enables the web UI and tells Traefik to listen to docker
    restart: always
    labels:
      - traefik.frontend.rule=Host:traefik.your-server.net
      - traefik.port=8080
    networks:
      - proxy
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/traefik.toml:/etc/traefik/traefik.toml
      - $PWD/acme.json:/acme.json
  db:
    image: mariadb:10.3
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: r00tPassw0rd
    volumes:
      - vol-db:/var/lib/mysql
    networks:
      - internal # you do not need to expose this via traefik, so keep it on the internal network
    labels:
      - traefik.enable=false
  api-1:
    image: your-api-image
    restart: always
    networks:
      - internal
      - proxy
    labels:
      - "traefik.docker.network=proxy"
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:api1.yourdomain.com"
      - "traefik.port=80"
      - "traefik.protocol=http"
  api-2:
    image: your-api-2-image
    restart: always
    networks:
      - internal
      - proxy
    labels:
      - "traefik.docker.network=proxy"
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:api2.yourdomain.com"
      - "traefik.port=80"
      - "traefik.protocol=http"

volumes:
  vol-db: # named volume used by db; must be declared at the top level
Note: Use this if you want to enable SSL as well. Please note that this might not work on a local server, as Let's Encrypt cannot complete the challenge for the SSL setup.
Create a blank file acme.json and set its permissions to 0600:
touch acme.json
chmod 0600 acme.json
After setting up everything,
docker-compose config # this is optional though.
and then,
docker-compose up
I have posted my traefik.toml here
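The traefik.toml itself isn't reproduced in this post; as a rough sketch only (Traefik 1.x syntax, with the HTTP challenge and HTTPS redirect as assumptions), a minimal version matching the compose file above might look like:

```toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"   # force HTTP -> HTTPS
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]
email = "your-email"       # same address as in the compose command
storage = "acme.json"      # the file mounted into the container
entryPoint = "https"
onHostRule = true          # request certs for each Host: frontend rule
  [acme.httpChallenge]
  entryPoint = "http"
```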
I hope this helps.
Let me know if you face any issues.
Regards,
Kushal.

Vagrant to Docker communication via Docker IP address

Here's my situation. We are slowly migrating our VMs from Vagrant to Docker, but we are mostly still Docker newbs. Some of our newer systems' development environments have already been moved to Docker. We have test code that runs on an older Vagrant VM and used to communicate with a Vagrant box running a Django RESTful API application in order to run integration tests. This Django API is now in a Docker container, so now we need the test code that runs in a Vagrant box to be able to make requests to the API running in Docker. Both the Docker container and the Vagrant box are running side by side on a host machine (macOS). We are using Docker Compose to initialize the Docker container; the main compose YAML file is shown below.
services:
  django-api:
    ports:
      - "8080:8080"
    build:
      context: ..
      dockerfile: docker/Dockerfile.bapi
    extends:
      file: docker-compose-base.yml
      service: django-api
    depends_on:
      - db
    volumes:
      - ../:/workspace
    command: ["tail", "-f", "/dev/null"]
    env_file:
      - ${HOME}/.pam_environment
    environment:
      - DATABASE_URL=postgres://postgres:password@db
      - PGHOST=db
      - PGPORT=5432
      - PGUSER=postgres
      - PGPASSWORD=password
      - CLOUDAMQP_URL=amqp://rabbitmq
  db:
    ports:
      - "5432"
    extends:
      file: docker-compose-base.yml
      service: db
    volumes:
      - ./docker-entrypoint-initdb.d/init-postgress-db.sh:/docker-entrypoint-initdb.d/init-postgress-db.sh
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: django-api-dev
I would like the tests that are running on the Vagrant box to still be able to communicate with the Django application that's now running on Docker, similar to the way it could communicate with the API when it was running in a Vagrant box. I have tried several different network configurations within the docker-compose file, but alas, networking is not my strong suit and I'm really just shooting in the dark here.
Is there a way to configure my docker container and/or my vagrant so that they can talk to each other? I need to expose my docker container's IP address so that my vagrant can access it.
Any help/tips/guidance here would be greatly appreciated!
In your Vagrantfile, make sure you have a private, host-only network. I usually use them with a fixed IP:
config.vm.network "private_network", ip: "192.168.33.100"
Now both VMs will get a static IP in the host-only network. When you run docker-compose up -d in your Django VM, the VM will map port 8080 to the container's port 8080. So you can use 192.168.33.100:8080 in the other VM for testing the APIs.
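For context, a minimal Vagrantfile sketch with that network line in place (the box name is an assumption):

```ruby
# Vagrantfile (sketch): give the VM a fixed IP on a host-only network
# so the other VM can reach its published Docker ports directly.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"  # hypothetical box
  config.vm.network "private_network", ip: "192.168.33.100"
end
```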
I would like the tests that are running on the vagrant to still be able to communicate with the django application that's now running on docker, similar to the way it could communicate with the api when it was running in a vagrant.
As you say, you are using docker-compose, so exposing ports would serve the purpose you are looking for. In the yml file where the Django application is defined, create a port mapping that binds a port on the host to the port in the container. You can do this by including:
ports:
  - "<host_port_where_you_want_to_access>:<container_port_where_application_is_running>"
Is there a way to configure my docker container and/or my vagrant so that they can talk to each other? I need to expose my docker container's IP address so that my vagrant can access it.
It is. If both containers are in the same network (when services are declared in the same compose file, all services are on the same network by default), then one container can talk to another by using its service name.
Example: in the yml file specified in the question, django-api can access db at db:xxxx, where xxxx can be any port inside the container. xxxx need not be mapped to the host or exposed.
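The DATABASE_URL in the compose file above follows exactly this pattern. A small sketch (the port and database name are added here for illustration; the compose file omits them) showing that the host part of the URL is the service name, which Docker's embedded DNS resolves inside the compose network:

```python
from urllib.parse import urlparse

# Parse a connection URL of the same shape as the compose file's
# DATABASE_URL; the host is the service name "db", not an IP address.
url = urlparse("postgres://postgres:password@db:5432/django-api-dev")

print(url.hostname)  # "db" - resolved by Docker's DNS, no published port needed
print(url.port)      # 5432 - the container-side port
```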

Symfony Application on AWS ECS with a data-only container - Is this the right direction?

I have a Dockerized Symfony2 application consisting of four containers:
php-fpm
nginx
mysql
code (data container with a volume)
On my local machine this setup runs without problems with docker-compose:
code:
  image: ebc9f7b635b3
nginx:
  build: docker/nginx
  ports:
    - "80:80"
  links:
    - php
  volumes_from:
    - code
php:
  build: docker/php
  volumes_from:
    - code
  links:
    - mysql
mysql:
  image: mysql
  ports:
    - "5000:3306"
  command: mysqld --sql_mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
  environment:
    - MYSQL_ROOT_PASSWORD=xyz
    - MYSQL_DATABASE=xyz
    - MYSQL_USER=xyz
    - MYSQL_PASSWORD=xyz
I wanted to deploy my application on AWS ECS, so I prebuilt all the images and pushed them to the AWS Container Registry, created a new cluster with a new service, and translated my local docker-compose.yml to a task definition.
Since yesterday I have been trying to get it running, but after following the official documentation
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html
and searching for hours, I can't find a way to get it working.
Either the service gets stuck in a PENDING state without bringing up a container (except for the mysql container), or, if I attach a volume to the task definition, the containers come up but the data is not mapped.
Is it that I must reference the data-only container with a special syntax in the volumesFrom section of the task definition?
Is there a solution for this by now other than using EFS?
Thanks!