I am new to using proxies, so I didn't quite follow why this isn't working. I'm not sure if this is a Traefik question or a simple "I don't know how routing works" question.
I followed the Traefik tutorial as shown on their website here: https://docs.traefik.io/
Their docker-compose.yml looks like this:
version: '3'
services:
  reverse-proxy:
    image: traefik # The official Traefik docker image
    command: --api --docker # Enables the web UI and tells Traefik to listen to docker
    ports:
      - "80:80"     # The HTTP port
      - "8080:8080" # The Web UI (enabled by --api)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
  whoami:
    image: containous/whoami # A container that exposes an API to show its IP address
    labels:
      - "traefik.frontend.rule=Host:whoami.docker.localhost"
So now I wanted to run this same yml file on my EC2 instance. I made a change to the last line so that it looks like this instead:
  - "traefik.frontend.rule=Host:whoami.docker.<ec2-XXX>.<region>.compute.amazonaws.com"
So I assumed that if I visited http://whoami.docker.<ec2-XXX>.<region>.compute.amazonaws.com, I would see my whoami app's response. However, I get a response from my ISP saying that that website does not exist. If I access http://<ec2-XXX>.<region>.compute.amazonaws.com:8080, I can see my Traefik console fine.
I think it's got to do with web addresses: that you can only have two labels in front of the domain, like x.y.website.com, and the URL I am using to access my EC2 instance is already using those two slots. I am unsure what to search for.
Do I need to register/buy a domain first?
How would I connect this site to my EC2 instance?
Was I correct as to why http://whoami.docker.<ec2-XXX>.<region>.compute.amazonaws.com was not working?
Related
I am new to deploying with Docker. I am running my Django app inside a Docker container on my computer, and it runs successfully on localhost:8080. I then pulled the code to a remote server and ran docker-compose up, and the app is running successfully there as well. What I want to ask is: how can I see the app using the IP address of the server? For example, if the IP address is 123.45.67.89, I would expect the app to be reachable at 123.45.67.89:8080, but it does not run there. How can I access the app running in a container on the remote server?
P.S. I have not used Nginx, should I use it?
The answer to this question greatly depends on where you are hosting your production application, and what type of services it provides you out of the box.
In general, production servers usually have some reverse proxy or application load balancer sitting in front of the containerized application(s).
Since you are starting with Docker, and since I am assuming this is a personal or small-scale app, I can recommend the following:
If you are flexible in terms of hosting providers, try DigitalOcean. They are very developer-friendly and cost-effective, at least up to a certain scale.
Use the automated docker nginx-proxy. This tool lets you add a couple of lines to your docker-compose.yml file and magically get a configured nginx proxy, without knowing anything about nginx.
I am using this approach to deploy multiple personal websites to a single, low-cost server.
An example docker-compose.yml might look like this:
services:
  nginx:
    image: nginxproxy/nginx-proxy
    ports: ["${PORT:-80}:80"]
    restart: always
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      DEFAULT_HOST: www.yoursite.com
  app:
    depends_on: [nginx]
    restart: always
    image: your/image
    environment:
      VIRTUAL_HOST: myapp.localhost,www.yoursite.com
which basically tells the nginx-proxy to serve your app on both http://myapp.localhost and http://www.yoursite.com.
Of course, you will need to point your domain's DNS to your Digital Ocean IP.
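For illustration, the A record in your DNS provider's zone might look something like this (www.yoursite.com and 203.0.113.10 are placeholders for your domain and your droplet's IP):

www.yoursite.com.   300   IN   A   203.0.113.10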
Technically, it should work the way you did it, but maybe port 8080 is not open to the outside world.
You could change the port mapping in your docker-compose.yml file:
ports:
  - "80:8080"
You can then access your app at 123.45.67.89, without specifying a port, since 80 is the default. If it doesn't work, double-check the IP address and your firewall rules.
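As a quick check from your own machine (using the example IP from the question):

curl -v http://123.45.67.89/

If the connection times out rather than being refused, a firewall or security group is most likely blocking the port.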
However, using Nginx is almost always a good idea, since the local web server you are using is not production-ready (feature- and security-wise). I won't explain how to set up Nginx here because it's a little off topic and there are a lot of resources available, but you should seriously consider it when deploying to a remote server.
I am trying to add a tensorboard container to an existing microservice structure running behind traefik. Unfortunately, the traefik version is 1.5, so a lot of recent articles are not helpful.
Since there is a default service on www.my-server.com/, I am trying to have traefik route www.my-server.com/tensorboard/ to the tensorboard service. Here is my docker-compose (the part relevant to tensorboard):
tensorboard:
  build: "./docker/build/images/tensorflow"
  container_name: tensorboard
  command: tensorboard --logdir=runs --port=8888 --host=0.0.0.0
  labels:
    - log.level=debug
    - traefik.enable=true
    - traefik.frontend.rule=Host:www.my-server.com;PathPrefix:/tensorboard/
  volumes:
    - ./machine_learning:/opt/src
  ipc: host
When I visit www.my-server.com/tensorboard/ I get "Not Found". If I remove the host argument from the command, I get "Bad Gateway" instead. I don't fully understand what either of these means, but I suspect that in one case the request does reach the service, and the service then complains because it receives the request with the /tensorboard prefix.
How do I make this work?
It turns out that the following command solves the problem:
tensorboard --logdir mylogdir --bind_all --path_prefix=/tensorboard
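Applied to the compose file from the question, the service definition might look like the sketch below: --bind_all replaces --host=0.0.0.0, and --path_prefix tells TensorBoard to expect the /tensorboard prefix that Traefik passes through.

tensorboard:
  build: "./docker/build/images/tensorflow"
  container_name: tensorboard
  # --bind_all listens on all interfaces; --path_prefix matches the PathPrefix rule
  command: tensorboard --logdir=runs --port=8888 --bind_all --path_prefix=/tensorboard
  labels:
    - traefik.enable=true
    - traefik.frontend.rule=Host:www.my-server.com;PathPrefix:/tensorboard/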
I am struggling to configure an AWS ECS task definition to run simple PHP-FPM and Nginx based containers.
My "app" container is running at app:9000 port and Nginx is in 80 port. Nginx is forwarding the request to app container through a fastcgi_pass app-upstream;
All of these are running perfectly in local. The same configuration is running perfectly in DigitalOcean Docker Instance but fails in AWS ECS.
I am guessing some task-definition configuration issue, but I can't find it.
Error Logs:
I am getting this log from the Nginx container:
nginx: [emerg] host not found in upstream "app:9000" in /etc/nginx/conf.d/default.conf:2
and this log from the App (PHP-FPM) container:
ECS
I've created a simple cluster with a T2 Small instance, which is running perfectly.
Dockerfile
In this GitHub repo, I've added the Dockerfile for the App and Nginx images, the docker-compose.yml file, and the task-definition.json file for reference, in case there is any mistake in the Docker files.
Source code in Github repo: https://github.com/arifulhb/docker-ecr-sample
Your issue is caused by a wrong upstream path in the nginx configuration.
Do the following things to investigate; also, avoid using custom container names in docker-compose files unless specifically needed:
The quick resolution would be:
Remove container names from the docker-compose file.
The service key name (for example: test_app) in the docker-compose file is automatically used as the container's host name, so use that.
The correct upstream path after making the above changes should be test_app:9000 (see the sketch below).
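A minimal sketch of the matching nginx upstream block, assuming the app-upstream name from the question:

upstream app-upstream {
    # the compose service key doubles as the host name on the compose network
    server test_app:9000;
}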
The proper and recommended way of building docker-compose files:
Create a custom docker network, for example with the name "intranet".
Mention this network "intranet" in each service you create in your docker-compose file.
Follow the steps mentioned in the quick resolution.
How does this help you? You can inspect the network you created, figure out whether your containers are properly connected, and identify the names used for connections.
Command: docker network inspect <network_name>
NOTE: Docker treats container names as host names by default for internal connections.
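Putting the recommended way together, a minimal sketch might look like this (the build contexts and compose version are assumptions):

version: '3'
services:
  test_app:
    build: ./docker/app      # placeholder build context
    networks:
      - intranet
  test_nginx:
    build: ./docker/nginx    # placeholder build context
    ports:
      - "80:80"
    depends_on:
      - test_app
    networks:
      - intranet
networks:
  intranet:
    driver: bridge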
When using multiple containers, the container name is very important for internal connectivity.
Looking at your Docker Compose file, the container name should match the name used in the nginx conf.
version: '3'
services:
  test_app:
    container_name: app # not test_app_1
    volumes:
      - test-app-data:/var/www/app
  test_nginx:
    image: "xxxxxx.dkr.ecr.xx-xx-1.amazonaws.com/test-nginx"
    build:
      context: ./docker/nginx
      dockerfile: Dockerfile
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - test-app-data:/var/www/app
      - test-nginx-log:/var/log/nginx
    external_links:
      - app # not test_app_1
    depends_on:
      - test_app
volumes:
  test-app-data:
  test-nginx-log:
I got the same problem too. My "blog" container runs on port 8000 (blog:8000) and the "nginx" container on port 80. The "nginx" container forwards requests to the "blog" container.
The cause was that I hadn't set "Links" in "NETWORK SETTINGS" for the "nginx" container at all.
So I put the name of the back container, "blog", into the front container "nginx" ("Links" in "NETWORK SETTINGS" for the "nginx" container).
Then both the "blog" and "nginx" containers ran properly.
So in your case, put the name of the back container, "app", into the front container "nginx" ("Links" in "NETWORK SETTINGS" for the "nginx" container). It will work. I used Adiii's solution.
Don't forget to put "CMD-SHELL, curl -f http://localhost:9000/ || exit 1" in "Command" in "HEALTHCHECK" for the "app" container.
Moreover, don't forget to put "app" in "Container name" and select "HEALTHY" in "Condition" in "STARTUP DEPENDENCY ORDERING" for the "nginx" container.
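In the task-definition JSON, those console settings might look roughly like this (a sketch only; the interval, timeout, and retries values are assumptions, and the HEALTHY dependency condition requires a recent ECS agent):

"containerDefinitions": [
  {
    "name": "app",
    "healthCheck": {
      "command": ["CMD-SHELL", "curl -f http://localhost:9000/ || exit 1"],
      "interval": 30,
      "timeout": 5,
      "retries": 3
    }
  },
  {
    "name": "nginx",
    "links": ["app"],
    "dependsOn": [
      { "containerName": "app", "condition": "HEALTHY" }
    ]
  }
]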
Here's my situation. We are slowly migrating our VMs from Vagrant to Docker, but we are mostly still Docker newbs. Some of our newer systems' development environments have already been moved to Docker. We have test code that runs on an older Vagrant VM and used to communicate with another Vagrant VM running a Django RESTful API application in order to run integration tests. This Django API is now in a Docker container. So now we need the test code that runs in a Vagrant VM to be able to make requests to the API running in Docker. Both the Docker container and the Vagrant VM are running side by side on a host machine (macOS). We are using Docker Compose to initialize the Docker container; the main compose yaml file is shown below.
services:
  django-api:
    ports:
      - "8080:8080"
    build:
      context: ..
      dockerfile: docker/Dockerfile.bapi
    extends:
      file: docker-compose-base.yml
      service: django-api
    depends_on:
      - db
    volumes:
      - ../:/workspace
    command: ["tail", "-f", "/dev/null"]
    env_file:
      - ${HOME}/.pam_environment
    environment:
      - DATABASE_URL=postgres://postgres:password@db
      - PGHOST=db
      - PGPORT=5432
      - PGUSER=postgres
      - PGPASSWORD=password
      - CLOUDAMQP_URL=amqp://rabbitmq
  db:
    ports:
      - "5432"
    extends:
      file: docker-compose-base.yml
      service: db
    volumes:
      - ./docker-entrypoint-initdb.d/init-postgress-db.sh:/docker-entrypoint-initdb.d/init-postgress-db.sh
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: django-api-dev
I would like the tests that are running on the Vagrant VM to still be able to communicate with the Django application that's now running on Docker, similar to the way they could communicate with the API when it was running in a Vagrant VM. I have tried several different types of network configurations within the docker compose file, but alas, networking is not my strong suit and I'm really just shooting in the dark here.
Is there a way to configure my Docker container and/or my Vagrant VM so that they can talk to each other? I need to expose my Docker container's IP address so that my Vagrant VM can access it.
Any help/tips/guidance here would be greatly appreciated!
In your Vagrantfile, make sure you have a private, host-only network. I usually use one with a fixed IP:
config.vm.network "private_network", ip: "192.168.33.100"
Now both VMs will get a static IP in the host-only network. When you run docker-compose up -d in your Django VM, the VM will map port 8080 to the container's port 8080. So you can use 192.168.33.100:8080 in the other VM for testing the APIs.
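For example, from the test VM something like this should reach the API (assuming the static IP above and that the app responds at /):

curl http://192.168.33.100:8080/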
I would like the tests that are running on the Vagrant VM to still be able to communicate with the Django application that's now running on Docker, similar to the way they could communicate with the API when it was running in a Vagrant VM.
As you say, you are using docker-compose, so exposing ports would serve the purpose you are looking for. In the yml file where the django application is defined, create a port mapping that binds a port on the host to the port in the container. You can do this by including:
ports:
  - "<host_port_where_you_want_to_access>:<container_port_where_application_is_running>"
Is there a way to configure my Docker container and/or my Vagrant VM so that they can talk to each other? I need to expose my Docker container's IP address so that my Vagrant VM can access it.
It is. If both containers are in the same network (when services are declared in the same compose file, all services are on the same network by default), then one container can talk to another by using its service name.
Example: in the yml file specified in the question, django-api can reach db at db:xxxx, where xxxx can be any port inside the container. xxxx does not need to be mapped to the host or exposed.
I am learning how to use Docker, and I am in the process of setting up a simple app with a frontend and backend using CentOS + PHP + MySQL.
I have my machine, "example". In machine "example" I have configured 2 Docker containers:
frontend:
  build: ./frontend
  volumes:
    - ./frontend:/var/www/html
    - ./infrastructure/logs/frontend/httpd:/var/logs/httpd
  ports:
    - "80"
  links:
    - api
api:
  build: ./api
  volumes:
    - ./api:/var/www/html
    - ./infrastructure/logs/api/httpd:/var/logs/httpd
  ports:
    - "80"
  links:
    - mysql:container_mysql
The issue I am facing is that when I access the Docker containers, I need to specify a port number for either the FRONTEND (32771) or the BACKEND (32772).
Is this normal, or is there a way to create hostnames for the API and frontend of the application?
How does this work on deployment to AWS?
Thanks in advance.
If you're running docker 1.9 or 1.10 and use the version 2 format for your docker-compose.yml, you can directly access other services through either their "service" name or "container" name. See my answer on this question, which has a basic example to illustrate this: https://stackoverflow.com/a/36245209/1811501
Because the connection between services goes through the private container-to-container network, you don't need to use the randomly assigned ports; if a service publishes/exposes port 80, you can simply connect through port 80.
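For illustration, a sketch of the compose file from the question in the version 2 format (details are assumed; only the frontend publishes a host port):

version: '2'
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80" # published on the host, so visitors don't need a random port
  api:
    build: ./api
    # no published port needed: frontend reaches it internally at http://api:80/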