I have deployed an nginx container and exposed port 8080:80, but when I do a curl localhost:8080 I get "Recv failure: Connection reset by peer". I have added an inbound rule allowing traffic on port 8080 through to the container.
Welcome Hugo Calderon,
I didn't find any code from you but I'd like to add a good example here, explaining how to start a simple Nginx server.
My directory structure:
|____nginx
| |____Dockerfile
| |____default.conf
|____docker-compose.yml
./docker-compose.yml
version: '3'
services:
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    ports:
      - '8080:80'
nginx/default.conf
server {
    listen 80;

    location / {
        return 200 'Hello world!';
    }
}
nginx/Dockerfile
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
Execute the following commands.
docker-compose up -d
The previous command will run an nginx container.
curl http://localhost:8080
After executing curl you should get a message like the following.
Hello world!
If you need to change the message or add new logic in the default.conf file, make sure to run docker-compose build and then run docker-compose up -d again; only then will the new change be picked up by the container.
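For example, after editing nginx/default.conf:

docker-compose build
docker-compose up -d
curl http://localhost:8080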
I hope this will be useful to you and other users!
Related
Both my backend (localhost:8000) and frontend (localhost:5000) containers spin up and are accessible through the browser, but I can't access the backend container from the frontend container.
From within frontend:
/usr/src/nuxt-app # curl http://localhost:8000 -v
* Trying 127.0.0.1:8000...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8000 failed: Connection refused
* Trying ::1:8000...
* TCP_NODELAY set
* Immediate connect fail for ::1: Address not available
* Trying ::1:8000...
* TCP_NODELAY set
* Immediate connect fail for ::1: Address not available
* Failed to connect to localhost port 8000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8000: Connection refused
My nuxt app (frontend) is using axios to call http://localhost:8000/preview/api/qc/. When the frontend starts up, I can see axios catching the error Error: connect ECONNREFUSED 127.0.0.1:8000. In the console it says [HMR] connected though.
If I make a change to index.vue, the frontend reloads and then in the console it displays:
access to XMLHttpRequest at 'http://localhost:8000/preview/api/qc/' from origin 'http://localhost:5000' has been blocked by CORS policy: Request header field access-control-allow-origin is not allowed by Access-Control-Allow-Headers in preflight response. VM11:1 GET http://localhost:8000/preview/api/qc/ net::ERR_FAILED
I have already set up django-cors-headers (included it in INSTALLED_APPS, and set ALLOWED_HOSTS = ['*'] and CORS_ALLOW_ALL_ORIGINS = True).
In my nuxt.config.js I have set
axios: {
  headers: {
    "Access-Control-Allow-Origin": ["*"],
  }
},
I'm stuck as to what is going wrong. I think it's likely my docker-compose or Dockerfile.
docker-compose.yml
backend:
  build: ./backend
  volumes:
    - ./backend:/srv/app
  ports:
    - "8000:8000"
  command: python manage.py runserver 0.0.0.0:8000
  depends_on:
    - db
  networks:
    - main
frontend:
  build:
    context: ./frontend
  volumes:
    - ./frontend:/usr/src/nuxt-app
    - /usr/src/nuxt-app/node_modules
  command: >
    sh -c "yarn build && yarn dev"
  ports:
    - "5000:5000"
  depends_on:
    - backend
  networks:
    - main
networks:
  main:
    driver: bridge
Dockerfile
FROM node:15.14.0-alpine3.10
WORKDIR /usr/src/nuxt-app
RUN apk update && apk upgrade
RUN npm install -g npm@latest
COPY package*.json ./
RUN npm install
EXPOSE 5000
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=5000
What am I missing?
I think you have 2 different errors.
The first one.
My nuxt app (frontend) is using axios to call http://localhost:8000/preview/api/qc/. When the frontend starts up, I can see axios catching the error Error: connect ECONNREFUSED 127.0.0.1:8000. In the console it says [HMR] connected though.
These are SSR requests from Nuxt to Django. The Nuxt app inside the container cannot connect to localhost:8000, but you can connect to the Django container via http://django_container:8000/api/qc/, where django_container is the name of your Django container.
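For a quick check that the containers can actually reach each other by service name, you can curl the backend from inside the frontend container (the service names backend and frontend are taken from the docker-compose.yml above, and curl is already available there judging by your output):

docker-compose exec frontend curl -v http://backend:8000/preview/api/qc/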
In the Nuxt config you can set up different URLs for the server side and the client side, like this, so SSR requests go to the Django container directly and client-side requests go to the localhost port.
nuxt.config.js
export default {
  // ...
  // Axios module configuration: https://go.nuxtjs.dev/config-axios
  axios: {
    baseURL: process.browser ? 'http://localhost:8000' : 'http://django_container:8000'
  },
  // ...
}
The second one.
access to XMLHttpRequest at 'http://localhost:8000/preview/api/qc/' from origin 'http://localhost:5000' has been blocked by CORS policy: Request header field access-control-allow-origin is not allowed by Access-Control-Allow-Headers in preflight response. VM11:1 GET http://localhost:8000/preview/api/qc/ net::ERR_FAILED
This is a client-side request from your browser to Django. I think it's better to set CORS_ORIGIN_WHITELIST explicitly. You can also enable CORS_ALLOW_CREDENTIALS. I can't guarantee it, but I hope it helps.
CORS_ALLOW_CREDENTIALS = True
CORS_ORIGIN_WHITELIST = ['http://localhost:5000', 'http://127.0.0.1:5000']
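To see what the Django side actually answers, you can simulate the browser's preflight request with curl from the host and inspect the Access-Control-* response headers (the URL and headers below mirror the error message above):

curl -i -X OPTIONS http://localhost:8000/preview/api/qc/ \
  -H "Origin: http://localhost:5000" \
  -H "Access-Control-Request-Method: GET" \
  -H "Access-Control-Request-Headers: access-control-allow-origin"

If the response's Access-Control-Allow-Headers does not include the header named in the error, that is the mismatch the browser is complaining about.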
I am struggling to configure an AWS ECS task definition to run simple PHP-FPM and Nginx based containers.
My "app" container is running on port 9000 (app:9000) and Nginx is on port 80. Nginx forwards requests to the app container through a fastcgi_pass app-upstream; directive.
All of this runs perfectly locally. The same configuration runs perfectly on a DigitalOcean Docker instance but fails in AWS ECS.
I am guessing there is some task-definition configuration issue, but I can't find it.
Error Logs:
I am getting this log from the Nginx container:
nginx: [emerg] host not found in upstream "app:9000" in /etc/nginx/conf.d/default.conf:2
and this log from the App (PHP-FPM) container
ECS
I've created a simple cluster with a T2 Small instance, which is running perfectly.
Dockerfile
In this GitHub repo, I've added the Dockerfile for the App and Nginx images, the docker-compose.yml file, and the task-definition.json file for reference, in case there is any mistake in the Docker files.
Source code in Github repo: https://github.com/arifulhb/docker-ecr-sample
Your issue is related to a wrong upstream path in the nginx configuration.
Do the following things to investigate; also, avoid using custom container names in docker-compose files unless specifically needed:
A quick resolution would be:
Remove container names from the docker-compose files.
The service key name (for example: test_app) in the docker-compose file is treated as the container name automatically, so use that.
After making the above changes, the correct upstream path should be test_app:9000.
The proper and recommended way of building a docker-compose file:
Create a custom Docker network, say with the name "intranet".
Mention this network "intranet" in each service you create in your docker-compose file.
Follow the steps mentioned in the quick resolution.
How does this help you? You can inspect the network you created, figure out whether your containers are properly connected, and identify the names used for the connection.
Command: docker network inspect <network_name>
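For example, one way to create the custom network from the CLI and then inspect it, using the example name intranet from the steps above (Compose can also create the network for you if you declare it in the file):

docker network create intranet
docker network inspect intranet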
NOTE : Docker treats container names as host names by default for internal connections.
When using multiple containers, the container name is very important for internal connectivity.
Looking at your Docker Compose file, the container name should match the name used in the nginx conf.
version: '3'
services:
  test_app:
    container_name: app  # not test_app_1
    volumes:
      - test-app-data:/var/www/app
  test_nginx:
    image: "xxxxxx.dkr.ecr.xx-xx-1.amazonaws.com/test-nginx"
    build:
      context: ./docker/nginx
      dockerfile: Dockerfile
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - test-app-data:/var/www/app
      - test-nginx-log:/var/log/nginx
    external_links:
      - app  # not test_app_1
    depends_on:
      - test_app
volumes:
  test-app-data:
  test-nginx-log:
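When running this locally with Docker Compose, you can confirm from the host which networks the app container is attached to (app being the container_name set above); the Aliases listed there are the hostnames other containers can use:

docker inspect app --format '{{json .NetworkSettings.Networks}}'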
I had the same problem. My "blog" container runs at blog:8000 and the "nginx" container on port 80; the "nginx" container forwards requests to the "blog" container.
It failed because I hadn't set "Links" in "NETWORK SETTINGS" for the "nginx" container at all.
So I put the name of the back container, "blog", into the front container "nginx" ("Links" in "NETWORK SETTINGS" for the "nginx" container).
Then both the "blog" and "nginx" containers ran properly.
So in your case, put the name of the back container, "app", into the front container "nginx" ("Links" in "NETWORK SETTINGS" for the "nginx" container). It will work. I used Adiii's solution.
Don't forget to put "CMD-SHELL, curl -f http://localhost:9000/ || exit 1" in "Command" under "HEALTHCHECK" for the "app" container.
Also, don't forget to put "app" in "Container name" and select "HEALTHY" as the "Condition" under "STARTUP DEPENDENCY ORDERING" for the "nginx" container.
Hi everyone. I'm working with Docker and trying to dockerize a simple Django application that makes an external HTTP connection to a web page (a real website).
When I set the address of my Django server in the Dockerfile to 127.0.0.1:8000 (the address it should use inside the container), my app wasn't working because it was impossible to make an external connection to the website.
But when I set the address for my server to 0.0.0.0:8000, it started to work.
So my question is: why does it behave like that? What is the difference in this particular case? I just want to understand it.
I read some articles about 0.0.0.0, and it's like a 'generic' or 'placeholder' address that allows using the OS default.
127.0.0.1 is like a host address that redirects the request to the current machine; that much I knew.
But when I ran the app on my local machine (host: 127.0.0.1:8000) everything worked and the app could connect to the real website, whereas in Docker it stopped working.
Thanks for any help!
Here are my sources:
Dockerfile
FROM python:3.6
RUN mkdir /code
WORKDIR /code
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . ./
EXPOSE 8000
ENTRYPOINT [ "python", "manage.py" ]
CMD [ "runserver", "127.0.0.1:8000" ] # doesn't work
# CMD [ "runserver", "0.0.0.0:8000" ] - works
docker-compose.yml
version: "3"
services:
url_rest:
container_name: url_keys_rest
build:
context: .
dockerfile: Dockerfile
image: url_keys_rest_image
stdin_open: true
tty: true
volumes:
- .:/var/www/url_keys
ports:
- "8000:8000"
Here is the HTTP error that I received in the 127.0.0.1 case. Maybe it will be useful.
http: error: ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: /api/urls (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10cd51e80>: Failed to establish a new connection: [Errno 61] Connection refused')) while doing GET request to URL: http://127.0.0.1:8000/api/urls
You must set a container’s main process to bind to the special 0.0.0.0 “all interfaces” address, or it will be unreachable from outside the container.
In Docker 127.0.0.1 almost always means “this container”, not “this machine”. If you make an outbound connection to 127.0.0.1 from a container it will return to the same container; if you bind a server to 127.0.0.1 it will not accept connections from outside.
One of the core things Docker does is to give each container its own separate network space. In particular, each container has its own lo interface and its own notion of localhost.
At a very low level, network services call the bind(2) system call to start accepting connections. That takes an address parameter. It can be one of two things: either it can be the IP address of some system interface, or it can be the special 0.0.0.0 “all interfaces” address. If you pick an interface, it will only accept connections from that interface; if you have two network cards on a physical system, for example, you can use this to only accept connections from one network but not the other.
So, if you set a service to bind to 127.0.0.1, that's the address of the lo interface, and the service will only accept connections from that interface. But each container has its own lo interface and its own localhost, so this setting causes the service to refuse connections unless they're initiated from within the container itself. If you set it to bind to 0.0.0.0, it will also accept connections from the per-container eth0 interface, which is where all connections from outside the container arrive.
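A quick way to see this difference with the compose file above (url_keys_rest is the container_name it sets; the standard python:3.6 image ships with curl, though that is worth double-checking):

# With runserver bound to 127.0.0.1, only the container itself can reach it:
docker exec url_keys_rest curl -s http://127.0.0.1:8000/api/urls

# ...while the same request against the published port on the host is refused:
curl -s http://localhost:8000/api/urls

# After switching the CMD to 0.0.0.0:8000, both requests succeed.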
My understanding is that Docker assigns an IP address to each container rather than using localhost (127.*.*.*), so using 0.0.0.0 to listen inside the dockerized application will work. I once tried to connect to a local database from inside a Dockerfile using localhost, and it didn't work either; I guess it is for the same reason. Correct me if I am wrong, please!
Update: I attached an intuitive image to show how Docker interacts with those IP addresses. Hope this helps with understanding.
I am deploying two containers, uwsgi and nginx, to an AWS ECS repository.
I am using Fargate to deploy and set up the containers, but I am getting an error in the communication between the containers.
Error: host not found in upstream "flask_app" in /etc/nginx/conf.d/nginx.conf.
Docker compose yml file.
version: "3"
services:
db:
container_name: db
image: postgres
restart: always
environment:
POSTGRES_DB: XXXXX
POSTGRES_USER: XXXX
POSTGRES_PASSWORD: XXXXX
ports:
- "54321:5432"
flask_app:
container_name: flask_app
image: XXXXXXXXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/YYYYY:flask
build:
context: ./
dockerfile: ./docker/Dockerfile-flask
volumes:
- .:/app
depends_on:
- db
ports:
- 5000:5000
links:
- db
nginx:
container_name: nginx
image: XXXXXXXXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/YYYYY:backend
env_file:
- ./docker/users.variables.env
build:
context: .
dockerfile: ./docker/Dockerfile-nginx
ports:
- 8080:80
depends_on:
- flask_app
links:
- flask_app
Nginx (nginx.conf):
server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;

    location / {
        resolver 169.254.169.253;
        include uwsgi_params;
        proxy_pass http://flask_app:5000/;
        proxy_set_header Host "localhost";
    }
}
UWSGI.ini:
[uwsgi]
protocol = http
; This is the name of our Python file
; minus the file extension
module = start
; This is the name of the variable
; in our script that will be called
callable = app
master = true
; Set uWSGI to start up 5 workers
processes = 5
; We use the port 5000 which we will
; then expose on our Dockerfile
socket = 0.0.0.0:5000
vacuum = true
die-on-term = true
Error 2019/08/23 12:27:13 [emerg] 1#1: host not found in upstream "flask_app" in /etc/nginx/conf.d/nginx.conf:8
In your example you are relying on the container name (flask_app) set in Docker Compose to communicate with the other container. This works fine with Docker Compose in a local environment, but not in ECS using Fargate.
Deployments using Fargate use the awsvpc network mode. This allows these containers to communicate with each other over localhost provided that they belong to the same task - see the AWS documentation: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
If you are trying to configure an nginx reverse proxy sidecar, then these containers will belong to the same task. I realise the example below uses the uwsgi protocol rather than http, but it does demonstrate communication over localhost (unless you have to use http, you can easily modify your config to use uwsgi). Here is a working configuration for nginx and uwsgi as a Fargate deployment using the uwsgi protocol:
nginx.conf:
server {
    listen 80;
    server_name localhost 127.0.0.1;
    root /usr/share/nginx/html;

    location / {
        include uwsgi_params;
        uwsgi_pass localhost:5000;
    }
}
and uwsgi.ini like this:
[uwsgi]
module = wsgi:app
uid = www-data
gid = www-data
master = true
processes = 3
socket=localhost:5000
vacuum = true
die-on-term = true
(assuming you use port 5000 as in your example)
This configuration will not work in your local environment with Docker Compose and a bridge network, but if you are on Linux you can simulate it using a host network. Instructions on setting up the local test environment can be found here:
https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
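As a very rough local approximation of that shared-localhost behaviour on Linux, you could run both containers with host networking; the image names below are placeholders for your own uwsgi and nginx images:

docker run -d --network host my-uwsgi-image   # placeholder: your uwsgi/flask image
docker run -d --network host my-nginx-image   # placeholder: your nginx image
curl -i http://localhost/                     # nginx on port 80 proxies to uwsgi on localhost:5000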
Hope this goes some way to resolving the issue.
Based on the image 'python:2.7', I created two containers: container1 and container2.
Dockerfile for test1:
FROM python:2.7
EXPOSE 6789
CMD ["bash", "-c", "'while true;do sleep 1000;done;'"]
Dockerfile for test2:
FROM python:2.7
EXPOSE 9876
CMD ["bash", "-c", "'while true;do sleep 1000;done;'"]
Then I built two new images with the dockerfiles above, named: test1, test2
Container1:
docker run --name container1 test1
And I also set up a Django server on port 6789 in container1 with:
#In Django workspace
./manage.py runserver 6789
Container2:
docker run --name container2 --link container1:container1 test2
And I also set up a Django server on port 9876 in container2 with:
#In Django workspace
./manage.py runserver 9876
In container2, when I run
curl container1_ip:6789
I got a connection refused error.
How can I configure it to make it work?
I also created a container from the official nginx image, which has two ports (80, 443) exposed by default, and then created another container linked to the nginx container. In that container, I did the same thing:
curl nginx_ip:80 #successful
and
curl nginx_ip:443 #connection refused
Why does this happen? Why does 80 work fine while 443 doesn't?
What are you trying to do by "curling" port 6789/9876 of your container? Is there anything (an FTP server or something else?) behind these ports?
Before trying to reach it from the other container, you should try reaching it from the container itself.
In container1 :
curl container1_ip:6789
I think you can access these ports, but there is just nothing listening on them in your container.
EDIT: If you downvote me, please comment explaining why, so I can improve my answers. Thanks.
The problem is not how to configure Docker; it is a configuration issue on the Django side. Django's runserver listens on localhost:port by default, so we should specify an IP address to listen on, or use '0.0.0.0' to make it listen on all addresses.
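For example, inside container1 (using the port from the question):

# Bind the Django dev server to all interfaces so other containers can reach it
./manage.py runserver 0.0.0.0:6789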