Both my backend (localhost:8000) and frontend (localhost:5000) containers spin up and are accessible through the browser, but I can't reach the backend container from the frontend container.
From within the frontend container:
/usr/src/nuxt-app # curl http://localhost:8000 -v
* Trying 127.0.0.1:8000...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8000 failed: Connection refused
* Trying ::1:8000...
* TCP_NODELAY set
* Immediate connect fail for ::1: Address not available
* Trying ::1:8000...
* TCP_NODELAY set
* Immediate connect fail for ::1: Address not available
* Failed to connect to localhost port 8000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8000: Connection refused
My Nuxt app (frontend) uses axios to call http://localhost:8000/preview/api/qc/. When the frontend starts up, I can see axios catching the error Error: connect ECONNREFUSED 127.0.0.1:8000. The console does say [HMR] connected, though.
If I make a change to index.vue, the frontend reloads and then in the console it displays:
access to XMLHttpRequest at 'http://localhost:8000/preview/api/qc/' from origin 'http://localhost:5000' has been blocked by CORS policy: Request header field access-control-allow-origin is not allowed by Access-Control-Allow-Headers in preflight response. VM11:1 GET http://localhost:8000/preview/api/qc/ net::ERR_FAILED
I have already set up django-cors-headers (included it in INSTALLED_APPS, and set ALLOWED_HOSTS = ['*'] and CORS_ALLOW_ALL_ORIGINS = True).
In my nuxt.config.js I have set
axios: {
  headers: {
    "Access-Control-Allow-Origin": ["*"],
  }
},
I'm stuck on what's going wrong. I suspect it's my docker-compose.yml or Dockerfile.
docker-compose.yml
backend:
  build: ./backend
  volumes:
    - ./backend:/srv/app
  ports:
    - "8000:8000"
  command: python manage.py runserver 0.0.0.0:8000
  depends_on:
    - db
  networks:
    - main
frontend:
  build:
    context: ./frontend
  volumes:
    - ./frontend:/usr/src/nuxt-app
    - /usr/src/nuxt-app/node_modules
  command: >
    sh -c "yarn build && yarn dev"
  ports:
    - "5000:5000"
  depends_on:
    - backend
  networks:
    - main
networks:
  main:
    driver: bridge
Dockerfile
FROM node:15.14.0-alpine3.10
WORKDIR /usr/src/nuxt-app
RUN apk update && apk upgrade
RUN npm install -g npm@latest
COPY package*.json ./
RUN npm install
EXPOSE 5000
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=5000
What am I missing?
I think you have two different errors.
The first one.
My Nuxt app (frontend) uses axios to call http://localhost:8000/preview/api/qc/. When the frontend starts up, I can see axios catching the error Error: connect ECONNREFUSED 127.0.0.1:8000. The console does say [HMR] connected, though.
These are the SSR requests from Nuxt to Django. The Nuxt app inside its container cannot connect to localhost:8000, because localhost there is the frontend container itself. It can, however, reach the Django container via http://django_container:8000/api/qc/, where django_container is the name of your Django service.
In the Nuxt config you can set different URLs for the server side and the client side, like this, so SSR requests go directly to the Django container while client-side requests from the browser go through the published localhost port:
nuxt.config.js
export default {
  // ...
  // Axios module configuration: https://go.nuxtjs.dev/config-axios
  axios: {
    // Server-side (SSR) requests go to the Django container over the compose network
    baseURL: 'http://django_container:8000',
    // Client-side requests from the browser go to the host's published port
    browserBaseURL: 'http://localhost:8000'
  },
  // ...
}
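In the docker-compose.yml from the question the Django service is named backend, so with that setup the server-side URL would be http://backend:8000.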
The second one.
access to XMLHttpRequest at 'http://localhost:8000/preview/api/qc/' from origin 'http://localhost:5000' has been blocked by CORS policy: Request header field access-control-allow-origin is not allowed by Access-Control-Allow-Headers in preflight response. VM11:1 GET http://localhost:8000/preview/api/qc/ net::ERR_FAILED
This is the client-side request from your browser to Django. Note what the error actually says: the request header field access-control-allow-origin is not allowed. Access-Control-Allow-Origin is a response header sent by the server, so remove it from the axios headers in your nuxt.config.js; sending it on the request is what breaks the preflight. On the Django side, I think it's better to set CORS_ORIGIN_WHITELIST explicitly. You can also enable CORS_ALLOW_CREDENTIALS. I can't guarantee it, but I hope it helps.
CORS_ALLOW_CREDENTIALS = True
CORS_ORIGIN_WHITELIST = ['http://localhost:5000', 'http://127.0.0.1:5000']
Related
I am trying to connect to a websocket from an HTML page in Django. This works when I run it outside a container, but it stops working inside one.
My server service inside my docker-compose file:
server:
  stdin_open: true # docker run -i
  tty: true # docker run -t
  build:
    dockerfile: server.Dockerfile
    context: ./
  volumes:
    - /home/$USER/.ssh:/root/.ssh
  ports:
    - '8000:8000'
  networks:
    drone_net:
      ipv4_address: 10.10.10.2
The URL I use in my HTML page:
let url = `ws://localhost:8000/ws/socket-server/`
The error I get: WebSocket connection to 'ws://localhost:8000/ws/socket-server/' failed:
This is my routing for the websocket:
websocket_urlpatterns = [
    re_path(r'ws/socket-server/', consumers.ChatConsumer.as_asgi()),
]
I first thought it was localhost not resolving, but my HTTP requests to localhost work fine from the same container, so that isn't it. I tried changing the URL to various alternatives, but I was expecting the original one to work.
I found the solution. Weirdly, it was a problem with the channels package version. In the Dockerfile I had installed the most recent version of channels (4.0.0), while my local machine was using channels 3.0.5. When I downgraded the version, it solved my problem.
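A minimal sketch of the pin, assuming the image installs its dependencies from a requirements.txt (the file name is an assumption, not shown in the question):
# requirements.txt: pin channels so the container matches the local environment
channels==3.0.5
(For context, channels 4.0 moved the runserver integration into the separate daphne package, which is a common reason a 3.x project breaks after an unpinned upgrade.)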
I'm running a simple Django application without any complicated setup (mostly defaults, plus django-allauth & Django REST Framework).
The infrastructure for running both locally and remotely is in a docker-compose file:
version: "3"
services:
web:
image: web_app
build:
context: .
dockerfile: Dockerfile
command: gunicorn my_service.wsgi --reload --bind 0.0.0.0:80 --workers 3
env_file: .env
volumes:
- ./my_repo/:/app:z
depends_on:
- db
environment:
- DOCKER=1
nginx:
image: nginx_build
build:
context: nginx
dockerfile: Dockerfile
volumes:
- ./my_repo/:/app:z
ports:
- "7000:80"
... # db and so on
As you can see, I'm using Gunicorn to serve the application and Nginx as a proxy (for static files and the Let's Encrypt setup). The Nginx container has some customizations:
FROM nginx:1.21-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
And the nginx.conf file is a reverse proxy with a static mapping:
server {
    listen 80;

    location / {
        proxy_pass http://web;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /app/my_repo/static/;
    }
}
Running this on the server after setting up Let's Encrypt in the Nginx container works without any issue, but locally I get the "CSRF verification failed. Request aborted." error every time I submit a form (e.g. creating a dummy user in Django Admin). When I exposed the web container's port directly and submitted the forms through it, it worked.
Because of that, I deduce that something is missing in the Nginx config, or that Django needs to be told how to handle the proxy setup. So, what am I missing, and how should I investigate this?
Since you're using a proxy that translates HTTPS requests into HTTP, you need to configure Django to allow POST requests from a different scheme (a requirement since Django 4.0) by adding this to settings.py:
CSRF_TRUSTED_ORIGINS = ["https://yourdomain.com", "https://www.yourdomain.com"]
If this does not solve your problem, you can temporarily set DEBUG = True in production and try again. On the error page, you will see a "Reason given for failure" that you can post here.
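Relatedly, because the proxy terminates TLS, Django itself only ever sees plain HTTP, so request.is_secure() returns False. A minimal sketch of the usual companion setting, assuming nginx is also configured with proxy_set_header X-Forwarded-Proto $scheme; in the location block (it is not in the config shown above):
# settings.py (sketch): trust the proxy's X-Forwarded-Proto header.
# Only safe when every request is guaranteed to pass through a proxy that sets it.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")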
I have deployed an nginx container and exposed port 8080:80, but when I do curl localhost:8080 I get "Recv failure: Connection reset by peer". I have added an inbound rule allowing traffic on port 8080 through to the container.
Welcome Hugo Calderon,
I didn't find any code in your question, but I'd like to add a good example here explaining how to start a simple Nginx server.
My structure directory
|____nginx
| |____Dockerfile
| |____default.conf
|____docker-compose.yml
./docker-compose.yml
version: '3'
services:
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    ports:
      - '8080:80'
nginx/default.conf
server {
    listen 80;

    location / {
        return 200 'Hello world!';
    }
}
nginx/Dockerfile
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
Execute the following command:
docker-compose up -d
The previous command will run an nginx container.
curl http://localhost:8080
After executing curl, you should get a message like the following.
Hello world!
If you need to change the message or add new logic in the default.conf file, make sure to run docker-compose build and then docker-compose up -d again (or combine the two with docker-compose up -d --build); the new change will then be included in the container.
I hope this will be useful to you and other users!
I am using a React client, Django for the backend, and Postgres for the DB. I am preparing Docker images of the client, the server, and the DB.
My docker-compose.yml looks like this:
version: '3'
services:
  client:
    build: ./client
    stdin_open: true
    ports:
      - "3000:3000"
    depends_on:
      - server
  server:
    build: ./server
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: "postgres:12-alpine"
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: bLah6laH614h
Because the Docker images can be deployed anywhere separately, I am unsure how to reach the server from the client code, or what the db's address should be in the server. I am new to React, Django and Docker. Please help!
Using your docker-compose.yml configuration as a basis, four things will happen, as per the docs:
1. A network called myapp_default is created (assuming your project folder is named myapp).
2. A container is created using db's configuration. It joins the network myapp_default under the name db.
3. A container is created using server's configuration. It joins the network myapp_default under the name server.
4. A container is created using client's configuration. It joins the network myapp_default under the name client.
Now, to send an HTTP request from client to server you should use this URL:
http://server:8000
because of item 3, and because the server's configured port is 8000. Note that this applies to requests made from inside the client container (for example during server-side rendering or a build step); requests made by the browser itself originate on the host machine, where the service name server does not resolve, so they should go to the published port at http://localhost:8000.
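The same logic covers the server reaching the db container: use the service name db as the host. A minimal sketch of the Django database settings, where NAME and USER are assumptions (the postgres image defaults) and only the password comes from your compose file:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",          # assumption: default database created by the postgres image
        "USER": "postgres",          # assumption: default superuser of the postgres image
        "PASSWORD": "bLah6laH614h",  # POSTGRES_PASSWORD from your docker-compose.yml
        "HOST": "db",                # the compose service name, not a hard-coded IP
        "PORT": "5432",
    }
}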
Hey everyone. I'm working with Docker, trying to dockerize a simple Django application that makes an external HTTP connection to a real website.
When I set the address the Django server should bind to inside the container to 127.0.0.1:8000 in the Dockerfile, my app wasn't working: it seemed unable to make the external connection to the website.
But when I set the server address to 0.0.0.0:8000, it started to work.
So my question is: why does it behave like that? What is the difference in this particular case? I just want to understand it.
I read some articles about 0.0.0.0; it's described as a 'generic' or 'placeholder' address that tells the OS to accept connections on all interfaces.
127.0.0.1 is an address that points back to the current machine; I knew that already.
But when I ran the app on my local machine (host 127.0.0.1:8000) everything worked and the app could connect to the real website, while in Docker it stopped working.
Thanks for any help!
Here are my sources:
Dockerfile
FROM python:3.6
RUN mkdir /code
WORKDIR /code
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . ./
EXPOSE 8000
ENTRYPOINT [ "python", "manage.py" ]
CMD [ "runserver", "127.0.0.1:8000" ] # doesn't work
# CMD [ "runserver", "0.0.0.0:8000" ] - works
docker-compose.yml
version: "3"
services:
url_rest:
container_name: url_keys_rest
build:
context: .
dockerfile: Dockerfile
image: url_keys_rest_image
stdin_open: true
tty: true
volumes:
- .:/var/www/url_keys
ports:
- "8000:8000"
Here is the HTTP error that I received in the 127.0.0.1 case. Maybe it will be useful:
http: error: ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: /api/urls (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10cd51e80>: Failed to establish a new connection: [Errno 61] Connection refused')) while doing GET request to URL: http://127.0.0.1:8000/api/urls
You must set a container’s main process to bind to the special 0.0.0.0 “all interfaces” address, or it will be unreachable from outside the container.
In Docker 127.0.0.1 almost always means “this container”, not “this machine”. If you make an outbound connection to 127.0.0.1 from a container it will return to the same container; if you bind a server to 127.0.0.1 it will not accept connections from outside.
One of the core things Docker does is to give each container its own separate network space. In particular, each container has its own lo interface and its own notion of localhost.
At a very low level, network services call the bind(2) system call to start accepting connections. That takes an address parameter. It can be one of two things: either it can be the IP address of some system interface, or it can be the special 0.0.0.0 “all interfaces” address. If you pick an interface, it will only accept connections from that interface; if you have two network cards on a physical system, for example, you can use this to only accept connections from one network but not the other.
So, if you set a service to bind to 127.0.0.1, that's the address of the lo interface, and the service will only accept connections from that interface. But each container has its own lo interface and its own localhost, so this setting causes the service to refuse connections unless they're initiated from within the container itself. If you set it to bind to 0.0.0.0, it will also accept connections from the per-container eth0 interface, where all connections from outside the container arrive.
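To make the bind(2) distinction concrete, here is a minimal Python sketch (not from the original answer): run inside a container, only the second socket is reachable through a published port.
import socket

# Loopback only: reachable solely from inside this network namespace,
# i.e. from within the same container.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 8000))
loopback_only.listen()

# All interfaces: also accepts connections arriving on the container's eth0,
# which is where traffic forwarded from the host's published port comes in.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 8001))
all_interfaces.listen()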
My understanding is that Docker assigns each container its own IP address instead of sharing the host's localhost (127.0.0.0/8), so listening on 0.0.0.0 inside the dockerized application works. I once tried to connect to a local database from inside a container using localhost and it didn't work either, which I suspect is the same reason. Correct me if I'm wrong, please!
Update: I've attached a diagram showing how Docker interacts with those IP addresses; I hope it helps with understanding.