I am using MUP to deploy a Meteor app to an EC2 instance running Ubuntu 18. The deployment seems to work, but when I try to access the instance's public URL in my browser, I get "connection refused." I'm going crazy with this one!
I assumed this would be an AWS issue, like a port not being open, but my EC2 security group inbound rules look like they should allow the traffic.
I SSH'ed into the instance to check whether everything is working, and I think it is. For starters, the Docker container seems to be running fine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2b70717ce5c9 mup-oil-pricing:latest "/bin/sh -c 'exec $M…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp oil-pricing
While still SSH'ed in, when I run curl localhost:80 I get HTML back in the console, which suggests the app (a Meteor app) is running fine.
I checked to see if the Ubuntu firewall is active, and I don't think it is:
ubuntu@ip-172-30-1-118:~$ sudo ufw status verbose
Status: inactive
My ports also seem fine (as far as I can tell):
ubuntu@ip-172-30-1-118:~$ sudo netstat -tulpn | grep LISTEN
tcp 0 0 10.0.3.1:53 0.0.0.0:* LISTEN 3230/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 344/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 7903/sshd: /usr/sbi
tcp6 0 0 :::22 :::* LISTEN 7903/sshd: /usr/sbi
tcp6 0 0 :::80 :::* LISTEN 13597/docker-proxy
But when I go to Chrome on my local machine and try to access the site via the Elastic IP I've assigned (34.231.39.181) or via the EC2 public DNS name (https://ec2-34-231-39-181.compute-1.amazonaws.com/), I get:
This site can’t be reached
ec2-34-231-39-181.compute-1.amazonaws.com refused to connect.
I don't think it's a MUP issue, but here's the MUP config just in case that matters:
module.exports = {
servers: {
one: {
host: '34.231.39.181',
username: 'ubuntu',
pem: [[MY PEM FILE]]
}
},
hooks: {
'pre.deploy': {
remoteCommand: 'docker system prune -a --force' // PRUNE DOCKER IMAGES
},
},
app: {
name: 'oil-pricing',
path: '../',
servers: {
one: {},
},
buildOptions: {
serverOnly: true,
},
env: {
ROOT_URL: 'https://ec2-34-231-39-181.compute-1.amazonaws.com/',
MONGO_URL: [[MY MONGO URL]],
PORT: 80,
},
docker: {
image: 'abernix/meteord:node-8.15.1-base', // per: https://github.com/zodern/meteor-up/issues/692
},
enableUploadProgressBar: true
},
};
When I run mup deploy everything checks out:
Started TaskList: Pushing Meteor App
[34.231.39.181] - Pushing Meteor App Bundle to the Server
[34.231.39.181] - Pushing Meteor App Bundle to the Server: SUCCESS
[34.231.39.181] - Prepare Bundle
[34.231.39.181] - Prepare Bundle: SUCCESS
Started TaskList: Configuring App
[34.231.39.181] - Pushing the Startup Script
[34.231.39.181] - Pushing the Startup Script: SUCCESS
[34.231.39.181] - Sending Environment Variables
[34.231.39.181] - Sending Environment Variables: SUCCESS
Started TaskList: Start Meteor
[34.231.39.181] - Start Meteor
[34.231.39.181] - Start Meteor: SUCCESS
[34.231.39.181] - Verifying Deployment
[34.231.39.181] - Verifying Deployment: SUCCESS
I'm using Meteor 1.8.1 if that matters.
Any help would be greatly appreciated!
Your sudo netstat -tulpn | grep LISTEN output shows that you are listening on port 80, but you are using HTTPS in:
https://ec2-34-231-39-181.compute-1.amazonaws.com
This connects to port 443, which nothing is listening on. So either change your app to accept HTTPS connections on port 443 (which requires proper SSL certificates), or use HTTP, which goes to port 80 (unencrypted):
http://ec2-34-231-39-181.compute-1.amazonaws.com
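If you want to double-check this from your local machine before changing anything, a quick probe of both ports makes it obvious. A minimal sketch using only the Python standard library (the hostname is the one from the question):
import socket

HOST = "ec2-34-231-39-181.compute-1.amazonaws.com"  # hostname from the question

for port in (80, 443):
    try:
        # create_connection raises OSError (e.g. ConnectionRefusedError) on failure
        with socket.create_connection((HOST, port), timeout=5):
            print(f"port {port}: open, something is listening")
    except OSError as exc:
        print(f"port {port}: {exc}")
With the setup described above, port 80 should report open and port 443 should be refused, which matches the browser error.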
Related
Both my backend (localhost:8000) and frontend (localhost:5000) containers spin up and are accessible through the browser, but I can't access the backend container from the frontend container.
From within frontend:
/usr/src/nuxt-app # curl http://localhost:8000 -v
* Trying 127.0.0.1:8000...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8000 failed: Connection refused
* Trying ::1:8000...
* TCP_NODELAY set
* Immediate connect fail for ::1: Address not available
* Trying ::1:8000...
* TCP_NODELAY set
* Immediate connect fail for ::1: Address not available
* Failed to connect to localhost port 8000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8000: Connection refused
My Nuxt app (frontend) uses axios to call http://localhost:8000/preview/api/qc/. When the frontend starts up, I can see axios catching the error Error: connect ECONNREFUSED 127.0.0.1:8000. The console says [HMR] connected, though.
If I make a change to index.vue, the frontend reloads and then in the console it displays:
access to XMLHttpRequest at 'http://localhost:8000/preview/api/qc/' from origin 'http://localhost:5000' has been blocked by CORS policy: Request header field access-control-allow-origin is not allowed by Access-Control-Allow-Headers in preflight response. VM11:1 GET http://localhost:8000/preview/api/qc/ net::ERR_FAILED
I have already set up django-cors-headers (included it in INSTALLED_APPS, and set ALLOWED_HOSTS = ['*'] and CORS_ALLOW_ALL_ORIGINS = True).
In my nuxt.config.js I have set
axios: {
headers: {
"Access-Control-Allow-Origin": ["*"],
}
},
I'm stuck as to what is going wrong. I think it's likely my docker-compose or Dockerfile.
docker-compose.yml
backend:
build: ./backend
volumes:
- ./backend:/srv/app
ports:
- "8000:8000"
command: python manage.py runserver 0.0.0.0:8000
depends_on:
- db
networks:
- main
frontend:
build:
context: ./frontend
volumes:
- ./frontend:/usr/src/nuxt-app
- /usr/src/nuxt-app/node_modules
command: >
sh -c "yarn build && yarn dev"
ports:
- "5000:5000"
depends_on:
- backend
networks:
- main
networks:
main:
driver: bridge
Dockerfile
FROM node:15.14.0-alpine3.10
WORKDIR /usr/src/nuxt-app
RUN apk update && apk upgrade
RUN npm install -g npm@latest
COPY package*.json ./
RUN npm install
EXPOSE 5000
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=5000
What am I missing?
I think you have 2 different errors.
The first one.
My nuxt app (frontend) is using axios to call http://localhost:8000/preview/api/qc/. When the frontend starts up, I can see axios catching errorError: connect ECONNREFUSED 127.0.0.1:8000. In the console it says [HMR] connected though.
These are SSR requests from Nuxt to Django. The Nuxt app inside its container cannot connect to localhost:8000, because localhost there refers to the frontend container itself. But you can connect to the Django container via http://django_container:8000/api/qc/, where django_container is the name of your Django container.
In the Nuxt config you can set up different URLs for the server side and the client side, like this, so SSR requests go directly to the Django container and client-side requests go to the localhost port:
nuxt.config.js
export default {
// ...
// Axios module configuration: https://go.nuxtjs.dev/config-axios
axios: {
baseURL: process.browser ? 'http://localhost:8000' : 'http://django_container:8000'
},
// ...
}
The second one.
access to XMLHttpRequest at 'http://localhost:8000/preview/api/qc/' from origin 'http://localhost:5000' has been blocked by CORS policy: Request header field access-control-allow-origin is not allowed by Access-Control-Allow-Headers in preflight response. VM11:1 GET http://localhost:8000/preview/api/qc/ net::ERR_FAILED
This is a client-side request from your browser to Django. I think it's better to set CORS_ORIGIN_WHITELIST explicitly. You can also enable CORS_ALLOW_CREDENTIALS. I can't guarantee it, but I hope it helps.
CORS_ALLOW_CREDENTIALS = True
CORS_ORIGIN_WHITELIST = ['http://localhost:5000', 'http://127.0.0.1:5000']
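For context, here is a minimal sketch of how those settings fit into settings.py, assuming a standard django-cors-headers setup (the corsheaders app and CorsMiddleware names are the library's own; everything else is illustrative):
# settings.py
INSTALLED_APPS = [
    # ...
    'corsheaders',
]

MIDDLEWARE = [
    # CorsMiddleware should sit as high as possible, before any middleware
    # that can generate responses (e.g. CommonMiddleware)
    'corsheaders.middleware.CorsMiddleware',
    'django.middleware.common.CommonMiddleware',
    # ...
]

CORS_ALLOW_CREDENTIALS = True
CORS_ORIGIN_WHITELIST = ['http://localhost:5000', 'http://127.0.0.1:5000']
It may also be worth removing the Access-Control-Allow-Origin entry from the axios headers in nuxt.config.js: that is a response header, and sending it as a request header is exactly what the quoted preflight error message is complaining about.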
Hi everyone. I'm working with Docker, trying to dockerize a simple Django application that makes an external HTTP connection to a web page (a real website).
When I set the address my Django server should use inside the container to 127.0.0.1:8000 in the Dockerfile, my app wasn't working because it was impossible to connect to it from outside.
But when I set the server address to 0.0.0.0:8000, it started to work.
So my question is: why does it behave like that? What is the difference in this particular case? I just want to understand it.
I've read some articles about 0.0.0.0: it's like a 'generic' or 'placeholder' address that tells the OS to accept connections on any of its interfaces.
127.0.0.1 is the address that points a request back to the current machine; that much I knew.
But when I ran the app on my local machine (host 127.0.0.1:8000), everything worked and the app could connect to the real website; only in Docker did it stop working.
Thanks for any help!
Here are my sources:
Docker file
FROM python:3.6
RUN mkdir /code
WORKDIR /code
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . ./
EXPOSE 8000
ENTRYPOINT [ "python", "manage.py" ]
CMD [ "runserver", "127.0.0.1:8000" ] # doesn't work
# CMD [ "runserver", "0.0.0.0:8000" ] - works
docker-compose.yml
version: "3"
services:
url_rest:
container_name: url_keys_rest
build:
context: .
dockerfile: Dockerfile
image: url_keys_rest_image
stdin_open: true
tty: true
volumes:
- .:/var/www/url_keys
ports:
- "8000:8000"
Here is the HTTP error I received in the 127.0.0.1 case; maybe it will be useful:
http: error: ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: /api/urls (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10cd51e80>: Failed to establish a new connection: [Errno 61] Connection refused')) while doing GET request to URL: http://127.0.0.1:8000/api/urls
You must set a container’s main process to bind to the special 0.0.0.0 “all interfaces” address, or it will be unreachable from outside the container.
In Docker 127.0.0.1 almost always means “this container”, not “this machine”. If you make an outbound connection to 127.0.0.1 from a container it will return to the same container; if you bind a server to 127.0.0.1 it will not accept connections from outside.
One of the core things Docker does is to give each container its own separate network space. In particular, each container has its own lo interface and its own notion of localhost.
At a very low level, network services call the bind(2) system call to start accepting connections. That takes an address parameter. It can be one of two things: either it can be the IP address of some system interface, or it can be the special 0.0.0.0 “all interfaces” address. If you pick an interface, it will only accept connections from that interface; if you have two network cards on a physical system, for example, you can use this to only accept connections from one network but not the other.
So, if you set a service to bind to 127.0.0.1, that's the address of the lo interface, and the service will only accept connections from that interface. But each container has its own lo interface and its own localhost, so this setting causes the service to refuse connections unless they're initiated from within the container itself. If you set it to bind to 0.0.0.0, it will also accept connections from the per-container eth0 interface, where all connections from outside the container arrive.
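To make the difference concrete, here is a minimal standalone sketch (not from the question) using only the Python standard library. Run it inside a container with the port published, and try connecting from the host with each bind address:
import socket

BIND_ADDR = "0.0.0.0"   # change to "127.0.0.1" to see the failing behaviour
PORT = 8000             # arbitrary port for the demo

# bind() is where the interface choice happens: 127.0.0.1 restricts the
# listener to the container's own loopback interface, while 0.0.0.0 also
# covers the container's eth0, where outside connections arrive.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((BIND_ADDR, PORT))
server.listen()
print(f"listening on {BIND_ADDR}:{PORT}")

conn, peer = server.accept()   # blocks until a client connects
print("accepted connection from", peer)
conn.close()
server.close()
With docker run -p 8000:8000 and BIND_ADDR = "0.0.0.0", a connection from the host to localhost:8000 reaches the accept() call; switch to 127.0.0.1 and the same connection fails, because the forwarded traffic arrives on the container's eth0 interface, which nothing is bound to.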
My understanding is that Docker assigns each container its own IP address rather than localhost (127.*.*.*), so listening on 0.0.0.0 inside the dockerized application works. I once tried to connect to a local database from inside a container using localhost and that didn't work either; I guess it was for the same reason. Correct me if I'm wrong, please!
Update: I've attached an image to show intuitively how Docker interacts with those IP addresses. Hope it helps with understanding.
In my local development setup, I'm using Django as the web server (à la python manage.py runserver 8000), and I can kubectl port-forward <django_pod> 8000 to route traffic to my local dev machine, so I can interact with my site by browsing to http://localhost:8000/testpage. I've configured ALLOWED_HOSTS to include localhost to enable this.
However, I'd like to avoid using port-forward and go the more proper route of running my traffic through the Kubernetes ingress controller and service. On the same Minikube cluster, I have ingress configured to point certain URLs to a very rudimentary nginx pod, to confirm that my ingress+service+pod networking works properly; all other URLs should route to my Django app. The only difference is that the nginx traffic is all on port 80.
In my ingress controller logs, I can see the traffic being sent to the k8s service for my Django app:
192.168.64.1 - [192.168.64.1] - - [22/Nov/2019:03:50:52 +0000] "GET /testpage HTTP/2.0" 502 565 "-" "browser" 24 0.002 [default-django-service-8000] [] 172.17.0.5:8000, 172.17.0.5:8000, 172.17.0.5:8000 0, 0, 0 0.000, 0.000, 0.000 502, 502, 502 aa2682896e4d7a2052617f7d12b1a02b
When I look at the Django logs, I don't see any traffic hitting it.
My ingress servicePort is 8000, the django-service port is 8000 (I've just let it default to ClusterIP), the pod's spec.containers.ports.containerPort is 8000, and the process has been set to listen on port 8000 as I mentioned before.
When I check kubectl get endpoints, it correctly shows me an endpoint is connected on port 8000 (and it correctly changes to the new pods' IPs when I restart them).
I've used the following guides to try to debug:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
https://medium.com/@ManagedKube/kubernetes-troubleshooting-ingress-and-services-traffic-flows-547ea867b120
My guess is it might be a problem with ALLOWED_HOSTS but I added a * wildcard to the list and it's still not working.
What is wrong with my setup?
You need to instruct the server to listen on all interfaces by running python manage.py runserver 0.0.0.0:<port>.
Django's runserver listens on the local loopback interface (localhost/127.0.0.1) by default, so if you run python manage.py runserver 8000, you can only reach the server on port 8000 from the machine on which the server is running.
There is a lot of documentation around this, but here are just a couple examples:
About IP 0.0.0.0 in Django
http://www.holeintheceiling.com/blog/2012/06/21/django-and-runserver-0-0-0-0/
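As an aside, ALLOWED_HOSTS is a separate knob from the bind address, so the * wildcard mentioned in the question can't fix this. A minimal sketch of the distinction, assuming a standard settings.py and the Django dev server:
# settings.py -- ALLOWED_HOSTS only validates the Host header *after* a
# connection has been accepted; a disallowed host yields "400 Bad Request",
# never a refused connection or an ingress 502.
ALLOWED_HOSTS = ["*"]  # fine for local development

# The bind address is chosen when the process starts, not in settings.py:
#
#   python manage.py runserver 0.0.0.0:8000
#
# With plain "runserver 8000" the dev server binds to 127.0.0.1, so traffic
# the Service sends to the pod's own IP is never accepted -- which is what
# the ingress controller reports as a 502.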
I am trying to run a django-channels project locally using https (the app has a facebook login that requires https).
I have followed the instructions for generating a key and certificate using mkcert ( https://github.com/FiloSottile/mkcert ) and have attempted to use the key and certificate by running daphne -e ssl:443:privateKey=localhost+1-key.pem:certKey=localhost+1.pem django_project.asgi:application -p 8000 -b 0.0.0.0
The server seems to start OK; however, when I try to visit https://0.0.0.0:8000, nothing happens and eventually I get a 'took too long to respond' message.
No new output is added to the standard daphne output that appears when I start up the server:
2019-07-16 19:23:27,818 INFO HTTP/2 support enabled
2019-07-16 19:23:27,818 INFO Configuring endpoint ssl:8443:privateKey=../sec/localhost+1-key.pem:certKey=../sec/localhost+1.pem
2019-07-16 19:23:27,823 INFO Listening on TCP address 0.0.0.0:8443
2019-07-16 19:23:27,823 INFO Configuring endpoint tcp:port=8000:interface=0.0.0.0
2019-07-16 19:23:27,824 INFO Listening on TCP address 0.0.0.0:8000
Can anyone help with this?
You should map the 8000 host port to port 443 of the container while running the server.
docker run ... -p 8000:443 ...
Turns out that setting up the Twisted ssl endpoint overrides the port you set with daphne's -p option, so in the example above the site would be served on port 443.
I have an AWS EC2 Instance running Ubuntu.
I've installed a Parse Server on it from GitHub, using these commands:
$ npm install -g parse-server mongodb-runner
$ mongodb-runner start
$ parse-server --appId APPLICATION_ID --masterKey MASTER_KEY
When I started the server, I got this output:
appId: APPLICATION_ID
masterKey: ***REDACTED***
port: 1337
mountPath: /parse
maxUploadSize: 20mb
serverURL: http://localhost:1337/parse
parse-server running on http://localhost:1337/parse
I've opened another terminal, and I've checked what services are listening on my ports using sudo netstat -plnt
and these are the results:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 937/sshd
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 924/mongod
tcp6 0 0 :::22 :::* LISTEN 937/sshd
As you can see, there is no Parse server running on port 1337.
What can I do to solve this? Maybe something is wrong with its installation?
I just got it working and connected to the server. Apparently it doesn't matter whether it shows up as listening or not.