I am currently working on a project using the technologies mentioned in the thread title.
I got it all running from the browser (the application is hosted on Heroku), but when I try to connect to the WebSockets from my Ionic 2 application, I always get an error establishing the handshake.
2016-09-17T15:02:03.200133+00:00 app[web.1]: 2016-09-17 15:02:03,199 DEBUG Connection http.response!uvRVDyvolYEG did not get successful WS handshake.
2016-09-17T15:02:03.200498+00:00 app[web.1]: 2016-09-17 15:02:03,200 DEBUG WebSocket closed before handshake established
2016-09-17T15:02:03.169206+00:00 heroku[router]: at=info method=GET path="/1/" host=musicmashup-jukebox.herokuapp.com request_id=c46960d7-bb8f-45bf-b8be-5a934c771d96 fwd="212.243.230.222" dyno=web.1 connect=0ms service=7ms status=400 bytes=74
One idea was that it could be a CORS problem, so I installed django-cors-middleware in the hope that it would solve the problem; it did not.
But I don't think the app adds any headers to the responses served by Daphne at all.
At the moment I have no idea whether the problem is on the client or the server side.
Has anyone experienced similar problems?
EDIT:
I found out that WebSockets and CORS do not have anything to do with each other (see "Why is there no same-origin policy for WebSockets? Why can I connect to ws://localhost?").
So my guess is that the server may reject the Origin header sent by the client. I will see if I can get my hands on the headers being sent.
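One quick way to test that theory from a desktop machine is to force the Origin header on a handshake, for example with the third-party websocket-client package (a sketch; the URL path comes from the router log above, everything else is an assumption):
import websocket  # pip install websocket-client

try:
    ws = websocket.create_connection(
        "wss://musicmashup-jukebox.herokuapp.com/1/",
        origin="file://",  # mimic the Origin header an Ionic WebView would send
    )
    print("handshake accepted")
    ws.close()
except websocket.WebSocketBadStatusException as exc:
    print("handshake rejected:", exc)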
This issue was fixed in Daphne v.1.0.3
https://github.com/django/daphne/commit/07dd777ef11f5091931fbb22bdacb9e4aefea7da
You also need to update channels, and asgi-redis if it is used.
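On an existing deployment that usually just means upgrading the packages, for example (unpinned here; pin whatever versions you actually target):
pip install --upgrade daphne channels asgi-redis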
Alright, the problem was related to the Origin header. Ionic seems to be sending an Origin header containing "file://..", which was getting rejected/blocked by the WebSocket server.
Unfortunately I did not find a way to configure the web server on Heroku to either ignore this or set another Origin header on incoming requests.
My Procfile on Heroku:
web: daphne app.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
worker: python manage.py runworker -v2
What I did then was move the whole application to a self-hosted Ubuntu server and put nginx in front of Daphne, where I created a rule to override the Origin header of incoming requests.
That's how it can be done. I hope this helps some people.
Thank you platzhersch,
it worked for me with the following nginx rule:
proxy_set_header Origin http://$host;
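For anyone reproducing this, the surrounding proxy block looks roughly like the following sketch; the upstream address is an assumption (wherever Daphne is listening), and the last line is the Origin override from above:
location / {
    proxy_pass http://127.0.0.1:8000;        # assumption: Daphne bound here
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;  # needed for the WebSocket handshake
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header Origin http://$host;    # rewrite the file:// origin sent by Ionic
}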
Related
I am running my server on an EC2 instance with gunicorn and nginx. My site and project work, but I get an error connecting to WebSockets. I am using WebSocket in JS to connect to Django Channels. On my local server everything worked correctly, but after deploying my site the WebSockets stopped working, giving me this error:
Error during WebSocket handshake: Unexpected response code: 502.
What I have tried up until now:
I have installed Redis and Daphne, and configured the Daphne settings in my nginx configuration.
My server is running on port 80 and I have set Daphne to port 8001.
When I start the Daphne service it runs for some time and then disconnects/fails automatically, but even while it is running my WebSockets still can't connect.
Any kind of help will be great for me.
I am using Django Channels to connect to the server, but it always shows an error like this:
reconnectwebsockets.js WebSocket connection to 'ws:xxx' failed:
Error during WebSocket handshake: Unexpected response code: 200
Also, I am using Docker; could this be an issue with the Docker container configuration?
Any ideas what could be possibly wrong?
Hello, did you try using the channels library? It will give you the same power as the Django-channels one. Here you can find the necessary documentation for it.
I recommend you use it because it gives you more flexibility than the Django-channels one.
Channels library
Quick start
You can read how to work with it in the Tutorial.
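To give a flavour of what the tutorial builds, a minimal consumer looks roughly like this (a sketch; the module path and the echo behaviour are illustrative, not copied from the tutorial):
# chat/consumers.py -- minimal echo consumer, illustrative only
from channels.generic.websocket import WebsocketConsumer

class EchoConsumer(WebsocketConsumer):
    def connect(self):
        self.accept()

    def receive(self, text_data=None, bytes_data=None):
        # send back whatever the client sent
        self.send(text_data=text_data)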
Errors and solutions
Unexpected response code: 200 (or other code XXX) (solved):
Be sure you include your application and channels in INSTALLED_APPS (mysite.settings) and use an ASGI application:
INSTALLED_APPS = [
    'channels',
    'chat',
    ...
]
...
ASGI_APPLICATION = "mysite.asgi.application"
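For reference, the mysite/asgi.py that this setting points at typically looks something like the following sketch (the chat.routing module with websocket_urlpatterns is an assumption taken from the tutorial layout):
# mysite/asgi.py -- sketch
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django_asgi_app = get_asgi_application()  # load Django before importing consumers

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
import chat.routing

application = ProtocolTypeRouter({
    "http": django_asgi_app,
    "websocket": AuthMiddlewareStack(
        URLRouter(chat.routing.websocket_urlpatterns)
    ),
})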
Be sure you use channel layers (mysite.settings).
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels.layers.InMemoryChannelLayer',
    },
}
According to the documentation you should use a Redis-backed channel layer (channels_redis) for production, but for a local environment you may use channels.layers.InMemoryChannelLayer.
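A production-style configuration, assuming the channels_redis package is installed and Redis is reachable locally, looks like this:
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'channels_redis.core.RedisChannelLayer',
        'CONFIG': {
            'hosts': [('127.0.0.1', 6379)],
        },
    },
}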
Be sure you run an ASGI server (not WSGI) because you need asynchronous behaviour. Also, for deployment you should use daphne instead of gunicorn; daphne is pulled in by the channels library by default, so you don't need to install it manually.
A basic run command will look like this (terminal):
daphne -b 0.0.0.0 -p $PORT mysite.asgi:application
where $PORT is the port to listen on (for example 5000). That format is used for a Heroku application; you can set the port manually instead.
Error in connection establishment: net::ERR_SSL_PROTOCOL_ERROR and similar errors when using an HTTPS connection (solved):
Difference between ws and wss?
If your site is served over HTTPS, you need to use the wss protocol:
Replace ws://... with wss://..., or use the following template in your HTML (chat/templates/chat/room.html):
(window.location.protocol === 'https:' ? 'wss' : 'ws') + '://'
Hope this answer is useful for channels with Django.
I am working on a flask-socketio server which is getting stuck in a state where only 504s (gateway timeout) are returned. We are using AWS ELB in front of the server. I was wondering if anyone wouldn't mind giving some tips as to how to debug this issue.
Other symptoms:
This problem does not occur consistently, but once it begins happening, only 504s are received from requests. Restarting the process seems to fix the issue.
When I run netstat -nt on the server, I see many entries with Recv-Q values of over 100 stuck in the CLOSE_WAIT state
When I run strace on the process, I only see select and clock_gettime
When I run tcpdump on the server, I can see the valid requests coming into the server
AWS health checks are coming back successfully
EDIT:
I should also add two things:
flask-socketio's built-in server is used in production (not gunicorn or uWSGI)
Python's daemonize function is used for daemonizing the app
It seems that switching to gunicorn as the WSGI server fixed the problem. This might legitimately be an issue with the flask-socketio built-in WSGI server.
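For anyone landing here: gunicorn needs a single asynchronous worker to serve flask-socketio, so the invocation is usually something along these lines (module:app is a placeholder for your actual entry point, and eventlet must be installed):
gunicorn --worker-class eventlet -w 1 module:app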
I'm having trouble getting an ALB -> uWSGI container setup working in AWS. I want to leave nginx out of the stack if possible.
Assume security groups aren't an issue - I have confirmed ELB can reach the containers on the dynamically-allocated host ports.
From the uWSGI docs, --http is the way to go to make this work, but I must be missing something. Relevant ini:
[uwsgi]
socket = /tmp/uwsgi.sock
http-to = /tmp/uwsgi.sock
http = 0.0.0.0:8000
Is this correct? How should I configure uWSGI to receive http traffic from ALB?
Figured it out. It's actually http-socket that I needed. uWSGI was indeed receiving traffic, but I was seeing this strange issue where the subdomain was being stripped off and the resulting site was getting a 404.
For example, http://www.example.com was being immediately redirected to http://example.com and failing.
This was happening because of Django, not uWSGI. Our subdomain.middleware was configured in such a way that a wildcard subdomain caused it to bail, and in doing so it chopped off that subdomain and redirected to http://example.com.
This was specific to our app, not uWSGI + Django, but I thought I'd leave it here in case it moves someone in the right direction.
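For completeness, the ini that ends up working behind an ALB boils down to roughly this (a sketch based on the http-socket change mentioned above; the port is whatever the target group points at):
[uwsgi]
; speak plain HTTP directly on the container port, no separate --http router process
http-socket = 0.0.0.0:8000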
I have a Django site that uses Gunicorn and Nginx. Occasionally, I'll have a problem that I need to debug. In the past, I would shut down Gunicorn and Nginx, go to my Django project directory and start the Django development server ("python ./manage.py runserver 0:8000"), and then restart Nginx. I could then insert set_trace() commands and do my debugging. When I fixed the problem I'd shut down Nginx and then restart Gunicorn and Nginx. I'm pretty sure this was working.
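For context, the breakpoints referred to here are ordinary pdb calls dropped into a view, roughly like this sketch (the view name is illustrative):
import pdb

def my_view(request):
    pdb.set_trace()  # runserver pauses here and drops into the debugger
    ...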
Recently, though, I've begun having problems. What happens now is that when I've stopped at a breakpoint, after a couple of minutes the web page that I've stopped on will change and display "404 Not Found" and if I take another step in the debugger, I'll see this error:
- Broken pipe from ('127.0.0.1', 43742)
This happens on my development, staging, and production servers which I'm accessing via their domain names, e.g. "web01.example.com" (not really example).
What is the correct way to debug my Django application on my remote servers?
Thanks.
I figured out the problem. First I observed that when I stopped at a breakpoint, the page always timed out after exactly one minute which suggested that the Nginx connection to the web server was timing out if the web server took more than 60 seconds to respond. I then found an Nginx proxy_read_timeout directive which defines this timeout. Then it was merely a matter of changing the length of the timeout in my Nginx config file:
# /etc/nginx/sites-enabled/example.conf
http {
    server {
        ...
        location @django {
            ...
            # Set timeout to 1 hour
            proxy_read_timeout 3600s;
            ...
        }
        ...
    }
}
Once you've made this change you need to reload Nginx, not restart it, in order for the change to take effect. Then you start Django as I indicated above, and you can now debug your Django application without it timing out. Just be sure to remove the timeout setting when you're done debugging, reload Nginx again, and restart Gunicorn.
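On a systemd-based host a reload is typically just:
sudo systemctl reload nginx
(or sudo nginx -s reload where systemd isn't used); both re-read the configuration without dropping existing connections.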