My project is built on Django, with Gunicorn as the WSGI server and Daphne as the ASGI server. The ASGI server is needed only for handling the WebSocket protocol; I use Django Channels for WebSocket routing and handling. Nginx serves static files and proxies requests. The database is MySQL.
Generally: is there a way to synchronize variable values in memory between the ASGI and WSGI apps without writing to the database?
TL;DR:
HTTP (WSGI) handles most of the interaction with the database (for now, creating model instances).
WebSocket (ASGI) is planned to handle user controls: for now, connecting to rooms; in the future, in-game controls such as rotating a piece. The project is a multiplayer Tetris where users can create rooms for, say, 2 or 4 players (parallel Tetris fields); once a room is created, other players can connect to it.
Under the hood there is an 'engine' (some data is stored in memory while the server runs):
# engine/status.py
active_rooms = {}
When creating a new room, the HTTP controller (in views.py) calls this function:
import engine.status as status
from engine.Room import Room

def create_room(id, size):
    new_room = Room(size)
    ...
    status.active_rooms[id] = new_room
    ...
So it writes a new key-value pair into the dict (status.active_rooms), where the key is a number (the id) and the value is an instance of the Room class.
When another player clicks, e.g., the 'connect' button of a room, JavaScript on the client sends a special message over the WebSocket protocol.
The WebSocket handler calls this function:
def make_connect(data):
    id = data['room_id']
    ...
    if int(id) not in status.active_rooms:
        msg = 'No room'
        return {'type': 'info', 'msg': msg}
    else:
        msg = 'Room exists'
        ...
So it checks whether a room with this id exists in memory.
The problem is:
The dict is always empty at the time of the check! It seems the ASGI and WSGI apps each have their own instance of the 'engine'.
This means the client cannot see the actual state on the server.
I tried dumping the rooms into the database, but the class has some fields that cannot be pickled.
My current idea is to move room creation into the ASGI app (over WebSocket instead of HTTP).
Am I missing something? Is there another way to share data between the ASGI and WSGI apps?
Just for information: I managed to send a WebSocket request in parallel with the HTTP request.
The WSGI application writes to the DB, while the ASGI application creates objects in memory whose keys can later be used to load that data from the DB.
On subsequent WS requests, the ASGI app reads the keys from these in-memory objects and calls a function that loads the data from the DB. Overall, the ASGI and WSGI apps do not share an environment, but by using unique keys that were identical in the initial parallel HTTP and WS requests, the ASGI app can access the data that the WSGI app received.
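Since Gunicorn and Daphne run as separate processes, a module-level dict can never be shared between them; one commonly suggested alternative is a process-external store. A minimal sketch using Django's cache framework, assuming a cross-process backend such as Redis or Memcached is configured in CACHES, and assuming only picklable room metadata needs to be shared (the Room instances themselves cannot be pickled):
from django.core.cache import cache

def register_room(room_id, size):
    # Store only plain, picklable metadata; the full Room instance can
    # stay in the local process that runs the game engine.
    cache.set('room:%s' % room_id, {'size': size, 'players': 0}, timeout=None)

def room_exists(room_id):
    # Works from both the WSGI and the ASGI process, because the cache
    # backend lives outside either of them.
    return cache.get('room:%s' % room_id) is not None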
Related
I have a Flask app that uses HTTP and WebSocket (Flask-SocketIO) communication between client and server, running on gevent. I also use server-side sessions via the Flask-Session extension. I run a background task with SocketIO.start_background_task, and from this task I need to access session information, which will be used to emit a message via SocketIO. When accessing the session from the task I get the error "RuntimeError: Working outside of request context. This typically means that you attempted to use functionality that needed an active HTTP request."
The SocketIO instance is created as below:
socket_io = SocketIO(app, async_mode='gevent', manage_session=False)
Is there any issue with this usage? How could this be addressed?
Thanks
This is not related to Flask-SocketIO, but to Flask itself. The background task that you started does not have request and application contexts, so it does not have access to the session (it doesn't even know who the client is).
Flask provides the copy_current_request_context decorator to duplicate the request/app contexts from the request handler into a background task.
The example from the Flask documentation uses gevent.spawn() to start the task, but this would be the same for a task started with start_background_task().
import gevent
from flask import copy_current_request_context

@app.route('/')
def index():
    @copy_current_request_context
    def do_some_work():
        # do some work here; it can access flask.request or
        # flask.session as you would in the view function
        ...

    gevent.spawn(do_some_work)
    return 'Regular response'
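The same pattern should carry over to start_background_task(); a sketch, assuming the socket_io instance from the question and a hypothetical 'work' event:
@socket_io.on('work')
def handle_work():
    @copy_current_request_context
    def do_some_work():
        # The copied request context makes flask.session available here.
        socket_io.emit('done', {'status': 'ok'})

    socket_io.start_background_task(do_some_work)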
I have a serverless database cluster that spins down when it's out of use. Cold starts take a minute or so, during which Django just waits and waits before sending the HTTP response back.
Before I code something complicated, I'm looking for recommendations for an existing middleware/signal integration that would let me render an interim view while Django attempts to get the database connection (for instance, if the database query takes longer than a second to complete, render this view instead).
You could write a custom middleware that tests for DB connectivity on every request. Bear in mind that this will attempt to create a new DB connection on every request:
from django.db import connection
from django.db.utils import OperationalError
from django.shortcuts import render

def db_is_up_middleware(get_response):
    def middleware(request):
        try:
            connection.cursor()
        except OperationalError:
            # DB unreachable (e.g. still spinning up): show a holding page
            return render(request, 'your_template.html')
        else:
            return get_response(request)
    return middleware
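To activate it, register the middleware in your settings; the dotted path below is an assumption about where the function lives:
# settings.py
MIDDLEWARE = [
    'yourapp.middleware.db_is_up_middleware',
    # ...the rest of your middleware
]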
Partial solution:
I reduced the RESTful gateway's query timeout to 3 seconds. At the end of the timeout, I return a 504 error with a friendly message telling the user that the server has gone to sleep but will be booting back up shortly. Inside the text/html response of the 504 error, I included a refresh meta tag, so browsers will automatically retry loading the view (see the sketch after this list).
I took all database calls off the public-facing site and put them behind an authentication layer. This means that authenticating is the step most likely to time out, and users will (I hope) naturally try to reauthenticate after receiving the 504 error above.
I added an AJAX jQuery call in document.ready() that pulls a random database record. This query times out and nothing happens (as intended); it just forces the database server to begin booting up as soon as a user hits a public-facing page.
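A minimal sketch of such a 504 response in Django (the markup and the 10-second interval are assumptions, not the exact code used):
from django.http import HttpResponse

WAKING_UP_HTML = (
    '<html><head><meta http-equiv="refresh" content="10"></head>'
    '<body>The server went to sleep and is booting back up; '
    'this page will retry automatically.</body></html>'
)

def db_waking_up_response():
    # 504 Gateway Timeout plus a meta refresh, so browsers retry on their own.
    return HttpResponse(WAKING_UP_HTML, status=504, content_type='text/html')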
I am writing tests for my Django application's views, and I am a beginner at this. I know that before running tests a new database is generated that only contains data created while the tests run, but in my view tests I am making API calls by URL to my server, which uses my default database rather than the test database, in the following way:
def test_decline_activity_valid_permission(self):
    url = 'http://myapp:8002/api/v1/profile/' + self.profileUUID + \
          '/document/' + self.docUUID + '/decline/'
    response = requests.post(
        url,
        data=json.dumps(self.payload_valid_permission),
        headers=self.headers,
    )
    self.assertEquals(response.status_code, status.HTTP_201_CREATED)
I want to know whether we can use the test database for testing our views or not. And what is the difference between using requests and using the test Client?
You could try using Django's LiveServerTestCase. That works like TransactionTestCase but will start up a server on localhost pointing at the test database. It gets started/stopped at the beginning/end of each test.
You could then configure the URL in your test to point at that local server. Django provides self.live_server_url for accessing the URL of the server.
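A sketch of the test rewritten that way, assuming the same fixtures (profileUUID, docUUID, payload, headers) are created in setUp():
import json

import requests
from django.test import LiveServerTestCase

class DeclineActivityTests(LiveServerTestCase):
    def test_decline_activity_valid_permission(self):
        # live_server_url points at the transient test server, which is
        # backed by the test database.
        url = (self.live_server_url + '/api/v1/profile/' + self.profileUUID +
               '/document/' + self.docUUID + '/decline/')
        response = requests.post(
            url,
            data=json.dumps(self.payload_valid_permission),
            headers=self.headers,
        )
        self.assertEqual(response.status_code, 201)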
As mentioned in the comments, Django's test client allows you to test views without making real HTTP requests, whereas the requests library that you're using in your test will send and receive real HTTP requests and responses.
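For comparison, a sketch of the same call through the test client, which runs entirely in process (the URL path is taken from the question):
import json

from django.test import TestCase

class DeclineActivityClientTests(TestCase):
    def test_decline_activity_valid_permission(self):
        url = ('/api/v1/profile/' + self.profileUUID +
               '/document/' + self.docUUID + '/decline/')
        # No real socket is opened; the client invokes the view directly,
        # and the test database is used automatically.
        response = self.client.post(
            url,
            data=json.dumps(self.payload_valid_permission),
            content_type='application/json',
        )
        self.assertEqual(response.status_code, 201)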
Is it possible to get the HTTP request as a bytestring, exactly as it was transferred over the wire, if you have a Django request object?
Of course in plain text (not encrypted, if HTTPS is used).
I would like to store the bytestring to analyze it later.
Ideally I would like to access the real bytestring; creating one from request.META, request.GET and friends will likely not be identical to the original.
Update: it seems it is impossible to get at the original bytes. Then the question becomes: how do I construct a bytestring that roughly resembles the original?
As others pointed out, it is not possible, because Django doesn't interact with raw requests.
You could try reconstructing the request like this:
def reconstruct_request(request):
    headers = ''
    for header, value in request.META.items():
        if not header.startswith('HTTP_'):
            continue
        # e.g. HTTP_ACCEPT_ENCODING -> Accept-Encoding
        header = '-'.join(h.capitalize() for h in header[5:].lower().split('_'))
        headers += '{}: {}\n'.format(header, value)
    return (
        '{method} {path} HTTP/1.1\n'
        'Content-Length: {content_length}\n'
        'Content-Type: {content_type}\n'
        '{headers}\n'
        '{body}'
    ).format(
        method=request.method,
        path=request.get_full_path(),
        content_length=request.META.get('CONTENT_LENGTH', ''),
        content_type=request.META.get('CONTENT_TYPE', ''),
        headers=headers,
        body=request.body.decode('utf-8', errors='replace'),
    )
Note: this is not a complete example, only a proof of concept.
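As a usage idea (an assumption, not part of the original answer), the function could be wired into a small middleware that logs every request:
def log_raw_request_middleware(get_response):
    def middleware(request):
        # Reading request.body here caches it, so the view can still
        # access it afterwards (streaming uploads excepted).
        print(reconstruct_request(request))
        return get_response(request)
    return middleware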
The basic answer is no: Django doesn't have access to the raw request; in fact, it doesn't even have code to parse a raw HTTP request.
This is because Django's HTTP request/response handling (like that of many other Python web frameworks) is, at its core, a WSGI application (see the WSGI specification).
It's the job of the frontend/proxy server (like Apache or nginx) and the application server (like uWSGI or Gunicorn) to "massage" the request (e.g. transforming and stripping headers) and convert it into an object that can be handled by Django.
As an experiment you can actually wrap Django's WSGI application yourself and see what Django gets to work with when a request comes in.
Edit your project's wsgi.py and add some extremely basic WSGI "middleware":
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

class MyMiddleware:
    def __init__(self, app):
        self._app = app

    def __call__(self, environ, start_response):
        import pdb; pdb.set_trace()
        return self._app(environ, start_response)

# Wrap Django's WSGI application
application = MyMiddleware(get_wsgi_application())
Now if you start your dev server (./manage.py runserver) and send a request to your Django application, you'll drop into the debugger.
The only thing of interest here is the environ dict. Poke around in it and you'll see that it's pretty much the same as what you'll find in Django's request.META. (The contents of the environ dict are detailed in this section of the WSGI spec.)
Knowing this, the best you can get is piecing together items from the environ dict into something that remotely resembles an HTTP request.
But why? If you have an environ dict, you have all the information you need to replicate a Django request. There's no actual need to translate this back into an HTTP request.
In fact, as you now know, you don't need an HTTP request at all to call Django's WSGI application. All you need is an environ dict with the required keys and a callable so that Django can relay the response.
So, to analyze requests (and even be able to replay them) you only need to be able to recreate a valid environ dict.
To do so in Django, the easiest option would be to serialize request.META and request.body into a JSON dict.
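A minimal sketch of both halves, serialize and replay; it assumes only string-valued META entries are kept (objects like the wsgi.input stream aren't JSON-serializable) and that your settings and middleware don't require more environ keys than survive the round trip:
import base64
import io
import json

def serialize_request(request):
    # Keep only string values; request.META also holds objects such as
    # the wsgi.input stream, which cannot go into JSON.
    meta = {k: v for k, v in request.META.items() if isinstance(v, str)}
    body = base64.b64encode(request.body).decode('ascii')
    return json.dumps({'meta': meta, 'body': body})

def replay_request(serialized, application):
    # `application` is your WSGI app, e.g. the one from project/wsgi.py.
    data = json.loads(serialized)
    environ = dict(data['meta'])
    environ['wsgi.input'] = io.BytesIO(base64.b64decode(data['body']))

    def start_response(status, headers):
        print(status, headers)

    return b''.join(application(environ, start_response))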
If you really need something that resembles an HTTP request (and you are unable to go a level up, e.g. to the web server, to log this information), you'll just have to piece it together from the information available in request.META and request.body, with the caveat that this will not be a faithful representation of the original HTTP request.
I have a Django server that communicates with a NodeJS server on another address (REMOTE_SOCKET_ADDRESS).
In Django, I have a line of code that goes like this:
requests.post(settings.REMOTE_SOCKET_ADDRESS, params=query_params)
I would like my Django server not to wait for the response from the NodeJS server before proceeding. Just send the POST and move on, so that even if NodeJS needs 10 minutes to do whatever it is doing, it won't hold up the Django server.
How can I achieve this "fire and forget" behavior?
Additional info: I am on a shared hosting, so I cannot use workers.
I achieved this by using the requests-futures package: https://github.com/ross/requests-futures
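A minimal sketch of that approach; the settings name and params mirror the question, the rest is assumed:
from django.conf import settings
from requests_futures.sessions import FuturesSession

session = FuturesSession()

def notify_node(query_params):
    # post() returns a Future immediately; since .result() is never
    # called, the view does not block on the NodeJS server's response.
    session.post(settings.REMOTE_SOCKET_ADDRESS, params=query_params)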