Does django-sorcery support connection pooling? - django

I'm playing around with django-sorcery, and so far it looks to me like I'm only getting one persistent connection into the database.
Does django-sorcery support connection pooling? If so, how do I control the number of open connections available in the pool?

The design itself is similar to Flask-SQLAlchemy, except that django_sorcery.db.sqlalchemy.SQLAlchemy is itself a scoped session, which by default uses a thread-local session scope.
If you're using django_sorcery.db.middleware.SQLAlchemyMiddleware, you're all set for session-per-request: you'll get one session per request, and the middleware will remove it at the end of the request.
As for the connection pool, it's managed by SQLAlchemy itself. You can override the pool configuration via the connection URL query string, or via ALCHEMY_OPTIONS["engine_options"] in the database config, which passes those options through to create_engine.
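As a sketch of the latter, assuming django-sorcery reads ALCHEMY_OPTIONS from the Django DATABASES entry (the "default" alias, dialect, and values below are illustrative), pool_size, max_overflow, and pool_timeout are standard create_engine pool arguments:

```python
# settings.py -- alias, dialect, and numbers are illustrative
DATABASES = {
    "default": {
        "DIALECT": "postgresql",
        "NAME": "mydb",
        "ALCHEMY_OPTIONS": {
            # forwarded to sqlalchemy.create_engine()
            "engine_options": {
                "pool_size": 10,     # persistent connections kept open
                "max_overflow": 5,   # extra connections allowed under load
                "pool_timeout": 30,  # seconds to wait for a free connection
            },
        },
    },
}
```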

Related

Flask-SocketIO access session from background task

I have a Flask app that uses HTTP and WebSocket (Flask-SocketIO) communication between client and server, running on gevent. I also use a server-side session with the Flask-Session extension. I run a background task using SocketIO.start_background_task, and from this task I need to access session information, which will be used to emit a message via socketio. When accessing the session from the task I get the error "RuntimeError: Working outside of request context. This typically means that you attempted to use functionality that needed an active HTTP request."
Socket IO instance is created as below-
socket_io = SocketIO(app, async_mode='gevent', manage_session=False)
Is there any issue with this usage? How can it be addressed?
Thanks
This is not related to Flask-SocketIO, but to Flask. The background task that you started does not have request and application contexts, so it does not have access to the session (it doesn't even know who the client is).
Flask provides the copy_current_request_context decorator to duplicate the request/app contexts from the request handler into a background task.
The example from the Flask documentation uses gevent.spawn() to start the task, but this would be the same for a task started with start_background_task().
import gevent
from flask import copy_current_request_context

@app.route('/')
def index():
    @copy_current_request_context
    def do_some_work():
        # do some work here; it can access flask.request or
        # flask.session like you would in the view function
        ...

    gevent.spawn(do_some_work)
    return 'Regular response'

Django - How to share data between ASGI and WSGI applications?

My project is built on Django, with Gunicorn serving WSGI and Daphne serving ASGI. The ASGI server is needed only for handling the WebSocket protocol; I use Django Channels for WebSocket routing and handling. Nginx serves static files and proxies requests, and the database is MySQL.
Generally: is there a way to synchronise variable values in memory between the ASGI and WSGI apps without writing to the database?
TLDR:
HTTP (WSGI) handles most of the interaction with the database (for now, creating model instances).
Websocket (ASGI) is planned to handle user controls (for now, connecting to rooms; in future, in-game controls such as rotating a piece. The project is multiplayer Tetris, where users can create rooms for, say, 2 or 4 players (parallel Tetris fields); once a room is created, other players can connect to it.)
'Under the hood' there is an 'engine' (some data is stored in memory while the server runs):
# engine/status.py
active_rooms = {}
When creating a new room, the HTTP controller (in views.py) calls this function:
import engine.status as status
from engine.Room import Room

def create_room(id, size):
    new_room = Room(size)
    ...
    status.active_rooms[id] = new_room
    ...
So, it writes a new key-value pair into the dict (status.active_rooms), where the key is a number (id) and the value is an instance of the class Room.
When another player clicks e.g. the 'connect' button in a room, JavaScript on the client sends a special message over the WebSocket protocol.
The WebSocket handler calls this function:
def make_connect(data):
    id = data['room_id']
    ...
    if int(id) not in status.active_rooms:
        msg = 'No room'
        return {'type': 'info', 'msg': msg}
    else:
        msg = 'Room exists'
        ...
so it checks whether a room with this id exists in memory.
The problem is:
The dict is always empty at check time! It seems the ASGI and WSGI apps each have their own instance of the 'engine'.
This means the client cannot see the actual status on the server.
I tried dumping the state into the database, but the class has some fields which cannot be pickled.
My idea now is to handle 'creating rooms' in the ASGI app (over WebSocket instead of HTTP).
Maybe I am missing something? Is there some other way to share data between ASGI and WSGI apps?
Just for information: I managed to make a parallel WS request at the same time as the HTTP request.
The WSGI application writes to the DB; the ASGI application creates objects in memory with specific keys which can be used to access the data in the DB.
On subsequent WS requests, the ASGI app reads the keys from these in-memory objects and calls a function which loads the data from the DB. Overall, ASGI and WSGI do not share an environment, but by using unique keys which were the same in the initial parallel HTTP and WS requests, the ASGI app can access data which was received by the WSGI app.
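The key-based sharing pattern can be sketched with the standard library: only the primitive (serialisable) fields of a room are written to a store reachable by both processes, keyed by room id, so either process can rehydrate the state. The helpers and schema below are illustrative; sqlite is used only to keep the sketch self-contained (any shared store such as the project database or Redis would serve).

```python
import json
import os
import sqlite3
import tempfile

# Any store reachable by both the WSGI and ASGI processes works here;
# a temp-dir sqlite file keeps the sketch self-contained.
DB_PATH = os.path.join(tempfile.mkdtemp(), "shared_state.db")

def _conn():
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS rooms (id INTEGER PRIMARY KEY, data TEXT)"
    )
    return conn

def save_room(room_id, size, players=()):
    # WSGI side: persist only the primitive fields of the Room
    with _conn() as conn:
        conn.execute(
            "INSERT OR REPLACE INTO rooms (id, data) VALUES (?, ?)",
            (room_id, json.dumps({"size": size, "players": list(players)})),
        )

def load_room(room_id):
    # ASGI side: rehydrate the room state by its unique key
    row = _conn().execute(
        "SELECT data FROM rooms WHERE id = ?", (room_id,)
    ).fetchone()
    return json.loads(row[0]) if row else None
```

The trade-off is that in-memory attributes that cannot be serialised must be rebuilt on load, which is exactly why the answer above stores only keys plus DB-backed data.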

Django - Render HTTP before database connection

I have a serverless database cluster that spins down when it's out of use. Cold starts take a minute or so, during which Django just waits and waits before sending back an HTTP response.
Before I code something complicated, I'm looking for recommendations for an existing middleware/signal integration that would let me render an interim view while Django attempts to get the database connection (for instance, if the database query takes longer than a second to complete, render this view instead).
You could create a custom middleware that tests for DB connectivity on every request. Bear in mind that this will attempt a new DB connection on every request:
from django.db import connection
from django.db.utils import OperationalError
from django.shortcuts import render

def db_is_up_middleware(get_response):
    def middleware(request):
        try:
            # raises OperationalError if the DB is unreachable
            connection.ensure_connection()
        except OperationalError:
            return render(request, 'your_template.html', status=503)
        else:
            return get_response(request)
    return middleware
Partial solution:
I reduced the RESTful gateway's query timeout to 3 seconds. At the end of the timeout, I return a 504 error with a friendly message telling the user that the server has gone to sleep but will be booting back up shortly. Inside the text/html response of the 504 error, I included a refresh meta tag, so browsers automatically try to reload the view.
I took all database calls off the public-facing site and put them behind an authentication layer. This means that authenticating is the step most likely to time out, and users will (I hope) naturally try to reauthenticate after receiving the 504 error above.
I added an AJAX jQuery call in document.ready() that pulls a random database record. The query times out and nothing happens (as intended), but it forces the database server to begin booting up as soon as a user hits a public-facing page.
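The 504-with-meta-refresh body described above can be sketched as a plain function; the wording and retry interval are illustrative, and in Django this string would be wrapped in an HttpResponse with status=504:

```python
def wake_up_response_body(retry_seconds=10):
    # The meta refresh tag makes the browser retry automatically once
    # the database has had time to spin up; copy is illustrative.
    return (
        "<!DOCTYPE html>"
        "<html><head>"
        f'<meta http-equiv="refresh" content="{retry_seconds}">'
        "</head><body>"
        "<p>The server has gone to sleep but is booting back up shortly."
        " This page will retry automatically.</p>"
        "</body></html>"
    )
```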

Torquebox stomplet session empty

I'm trying to implement user authentication for web sockets in Torquebox, and according to everything on the internet, I should be able to access the HTTP session from within a stomplet if I'm running the web app along side the stomp server, which I am.
My configuration looks something like this
web do
  context '/'
  host 'localhost'
end

stomp do
  host 'localhost'
end

stomplet GlobalStomplet do
  route '/live/socket'
end
I've tried also commenting out the web and stomp blocks but nothing changes.
Basically, the sockets are working: I can connect and subscribe. In my stomplet, the on_subscribe method has a few debug lines:
Rails.logger.debug "SESSION = #{subscriber.session}"
Rails.logger.debug "SESSION 2 = #{subscriber.getSession.getAttributeNames}"
Rails.logger.debug "SOCKET SESSION = #{TorqueBox::Session::ServletStore.load_session_data(subscriber.getSession)}"
I've tried every other combination of these sorts of calls, but in every case I am given an empty session. The only exception is when I explicitly load the session (as in the last debug line above): then the session contains a session ID and something like TORQUEBOX_INITIAL_KEYS, but the session ID is not the HTTP session ID; it is simply something like session-1, and nothing useful.
I have an initializer in the Rails app setting up the TorqueBox session store:
MyApp::Application.config.session_store :torquebox_store, {
  key: '_app_key'
}
I don't receive any exceptions from anything so I assume there are no obvious problems, but I've tried everything I can think of and still don't have a session that I can use.
What am I doing wrong?
I'm using TorqueBox 3.1.0, Rails 4, and JRuby 1.7.11.
Well, it seems I wasn't doing anything wrong per se, but there seems to be an underlying bug in TorqueBox (filing a bug report now).
TorqueBox web apps are quite happy with a custom key for the session store, and everything works as expected. Unfortunately, the stomplets appear to look only for the default JSESSIONID and ignore the custom-defined key.
To confirm: I removed the custom key, and it worked. I then reintroduced it, and it stopped working again. With the custom key still in place, I manually set the JSESSIONID cookie value, reconnected, and suddenly my session appeared.

Restrict access to a Django view, only from the server itself (localhost)

I want to create a localhost-only API in Django, so I'm trying to find a way to restrict access to a view to requests from the server itself (localhost). I've tried using:
'HTTP_HOST',
'HTTP_X_FORWARDED_FOR',
'REMOTE_ADDR',
'SERVER_ADDR'
but with no luck.
Is there any other way?
You could configure your webserver (Apache, Nginx etc) to bind only to localhost.
This works well if you want to restrict access to all views, but if you want to allow remote access to some views, you'd have to configure a second Django project.
The problem is a bit more complex than just checking a variable. To identify the client IP address, you'll need
request.META['REMOTE_ADDR'] -- The IP address of the client.
and then compare it with request.get_host(). But bear in mind that the server might be bound to 0.0.0.0:80, in which case you'll probably need to do:
import socket
socket.gethostbyaddr(request.META['REMOTE_ADDR'])
and to compare this with let's say
socket.gethostbyaddr("127.0.0.1")
But you'll need to process lots of edge-cases with these headers and values.
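A minimal sketch of the REMOTE_ADDR check as a view decorator follows. The decorator name and FakeRequest class are hypothetical, and the plain "403 Forbidden" string stands in for Django's HttpResponseForbidden so the sketch stays self-contained:

```python
from functools import wraps

LOCAL_ADDRS = {"127.0.0.1", "::1"}  # IPv4 and IPv6 loopback

def localhost_only(view):
    # Reject any request whose REMOTE_ADDR is not a loopback address.
    # In Django you would return HttpResponseForbidden instead of a string.
    @wraps(view)
    def wrapper(request, *args, **kwargs):
        if request.META.get("REMOTE_ADDR") not in LOCAL_ADDRS:
            return "403 Forbidden"
        return view(request, *args, **kwargs)
    return wrapper

class FakeRequest:
    # stand-in for django.http.HttpRequest, just enough for the check
    def __init__(self, addr):
        self.META = {"REMOTE_ADDR": addr}

@localhost_only
def my_view(request):
    return "secret data"
```

Note that this trusts REMOTE_ADDR, which is only safe when no proxy in front of the app rewrites it; that caveat is exactly the edge-case problem described above.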
A much simpler approach could be to put a reverse proxy in front of your app that sets, say, a custom header like X_SOURCE=internet. You then route internet traffic through the proxy, while local traffic (on your local network) goes directly to the web server. If you want a specific view to be accessible only from your local network, just check for this header:
if 'X_SOURCE' in request.META:
    # request is coming from the internet, not the local network
    ...
else:
    # presumably we have a local request
    ...
But again, this is the 'firewall approach': it requires some more setup, and you must make sure there is no way to reach the app from outside that bypasses the reverse proxy.