I have a Flask app that uses both HTTP and WebSocket (Flask-SocketIO) communication between client and server, running on gevent. I also use server-side sessions via the Flask-Session extension. I run a background task using SocketIO.start_background_task, and from this task I need to access session information that will be used to emit a message via SocketIO. When I access the session from the task I get this error: "RuntimeError: Working outside of request context. This typically means that you attempted to use functionality that needed an active HTTP request."
The SocketIO instance is created as below:
socket_io = SocketIO(app, async_mode='gevent', manage_session=False)
Is there any issue with this usage? How could this issue be addressed?
Thanks
This is not related to Flask-SocketIO, but to Flask. The background task that you started does not have request and application contexts, so it does not have access to the session (it doesn't even know who the client is).
Flask provides the copy_current_request_context decorator to duplicate the request/app contexts from the request handler into a background task.
The example from the Flask documentation uses gevent.spawn() to start the task, but this would be the same for a task started with start_background_task().
import gevent
from flask import copy_current_request_context

@app.route('/')
def index():
    @copy_current_request_context
    def do_some_work():
        # do some work here; it can access flask.request or
        # flask.session like you would otherwise in the view function
        ...

    gevent.spawn(do_some_work)
    return 'Regular response'
Related
I have an endpoint I'd like to make available on both HTTP (for the API) and on the websocket.
For instance, adding a new message could be done via a Socket "send" event, that will be handled on the server to process the request (checks the rights, create the necessary elements, etc).
These same actions could be possible by doing a POST request to /api/messages/ and would behave the same.
Since it's the same purpose and result, is there an efficient way to make the two work the same using Flask and Flask-SocketIO?
Thank you in advance.
The Socket.IO events don't have a request and a response like the HTTP methods have, so the inputs and outputs are different, making it impossible to use the same function for both.
But you could abstract the actual logic of your actions into auxiliary functions that you call from your HTTP route and your Socket.IO event handlers, that actually makes perfect sense if you want to offer your API over HTTP and Socket.IO both.
I'm not entirely sure if what I did was the same as what you're trying to do but here's what I did (trying to mock a connection to Graphite) which you could modify for your case:
from flask import Flask, jsonify
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

# You probably need to define more handlers here (for connect, disconnect, etc.)

@socketio.on('my meaningful name', namespace='/endpoint')
def endpoint_socket():
    emit('my meaningful name', [1, 2, 3])

@app.route("/endpoint/", methods=["GET", "POST"])
def endpoint_http():
    return jsonify([1, 2, 3])

socketio.run(app, host="127.0.0.1", port=8000, debug=True)
My project is built on Django, with Gunicorn as the WSGI server and Daphne as the ASGI server. The ASGI server is needed only for handling the WebSocket protocol; I use Channels in Django for WebSocket routing and handling. Nginx serves static files and proxies; the database is MySQL.
Generally: is there a way to synchronise variable values in memory between the ASGI and WSGI apps without writing to the database?
TLDR:
HTTP (WSGI) handles most of the interaction with the database (for now, creating model instances).
WebSocket (ASGI) is planned to handle user controls (for now, connecting to rooms; in future, in-game controls such as rotating a piece, etc. The project is a Tetris multiplayer game, where users can create rooms, for example for 2 or 4 players with parallel Tetris fields; once a room is created, other players can connect to it.)
'Under the hood' there is an 'engine' (some data is stored in memory while the server runs):
# engine/status.py
active_rooms = {}
When creating a new room, the HTTP controller (in views.py) calls this function:
import engine.status as status
from engine.Room import Room
def create_room(id, size):
new_room = Room(size)
...
status.active_rooms[id] = new_room
...
So it writes a new key-value pair into the dict (status.active_rooms), where the key is a number (id) and the value is an instance of the class 'Room'.
When another player clicks e.g. the 'connect' button in a room, the client-side JavaScript sends a special message over the WebSocket protocol.
Websocket handler calls function:
def make_connect(data):
id = data['room_id']
...
if int(id) not in status.active_rooms:
msg = 'No room'
return {'type': 'info', 'msg': msg}
else:
msg = 'Room exists'
...
so it checks whether a room with this id exists in memory.
The problem is:
The dict is always empty at the time of the check! It seems the ASGI and WSGI apps each have their own instance of the 'engine'.
This means the client cannot see the actual status on the server.
I tried making dumps into the database, but the class has some specific fields which cannot be pickled.
My idea now is to do the 'creating rooms' step in the ASGI app (through WebSocket, not HTTP).
Maybe I am missing something? Is there another way to share data between ASGI and WSGI apps?
Just for information: I managed to send a WS request in parallel, at the same time as the HTTP request.
The WSGI application writes to the DB; the ASGI application creates objects in memory with specific keys which can be used to access the data from the DB.
On subsequent WS requests, the ASGI app reads the keys from these in-memory objects and calls a function which loads the data from the DB. Overall, the ASGI and WSGI apps do not share an environment, but by using unique keys that were identical in the first parallel HTTP and WS requests, the ASGI app can access the data received by the WSGI app.
I'm playing around with django-sorcery, and so far it looks to me like I'm only getting one persistent connection into the database.
Does django-sorcery support connection pooling? If so, how do I control the number of open connections available in the pool?
The design itself is similar to Flask-SQLAlchemy, except that django_sorcery.db.sqlalchemy.SQLAlchemy is itself a scoped session, which by default uses a threadlocal scope.
If you're using django_sorcery.db.middleware.SQLAlchemyMiddleware, you're all set for session-per-request: you'll get one session per request, and the middleware will remove it at the end of the request.
As for the connection pool, it's managed by SQLAlchemy itself. You can override the connection pool configuration via the connection URL query string, or with ALCHEMY_OPTIONS["engine_options"] in the database config, which passes those options on to create_engine.
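For example, a pool configuration might look like this in settings.py. This is a sketch based on the ALCHEMY_OPTIONS key described above; the engine path and values are illustrative, so check the django-sorcery docs for the exact shape in your version:

```python
DATABASES = {
    'default': {
        'ENGINE': 'django_sorcery.db.backends.mysql',  # illustrative engine path
        'NAME': 'mydb',
        'ALCHEMY_OPTIONS': {
            'engine_options': {
                # forwarded to sqlalchemy.create_engine()
                'pool_size': 10,     # connections kept open in the pool
                'max_overflow': 5,   # extra connections allowed under load
                'pool_timeout': 30,  # seconds to wait for a free connection
            },
        },
    },
}
```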
I am trying to write unittests for my Flask API endpoints. I want the test cases to connect to the dev server which is running on a different port 5555.
Here is what I am doing in setUp() to make a test_client.
import flask_app
flask_app.app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+mysqldb://root:@localhost/mvp_test_db'
flask_app.app.config['TESTING'] = True
flask_app.app.config['SERVER_NAME'] = '192.168.2.2'  # the IP of the dev server
flask_app.app.config['SERVER_PORT'] = 5555
self.app_client = flask_app.app.test_client()
Then when I make a request using app_client like -
r = self.app_client.post('/API/v1/dummy_api', data = {'user_id' : 1})
I get a 404 when I print r and the request never comes to the dev server (no logs printed). I am not able to inspect the URL to which the connection is being attempted above. Any ideas?
app.test_client does not send requests through network interfaces; all requests are simulated and processed inside Werkzeug's routing system. To handle the "/API/v1/dummy_api" URL, you need to register a view for it. If it is registered, make sure it is imported (and thus connected) in the import section. Application settings and test settings are otherwise usually nearly identical.
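For example (the route and payload here mirror the question but are illustrative):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# the view must be registered on the same app object the test client wraps;
# no network socket is opened, so SERVER_NAME/port settings don't apply
@app.route('/API/v1/dummy_api', methods=['POST'])
def dummy_api():
    return jsonify({'user_id': request.form['user_id']})

client = app.test_client()
r = client.post('/API/v1/dummy_api', data={'user_id': '1'})
# note: form data arrives as strings
```

If the route were missing from the app, this same call would return the 404 seen in the question, with nothing ever hitting a dev server.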
Flask documentation says that there are 2 local contexts: the application context and the request context. Both are created on request and torn down when it finishes.
So, what's the difference? What are the use cases for each? Are there any conditions when only one of these are created?
Both are created on request and torn down when it finishes.
That is true within the request lifecycle: Flask creates the app context, creates the request context, does some magic, destroys the request context, and destroys the app context.
The application context can exist without a request, and that is the reason you have both. For example, if I'm running from a shell, I can create the app context without a request and still have access to the `current_app` proxy.
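A quick sketch of the difference, using only standard Flask APIs:

```python
from flask import Flask, current_app, request

app = Flask(__name__)

# application context only: current_app works, but there is no request
with app.app_context():
    print(current_app.name)

# a request context implies an application context as well
with app.test_request_context('/hello'):
    print(request.path)      # '/hello'
    print(current_app.name)  # also available here
```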
It is a design decision to separate concerns and give you the option of not creating the request context; the request context is expensive.
In old versions of Flask (0.7?), you had only the request context, and current_app was created with a Werkzeug proxy; the application context was added later to complete that pattern.
Some docs about the app context, though you have probably already read them: http://flask.pocoo.org/docs/appcontext/