models_committed signal not firing a function on model commit - Flask-SQLAlchemy - Flask

I'm trying to get a signal, models_committed, to fire a function when my models are committed. Currently the function just does a standard print(), but I can't get it to fire. I've tried both the decorator approach and the models_committed.connect(func, app) method.
What I'm expecting to happen
I commit some data to my database (into a model), then signal_thing() (located in __init__.py) prints 'hello - is this working' to the flask run console.
What is actually happening
Data is committed to the database (it shows up in my web app), but nothing is printed to the console; it seems signal_thing() does not fire.
I can't find much information about how to get signals working properly in Flask.
__init__.py
from flask import Flask
from config import Config
from flask_sqlalchemy import SQLAlchemy, models_committed, before_models_committed
def signal_thing(sender, changes, **kwargs):
    print('hello - is this working?')
    sender.print('hello - is this working')

models_committed.connect(signal_thing, app)
before_models_committed.connect(signal_thing, app)
decorator method
@models_commited.connect(app)
def signal_thing(sender, changes, **kwargs):
    print('hello - is this working?')
    sender.print('hello this worked')
config
SQLALCHEMY_TRACK_MODIFICATIONS is set to True.
<Config {'ENV': 'production', 'DEBUG': False, 'TESTING': False, 'PROPAGATE_EXCEPTIONS': None, 'PRESERVE_CONTEXT_ON_EXCEPTION': None, 'SECRET_KEY': 'shh', 'PERMANENT_SESSION_LIFETIME': datetime.timedelta(days=31), 'USE_X_SENDFILE': False, 'SERVER_NAME': None, 'APPLICATION_ROOT': '/', 'SESSION_COOKIE_NAME': 'session', 'SESSION_COOKIE_DOMAIN': None, 'SESSION_COOKIE_PATH': None, 'SESSION_COOKIE_HTTPONLY': True, 'SESSION_COOKIE_SECURE': False, 'SESSION_COOKIE_SAMESITE': None, 'SESSION_REFRESH_EACH_REQUEST': True, 'MAX_CONTENT_LENGTH': None, 'SEND_FILE_MAX_AGE_DEFAULT': datetime.timedelta(seconds=43200), 'TRAP_BAD_REQUEST_ERRORS': None, 'TRAP_HTTP_EXCEPTIONS': False, 'EXPLAIN_TEMPLATE_LOADING': False, 'PREFERRED_URL_SCHEME': 'http', 'JSON_AS_ASCII': True, 'JSON_SORT_KEYS': True, 'JSONIFY_PRETTYPRINT_REGULAR': False, 'JSONIFY_MIMETYPE': 'application/json', 'TEMPLATES_AUTO_RELOAD': None, 'MAX_COOKIE_SIZE': 4093, 'SQLALCHEMY_DATABASE_URI': 'sqlite:///C:\\Users\\\\ZigBot\\app.db', 'SQLALCHEMY_TRACK_MODIFICATIONS': True}>
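For reference, here is a minimal, self-contained sketch of how the signal is typically wired up, assuming Flask-SQLAlchemy 2.x (where models_committed is exported from the flask_sqlalchemy package and blinker is installed); the User model and the in-memory database are illustrative, not taken from the question above.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy, models_committed

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = True  # required for the signal to fire

db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64))

def signal_thing(sender, changes, **kwargs):
    # sender is the Flask app; changes is a list of (model, operation) tuples
    for model, operation in changes:
        print('{}: {}'.format(operation, model))

# Connect the receiver for this specific app instance.
models_committed.connect(signal_thing, app)

with app.app_context():
    db.create_all()
    db.session.add(User(name='demo'))
    db.session.commit()  # the signal fires here, after the commit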

Related

Testing a jsonify response - python

I have the following method in Flask that returns a jsonify response, so when hitting the http://127.0.0.1:5000/status route, it should render in a browser:
[
    {
        user: "admin"
    },
    {
        result: "OK - Healthy"
    }
]
The method is this:
@app.route('/status')
def health_check():
    response = [
        {'user': 'admin'},
        {'result': 'OK - Healthy'}
    ]
    return jsonify(response)
I am trying to build a test case that examines the content of the jsonify(response) object returned:
class HealthStatusCase(unittest.TestCase):
    def test_health_check(self):
        response = health_check()
        self.assertEqual(response, ['200 OK'])
But I don't know how to check the content of a jsonify output; the above test is misleading.
When I check the value of jsonify(response), I get:
pdb> jsonify(response)
<Response 71 bytes [200 OK]>
ipdb>
But what I am interested in is accessing the content of the response list, such as {'result': 'OK - Healthy'}, and comparing that key-value pair.
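A minimal, self-contained sketch of one way to inspect the body of a jsonify() Response: Flask's Response.get_json() parses the JSON payload back into Python objects, so it can be compared directly with assertEqual (the app and route below are re-declared only to keep the sketch runnable):

import unittest

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/status')
def health_check():
    return jsonify([{'user': 'admin'}, {'result': 'OK - Healthy'}])

class HealthStatusCase(unittest.TestCase):
    def test_health_check(self):
        with app.test_client() as client:
            response = client.get('/status')
            self.assertEqual(response.status_code, 200)
            # get_json() decodes the response body into Python objects
            self.assertEqual(
                response.get_json(),
                [{'user': 'admin'}, {'result': 'OK - Healthy'}],
            )

if __name__ == '__main__':
    unittest.main()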
UPDATE
I've followed the suggested pytest approach, working with a fixture to make a request to the /status endpoint, so the test case is now like this:
@pytest.fixture
def test_health_check(client):
    response = client.get('/status')
    assert response.json == [
        {'user': 'admin'},
        {'result': 'OK - Healthy'}
    ]
When I execute python -m pytest tests/test_health_check.py, the test passes:
> python -m pytest tests/test_health_check.py
============================================================== test session starts ===============================================================
platform linux -- Python 3.10.6, pytest-7.2.0, pluggy-1.0.0
rootdir: /home/../../
plugins: flask-1.2.0
collected 1 item
tests/test_health_check.py . [100%]
=============================================================== 1 passed in 0.11s ================================================================
But then something that I miss is that if I modify the assert response.json content to, let's say, this:
@pytest.fixture
def test_health_check(client):
    response = client.get('/status')
    assert response.json == [
        {'user': 'admin'},
        {'result': 'OKdsdsd - Healthy'}
    ]
The test also passes.
I know a fixture is intended to illustrate a behavior and run it inside a test, but is there a way to relate the test back to the original values of the JSON in my original response list? I feel that this test is not meaningful.
Better to use the pytest library for testing.
With it, your code will be very simple:
def test_health_check(client):
    response = client.get('/status')
    assert response.json == [
        {'user': 'admin'},
        {'result': 'OK - Healthy'}
    ]
Based on the Flask docs article about testing: https://flask.palletsprojects.com/en/2.2.x/testing/
Docs for pytest: https://docs.pytest.org/
Update:
In order to make the test above work, you should add this to tests/conftest.py:
import pytest
from my_project import create_app

@pytest.fixture()
def app():
    app = create_app()
    app.config.update({
        "TESTING": True,
    })
    yield app

@pytest.fixture()
def client(app):
    return app.test_client()

@pytest.fixture()
def runner(app):
    return app.test_cli_runner()
Or, if you don't have something like a create_app function:
import pytest
from my_project import app

@pytest.fixture()
def client():
    return app.test_client()

@pytest.fixture()
def runner():
    return app.test_cli_runner()
Adapted from: https://flask.palletsprojects.com/en/2.2.x/testing/#fixtures
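For context, the conftest above assumes my_project exposes an application factory; a hypothetical minimal my_project/__init__.py could look roughly like this (the /status route is only illustrative):

from flask import Flask, jsonify

def create_app():
    app = Flask(__name__)

    @app.route('/status')
    def health_check():
        return jsonify([{'user': 'admin'}, {'result': 'OK - Healthy'}])

    return app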

Flask .env config.py doesn't find database: unable to get repr for class 'flask_sqlalchemy.SQLAlchemy'

I created a Flask app with a Postgres DB in the cloud, configured via a .env file. When I run the server, it seems it does not find the database.
__init__.py:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from config import Config

def init_app():
    """Construct core Flask application with embedded Dash app."""
    app = Flask(__name__, instance_relative_config=False)
    app.config.from_object(Config())
    db = SQLAlchemy(app)
    migrate = Migrate(app, db)
    with app.app_context():
        # Import parts of our core Flask app
        from . import routes
        return app
config.py:
import os
from dotenv import dotenv_values

basedir = os.path.abspath(os.path.dirname(__file__))
configuration = dotenv_values(".env")

class Config(object):
    DEBUG = False
    TESTING = False
    CSRF_ENABLED = True
    SECRET_KEY = 'this-really-needs-to-be-changed'
    SQLALCHEMY_DATABASE_URI = os.environ['DATABASE_URL']
After step-by-step debugging, app.config for the app looks like this:
<Config {'ENV': 'development', 'DEBUG': False, 'TESTING': False, 'PROPAGATE_EXCEPTIONS': None, 'PRESERVE_CONTEXT_ON_EXCEPTION': None, 'SECRET_KEY': 'this-really-needs-to-be-changed', 'PERMANENT_SESSION_LIFETIME': datetime.timedelta(days=31), 'USE_X_SENDFILE': False, 'SERVER_NAME': None, 'APPLICATION_ROOT': '/', 'SESSION_COOKIE_NAME': 'session', 'SESSION_COOKIE_DOMAIN': None, 'SESSION_COOKIE_PATH': None, 'SESSION_COOKIE_HTTPONLY': True, 'SESSION_COOKIE_SECURE': False, 'SESSION_COOKIE_SAMESITE': None, 'SESSION_REFRESH_EACH_REQUEST': True, 'MAX_CONTENT_LENGTH': None, 'SEND_FILE_MAX_AGE_DEFAULT': datetime.timedelta(seconds=43200), 'TRAP_BAD_REQUEST_ERRORS': None, 'TRAP_HTTP_EXCEPTIONS': False, 'EXPLAIN_TEMPLATE_LOADING': False, 'PREFERRED_URL_SCHEME': 'http', 'JSON_AS_ASCII': True, 'JSON_SORT_KEYS': True, 'JSONIFY_PRETTYPRINT_REGULAR': False, 'JSONIFY_MIMETYPE': 'application/json', 'TEMPLATES_AUTO_RELOAD': None, 'MAX_COOKIE_SIZE': 4093, 'CSRF_ENABLED': True, 'SQLALCHEMY_DATABASE_URI': 'DATABASE_URL=postgres://mqyl:XXXXXXXXXXXXXXXXX#queenie.db.XXXXXX.com:5432/rulXXXX', 'SQLALCHEMY_BINDS': None, 'SQLALCHEMY_NATIVE_UNICODE': None, 'SQLALCHEMY_ECHO': False, 'SQLALCHEMY_RECORD_QUERIES': None, 'SQLALCHEMY_POOL_SIZE': None, 'SQLALCHEMY_POOL_TIMEOUT': None, 'SQLALCHEMY_POOL_RECYCLE': None, 'SQLALCHEMY_MAX_OVERFLOW': None, 'SQLALCHEMY_COMMIT_ON_TEARDOWN': False, 'SQLALCHEMY_TRACK_MODIFICATIONS': None, 'SQLALCHEMY_ENGINE_OPTIONS': {}}>
WSGI.py:
from application import init_app

app = init_app()

if __name__ == "__main__":
    app.run(debug=True)
Trace error:
There is no traceback; only in debug mode, the value of db = SQLAlchemy(app) shows as:
db: Unable to get repr for class 'flask_sqlalchemy.SQLAlchemy'
Found the solution: DATABASE_URL=postgres:// is deprecated; I should use DATABASE_URL=postgresql:// instead.
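A minimal sketch of what that fix can look like in config.py, assuming DATABASE_URL is present in the environment (SQLAlchemy 1.4+ no longer accepts the legacy postgres:// scheme that some hosting providers still hand out):

import os

uri = os.environ['DATABASE_URL']
# Rewrite the deprecated scheme before handing the URI to Flask-SQLAlchemy.
if uri.startswith('postgres://'):
    uri = uri.replace('postgres://', 'postgresql://', 1)

class Config(object):
    SQLALCHEMY_DATABASE_URI = uri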

Why is Django TestCase not creating a test database?

I am writing a Django test inheriting from django.test.TestCase. Everywhere, in the docs, this tutorial, and even in this accepted SO answer, it is stated that when using Django TestCase, a test DB will be created automatically.
Previously I have worked with DRF APITestCase and all worked well. Here I am using the very standard approach, but the setUpTestData class method is using my production DB.
What am I doing wrong, and what has to be done so that a test DB is spawned and used for the test?
Please see my code below.
from datetime import datetime

import factory
from django.test import TestCase
from agregator.models import AgregatorProduct
from django.db.models import signals

def sample_product_one():
    sample_product_one = {
        # "id": 1,
        "name": "testProdOne",
        "dph": 21,
        "updated": datetime.now(),
        "active": True,
        "updatedinstore": False,
        "imagechanged": False,
        "isVirtualProduct": False,
    }
    return sample_product_one

class TestCreateProdToCategory(TestCase):
    """
    Test for correct creation of records
    """

    @classmethod
    @factory.django.mute_signals(signals.pre_save, signals.post_save)
    def setUpTestData(cls):
        AgregatorProduct.objects.create(
            **sample_product_one()
        )

    def test_create_prod_to_cat(self):
        product = AgregatorProduct.objects.get(id=1)
        self.assertEqual(product.id, 1)
DB setup:
DATABASES = {
    'agregator': {
        'NAME': 'name',
        'ENGINE': 'sql_server.pyodbc',
        'HOST': 'my_ip',
        'USER': 'my_user',
        'PASSWORD': 'my_pwd',
        'OPTIONS': {
            'driver': 'ODBC Driver 17 for SQL Server',
            'isolation_level': 'READ UNCOMMITTED',
        },
    }
}
The test results in
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\xevos\xevosadmin\agregator\tests\test_admin_actions\test_products_to_categories_admin_action.py", line 64, in test_create_prod_to_cat
product = AgregatorProduct.objects.get(id=1)
File "C:\xevos\xevosadmin\.venv\lib\site-packages\django\db\models\manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "C:\xevos\xevosadmin\.venv\lib\site-packages\django\db\models\query.py", line 397, in get
raise self.model.DoesNotExist(
agregator.models.AgregatorProduct.DoesNotExist: AgregatorProduct matching query does not exist.
----------------------------------------------------------------------
This is a result of the id being auto-incrementing: given there are already products in the production DB, the created product gets an id of e.g. 151545.
(The AgregatorProduct matching query does not exist error is a result of the fact that the product which used to have id=1 was deleted a long time ago in the production DB.)
So the test writes to the existing database, and the data persists there even after the test is finished.
To create a test database, use the setUp method inside TestCase and run it using python manage.py test
from django.test import TestCase
from myapp.models import Animal

class AnimalTestCase(TestCase):
    def setUp(self):
        Animal.objects.create(name="lion", sound="roar")
        Animal.objects.create(name="cat", sound="meow")

    def test_animals_can_speak(self):
        """Animals that can speak are correctly identified"""
        lion = Animal.objects.get(name="lion")
        cat = Animal.objects.get(name="cat")
        self.assertEqual(lion.speak(), 'The lion says "roar"')
        self.assertEqual(cat.speak(), 'The cat says "meow"')
The test database will be created automatically and deleted after the tests are done.
https://docs.djangoproject.com/en/3.1/topics/testing/overview/#writing-tests
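As a side note, here is a hedged variant of the original test (reusing the model and fields from the question) that avoids assuming the primary key will be 1: keep a reference to the object created in setUpTestData and look it up by its actual pk.

from datetime import datetime

from django.test import TestCase
from agregator.models import AgregatorProduct

class TestCreateProdToCategory(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Store the created object so the test does not depend on a hard-coded id.
        cls.product = AgregatorProduct.objects.create(
            name="testProdOne",
            dph=21,
            updated=datetime.now(),
            active=True,
            updatedinstore=False,
            imagechanged=False,
            isVirtualProduct=False,
        )

    def test_create_prod_to_cat(self):
        product = AgregatorProduct.objects.get(pk=self.product.pk)
        self.assertEqual(product.name, "testProdOne")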

Why don't Celery tasks work asynchronously?

I am trying to run the basic debug_task from Celery asynchronously, but it always runs synchronously.
I have created a new project with the django-cookiecutter template.
I made sure that Redis is working and all env variables are valid.
I launch Celery, and when it is ready to receive tasks, I launch the console (shell_plus) and invoke the task asynchronously.
In [1]: from project.taskapp.celery import debug_task
In [2]: debug_task.delay()
Request: <Context: {'id': '87b4d96e-9708-4ab2-873e-0118b30f7a6b', 'retries': 0, 'is_eager': True, 'logfile': None, 'loglevel': 0, 'hostname': 'hostname', 'callbacks': None, 'errbacks': None, 'headers': None, 'delivery_info': {'is_eager': True}, 'args': (), 'called_directly': False, 'kwargs': {}}>
Out[2]: <EagerResult: 87b4d96e-9708-4ab2-873e-0118b30f7a6b>
As you can see, the param is_eager == True, so it ran synchronously.
I also tried calling the task as debug_task.apply_async().
Here are the Celery settings from the cookiecutter template:
import os
from celery import Celery
from django.apps import apps, AppConfig
from django.conf import settings

if not settings.configured:
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.local')

app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')

class CeleryAppConfig(AppConfig):
    name = 'project.taskapp'
    verbose_name = 'Celery Config'

    def ready(self):
        installed_apps = [app_config.name for app_config in apps.get_app_configs()]
        app.autodiscover_tasks(lambda: installed_apps, force=True)

@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
As many commenters have pointed out: turn off eager processing when you configure celery:
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.conf.task_always_eager = False
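Since the app is configured with config_from_object('django.conf:settings', namespace='CELERY'), Celery also picks up Django settings prefixed with CELERY_, so the same thing can be expressed in the Django settings module instead of on the app object. A short sketch (the settings path is assumed from the cookiecutter layout and may differ):

# config/settings/local.py (path assumed from the cookiecutter layout)
CELERY_TASK_ALWAYS_EAGER = False       # send tasks to the broker/worker instead of running inline
CELERY_TASK_EAGER_PROPAGATES = False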

Flask socketio debug with eventlet and Redis spawns extra greenthreads?

I'm trying to put together a simple Flask / socketio / eventlet server that subscribes to Redis events. The behavior I'm seeing is that with Flask debug enabled, every time Werkzeug detects changes and restarts socketio, another one of my redis listeners is started as well (except the old listener doesn't exit).
Here's a working version with all of the socketio handlers removed:
import json
from flask import Flask, render_template
from flask_socketio import SocketIO, emit
from flask.ext.redis import FlaskRedis
import eventlet
eventlet.monkey_patch()

with open('config/flask.json') as f:
    config_flask = json.load(f)

app = Flask(__name__, static_folder='public', static_url_path='')
app.config.update(
    DEBUG=True,
    PROPAGATE_EXCEPTIONS=True,
    REDIS_URL="redis://localhost:6379/0"
)
redis_cache = FlaskRedis(app)
socketio = SocketIO(app)

@app.route('/')
def index():
    cache = {}
    return render_template('index.html', **cache)

def redisReader():
    print 'Starting Redis subscriber'
    pubsub = redis_cache.pubsub()
    pubsub.subscribe('msg')
    for msg in pubsub.listen():
        print '>>>>>', msg

def runSocket():
    print "Starting webserver"
    socketio.run(app, host='0.0.0.0')

if __name__ == '__main__':
    pool = eventlet.GreenPool()
    pool.spawn(redisReader)
    pool.spawn(runSocket)
    pool.waitall()
Throw in some manual redis-cli publishing (PUBLISH msg himom)
This produces the following output:
Starting Redis subscriber
Starting webserver
* Restarting with stat
>>>>> {'pattern': None, 'type': 'subscribe', 'channel': 'msg', 'data': 1L}
Starting Redis subscriber
Starting webserver
* Debugger is active!
* Debugger pin code: 789-323-740
(22252) wsgi starting up on http://0.0.0.0:5000
>>>>> {'pattern': None, 'type': 'subscribe', 'channel': 'msg', 'data': 1L}
>>>>> {'pattern': None, 'type': 'message', 'channel': 'msg', 'data': 'himom'}
>>>>> {'pattern': None, 'type': 'message', 'channel': 'msg', 'data': 'himom'}
Why is the Redis listener getting started multiple times? If I make changes and save them, Werkzeug will start another one every time. How do I deal with this correctly?
Here's a list of the packages involved and their versions:
Python 2.7.6
Flask 0.10.1
Werkzeug 0.11.4
eventlet 0.18.4
greenlet 0.4.9
Flask-Redis 0.1.0
Flask-SocketIO 2.2
** UPDATE **
I now have a partial solution. Everything above stays the same, except the pool behavior has been moved into Flask's 'before_first_request' function:
def setupRedis():
    print "setting up redis"
    pool = eventlet.GreenPool()
    pool.spawn(redisReader)

def runSocket():
    print "Starting Webserver"
    socketio.run(app, host='0.0.0.0')

if __name__ == '__main__':
    app.before_first_request(setupRedis)
    print app.before_first_request_funcs
    runSocket()
The remaining issue is that 'before_first_request' does not handle the case where there are existing websockets, but that is a separate question.
Add a simple print(os.getpid()) and observe ps aux. You should notice there are two Python processes. Change DEBUG to False and the "problem" no longer reproduces; there is also only one Python process now.
So the problem is that your code creates a redisReader regardless of whether it is running in the "worker manager" process or in an actual "request handler" process. Don't create and start that pool unconditionally. Consult the Werkzeug documentation for where your "application init" event is, and only start redisReader there.
The solution here was to put my threading into a function called by Flask:
def setupRedis():
    pool = eventlet.GreenPool()
    pool.spawn(redisReader)

...

app.before_first_request(setupRedis)
This solved the extra threads left behind on Werkzeug restarts.
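An alternative guard, not from the original answers, is to start the Redis reader only in the process that actually serves requests: when the Werkzeug reloader is active it sets WERKZEUG_RUN_MAIN to 'true' in that child process. A sketch reusing app, redisReader, and socketio from the question's code:

import os

if __name__ == '__main__':
    # Spawn the listener only in the serving process (or when the reloader is off).
    if not app.debug or os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
        pool = eventlet.GreenPool()
        pool.spawn(redisReader)
    socketio.run(app, host='0.0.0.0')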