I'm working on a Flask app (application factory structure) that does quite heavy background order processing, with its own config defined in app/orders/parser_config as the class MyCustomConfig.
I need to access some configuration from this class inside models.py, but I got a circular import error. I decided to import MyCustomConfig and inherit from it in the application's config.py:
from app.orders.parser_config import MyCustomConfig
...
class Config(MyCustomConfig):
...
This works fine; however, I can't find information on whether current_app.config is sent with each request. I can access variables from MyCustomConfig inside a template with {{ config.VARIABLE_INSIDE_MYCUSTOMCONFIG }}, as shown in the answers to this question. Does this mean Jinja pulls the values directly from the server when rendering, or does this happen within the request context?
The related question does not answer this.
Question in short: does Flask config size impact client load time on each request?
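(For what it's worth, a minimal sketch, reusing the config key from above, that shows where the substitution happens: Jinja renders templates on the server, so the client only ever receives the finished HTML, never the config object itself.)

from flask import Flask, render_template_string

app = Flask(__name__)
app.config['VARIABLE_INSIDE_MYCUSTOMCONFIG'] = 'hello'

@app.route('/')
def index():
    # Flask injects ``config`` into the Jinja context; the value is
    # substituted here, server-side, before the response body is built
    return render_template_string('{{ config.VARIABLE_INSIDE_MYCUSTOMCONFIG }}')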
I would like to exchange data between two Django apps that came from a single big app which, due to its increasing size and functionality, we decided to split.
Specifically, I need to retrieve only a string in one app from the other. Literally a 10-character string.
The only approach I've found so far that satisfies me is making an HTTP request from one app to the other, since I would like to avoid importing stuff from the other app (to me that doesn't seem like a clean way to do it; if you disagree, please change my mind).
Still, that feels like overkill.
Is there a clean way to achieve this without using an HTTP request or imports?
If you mean constant app configuration data, that lives in the project settings.py and doesn't change except at a server restart:
from django.conf import settings
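For instance, a minimal sketch (SHARED_TOKEN is a made-up name): define the string once in settings.py and read it from either app.

# settings.py (hypothetical constant)
SHARED_TOKEN = "ABC1234567"  # the 10-character string both apps need

# anywhere in either app
from django.conf import settings

def get_token():
    # settings are loaded once at startup and are effectively constant
    return settings.SHARED_TOKEN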
It's possible to make an import fail soft, if you just want to handle the case where the other app is not installed. For example:

try:
    from other_app.models import Foo
except ModuleNotFoundError:
    from .models import Foo_Stub as Foo
(obviously you can get as sophisticated as you want with your stub models and other methods that aren't available from the other app).
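For illustration, a hypothetical stub might be as simple as a plain class exposing just the attributes the rest of your code touches (the names here are made up):

# models.py in the importing app -- stand-in used when other_app is absent
class Foo_Stub:
    is_available = False  # flag callers can check before relying on Foo

    @staticmethod
    def get_value():
        # sensible default when the real Foo isn't installed
        return ""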
I'm building a large, complex Flask application. It has lots of route functions (which need to do database calls) and lots of services (which also need to do database calls).
I'm using flask_sqlalchemy to do the database calls in the route functions as normal with model syntax (User.query).
I would also like to make database calls in the services. However, I can't do this without setting an app context:
with app.app_context():
    User.query
which requires importing the app, which leads to circular imports (because the app imports the routes, which import the services, which have to import the app).
QUESTION 1: Do you know a way to get round this circular import problem?
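(For reference, one commonly cited workaround, sketched here under the assumption of a standard factory layout: keep the SQLAlchemy object in its own module so the services never import the app, and have background callers receive the app and push its context themselves.)

# extensions.py -- hypothetical module; routes and services import this, not the app
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()

# app.py
from flask import Flask
from extensions import db

def create_app():
    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
    db.init_app(app)
    return app

# services.py -- for background work, the caller passes the app in
def do_work(app):
    with app.app_context():
        ...  # User.query and friends work here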
Another approach could be to use flask_sqlalchemy in the routes and plain sqlalchemy in the services. This means nothing needs to be shared except the database URL and... the models.
QUESTION 2: Do you know a way to use the same model files with both flask_sqlalchemy and normal sqlalchemy?
I've tried this: https://dev.to/nestedsoftware/flask-and-sqlalchemy-without-the-flask-sqlalchemy-extension-3cf8
...however it breaks my relationships.
I am following the flask app pattern shown here: https://github.com/sloria/cookiecutter-flask/blob/master/%7B%7Bcookiecutter.app_name%7D%7D/README.rst (application factory)
Thanks,
Louise
I didn't get this clearly from the Flask docs. I've also seen similar Stack Overflow questions, but they didn't answer my question either, hence asking.
I have a Flask application served using gunicorn + gevent. The gunicorn worker process, on start, creates a Flask application. Then it imports some files that set up a few global things, like a UDP connection to a statsd server. The setup needs to be done only once, i.e. on worker process start, not with every client request, and the setup code in the imported files needs access to config variables.
I know that while serving a request I can use the current_app proxy, but not outside a request.
One way could be: put the Flask app creation code in a separate file and import it wherever you need access to the config.
Ex:
file: mywsgi.py
from flask import Flask
application = Flask(__name__)
application.config.from_pyfile('myconfig.cfg')
file: mygunicornapp.py
from mywsgi import application
import file1
import file2
# import more files
file: file1.py
from mywsgi import application
# use the application config to setup something
file: file2.py
from mywsgi import application
# use the application config to setup something
Is this the preferred way?
The Flask docs say I can create an application context explicitly.
Can I push an application context just after creating my Flask app and never pop it, so that the application context is there as long as my process runs, and the current_app proxy is available application-wide even when no request is being served?
Ex:
from flask import Flask
application = Flask(__name__)
application.config.from_pyfile('myconfig.cfg')
application.app_context().push()
Now I should be able to use the current_app proxy anywhere in my code. Thoughts, please!
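For illustration, with the context pushed and never popped, module-level setup code like the following should see the config at import time (STATSD_HOST is a made-up key):

# file1.py -- runs at import time, outside any request
from flask import current_app

# works only because an app context was pushed and never popped
statsd_host = current_app.config.get('STATSD_HOST', 'localhost')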
== Update ==
The files file1.py, file2.py, etc. are imported to add routes to the application. They provide the functions that handle my API requests. So the file mygunicornapp.py looks more like:
file: mygunicornapp.py
from mywsgi import application
from file1 import API1

@application.route("/api1")
def handle_api1():
    return API1.handler()

from file2 import API2

@application.route("/api2")
def handle_api2():
    return API2.handler()

# many more routes
Now file1 imports many other files and they, in turn, import many more files. Any of these imported files may need access to a config parameter that I have set on the application object. The question is: How do I make the application object available to all these files? Do you suggest that I pass the application object to each file?
Is it possible to just delay adding routes? I mean, set the routes after the current_app context local is available, which would mean the files are imported after current_app is available. I tried adding routes to the current_app context local in a 'before_first_request' callback. The problem with that is that the very first request returns a 404; subsequent requests get a correct response.
Why don't you make functions in file1 and file2 and pass app into them as an argument? Then you can call these functions from your setup code in mywsgi.py, using the app object you just created as the argument.
This should work much better than some of the other things you suggested. The different files importing each other is close to a circular import. Pushing an app context is also something that is likely to leave you with difficult-to-understand bugs.
If you create the app object in one file and import it from that file everywhere, you basically have a global variable (inside a namespace). This is going to cause problems when you want to test your app setup code, or create more than one version of your app for another reason. There is also the issue that you won't be able to import file1 or file2 without creating an app object. While testing these, or possibly re-using some of that code outside of Flask, this will be a pain.
It's much better to create the app object once and pass it around. Having a function that returns the newly created app, which can be imported and called from anywhere, is a common way of organizing a Flask app; this file is often called factory.py. It makes it easier to create zero, one, or more copies of the app, rather than making it more difficult.
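A minimal sketch of that arrangement (the file names and the STATSD_HOST key are illustrative, not from the original code):

# factory.py
from flask import Flask

def create_app():
    app = Flask(__name__)
    app.config.from_pyfile('myconfig.cfg')

    # pass the app explicitly instead of having modules import it
    import file1
    file1.init_app(app)
    return app

# file1.py
def init_app(app):
    # one-time setup that needs config, done once per created app
    statsd_host = app.config.get('STATSD_HOST', 'localhost')
    # ...open the UDP connection to statsd here...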
In my work we want to run a server with multiple databases. The database switch should occur when you access a URL like http://myapp.webpage.com or http://other.webpage.com. We want to run only one server instance and, at the moment of the HTTP request, switch the database and return the corresponding response.
We've been looking for a maintainable and 'Django-friendly' solution. In our investigation we found possible ways to do this, but not enough information about them.
Option 1: Django middleware
Django middleware runs each time the server receives an HTTP request.
Making the database switch here could be the best option, but as far as I know Django's database routers only allow changing the database for a model or a group of models.
Another option is to set a Django model manager instance in the middleware and force all models to re-assign their objects attribute from an attribute added by the custom middleware.
My last option is to add a new attribute to the request object received by the middleware that returns the database alias from settings.py, and then use the .using() method in each model query, as sketched below.
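A rough sketch of that last idea (Book and request.db_alias are made-up names; the alias would be set by the middleware):

from myapp.models import Book  # hypothetical model

def book_list(request):
    # request.db_alias is assumed to be set by the custom middleware
    return Book.objects.using(request.db_alias).all()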
Option 2: Class-based View Mixin
Create a mixin that uses one of the three approaches above, but the mixin must be set on ALL the class-based views. If a programmer forgets to set the mixin and it makes it to a production server, data could end up in the wrong database, and I don't want to take that risk.
Option 3: Changing the database settings at runtime
This option works, but it is not recommended and is too risky.
UPDATE:
How does this work?
middlewares.py
import django.conf as conf

class SelectDB(object):
    def process_request(self, request):
        print(request.META['HTTP_REFERER'])
        # read the target database alias (at most 10 characters) from a file
        with open("booklog/database.txt", "r") as file_database:
            database = file_database.read(10)
        if database != 'default':
            conf.settings.DATABASES['default']['NAME'] = database
Any information that helps us solve this will be greatly appreciated.
Answer (it worked for me)
The question was already answered here on Stack Overflow. I'd love for this functionality to be built into Django; it was a bit hard to find a way to make it possible.
I think it's important to mention the great work Wilduck did with the Django plugin django-dynamic-db-router; it's a great plugin that makes this operation possible in a (slightly) different way.
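If I remember that plugin's README correctly (treat this as an unverified assumption), usage looks roughly like this (Book is a made-up model):

# assumed django-dynamic-db-router API -- check against its README
from dynamic_db_router import in_database

with in_database('customer_db'):
    # queries inside the block are routed to the 'customer_db' alias
    books = Book.objects.all()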
Thanks a lot to @JL Peyret and @ire_and_curses.
And as an answer to @ire_and_curses: at least at this moment, it's what the project I'm working on needs. In previous projects we needed similar behavior, and having one server per instance was terrible to maintain and update, even with the process automated.
I have a Django application, and need to deal with the following:
One of my views, needs to make a POST request to another URL endpoint of the same application.
In order to do so, I use the requests module. I assemble the URL of the endpoint I need to call, dump the POST parameters, and perform the call.
This works fine for the most part, but it fails miserably when testing, since the view that corresponds to the URL I talk to knows nothing about the state of the testing environment.
The code is similar to this:
from django.conf import settings
import json
import requests

def view1(request, *args, **kwargs):
    url = 'http://api.%s/view2/' % settings.DOMAIN
    r = requests.post(
        url,
        data=json.dumps({'key': 'value'}),
    )
    # Notice that the ``url`` is a URL of the actual deployed application,
    # and therefore knows nothing about testing and its state. That's where
    # it goes wrong.
The question is: is there a way to make this behave correctly in testing? I use the django.test.client.Client class to create my test requests. As far as I know, instances of this class talk directly to the URL mapper. The URL I construct in the view therefore triggers an external HTTP request to the deployed application, instead of hitting the application under test.
Thanks.
One way to solve this is to mock the response from the URL for the purposes of the tests. I'd suggest using a lightweight mocking library, such as this:
http://blog.moertel.com/posts/2011-11-07-a-flyweight-mocking-helper-for-python.html
See the example code. It's very similar to your situation. I've used both of these in combination with requests and flask, but not django.
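The same idea also works with the standard library's unittest.mock; a sketch, where the patch target 'myapp.views.requests.post' and the '/view1/' URL are assumptions about your project layout:

from unittest import mock
from django.test import Client, TestCase

class View1Test(TestCase):
    @mock.patch('myapp.views.requests.post')  # hypothetical import path
    def test_view1(self, mock_post):
        mock_post.return_value.status_code = 200
        # the test client resolves '/view1/' through the URL mapper,
        # while the outgoing requests.post call is intercepted by the mock
        response = Client().get('/view1/')
        self.assertEqual(response.status_code, 200)
        mock_post.assert_called_once()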