Use flask admin to set config parameters - flask

As title says, I have small web app, without using database and models.
I'd like an interface to change some of Flask's own config parameters, and thought that flask-admin might get me there quickly. Is this easily possible?

You can't generally change configuration after starting the application without restarting the server.
The application (at least in production) will be served by multiple processes, possibly even on multiple servers. Changes to the configuration will only affect the process that handled the request, until the other processes are reaped and restart. Even then, they may fork from a time after the configuration was read.
Extensions are not consistent about how they read configuration. Some read the configuration from current_app every request. Some only read it during init_app and store their own copy, so changing the configuration wouldn't change their copy.
Even if the configuration is read each time, some configuration just can't be changed, or requires other steps as well. For example, if you change databases, you should probably make sure you also close all connections to the old database, which the config knows nothing about. As another example, you could change debug mode, but it wouldn't do anything, because most of the logging is set up ahead of time.
The web app might not be the only thing relying on the configuration, so even if you could restart it automatically when configuration changed, you'd also need to restart dependent services such as Celery. And those services also might be on completely different machines or as different users.
Configuration is typically stored in Python files, so you'd need to create a serializer that can dump valid Python code, or write a config loader for a different format.
Flask-Admin might be able to be used to create a user interface for editing the configuration, but it wouldn't otherwise help with any of these issues.
It's not really worth trying to change Flask.config after starting the application; it's just not designed for that. If you need runtime-changeable settings, design a config system specifically for them, but don't expect to be able to change Flask.config in general.
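If you do build such a purpose-made config system, a minimal sketch could look like the following. The RuntimeSettings class and its dict backend are invented for illustration; a real app would back it with a database table or key-value store that all worker processes can reach (which is exactly what Flask.config isn't).

```python
# Hypothetical sketch of a purpose-made runtime settings store. Nothing
# here is a Flask API; it simply sidesteps Flask.config entirely.

class RuntimeSettings:
    """Settings re-read from shared storage on every access, so every
    worker process sees changes without a restart."""

    def __init__(self, backend):
        self._backend = backend  # a dict here; a DB table in real use

    def get(self, key, default=None):
        # Re-read on each access so changes made by one process become
        # visible to the others.
        return self._backend.get(key, default)

    def set(self, key, value):
        self._backend[key] = value


# Usage: a Flask-Admin view would call settings.set(...), and request
# handlers would call settings.get(...) instead of current_app.config[...].
settings = RuntimeSettings({})
settings.set("items_per_page", 50)
print(settings.get("items_per_page"))  # → 50
```

Note this only covers app-level settings you define yourself; none of the caveats above about Flask's own configuration go away.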

Related

How to run two separate django instances on same server/domain?

To elaborate, we have one server set up to run Django. The issue is that we need to establish a "public" test server that our end users can test before we push the changes to production.
Now, normally we would have production.domain.com and testing.domain.com and run them separately. However, due to conditions outside our control we only have access to one domain. We will call it program.domain.com for now.
Is there a way to set up two entirely separate Django instances (i.e., we do not want the admin of the production version to be able to access demo data, and vice versa) such that we have program.domain.com/production and program.domain.com/development environments?
I tried looking at Django's "sites" framework, but as far as I can see, all it can do is separate domains, not paths, and both "sites" can access the same data.
However, as I stated, we want to keep our testing data and our production data separate. Yet we want to give our end-user testers a version they can tinker with, keeping the production, public test, and local development (runserver command) versions separate.
I would suggest using the /production or /development path prefix to select which database to use. You can read more about multitenancy here: https://books.agiliq.com/projects/django-multi-tenant/en/latest/
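As a rough sketch of how that could look (the names TenantMiddleware, TenantRouter, and the "production"/"development" aliases are invented; Django imports are omitted since middleware classes and database routers are plain Python classes): a middleware records which database alias the request's path prefix selects, and a router listed in settings.DATABASE_ROUTERS sends ORM queries there.

```python
import threading

_local = threading.local()

def tenant_for_path(path):
    """Map a URL path prefix to a database alias from settings.DATABASES."""
    return "development" if path.startswith("/development") else "production"

class TenantMiddleware:
    """Records, per thread, which tenant the current request belongs to."""
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        _local.tenant = tenant_for_path(request.path)
        return self.get_response(request)

class TenantRouter:
    """Listed in settings.DATABASE_ROUTERS; sends ORM reads and writes to
    whichever alias the middleware recorded for this thread."""
    def db_for_read(self, model, **hints):
        return getattr(_local, "tenant", "production")

    db_for_write = db_for_read
```

Because each alias points at a completely separate database, each site's auth tables, and therefore its admin users, stay separate, which covers the isolation requirement in the question.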

High response time when setting value for Django settings module inside a middleware

In a Django project of mine, I've written a middleware that performs an operation for every app user.
I've noticed that the response time balloons up if I write the following at the start of the middleware module:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE","myproject.settings")
It's about 10 times less if I omit these lines. Being a beginner, I'm trying to clarify why there's such a large differential between the respective response times. Can an expert explain it? Have you seen something like it before?
p.s. I already know why I shouldn't modify the environment variable for Django settings inside a middleware, so don't worry about that.
The reason likely has something to do with Django reloading your settings configuration for every request rather than once per server thread/process (and thus also re-instantiating/connecting to your database, cache, etc.). You will want to confirm this with profiling. This behavior is also very likely dependent on which app server you are running.
If you really want this level of control over your settings, it is much easier to add this line to manage.py, wsgi.py, or whatever file/script you use to launch your app server.
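For reference, here is a sketch of where the line normally lives, and a demonstration of why putting it there is safe (os.environ.setdefault never overwrites a value that is already set):

```python
import os

# Where the line normally lives: wsgi.py (or manage.py), i.e. the process
# entry point that runs once at startup, not a middleware module touched
# on the request path. "myproject.settings" is the asker's module name.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
# ...followed in a real wsgi.py by:
#   from django.core.wsgi import get_wsgi_application
#   application = get_wsgi_application()

# setdefault never overwrites a value that is already set, so a
# DJANGO_SETTINGS_MODULE exported in the environment still wins:
os.environ["EXAMPLE_VAR"] = "explicit"
os.environ.setdefault("EXAMPLE_VAR", "default")
print(os.environ["EXAMPLE_VAR"])  # → explicit
```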
P.S. If you already know you shouldn’t do it, why are you doing it?

Deploying Django as standalone internal app?

I'm developing a tool using Django for internal use at my organization. It's used to search and tag documents (using Haystack and Solr) and will be employed on different projects. My team currently has a working prototype, and we want to deploy it 'in the wild.'
Our security environment is strict. Project documents are located in subfolders on a network drive, and access to these folders is restricted based on users' Windows credentials (we also have an MS SQL server that uses the same credentials). A user can only access the projects they are involved in. Since we're an exclusively Microsoft shop, if we want to deploy our app on the company intranet, we'll need to use an IIS server to deal with these permissions. No one on the team has the requisite knowledge to work with IIS or Active Directory, and our IT department is already over-extended. In short, we're not web developers and we don't have immediate access to anybody experienced.
My hacky solution is to forgo IIS entirely and have each end user run a lightweight server locally (namely, CherryPy) while each retaining access to a common project-specific database (e.g. a SQLite DB living on the network drive or a DB on the MS SQL server). In order to use the tool, they would just launch an all-in-one batch script and point their browser to 127.0.0.1:8000. I recognize how ugly this is, but I feel like it leverages the security measures already in place (note that we never expect more than 10 simultaneous users on a given project). Is this a terrible idea, and if so, what's a better solution?
I've dealt with a similar situation (primary development was geared toward a normal deployment situation, but some users have a requirement to use the application on a standalone workstation). Rather than deploy web and db servers on a standalone workstation, I just run the app with the Django internal development server and a SQLite DB. I didn't use CherryPy, but hopefully this is somewhat useful to you.
My current solution makes a nice executable for users not familiar with the command line (who also have trouble remembering the URL to put in their browser) but is also relatively easy development:
Use PyInstaller to package the Django app into a single executable. Once you figure this out, don't continue to do it by hand; add it to your continuous integration system (or at least write a script).
Modify the manage.py to:
Detect whether the app is frozen by PyInstaller and was given no arguments (i.e., the user launched it by double-clicking) and, if so, run execute_from_command_line(..) with arguments to start the Django development server.
Right before running the execute_from_command_line(..), pop off a thread that does a time.sleep(2) (to let the development server come up fully) and then webbrowser.open_new("http://127.0.0.1:8000").
Modify the app's settings.py to detect if frozen and change things around such as the path to the DB server, enabling the development server, etc.
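The manage.py changes above can be sketched roughly as follows. The helper names (build_argv, open_browser_soon) and the settings module name "myapp.settings" are invented; PyInstaller really does set sys.frozen on the executables it builds.

```python
import sys
import threading
import time
import webbrowser

def build_argv(argv, frozen):
    """Return the argv to hand to Django's execute_from_command_line."""
    if frozen and len(argv) == 1:
        # Double-clicked executable: start the dev server. --noreload
        # avoids the autoreloader re-exec'ing the frozen binary.
        return [argv[0], "runserver", "--noreload", "127.0.0.1:8000"]
    return argv

def open_browser_soon(url="http://127.0.0.1:8000", delay=2):
    """Give the dev server a moment to come up, then open the browser."""
    def _open():
        time.sleep(delay)
        webbrowser.open_new(url)
    threading.Thread(target=_open, daemon=True).start()

# Entry point, as in a regular manage.py:
#
#   if __name__ == "__main__":
#       os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
#       from django.core.management import execute_from_command_line
#       argv = build_argv(sys.argv, getattr(sys, "frozen", False))
#       if argv is not sys.argv:   # frozen + double-clicked
#           open_browser_soon()
#       execute_from_command_line(argv)
```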
A couple of additional notes.
If you go with SQLite, Windows file locking on network shares may not be adequate if you have concurrent writes to the DB; concurrent readers should be fine. Additionally, since you'll have different DB files for different projects, you'll have to figure out a way for the user to indicate which file to use. Maybe prompt in the app, or build the same app multiple times with different settings.py files. There are a variety of ways to hit this nail...
If you go with MSSQL (or any client/server DB), the app will have to know the DB credentials (which means they could be extracted by a knowledgeable user). This presents a security risk that may not be acceptable. Basically, don't try to have the only layer of security within the app that the user is executing. The DB credentials used by the app a user is executing should only have the access that that user is allowed.

Handy way to connect PostgreSQL from an out-of-box application? (Embedding PostgreSQL)

I'm trying to use PostgreSQL in my application to manage some records.
With SQLite, nothing complicated was needed because it's a file db.
However, with PostgreSQL, some setup has to be done before using it, even if I only want to connect over localhost or a local socket file.
Of course, the initialization is only required once, but the problem is, I don't want to bother users of my application with this system-setup chore.
I want users to be able to just run and use my application out of the box.
I've read a wiki article about using PostgreSQL on Arch Linux (https://wiki.archlinux.org/index.php/PostgreSQL), and no user would want to do all of that just to use my application.
Furthermore, it's not possible to run those commands from the wiki from within the application, because the specific steps depend on the distribution and also require root privileges.
Is there any way to connect PostgreSQL without such complicated initialization, like SQLite? Or, simple way to make the user's system prepared?
Just for your information, my application is written in C++ and Qt.
And, I cannot use SQLite for its limitations.
If you want an embedded database PostgreSQL isn't a great choice. It's usable, but it's a bit clunky compared to DBs designed for embedding like Firebird, SQLite, etc.
Bundling PostgreSQL
If you want to use PostgreSQL, I suggest bundling PostgreSQL binaries with your program, and starting a PostgreSQL server up when your program is used - assuming you only need a single instance of your program at a time. You can get handy pre-built Pg binaries from EDB. You can offer your users a choice on startup - "Use existing PostgreSQL" (in which case they must create the user and db themselves, make any pg_hba.conf changes, etc) or "Start private PostgreSQL server" (when you run your own using the binaries you bundled).
If they want to use an existing DB, all you need to do is tell them to enter the host / socket_directory, database name, port, username, and password. Don't make "password" required, it might be blank if their hba configuration doesn't require one.
If they want to use a private instance, you fire one up - see the guidance below.
Please do not bundle a PostgreSQL installer and run it as a silent install. This is a nightmare for the poor user, who has no idea where this "postgresql" thingy came from and is likely to uninstall it. Or, worse, it'll conflict with their own PostgreSQL install, or confuse them when they go to install PostgreSQL themselves later, finding that they already have it and don't even know the password. Yeah. Don't do that. Bundle the binaries only and control your private PostgreSQL install yourself.
Do not just use the postgres executables already on the system to make your own private instance. If the user decides to upgrade from 9.2 to 9.3, suddenly your private instance won't start up, and you won't have access to the old 9.2 binaries to do a pg_upgrade. You need to take full responsibility if you're using a private instance of Pg and bundle the binaries you need in your program.
How to start/control a private instance of Pg
On first run, you run PostgreSQL's initdb -D /path/to/datadir, pointing at an empty subdirectory of a private data directory for your program. Set some environment variables first so you don't conflict with any normal PostgreSQL install on the system. In particular, set PGPORT to some random high-ish port, and specify a unix_socket_directory configuration parameter different from the default.
Once initdb has run, your program will probably want to modify postgresql.conf and pg_hba.conf to fit its needs. Personally I don't bother, I just pass any parameters I want as overrides on the PostgreSQL server start-up command line, including overriding the hba_file to point to one I have pre-created.
If the data dir already exists during program startup you then need to check whether the datadir matches the current PostgreSQL major version by examining the PG_VERSION file. If it doesn't, you need to make a copy of the datadir and run pg_upgrade to upgrade it to the current version you have bundled. You need to retain a copy of the binaries for the old version for this, so you'll need to special-case your update processes or just bundle old versions of the binaries in your update installer as well as the new ones.
When your program is started, after it's checked that the datadir already exists and is the correct version, set the PGPORT env var to the same value you used for initdb then start PostgreSQL directly with postgres -D /path/to/datadir and suitable parameters for log file output, etc. The postmaster you start will have your program as its parent process, and will be terminated if your program quits suddenly. That's OK, it'll get time to clean up, and even if it doesn't PostgreSQL is crash-safe by design. Your program should still politely ask PostgreSQL to shut down before exiting by sending an appropriate signal, though.
Your application can now connect to the PostgreSQL instance it owns and controls using libpq or whatever, as normal, by specifying the port you're running it on and (if making a unix socket connection) passing host as /path/to/whatever/unix/socket/dir.
Instead of directly controlling postgres you might instead choose to use pg_ctl to drive it. That way you can leave the database running when your program exits and only start it if you find it's not already running. It doesn't really matter if the user shuts the system down without shutting down PostgreSQL - Pg will generally get some shutdown warning from the operating system, but doesn't need it or care much about it, it's quite happy to just crash and recover when next started up.
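The command lines involved in the steps above can be sketched as follows, shown in Python for brevity (a C++/Qt program could pass the same argument lists to QProcess). The bindir/datadir paths are placeholders, local "trust" auth is a simplification for a private single-user instance, and note that unix_socket_directories was spelled unix_socket_directory before PostgreSQL 9.3.

```python
import os

def initdb_cmd(bindir, datadir):
    """First run only: create a private cluster in an empty directory."""
    return [os.path.join(bindir, "initdb"),
            "-D", datadir, "--auth=trust", "--encoding=UTF8"]

def postgres_cmd(bindir, datadir, port, socket_dir):
    """Start the private server on a non-default port and socket directory
    so it can't collide with a system-wide PostgreSQL install."""
    return [os.path.join(bindir, "postgres"),
            "-D", datadir,
            "-p", str(port),
            "-c", "unix_socket_directories=" + socket_dir,
            "-c", "listen_addresses="]  # unix sockets only, no TCP

def datadir_version(datadir):
    """Read PG_VERSION to decide whether pg_upgrade is needed."""
    with open(os.path.join(datadir, "PG_VERSION")) as f:
        return f.read().strip()
```

Each list would be handed to subprocess.run() (or QProcess) with PGPORT set in the child's environment, as described above.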
It is definitely not as easy as using SQLite.
In SQLite you have only a single file which contains the whole database and if you "connect" to the database you simply open the file.
In PostgreSQL, MySQL, etc., you have a database daemon which keeps lots of files open: multiple files for the database itself, transaction logs, log files, and of course the configuration files which specify where all these files are, resource usage, how to connect, and much more. The database also needs to be maintained regularly for optimal performance, and there are lots of tunables. That's all because the usual use of these databases is to serve multiple clients at once as fast as possible, not to make setup simple.
It is still possible to set up PostgreSQL for just a single user. To set up the database you need to: create a database directory; find an unused port on localhost for TCP connections, or better, use UNIX domain sockets; write the config with the needed parameters (like the socket to use, database directory, user, resource restrictions, and permissions so that no other users on the same machine can connect); start the database daemon; initialize the database(s) (PostgreSQL can serve multiple databases within the same daemon setup); and finally connect to the database from your program. And don't forget to regularly run the maintenance tasks and to shut down the database in an orderly way.

How do I run one version of a web app while developing the next version?

I just finished a Django app that I want to get some outside user feedback on. I'd like to launch one version and then fork a private version so I can incorporate feedback and add more features. I'm planning to do lots of small iterations of this process. I'm new to web development; how do websites typically do this? Is it simply a matter of copying my Django project folder to another directory, launching the server there, and continuing my dev work in the original directory? Or would I want to use a version control system instead? My intuition is that it's the latter, but if so, it seems like a huge topic with many uses (e.g. collaboration, which doesn't apply here) and I don't really know where to start.
1) Separate URLs: www.yoursite.com vs. test.yoursite.com. You can also do www.yoursite.com and www.yoursite.com/development, etc. You could also create a /beta or /staging.
2) Keep separate databases: one for production and one for development. Write a script that will copy your live database into a dev database. Keep one database for each type of site you create (you may want to create a beta or staging database for your testers). Do your own work in the dev database. If you change the database structure, save the changes as a .sql file that can be loaded and run on the live site's database when you turn those changes live.
3) Merge features into your different sites with version control. I am currently playing with a subversion setup for web apps that has my stable (trunk), one for staging, and one for development. Development tags + branches get merged into staging, and then staging tags/branches get merged into stable. Version control will let you manage your source code in any way you want. You will have to find a methodology that works for you and use it.
4) Consider build automation. It will publish your site for you automatically. Take a look at http://ant.apache.org/. It can drive a lot of the work of automatically checking out your code and uploading it to each specific site as you might need.
5) Toy of the month: there is a utility called cURL that you may find valuable. It does a lot from the command line. It might be a good fit if you don't want to use all or any of Ant.
Good luck!
You would typically use version control, and have two domains: your-site.com and test.your-site.com. Then your-site.com would always update to trunk which is the current latest, shipping version. You would do your development in a branch of trunk and test.your-site.com would update to that. Then you periodically merge changes from your development branch to trunk.
Jas Panesar has the best answer if you are asking this from a development standpoint, certainly; that is, if you're just asking how to easily keep your new development separate from the site that is already running. However, if your question was actually about how to run both versions simultaneously, then here's my two cents.
Your setup has a lot to do with this, but I always recommend running process-based web servers in the first place. That is, not to use threaded servers (less relevant to this question) and not embedding in the web server (that is, not using mod_python, which is the relevant part here). So, you have one or more processes getting HTTP requests from your web server (Apache, Nginx, Lighttpd, etc.). Now, when you want to try something out live, without affecting your normal running site, you can bring up a process serving requests that never gets the regular requests proxied to it like the others do. That is, normal users don't see it.
You can set up a subdomain that points to this one, and you can install middleware that redirects "special" users to the beta version. This allows you to roll out new features to some users but not others.
Now, the biggest issues come with database changes. Schema migration is a big deal and something most of us never pay attention to. I think that running side-by-side is great, because it forces you to do schema migrations correctly. That is, you can't just shut everything down and run lengthy schema changes before bringing it back up. You'd never see any remotely important site doing that.
The key is those small steps. You need to always have two versions of your code able to access the same database, so changes you make for the new code need to not break the old code. This breaks down into a few steps you can always make:
You can add a column with a default value, or that is optional. The new code can use it, and the old code can ignore it.
You can update the live version with code that knows to use a new column, at which point you can make it required.
You can make the new version ignore a column, and when it becomes the main version, you can delete that column.
You can make these small steps to migrate between any schemas. You can iteratively add a new column that replaces an old one, roll out the new code, and remove the old column, all without interrupting service.
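The additive first step above can be demonstrated with an in-memory SQLite database standing in for the real one (the table and column names are invented); old INSERTs keep working because the new column has a default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO article (title) VALUES ('existing row')")

# Step 1: add an optional column with a default. Existing rows and old
# code are unaffected because nothing is required to supply it.
conn.execute("ALTER TABLE article ADD COLUMN tags TEXT DEFAULT ''")

# Old code path, unaware of 'tags', still works:
conn.execute("INSERT INTO article (title) VALUES ('old code')")

# New code path can start using it:
conn.execute("INSERT INTO article (title, tags) VALUES ('new code', 'django')")

rows = list(conn.execute("SELECT title, tags FROM article ORDER BY id"))
print(rows)  # → [('existing row', ''), ('old code', ''), ('new code', 'django')]
```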
That said, it's your first web app? You can probably break it. You probably have few users :-) But it is fantastic that you're even asking this question. Many "professionals" fail to ever ask it, and fewer still answer it.
What I do is export a copy of my SVN repository and put the files on the live production server, then keep a virtual machine with a development working copy and commit the changes to the repo when I'm done.