How to set maintenance mode with Django in a stateless application - django

I've hosted my app on Google Cloud Run: a simple Vue frontend connected to a Django API. The problem arises when I try to put the API into maintenance mode, protecting it from unexpected calls. For this purpose I've used the django-maintenance-mode package, but, as I said, due to the implicitly stateless character of Cloud Run, the environment variable that stores the maintenance mode value is dropped whenever there isn't any active instance, so the flag falls back to OFF.
I'd like to know of any other possible solution, or a fix that overrides some of this package's methods, to make it work in my project.
Thanks in advance!!

You can use Cloud Run's graceful shutdowns, which allow you to capture the environment variable that stores the maintenance mode value. Once the value is captured you can store it in a database (Cloud SQL) or in a file on Cloud Storage. At each startup, you read the last value back.
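A minimal sketch of that approach, assuming a Cloud Storage bucket and the google-cloud-storage client library (the bucket and object names below are placeholders):

import os
import signal

from google.cloud import storage

BUCKET = "my-app-state"          # placeholder: a bucket you control
BLOB = "maintenance_mode.txt"    # placeholder: object holding the flag

def load_maintenance_mode() -> bool:
    # Read the last persisted value at startup (e.g. from settings.py).
    blob = storage.Client().bucket(BUCKET).blob(BLOB)
    return blob.exists() and blob.download_as_text().strip() == "1"

def save_maintenance_mode(value: bool) -> None:
    # Persist the current value, e.g. whenever it is toggled.
    storage.Client().bucket(BUCKET).blob(BLOB).upload_from_string("1" if value else "0")

def _on_sigterm(signum, frame):
    # Cloud Run sends SIGTERM before stopping an instance (graceful shutdown),
    # which is the last chance to flush any in-memory state.
    save_maintenance_mode(os.environ.get("MAINTENANCE_MODE", "0") == "1")

signal.signal(signal.SIGTERM, _on_sigterm)

Persisting on every toggle (not only at shutdown) is the safer design, since an instance can also disappear without a graceful shutdown window; and if your version of django-maintenance-mode exposes a pluggable state backend (check its documentation), wiring these two functions into it would let the flag survive instance churn without relying on an environment variable at all.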

Related

Why doesn't the environment variable update in Postman Flows?

When I call an API normally in Postman, running a test script and setting an environment value, it works; but when I use that API in a Postman flow, the environment doesn't change.
Script in my test:
pm.environment.set('email', body.email)
It looks like you are looking for this issue from the discussions section of the Postman Flows repository:
https://github.com/postmanlabs/postman-flows/discussions/142. Here are some key points from it:
I want to begin by saying that nothing is wrong with environments or variables. They just work differently in Flows from how they used to work in the Collection Runner or the Request Tab.
Variables are not first-class citizens in Flows.
It was a difficult decision to break the existing pattern, but we firmly believe this is a necessary change as it would simplify problems for both us and users.
Environment works in a read-only mode, updates to the environment from scripts are not respected.
Also in this post they suggest:
We encourage using the connection to pipe data from one block to another, rather than using Globals/Environments, etc.
According to this post:
We do not support updating globals and environments using Flows.

env variable GOOGLE_APPLICATION_CREDENTIALS lasts only one day on Google Cloud

In Google Cloud Shell, which is a part of Google Cloud, I set the environment variable GOOGLE_APPLICATION_CREDENTIALS because it is needed for a PHP NLP project [info: https://cloud.google.com/natural-language/docs/quickstart-client-libraries#client-libraries-install-php]. My project worked fine, but I noticed that the variable GOOGLE_APPLICATION_CREDENTIALS lasts on my system only one day. This is the third time that I am setting it. My project doesn't work when the required variable is missing. Am I doing something wrong?
EDIT:
It is default OS (Debian) when you create new App on Google App engine.
When I type help in Cloud Shell, I get info that includes:
Your 5GB home directory will persist across sessions, but the VM is ephemeral and will be reset
approximately 20 minutes after your session ends. No system-wide change will persist beyond that.
You are completely right: Cloud Shell runs on an ephemeral instance that resets some minutes after the session has ended, which is why you are losing the content of the environment variable you mentioned.
The documentation about limitations in Cloud Shell clearly states that it is intended for interactive use only, and any non-interactive session or intensive usage can be automatically terminated with (or without) a warning.
Therefore, and understanding from your question that you have a background script that is working with Cloud Natural Language, I would strongly advise you to move to a "real" instance of Compute Engine, in which you will have much more control over what is happening. This will allow more flexibility and you will be able to use a bigger machine type, given that Cloud Shell runs on a g1-small GCE instance which, in general, is not enough to run an application. Also, depending on your use case, you may even consider App Engine.
That being said, I have found that when constructing the LanguageClient instance, you can also skip Application Default Credentials and, instead, use the keyFile or keyFilePath variables (explained in the PHP Client Library reference) to pass the path to the JSON key directly to your code, instead of reading it from the environment variable.
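The answer above is about the PHP client, but the same idea in Python, shown here for comparison, is to build the client from the key file directly so that no environment variable has to survive the Cloud Shell reset (this sketch assumes the google-cloud-language package; the key path is a placeholder):

from google.cloud import language_v1

# Bypass GOOGLE_APPLICATION_CREDENTIALS by pointing the client at the
# JSON key file directly ("/path/to/key.json" is a placeholder path).
client = language_v1.LanguageServiceClient.from_service_account_file(
    "/path/to/key.json"
)
# From here the client is used exactly as it would be with Application
# Default Credentials.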
Let's assume you are using Linux; make sure that:
The system is not being restarted, and if it is, make sure to set the environment variables accordingly (see how to set permanent environment variables)

Setup code for loopback under boot folder

I'm looking at the access control example: https://github.com/strongloop/loopback-example-access-control
It says we need to put the sample-models.js file under the server/boot folder. That means every time I run the application, the creation process will run again and again. Of course, I'm getting errors on the second run.
Should I add my own mechanism to disable it once it has run, or is there functionality for this in LoopBack?
Boot scripts are for setting up the application, and they run once per application start.
So if you want to initialize the database, or do any initialization whose effects are persisted, in a boot script, you need to check whether it has already been done.
For example, to initialize roles in the DB, you need to check whether the desired roles already exist, and create them only if they don't.
There is no other functionality in LoopBack for this.

Use flask admin to set config parameters

As the title says, I have a small web app that doesn't use a database or models.
I'd like an interface to change some of Flask's own config parameters, and thought that Flask-Admin might get me there quickly. Is this easily possible?
You can't generally change configuration after starting the application without restarting the server.
The application (at least in production) will be served with multiple processes, possibly even on multiple servers. Changes to the configuration will only affect the process that handled the request, until the other processes are reaped and restart. Even then, they may fork from a time after the configuration was read.
Extensions are not consistent about how they read configuration. Some read the configuration from current_app every request. Some only read it during init_app and store their own copy, so changing the configuration wouldn't change their copy.
Even if the configuration is read each time, some configuration just can't be changed, or requires other steps as well. For example, if you change databases, you should probably make sure you also close all connections to the old database, which the config knows nothing about. Another example, you could change debug mode but it won't do anything, because most of the logging is set up ahead of time.
The web app might not be the only thing relying on the configuration, so even if you could restart it automatically when configuration changed, you'd also need to restart dependent services such as Celery. And those services also might be on completely different machines or as different users.
Configuration is typically stored in Python files, so you'd need to create a serializer that can dump valid Python code, or write a config loader for a different format.
Flask-Admin might be able to be used to create a user interface for editing the configuration, but it wouldn't otherwise help with any of these issues.
It's not really worth it to try and change Flask.config after starting the application. It's just not designed for that. Design a config system specifically for the config you need if that's something you need, but don't expect to be able to generally change Flask.config.
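If what you actually need is just a handful of values that operators can tweak at runtime, one pattern is to keep them outside Flask.config entirely, for example in a small JSON file or database table that every worker re-reads on access. The sketch below uses hypothetical names and a JSON file; a database table works the same way and behaves better across multiple servers:

import json
import threading
from pathlib import Path

_SETTINGS_FILE = Path("runtime_settings.json")  # hypothetical location
_lock = threading.Lock()  # only serializes writers within one process

def get_setting(name, default=None):
    # Re-read the file on every access so all workers see updates.
    if not _SETTINGS_FILE.exists():
        return default
    return json.loads(_SETTINGS_FILE.read_text()).get(name, default)

def set_setting(name, value):
    # Write-through update; the next request in any worker sees it.
    with _lock:
        data = json.loads(_SETTINGS_FILE.read_text()) if _SETTINGS_FILE.exists() else {}
        data[name] = value
        _SETTINGS_FILE.write_text(json.dumps(data))

A Flask-Admin view could call set_setting, but note that this only helps for values your own code reads per request; it does nothing for settings that Flask or its extensions read once at startup.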

Deploying Django as standalone internal app?

I'm developing a tool using Django for internal use at my organization. It's used to search and tag documents (using Haystack and Solr), and will be employed on different projects. My team currently has a working prototype and we want to deploy it 'in the wild.'
Our security environment is strict. Project documents are located on subfolders on a network drive, and access to these folders is restricted based on users' Windows credentials (we also have an MS SQL server that uses the same credentials). A user can only access the projects they are involved in. Since we're an exclusively Microsoft shop, if we want to deploy our app on the company intranet, we'll need to use an IIS server to deal with these permissions. No one on the team has the requisite knowledge to work with IIS, Active Directory, and our IT department is already over-extended. In short, we're not web developers and we don't have immediate access to anybody experienced.
My hacky solution is to forgo IIS entirely and have each end user run a lightweight server locally (namely, CherryPy) while each retaining access to a common project-specific database (e.g. a SQLite DB living on the network drive or a DB on the MS SQL server). In order to use the tool, they would just launch an all-in-one batch script and point their browser to 127.0.0.1:8000. I recognize how ugly this is, but I feel like it leverages the security measures already in place (note that we never expect more than 10 simultaneous users on a given project). Is this a terrible idea, and if so, what's a better solution?
I've dealt with a similar situation (primary development was geared toward a normal deployment situation, but some users have a requirement to use the application on a standalone workstation). Rather than deploy web and db servers on a standalone workstation, I just run the app with the Django internal development server and a SQLite DB. I didn't use CherryPy, but hopefully this is somewhat useful to you.
My current solution produces a nice executable for users not familiar with the command line (who also have trouble remembering the URL to put in their browser) while also keeping development relatively easy:
Use PyInstaller to package up the Django app into a single executable. Once you figure this out, don't continue to do it by hand; add it to your continuous integration system (or at least write a script).
Modify the manage.py (sketched below) to:
Detect if the app is frozen by PyInstaller and there are no arguments (i.e.: user executed it by double clicking it) and if so, then run execute_from_command_line(..) with arguments to start the Django development server.
Right before running the execute_from_command_line(..), pop off a thread that does a time.sleep(2) (to let the development server come up fully) and then webbrowser.open_new("http://127.0.0.1:8000").
Modify the app's settings.py to detect if frozen and change things around such as the path to the DB server, enabling the development server, etc.
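A minimal sketch of the manage.py changes described above, assuming PyInstaller (which sets sys.frozen on the bundled executable) and the default 127.0.0.1:8000 address; the project name is a placeholder:

import os
import sys
import threading
import time
import webbrowser

from django.core.management import execute_from_command_line

def _open_browser():
    time.sleep(2)  # give the development server a moment to come up
    webbrowser.open_new("http://127.0.0.1:8000")

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # placeholder
    if getattr(sys, "frozen", False) and len(sys.argv) == 1:
        # Frozen and launched by double click: open the browser, then run the
        # development server (--noreload, since the reloader re-executes the
        # process, which breaks frozen apps).
        threading.Thread(target=_open_browser, daemon=True).start()
        execute_from_command_line([sys.argv[0], "runserver", "--noreload", "127.0.0.1:8000"])
    else:
        execute_from_command_line(sys.argv)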
A couple additional notes.
If you go with SQLite, Windows file locking on network shares may not be adequate if you have concurrent writes to the DB; concurrent readers should be fine. Additionally, since you'll have different DB files for different projects, you'll have to figure out a way for the user to indicate which file to use. Maybe prompt in the app, or build the same app multiple times with different settings.py files. There is a variety of ways to hit this nail; one option for the settings.py side is sketched after these notes.
If you go with MSSQL (or any client/server DB), the app will have to know the DB credentials (which means they could be extracted by a knowledgeable user). This presents a security risk that may not be acceptable. Basically, don't try to have the only layer of security within the app that the user is executing. The DB credentials used by the app that a user is executing should only have the access that the user is allowed.
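For the "which project DB file" question in the first note, one lightweight option (sketched with hypothetical names) is to let the launch script or the user choose the SQLite file through an environment variable before settings load:

import os

# settings.py excerpt (sketch): PROJECT_DB_PATH is a hypothetical variable set
# by the launcher; the fallback UNC path below is a placeholder.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": os.environ.get(
            "PROJECT_DB_PATH",
            r"\\fileserver\projects\default\db.sqlite3",
        ),
    }
}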