We developed a Flask web app and want to deploy it on IIS. During development, we started the app via flask run, which launches a single instance of our app. On IIS, however, we observed (via Task Manager) that our app runs multiple instances concurrently.
The problem is that our app is not designed to run in parallel. For example, our app reads a file from the file system and keeps it in memory for efficiency. This optimization is correct only if it is guaranteed that no other process changes the content of the file.
Is there a way to prevent IIS from starting multiple instances?
In IIS, go to the FastCGI Settings, where you can see all the applications used by websites on your server. In the "Max Instances" column, the script you are talking about is probably set to 0 (which lets IIS choose) or some value greater than 1, meaning it can be started multiple times. Limiting this to 1 will solve your problem.
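The same limit can also be set directly in applicationHost.config; a minimal sketch, where the interpreter and script paths are assumptions:

<fastCgi>
    <application fullPath="C:\Python39\python.exe"
                 arguments="C:\app\wfastcgi.py"
                 maxInstances="1" />
</fastCgi>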
You could use the code below to run only one instance of a program:
from tendo import singleton
me = singleton.SingleInstance() # will sys.exit(-1) if other instance is running
The command to install it:
pip install tendo
Reference links:
Check to see if python script is running
How to make only one instance of the same script running at any time
https://github.com/pycontribs/tendo/blob/master/tendo/singleton.py
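For the curious, SingleInstance works roughly by holding an exclusive lock on a per-user lock file; a minimal sketch of the same technique, assuming a POSIX system (tendo itself also handles Windows):

import fcntl
import os
import sys
import tempfile

# Try to take an exclusive, non-blocking lock on a well-known file.
# A second process attempting the same lock fails immediately.
lock_path = os.path.join(tempfile.gettempdir(), 'myapp.lock')  # name is an assumption
lock_file = open(lock_path, 'w')
try:
    fcntl.lockf(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    sys.exit(-1)  # another instance already holds the lock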
I have looked at this question, but I am not sure whether I understood it correctly.
I opened PyCharm and ran one Python script, and it is running (it does topic modeling).
I also have another Python script, which I opened in another PyCharm window on the same server, and I ran it too.
Now these two programs are running on the same server. I should mention that I have not changed any configuration, neither on the server nor in PyCharm.
Is it okay this way? Or will one script technically not run (I mean it shows as running but makes no actual progress) until the other script finishes?
Edit Configurations -> Allow parallel run. Done
First, PyCharm will create independent processes on the server, so both scripts will run. You can check it with something like htop - search for processes and verify that they're running.
Second, you don't have to open second PyCharm window to run the second script. You can run both of them from the single one. There are at least two ways: with run configurations or by spawning multiple terminal windows and running scripts from there.
From the Run/Debug Configurations window you can add a Compound configuration that contains multiple configurations that will run in parallel. The Allow parallel run option for the child configurations makes no difference in this case.
The default behaviour changed starting with version 2018.3. You can allow multiple runs by selecting Allow parallel run in the Edit Configurations menu.
Recently I started a Django server on Azure App Service, and now I want to use ChromeDriver for web scraping. I have noticed that this requires installing some additional Linux packages (not Python ones) on the machine. The problem is that they get erased on every deployment. Does that mean I should switch to Docker?
A container works, but you can also try pulling down the additional packages in a custom startup file, without messing with the machine after each deployment.
https://learn.microsoft.com/en-us/azure/developer/python/tutorial-deploy-app-service-on-linux-04
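A minimal sketch of such a startup script, assuming a Debian-based App Service image and that chromium-driver is the package you need (both are assumptions):

#!/bin/sh
# startup.sh - reinstall the extra OS packages on every container start,
# then launch the app as usual
apt-get update && apt-get install -y chromium-driver
gunicorn --bind=0.0.0.0 myproject.wsgi  # replace with your app's entry point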
I am building an app using Django on an EC2 Ubuntu instance, and I have associated an Elastic IP with the instance.
I have done the following steps:
1. Created an Ubuntu instance in the EC2 free tier.
2. Installed Python.
3. Installed pip.
4. Installed Django.
5. Created a Django project using django-admin startproject.
6. Ran the server using this command: python manage.py runserver 0.0.0.0:80
7. Created an Elastic IP and associated it with the instance.
8. Configured the security group's inbound rules to allow HTTP on port 80 from 0.0.0.0/0.
9. Verified that I can reach my project from any browser.
But the problem is that when I close the PuTTY session in which I issued the runserver command, the Django project also stops; I did not stop it manually.
Please help me keep it running after the PuTTY session is closed as well.
Thanks,
Kripa Sharma
Take a look at this answer.
I highly recommend that you start using Elastic Beanstalk (Python instance) to take care of all these steps for you. Very simple to set up, and no need to worry about any of the steps you listed.
You can use these instructions to see how to deploy a Django app in less than 5 minutes.
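A minimal sketch of that workflow with the EB CLI (the application and environment names, and the platform version, are assumptions):

$ pip install awsebcli
$ eb init -p python-3.8 my-django-app   # create the Beanstalk application
$ eb create my-django-env               # provision the environment and deploy
$ eb deploy                             # push subsequent updates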
The problem
You are trying to persist the debug server for a remotely deployed application.
You probably need to review the runserver command documentation. Here are the relevant parts:
django-admin runserver [addrport]
Starts a lightweight development Web server on the local machine. By default, the server runs on port 8000 on the IP address 127.0.0.1. You can pass in an IP address and port number explicitly.
...
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
A webserver
Having skimmed the above docs, you may want to look at "How to deploy with WSGI" section, which gives a few recommendations for commonly used Web servers. My favorite, Gunicorn, includes a usage example:
$ pip install gunicorn
$ gunicorn myproject.wsgi
Having decided on and installed a web server, you'd need to "daemonize" it and expose it to the world.
The former is usually done by creating a service on your OS; for Ubuntu it would be either Upstart or systemd, depending on the version. The Gunicorn docs have examples for both.
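For instance, a minimal systemd unit sketch (the user, paths, and names are assumptions):

# /etc/systemd/system/myproject.service
[Unit]
Description=gunicorn daemon for myproject
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myproject
ExecStart=/home/ubuntu/myproject/venv/bin/gunicorn --bind 127.0.0.1:8000 myproject.wsgi

[Install]
WantedBy=multi-user.target

Enable and start it with sudo systemctl enable --now myproject.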
The latter is usually achieved with an HTTP server/proxy such as nginx or Apache httpd. And again, Gunicorn has an example for us.
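A minimal nginx sketch that forwards traffic to the Gunicorn process above (the server name is an assumption):

# /etc/nginx/sites-available/myproject
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}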
You can see why I like it so much ☺️
Epilogue
While it is technically possible to run the debug server as a service or even in a terminal multiplexer such as GNU screen or tmux, that is not a recommended or stable long-term solution.
That said, these tools are very useful to know about, so read up on them and learn to use them; they will be invaluable in your toolset in the future, for example to avoid accidentally terminating a long-running command (such as a migration), etc.
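For completeness, a quick tmux sketch of that workflow (the session name is an assumption):

$ tmux new -s django                     # start a named session
$ python manage.py runserver 0.0.0.0:80  # runs inside the session
(detach with Ctrl-b d; the command keeps running)
$ tmux attach -t django                  # reattach later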
I have a weird problem with Django-Kronos.
I've got it running successfully on my local machine and on our development server. However, on the production server, I can't get kronos to acknowledge my cron.py file. When I run installtasks, it runs but says "0 tasks installed". I've also tried running the tasks manually and kronos tells me the task doesn't exist.
We use git to push everything through to the server, so all the files and the structures are identical between the three locations. I've also checked and the cron.py file exists and has exactly the same content as the working servers.
The only differences between the servers are that the production server runs Postgres (the dev server uses SQLite) and Ubuntu 12.10, whereas the dev server is on 12.04.
Kronos is functioning properly, but for some reason it's not picking up our cron.py file....
Anyone got any ideas?!
Well, unfortunately, our solution was to scrap Django-Kronos altogether and create a custom management command which we're running from the crontab.
This happens when one of the imports you are trying to make is not there: your production system might be missing a Python package that is imported in your cron.py.
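A quick way to surface such an error is to import the module directly on the production box; a minimal sketch, where myapp stands in for your app's name:

$ python -c "import myapp.cron"   # a missing dependency raises ImportError here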
I have a django application that uses an SQLite database. I use git to push the changes to my EC2 instance, which runs the website on an Elastic IP. The site itself works, but when I try to log in to the admin interface I get one of two errors from django:
attempt to write a readonly database
or
unable to open database file
It seems that chmod u+rw leads to the first error and a+rw leads to the second, but I'm unsure what is happening. The test server on my local machine works as expected.
I realize that SQLite may be a bad choice for production, but the site will not have much traffic and I will be the only one using the admin interface or writing to the database. If someone has a solution for setting up MySQL or Postgres and somehow synchronizing the database contents, I would accept that too.
I found the solution after much research. I had only defined one user on my EC2 server, but Apache accesses the files as the user www-data, so both the database file and the directory containing it must be writable by that user (SQLite also needs to create a journal file next to the database):
sudo chown www-data /projectdir
sudo chown www-data /projectdir/sqlite.db
Is your application running in multiple threads?
The problem is that SQLite databases cannot be shared between multiple threads of your Django application. I am not sure whether this is a general SQLite bug, a Django bug, or just intended behavior. Maybe other users have figured out a way to deal with it; I didn't, and I use either PostgreSQL or MySQL on production servers.
If your site really is low-traffic, you may just set your web server's config to run your app single-threaded (and within a single process). This will have behavior (and limits) similar to running the Django test server locally.
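For example, with Apache and mod_wsgi, a minimal sketch (paths and names are assumptions):

# Run the app in a single daemon process with a single thread
WSGIDaemonProcess myproject processes=1 threads=1
WSGIProcessGroup myproject
WSGIScriptAlias / /projectdir/myproject/wsgi.py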