I use Elastic Beanstalk on AWS to host my web app, which needs a task runner like Django Q. I need to run it on my instance and am having difficulty doing that. I found this script https://gist.github.com/codeSamuraii/0e11ce6d585b3290b15a9ad163b9aa06 which does what I need, but it is written for the older EC2 instance platform. So far I know I must start Django Q post-deployment, but is it possible to add the process to the Procfile alongside starting the WSGI server?
Any help that could point me in the right direction will be greatly appreciated.
You can create a "Procfile" at the root of your bundle with the following content:
web: gunicorn --bind 127.0.0.1:8000 --workers=1 --threads=15 mysite.config.wsgi:application
qcluster: python3 manage.py qcluster
Obviously, replace "mysite.config.wsgi" with the path to your WSGI module.
I ended up not finding a solution and chose a different approach altogether to fulfill the requirements: a crontab making curl requests to the Django server. In the Django admin I would create task routes linked to modules in file storage, then paste the route info into the crontab and set the appropriate time interval.
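As a rough sketch of that workaround (the URL, endpoint path, and schedule below are illustrative assumptions, not the actual setup):

# hypothetical crontab entry: call a Django task endpoint every 15 minutes
*/15 * * * * curl -s https://myapp.example.com/tasks/run-report/ >/dev/null 2>&1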
I have a django repository setup in gitlab and I am trying to automate build and deploy on google cloud using gitlab CI/CD.
The app has to be deployed on App Engine and has to use CloudSQL for dynamic data storage.
The issue I am facing is with executing the migration on the database before deploying my application.
I am supposed to run ./manage.py migrate which connects to cloudSQL.
I have read that we can use the Cloud SQL proxy to connect to Cloud SQL and migrate the db, but it kind of seems like a hack. Is there a way to migrate my db via the CI/CD pipeline script?
Any help is appreciated. Thanks.
When running Django in the App Engine Standard environment the recommended way of approaching database migration is to run ./manage.py migrate directly from the Console shell or from your local machine (which requires using the cloud sql proxy).
If you want the database migration to be decoupled from your application deployment and run it in GitLab CI/CD, you could do something along these lines (a .gitlab-ci.yml sketch follows these steps):
Use as base image google/cloud-sdk:latest
Acquire credentials gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE
Download the cloudsqlproxy with wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy and make it executable chmod +x cloud_sql_proxy.
Start the proxy ./cloud_sql_proxy -instances="[YOUR_INSTANCE_CONNECTION_NAME]"=tcp:3306.
Finally, run the migration script.
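Putting those steps together, a minimal .gitlab-ci.yml sketch; the job/stage names, the $INSTANCE_CONNECTION_NAME variable, and the availability of pip in the image are assumptions:

# hedged sketch of a migration job; adjust names and variables to your project
migrate-db:
  stage: deploy
  image: google/cloud-sdk:latest
  script:
    - gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE
    - wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
    - chmod +x cloud_sql_proxy
    - ./cloud_sql_proxy -instances="$INSTANCE_CONNECTION_NAME"=tcp:3306 &
    - sleep 5                          # give the proxy a moment to open the tunnel
    - pip3 install -r requirements.txt # assumes pip3 is available in the job image
    - python3 manage.py migrate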
You might also create a custom Docker image that already does what's described above behind the scenes; the result would be the same.
If you want to read further on the matter, I suggest taking a look at the following articles: link1, link2.
I'm also trying to find the correct way to do that.
One other hacky way would be to just add a call to it in the settings file that is loaded with your app.
something like what manage.py does:
from django.core.management import execute_from_command_line
execute_from_command_line(['./manage.py', 'migrate'])
so every time you deploy a new version of the app it will also run the migration.
I want to believe there are other ways, not involving the proxy, especially if you also want to use a private IP for the SQL instance; then this script must run in the same VPC.
A few days ago I deployed a Django app on AWS Elastic Beanstalk (EB) and it worked fine. Today, after a new deploy in which I made minor changes to view.py, the Django app on EB has a big problem and is no longer accessible. Looking at the log file in AWS EB I see these errors:
Script timed out before returning headers: wsgi.py
End of script output before headers: wsgi.py, referer ...
Do you have any ideas how to solve these issues?
I would like to thank you in advance.
Two potential solutions (depending on your specific situation):
This is a similar question that addresses this specific error:
End of script output before headers: wsgi.py deploying python django to AWS EB
Add WSGIApplicationGroup %{GLOBAL} to your WSGI configuration. It directs your WSGI app (your Django app) to "run within the very first Python sub interpreter created when Python is initialised" (from https://modwsgi.readthedocs.io/en/develop/user-guides/application-issues.html#python-simplified-gil-state-api).
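On the older Apache/mod_wsgi-based EB Python platform, one common way to apply that directive is through an .ebextensions config; the file names below are assumptions:

# .ebextensions/wsgi_custom.config (hypothetical file name)
files:
  "/etc/httpd/conf.d/wsgi_custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIApplicationGroup %{GLOBAL}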
The other solution is to increase the memory of the instance you are using.
My actual requirement is to expose a python script as a web-service.
Using Flask I did that. But as Flask's built-in server is not a production-grade server, I used uWSGI to deploy it.
Most of the sites suggest deploying this with NGINX. Why, when my web service doesn't contain any static data?
I read somewhere that the queue size for uWSGI is 100. Does that mean that at any point in time it can queue up to 100 requests?
My manager suggested deploying the flask script in http.server instead of NGINX. Can I deploy like this?
Is it possible to deploy a simple "HelloWorld" Python script with http.server?
Can you please provide an example of how I can deploy a simple Python script with http.server?
If I want to deploy more such "HelloWorld" Python scripts, how can I do that?
Also, can you point me to some links comparing http.server and uWSGI?
Thanks, Vijay.
Most of the sites suggest deploying this with NGINX. Why, when my web service doesn't contain any static data?
You can configure nginx as a reverse proxy, between the Internet and your WSGI server, even if you don't need to serve static files from nginx. This is the recommended way to deploy.
My manager suggested deploying the flask script in http.server instead of NGINX. Can I deploy like this?
http.server is a simple server which is built into Python and comes with the same warning as Flask's development server: do not run in production.
You can't run a flask script with http.server. Flask's dev server does the same job as http.server.
Technically you could run either of these behind nginx, but this is not advised, as both http.server and Flask's dev server are low-security implementations intended for single-user connections. Even with nginx in front, requests are ultimately handled by either server, which is why you should launch the app with a WSGI server that can handle load properly.
I read somewhere that the queue size for uWSGI is 100. Does that mean that at any point in time it can queue up to 100 requests?
This doesn't really make sense. For example, gunicorn, which is one of many WSGI servers, states the following about load:
Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second.
So specifying the number of workers when you launch gunicorn with something like:
gunicorn --bind '0.0.0.0:5000' --workers 4 app:app
... will increase the capability of the WSGI server (gunicorn in this case) to process requests. However, leaving the --workers 4 part out, which defaults to 1, is probably more than sufficient for your HelloWorld script.
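For reference, a minimal sketch of the app:app module that the gunicorn command above assumes (the file name app.py is an assumption):

# app.py -- a "HelloWorld" Flask app served by the gunicorn command above
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello, World!'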
I'm trying to understand how the virtual environment gets invoked. The website I have been tasked to manage has a .venv directory. When I ssh into the site to work on it I understand I need to invoke it with source .venv/bin/activate. My question is: how does the web application invoke the virtual environment? How do I know it is using the .venv, not the global Python?
More detail: it's a Drupal website with Django kind of glommed onto it. Apache is the main server. I believe Django is served by gunicorn. The designer left town.
Okay, I've found out how, in my case, the virtualenv was being invoked for Django.
BASE_DIR/run/gunicorn script has:
#GUNICORN='/usr/bin/gunicorn'
GUNICORN=".venv/bin/gunicorn"
GUNICORN_CONF="$BASE_DIR/run/gconf.py"
.....
$GUNICORN --config $GUNICORN_CONF --daemon --pid $PIDFILE $MODULE
So this takes us into the .venv where the gunicorn script starts with:
#!/media/disk/www/aavso_apps/.venv/bin/python
Voila
Just use the absolute path when calling Python in the virtualenv.
For example, if your virtualenv is located in /var/webapps/yoursite/env,
then you call /var/webapps/yoursite/env/bin/python.
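The same idea applies to the WSGI server itself; a sketch assuming the venv path above and a hypothetical mysite.wsgi module:

/var/webapps/yoursite/env/bin/gunicorn --bind 127.0.0.1:8000 mysite.wsgi:application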
If you run just Django behind a reverse proxy, Django will use whatever Python environment belongs to the user that started the server, i.e. whichever interpreter the python command resolves to. If you're using a management tool like Gunicorn, you can specify which environment to use in its config, although Gunicorn itself requires you to activate the virtual environment (or use its absolute path) before invoking Gunicorn.
EDIT:
Since you're using Gunicorn, take a look at this, https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-apps-using-gunicorn-http-server-behind-nginx
I am building a Python + Django development environment using Docker. I defined Dockerfile files and services in docker-compose.yml for web server (nginx) and database (postgres) containers, and a container that will run our app using uwsgi. Since this is a dev environment, I am mounting the app code from the host system, so I can easily edit it in my IDE.
The question I have is where/how to run migrate command.
In case you don't know Django, the migrate command creates the database structure and later changes it as needed by the project. I have seen people run migrate as part of the compose command directive (command: python manage.py migrate && uwsgi --ini app.ini), but I do not want migrations to run on every container restart. I only want it to run once when I create the containers and never run again unless I rebuild.
Where/how would I do that?
Edit: there is now an open issue with the compose team. With any luck, one time command containers will get supported by compose. https://github.com/docker/compose/issues/1896
You cannot use RUN because, as you mentioned in the comments, your source is mounted while the container is running.
You cannot use CMD either, since you don't want it to run every time you restart the container.
I recommend using docker exec manually after running the container. I do not think there is a way to automate this inside a dockerfile or docker-compose because of the two reasons I gave above.
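For example, after the containers are up (the service name web is an assumption; use whatever your app service is called):

docker-compose exec web python manage.py migrate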
It sounds like what you need is a tool for managing project tasks. dobi is a tool designed to handle these tasks (disclaimer: I am the author of this tool).
You can see an example of how to run a migration here: https://github.com/dnephin/dobi/tree/master/examples/init-db-with-rails. The example uses rails, but it's basically the same idea as django.
You could setup a task called migrate which would run the command in a container and write the data to a volume. Then when you start your docker-compose containers, use that volume as the source for your database service.
https://github.com/docker/compose/issues/1896 has finally been resolved by the new service profiles introduced with docker-compose 1.28.0. With profiles you can mark services to be started only in specific profiles:
services:
  nginx:
    # ...
  postgres:
    # ...
  uwsgi:
    # ...
  migrations:
    profiles: ["cli-only"] # profile name chosen freely
    # ...
docker-compose up # start only your app services, no migrations
docker-compose run migrations # run migrations on-demand
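For illustration, the migrations service itself might look something like this (the build context, command, and depends_on are assumptions):

  migrations:
    profiles: ["cli-only"]
    build: .
    command: python manage.py migrate
    depends_on:
      - postgres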
docker exec -it container-name bash
Then you will be inside the container and can run any command you would normally run when developing without Docker.
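For example, to apply the Django migrations from inside that shell (this assumes manage.py is in the container's working directory):

python manage.py migrate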