I'm wondering how to handle database migrations in Django while the site is in production. During development we stop the server, make changes to the database, then rerun the server. This may be a stupid question, but I am learning by myself and can't figure it out. Thanks in advance.
You can connect to the server over SSH and run the migration commands without stopping the server; once you are done, you restart the server.
python manage.py makemigrations
and then
python manage.py migrate
and then restart the server.
For example, with Nginx and Gunicorn:
sudo service gunicorn restart
sudo service nginx restart
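Putting those steps together, a small helper script can make the over-SSH procedure repeatable. This is only a sketch: the project path and service names are assumptions, and DRY_RUN defaults to printing each command so you can inspect the plan first; set DRY_RUN=0 to actually execute it.

```shell
#!/bin/sh
# Sketch of a deploy helper; paths and service names are assumptions.
# With DRY_RUN=1 (the default here) each command is only printed.
set -e

PROJECT_DIR="${PROJECT_DIR:-/srv/mysite}"  # assumed project location
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run cd "$PROJECT_DIR"
run python manage.py makemigrations
run python manage.py migrate
run sudo service gunicorn restart
run sudo service nginx restart
```

Note that many teams run makemigrations in development and commit the generated files, so that only migrate ever runs on the server; the script mirrors the answer above as written.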
I have a Django project on an Ubuntu server that works correctly with Nginx and Gunicorn.
I changed some Django code, pushed it to the git server, and then pulled from the git server onto the Ubuntu server. The new changes don't seem to be applied on the server. How do I fix this?
I restarted and reloaded Gunicorn and Nginx, but nothing happened.
I found the answer: we can restart Gunicorn with these commands:
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
I have a Django webapp. The app has some scheduled tasks, for which I'm using django-q. In local development you need to run manage.py qcluster to be able to run the scheduled tasks.
How can I automatically run the qcluster process in production?
I'm deploying to a Digital Ocean droplet, using ubuntu, nginx and gunicorn.
Are you using a Procfile?
My configuration is to have a Procfile that contains:
web: python ./manage.py runserver 0.0.0.0:$PORT
worker: python ./manage.py qcluster
This way, every time the web process is started, another process for django-q is also created.
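On a plain DigitalOcean droplet there is usually no Procfile runner, so the same effect can be achieved with a systemd unit that keeps qcluster running alongside Gunicorn. This is a sketch; the user, project path, and virtualenv location are assumptions you would adapt:

```ini
; /etc/systemd/system/qcluster.service (sketch; paths and user are assumptions)
[Unit]
Description=django-q qcluster
After=network.target

[Service]
User=www-data
WorkingDirectory=/home/user/myproject
ExecStart=/home/user/myproject/venv/bin/python manage.py qcluster
Restart=always

[Install]
WantedBy=multi-user.target
```

Then enable and start it with `sudo systemctl enable --now qcluster`.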
First of all: please don't blame me for making newbie mistakes. I am still learning and just need a bit of help.
So I created a droplet on DigitalOcean with Ubuntu 16.04, logged in, and ran:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python-setuptools python-pip apache2 libapache2-mod-wsgi
and installed Django:
sudo pip install Django==1.11.4
and created the project (without virtualenv) in a directory:
django-admin.py startproject mysite
However, if I run "runserver"
python manage.py runserver
and type xxx.xxx.xxx.xx:8000 in Firefox, nothing loads, even though the terminal shows that runserver is working. If I type xxx.xxx.xxx.xx, I see the default Apache page. My goal is to run Django behind Apache, but I can't even get started, because runserver, which is just for testing, doesn't work. How can I make this work? Where did I make a mistake?
Edit: the output of the runserver is:
Performing system checks...
System check identified no issues (0 silenced).
September 02, 2017 - 22:21:12
Django version 1.11.4, using settings 'mysite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
manage.py runserver is NOT meant for serving your pages through Apache; it is ONLY a lightweight local development server for running your code as you write it.
Instead, you should follow a guide on running Django on Apache using mod_wsgi.
The only piece of advice that might help in this situation: if you're trying to run runserver inside a virtual machine or Docker container, you need to explicitly set the host binding:
manage.py runserver 0.0.0.0:8000
Note that the default IP address, 127.0.0.1, is not accessible from other machines on your network. To make your development server viewable to other machines on the network, use its own IP address (e.g. 192.168.2.1) or 0.0.0.0 or :: (with IPv6 enabled).
Again, runserver is for local development ONLY. Do NOT use it to try to host a site through Apache; that's what adapters like mod_wsgi are for.
runserver and Apache are two different things.
The local runserver is used by a developer to code and run the web application locally. The settings loaded locally are usually different: for example, the production site uses settings.py, while locally you use a settings_local.py that overrides the production settings (a very simplified example).
If you run runserver and load http://127.0.0.1:8000/, you should see something, and your terminal should show runserver responding to the requests from your browser.
If your goal is to deploy your web application, then you need something like Apache, but not locally. IMHO, I suggest you avoid Apache and use Nginx + Gunicorn + supervisord to manage your webapp on the production server. It is much, much easier to configure.
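For reference, a minimal supervisord program entry for Gunicorn in that setup might look like this. It is only a sketch; the paths, user, and WSGI module name are assumptions:

```ini
; /etc/supervisor/conf.d/mysite.conf (sketch; adjust paths and names)
[program:mysite]
command=/home/user/mysite/venv/bin/gunicorn mysite.wsgi:application --bind 127.0.0.1:8000
directory=/home/user/mysite
user=www-data
autostart=true
autorestart=true
```

Nginx then proxies to 127.0.0.1:8000, and supervisord restarts Gunicorn if it crashes.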
I'm working on a simple implementation of Django hosted on Google's Managed VM service, backed by Google Cloud SQL. I'm able to deploy my application just fine, but when I try to issue some Django manage.py commands within the Dockerfile, I get errors.
Here's my Dockerfile:
FROM gcr.io/google_appengine/python
RUN virtualenv /venv -p python3.4
ENV VIRTUAL_ENV /venv
ENV PATH /venv/bin:$PATH
# Install dependencies.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add application code.
ADD . /app
# Overwrite the settings file with the PROD variant.
ADD my_app/settings_prod.py /app/my_app/settings.py
WORKDIR /app
RUN python manage.py migrate --noinput
# Use Gunicorn to serve the application.
CMD gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
# [END docker]
Pretty basic. If I exclude the RUN python manage.py migrate --noinput line and deploy using the gcloud tool, everything works fine. If I then log onto the VM, I can issue the manage.py migrate command without issue.
However, in the interest of simplifying deployment, I'd really like to be able to issue Django manage.py commands from the Dockerfile. At present, I get the following error if the manage.py statement is included:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/cloudsql/my_app:us-central1:my_app_prod_00' (2)")
Seems like a simple enough error, but it has me stumped, because the connection is certainly valid. As I said, if I deploy without issuing the manage.py command, everything works fine. Django can connect to the database, and I can issue the command manually on the VM.
I'm wondering if the reason for my problem is that the SQL proxy (/cloudsql/) doesn't exist when the Dockerfile is being processed. If so, how do I get around this?
I'm new to Docker (this being my first attempt) and newish to Django, so I'm unsure of what the correct approach is for handling a deployment of this nature. Should I instead be positioning this command elsewhere?
There are two steps involved in deploying the application.
In the first step, the Dockerfile is used to build the image, which can happen on your machine or on another machine.
In the second step, the created docker image is executed on the Managed VM.
The RUN instruction is executed when the image is being built, not when it's being run.
You should move the manage.py call into the CMD instruction, which runs when the container starts:
CMD python manage.py migrate --noinput && gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
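If that CMD line grows unwieldy, an equivalent pattern is a small entrypoint script. This is a sketch; the script name is an assumption, and its body would hold the same two commands:

```dockerfile
# Sketch: run migrations at container start, not at build time.
# docker-entrypoint.sh would contain:
#   python manage.py migrate --noinput
#   exec gunicorn --pythonpath ./my_app -b :$PORT \
#       --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
ADD docker-entrypoint.sh /app/docker-entrypoint.sh
ENTRYPOINT ["sh", "/app/docker-entrypoint.sh"]
```

Either way, the migration now runs on the Managed VM, where the Cloud SQL socket actually exists, rather than on the build machine.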
I deployed my Django app, and when I tried
heroku run python manage.py syncdb
I got a "timeout awaiting process" error. No superuser has been created for the system yet, even though syncdb did run. I tried:
heroku run:detached python manage.py createsuperuser
But this does not prompt me for the superuser.
Port 5000 is not blocked on my system. How do I make heroku run work, or how do I create the superuser?
Do not detach the Heroku shell:
heroku run python manage.py createsuperuser
worked for me
After wasting an entire day, I found the answer:
heroku run python manage.py syncdb
doesn't run on an institute or office LAN. To make it work you must also remove the system proxy:
unset http_proxy
unset https_proxy
I hope this helps.
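To double-check that no proxy variables are still set in the current shell before retrying heroku run, here is a quick sketch that unsets the common variants and verifies none remain:

```shell
# Unset the common proxy variables, then verify none are still set.
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY

leftover=""
for v in http_proxy https_proxy HTTP_PROXY HTTPS_PROXY; do
    eval "val=\${$v:-}"          # read the variable named in $v
    if [ -n "$val" ]; then
        leftover="$leftover $v"
    fi
done

if [ -z "$leftover" ]; then
    echo "no proxy variables set"
else
    echo "still set:$leftover"
fi
```

Remember that unset only affects the current shell; if the proxy is exported from a profile script, new shells will get it back.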
Try
heroku run python manage.py shell
And then create your superuser from there. Good luck!
More info, please.
What DB are you using?
Have you set your local_settings.py?
Are you using Debian?
I use Postgres on Debian, so I had to both apt-get install python-psycopg2 (otherwise you can't use Postgres) and pip install --user psycopg2 (otherwise pip freeze misses it), then manually create a user and a database. Replace USER with the username from your local_settings.py:
sudo su - postgres
createuser USER -dPRs
createdb --owner USER db.db
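As a quick sanity check on the names just created, the pieces can be assembled into the connection URL a settings file (or dj-database-url) would use. The host and port here are assumed defaults for a local Postgres; USER and db.db mirror the commands above:

```shell
# Build the connection URL from the user/db created above.
# USER and db.db come from the answer; host/port are assumed defaults.
DB_USER="USER"
DB_NAME="db.db"
DB_HOST="localhost"
DB_PORT="5432"

DATABASE_URL="postgres://${DB_USER}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"
# → postgres://USER@localhost:5432/db.db
```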