I have installed https://github.com/andybak/django-backup
The app just provides a backup management command.
Here is what I do:
1. $ python manage.py backup
Everything is fine. Backup created!
2. $ python manage.py shell
from django.core.management import call_command
call_command('backup')
Everything is fine. Backup created!
3. I've created a view:
from django.core.management import call_command
from django.shortcuts import redirect

def backup(request):  # /admin/backup/
    call_command('backup')
    return redirect(request.META.get('HTTP_REFERER'))
4. $ python manage.py runserver 0.0.0.0:9999
Go to browser mywebsite.com:9999/admin/backup/
Everything is fine. Backup created!
5. But when I run my website through nginx+gunicorn+supervisord and go to mywebsite.com/admin/backup/ in the browser, the backup file is empty.
Maybe it is all about permissions? Please help.
Django 1.7
EDIT:
6. /var/env/project/bin/gunicorn core.wsgi -b 0.0.0.0:9999 --user=root --group=root and go to browser mywebsite.com:9999/admin/backup/
Everything is fine. Backup created!
/etc/supervisord.conf:
[program:project]
command=/var/env/project/bin/gunicorn core.wsgi -b 0.0.0.0:8000 --user=root --group=root
directory=/var/www/project/
environment=PATH="/var/env/project/bin/activate",DJANGO_SETTINGS_MODULE="core.settings_prod"
user=root
autostart=true
autorestart=true
redirect_stderr=true
Got it! The mistake was in supervisord.conf: environment=PATH= should be environment=PYTHONPATH=
environment=PYTHONPATH="/var/env/project/bin/activate",DJANGO_SETTINGS_MODULE="core.settings_prod"
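For anyone chasing a similar symptom, a throwaway diagnostic view that dumps the process environment can make this kind of supervisord/gunicorn mismatch visible. This is my own sketch, not part of django-backup; the view name and URL are arbitrary:

import os
import sys

from django.http import JsonResponse

def env_debug(request):  # e.g. mapped to /admin/env-debug/
    # Compare what the supervisord-managed gunicorn worker sees with a manual run.
    return JsonResponse({
        'DJANGO_SETTINGS_MODULE': os.environ.get('DJANGO_SETTINGS_MODULE'),
        'PATH': os.environ.get('PATH'),
        'PYTHONPATH': os.environ.get('PYTHONPATH'),
        'sys_path': sys.path,
        'cwd': os.getcwd(),
    })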
A minimal django/celery/redis setup is running locally, but when deployed to Heroku it gives me the following error when I run it in python:
raise ConnectionError(self._error_message(e))
kombu.exceptions.OperationalError: Error 111 connecting to localhost:6379. Connection
refused.
This is my tasks.py file in my application directory:
from celery import Celery
import os

app = Celery('tasks', broker='redis://localhost:6379/0')

app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])

@app.task
def add(x, y):
    return x + y
Requirements.txt:
django
gunicorn
django-heroku
celery
redis
celery-with-redis
django-celery
kombu
I have set the worker dyno to 1.
Funny thing is, I could have sworn it was working before; now it doesn't work for some reason.
Once you have a minimal django-celery-redis project set up locally, here is how you deploy it on Heroku:
Add to your tasks.py:
import os

app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])
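Pulling the question's tasks.py together with the change above, the full file might look like this. This is just a sketch: it assumes the project package is named hello (matching the Procfile below) and that REDIS_URL is set, either by the heroku-redis add-on or by the export step further down.

# hello/tasks.py
import os

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')

# Point both the broker and the result backend at the Redis instance from REDIS_URL.
app.conf.update(BROKER_URL=os.environ['REDIS_URL'],
                CELERY_RESULT_BACKEND=os.environ['REDIS_URL'])

@app.task
def add(x, y):
    return x + y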
Make sure your requirements.txt is like this:
django
gunicorn
django-heroku
celery
redis
Add to your Procfile: "worker: celery worker --app=hello.tasks.app"
Make sure it still runs on local
enter into terminal: "export REDIS_URL=redis://"
run "heroku local&"
run python
import hello.tasks
hello.tasks.add.delay(1,2)
Should return something like:
<AsyncResult: e1debb39-b61c-47bc-bda3-ee037d34a6c4>
"heroku apps:create minimal-django-celery-redis"
"heroku addons:create heroku-redis -a minimal-django-celery-redis"
"git add ."
"git commit -m "Demo""
"git push heroku master"
"heroku open&"
"heroku ps:scale worker=1"
"heroku run python"
import hello.tasks
hello.tasks.add.delay(1, 2)
You should see the task running in the application logs: "heroku logs -t -p worker"
This solved it for me. I forgot to import celery in project/__init__.py, like so:
from .celery import app as celery_app
__all__ = ("celery_app",)
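For completeness: that import assumes the standard project/celery.py wiring from the Celery documentation. A minimal sketch, with "project" standing in for your actual package name:

# project/celery.py -- the module project/__init__.py imports "app" from
import os

from celery import Celery

# Make sure Django settings are configured before the Celery app is created.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

app = Celery('project')

# Read CELERY_* settings from Django settings and discover tasks.py modules in installed apps.
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()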
Fairly new to Docker, I am trying to add the execution of a custom SQL script (triggers and functions) to Django's migration process, and I am starting to feel a bit lost. Overall, what I am trying to achieve follows this pretty clear tutorial. In this tutorial, migrations are achieved by the execution of an entrypoint script. In the Dockerfile:
# run entrypoint.sh
ENTRYPOINT ["/usr/src/my_app/entrypoint.sh"]
Here is the entrypoint.sh:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
# tried several with and without combinations
python manage.py flush --no-input
python manage.py makemigrations my_app
python manage.py migrate
exec "$@"
So far so good. Turning to the question of integrating the execution of custom SQL scripts in the migration process, most articles I read (this one for instance) recommend creating an empty migration to add the execution of SQL statements. Here is what I have in
my_app/migrations/0001_initial_data.py
import os

from django.db import migrations, connection


def load_data_from_sql(filename):
    file_path = os.path.join(os.path.dirname(__file__), '../sql/', filename)
    sql_statement = open(file_path).read()
    with connection.cursor() as cursor:
        cursor.execute(sql_statement)


class Migration(migrations.Migration):

    dependencies = [
        ('my_app', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_data_from_sql('my_app_base.sql'))
    ]
As stated by dependencies, this step depends on the initial one (0001_initial.py):
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    initial = True

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
    ]

    operations = [
        migrations.CreateModel(
            name='Unpayed',
            fields=[
                etc etc
[The Issue] However, even when I try to manually migrate (docker-compose exec web python manage.py makemigrations my_app), I get the following error because the db in the postgresql container is empty:
File "/usr/src/my_app/my_app/migrations/0001_initial_data.py", line 21, in Migration
migrations.RunPython(load_data_from_sql('my_app_base.sql'))
File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 82, in _execute
....
return self.cursor.execute(sql)
django.db.utils.ProgrammingError: relation "auth_user" does not exist
[What I do not understand] However, when I log into the container, remove 0001_initial_data.py and run ./entrypoint.sh, everything works like a charm and the tables are created. I can add 0001_initial_data.py manually later on, run entrypoint.sh again and have my functions. Same when I remove this file before running docker-compose up -d --build: tables are created.
I feel like I am missing some obvious and easier way around trying to integrate SQL script migrations in this canonical way. All I need is for this script to be run after the 0001_initial migration is over. How would you do it?
[edit] docker-compose.yml:
version: '3.7'

services:
  web:
    build:
      context: ./my_app
      dockerfile: Dockerfile
    command: python /usr/src/my_app/manage.py runserver 0.0.0.0:8000
    volumes:
      - ./my_app/:/usr/src/my_app/
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY='o##xO=jrd=p0^17svmYpw!22-bnm3zz*%y(7=j+p*t%ei-4pi!'
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=postgres
      - SQL_PASSWORD=N0tTh3D3favlTpAssw0rd
      - SQL_HOST=db
      - SQL_PORT=5432
    depends_on:
      - db
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  postgres_data:
django:2.2
python:3.7
I believe the issue has to do with you naming the migration file and manually setting your dependencies with the same "0001" prefix. The reason I say this is that when you do reverse migrations, you can simply reference the prefix; e.g. if you wanted to go from your 7th migration back to your 6th, the command looks like this: python manage.py migrate my_app 0006. Either way, I would try deleting the file and creating a new migration via python manage.py makemigrations my_app --empty, then moving your code into that file. This should also write the dependencies for you.
The error message, alongside the test you ran by adding the migration file afterwards, is indicative of the issue though. Somehow the initial migrations aren't running before the other one. I would also try dropping your DB, as it may have persisted some bad state: ./manage.py sqlflush
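For illustration, the regenerated empty migration could end up looking roughly like this. It is only a sketch assuming the same sql/ layout as in the question; the important detail is that RunPython expects a callable taking (apps, schema_editor), not the result of calling the function at import time:

import os

from django.db import migrations


def load_data_from_sql(apps, schema_editor):
    # Read and execute the SQL file lazily, when the migration actually runs.
    file_path = os.path.join(os.path.dirname(__file__), '../sql/', 'my_app_base.sql')
    with open(file_path) as f:
        schema_editor.execute(f.read())


class Migration(migrations.Migration):

    dependencies = [
        ('my_app', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(load_data_from_sql),
    ]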
[The easiest way I could find] I simply disentangled the Django migrations from the creation of custom functions in the DB. Migrations are run first so that the tables exist when creating the functions. Here is the entrypoint.sh:
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
python manage.py flush --no-input
python manage.py migrate
# add custom sql functions to db
cat my_app/sql/my_app_base.sql | python manage.py dbshell
python manage.py collectstatic --no-input
exec "$@"
Keep in mind that manage.py dbshell requires a postgresql-client to run. I just needed to add it in the Dockerfile:
# pull official base image
FROM python:3.7-alpine
...........
# install psycopg2
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev postgresql-client\
&& pip install psycopg2 \
&& apk del build-deps
I read quite a few posts about this but still no solution...
I have a docker-compose project with, among other, a django service that I build.
On my prod environment, it is using gunicorn + nginx. All fine, working as expected.
However on my dev environment, I am using only manage.py runserver. And here the troubles begin. Somehow, manage.py uses an old version of my settings.py that has since been deleted. In my specific case, runserver is looking for a local MySQL DB, which doesn't exist because the database lives in another container.
So, it is the same settings.py for both gunicorn and manage.py; why does it work in one and not the other?
My project structure:
mysite
|_ django_mysite/
| |_ __init__.py
| |_ settings.py
| |_ urls.py
| |_ wsgi.py
|_ myapp/
| |...
|_ static/
| |...
|_ manage.py
|_ uwsgi_params
My manage.py:
#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_mysite.settings")
    try:
        from django.core.management import execute_from_command_line
    except ImportError:
        # The above import may fail for some other reason. Ensure that the
        # issue is really that Django is missing to avoid masking other
        # exceptions on Python 2.
        try:
            import django
        except ImportError:
            raise ImportError(
                "Couldn't import Django. Are you sure it's installed and "
                "available on your PYTHONPATH environment variable? Did you "
                "forget to activate a virtual environment?"
            )
        raise
    execute_from_command_line(sys.argv)
My wsgi.py:
"""
WSGI config for django_mysite project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/
"""
import os
import sys
from django.core.wsgi import get_wsgi_application
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(BASE_DIR)
os.environ['DJANGO_SETTINGS_MODULE'] = 'django_mysite.settings'
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_mysite.settings")
application = get_wsgi_application()
My Dockerfile (in case it is useful):
FROM alpine:3.7
MAINTAINER XXX XXX
# Dependencies
RUN rm -rf /var/cache/apk/* \
&& rm -rf /tmp/* \
&& apk update \
&& apk --no-cache add python py-pip build-base gettext libxslt-dev jpeg-dev \
&& mkdir -p /data/web
WORKDIR /data/web
# Django mysite requirements
COPY settings /data/web/
RUN apk --no-cache add python-dev mysql-client mysql-dev \
&& pip install --no-cache-dir -r requirements.txt
&& apk del -r python-dev mysql
# Pull mysite code
# Change settings.py and requirements.txt
RUN apk --no-cache add git \
&& git clone -b xx_xxxx https://github.com/XXX/xxx \
&& apk del -r git \
&& rm /data/web/mysite/requirements.txt \
&& rm /data/web/mysite/django_mysite/settings.py \
&& mv requirements.txt mysite/requirements.txt \
&& mv settings.py mysite/django_mysite/settings.tmp.py \
&& mv settings.build.py mysite/django_mysite/settings.py \
# Collect static django files and replace right settings.py file
WORKDIR /data/web/mysite
RUN python /data/web/mysite/manage.py collectstatic --no-input \
&& rm django_mysite/settings.py \
&& mv django_mysite/settings.tmp.py django_ mysite/settings.py
If I go into the django container and run "manage.py diffsettings", I see only the old settings I used for the collectstatic in my build.
However if I directly check the settings.py file from within the container I see the right settings.py.
On the dev environment, my compose file launches runserver via the command:
/usr/bin/python manage.py runserver 0.0.0.0:8000
and I get the following issue:
'Can\'t connect to local MySQL server through socket \'/run/mysqld/mysqld.sock\' (2 "No such file or directory")'
(which makes sense: the django container doesn't have MySQL; it is the mysql container that should be referenced as the host)
On my prod:
/usr/bin/gunicorn django_mysite.wsgi:application -w 2 -b :8000
All working perfectly. Gunicorn from the django container talks to the mysql container and the nginx container.
Any idea? Could it be related to Docker layers?
Thanks!
Can't tell from the files you shared, but this seems to be a recurring issue with applications in docker. Here are some options you can try:
1) Do you have different files for the prod and dev database settings? Check whether the dev database settings have a "HOST" entry. If they don't, it will almost certainly point to localhost (and therefore give you trouble on the dev machine).
2) Do you have a database container for the dev machine? Try to connect with the mysql client from the application container to the database container. You can use docker exec -it container_id /bin/bash (you'll have to adapt since it's Alpine) to attach to the app container.
3) Is the database container running, but you are not able to connect to it? Check whether the container's port is open so the database can be reached.
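As an illustration of option 1, a dev settings block that points at the database container instead of localhost might look like the sketch below. The "db" host and the environment variable names are assumptions for the example, not taken from your files:

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.environ.get('MYSQL_DATABASE', 'mysite'),
        'USER': os.environ.get('MYSQL_USER', 'mysite'),
        'PASSWORD': os.environ.get('MYSQL_PASSWORD', ''),
        # Use the docker-compose service name, not localhost or a unix socket.
        'HOST': os.environ.get('MYSQL_HOST', 'db'),
        'PORT': os.environ.get('MYSQL_PORT', '3306'),
    }
}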
I have been stuck at a bottleneck which I have tried to resolve using the official docs and other answers here on Stack Overflow, but I am still not able to create a Django superuser programmatically in the Beanstalk environment.
Current state -
a. The application is getting deployed smoothly and I am able to access the database from my UI application; basically, entries are being made in another table that I have in the application.
How I have tried to create the superuser -
a. By passing container commands -
Option 1 -
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
commands:
  super_user:
    command: "source /opt/python/run/venv/bin/activate && python <appname>/createuser.py"
    leader_only: true
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "<Appname>.settings"
    PYTHONPATH: "/opt/python/current/app:$PYTHONPATH"
In the logs, I didn't see it trying to run the custom command.
Option 2 -
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "django-admin.py collectstatic --noinput"
  03_createsuperuser:
    command: "source /opt/python/run/venv/bin/activate && django-admin.py createsuperuser"
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "<appname>.settings"
    PYTHONPATH: "/opt/python/current/app:$PYTHONPATH"
For this, I created a createsuperuser.py file under /management/commands/, following the required structure with an __init__.py in both folders and the createsuperuser.py under commands -
from django.core.management.base import BaseCommand
from django.contrib.auth.models import User


class Command(BaseCommand):

    def handle(self, *args, **options):
        if not User.objects.filter(username="admin").exists():
            User.objects.create_superuser("admin", "admin@gmail.com", "admin")
On this, I got the following message in the logs -
Superuser creation skipped due to not running in a TTY. You can run `manage.py createsuperuser` in your project to create one manually.
My queries are -
Why am I not able to create a superuser from the command line of my virtualenv? There I am getting a message like this -
raise ImproperlyConfigured("settings.DATABASES is improperly configured. "
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details.
A bit weird considering the makemigrations command is working fine.
And when I echo $DJANGO_SETTINGS_MODULE, I get the right setting:
appname.settings
Let me know where I am going wrong with the createsuperuser thing.
I solved this problem recently with one of my sample app deployments in Beanstalk.
I mostly followed the official documentation from this link.
In your django app folder, create a python package 'management'.
Create another package inside the management package: 'commands'.
Create a python file in the commands package: mysuperuser.py.
import os
from django.core.management.base import BaseCommand
from django.contrib.auth.models import User


class Command(BaseCommand):

    def handle(self, *args, **options):
        if not User.objects.filter(username='myuser').exists():
            User.objects.create_superuser('myuser',
                                          'myuser@myemail.com',
                                          'mypassword')
In your django-migrate.config file, add a second command
02_create_superuser_for_django_admin:
  command: "python manage.py mysuperuser"
  leader_only: true
Do python manage.py collectstatic and eb deploy.
Doing this created the superuser for me. I didn't have to add any PYTHONPATH as described in some answers available online.
Your custom file is named "createsuperuser.py", which is the same as the built-in Django command, and that collision is what's causing the issue. Use "createsu.py" for the file name, then be sure to change the config file to also use "createsu".
I spent ages working out how to do this, and this is by far the simplest and most secure way. Create the file .platform > hooks > postdeploy > 01_migrate.sh and put the below in it:
#!/bin/bash
source /var/app/venv/*/bin/activate && { python manage.py createsuperuser --noinput; }
You can then add DJANGO_SUPERUSER_PASSWORD, DJANGO_SUPERUSER_USERNAME and DJANGO_SUPERUSER_EMAIL to the configuration section of the application environment, and createsuperuser will use them because we have specified --noinput.
Then add the below to the folder .ebextensions > django.config. This just gets around permission issues when running 01_migrate.sh:
container_commands:
  01_chmod1:
    command: "chmod +x .platform/hooks/postdeploy/01_migrate.sh"
That will create your superuser in a secure way; with the same logic you can also run migrations and collectstatic by adding them to the 01_migrate.sh file.
I have a slightly simpler version of @jimbo's answer. Inside .ebextensions/db-migrate.config I have the following:
container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
    leader_only: true
  02_createsuperuser:
    command: "source /var/app/venv/*/bin/activate && python3 manage.py createsuperuser --noinput"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: <appname>.settings
The key lines there are the 02_createsuperuser container command. Once you've got that, you can set the DJANGO_SUPERUSER_PASSWORD, DJANGO_SUPERUSER_USERNAME, DJANGO_SUPERUSER_EMAIL environment variables in the environment and deploy and you'll be good to go. Once you've got the user created, remove that container command so it's not run again with the next deployment.
Deepesh's and Jimbo's combined solutions did it for me.
It is particularly useful if you have a custom User.
I will write down the steps.
1 Create the command file under management/commands. Don't name it createsuperuser.py, to avoid conflicts.
└-- App_dir
└-- management
|-- __init__.py
└-- commands
|-- __init__.py
└-- createsu.py
2 The command file should look like this.
import os

from django.contrib.auth.models import User
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Creates a superuser."

    def handle(self, *args, **options):
        if not User.objects.filter(username="username").exists():
            password = os.environ.get("SUPERUSER_PASSWORD")
            if password is None:
                raise ValueError("Password not found")
            User.objects.create_superuser(
                username="username",
                email="email",
                password=password,
            )
            print("Superuser has been created.")
        else:
            print("Superuser exists")
3 Add the command in the config file (inside .ebextensions).
container_commands:
  ...
  03_superuser:
    command: "source /var/app/venv/*/bin/activate && python3 manage.py createsu"
    leader_only: true
4 Add the SUPERUSER_PASSWORD in environment > configuration > Software > Environment properties
5 Commit and eb deploy.
We are still storing a raw password, which isn't the most secure thing in the world, but it's much safer than hardcoding the password in the command file.
You can't use createsuperuser in a situation where the user can't input the info interactively. See https://realpython.com/blog/python/deploying-a-django-app-to-aws-elastic-beanstalk/#Create.the.Admin.User for a different approach.
In one of my test files I call a Django management command:
def setUpModule():
    management.call_command('loaddata', 'frontend/fixtures/chemicals.json',
                            verbosity=0)
    management.call_command('create_indexes_and_matviews',
                            db_name, db_user, db_pass,
                            verbosity=2)
This test runs fine when I run it locally with manage.py test.
However, on Travis I get this error:
======================================================================
ERROR: setUpModule (frontend.tests.test_api_views)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/travis/build/.../frontend/tests/test_api_views.py", line 35, in setUpModule
    verbosity=2)
  File "/home/travis/virtualenv/python2.7.9/lib/python2.7/site-packages/django/core/management/__init__.py", line 95, in call_command
    raise CommandError("Unknown command: %r" % name)
CommandError: Unknown command: 'create_indexes_and_matviews'
How can I let Travis know about the command?
This is my Travis file:
language: python
python:
  - "2.7"
addons:
  postgresql: "9.3"
env:
  - SECRET_KEY=test DB_NAME=dbtest DB_USER=test DB_PASS=test
before_install:
  - export DJANGO_SETTINGS_MODULE=....settings.local
  - export PYTHONPATH=$HOME/builds/...
install:
  - pip install -r requirements.txt
  - pip install -r requirements/local.txt
before_script:
  - psql -U postgres -c 'CREATE DATABASE dbtest;'
  - psql -U postgres -c "CREATE EXTENSION postgis" -d dbtest
  - psql -U postgres -c "CREATE EXTENSION postgis_topology" -d dbtest
  - psql -U postgres -c "CREATE USER test WITH CREATEUSER PASSWORD 'test';"
  - psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE dbtest to test;"
  - psql -U postgres -c "ALTER USER test CREATEDB;"
  - cd frontend && python manage.py migrate
script:
  - python manage.py test
Is there something I should add so that it knows where to find management commands?
From my practice, I know two reasons for such a problem.
A. There is no the_app providing create_indexes_and_matviews listed in settings.INSTALLED_APPS (it could be missing, or excluded by if/else or try/except magic).
To check the actual settings, try adding the following command to the Travis file:
echo "from django.conf import settings;print(settings.INSTALLED_APPS)" | python manage.py shell
B. Missing app dependencies. Try to get the actual error on Travis with the following command:
echo "from the_app.management.commands.create_indexes_and_matviews import Command" | python manage.py shell
Usually, the real import error is descriptive enough to find the fix.
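As a quick sanity check for case A, remember that Django only discovers commands from apps in INSTALLED_APPS that follow the management/commands package layout (with an __init__.py in both management/ and commands/). A minimal stand-in for the real command, whose actual body lives in your repo, would look like this (the_app being the placeholder app name used above):

# the_app/management/commands/create_indexes_and_matviews.py
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Creates custom indexes and materialized views."

    def handle(self, *args, **options):
        # Placeholder body; the real command does the index/matview work.
        self.stdout.write("create_indexes_and_matviews called")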