Django-Crontab with deployment environment - django

I'm developing a module that uses crontab.
The framework I'm using is Django, so I installed 'django-crontab'.
I followed the instructions and got it working in my localhost environment.
But when I deployed it on AWS ("sudo service apache2 restart") after running 'python manage.py crontab add', it didn't work.
I think it only works in the localhost environment, doesn't it?
How can I solve this problem?

If you have more than one settings profile in your Django project, you should specify one before adding the crontab. If none is specified, django-crontab runs with the default environment, which is usually development. To run it in the production environment, do the following:
Specify the crontab environment in settings/product.py, something like:
CRONTAB_DJANGO_SETTINGS_MODULE = 'gold.settings.product'
Then specify the settings profile and add the crontab:
export MYPROJECT_PROFILE=product
python manage.py crontab add
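For reference, this assumes your manage.py selects the settings package from that profile variable, along the lines of the following sketch ('gold.settings' and MYPROJECT_PROFILE are the names used above; adapt to your project):
# manage.py -- minimal sketch, not the asker's actual file
import os
import sys

if __name__ == "__main__":
    profile = os.environ.get("MYPROJECT_PROFILE", "develop")
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "gold.settings.%s" % profile)

    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)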

Related

Python Django set DJANGO_SETTINGS_MODULE when migrating source code

Task: Set up a new running environment, given Python/Django source code and some additional details.
Problem: Cannot get Django-admin to validate due to missing/incorrect settings configuration
"django.core.exceptions.ImproperlyConfigured Requested setting USE_I18N, but settings are not configured. ..... "
You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings
Env Details: Ubuntu OS, Python 2.7, Django 1.7, PostgreSQL (also Supervisor + gunicorn). Running a venv located at /home/dave/python-env/vas/bin/activate
Python sys.path
/usr/lib/python2.7/* (multiple defined)
/home/dave/python-env/vas/python2.7/site-packages
So I tried several methods (including #export DJANGO_SETTINGS_MODULE=project-name.settings....) with little success.
How can one set the DJANGO_SETTINGS_MODULE variable?
os.environ.setdefault() is set in wsgi.py (I know this is the next step)
BUT this value is also set in /manage.py ...?
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project_name.settings")
The directory /var/www/app/ (where the python source code is located) has several files, one of them is the project_name where the settings.py sits....
I am new to python/django...
Trying to get django-admin.py validate to validate.
Update: Running #python manage.py runserver runs OK. #python manage.py validate|check returns "System check identified no issues (0 silenced)".
Running #django-admin.py check returns the error in question. "You must either define DJANGO_SETTINGS_MODULE ...."
UPDATE 2: Solution
Turns out you don't need django-admin.py (as suggested by Alasdair); you can use manage.py instead.
Details: if manage.py check reports no issues and #pip install -r requirements.txt completes within your virtual environment, then you can run #manage.py createsuperuser.
I was able to use #manage.py runserver after creating a superuser, and with this new user (the database tables were empty for security reasons) I was able to log into 127.0.0.1:8000/admin. From there the models/tables were visible, and using the admin functions I could create a new user + group to access, as an admin user, the original system that was being migrated.
Also note that a database was required (running postgres) with db/username/pass as per the settings files, and a git repository (at least an empty initialised one) for raven...
Hope this helps someone coming into Python.
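As a footnote to the above: unlike manage.py, django-admin.py sets no default settings module, so if you do need django-admin.py, the settings must be supplied explicitly (project_name here is a placeholder for your project package):
export DJANGO_SETTINGS_MODULE=project_name.settings
django-admin.py check
or, in one step, django-admin.py check --settings=project_name.settings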

Can manage.py runserver execute npm scripts?

I am developing a web application with React for frontend and Django for backend. I use Webpack to watch for changes and bundle code for React apps.
The problem is that I have to run two commands concurrently, one for React and the other one for Django:
webpack --config webpack.config.js --watch
./manage.py runserver
Is there any way to customize the runserver command to execute an npm script, like npm run start:dev? When you use Node.js as a backend platform, you can do a similar job with npm run build:client && npm run start:server.
If you are already using webpack and Django, you might be interested in using webpack-bundle-tracker and django-webpack-loader.
Basically, webpack-bundle-tracker creates a stats.json file each time the bundle is built, and django-webpack-loader watches that stats.json file so Django always picks up the latest bundle. This stack lets you separate the concerns between the server and the client.
There are a couple of posts out there explaining this pipeline.
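For reference, the Django side of that pipeline is configured in settings.py, roughly like this (a sketch based on django-webpack-loader's documented settings; the bundle directory and stats file path are assumptions you should match to your webpack-bundle-tracker config):
# settings.py -- sketch only
import os

# BASE_DIR is the usual variable defined near the top of settings.py
WEBPACK_LOADER = {
    'DEFAULT': {
        'BUNDLE_DIR_NAME': 'bundles/',  # where webpack writes bundles
        'STATS_FILE': os.path.join(BASE_DIR, 'webpack-stats.json'),  # written by webpack-bundle-tracker
    }
}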
I'm two and a half years late, but here's a management command that implements the solution the OP wanted, rather than redirecting to another solution. It inherits from the staticfiles runserver command and runs webpack concurrently in a thread.
Create this management command at <some_app>/management/commands/my_runserver.py:
import os
import subprocess
import threading

from django.contrib.staticfiles.management.commands.runserver import (
    Command as StaticFilesRunserverCommand,
)
from django.utils.autoreload import DJANGO_AUTORELOAD_ENV


class Command(StaticFilesRunserverCommand):
    """This command removes the need for two terminal windows when running runserver."""

    help = (
        "Starts a lightweight Web server for development and also serves static files. "
        "Also runs a webpack build worker in another thread."
    )

    def add_arguments(self, parser):
        super().add_arguments(parser)
        parser.add_argument(
            "--webpack-command",
            dest="wp_command",
            default="webpack --config webpack.config.js --watch",
            help="This webpack build command will be run in another thread (should probably have --watch).",
        )
        parser.add_argument(
            "--webpack-quiet",
            action="store_true",
            dest="wp_quiet",
            default=False,
            help="Suppress the output of the webpack build command.",
        )

    def run(self, **options):
        """Run the server with webpack in the background."""
        if os.environ.get(DJANGO_AUTORELOAD_ENV) != "true":
            self.stdout.write("Starting webpack build thread.")
            quiet = options["wp_quiet"]
            command = options["wp_command"]
            kwargs = {"shell": True}
            if quiet:
                # if --webpack-quiet, suppress the webpack command's output:
                kwargs.update({"stdin": subprocess.PIPE, "stdout": subprocess.PIPE})
            wp_thread = threading.Thread(
                target=subprocess.run, args=(command,), kwargs=kwargs
            )
            wp_thread.start()
        super(Command, self).run(**options)
For anyone else trying to write a command that inherits from runserver, note that you need to check for the DJANGO_AUTORELOAD_ENV variable to make sure you don't create a new thread every time Django notices a .py file change. Webpack should be doing its own auto-reloading anyway.
Use the --webpack-command argument to change the webpack command that runs (for example, I use --webpack-command 'vue-cli-service build --watch').
Use --webpack-quiet to disable the command's output, as it can get messy.
If you really want to override the default runserver, rename the file to runserver.py and make sure the app it lives in comes before django.contrib.staticfiles in your settings module's INSTALLED_APPS.
You shouldn't mess with the built-in management commands, but you can make your own: https://docs.djangoproject.com/en/1.10/howto/custom-management-commands/.
In your place I'd leave runserver alone and create a new command to run your custom (npm in this case) script, e.g. with os.execvp.
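A minimal sketch of such a command (the command name, file path, and npm script are illustrative assumptions):
# <some_app>/management/commands/npmdev.py -- sketch only
import os

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Replace the current process with the npm dev script."

    def handle(self, *args, **options):
        # os.execvp replaces this Python process with npm, so nothing
        # after this line ever runs.
        os.execvp("npm", ["npm", "run", "start:dev"])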
In theory you could run two parallel subprocesses, one executing, for example, django.core.management.execute_from_command_line and the other running your script. But that would make using tools like pdb impossible (which makes work very hard).
The way I do it is to leverage Docker and Docker Compose. When I run docker-compose up -d, my database service, npm scripts, redis, etc. run in the background (I run runserver separately, but that's another topic).
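That layout looks roughly like this in docker-compose.yml (a sketch; the service names and images are illustrative assumptions):
# docker-compose.yml -- sketch of the layout described above
services:
  db:
    image: postgres
  redis:
    image: redis
  webpack:
    image: node:alpine
    working_dir: /app
    volumes:
      - .:/app
    command: npm run start:dev  # or your webpack --watch command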

Run django migrate in docker

I am building a Python + Django development environment using Docker. I defined Dockerfiles and services in docker-compose.yml for the web server (nginx) and database (postgres) containers, plus a container that runs our app using uwsgi. Since this is a dev environment, I am mounting the app code from the host system so I can easily edit it in my IDE.
The question I have is where/how to run migrate command.
In case you don't know Django, the migrate command creates the database structure and later changes it as needed by the project. I have seen people run migrate as part of the compose command directive (command: python manage.py migrate && uwsgi --ini app.ini), but I do not want migrations to run on every container restart. I only want them to run once, when I create the containers, and never again unless I rebuild.
Where/how would I do that?
Edit: there is now an open issue with the compose team. With any luck, one time command containers will get supported by compose. https://github.com/docker/compose/issues/1896
You cannot use RUN because, as you mentioned in the comments, your source is mounted while the container is running.
You cannot use CMD either, since you don't want it to run every time you restart the container.
I recommend running docker exec manually after starting the container. I do not think there is a way to automate this inside a Dockerfile or docker-compose, for the two reasons given above.
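Concretely, that looks something like this (the service and container names are assumptions):
docker-compose run --rm app python manage.py migrate
# or against the already-running container:
docker exec -it myproject_app_1 python manage.py migrate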
It sounds like what you need is a tool for managing project tasks. dobi is a tool designed to handle these tasks (disclaimer: I am the author of this tool).
You can see an example of how to run a migration here: https://github.com/dnephin/dobi/tree/master/examples/init-db-with-rails. The example uses rails, but it's basically the same idea as django.
You could setup a task called migrate which would run the command in a container and write the data to a volume. Then when you start your docker-compose containers, use that volume as the source for your database service.
https://github.com/docker/compose/issues/1896 is finally resolved by the new service profiles introduced in docker-compose 1.28.0. With profiles you can mark services to be started only in specific profiles:
services:
  nginx:
    # ...
  postgres:
    # ...
  uwsgi:
    # ...
  migrations:
    profiles: ["cli-only"]  # profile name chosen freely
    # ...
docker-compose up # start only your app services, no migrations
docker-compose run migrations # run migrations on-demand
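A fuller sketch of what that migrations service might contain (the build context and depends_on entry are assumptions):
migrations:
  profiles: ["cli-only"]
  build: .
  command: python manage.py migrate
  depends_on:
    - postgres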
docker exec -it container-name bash
Then you will be inside the container, and you can run any command you normally use during development without Docker.

How do I set up Jupyter/IPython Notebook for Django?

I have been using the method described in this post for setting up IPython Notebook to play nicely with Django. The gist of the method is to create an IPython extension which sets the DJANGO_SETTINGS_MODULE and runs django.setup() when IPython starts.
The code for the extension is:
def load_ipython_extension(ipython):
    # The `ipython` argument is the currently active `InteractiveShell`
    # instance, which can be used in any way. This allows you to register
    # new magics or aliases, for example.
    try:
        import os
        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
        import django
        django.setup()
    except ImportError:
        pass
With a recent upgrade to Jupyter Notebook, this setup is now broken for me. I am able to run Django code in a Jupyter notebook by adding a similar bit of code to the first cell of the notebook. However, I have not been able to figure out how to get Jupyter to run the extension automatically, so I don't have to repeat this for each and every notebook I create.
What should I do to get Django and Jupyter to play nicely?
UPDATE:
For @DarkLight: I am using Django 1.8.5 with Jupyter 1.0.0. The code I run in the notebook is:
import os, sys
sys.path.insert(0, '/path/to/project')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settingsfile")
import django
django.setup()
Install django-extensions from https://github.com/django-extensions/django-extensions/blob/master/docs/index.rst
pip install django-extensions
Change your settings file to include 'django_extensions':
INSTALLED_APPS += ['django_extensions']
Run your Django server like this:
python manage.py shell_plus --notebook
Alter to suit, and run this in your first cell:
import os, sys
PWD = os.getenv('PWD')
os.chdir(PWD)
sys.path.insert(0, os.getenv('PWD'))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "local_settings.py")
import django
django.setup()
Now you should be able to import your django models etc. eg:
from app.models import Foobar
Foobar.objects.all()
Just for completeness (but it's 2018, so maybe things have changed since this question was posted): you can actually install a Jupyter Python kernel in your Django environment that will then connect to (run under) a different Jupyter server/environment (one where you've installed widgets, extensions, changed the theme, etc.). django_extensions right now still does only part of the required work :-)
This assumes you have a Jupyter virtual environment that's separate from Django's and whose kernels/extensions are installed with --user. All the Jupyter extensions (and their dependencies) are installed in this venv instead of the Django one/ones (you'll still need pandas, matplotlib, etc. in the Django environment if you want to use them together with Django code).
In your Django virtual environment (that can run a different version of Python, including a version 2 interpreter) install the ipython kernel:
pip install -U ipykernel
ipython kernel install --user --name='environment_name' --display-name='Your Project'
This will create a kernel configuration directory with the specified --name in your user's Jupyter kernel directory (typically ~/.local/share/jupyter/kernels/ on Linux and ~/Library/Jupyter/kernels/ on OSX), containing its kernel.json file and images/icons (by default the standard Jupyter icon is used for the kernel we're installing). This kernel will run inside the virtual environment that was active at creation, thus using the exact same version of Python and all the installed modules used by our Django project.
Running ./manage.py shell_plus --notebook does something very similar, but in addition to requiring everything (including the Jupyter server and all the extensions) to be installed in the current venv, it's also unable to run notebooks in directories other than the project's root (the one containing ./manage.py). In addition, it runs the kernel using the first executable called python it finds on the path, not the virtual environment's one, making it misbehave when not started from the command line inside an active Django virtual environment.
To fix these problems, so that we can create a notebook running inside any Django project configured this way and run notebooks stored anywhere on the filesystem, we need to:
make sure the first ‘argv’ parameter contains the full path to the python interpreter contained in the virtual environment
add (if not already present) an ‘env’ section that will contain shell environment variables, then use these to tell Python where to find our project and which Django settings it should use. We do this by adding something like the following:
"env": {
"DJANGO_SETTINGS_MODULE": "my_project.settings",
"PYTHONPATH": "$PYTHONPATH:/home/projectuser/projectfolder/my_project"
}
optional: change ‘display_name’ to be human friendly and replace the icons.
After editing this environment's kernel.json file, you'll see something similar to:
{
    "display_name": "My Project",
    "language": "python",
    "env": {
        "DJANGO_SETTINGS_MODULE": "my_project.settings",
        "PYTHONPATH": "$PYTHONPATH:/home/projectuser/projectfolder/my_project"
    },
    "argv": [
        "/home/projectuser/.pyenv/versions/2.7.15/envs/my_project_venv/bin/python",
        "-m",
        "ipykernel_launcher",
        "-f",
        "{connection_file}",
        "--ext",
        "django_extensions.management.notebook_extension"
    ]
}
Notable lines:
"DJANGO_SETTINGS_MODULE": "my_project.settings": your settings, usually as seen inside your project's manage.py
"PYTHONPATH": "$PYTHONPATH:/home/projectuser/projectfolder/my_project": PYTHONPATH is extended to include your project's main directory (the one containing manage.py) so that settings can be found even if the kernel isn't run in that exact directory (here django_extensions will use a generic python, thus running the wrong virtual environment unless the whole Jupyter server is launched from inside it: adding this to the kernel.json created by django_extensions will enable it to run notebooks anywhere in the Django project directory)
"/home/projectuser/.pyenv/versions/2.7.15/envs/my_project_venv/bin/python": first argument (argv list) of the kernel execution, should be the full path to your project's virtual environment's python interpreter (this is another thing django_extensions gets wrong: fixing this will allow any notebook server to run that specific Django environment's kernel with all its installed modules)
"django_extensions.management.notebook_extension": this is the extension that will load the 'shell_plus' functionality in the notebook (optional but useful :-) )
Here's what just worked for me
install Django Extensions (I used 1.9.6) as per other answers
install jupyter: pip install jupyter
some stuff I did to setup jupyter inside my Docker container -- see below if this applies to you †
from your base Django directory, create a directory for notebooks, e.g. mkdir notebooks
Go to that directory cd notebooks
start django-extensions' shell_plus from inside that directory: ../manage.py shell_plus --notebook
The notebook server should now be running, and may launch a new browser. If it doesn't launch a browser window, follow the instructions to paste a link or a token.
from the browser, open a new "Django Shell Plus" notebook, as per John Mee's answer's screenshot
AND, importantly, what didn't work was changing directories from inside the notebook environment. If I tried to work with any notebook that was not in the directory where manage.py shell_plus --notebook was run, then the kernel was not configured correctly. For me, having the notebook configured for just a single directory at a time was good enough. If you need a more robust solution, you should be able to set PYTHONPATH before starting Jupyter. For example, add export PYTHONPATH="$PYTHONPATH:/path/to/django/project" to a virtualenv activate script. But I haven't tried this.
† Docker Setup (optional)
add a port mapping for your container for port 8888
For example, in your docker compose file:
ports:
  - "8890:8888"
Configure your project settings file to use ip 0.0.0.0
This is what I did:
NOTEBOOK_ARGUMENTS = [
    '--ip', '0.0.0.0',
    '--allow-root',
    '--no-browser',
]
Note: I am using Python 3.7 and Django 2.1; it also works for Django 2.2. I don't have to run anything in my first cell, and this works like a charm as long as you don't mind having the notebooks in the root of your Django project.
It is assumed that you have a virtual environment for your project, and it is activated. I use pipenv to create virtual environments and track dependencies of my python projects, but it is up to you what tool you use.
It is also assumed that you have created a Django project and your current working directory is the root of this project.
Steps
Install jupyter
Using pip
pip install jupyter
Using pipenv
pipenv install jupyter
Install django-extensions
Using pip
pip install django-extensions
Using pipenv
pipenv install django-extensions
Set up django-extensions by adding it to the INSTALLED_APPS setting of your Django project's settings.py file:
INSTALLED_APPS = (
    ...
    'django_extensions',
)
Run the shell_plus management command that is part of django-extensions. Use the option --notebook to start a notebook:
python manage.py shell_plus --notebook
Jupyter Notebooks will open automatically in your browser.
Start a new Django Shell-Plus notebook
That's it!
Again, you don't have to run anything in the first cell, and you can corroborate by running dir() to see the names in the current local scope.
Edit:
If you want to put your notebooks in a directory called notebooks at the root directory, you can do the following:
$ mkdir notebooks && cd notebooks
$ python ../manage.py shell_plus --notebook
Thanks to Mark Chackerian, whose answer provided the idea of running the notebooks in a directory other than the project's root.
These are the modules that are imported automatically thanks to shell_plus:
# Shell Plus Model Imports
from django.contrib.admin.models import LogEntry
from django.contrib.auth.models import Group, Permission, User
from django.contrib.contenttypes.models import ContentType
from django.contrib.sessions.models import Session
# Shell Plus Django Imports
from django.core.cache import cache
from django.conf import settings
from django.contrib.auth import get_user_model
from django.db import transaction
from django.db.models import Avg, Case, Count, F, Max, Min, Prefetch, Q, Sum, When, Exists, OuterRef, Subquery
from django.utils import timezone
from django.urls import reverse
Actually turns out you (might not) need to do all that crap. Just install django-extensions and run jupyter!
(myprojectvenv)$ cd myproject
(myprojectvenv)$ pip install jupyter
(myprojectvenv)$ pip install django-extensions
(myprojectvenv)$ jupyter notebook
In the browser, start a new "Django Shell-Plus":
And you should be good to go. eg:
from myproject.models import Foobar
Foobar.objects.all()
While the accepted answer from RobM works, it is less clear than it could be and has a few unnecessary steps. Simply put, to run notebooks through Django from a notebook environment outside of the project directory:
Install:
pip install django-extensions
Add 'django_extensions' to your INSTALLED_APPS list in settings.py
INSTALLED_APPS += ['django_extensions']
Run a notebook from within Django, then close it:
python manage.py shell_plus --notebook
This will create your kernel, which we will now edit to point to an absolute path of Python rather than a relative path.
On OSX, the kernel file is at: ~/Library/Jupyter/kernels/django_extensions/kernel.json
On Linux: ~/.local/share/jupyter/kernels/django_extensions/kernel.json
We only need to make two changes:
The first is to edit the first value in the "argv" list, from "python" to the full path of the Python interpreter in your Django virtual environment. E.g.: "/Users/$USERNAME/Documents/PROJECT_FOLDER/venv/bin/python"
Secondly, in the "env" dictionary, add "DJANGO_SETTINGS_MODULE": "mysite.settings", where mysite is the folder that contains your Django settings.
Optionally, change the value of "display_name".
Now when you run a notebook from any directory, choosing the "Django Shell-Plus" kernel will allow your notebooks to interact with Django. Any packages such as pandas will need to be installed in the Django venv.
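After those two edits, the relevant parts of kernel.json look something like this (a sketch using the placeholder paths and names from above):
{
    "argv": [
        "/Users/$USERNAME/Documents/PROJECT_FOLDER/venv/bin/python",
        "-m",
        "ipykernel_launcher",
        "-f",
        "{connection_file}",
        "--ext",
        "django_extensions.management.notebook_extension"
    ],
    "env": {
        "DJANGO_SETTINGS_MODULE": "mysite.settings"
    },
    "display_name": "Django Shell-Plus",
    "language": "python"
}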
The following works for me using Win10, Python 3.5, Django 1.10:
Install Python with the Anaconda distribution so Jupyter will be installed as well
Install Django and install django-extensions:
pip install Django
pip install django-extensions
Start a new Django project. Do this in a part of your directory tree that Jupyter can access later.
django-admin startproject myDjangoProject
Start Jupyter
Navigate Jupyter to the myDjangoProject directory and enter the first/top myDjangoProject directory
Within the first/top myDjangoProject directory, start a new Jupyter notebook: New --> Django Shell-Plus
Enter and run the following piece of code:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myDjangoProject.settings")
import django
django.setup()
Note that this piece of code is the same as in manage.py, and that "myDjangoProject.settings" points to myDjangoProject/settings.py
Now you can start with examples, e.g.:
from django.template import Template, Context
template = Template('The name of this project is {{ projectName }}')
context = Context({'projectName': 'MyJypyterDjangoSite'})
template.render(context)
Run this command.
PYTHONPATH=/path/to/project/root DJANGO_SETTINGS_MODULE=settings python manage.py shell_plus --notebook
I will add some information to RobM's very complete answer, for the benefit of the very rare developers who use buildout along with djangorecipe, as I do... I refer to JupyterLab because that's what I use, but I think all of this applies to classic Jupyter notebooks as well.
When using buildout you end up with a bin/django script that you use instead of manage.py. That's the script that defines the whole path. I added one more part to my buildout.cfg:
[ipython]
recipe = zc.recipe.egg
eggs = ${buildout:eggs}
extra-paths = ${buildout:directory}/apps
initialization = import os
    os.environ['DJANGO_SETTINGS_MODULE'] = 'web.settings'
so that another script named ipython is created in the ./bin directory. I point the kernelspec to that interpreter. Moreover, I use the kernel argument rather than "-m", "ipykernel_launcher", so the kernel definition I use is:
{
    "argv": [
        "/misc/src/hg/siti/trepalchi/bin/ipython",
        "kernel",
        "-f",
        "{connection_file}",
        "--ext",
        "django_extensions.management.notebook_extension"
    ],
    "display_name": "Trepalchi",
    "language": "python"
}
Due to the way the ipython script is created by buildout, there's no need to add environment variables in my case.
As Rob already mentioned, JupyterLab is installed in only one environment, where I start it with the command:
jupyter lab
not in the Django project's environment, where I only install ipykernel (which already pulls in a bunch of around 20 dependencies).
Since I tend to have quite a lot of projects, I find it useful to have a single place where I start JupyterLab, with many links to the projects so that I can reach them easily. Thanks to the extension provided by django_extensions, I don't need any extra cell to initialize the notebook.
Any single kernel added in this way can be found with the command:
jupyter kernelspec list
And each one is clearly listed in the JupyterLab launcher.

Running Docker-Compose Commands With Fabric

I have a Django site running in Docker containers, with docker-compose managing the various containers (database, nginx, etc.). There are a few Django tasks that I use for site maintenance via the manage.py command. The commands take the form of:
manage.py updateflickr --settings=mysite.myproj.prod
Running under docker-compose, they look like:
docker-compose run --rm app manage.py updateflickr --settings=mysite.myproj.prod
My problem is that when I try to run these same commands using Fabric, the settings file I am specifying does not appear to be used. Django returns database connection errors, which typically means it is not getting the correct database information, or in this case the connection specified in mysite.myproj.prod.
My Fabric file looks like:
import os
from fabric.api import *

env.hosts = ['myserver.com']
env.user = "myuser"
env.key_filename = '~/.ssh/do_rsa'
env.shell = '/bin/bash -c'

@task
def updateflickr():
    run('docker-compose run --rm app python manage.py updateflickr --settings=mysite.myproj.prod')
I have also experimented with setting the DJANGO_SETTINGS_MODULE environment variable in my docker-compose.yml, but am getting the same results. Finally, the last thing I tried was wrapping the command in a shell script. Same results: if I run it on the server, it runs fine. If I run the shell script from Fabric, I get database connection issues.
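For reference, the docker-compose environment override described above looks roughly like this (a sketch; the service name 'app' comes from the commands shown, everything else is assumed):
app:
  environment:
    - DJANGO_SETTINGS_MODULE=mysite.myproj.prod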
UPDATE
I am not so sure this is a question about Fabric so much as a question about how docker-compose runs. If I try the following:
ssh -t me@myserver.com 'docker-compose run --rm app python manage.py updateflickr --settings=mysite.myproj.prod'
I still get the same results. There must be something different about loading up an interactive shell versus just sending a command. I have tried ssh with and without the -t flag, since docker-compose might need an active pty.