Using browser-sync with Django on docker-compose

I'm doing a project for college and I've never used Docker before. I usually use browser-sync when working on static files, but now that I'm using Django on docker-compose (I followed THIS tutorial), I have no idea how to set it up. Can anybody give me advice or point me in the right direction?

So, I found a solution.
Start by following the tutorial here to set up Django with docker-compose; by the end of it you should have a working base Django project. Then follow the steps below.
How to use "livereload" with "docker-compose":
On your host machine, open a terminal and run:
pip install --upgrade pip
pip install django-livereload-server
pip install psycopg2-binary
PS: I'm using psycopg2 in docker-compose, which is why I'm installing it;
if you're using a different database driver, install that instead of psycopg2.
Now add this line to the requirements.txt file (from the tutorial):
django-livereload-server
The file should look like this (if you followed the tutorial step by step; change it according to the database you want to use):
Django==2.0
psycopg2-binary
django-livereload-server
Open a terminal, cd to your project's directory, and run:
docker-compose build
to install the new django-livereload-server package into your Docker environment.
Now that you have everything installed,
you need to set up your project to use the django-livereload-server module.
In your project's settings.py,
add 'livereload' to INSTALLED_APPS:
INSTALLED_APPS = [
...
'livereload',
...
]
and add the livereload middleware to MIDDLEWARE:
MIDDLEWARE = [
...
'livereload.middleware.LiveReloadScript',
]
and make sure that DEBUG is set to True.
Now you can start developing.
Open two terminals in your project's directory.
In the first one, run:
python manage.py livereload
Wait until the server starts; once it's running, leave it and, in the second terminal, run:
docker-compose up
The server in the second terminal runs the Django development server, and the server in the first terminal serves a livereload.js file, which the django-livereload-server module uses to inject CSS and automatically reload HTML and JS when you save, etc.
PS: make sure the first server (livereload) is running before you launch the second one.
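If you want to keep the livereload hooks out of production, one option (a minimal sketch of my own, not part of the tutorial, and assuming INSTALLED_APPS and MIDDLEWARE are lists as shown above) is to enable them only while DEBUG is on:
# settings.py -- sketch only: turn on livereload just for local development
if DEBUG:
    INSTALLED_APPS += ['livereload']
    MIDDLEWARE += ['livereload.middleware.LiveReloadScript']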
I hope this helped!

Related

Django No module name found although I already pip install

I am using this for push notification and it is okay on my localhost.
https://github.com/jleclanche/django-push-notifications
But when I deployed to the server,
I ran
pip install django-push-notifications
but in my settings.py, when I add this, it crashes and says the server configuration is wrong. What should I do?
INSTALLED_APPS = (
'push_notifications'
)

How do I set up Jupyter/IPython Notebook for Django?

I have been using the method described in this post for setting up IPython Notebook to play nicely with Django. The gist of the method is to create an IPython extension which sets the DJANGO_SETTINGS_MODULE and runs django.setup() when IPython starts.
The code for the extension is:
def load_ipython_extension(ipython):
    # The `ipython` argument is the currently active `InteractiveShell`
    # instance, which can be used in any way. This allows you to register
    # new magics or aliases, for example.
    try:
        import os
        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
        import django
        django.setup()
    except ImportError:
        pass
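(For context, an extension like this is normally loaded automatically via the IPython profile configuration; the module name below is a placeholder for wherever the extension was saved, not something from the linked post.)
# ~/.ipython/profile_default/ipython_config.py
# Sketch: tell IPython to load the Django setup extension on startup.
c = get_config()
c.InteractiveShellApp.extensions = ['django_notebook_extension']  # placeholder name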
With a recent upgrade to Jupyter Notebook this setup is now broken for me. I am able to run Django code in the Jupyter notebook by adding a similar bit of code to the first cell of the notebook. However, I was not able to figure out how to get Jupyter to run the extension automatically so I would not have to do this again for each and every notebook I am creating.
What should I do to get Django and Jupyter to play nicely?
UPDATE:
For @DarkLight - I am using Django 1.8.5 with Jupyter 1.0.0. The code I run in the notebook is:
import os, sys
sys.path.insert(0, '/path/to/project')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settingsfile")
import django
django.setup()
Install django-extensions from https://github.com/django-extensions/django-extensions/blob/master/docs/index.rst
pip install django-extensions
Change your settings file to include 'django_extensions':
INSTALLED_APPS += ['django_extensions']
Run your Django server like this:
python manage.py shell_plus --notebook
Alter to suit, and run this in your first cell:
import os, sys
PWD = os.getenv('PWD')
os.chdir(PWD)
sys.path.insert(0, os.getenv('PWD'))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "local_settings")  # module path, not a file name
import django
django.setup()
Now you should be able to import your django models etc. eg:
from app.models import Foobar
Foobar.objects.all()
Just for completeness (but it's 2018, so maybe things changed since this question was posted): you can actually install a Jupyter Python kernel in your Django environment that will then connect (run under) a different Jupyter server/environment (one where you've installed widgets, extensions, changed the theme, etc.). django_extensions right now still does only part of the required work :-)
This assumes you have a Jupyter virtual environment that's separate from Django's and whose kernels/extensions are installed with --user. All the Jupyter extensions (and their dependencies) are installed in this venv instead of the Django one(s) (you'll still need pandas, matplotlib, etc. in the Django environment if you need to use them together with Django code).
In your Django virtual environment (that can run a different version of Python, including a version 2 interpreter) install the ipython kernel:
pip install -U ipykernel
ipython kernel install --user --name='environment_name' --display-name='Your Project'
This will create a kernel configuration directory with the specified --name in your user's Jupyter kernel directory (on Linux it's ~/.jupyter/ while on OSX it's ~/Library/Jupyter/) containing its kernel.json file and images/icons (by default the default Jupyter icon for the kernel we're installing is used). This kernel will run inside the virtual environment that was active at creation, thus using the exact same version of Python and all the installed modules used by our Django project.
Running ./manage.py shell_plus --notebook does something very similar, but in addition to requiring everything (including the Jupyter server and all the extensions) to be installed in the current venv, it's also unable to run notebooks in directories other than the project's root (the one containing ./manage.py). It will also run the kernel using the first executable called python it finds on the path, not the virtual environment's one, making it misbehave when not started from the command line inside an active Django virtual environment.
To fix these problems so that we're able to create a Notebook running inside any Django project we have so configured and to be able to run notebooks stored anywhere on the filesystem, we need to:
make sure the first ‘argv’ parameter contains the full path to the python interpreter contained in the virtual environment
add (if not already present) an ‘env’ section that will contain shell environment variables, then use these to tell Python where to find our project and which Django settings it should use. We do this by adding something like the following:
"env": {
"DJANGO_SETTINGS_MODULE": "my_project.settings",
"PYTHONPATH": "$PYTHONPATH:/home/projectuser/projectfolder/my_project"
}
optional: change ‘display_name’ to be human friendly and replace the icons.
Editing this environment's kernel.json file, you'll see something similar to:
{
  "display_name": "My Project",
  "language": "python",
  "env": {
    "DJANGO_SETTINGS_MODULE": "my_project.settings",
    "PYTHONPATH": "$PYTHONPATH:/home/projectuser/projectfolder/my_project"
  },
  "argv": [
    "/home/projectuser/.pyenv/versions/2.7.15/envs/my_project_venv/bin/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}",
    "--ext",
    "django_extensions.management.notebook_extension"
  ]
}
Notable lines:
"DJANGO_SETTINGS_MODULE": "my_project.settings": your settings, usually as seen inside your project's manage.py
"PYTHONPATH": "$PYTHONPATH:/home/projectuser/projectfolder/my_project": PYTHONPATH is extended to include your project's main directory (the one containing manage.py) so that settings can be found even if the kernel isn't run in that exact directory (here django_extensions will use a generic python, thus running the wrong virtual environment unless the whole Jupyter server is launched from inside it: adding this to the kernel.json created by django_extensions will enable it to run notebooks anywhere in the Django project directory)
"/home/projectuser/.pyenv/versions/2.7.15/envs/my_project_venv/bin/python": first argument (argv list) of the kernel execution, should be the full path to your project's virtual environment's python interpreter (this is another thing django_extensions gets wrong: fixing this will allow any notebook server to run that specific Django environment's kernel with all its installed modules)
"django_extensions.management.notebook_extension": this is the extension that will load the 'shell_plus' functionality in the notebook (optional but useful :-) )
Here's what just worked for me:
install Django Extensions (I used 1.9.6) as per other answers
install jupyter: pip install jupyter
some stuff I did to set up jupyter inside my Docker container -- see below if this applies to you †
from your base Django directory, create a directory for notebooks, e.g. mkdir notebooks
Go to that directory cd notebooks
start django extensions shell_plus from inside that directory: ../manage.py shell_plus --notebook
The notebook server should now be running, and may launch a new browser. If it doesn't launch a browser window, follow the instructions to paste a link or a token.
from the browser, open a new "Django Shell Plus" notebook, as per John Mee's answer's screenshot
AND, importantly, what didn't work was changing directories from inside the notebook environment. If I tried to work with any notebook that was not in the directory that manage.py shell_plus --notebook was run in, then the kernel was not configured correctly. For me, having the notebook be configured for just a single directory at a time was good enough. If you need a more robust solution, you should be able to set PYTHONPATH prior to starting jupyter. For example, add export PYTHONPATH="$PYTHONPATH:/path/to/django/project" to a virtualenv activate script. But I haven't tried this.
† Docker Setup (optional)
add a port mapping for your container for port 8888
For example, in your docker-compose file:
ports:
- "8890:8888"
Configure your project settings file to use ip 0.0.0.0
This is what I did:
NOTEBOOK_ARGUMENTS = [
'--ip', '0.0.0.0',
'--allow-root',
'--no-browser',
]
Note: I am using Python 3.7 and Django 2.1, and it also works for Django 2.2. I don't have to run anything in my first cell, and this works like a charm as long as you don't mind having the notebooks in the root of your Django project.
It is assumed that you have a virtual environment for your project, and it is activated. I use pipenv to create virtual environments and track dependencies of my python projects, but it is up to you what tool you use.
It is also assumed that you have created a Django project and your current working directory is the root of this project.
Steps
Install jupyter
Using pip
pip install jupyter
Using pipenv
pipenv install jupyter
Install django-extensions
Using pip
pip install django-extensions
Using pipenv
pipenv install django-extensions
Set up django-extensions by adding it to the INSTALLED_APPS setting of your Django project's settings.py file:
INSTALLED_APPS = (
...
'django_extensions',
)
Run the shell_plus management command that is part of django-extensions. Use the option --notebook to start a notebook:
python manage.py shell_plus --notebook
Jupyter Notebooks will open automatically in your browser.
Start a new Django Shell-Plus notebook
That's it!
Again, you don't have to run anything in the first cell, and you can corroborate by running dir() to see the names in the current local scope.
Edit:
If you want to put your notebooks in a directory called notebooks at the root directory, you can do the following:
$ mkdir notebooks && cd notebooks
$ python ../manage.py shell_plus --notebook
Thanks to Mark Chackerian, whose answer provided the idea of running the notebooks in a directory other than the project's root.
These are the modules that are imported automatically thanks to shell_plus:
# Shell Plus Model Imports
from django.contrib.admin.models import LogEntry
from django.contrib.auth.models import Group, Permission, User
from django.contrib.contenttypes.models import ContentType
from django.contrib.sessions.models import Session
# Shell Plus Django Imports
from django.core.cache import cache
from django.conf import settings
from django.contrib.auth import get_user_model
from django.db import transaction
from django.db.models import Avg, Case, Count, F, Max, Min, Prefetch, Q, Sum, When, Exists, OuterRef, Subquery
from django.utils import timezone
from django.urls import reverse
Actually, it turns out you (might not) need to do all that. Just install django-extensions and run jupyter!
(myprojectvenv)$ cd myproject
(myprojectvenv)$ pip install jupyter
(myprojectvenv)$ pip install django-extensions
(myprojectvenv)$ jupyter notebook
In the browser, start a new "Django Shell-Plus" notebook,
and you should be good to go, e.g.:
from myproject.models import Foobar
Foobar.objects.all()
While the accepted answer from RobM works, it was less clear than it could be and has a few unnecessary steps. Simply put, to run notebooks through Django from a notebook environment outside of the project directory:
Install:
pip install django-extensions
Add 'django_extensions' to your INSTALLED_APPS list in settings.py:
INSTALLED_APPS += ['django_extensions']
Run a notebook from within Django, then close it:
python manage.py shell_plus --notebook
This will create your kernel, which we will now edit to point to an absolute path of Python rather than a relative path.
On OSX, the kernel file is at: ~/Library/Jupyter/kernels/django_extensions/kernel.json
On Linux: ~/.jupyter/kernels/django_extensions/kernel.json
We only need to make two changes:
The first is to edit the first value in the "argv" list from "python" to the full path of the Python interpreter in your Django virtual environment. E.g.: "/Users/$USERNAME/Documents/PROJECT_FOLDER/venv/bin/python"
Secondly, to the "env" dictionary, add "DJANGO_SETTINGS_MODULE": "mysite.settings", where mysite is the folder that contains your Django settings.
Optionally, change the value of "display_name".
Now when you run a notebook from any directory, choosing the "Django Shell-Plus" kernel will allow your notebooks to interact with Django. Any packages such as pandas will need to be installed in the Django venv.
The following works for me using Windows 10, Python 3.5, and Django 1.10:
Install Python with the Anaconda distribution so Jupyter will be installed as well
Install Django and install django-extensions:
pip install Django
pip install django-extensions
Start a new Django project. You have to do this in a part of your directory tree that Jupyter can access later.
django-admin startproject myDjangoProject
Start Jupyter
Navigate Jupyter to the myDjangoProject directory and enter the first/top myDjangoProject directory
Within the first/top myDjangoProject directory, start a new Jupyter notebook: New --> Django Shell-Plus
Enter and run the following piece of code:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myDjangoProject.settings")
import django
django.setup()
Note that this piece of code is the same as in manage.py, and note that "myDjangoProject.settings" points to myDjangoProject/settings.py
Now you can start with examples, e.g.:
from django.template import Template, Context
template = Template('The name of this project is {{ projectName }}')
context = Context({'projectName': 'MyJypyterDjangoSite'})
template.render(context)
Run this command.
PYTHONPATH=/path/to/project/root DJANGO_SETTINGS_MODULE=settings python manage.py shell_plus --notebook
I will add some information to the very complete answer from RobM, for the benefit of the very rare developers that use buildout along with djangorecipe as I do... I refer to jupyter lab as I use that, but I think all the info also applies to plain jupyter notebooks.
When using buildout you end up with a 'bin/django' handler you'll use instead of 'manage.py'. That's the script that defines the whole path. I added one more part in my buildout.cfg:
[ipython]
recipe = zc.recipe.egg
eggs = ${buildout:eggs}
extra-paths = ${buildout:directory}/apps
initialization = import os
    os.environ['DJANGO_SETTINGS_MODULE'] = 'web.settings'
so that another script named ipython will be created in the ./bin directory. I point the kernelspec to that interpreter. Moreover, I use the kernel argument rather than "-m", "ipykernel_launcher", so the kernel definition I use is:
{
  "argv": [
    "/misc/src/hg/siti/trepalchi/bin/ipython",
    "kernel",
    "-f",
    "{connection_file}",
    "--ext",
    "django_extensions.management.notebook_extension"
  ],
  "display_name": "Trepalchi",
  "language": "python"
}
Due to how the ipython script is created by buildout, there's no need to add environment variables in my case.
As Rob already mentioned, JupyterLab is only installed in one environment, where I start it with the command:
jupyter lab
not in the Django project's environment, where I only install ipykernel (which already has a bunch of about 20 dependencies).
Since I tend to have quite a lot of projects, I find it useful to have a single place where I start jupyter lab, with links to the projects so that I can reach them easily. Thanks to the extension provided by django_extensions, I don't need any extra cell to initialize the notebook.
Any single kernel added in this way can be found with the command:
jupyter kernelspec list
and is clearly listed in the launcher of jupyter lab.

Add requirement for only dependent app

I am working on my django project on Linux Ubuntu.
I am not using virtualenv, so when I run the command
pip freeze > requirement.txt
it adds hundreds of lines (apps) to my requirement.txt file. I want to add only the apps that are needed to run this project.
Is there any way to do it?
There's no automatic way to get only the apps you need. You'll have to construct the requirements file manually. It's not that hard to do though - start by looking at all the imports in all your files and add the apps for those imports. Then run your app in a new virtualenv with only those imports - any time it crashes because of a missing import you know that you need to add another one!
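If you want a head start on that manual pass, here is a rough sketch (my own illustration, not part of the original answer; it will also list stdlib and local modules, so prune the output by hand) that walks the project and prints the top-level module names imported by your .py files:
# find_imports.py -- print top-level imported module names as a starting
# point for a hand-written requirements file.
import ast
import os

found = set()
for root, _dirs, files in os.walk('.'):
    for name in files:
        if not name.endswith('.py'):
            continue
        path = os.path.join(root, name)
        with open(path, encoding='utf-8', errors='ignore') as fh:
            try:
                tree = ast.parse(fh.read(), filename=path)
            except SyntaxError:
                continue
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split('.')[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                found.add(node.module.split('.')[0])

print('\n'.join(sorted(found)))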
Get pip downloaded packages Only
It omits packages that were only pulled in as dependencies, and can be used to get a clean list of installed Python modules to add to the requirements.txt file:
comm -12 <(pip list --format=freeze --not-required) <(pip freeze) > requirements.txt
Hope This Helps!
I am not sure whether we should put everything we get from pip freeze or only the required packages in the requirements.txt file.
I have asked a question here

Issues deploying django-nvd3 charts on heroku

Did anyone try to deploy django-nvd3 charts onto Heroku recently with success? I was trying to deploy a Django application using nvd3 charts onto Heroku the whole weekend with no luck. It works perfectly fine in my dev environment (Ubuntu). However, when I try to push it to Heroku, I am facing all sorts of errors.
In the dev environment I installed npm (which includes node.js), then installed bower, and finally installed django-bower, as suggested at https://github.com/areski/django-nvd3. I tried different charts and they all work okay, with no issues.
However, when I was trying to push the code over to Heroku, I was hitting quite a few errors; fixing one leads to others. I was wondering if I need to add a package.json (to list npm dependencies like bower) and a bower.json (to list bower dependencies like d3, nvd3) to my repo in the first place?
I googled a lot for documentation that gives a straight answer on this (django, nvd3, bower, npm/node all married together), but couldn't find any.
Note: I will try to post heroku logs for more info.
bower.json is given something like:
{
  "dependencies": {
    "d3": "3.3.6",
    "nvd3": "1.1.12-beta"
  }
}
package.json is given something like:
{
  "engines": {
    "node": "0.11.11",
    "npm": "1.3.25"
  },
  "dependencies": {
    "bower": "1.3.1"
  }
}
Errors I encountered are something like:
1. gunicorn is not recognized - resolved this
2. NameError: Name 'DATABASES' is not defined in settings.py - resolved this
3. django.core.management is not found - resolved this
4. Git error: fatal: HEAD corrupted/ cannot be deployed on to heroku - resolved this
5. listening at localhost 127.0.0.1:8000 - I am working on this. I think this also has to do with my DATABASES setting, which is pointing at dj_database_url.config(default=['DATABASE_URL'])??
Is there any Git repo with django+nvd3charts that is deployed successfully on to Heroku? Can I have a look at the configuration?
Also, looking at https://github.com/areski/django-nvd3, I do not see any bower dependencies or npm dependencies listed there; does it work like this?
Or can Heroku automatically install npm/bower without a package.json, and can it look at the settings.py file and, from the bower dependencies listed there, install those dependencies without a bower.json file that specifically lists d3 and nvd3? I suppose that's not the case, as far as I can see.
Please advise.
I wrote a blog post about this which you find here: https://mattdoesstuff.wordpress.com/2015/04/10/getting-npm-d3-nvd3-django-bower-django-bower-nvd3-and-heroku-to-play-nicely-together/
Use django-nvd3 and django-bower
pip install django-nvd3 django-bower
pip freeze > requirements.txt
git add .
git commit -m "don't forget your requirements.txt!"
Use a multi-buildpack
heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
Use the Node and Python buildpacks together
# ./.buildpacks
https://github.com/heroku/heroku-buildpack-nodejs.git
https://github.com/amanjain/heroku-buildpack-python-with-django-bower.git
Download bower using npm
# ./package.json
{"private": true,"dependencies": {"bower": "1.4.1"}}
Use django-bower to collect its assets
# ./bin/post_compile
# install bower components
./manage.py bower_install
Tell django where to find bower
# settings.py
...
import os
APPLICATION_DIR = os.path.dirname(globals()['__file__'])
HEROKU = bool(os.environ.get('DATABASE_URL'))
BOWER_COMPONENTS_ROOT = os.path.join(APPLICATION_DIR, 'components')
# where to find your local bower
BOWER_PATH = '/usr/local/bin/bower'
if HEROKU:
BOWER_PATH = '/app/node_modules/bower/bin/bower'
BOWER_INSTALLED_APPS = (
'd3#3.3.13',
'nvd3#1.7.1',
)
...
Credits
* http://www.rawsrc.com/using-django-bower-on-heroku/
* https://github.com/ddollar/heroku-buildpack-multi.git
* https://github.com/areski/django-nvd3

heroku PostGIS syncdb error

I am having trouble getting a simple GeoDjango app running on heroku. I have created the postgis extension for my database but I am not able to run syncdb without getting the following error:
from django.contrib.gis.geometry.backend import Geometry
File "/app/.heroku/python/lib/python2.7/site-packages/django/contrib/gis/geometry/backend/__init__.py", line 14, in <module>
'"%s".' % geom_backend)
django.core.exceptions.ImproperlyConfigured: Could not import user-defined GEOMETRY_BACKEND "geos".
Any ideas what I am doing wrong? Also does anyone know of a tutorial for getting a simple geodjango project running on heroku? Thanks for your help
I ran into this same issue and Joe is correct, you are missing a buildpack. What I did differently was include both the heroku-geo-buildpack and the heroku-buildpack-python. Both can be included by using heroku-buildpack-multi and adding a ".buildpacks" file to your project's base directory in which to list the other buildpacks.
https://github.com/ddollar/heroku-buildpack-multi
So set buildpack-multi as your buildpack and add a .buildpacks file in your project base directory:
$ heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
$ touch .buildpacks
# .buildpacks
https://github.com/cyberdelia/heroku-geo-buildpack.git#1.0
https://github.com/heroku/heroku-buildpack-python
When you push this, Heroku will install the software packages required to run python (python, pip, etc), along with the software packages required to run postgis (geos, proj and gdal).
I gave heroku-buildpack-geodjango a try but I believe it might be out of date (hasn't been updated in a year).
I just ran into the exact same error after using the multi buildpack method from ddollar https://github.com/ddollar/heroku-buildpack-multi with no problems up until this morning. As Jeff writes, you just point your buildpack at the multi and then add a .buildpacks file.
$ heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
$ cat .buildpacks
# .buildpacks
https://github.com/cyberdelia/heroku-geo-buildpack.git
https://github.com/heroku/heroku-buildpack-python
Also, don't forget to add django.contrib.gis to the apps in settings.
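A minimal sketch of that settings change (the placeholder comment stands in for whatever apps your project already lists):
INSTALLED_APPS = [
    # ... your existing apps ...
    'django.contrib.gis',
]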
Everything should go well and the geos and gdal libs will be installed when you push to Heroku, but you will find that Django doesn't find them and you get the error. This is because Django wants the full path, as per the docs:
https://docs.djangoproject.com/en/dev/ref/contrib/gis/install/geolibs/
So add this to settings.py:
from os import environ

GEOS_LIBRARY_PATH = "{}/libgeos_c.so".format(environ.get('GEOS_LIBRARY_PATH'))
GDAL_LIBRARY_PATH = "{}/libgdal.so".format(environ.get('GDAL_LIBRARY_PATH'))
It seems like you are missing some C libraries. Consider the GeoDjango Heroku buildpack:
https://github.com/cirlabs/heroku-buildpack-geodjango/
heroku create --stack cedar --buildpack http://github.com/cirlabs/heroku-buildpack-geodjango/
git push heroku master
The required libraries should be installed automatically using these commands.