Django apps: installing system-wide vs project-wide - django

What are the pros/cons (regarding maintainability) of installing Django apps system-wide vs installing them project-wide? Is there a recommended approach?

By django extensions, do you mean django-extensions?
In all honesty, I'd steer clear of system-wide installations; they instantly tie you to the system's installed versions, and if incompatibilities arise system-wide, that is a bigger problem than with a project-wide approach. In addition, they add complexity when deploying to remote services and don't follow the 12 Factor App principles. Keeping the project code and its dependencies self-contained will make life easier in the long run.
I'd recommend using virtualenv and pip to install your dependencies, which keeps them isolated to the project in question, and dramatically simplifies deployment.
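For example, a minimal sketch of that workflow (the environment name and package list are just placeholders):
# create an isolated environment for this project
virtualenv env
# activate it so pip installs into it rather than system-wide
source env/bin/activate
# install the project's dependencies into this environment only
pip install django django-extensions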

The recommended approach is not to copy any reusable app inside your project. Reusable apps provide extension points and settings for customization. It is also recommended to use a virtualenv per project and to install any project-specific Python modules there. This will protect you from conflicts between different versions.

Related

What's the proper way to use different pipenv environments for a Django project on different computer systems?

Hopefully this isn't too stupid of a question; it concerns using pipenv for the same Django project on different computer systems. The scenario is that I'm using pipenv with a test Django project on one laptop and everything works fine: I'm using VS Code, and it picks up the proper pipenv environment as the Python environment.
The project, however, lives in Dropbox, so when I'm using a different laptop (which I do sometimes; one is my work laptop, the other is my personal one at home) I can work on the same project wherever I left off. So you can probably deduce the issue I'm having.
I'm using pipenv environment A on my work laptop for the Django project. But when I open the project in VS Code on my personal laptop at home, I have to keep specifying the proper pipenv environment to use, which obviously is different from the one on my work laptop.
Maybe I shouldn't be working this way and should just work on one laptop for the project, but I imagine others have done the same with similar setups before.
Is there a "proper" way to do this, using different pipenv environments on different machines for the same Django project? Should I copy over the pipenv environment to each machine as well? Or am I missing something really simple here?
First, use version control. (git is the de facto standard these days, and there are plenty of services to choose from.)
For pipenv, you want to keep your Pipfile and Pipfile.lock in your repository (these are the only environment-related files you should be copying from system to system).
The local environment is then built on each machine where it is needed by running pipenv sync, or pipenv sync --dev to also install dev-specific requirements.
So the related virtualenv will be created on each system in which you are developing your project.
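As a rough sketch of that flow on a fresh machine (the repository URL is a placeholder):
# get the project, including Pipfile and Pipfile.lock
git clone https://example.com/your/project.git && cd project
# recreate the virtualenv exactly as pinned in Pipfile.lock (--dev adds dev requirements)
pipenv sync --dev
# run project commands inside that environment
pipenv run python manage.py runserver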

Can you add CLI tools to a Cloud Foundry app?

Terribly sorry if this is an obvious question, but I have a webapp that relies on a CLI tool to work. I was wondering if there is a way I could specify this without using a custom buildpack, and how to go about doing it if possible.
Any help on this would be great, thanks
It's not possible to directly install things with apt or apt-get. Your app runs as an unprivileged user and is unable to run those tools to install things.
This leaves you with a couple options:
Get the binary and bundle it with your app. Some people (no judgement from me though) would say that your app is responsible for bringing everything it needs to run anyway, so you should be doing this already.
Twelve-factor apps also do not rely on the implicit existence of any system tools. Examples include shelling out to ImageMagick or curl. [1]
This path works well for dependencies that are small or self-contained, like statically built Go apps. If your app needs shared libraries or other resources to function, you need to bundle those with your app too. It's also not great if what you bundle is large: everything you bundle is pushed up with the app, so it can slow down your pushes. You are also responsible for tracking updates and making sure that you have the latest, bug-free & security-patched binaries & libraries.
The general steps for doing this are:
Create a folder like binaries/ under the root of your app, with subfolders of bin/ and lib/.
Place all your binaries under binaries/bin and any shared libraries they require under binaries/lib.
Add a .profile file at the root of your app. This will be sourced prior to your app starting so it will put any binaries you bundle on the path and add libraries to the search path.
In .profile put the following:
export PATH=$HOME/binaries/bin:$PATH
export LD_LIBRARY_PATH=$HOME/binaries/lib:$LD_LIBRARY_PATH
That should be it. Just push your app with all the new files.
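As a rough sketch of those steps from the app root (the tool, library, and app names are made up for illustration):
# bundle the binary and any shared libraries it needs
mkdir -p binaries/bin binaries/lib
cp /path/to/mytool binaries/bin/
cp /path/to/libmytool.so binaries/lib/
# with the .profile exports above in place, push everything together
cf push my-app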
Another, easier option is to use the apt buildpack [2]. This buildpack can install required dependencies using apt. You just need to add an apt.yml file to the root of your app & run your app with multi-buildpacks (the apt buildpack first, then your normal buildpack).
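For example, a minimal sketch, assuming the apt-buildpack's apt.yml format and a cf CLI version that supports multiple -b flags (the package and app names are illustrative):
# apt.yml at the app root lists the packages to install
cat > apt.yml <<'EOF'
---
packages:
- imagemagick
EOF
# apt buildpack first, then your normal buildpack
cf push my-app -b https://github.com/cloudfoundry/apt-buildpack -b python_buildpack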
The main benefit of doing this is that you don't have to manage the dependencies. The apt buildpack will automatically install them from the repo you tell it to use, so it'll pick up new versions from there as well. This is good if what you need to install has a lot of dependencies, particularly sensitive dependencies (like openssl) or dependencies that get updated/patched often like other language runtimes (Python, Perl, Ruby, etc...).
Other benefits: it's easier because the buildpack takes care of adjusting PATH & LD_LIBRARY_PATH, and it keeps the app size smaller, so pushes are faster.
The downsides of this option are that the apt-buildpack is not an official buildpack (it's community maintained). It also works best when you have Internet access, so it can download binaries from the Internet, although you can work around this by using an internal repo.
There are a couple of other options as well, but I wouldn't recommend them unless both options above definitely won't work for you.
Use Docker. You can set up your own Docker container with all the dependencies you need plus your app code, and cf push the Docker image to CF. The downside of this is that you lose the advantages of using buildpacks, so you're back to building and managing Docker images and all the required dependencies of your app on your own.
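For reference, pushing a prebuilt image looks roughly like this (the image name is a placeholder):
cf push my-app --docker-image myregistry/myapp:latest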
You could create your own custom buildpack and supply the dependencies that way. I don't see any reason you'd want to do this though. It's a decent bit of work and in the end, you'd have something that's just more brittle and less flexible than Apt Buildpack.
It's technically possible to ship your own rootfs, but you really, really shouldn't (I'm just including this to be thorough). This is the base file system that's used by all apps on CF. Doing this has a lot of drawbacks, chief among them that it's difficult. It also applies to all apps on the foundation, can bloat the size of the rootfs, and creates a larger attack surface for anything using the rootfs (i.e. all apps).
Hope that helps!
[1] https://12factor.net/dependencies
[2] https://github.com/cloudfoundry/apt-buildpack

django virtualenv vs repository packages

I'm new to Django and I'm trying to find out which way of building a project is better:
build a separate virtualenv for each project (including core Django packages, even when they are available via the system repository),
or
use packages from the Debian repository (if available; otherwise put them in a virtualenv).
As I understand it, in case of a security update the repository version will receive the patch ASAP, whereas the virtualenv version I'll need to patch myself.
However, when moving to another server, the virtualenv will cause fewer issues than the repository version.
What is the best practice?
As far as I know virtualenv is used by almost every django developer.
It is also good practice to pin versions in your requirements.txt so that, if some of the packages change, your project keeps working in production.
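A minimal sketch of pinning and reusing those versions (the version shown is only an example):
# record the exact versions of everything installed in the active virtualenv
pip freeze > requirements.txt
# requirements.txt then contains pinned lines such as Django==1.5.1
# recreate the identical set of packages in production
pip install -r requirements.txt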
I think virtualenv is de facto the best practice for development with Python (not just with Django).

Deploying Django with virtualenv inside a distribution package?

I have to deploy a Django application onto a SuSE Linux Enterprise 11 system. Corporate rules say I need to deploy using RPMs only. While I can use ./setup.py bdist_rpm for each dependency, it's not really sane, since RPM doesn't record all of the dependencies yet. Therefore I'd have no real advantage in using RPMs and managing dependencies manually is somewhat cumbersome and I would like to avoid it.
Now I had the following idea: While building a package, I could create a virtualenv, install all my dependencies via pip there and then package it up with the rest of the code into one solid RPM.
How sensible is this approach?
I've been using this approach for about a year now and it has worked out pretty well.
One gotcha is that you'll want to check the shebang (#!) lines in any Python scripts written to the virtualenv's bin directory. These will contain the full paths used in your build environment, which probably won't be the same directory where you end up installing the virtualenv. So you may need to add some sed calls to your RPM's postinstall script to adjust the paths.
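For example, a postinstall snippet along these lines (both paths are assumptions for illustration):
# rewrite shebangs from the build-time virtualenv path to the installed location
sed -i 's|^#!/builddir/myapp/env/bin/python|#!/opt/myapp/env/bin/python|' /opt/myapp/env/bin/*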

Run Pinax without VirtualEnv

Is there a way to run Pinax without virtualenv?
I want to run it without virtualenv as I want to run it in a Django container on MediaTemple's grid-hosting service. Their containers can scale up to 1 GB of dedicated memory, so I wouldn't have to worry about my own VPS or scaling issues. But their response was:
" because of the way the DjangoContainer works, you won't be able to configure your server to use your virtualenv. Essentially the DjangoContainer is a virtualized server (to which you don't have access other than the AccountCenter tools, or the 'mtd' command line tool) with the specific purpose of serving your Django applications. It mounts your django container folder so that it has your application code, but you cannot modify the version or location of python it uses. This probably means you'll have to use Pinax without virtualenv support, as the general idea of using virtualenv in this way would be to create a custom environment for your Pinax application, which as I mentioned here is impossible to instruct the server to use. "
As of 0.9a1, Pinax can be used without pinax-boot.py, which was the virtualenv dependency (we bundled it). Requirements are project-level and must be installed with pip. However, setup_project does enforce a virtual environment when installing requirements (it calls pip for you as a convenience; I would be open to not enforcing a virtual environment here). You can pass --no-reqs to setup_project to force it to skip dependency installation. You can then run pip yourself and install the requirements however you like.
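Roughly, that looks like the following (the exact command name and requirements path depend on your Pinax version and project layout; treat these as placeholders):
# create the project but skip automatic dependency installation
pinax-admin setup_project --no-reqs mysite
# then install the requirements yourself, however you like
pip install -r mysite/requirements/project.txt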
Technically yes, but you would have to change quite a bit of the configuration that is handed out and hand-install a lot of libraries. Pinax has virtualenv as a very low-level, built-in assumption.
You can; all you need to do is find out what is in the virtualenv. Set it up, install yolk in the virtualenv, and type yolk -l to see what you need to install to get it to work.
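Something along these lines, run inside the existing virtualenv (a sketch):
# inside the working virtualenv that Pinax set up
pip install yolk
# list every installed package and its version
yolk -l
# then install that same list of packages on the target host without a virtualenv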