I have deployed several Django-driven sites, mostly "concept" stuff; nothing serious. Now I'm ready to deploy a real-deal site (for my brother's medical practice), and would like to ensure that I'm doing it correctly.
My central concern is the testing environment. So far I have managed it by maintaining two separate folders, each with its own Mercurial copy of the site: I update the development branch, merge it with the release branch, and then upload to the server (WebFaction).
How do you manage testing environment for your Django projects?
All development is done on my local machine. I use virtualenv (and virtualenvwrapper) for the multiple projects. With virtualenv, you can have several versions of the same software without having to 'break' other code that may depend on a certain version. I use pip for downloading the proper libraries/applications into these separate environments. For each project (and therefore environment), I have a mercurial repository. If the new development passes all unit tests and works as expected, I send it up to the VCS. Once in the VCS, the code gets reviewed by colleagues.
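For example, a per-project environment with virtualenvwrapper and pip looks roughly like this (the project and package names are placeholders, not from the original answer):

mkvirtualenv medical_site            # create an isolated environment for this project
workon medical_site                  # switch into it
pip install Django==1.4 South        # install this project's versions without breaking other projects
pip freeze > requirements.txt        # record them so the environment can be reproduced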
I just decided to jump into using Docker to test out building a microservice application on AWS Fargate.
My question really comes from hearing about many development teams using Docker to stop people from saying "works on my machine" when committing code. I can see how Docker solves that problem, but I still do not see how Docker images are actually used in a development environment.
The workflow for anything other than production baffles me. An example of my thinking is...
A team of 10 devs all use Docker; each pulls the image from the repo into their own container, along with the source code. If they each have an individual version of the image, then any edits they make to that image are their own, and when they push back to the repo none of the edits can be merged (and on top of that, editing an image's source code is not easily done either).
I am thinking of it in the same way as Git/GitHub, where code is pushed to a branch and then merged into master to create a finished product.
I guess pulling the code from the GitHub master branch and building the Docker image from it is the way it is meant to be used, but again that points back to my original assumption that Docker is for production environments rather than development.
Is Docker used in development mainly so a dev can test a feature in the same container every other dev on the team is using, so that all the environments match across the team?
I just really do not understand the workflow of development environments with Docker.
I'd highlight three cases where I've found Docker particularly useful, prior to a production deploy:
Docker is really useful for installing local dependencies. If your application needs a database, docker run postgresql with appropriate options. Need a clean start? Delete the container. Running two microservices that need separate databases? Start two containers. The second microservice is maintained by another team? Run it in a container too.
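For example, a throwaway local database might look like this (the version tag and password are arbitrary choices, not recommendations):

docker run -d --name dev-db -e POSTGRES_PASSWORD=devpass -p 5432:5432 postgres:15   # start a local PostgreSQL
docker rm -f dev-db                                                                 # need a clean start? delete the container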
Docker is useful for capturing the build environment in the CI system. Jenkins, for example, can run build steps inside a container, bind-mounting the current work tree in, so it's useful to build an image that just contains build-time dependencies (which can be updated independently of the CI system itself).
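A sketch of that pattern outside Jenkins (the image name and build script are placeholders):

docker build -t build-env -f Dockerfile.build .               # image containing only build-time dependencies
docker run --rm -v "$PWD":/src -w /src build-env ./build.sh   # run the build against the current work tree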
If you're running Docker in production, you can test the exact thing you're about to run. You're guaranteed the install environment will be the same in the QA and prod environments, because it's encapsulated inside the same Docker image. A developer can debug problems against the production-installed code without actually being in production.
In the basic scenario you describe, an important detail to note is that you never "edit an image"; you always docker build a new image from its Dockerfile and other source code. In compiled languages (C++, Go, Java, Rust, Haskell) the source code won't be in the image. Even if you're "using Docker in development" the actual source code will be in some other system (frequently Git), and typically you will have a CI system that builds "official" images from that source code.
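To make that concrete, here is a hypothetical minimal Dockerfile for a Django app (all names are made up):

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["gunicorn", "myproject.wsgi:application", "--bind", "0.0.0.0:8000"]

Running 'docker build -t myapp:latest .' against the source checkout produces a brand-new image; the old image is never edited in place.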
Where I see Docker proposed for day-to-day development, it's either because the language ecosystem in use makes it hard to have multiple versions concurrently installed, or to avoid installing software on the host system. You need specific tooling support to "develop inside a container", and if developers choose their own IDE, this support is not universal. Conversely, in between OS package managers (APT, Homebrew) and interpreter version managers (rbenv, nvm) it's usually straightforward to install a couple of things on the host. If your application isn't that sensitive to, say, the specific version of Node, it's probably easier to use whichever version is already installed on your host than to try to insert Docker into the process.
I'm writing some Django apps and I have this setup:
a local machine (laptop) that I use for development, with a local dev virtualenv
a remote VPS (with a public address) used for testing, with a test virtualenv; I need to have some end users test my app before moving to prod
a remote VPS (with a public address, same machine as above) used for production, with a production virtualenv
I use git for versioning.
The idea that I have so far (after reading various tutorials) to manage everything is:
develop on the local machine in a new branch
push the branch to git
deploy the branch into the test virtualenv
test it
test passed: merge the branch into master and deploy into the production virtualenv
And I have a lot of questions about this:
is this a recommended approach?
how can I get the new branch onto the test virtualenv and not onto production? Do I need two separate app folders, one for prod and one for test?
How can I then move code from test to prod?
Thanks in advance; I'm a Django/git novice, so I'm trying to approach this the best way from the start.
It seems almost right to me (but there are many strategies).
I'd make a testing branch, so you can continue pushing to the development branch while others are testing the testing branch. Then, when it passes the tests, merge it into master.
(Also, if you want to make your life easier, use fab files to 'pull' on the remote machine; see the sketch below.)
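A rough sketch of that flow with plain git and ssh (branch names, hosts, and paths are assumptions for illustration):

git checkout -b feature-x                 # develop on the local machine in a new branch
git push origin feature-x                 # push the branch to git
ssh user@vps 'cd /srv/test/app && git fetch && git checkout feature-x'   # deploy into the test environment
# ...end users test it...
git checkout master && git merge feature-x
git push origin master
ssh user@vps 'cd /srv/prod/app && git pull origin master'                # deploy into production

In this sketch, test and prod are simply two separate checkouts on the VPS, one per virtualenv, which also answers the two-folders question.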
Why does Redmine not use the development and test environments?
In the official installation guide they only show one environment when setting up the databases, advise running Bundler while skipping the development and test groups, and run the Rails server in production mode.
I think the guide describes the installation process only for a server (which runs in production mode). I think it is done this way so as not to confuse new users (who do not have much knowledge of Rails).
You can easily use the same guide to set up Redmine locally (I did it successfully several times ;). To install Redmine locally you only need to change a few points in the guide; see the sketch below.
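Roughly, the changes amount to keeping the development and test gem groups and targeting the development environment; a hedged sketch using standard Rails commands (not quoted from the Redmine guide):

bundle install                                      # install all gem groups instead of skipping dev and test
RAILS_ENV=development bundle exec rake db:migrate   # migrate the development database instead of production
bundle exec rails server                            # runs in development mode by default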
What are the pros/cons (regarding maintainability) of installing Django apps system-wide vs installing them project-wide? Is there a recommended approach?
By django extensions, do you mean django-extensions?
In all honesty, I'd steer clear of system-wide installations: they instantly tie you to the system's installed versions, and if incompatibilities arise system-wide, that is a bigger issue than with a project-wide approach. In addition, they add complexity when deploying to remote services, and don't stick to the Twelve-Factor App principles. Keeping everything self-contained (project code and its dependencies) will make life easier in the long run.
I'd recommend using virtualenv and pip to install your dependencies, which keeps them isolated to the project in question, and dramatically simplifies deployment.
The recommended approach is not to copy any reusable app into your project; reusable apps provide extension points and settings for customization. It is also recommended to use virtualenv for each project and to install any project-specific Python modules there. This will protect you from conflicts between different versions.
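A minimal sketch of that project-wide setup (the package names and version pins are illustrative only):

# requirements.txt, kept in the project repository
Django==1.4.2
django-extensions==0.9

# inside the project's virtualenv, never system-wide:
pip install -r requirements.txt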
I'm new to Django and my very first project is my blog. I wonder how Django developers who use PyDev normally synchronize with their remote hosting server when updating their sites?
I would also like to know how you combine the use of git with a Django project. Should I just make a repository for the entire project?
At my company we've got a separate git repository for each project, including the Django sources, which are put on the PYTHONPATH for each project, making the Django version project-dependent. The folder structure is something like:
/.git
/projectname/app1
/projectname/app2
/projectname/manage.py
/django-lib/django/...
As django-lib is not a Python module, we include both / and /django-lib in the PYTHONPATH. If your project is becoming large, you might want to consider using git submodules on your apps.
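A hedged sketch of the submodule approach (the URL and path are made up for illustration):

git submodule add https://example.com/git/app1.git projectname/app1   # track the app as a submodule
git submodule update --init                                           # fetch submodules after cloning the main repo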
We've also set up several servers to support the developers. There's a testing server running a central testing database, and a setup including Apache with WSGI to make testing on a real server possible, which is sometimes a bit different from the local manage.py runserver the developers use before committing their changes.
The testing server is updated from the master branch of our git repository. We've made several scripts that allow all developers to do this without letting them log in to the server via SSH, but that is just during pre-release. After release, that server will become our staging server, and we'll remove all scripts from it to make it just like our production server.
Every developer has set up their local project to communicate with the central testing database, which contains test data. I push my changes from the command line myself, but you could also use EGit for this.
When we've got a release, we put it in a separate branch, called 'release' (obviously), and the production server pulls only from that branch. This is done via SSH, but I don't really know what your server setup looks like, so I guess that last step is entirely up to you.
I hope that this has helped you a bit. I won't say that this is the best workflow possible, but it works for us and you should figure out what works for you.
Most experienced Django developers use pip (or distribute) and virtualenv to deal with all the Python packages you might need for your Django projects (including Django itself).
Personally, all I keep in my project's git repository is a bunch of segregated requirements lists generated by pip:
. ~/Dev/environs/$PROJECT_NAME/bin/activate    # activate the project's virtualenv
pip freeze > ./docs/requirements/main.list     # snapshot the packages installed in it
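To recreate the environment elsewhere (a deploy server, a teammate's machine), the same list can be fed straight back to pip:

pip install -r ./docs/requirements/main.list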
I'm fairly sure most Django developers would be familiar with Fabric, which I use for:
streamlining local interaction with git,
pushing to our central repository,
pulling from our production or test server,
touching the wsgi file on the relevant server,
and pretty much any other kind of task you might find yourself using an SSH terminal session for.
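As a minimal sketch of the last two tasks (the host, paths, and project name are assumptions, using the classic Fabric 1 API):

# fabfile.py
from fabric.api import cd, env, run

env.hosts = ['user@production.example.com']

def deploy():
    with cd('/home/user/webapps/myproject'):
        run('git pull origin master')      # pull the latest code on the server
        run('touch myproject/wsgi.py')     # touch the wsgi file so mod_wsgi reloads the app

Running 'fab deploy' then performs both steps over SSH.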
For those cases where I need to make changes to someone else's Django application in order to make it work or suit our purposes, I:
fork it on github,
clone from my forked repo
make the changes
push it up to my own repo
and provide merge requests to the original repo owner
This way, I have a repo that my pip requirements lists can keep pulling from until the original application owner updates their own repo.
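For example, a requirements line can point straight at the fork (the URL and egg name here are hypothetical):

# pull the patched app from my fork instead of PyPI
-e git+https://github.com/myuser/django-someapp.git#egg=django-someapp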