We use Jenkins as our continuous integration system. We have two Django servers validated by Jenkins.
Jenkins validates the first server successfully. The second server depends on the first one, so we would like to launch the first server itself at the end of its validation.
We are using Python, virtualenv and Django, and defined the Virtualenv Builder as follows:
pip install -r requirements.txt
rm -f .coverage
fab localhost test
coverage xml
nohup python manage.py runserver 9090 &
The issue is that the build never ends due to the nohup.
How can I launch the server after a successful build?
I had the same problem.
Ken,
I tried using fabric, but again python manage.py runserver runs continuously, so the next command never starts.
And just a few minutes ago my colleague showed me how to use nohup together with Jenkins' BUILD_ID variable, like this, to get Success from the build and leave the Django server running:
BUILD_ID=dontKillMe nohup python manage.py runserver host_server &
This worked for our Django project testing.
Since you are using fabric to test, I would recommend defining another fabric task, say, deploy, which you could call assuming the build succeeds.
Much like the call to fab completes for a successful build such that you get to the nohup line, I would expect the deploy task to return also.
You may also want to consider making the server a service (either via an /etc/init.d style script, or upstart if Ubuntu), and have the fabric task stop the currently running one, copy over whatever new files it needs (or similar process), and then restart it.
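A minimal sketch of such a deploy task (Fabric 1.x style, matching the fab localhost test call above); the service name and paths are placeholders, not anything from the original setup:

from fabric.api import task, local

@task
def deploy():
    # Stop the currently running server, sync over the new code, start it again.
    # "myapp" and /srv/myapp/ are hypothetical names for your own service/layout.
    local("sudo service myapp stop")
    local("rsync -a --delete ./ /srv/myapp/")
    local("sudo service myapp start")

The Jenkins step could then end with fab localhost test deploy; since the init system keeps the server running in the background, the fab call returns and the build can finish.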
Assuming what you have above is a bash script or similar, you may want to also define set -e so that, in case any of the commands returns a non-success code, the script will fail, and in turn, fail the build.
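Putting those pieces together, the Virtualenv Builder script might look like this (a sketch based on the commands above; BUILD_ID=dontKillMe stops Jenkins' ProcessTreeKiller from terminating the nohup'd server, as noted in the other answer):

set -e                          # fail the build if any command fails
pip install -r requirements.txt
rm -f .coverage
fab localhost test
coverage xml
# Keep the server running after the build finishes.
BUILD_ID=dontKillMe nohup python manage.py runserver 9090 &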
Related
I have a Django Docker container for the backend of my website. There is a volume for the database, so I can keep the data even if I change the backend configuration.
I have tried to put the createsuperuser command in my Dockerfile and in a script launched with "command" in docker-compose. In the Dockerfile, the problem is that the prompt is not connected to the database...
In the script, the command is re-run each time the container is started.
I would like this command to run only once, but I don't know how to proceed.
The problem is that the container is rebuilt in my CI/CD pipeline if I change the configuration files, and so the command is re-run.
I have seen this post Run command in Docker Container only on the first start but that also works only if the container is not rebuilt.
A workaround with a createsuperuser command that fails on subsequent runs would work, and that seemed to be the case with previous Django versions (before version 4), but I now get a "django.db.utils.IntegrityError: UNIQUE constraint failed: auth_user.username" error, which tells me the command is being run multiple times and gives me errors in the database...
You can use environment variables and have createsuperuser read its values from them.
If you ignore the command's return value, it doesn't matter whether the superuser already exists. As a bonus, you can be sure that the defined superuser always exists.
And if you change the environment variables, the superuser will be recreated with the changed values.
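A minimal sketch of a startup/entrypoint command along those lines, assuming Django 3.0+ (where createsuperuser --noinput reads the DJANGO_SUPERUSER_USERNAME, DJANGO_SUPERUSER_EMAIL and DJANGO_SUPERUSER_PASSWORD environment variables) and that those variables are set in docker-compose:

python manage.py migrate --noinput
# "|| true" ignores the failure when the superuser already exists
python manage.py createsuperuser --noinput || true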
See similar question here: run initial commands in a docker-compose service
I am developing a Django Wagtail application on my local machine, connected to a local postgres server.
I have a test server and a production server.
However, when I develop locally and try to upload it, there is always some issue with makemigrations and migrate, e.g. KeyError etc.
What are the best practices for ensuring I do not run into these issues? What files do I need to port across?
I'll tell you what I do and what most of the companies I worked at as a Django developer did, and I can tell you from experience that it worked pretty well.
First, containerize your application. This will make your life much easier, remove external influences on your code, and give you an easy way to reproduce your environment.
Your Dockerfile should start from some Python image and should do basically 3 things (sketched below):
Install your dependencies from requirements.txt
Run the python manage.py migrate --noinput command
Run an HTTP server such as gunicorn with gunicorn -c /gunicorn.py wsgi:application
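A minimal Dockerfile sketch of those three steps; the Python version and paths are assumptions, and migrate runs when the container starts (via CMD) rather than at image build time, since the database is not reachable while the image is being built:

FROM python:3.11-slim

WORKDIR /app

# 1. Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the project and the gunicorn config referenced by -c /gunicorn.py
COPY . .
COPY gunicorn.py /gunicorn.py

# 2. Apply migrations and 3. start the HTTP server
CMD python manage.py migrate --noinput && gunicorn -c /gunicorn.py wsgi:application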
You will run makemigrations on your local machine and make sure that everything is working before committing the migrations to the repo.
In your gunicorn.py you will put your settings to run the app, such as the number of CPUs to use, the binding port, and the folder your app is in, something like:
import os
import multiprocessing

# Chdir to specified directory before apps loading.
# https://docs.gunicorn.org/en/stable/settings.html#chdir
chdir = '/app/'

# Bind the application to port 8000 on all IPv4 interfaces.
# https://docs.gunicorn.org/en/stable/settings.html#bind
bind = '0.0.0.0:8000'

# Number of worker processes, based on the number of CPUs available.
# https://docs.gunicorn.org/en/stable/settings.html#workers
workers = multiprocessing.cpu_count() * 2 + 1
Second, containerize your other stuff, for example the postgres database, redis (for caching), and a connection pooler for the database, depending on the size of your application.
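An illustrative docker-compose snippet for those supporting services; the images, credentials and volume names are placeholders:

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  pgdata: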
It's highly recommended that you have a step in the pipeline to run tests; they need to run before everything else, maybe just after lint.
OK, what now? Now you need a way to deploy that stuff. The best approach for that scenario is: push your image to the GitHub registry, and you can add a tag to it, for example:
IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
docker tag $IMAGE_NAME $IMAGE_ID:staging
docker push $IMAGE_ID:staging
This can be added in a GitHub Action in the build step, for example:
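An illustrative workflow skeleton showing where those commands fit; the image name, branch and trigger are assumptions:

name: build-staging
on:
  push:
    branches: [ main ]
permissions:
  contents: read
  packages: write   # allow GITHUB_TOKEN to push to ghcr.io
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      IMAGE_NAME: myapp   # hypothetical image name
    steps:
      - uses: actions/checkout@v4
      - name: Log in to the GitHub container registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build, tag and push the staging image
        run: |
          docker build -t $IMAGE_NAME .
          IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
          IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
          docker tag $IMAGE_NAME $IMAGE_ID:staging
          docker push $IMAGE_ID:staging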
After having your new code in a new image on GitHub, you just need to update the current one. This can be done by creating a script on the server and running that script from a GitHub Action; it is something like:
docker pull ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME
echo 'Restarting Application...'
docker stop {YOUR_CONTAINER} && docker-compose up -d
sudo systemctl restart nginx
echo 'Cleaning old images'
sudo docker system prune -af
You can see that I create the image with a staging tag. You can create a rule in GitHub Actions, for example, to trigger that action when you create a new release, and create another action to be triggered on every new commit that builds/deploys a dev tag.
For the migration problem, the first thing is: when your application goes live, squash every migration into the first one (you can drop the database and all the migrations, then create the database and run the makemigrations command again to reach this), so you have a clean migration on the server. Never create unnecessary relations between tables, prefer cached properties instead of adding new columns where you can, use UUIDs for unique ids, and try not to make breaking changes to the database; it's hard, but if you plan the database ahead of time it's not so difficult to do.
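One way to do that squash before going live (destructive, development only; it assumes the default app layout and that the database can be dropped and recreated from scratch):

# Remove every generated migration file, keeping the migrations packages.
find . -path "*/migrations/*.py" -not -name "__init__.py" -delete
find . -path "*/migrations/*.pyc" -delete
# After dropping and recreating the database, generate a clean initial
# migration and apply it.
python manage.py makemigrations
python manage.py migrate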
Hit me up if you have any questions. A lot of the stuff I described can be done on many other platforms such as GitLab, Travis or Circle CI, but I use GitHub Actions in the example because I think it is simpler to picture.
EDIT:
I forgot to tell you to have a cron job on your server doing backups of your databases. The migrate command will apply the changes only after verification, but if something else breaks the database, this can save your life.
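A hypothetical crontab entry for such a backup, assuming a postgres container named "postgres" and a database called "mydb":

# Dump the database every night at 03:00 (the % in date must be escaped in cron).
0 3 * * * docker exec postgres pg_dump -U postgres mydb > /backups/mydb_$(date +\%F).sql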
I have a Django app up and running in Google App Engine flexible. I know how to run migrations using the cloud proxy or by setting the DATABASES value but I would like to automate running migrations by doing it in the deployment step. However, there does not seem to be a way to run a custom script before or after the deployment.
The only way I've come up with is by doing it in the entrypoint command which you can set in the app.yaml:
entrypoint: bash -c 'python3 manage.py migrate --noinput && gunicorn -b :$PORT app.wsgi'
This feels a lot like doing it wrong. A lot of Googling didn't provide a better answer.
Defining the python3 manage.py migrate command in your app.yaml file will make it run every time a new instance is spawned and set up to serve traffic. Although technically this may not be an issue (no migration will happen if the database schema hasn't changed), this isn't the right place to declare it.
You'd want this command to run once on every new version code push. This fits perfectly in a CI/CD approach. There are several tutorials on the Google Cloud online documentation using Bitbucket Pipelines or Travis CI for example but you can use many other CI/CD solutions.
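A rough sketch of such a deploy step, assuming the Cloud SQL Auth proxy and that the connection name and database settings are provided by the pipeline; migrations then run once per code push rather than once per instance:

# Open a tunnel to Cloud SQL, apply migrations, then deploy the new version.
./cloud_sql_proxy -instances=$CLOUDSQL_CONNECTION_NAME=tcp:5432 &
python3 manage.py migrate --noinput
gcloud app deploy app.yaml --quiet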
I am building a Python + Django development environment using Docker. I defined Dockerfile files and services in docker-compose.yml for the web server (nginx) and database (postgres) containers, and a container that will run our app using uwsgi. Since this is a dev environment, I am mounting the app code from the host system so I can easily edit it in my IDE.
The question I have is where/how to run migrate command.
In case you don't know Django, the migrate command creates the database structure and later changes it as needed by the project. I have seen people run migrate as part of the compose command directive command: python manage.py migrate && uwsgi --ini app.ini, but I do not want migrations to run on every container restart. I only want it to run once when I create the containers and never again unless I rebuild.
Where/how would I do that?
Edit: there is now an open issue with the compose team. With any luck, one-time command containers will get supported by compose. https://github.com/docker/compose/issues/1896
You cannot use RUN because, as you mentioned in the comments, your source is mounted when the container runs, not when the image is built.
You cannot use CMD either, since you don't want it to run every time you restart the container.
I recommend using docker exec manually after running the container. I do not think there is a way to automate this inside a Dockerfile or docker-compose because of the two reasons above.
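For example, using the compose wrapper around docker exec (assuming the app service is called "web" in docker-compose.yml):

docker-compose up -d
docker-compose exec web python manage.py migrate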
It sounds like what you need is a tool for managing project tasks. dobi is a tool designed to handle these tasks (disclaimer: I am the author of this tool).
You can see an example of how to run a migration here: https://github.com/dnephin/dobi/tree/master/examples/init-db-with-rails. The example uses rails, but it's basically the same idea as django.
You could set up a task called migrate which would run the command in a container and write the data to a volume. Then, when you start your docker-compose containers, use that volume as the source for your database service.
https://github.com/docker/compose/issues/1896 is finally resolved now by the new service profiles introduced with docker-compose 1.28.0. With profiles you can mark services to be only started in specific profiles:
services:
  nginx:
    # ...
  postgres:
    # ...
  uwsgi:
    # ...
  migrations:
    profiles: ["cli-only"] # profile name chosen freely
    # ...
docker-compose up # start only your app services, no migrations
docker-compose run migrations # run migrations on-demand
docker exec -it container-name bash
Then you will be inside the container and you can run any command you normally do when you develop without using docker.
I couldn't find information on how to get the build result from manage.py runserver.
It runs constantly and no server log is output.
This way I can't execute the next shell command or trigger the next job.
The only solution I have come to is to use parallel jobs.
Anyone here with a better idea?
Thanks.