So hopefully this makes sense to the non-Djangoers of the k8s community. I will try my best to explain the setup / reasoning.
With Django, we have LOTS of what are called management commands that we can run within the scope and environment of our Django app that can really help development and deployment. I'm sure most other frameworks have similar, if not identical, concepts.
An example would be the "python manage.py migrate" command, which ensures our migration scripts are applied to, and reflected in, the associated database.
There are approx. 30-50 core commands we can run; we can also create our own, as well as use those provided by any installed third-party applications.
Anyways. The most important takeaway is that there are a lot of commands we can and do run.
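A few stock examples, just to make this concrete (these are all core Django commands):
$ python manage.py help              # list every available management command
$ python manage.py migrate           # apply pending database migrations
$ python manage.py createsuperuser   # interactively create an admin user
$ python manage.py collectstatic     # gather static files for deployment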
Now, I have the following k8s Job to run the "migrate" command:
apiVersion: batch/v1
kind: Job
metadata:
  name: asencis-web-migrate-job
spec:
  template:
    spec:
      containers:
      - name: asencis-web-migrate-job
        image: asencis/asencis-base:latest
        imagePullPolicy: Always
        command: ['python', 'manage.py', 'migrate']
        envFrom:
        - configMapRef:
            name: asencis-config
        - secretRef:
            name: asencis-secret
      restartPolicy: Never
  backoffLimit: 5
This job essentially runs the python manage.py migrate command within the application scope/environment. It works like a charm:
$ kubectl apply -f asencis-web-migrate-job.yaml
It's very useful in the deployment of the application: once all of our tests have run, we can build an image, "rollout restart" the cluster, and then apply any migrations. It's incredibly seamless. (Hats off to the k8s core team for making such a useful product!)
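(For context, that deploy step boils down to something like the following; the Deployment name here is hypothetical:)
$ docker build -t asencis/asencis-base:latest . && docker push asencis/asencis-base:latest
$ kubectl rollout restart deployment/asencis-web                  # hypothetical Deployment name
$ kubectl delete job asencis-web-migrate-job --ignore-not-found   # a completed Job won't re-run on re-apply
$ kubectl apply -f asencis-web-migrate-job.yaml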
Anyways.
My question is essentially this: can we pass an argument to the above kubectl apply command on the Job, to run any command we like?
An example would be:
$ kubectl apply -f asencis-web-job.yaml --command python manage.py migrate
You will probably need to build your own tool for this.
You could use a shell script around yq, for example:
#!/bin/sh
yq eval \
  ".spec.template.spec.containers[0].command[2] = \"$1\"" \
  template-web-job.yaml \
  | kubectl apply -f -
You can fill in more parts of the Job this way: compute .metadata.name from $USER-$1-$(date +%s), attach labels: to the Pod to find it later, and so on.
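For instance, a fuller sketch along those lines (the label key and the naming scheme are assumptions, and $1 is substituted into command[2] exactly as above):
#!/bin/sh
# Usage: ./run-manage-job.sh migrate
NAME="$USER-$1-$(date +%s)"   # must end up a valid lowercase DNS-1123 name
yq eval \
  ".metadata.name = \"$NAME\"
   | .spec.template.metadata.labels.submitter = \"$USER\"
   | .spec.template.spec.containers[0].command[2] = \"$1\"" \
  template-web-job.yaml \
  | kubectl apply -f -
Invoked as ./run-manage-job.sh migrate, this submits a uniquely named Job running python manage.py migrate, labelled with whoever submitted it.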
If this wasn't a one-off Job I might recommend a more Kubernetes-native tool like Helm or Kustomize. Both tools need somewhat involved filesystem layouts, and then you need to pass the variables (script command, submitter) in some form; this wouldn't actually be easier than using yq to rewrite the YAML. Helm is a little more oriented towards mostly-stable installations (it knows about the main application Deployment and how to update it in-place).
If you have Helm already, you could build a similar script around helm template. Or if you have jq but not yq you could rewrite the Job in JSON {"apiVersion": "batch/v1", "kind": "Job", ...} but otherwise use the same script.
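For the jq route, the same idea might look like this (template-web-job.json being the Job rewritten in JSON):
#!/bin/sh
# jq --arg passes $1 in safely as a jq variable; kubectl apply accepts JSON as well as YAML
jq --arg cmd "$1" \
  '.spec.template.spec.containers[0].command[2] = $cmd' \
  template-web-job.json \
  | kubectl apply -f -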
Related
I have automatic builds set up in Google Cloud, so that each time I push to the master branch of my repository, a new image is built and pushed to Google Container Registry.
These images pile up quickly, and I don't need all the old ones. So I would like to add a build step that runs a bash script which calls gcloud container images list-tags, loops over the results, and deletes the old ones with gcloud container images delete.
I have the script written and it works locally. I am having trouble figuring out how to run it as a step in Cloud Builder.
It seems there are 2 options:
- name: 'ubuntu'
  args: ['bash', './container-registry-cleanup.sh']
In the above step in cloudbuild.yml I try to run the bash command in the ubuntu image. This doesn't work because the gcloud command does not exist in this image.
- name: 'gcr.io/cloud-builders/gcloud'
  args: [what goes here???]
In the above step in cloudbuild.yml I try to use the gcloud image, but since "Arguments passed to this builder will be passed to gcloud directly", I don't know how to call my bash script here.
What can I do?
You can customize the entry point of your build step. If you need gcloud installed, use the gcloud cloud builder and do this
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: "bash"
  args:
  - "-c"
  - |
    echo "enter 1 bash command per line"
    ls -la
    gcloud version
    ...
As the official documentation on creating custom build steps indicates, to execute a shell script from your source, the step's container image must contain a tool capable of running the script.
The example below shows how to configure your args so the execution performs correctly.
steps:
- name: 'ubuntu'
  args: ['bash', './myscript.bash']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/custom-script-test', '.']
images: ['gcr.io/$PROJECT_ID/custom-script-test']
I would recommend taking a look at the documentation and the example above, and testing them to confirm whether they help you achieve the execution of the script.
For your case specifically, there is this other answer here, which indicates that you will need to override the entrypoint of the build step to bash so the script runs:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: /bin/bash
  args: ['-c', 'gcloud compute instances list > gce-list.txt']
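Applied to your case, that would presumably look something like this (assuming container-registry-cleanup.sh sits at the root of your source):
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['./container-registry-cleanup.sh']
The gcloud builder image ships with both bash and gcloud, so the script's gcloud calls should resolve.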
Besides that, the two articles below include more information and examples on how to configure custom scripts to run in Cloud Build, and I would recommend taking a look at them as well.
CI/CD: Google Cloud Build — Custom Scripts
Mastering Google Cloud Build Config Syntax
Let me know if the information helped you!
I'm new to GitLab (and I only know the basic features of git: pull, push, merge, branch...).
I'm using a local DynamoDB database launched with docker run -p 8000:8000 amazon/dynamodb-local to do unit testing on my Python project. So I have to launch this docker container in the Gitlab CI/CD so that my unit tests work.
I have already read the documentation on this subject on the GitLab site without finding an answer to my problem, and I know that I have to modify my .gitlab-ci.yml file in order to launch the docker container.
When using GitLab, you can use Docker-in-Docker.
At the top of your .gitlab-ci.yml file
image: docker:stable
services:
  - docker:dind
Then in your stage for tests, you can start up the database and use it.
unit_tests:
  stage: tests
  script:
    - export CONTAINER_ID=$(docker run -d -p 8000:8000 amazon/dynamodb-local)  # -d so docker run returns the container ID instead of blocking
    ## You might need to wait a few seconds with `sleep X` for the container to start up.
    ## Your database is now reachable at docker:8000
    ## Run your tests here. Database host=docker and port=8000
This is the best way I have found to achieve it, and the easiest to understand.
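To make those comments concrete, here's a hedged sketch of the script section; the sleep length, the environment variable name, and the test invocation are assumptions, not part of the original answer (and note the docker:stable image has no Python, so install your test toolchain first or use an image that has it):
  script:
    - export CONTAINER_ID=$(docker run -d -p 8000:8000 amazon/dynamodb-local)
    - sleep 5                                        # crude readiness wait for DynamoDB Local
    - docker ps                                      # sanity check: the container should be listed
    - export DYNAMODB_ENDPOINT=http://docker:8000    # hypothetical env var your test suite reads
    - pytest                                         # hypothetical test command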
I'm using Cloud Build with the gcloud builder. I override the entrypoint to be bq so I can run some BigQuery SQL in my build step. Previously, I had the SQL embedded directly in the YAML config for Cloud Build. This works fine:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bq'
  args: ['query', '--use_legacy_sql=false', 'SELECT 1']
Now I'd like to refactor the SQL out of the YAML and into a file instead. According to here, you can cat the file or pipe it to bq. This works on the command line without any problems.
But I can't get it to work with Cloud Build. I've tried lots of different combinations, escaping chars, etc., but no matter what I try, the shell doesn't evaluate/execute the `cat my_query.sql` backticks and instead treats them as the query itself.
It works fine on the command line, but in Cloud Build it won't work:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bq'
  args: ['query', '--use_legacy_sql=false', '`cat my_query.sql`']
I also tried piping it instead of using cat, but I get the same error.
I must be missing something obvious here, but I can't see it. I could build a custom docker image, and wrap everything in a shell script, but I'd rather not have to do that if possible.
How do you use Cloud Build with shell evaluation inside a build step?
You can create a custom Bash script, e.g.:
#!/bin/bash
if [ $# -eq 0 ]; then
    echo "No arguments supplied"
    exit 1
fi
bq query --use_legacy_sql=false < "$1"
Name this run_query.sh, then define your steps as:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: ['run_query.sh', 'my_query.sql']
Disclaimer: this is based on reading the docs, but I haven't actually used Cloud Build.
I have done this:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  dir: 'my/directory'
  args: ['-c', 'bq --project_id=my-project-name query --use_legacy_sql=false < ./my_query.sql']
Which works with gcloud builds submit ... and eliminates one file if you prefer.
Since the most important benefit of using Docker is keeping the dev and prod environments the same, let's rule out the option of using two different docker-compose.yml files.
Let's say we have a Django application, and we use gunicorn to serve it in production, with a dedicated Apache2 as a reverse proxy (this Apache2 is outside Docker by design). So this application (docker-compose) has only two parts, web (Django) and db (MySQL). There's nothing wrong with the db part.
For the Django part, the dev routine without Docker would be using venv and python3 manage.py runserver, or whatever shortcut an IDE provides. We can happily change our code, and the dev server is smart enough to pick up the change and reflect it in no time.
Things get tricky when Docker comes in, since all source code should be packed into the image; this gives our dev workflow the big overhead of recreating the image and container again and again. One might come up with the following solutions (which I find inelegant):
In docker-compose.yml, use a volume to mount the source code folder into the container, so that all changes in the host source folder automatically show up in the container and gunicorn picks them up. --- This does remove most of the container-recreation overhead, but we can't use the same docker-compose.yml in production, as it introduces a dependency on the source code sitting on the host server.
I know there is a command line option to mount a host folder into the container, but to my knowledge this option only exists in docker run, not docker-compose. So using a different command to bring the service up in different environments is another dead end. (I am not 100% sure about this as I'm still quite new to Docker; please correct me if I'm wrong.)
TLDR;
How can I set up my env so that
I use only one single docker-compose.yml for both dev and prod
I'm able to dev with live changes easily without recreating docker container
Thanks a lot!
Define your django service in docker-compose.yml as
services:
  backend:
    image: backend
Then add a file for dev: docker-compose.dev.yml
services:
  backend:
    extends:
      file: docker-compose.yml
      service: backend
    volumes:
      - local_path:path
To launch for prod, just docker-compose up
To launch for dev
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
To hot reload the dev Django app, just send gunicorn a HUP signal: ps aux | grep gunicorn | grep greencar_proj | awk '{ print $2 }' | xargs kill -HUP
I have also tried to jam as much functionality as possible into a single docker-compose.yml file. A few strategies I would consider:
Define different services for prod and dev, so you'll run docker-compose up dev or docker-compose up prod or docker-compose run dev (see the sketch after this list). There is some copying here, but usually not a lot.
Use multiple docker-compose.yml files and merge them, e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d. More details here: https://docs.docker.com/compose/extends/
I usually just comment out my volumes section, but that's probably not the best solution.
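For the first strategy, a minimal sketch might look like this (the image name, WSGI module, and mount paths are all assumptions):
services:
  prod:
    image: myapp:latest                  # assumed image name
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
  dev:
    image: myapp:latest
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app                           # mount host source so code changes show up live
You would then run docker-compose up prod on the server and docker-compose up dev locally, from the same file.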
I deploy a Django application with Cloud Foundry. Building the app takes some time; however, I need to launch the application with different start commands, and the only solution I have today is to fully rebuild the application each time.
With Docker, changing the start command is very easy and doesn't require rebuilding the whole container, so there must be a more efficient way to do this.
Here are the applications launched:
FrontEndApp-Prod: The Django App using gunicorn
OrchesterApp-Prod: The Django Celery Camera & Heartbeat
WorkerApp-Prod: The Django Celery Workers
All these apps are basically identical, they just use different routes, configurations and start commands.
Below is the file manifest.yml I use:
defaults: &defaults
  timeout: 120
  memory: 768M
  disk_quota: 2G
  path: .
  stack: cflinuxfs2
  buildpack: https://github.com/cloudfoundry/buildpack-python.git
  services:
    - PostgresDB-Prod
    - RabbitMQ-Prod
    - Redis-Prod

applications:
- name: FrontEndApp-Prod
  <<: *defaults
  routes:
    - route: www.myapp.com
  instances: 2
  command: chmod +x ./launch_server.sh && ./launch_server.sh

- name: OrchesterApp-Prod
  <<: *defaults
  memory: 1G
  instances: 1
  command: chmod +x ./launch_orchester.sh && ./launch_orchester.sh
  health-check-type: process
  no-route: true

- name: WorkerApp-Prod
  <<: *defaults
  instances: 3
  command: chmod +x ./launch_worker.sh && ./launch_worker.sh
  health-check-type: process
  no-route: true
Two options I can think of for this:
You can use some of the new v3 API features and take advantage of their support for multiple processes in a Procfile. With that, you'd essentially have a Procfile like this (each process type needs a unique name):
web: ./launch_server.sh
orchester: ./launch_orchester.sh
worker: ./launch_worker.sh
The platform should then stage your app once, but deploy it three times based on the droplet produced from staging. It's slick because you end up with only one application that has multiple processes running off of it. The drawback is that this is an experimental API at the time of writing, so it still has some rough edges, plus the exact support you get can vary depending on how quickly your CF provider installs new versions of the Cloud Controller API.
You can read all the details about this here:
https://www.cloudfoundry.org/blog/build-cf-push-learn-procfiles/
You can use cf local. This is a cf cli plugin which allows you to build a droplet locally (staging occurs in a docker container on your local machine). You can then take that droplet and deploy it as much as you want.
The process would look roughly like this; you'll just need to fill in some options/flags (hint: run cf local -h to see all the options):
cf local stage
cf local push FrontEndApp-Prod
cf local push OrchesterApp-Prod
cf local push WorkerApp-Prod
The first command will create a file ending in .droplet in your current directory; the subsequent three commands will deploy that droplet to your provider and run it. The net result is that you should end up with three applications, like you have now, all deployed from the same droplet.
The drawback is that your droplet is local, so you're uploading it three times, once for each app.
I suppose you also have a third option, which is to just use a Docker container. That has its own advantages & drawbacks, though.
Hope that helps!