Execute a bash script before docker-compose build is triggered by the build command

Is there a way before building of docker containers with docker compose to execute a bash script?
something like
version: "3"
before_script: # execute my local script
services:
  database: ....

No, there is not.
You can write a custom bash build script that executes your local script before starting the containers.
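For example, a minimal sketch of such a wrapper, assuming a hypothetical before_script.sh hook in the project root (both file names are placeholders, not a Compose feature):

```shell
#!/bin/sh
# build.sh: hypothetical wrapper around docker-compose.
# Run as e.g. `./build.sh build` or `./build.sh up --build`.
set -e

# Run the local pre-build hook first, if it exists and is executable.
if [ -x ./before_script.sh ]; then
    ./before_script.sh
fi

# Hand the original arguments over to docker-compose.
# (With no arguments there is nothing to forward, so just exit cleanly.)
if [ "$#" -gt 0 ]; then
    exec docker-compose "$@"
fi
```

You would then run `./build.sh build` instead of `docker-compose build`.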

Cron jobs show but are not executed in dockerized Django app

I have "django_crontab" in my installed apps.
I have a cron job configured
CRONJOBS = [
    ('* * * * *', 'django.core.management.call_command', ['dbbackup']),
]
My YAML looks like this:
web:
  build: .
  command:
    - /bin/bash
    - -c
    - |
      python manage.py migrate
      python manage.py crontab add
      python manage.py runserver 0.0.0.0:8000
After build + up, I open the CLI:
$ python manage.py crontab show
Currently active jobs in crontab:
efa8dfc6d4b0cf6963932a5dc3726b23 -> ('* * * * *', 'django.core.management.call_command', ['dbbackup'])
Then I try that:
$ python manage.py crontab run efa8dfc6d4b0cf6963932a5dc3726b23
Backing Up Database: postgres
Writing file to default-858b61d9ccb6-2021-07-05-084119.psql
All good, but the cronjob never gets executed. I don't see new database dumps every minute as expected.
django-crontab doesn't run scheduled jobs itself; it's just a wrapper around the system cron daemon (you need to configure it with the location of crontab(1), for example). Since a Docker container only runs one process, you need to have a second container to run the cron daemon.
A setup I'd recommend here is to write a wrapper script that does all of the required startup-time setup, then runs whatever command it was passed as arguments:
#!/bin/sh
# entrypoint.sh: runs as the main container process
# Gets passed the container's command as arguments
# Run database migrations. (Should be safe, if inefficient, to run
# multiple times concurrently.)
python manage.py migrate
# Set up scheduled jobs, if this is the cron container.
python manage.py crontab add
# Run whatever command we got passed.
exec "$@"
Then in your Dockerfile, make this script be the ENTRYPOINT. Make sure to supply a default CMD too, probably what would run the main server. With both provided, Docker will pass the CMD as arguments to the ENTRYPOINT.
# You probably already have a line like
# COPY . .
# which includes entrypoint.sh; it must be marked executable too
# ENTRYPOINT must be in JSON-array (exec) form
ENTRYPOINT ["./entrypoint.sh"]
CMD python manage.py runserver 0.0.0.0:8000
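To see the ENTRYPOINT/CMD interplay in isolation, here is a toy shell sketch (not Docker itself): the CMD words are appended to the ENTRYPOINT words, so the entrypoint script receives the command as "$@":

```shell
# Simulates how Docker combines ENTRYPOINT and CMD:
# ENTRYPOINT ["./entrypoint.sh"] + CMD ["echo", "serving on :8000"]
# is roughly equivalent to running: ./entrypoint.sh echo "serving on :8000"

entrypoint() {
    echo "startup setup runs here"   # e.g. migrations, crontab add
    "$@"                             # then launch the handed-over command
}

entrypoint echo "serving on :8000"
```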
Now in the docker-compose.yml file, you can provide both containers, from the same image, but only override the command: for the cron container. The entrypoint script will run for both but launch a different command at its last line.
version: '3.8'
services:
  web:
    build: .
    ports:
      - '8000:8000'
    # uses the Dockerfile CMD; no command: override needed
  cron:
    build: .
    command: crond -n # for Vixie cron; BusyBox uses "crond -f"
    # no ports:

Docker with Serverless- files not getting packaged to container

I have a Serverless application using Localstack that I am trying to get fully running via Docker.
I have a docker-compose file that starts localstack for me.
version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
When I run docker-compose up and then deploy my application to Localstack with sls deploy, everything works as expected. However, I want Docker to run everything for me: a single Docker command should start Localstack and deploy my service to it.
I have added a Dockerfile to my project and have added this
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
npm install -g serverless-localstack;
EXPOSE 3000
CMD ["sls","deploy", "--host", "0.0.0.0" ]
I then run docker build -t serverless/docker . followed by docker run -p 49160:3000 serverless/docker but am receiving the following error
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I guess this is what would happen if I tried to run sls deploy in the wrong folder. I have logged into the Docker container and cannot see the app that I want to run there. What am I missing in the Dockerfile that is needed to package it up?
Thanks
Execute the pwd command inside the container while it is running. Try
docker run -it serverless/docker pwd
The error shows that sls is not able to find the config file in the current working directory. Either add your config file to the current working directory (include the copy in your Dockerfile) or copy it to a specific location in the container and pass --config in CMD (sls deploy --config)
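For instance, a sketch of the Dockerfile with the project files copied in (the /app working directory is an arbitrary choice, not something the question prescribes):

```dockerfile
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
    npm install -g serverless-localstack;

# Copy the project (including serverless.yml) into the image
# so that sls can find its config at deploy time.
WORKDIR /app
COPY . .

EXPOSE 3000
CMD ["sls", "deploy", "--host", "0.0.0.0"]
```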
This command can only be run in a Serverless service directory. Make
sure to reference a valid config file in the current working directory
Be sure that you have Serverless installed.
Once installed, create a service:
% sls create --template aws-nodejs --path myService
cd into the folder containing serverless.yml:
% cd myService
This will deploy the function to AWS Lambda:
% sls deploy

Pass host machine's environment variables to dockerfile

My Dockerfile looks like this:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
EXPOSE 5000
ADD target/*.jar app.jar
ENV JAVA_OPTS=""
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/.urandom -jar /app.jar"]
I would like to pass a couple of environment variables like RDS_HOSTNAME to the docker container. How should I modify this file to do that?
You can pass an ENV at:
Build time
Run time
To set an ENV at build time, you need to modify the Dockerfile:
ARG RDS_HOSTNAME
ENV RDS_HOSTNAME="${RDS_HOSTNAME}"
Then pass RDS_HOSTNAME as a build argument at build time:
docker build --build-arg RDS_HOSTNAME=$RDS_HOSTNAME -t my_image .
Run Time:
As mentioned in the comment you can just pass
docker run -ti -e RDS_HOSTNAME=$RDS_HOSTNAME yourimage:latest
With the second approach, your Docker image will not contain the information if someone gets access to it, but you will need to pass the variable every time you run the container; with the first approach, you only need to pass it once, at build time.
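If the container is started through docker-compose, the run-time variant can also be expressed declaratively; a sketch (the service name and image tag are placeholders):

```yaml
version: '3'
services:
  app:
    image: yourimage:latest
    environment:
      # Forwarded from the host shell at `docker-compose up` time
      - RDS_HOSTNAME=${RDS_HOSTNAME}
    ports:
      - "5000:5000"
```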

Dockerfile CMD not running, but the command works when use inside the container

I want to run a Python web app in a container, but the CMD in the Dockerfile isn't running (CMD ["gunicorn","--config","python:config","web:app"]). When I run the command gunicorn --config python:config web:app inside the container, it works (in that case, the container is started with "/bin/bash").
How can I modify the CMD in the Dockerfile to run the app?

Django + docker + periodic commands

What's the best practices for running periodic/scheduled tasks ( like manage.py custom_command ) when running Django with docker (docker-compose) ?
f.e. the most common case - ./manage.py clearsessions
Django recommends running it with cron jobs...
But Docker does not recommend running more than one service in a single container...
I guess I could create a docker-compose service from the same image for each command I need to run, with the command being an infinite loop with the needed sleeps, but that seems like overkill to do for every command that needs to be scheduled.
What's your advice ?
The way that worked for me:
In my Django project I have a crontab file like this:
0 0 * * * root python manage.py clearsessions > /proc/1/fd/1 2>/proc/1/fd/2
Installed/configured cron inside my Dockerfile
RUN apt-get update && apt-get -y install cron
ADD crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
and in docker-compose.yml, add a new service that builds the same image as the Django project but runs cron -f as its command:
version: '3'
services:
  web:
    build: ./myprojectname
    ports:
      - "8000:8000"
    # ...
  cronjobs:
    build: ./myprojectname
    command: ["cron", "-f"]
I ended up using this project, Ofelia:
https://github.com/mcuadros/ofelia
You just add it to your docker-compose
and use a config like:
[job-exec "task name"]
schedule = #daily
container = myprojectname_1
command = python ./manage.py clearsessions
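A sketch of what the extra compose service might look like, assuming Ofelia's config-file mode (the image tag, config paths, and mount are assumptions; check the project's README for the current syntax):

```yaml
ofelia:
  image: mcuadros/ofelia:latest
  depends_on:
    - web
  command: daemon --config=/etc/ofelia/config.ini
  volumes:
    # Ofelia talks to the Docker daemon to run jobs in other containers
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - ./ofelia.ini:/etc/ofelia/config.ini:ro
```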
Create one Docker image with your Django application.
You can use it to run your Django app (the web interface) and, at the same time, schedule your periodic tasks with cron on the host by passing the command to the docker executable, like this:
docker run --rm your_image python manage.py clearsessions
(Note that this is docker run, not docker exec: exec has no --rm flag, since it runs inside an already-running container.) The --rm makes sure that Docker removes the container once it finishes; otherwise you will quickly accumulate stopped containers that are of no use.
You can also pass in any extra arguments, for example using -e to modify the environment:
docker run --rm -e DJANGO_DEBUG=True -e DJANGO_SETTINGS_MODULE=production \
    your_image python manage.py clearsessions
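To schedule this from the host, a crontab entry on the host machine could then look like the following (the image name is a placeholder):

```
# Run clearsessions in a throwaway container every day at midnight
0 0 * * * docker run --rm your_image python manage.py clearsessions
```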