Cannot add ARG as an argument to CMD in Dockerfile - dockerfile

I want to pass an argument to a Dockerfile and forward it to a script as an argument. So I build the image like this:
docker build --build-arg x=somedata .
In the Dockerfile I write something like:
CMD ["python3", "script.py", ???]
When I run the container I want the following to be run:
python3 script.py somedata
How can this be done?

Pass the argument when you run the container.
docker run your-image \
python3 script.py somedata
# or in docker-compose.yml
command: python3 script.py somedata
If you'll be doing this frequently, there is a pattern of using the image's ENTRYPOINT as the command to run and its CMD as its arguments:
# Dockerfile
ENTRYPOINT ["python3", "script.py"]
CMD ["--help"]
docker run your-image \
somedata
In all cases, the command you pass to docker run replaces the image's CMD, and is appended to the ENTRYPOINT if present. If you're using the ENTRYPOINT-as-command pattern then both the ENTRYPOINT and CMD lines must use the JSON-array "exec form" syntax.

Eventually I used environment variables to pass arguments to the script.
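That environment-variable approach can be sketched like this on the script side: prefer a command-line argument if one is given, otherwise fall back to an environment variable baked in with ENV at build time. The variable name X_DATA and the fallback value are illustrative, not from the question:

```python
import os
import sys


def get_input(argv, environ):
    """Prefer a CLI argument; fall back to an env var.

    X_DATA and "default" are hypothetical names used for this sketch.
    """
    if len(argv) > 1:
        return argv[1]
    return environ.get("X_DATA", "default")


if __name__ == "__main__":
    # CMD ["python3", "script.py"] plus ENV X_DATA=... would land here
    print(get_input(sys.argv, os.environ))
```

With this, CMD needs no placeholder at all; the value travels via --build-arg into an ENV instruction and is read at run time.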

Related

How to use a variable defined at Gitlab CI stage in Dockerfile

I have a GitlabCI stage for building Docker image of a Python app. It uses a dynamic S3 variable name which is defined at stage level like this:
Build Container:
  stage: package
  variables:
    S3_BUCKET: <value>
I tried using it inside Dockerfile like this as I want to use it during build time:
FROM python:3.8
EXPOSE 8080
WORKDIR /app
COPY requirements.txt ./requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
RUN SOMETHING
ARG S3_BUCKET_NAME=${S3_BUCKET}
RUN echo "$S3_BUCKET_NAME"
RUN python3 some_Script.py
CMD streamlit run app.py
But the variable never receives a value. Is there some other way of using it inside the Dockerfile?
How do you build the image? You will need to ensure that the --build-arg flag passes S3_BUCKET_NAME in.
If you wish to persist the value within the image, so that it is available at runtime, you will need to capture it with an ENV instruction (an export in a RUN command does not survive past that layer). For example
# expect a build-time variable
ARG A_VARIABLE
# use the value to set the ENV var default
ENV an_env_var=$A_VARIABLE
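In the GitLab CI job above, that means invoking docker build with --build-arg in the stage's script. A sketch, assuming the job is allowed to run docker; the bucket value and the my-app image tag are placeholders:

```yaml
Build Container:
  stage: package
  variables:
    S3_BUCKET: my-bucket          # placeholder value
  script:
    # Forward the CI variable into the build as a build-arg;
    # the Dockerfile needs a matching "ARG S3_BUCKET_NAME" line.
    - docker build --build-arg S3_BUCKET_NAME="$S3_BUCKET" -t my-app .
```

The key point is that an ARG only receives a value if the build command explicitly supplies it; CI variables are not visible inside the Dockerfile on their own.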

Cron jobs show but are not executed in dockerized Django app

I have "django_crontab" in my installed apps.
I have a cron job configured
CRONJOBS = [
    ('* * * * *', 'django.core.management.call_command', ['dbbackup']),
]
my YAML looks like this:
web:
  build: .
  command:
    - /bin/bash
    - -c
    - |
      python manage.py migrate
      python manage.py crontab add
      python manage.py runserver 0.0.0.0:8000
build + up then I open the CLI:
$ python manage.py crontab show
Currently active jobs in crontab:
efa8dfc6d4b0cf6963932a5dc3726b23 -> ('* * * * *', 'django.core.management.call_command', ['dbbackup'])
Then I try that:
$ python manage.py crontab run efa8dfc6d4b0cf6963932a5dc3726b23
Backing Up Database: postgres
Writing file to default-858b61d9ccb6-2021-07-05-084119.psql
All good, but the cronjob never gets executed. I don't see new database dumps every minute as expected.
django-crontab doesn't run scheduled jobs itself; it's just a wrapper around the system cron daemon (you need to configure it with the location of crontab(1), for example). Since a Docker container only runs one process, you need to have a second container to run the cron daemon.
A setup I might recommend here is to write a wrapper script that does all of the required startup-time setup, then runs whatever command it is passed as arguments:
#!/bin/sh
# entrypoint.sh: runs as the main container process
# Gets passed the container's command as arguments
# Run database migrations. (Should be safe, if inefficient, to run
# multiple times concurrently.)
python manage.py migrate
# Set up scheduled jobs, if this is the cron container.
python manage.py crontab add
# Run whatever command we got passed.
exec "$@"
Then in your Dockerfile, make this script be the ENTRYPOINT. Make sure to supply a default CMD too, probably what would run the main server. With both provided, Docker will pass the CMD as arguments to the ENTRYPOINT.
# You probably already have a line like
# COPY . .
# which includes entrypoint.sh; it must be marked executable too
# must be JSON-array form
ENTRYPOINT ["./entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Now in the docker-compose.yml file, you can provide both containers, from the same image, but only override the command: for the cron container. The entrypoint script will run for both but launch a different command at its last line.
version: '3.8'
services:
  web:
    build: .
    ports:
      - '8000:8000'
    # use the Dockerfile CMD, don't need a command: override
  cron:
    build: .
    command: crond -n  # for Vixie cron; BusyBox uses "crond -f"
    # no ports:

Pass host machine's environment variables to dockerfile

My Dockerfile looks like this:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
EXPOSE 5000
ADD target/*.jar app.jar
ENV JAVA_OPTS=""
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/.urandom -jar /app.jar"]
I would like to pass a couple of environment variables like RDS_HOSTNAME to the docker container. How should I modify this file to do that?
You can pass ENV during
Build Time
Run Time
To set an ENV during build time you will need a modification in the Dockerfile.
ARG RDS_HOSTNAME
ENV RDS_HOSTNAME="${RDS_HOSTNAME}"
And pass the RDS_HOSTNAME ENV during build time.
docker build --build-arg RDS_HOSTNAME=$RDS_HOSTNAME -t my_image .
Run Time:
As mentioned in the comment you can just pass
docker run -ti -e RDS_HOSTNAME=$RDS_HOSTNAME yourimage:latest
With the second approach the value is not baked into the image, so it is not exposed if someone gets access to the image, but you have to pass it every time you run the container; with the first approach you only pass it once, at build time.
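If you use Compose, the run-time approach maps onto the environment: key. A sketch; the JAVA_OPTS value shown is a hypothetical example, not from the question:

```yaml
# docker-compose.yml sketch of the run-time approach
services:
  app:
    image: yourimage:latest
    environment:
      RDS_HOSTNAME: ${RDS_HOSTNAME}  # taken from the host shell at "docker compose up"
      JAVA_OPTS: "-Xmx512m"          # hypothetical value
```

This keeps credentials and host-specific values out of the image entirely, which is usually what you want for things like RDS_HOSTNAME.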

Docker django runs only if I specify the command

I am new to docker and I was trying to create an Image for my Django application.
I have created the image using the following Dockerfile
FROM python:3.6-slim-buster
WORKDIR /app
COPY . /app
RUN pip install -r Requirements.txt
EXPOSE 8000
ENTRYPOINT ["python", "manage.py"]
CMD ["runserver", '0.0.0.0:8000']
The problem is when I run the image using
docker run -p 8000:8000 <image-tag>
I am unable to access the app in my localhost:8000
But if I run the container using the command
docker run -p 8000:8000 <image-tag> runserver 0.0.0.0:8000
I can see my app in localhost:8000
I think you can use only the ENTRYPOINT instruction.
Try with:
FROM python:3.6-slim-buster
WORKDIR /app
COPY . /app
RUN pip install -r Requirements.txt
EXPOSE 8000
ENTRYPOINT ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Or you can write a script file (entrypoint.sh) containing that line. You could also run makemigrations and migrate in the same file.
You need to change the single quotes to double quotes in your CMD line.
Let's play with this simplified Dockerfile:
FROM alpine
ENTRYPOINT ["echo", "python", "manage.py"]
CMD ["runserver", '0.0.0.0:8000']
Now build it and run it:
$ docker build .
...
Successfully built 24d598ae4182
$ docker run --rm 24d598ae4182
python manage.py /bin/sh -c ["runserver", '0.0.0.0:8000']
Docker is pretty picky about the JSON-array form of the CMD, ENTRYPOINT, and RUN instructions. If something doesn't parse as a JSON array, it silently falls back to treating it as a plain command, which gets implicitly wrapped in a /bin/sh -c '...' invocation. That's what you're seeing here.
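The parse-or-fallback behaviour can be sketched in a few lines; this is a simplified model of what Docker does with a CMD/ENTRYPOINT line, not its actual implementation:

```python
import json


def parse_cmd(value):
    """Simplified sketch of how Docker interprets a CMD/ENTRYPOINT line.

    A valid JSON array of strings becomes the argv directly (exec form);
    anything else silently falls back to shell form, wrapped in /bin/sh -c.
    """
    try:
        parsed = json.loads(value)
        if isinstance(parsed, list) and all(isinstance(p, str) for p in parsed):
            return parsed  # exec form
    except json.JSONDecodeError:
        pass
    return ["/bin/sh", "-c", value]  # shell-form fallback
```

Single quotes are not valid JSON, so the CMD line from the question takes the fallback path, which is exactly the /bin/sh -c output shown above.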
If you edit my Dockerfile to have double quotes in the CMD line and rebuild the image, then you'll see
$ docker run --rm 58114fa1fdb4
python manage.py runserver 0.0.0.0:8000
and if you actually COPY code in, use a Python base image, and delete that debugging echo, that's the command you want to execute.

Docker Compose ENTRYPOINT and CMD with Django Migrations

I've been trying to find the best method to handle setting up a Django project with Docker. But I'm somewhat confused as to how CMD and ENTRYPOINT function in relation to the compose commands.
When I first set the project up, I need to run createsuperuser and migrate for the database. I've tried using a script to run the commands as the entrypoint in my Dockerfile but it didn't seem to work consistently. I switched to the configuration shown below, where I overwrite the Dockerfile CMD with commands in my compose file where it is told to run makemigrations, migrate, and createsuperuser.
The issue I'm having is exactly how to set it up so that it does what I need. If I set a command (shown as commented out in the code) in my compose file it should overwrite the CMD in my Dockerfile from what I understand.
What I'm unsure of is whether or not I need to use ENTRYPOINT or CMD in my Dockerfile to achieve this? Since CMD is overwritten by my compose file and ENTRYPOINT isn't, wouldn't it cause problems if it was set to ENTRYPOINT, since it would try to run gunicorn a second time after the compose command is executed?
Would there be any drawbacks in this approach compared to using an entrypoint script?
Lastly, is there a general best practice approach to handling Django's setup commands when deploying a dockerized Django application? Or am I already doing what is typically done?
Here is my Dockerfile:
FROM python:3.6
LABEL maintainer x#x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=site.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY site site
COPY templates templates
COPY logs logs
COPY scripts scripts
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=site.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "site.wsgi:application"]
And my compose file (omitted the nginx and postgres sections as they are unnecessary to illustrate the issue):
version: "3.2"
services:
  app:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile.prodtest
      args:
        requirements: requirements/production.txt
    #command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn --config gunicorn.conf --log-config logging.conf -e DJANGO_SETTINGS_MODULE=site.settings.production_test -W 4 -b 0.0.0.0:8000 site.wsgi"
    container_name: dj01
    environment:
      - DJANGO_SETTINGS_MODULE=site.settings.production_test
      - PYTHONDONTWRITEBYTECODE=1
    volumes:
      - ./:/app
      - /static:/static
      - /media:/media
    networks:
      - main
    depends_on:
      - db
I have the following entrypoint script that will attempt to do the migrate automatically on my Django project:
#!/bin/bash -x
python manage.py migrate --noinput || exit 1
exec "$@"
The only change that would need to happen to your Dockerfile is to ADD it and specify the ENTRYPOINT. I usually put these lines directly above the CMD instruction:
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
(please note that the chmod is only necessary if the docker-entrypoint.sh file in your build environment is not executable already)
I add || exit 1 so that the script will stop the container should the migrate fail for any reason. When starting your project via docker-compose, it's possible that the database may not be 100% ready to accept connections when this migrate command runs. Between the exit on error approach and the restart: always that you have in your docker-compose.yml already, this will handle that race condition properly.
Note that the -x option I specify for bash echoes out what bash is doing, which I find helpful for debugging my scripts. It can be omitted if you want less verbosity in the container logs.
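The race can also be handled inside the entrypoint itself by retrying the migrate instead of (or in addition to) relying on restart: always. A minimal sketch; the retry helper and the attempt counts are illustrative, not part of the original script:

```shell
#!/bin/sh
# retry COUNT CMD...: run CMD until it succeeds, up to COUNT attempts,
# sleeping briefly between failures.
retry() {
    attempts=$1
    shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        echo "attempt $i of $attempts failed; retrying" >&2
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# In the entrypoint this would wrap the migrate, for example:
#   retry 5 python manage.py migrate --noinput || exit 1
#   exec "$@"
```

This trades a slightly longer first startup for fewer container restarts while the database comes up.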
Dockerfile:
...
ENTRYPOINT ["entrypoint.sh"]
CMD ["start"]
entrypoint.sh will be executed all the time whilst CMD will be the default argument for it (docs)
entrypoint.sh:
if [ "$1" = "start" ]
then
    /usr/local/bin/gunicorn --config config/gunicorn.conf \
        --log-config config/logging.conf ...
elif [ "$1" = "migrate" ]
then
    # whatever
    python manage.py migrate
fi
Now it is possible to do something like
version: "3.2"
services:
  app:
    restart: always
    build:
      ...
    command: migrate # if needed
or
docker exec -it <container> ./entrypoint.sh migrate