How can I use an environment variable in the exec form? - dockerfile

Is there a way, with the exec form, to control the path of sshd_config via an environment variable? The service should still respond to a SIGTERM.
ENV SSHD_CONFIG=/var/test/sshd_config
ENTRYPOINT ["/usr/sbin/sshd"]
CMD ["-D" , "-f", "$SSHD_CONFIG"]
As far as I've seen, openssh doesn't have an environment variable that specifies the path of sshd_config.
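One common workaround, not from the original post and offered only as a sketch: keep a JSON-array ENTRYPOINT, but hand the variable expansion to a shell and exec into sshd so it stays PID 1 and still receives SIGTERM.
ENV SSHD_CONFIG=/var/test/sshd_config
# The shell expands $SSHD_CONFIG; exec then replaces the shell with sshd,
# so sshd runs as PID 1 and SIGTERM is delivered to it directly.
ENTRYPOINT ["/bin/sh", "-c", "exec /usr/sbin/sshd -D -f \"$SSHD_CONFIG\""]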

Related

How to log in to W&B by using ENTRYPOINT

I want to know how to use ENTRYPOINT in a Dockerfile to run a shell script that logs me in.
There's no need for a custom entrypoint. Simply set the WANDB_API_KEY environment variable in a Kubernetes spec or via the -e flag passed to docker run.
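For example, with docker run (the image name here is a placeholder):
docker run -e WANDB_API_KEY=your_api_key your-training-image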

Pass Django SECRET_KEY in Environment Variable to Dockerized gunicorn

Some Background
Recently I had a problem where my Django application was using the base settings file despite DJANGO_SETTINGS_MODULE being set to a different one. It turned out that gunicorn wasn't inheriting the environment variable, and the solution was to add -e DJANGO_SETTINGS_MODULE=sasite.settings.production to the Dockerfile CMD entry where I call gunicorn.
The Problem
I'm having trouble deciding how I should handle the SECRET_KEY in my application. I am setting it in an environment variable; I previously had it stored in a JSON file, but that seemed less secure (correct me if I'm wrong, please).
The other part of the problem is that gunicorn doesn't seem to inherit the environment variables that are normally set on the container. As I stated above, I ran into this with DJANGO_SETTINGS_MODULE, and I imagine gunicorn would have the same issue with SECRET_KEY. What would be the way around this?
My Current Approach
I set SECRET_KEY in an environment variable and load it in the Django settings file. The value is set in a file "app-env" which contains export SECRET_KEY=<secretkey>, and the Dockerfile contains RUN source app-env to set the environment variable in the container.
Follow Up Questions
Would it be better to set the SECRET_KEY environment variable with the Dockerfile instruction ENV instead of sourcing a file? Is it acceptable practice to hard-code a secret key in a Dockerfile like that (it seems to me that it's not)?
Is there a "best practice" for handling secret keys in Dockerized applications?
I could always go back to JSON if it turns out to be just as secure as environment variables. But it would still be nice to figure out how people handle SECRET_KEY and gunicorn's issue with environment variables.
Code
Here's the Dockerfile:
FROM python:3.6
LABEL maintainer x#x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=sasite.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY sasite sasite
COPY templates templates
COPY logs logs
COPY scripts scripts
RUN source app-env
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=sasite.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "sasite.wsgi:application"]
I'll start with why it doesn't work as is, and then discuss the options you have to move forward:
During an image build, each RUN instruction is executed in its own standalone container, and only changes to that container's filesystem write layer are captured for subsequent layers. This means that your source app-env command runs and exits without writing anything to disk, making that RUN line effectively a no-op.
Docker allows you to bake environment variables into the image using the ENV instruction, which you've done with the DJANGO_SETTINGS_MODULE variable. I don't necessarily agree that SECRET_KEY should be specified here, although it might be acceptable to put a development-only value in the Dockerfile.
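If you did go that route for development only, it would be a single ENV line; the value below is a placeholder, not something from the original answer:
# Development-only default; override it at runtime for staging/production
ENV SECRET_KEY=dev-insecure-placeholder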
Since the SECRET_KEY value may differ between environments (staging and production), it makes sense to set that variable at runtime. For example:
docker run -d -e SECRET_KEY=supersecretkey mydjangoproject
The -e option is short for --env. There is also --env-file, which lets you pass in a file of variables and values. If you aren't using the Docker CLI directly, your Docker client should let you specify these as well (for example, docker-compose lets you set both in its YAML file).
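For example, with an env file (the file name and its contents here are assumptions; the image name comes from the run example above):
# prod.env -- one VARIABLE=value pair per line; lines starting with # are ignored
SECRET_KEY=supersecretkey
DJANGO_SETTINGS_MODULE=sasite.settings.production

docker run -d --env-file ./prod.env mydjangoproject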
In this specific case, since you have a file inside the container (app-env) that defines the variables you need, you can source it at runtime. There are two ways to accomplish this. The first is to change the CMD to this:
CMD source app-env && /usr/local/bin/gunicorn --config config/gunicorn.conf --log-config config/logging.conf -e DJANGO_SETTINGS_MODULE=sasite.settings.production_test -w 4 -b 0.0.0.0:8001 sasite.wsgi:application
This uses the shell form of CMD rather than the exec form, which means the entire argument to CMD is run as /bin/sh -c "<your command>".
The shell will handle running source app-env and then your gunicorn command.
If you ever needed to change the command at runtime, you'd have to remember to prepend source app-env && wherever needed, which brings me to the other approach: an ENTRYPOINT script.
The ENTRYPOINT feature in Docker allows you to handle any necessary startup steps inside the container when it is first started. Consider the following entrypoint script:
#!/bin/bash
# Source app-env from /app, return to the previous directory, then replace
# this shell with the requested command so it runs as PID 1 and receives signals.
cd /app && source app-env && cd - && exec "$@"
This will explicitly cd to the directory where app-env lives, source it, cd back to the previous working directory, and then exec the command. You can now override both the command and the working directory at runtime for this image and still have any variables defined in app-env be active. To use this script, you need to ADD it somewhere in your image, make sure it is executable, and then point the ENTRYPOINT directive at it:
ADD entrypoint.sh /entrypoint.sh
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
With the entrypoint strategy, you can leave your CMD as-is without changing it.
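A quick way to confirm the entrypoint is doing its job (using the image name from the run example above) is to override the command at runtime and inspect the environment:
docker run --rm mydjangoproject env | grep SECRET_KEY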

How to set $HOME before startup script in Google Compute Engine

In my use case, I am trying to use the $HOME variable to identify my app server path during instance startup.
I am using Google Compute Engine with a startup script that uses the $HOME variable, but it looks like $HOME is not set (or the user is not yet created) while the startup script executes.
It throws a "$HOME not set" error. Is there any workaround for this? For now I have to restart the instance after creating it for the first time, so that $HOME is set when it comes back up, but that is an ugly hack for production.
Could someone help me with this?
The startup script is executed as root, when your user has not been created yet and no user is logged in (you can check this by running users at startup and comparing the output of cat /etc/shadow after a reboot).
Honestly, I don't understand how just a reboot can make your $HOME be populated at startup time, since on Linux the HOME environment variable is set by the login program:
by login for console, telnet and rlogin sessions
by sshd for SSH connections
by gdm, kdm or xdm for graphical sessions.
However, if you need a reboot and don't want to do it manually, you can make the instance reboot just once after it is created:
if [ -f flagreboot ]; then
  # Second boot: the flag file already exists, so run the real startup work.
  ...
  your script
  ...
else
  # First boot: create the flag file and reboot once.
  touch flagreboot
  reboot
fi
On the other hand, if you already know what your application's $HOME path is going to be, you can simply export the variable at the start of the startup script to populate it manually:
$ export HOME=/home/username
printenv
cd $HOME
touch test.txt
echo $HOME >> test.txt
echo $PWD >> test.txt
printenv > env.txt
I included the above code in my startup script. Strangely, $HOME, $PWD and many other environment variables are not set while the startup script is running. Here are the contents of the files I created during startup.
test.txt:
/
env.txt:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
PWD=/
LANG=en_US.UTF-8
SHLVL=2
_=/usr/bin/printenv
Here's the output (some values removed) of the printenv command run immediately after the VM creation.
XDG_SESSION_ID=
HOSTNAME=server1
SELINUX_ROLE_REQUESTED=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=
SELINUX_USE_CURRENT_RANGE=
SSH_TTY=/dev/pts/0
USER=
LS_COLORS=
MAIL=/var/spool/mail/xyz
PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/*<username>*/.local/bin:/home/*<username>*/bin
PWD=/home/*<username>*
LANG=en_US.UTF-8
SELINUX_LEVEL_REQUESTED=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/*<username>*
LOGNAME=*<username>*
SSH_CONNECTION=
LESSOPEN=||/usr/bin/lesspipe.sh %s
XDG_RUNTIME_DIR=/run/user/1000
_=/usr/bin/printenv
To summarize, not all environment variables are set at the time the startup script executes; they are populated some time later. I find that weird, but that's how it works.
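A common way to cope with that in a startup script is to set the variables the script depends on explicitly rather than relying on the login-time environment. A minimal sketch, assuming a username that is not given in the original posts:
# $HOME is empty at this point, so fall back to a known path (username assumed)
export HOME="${HOME:-/home/username}"
cd "$HOME"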

Django: environment variable for SECRET_KEY not working

I have SECRET_KEY = os.environ['SECRET_KEY'] in my prod.py, and SECRET_KEY=secret_string in my .bashrc
This causes a 502 error, but if I set SECRET_KEY="secret_string" directly in the settings file, it works. How can I use an environment variable to do this?
I'm starting gunicorn via sudo service gunicorn restart, and I have an upstart script.
Here is the output of cat /proc/<PID>/environ:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin^@TERM=linux^@UPSTART_JOB=gunicorn^@UPSTART_INSTANCE=^@
You need to do:
export SECRET_KEY=secret_string
in your .bashrc. If you just do:
SECRET_KEY=secret_string
It's only available in the current process; when you run the Django server/shell, the subprocess has no idea this variable exists. export makes the variable available to subprocesses as well.
.bashrc only affects interactive bash shells; init scripts are not affected by it in any way.
You should copy the export SECRET_KEY=... line to the top of your init script.
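For an Upstart job like the gunicorn one shown in the environ dump, that could look roughly like the sketch below (the job file path, gunicorn path, and WSGI module are assumptions, not taken from the question):
# /etc/init/gunicorn.conf (hypothetical job file)
description "gunicorn"
start on runlevel [2345]
stop on runlevel [!2345]

script
    # Export the key at the top of the init script so the gunicorn process inherits it
    export SECRET_KEY=secret_string
    exec /usr/local/bin/gunicorn --bind 0.0.0.0:8000 myproject.wsgi:application
end script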

Why are my environment variables not detected when starting up celery?

I am running Django on CentOS, served by Apache and mod_wsgi. I followed the instructions to set up Celery to run as a daemon.
I put this init script https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd in /etc/init.d/celeryd
and set up the configuration in
/etc/default/celeryd
I am using environment variables in my django settings.py file so I can use different configurations in my development and production environments. I know these environment variables are set correctly because the app has been working this whole time. I think that celery is just not getting the variable passed to it or something.
I checked by running the env command; the variables show up fine.
To start up I just do:
service celeryd start
It tries to start up but throws an error saying that I do not have my environment variables set.
I wrote a function to grab environment variables, and that is what throws the error:
import os
from django.core.exceptions import ImproperlyConfigured

def get_env_variable(var_name):
    try:
        return os.environ[var_name]
    except KeyError:
        error_msg = "Set the %s environment variable" % var_name
        raise ImproperlyConfigured(error_msg)
The only way that error is thrown is if the environment variable is not set correctly.
Does anyone know why celery is not detecting the environment variables that I have set?
I just discovered that I not only had to set my environment variables in the system, but I also had to pass those variables into the /etc/default/celeryd script.
I just put my variables at the bottom of /etc/default/celeryd:
export MY_SPECIAL_VARIABLE="my production variable"
export MY_OTHERSPECIAL_VARIABLE="my other production variable"
If your environment variables are written in ~/.bashrc, you can add source ~/.bashrc near the top of /etc/init.d/celeryd.
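As a sketch, that suggestion would look like this near the top of /etc/init.d/celeryd (note that source is a bash-ism, and ~ resolves to root's home when the service starts at boot):
# Pull in the user-level environment variables before the rest of the init script runs
source ~/.bashrc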
Does your /etc/default/celeryd define what user celery should run as?
In mine I have:
CELERYD_USER="celery"
CELERYD_GROUP="celery"
Can you post your /etc/default/celeryd config file?
I had the same problem using Celery and Supervisor; I had supervisord use a shell script that sources the env variables and then starts the Celery worker.
#!/bin/bash
# Load the required environment variables before starting the worker
source ~/.profile
CELERY_LOGFILE=/usr/local/src/imbue/application/imbue/log/celeryd.log
CELERYD_OPTS=" --loglevel=INFO --autoscale=10,5 --concurrency=8"
cd /usr/local/src/imbue/application/imbue/conf
# exec so the worker replaces this shell and receives signals from supervisord directly
exec celery worker -n celeryd@%h -f $CELERY_LOGFILE $CELERYD_OPTS
in ~/.profile:
export C_FORCE_ROOT="true"
export KEY="DEADBEEF"