I have SECRET_KEY = os.environ['SECRET_KEY'] in my prod.py, and SECRET_KEY=secret_string in my .bashrc
This causes a 502 error, but if I hard-code SECRET_KEY="secret_string", it works. How can I use an environment variable to do this?
I'm starting gunicorn via sudo service gunicorn restart, and I have an upstart script.
Here is the output of cat /proc/<PID>/environ:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin^@TERM=linux^@UPSTART_JOB=gunicorn^@UPSTART_INSTANCE=^@
You need to do:
export SECRET_KEY=secret_string
in your .bashrc. If you just do:
SECRET_KEY=secret_string
It's only available in the current process; when you run the Django server/shell, the subprocess has no idea the variable exists. export makes the variable available to subprocesses as well.
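The difference is easy to demonstrate in a quick shell session (FOO is just a stand-in variable name):

```shell
# plain assignment: the variable exists only in the current shell
FOO=bar
sh -c 'echo "child sees: [$FOO]"'   # prints: child sees: []

# export: the variable is copied into every child process's environment
export FOO
sh -c 'echo "child sees: [$FOO]"'   # prints: child sees: [bar]
```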
.bashrc only affects interactive bash shells. Init scripts are not affected by it in any way.
You should copy the export SECRET_KEY=... line to the top of your init script.
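With upstart specifically, an alternative to exporting inside the script body is the env stanza in the job file itself. A hypothetical /etc/init/gunicorn.conf fragment (the job description and value are placeholders):

```
# /etc/init/gunicorn.conf (illustrative fragment)
description "gunicorn"

# make SECRET_KEY part of the job's environment
env SECRET_KEY=secret_string
```

upstart's env stanza sets the variable for the job's processes directly, so no export is needed.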
So this has been a long-running problem for me and I'd love to fix it; I also think it will help a lot of others. I'd like to run Django commands after SSHing into my Elastic Beanstalk EC2 instance, e.g.
python manage.py dumpdata
The reason this is not possible is the missing environment variables. They are present when the server boots up but are unset as soon as the server is running (EB creates a virtualenv within the EC2 instance and deletes the variables from there).
I've recently figured out that there is a prebuilt script to retrieve the env variables on the EC2 instances:
/opt/elasticbeanstalk/bin/get-config environment
This will return a stringified object like this:
{"AWS_STATIC_ASSETS_SECRET_ACCESS_KEY":"xxx-xxx-xxx","DJANGO_KEY":"xxx-xxx-xxx","DJANGO_SETTINGS_MODULE":"xx.xx.xx","PYTHONPATH":"/var/app/venv/staging-LQM1lest/bin","RDS_DB_NAME":"xxxxxx","RDS_PASSWORD":"xxxxxx"}
This is where I'm stuck currently. I think I would need a script that takes this object, parses it, and sets the key/value pairs as environment variables. I would need to be able to run this script from the EC2 instance.
Or a command to execute from the .ebextensions that would get the variables and sets them.
I am absolutely unsure how to proceed at this point. Am I overlooking something obvious here? Has someone written a script for this already? Is this even the right approach?
I'd love your help!
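A script along the lines the question describes could be sketched like this. It shells out to python3 purely to parse the JSON safely, and the inline JSON is a stand-in for the real get-config output:

```shell
#!/bin/bash
# stand-in for: json="$(/opt/elasticbeanstalk/bin/get-config environment)"
json='{"DJANGO_KEY":"xxx","RDS_DB_NAME":"mydb"}'

# turn each key/value pair into a safely quoted "export KEY=value" line and eval it
eval "$(printf '%s' "$json" | python3 -c '
import json, shlex, sys
for k, v in json.load(sys.stdin).items():
    print("export %s=%s" % (k, shlex.quote(v)))
')"

echo "$DJANGO_KEY $RDS_DB_NAME"   # prints: xxx mydb
```

eval-ing generated export statements is only reasonable here because shlex.quote escapes each value first.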
Your env variables are stored in /opt/elasticbeanstalk/deployment/env
Thus to export them, you can do the following (must be root to access the file):
export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
Once you execute the command you can confirm the presence of your env variables using:
env
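The export $(cat ... | xargs) trick can be tried against any file in the same VAR=value format; the path below is a throwaway stand-in, not the real EB file:

```shell
# sample file in the same one-VAR=value-per-line format as the EB env file
cat > /tmp/sample-env <<'EOF'
DJANGO_KEY=xxx
RDS_DB_NAME=mydb
EOF

# xargs joins the lines onto one command line; export then exports each pair
export $(cat /tmp/sample-env | xargs)

echo "$DJANGO_KEY $RDS_DB_NAME"   # prints: xxx mydb
```

Note this breaks on values containing spaces; it is fine for simple keys like these.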
To use this in your .ebextensions, you can try:
container_commands:
  10_dumpdata:
    command: |
      export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
      source $PYTHONPATH/activate
      python ./manage.py dumpdata
Some Background
Recently I had a problem where my Django application was using the base settings file despite DJANGO_SETTINGS_MODULE being set to a different one. It turned out that gunicorn wasn't inheriting the environment variable; the solution was to add -e DJANGO_SETTINGS_MODULE=sasite.settings.production to the Dockerfile CMD entry where I call gunicorn.
The Problem
I'm having trouble deciding how I should handle the SECRET_KEY in my application. I am setting it in an environment variable, though I previously had it stored in a JSON file; that seemed less secure (correct me if I'm wrong, please).
The other part of the problem is that gunicorn doesn't inherit the environment variables that are otherwise set on the container. As I stated above, I ran into this with DJANGO_SETTINGS_MODULE, and I imagine gunicorn would have the same issue with SECRET_KEY. What would be the way around this?
My Current Approach
I set SECRET_KEY in an environment variable and load it in the Django settings file. I set the value in a file "app-env", which contains export SECRET_KEY=<secretkey>; the Dockerfile contains RUN source app-env in order to set the environment variable in the container.
Follow Up Questions
Would it be better to set the environment variable SECRET_KEY with the Dockerfile command ENV instead of sourcing a file? Is it acceptable practice to hard code a secret key in a Dockerfile like that (seems like it's not to me)?
Is there a "best practice" for handling secret keys in Dockerized applications?
I could always go back to JSON if it turns out to be just as secure as environment variables. But it would still be nice to figure out how people handle SECRET_KEY and gunicorn's issue with environment variables.
Code
Here's the Dockerfile:
FROM python:3.6
LABEL maintainer x@x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=sasite.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY sasite sasite
COPY templates templates
COPY logs logs
COPY scripts scripts
RUN source app-env
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=sasite.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "sasite.wsgi:application"]
I'll start with why it doesn't work as is, and then discuss the options you have to move forward:
During the build process of a container, a single RUN instruction is run as its own standalone container. Only changes to the filesystem of that container's write layer are captured for subsequent layers. This means that your source app-env command runs and exits, and likely makes no changes on disk, making that RUN line a no-op.
Docker allows you to specify environment variables at build time using the ENV instruction, which you've done with the DJANGO_SETTINGS_MODULE variable. I don't necessarily agree that SECRET_KEY should be specified here, although it might be okay to put a value needed for development in the Dockerfile.
Since the SECRET_KEY variable may be different for different environments (staging and production) then it may make sense to set that variable at runtime. For example:
docker run -d -e SECRET_KEY=supersecretkey mydjangoproject
The -e option is short for --env. Additionally, there is --env-file, which lets you pass in a file of variables and values. If you aren't using the docker CLI directly, your docker client should have the ability to specify these as well (for example, docker-compose lets you specify both of these in the YAML).
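As a hypothetical --env-file example (the file name and values are made up, and the docker run line is shown as a comment since it needs a built image to run):

```shell
# prod.env: one VAR=value per line, no quotes, no "export"
cat > prod.env <<'EOF'
SECRET_KEY=supersecretkey
DJANGO_SETTINGS_MODULE=sasite.settings.production
EOF

# then start the container with the file:
#   docker run -d --env-file prod.env mydjangoproject
```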
In this specific case, since you have something inside the container that knows what variables are needed, you can call that at runtime. There are two ways to accomplish this. The first is to change the CMD to this:
CMD source app-env && /usr/local/bin/gunicorn --config config/gunicorn.conf --log-config config/logging.conf -e DJANGO_SETTINGS_MODULE=sasite.settings.production_test -w 4 -b 0.0.0.0:8001 sasite.wsgi:application
This uses the shell form of CMD rather than the exec form. This means that the entire argument to CMD will be run inside /bin/sh -c "...".
The shell will handle running source app-env and then your gunicorn command.
If you ever needed to change the command at runtime, you'd have to remember to prepend source app-env && where needed, which brings me to the other approach: using an ENTRYPOINT script.
The ENTRYPOINT feature in Docker allows you to handle any necessary startup steps inside the container when it is first started. Consider the following entrypoint script:
#!/bin/bash
cd /app && source app-env && cd - && exec "$@"
This will explicitly cd to the location where app-env is, source it, cd back to whatever the oldpwd was, and then execute the command. Now, it is possible for you to override both the command and working directory at runtime for this image and have any variables specified in the app-env file to be active. To use this script, you need to ADD it somewhere in your image and make sure it is executable, and then specify it in the Dockerfile with the ENTRYPOINT directive:
ADD entrypoint.sh /entrypoint.sh
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
With the entrypoint strategy, you can leave your CMD as-is without changing it.
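The entrypoint mechanics can be sanity-checked outside Docker. Here the /tmp paths and the key value are stand-ins, and the app-env file uses an export line as in the question:

```shell
# stand-in for the app-env file (the question's file uses export lines)
cat > /tmp/app-env <<'EOF'
export SECRET_KEY=supersecretkey
EOF

# stand-in entrypoint: source the env file, then exec the real command
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/bash
source /tmp/app-env
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh

# the exec'd command inherits the exported variable
/tmp/entrypoint.sh sh -c 'echo "$SECRET_KEY"'   # prints: supersecretkey
```

Because exec replaces the entrypoint shell with the command, only exported variables survive into the final process, which is why the app-env file needs export lines.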
Here's what I'm working with right now:
Ubuntu Trusty 14.04
Rails 4.2.6
Ruby 2.2.3
Passenger
Nginx
When I try to visit the IP I get this message:
Incomplete response received from application
When I look at nginx/error.log I see:
Missing `secret_token` and `secret_key_base` for 'production' environment, set these values in `config/secrets.yml`
On the server I did:
RAILS_ENV=production bundle exec rake secret
I placed that result into each of these files for good measure:
~/.bashrc
~/.bash_profile
~/.profile
/app/shared/config/local_env.yml
For all shell scripts the format is:
export SECRET_KEY_BASE="[key]"
For the local_env.yml I used just:
SECRET_KEY_BASE="[key]"
I've also tried entering it without quotation marks.
I've restarted the server each time I made a change. No cigar.
What else might be the issue?
-- UPDATE
I've even added the secret key to the secrets.yml file directly. So now I'm thinking my issue is either something to do with passenger/nginx or with a typo somewhere.
It is more likely that the environment variables are not actually set than that Rails is failing to pick them up. You're raking secrets, which I don't do; I set them up manually in /etc/environment and do not check any secrets into source control. The following steps should help you either resolve or home in on the problem.
On your Ubuntu server for system wide environment variables
1- $env
Look for your SECRET_TOKEN and SECRET_KEY_BASE. The error tells you that these are not set, this is just a technique to check env. (RAILS_ENV will also be shown in the list if it is set.)
2- $sudo nano /etc/environment
Add the following lines -- use your actual values between double quotes; do not use [key] or any programmatic replacement. Note that /etc/environment is read by PAM as plain KEY=value lines, so omit the export keyword:
SECRET_TOKEN="T99ABC..."
SECRET_KEY_BASE="99ABC..."
3- $logout / $login to reload environment vars
4- $env - Check the environment again
Look for your SECRET_TOKEN and SECRET_KEY_BASE to be set.
5- Try deploying again. If it fails, check the environment vars using $env again. It will tell you if something in your deploy is smashing your SECRET_* env vars.
I am running django on centos served by apache and mod_wsgi. I followed the instructions to set up celery to be run as a daemon.
I put this init script https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd in /etc/init.d/celeryd
and set up the configuration in
/etc/default/celeryd
I am using environment variables in my django settings.py file so I can use different configurations in my development and production environments. I know these environment variables are set correctly because the app has been working this whole time. I think that celery is just not getting the variable passed to it or something.
I checked by typing the env command; the variables are showing fine.
To start up I just do:
service celeryd start
It tries to start up but throws an error saying that I do not have my environment variables set.
I wrote a function to grab environment variables; that is what throws the error.
import os
from django.core.exceptions import ImproperlyConfigured

def get_env_variable(var_name):
    try:
        return os.environ[var_name]
    except KeyError:
        error_msg = "Set the %s environment variable" % var_name
        raise ImproperlyConfigured(error_msg)
The only way that error is thrown is if the environment variable is not set correctly.
Does anyone know why celery is not detecting the environment variables that I have set?
I just discovered that I not only had to set my environment variables in the system, but I also had to pass those variables into the /etc/default/celeryd script.
I just put my variables at the bottom of /etc/default/celeryd (note: no spaces around the =):
export MY_SPECIAL_VARIABLE="my production variable"
export MY_OTHER_SPECIAL_VARIABLE="my other production variable"
If your environment variables are written in ~/.bashrc, you can add source ~/.bashrc near the top of /etc/init.d/celeryd.
Does your /etc/default/celeryd define what user celery should run as?
In mine I have:
CELERYD_USER="celery"
CELERYD_GROUP="celery"
Can you post your /etc/default/celeryd config file?
I had the same problem using celery with supervisor; I had supervisord use a shell script that sources the env variables and then starts the celery worker.
#!/bin/bash
source ~/.profile
CELERY_LOGFILE=/usr/local/src/imbue/application/imbue/log/celeryd.log
CELERYD_OPTS=" --loglevel=INFO --autoscale=10,5 --concurrency=8"
cd /usr/local/src/imbue/application/imbue/conf
exec celery worker -n celeryd@%h -f $CELERY_LOGFILE $CELERYD_OPTS
in ~/.profile:
export C_FORCE_ROOT="true"
export KEY="DEADBEEF"
I'm setting up celery to run daemonized, using the variables from my virtual environment. But when I run $ sudo /etc/init.d/celeryd start, I get Unknown command: 'celeryd_multi' Type 'manage.py help' for usage.
I have set the following:
CELERYD_CHDIR="/home/myuser/projects/myproject"
ENV_PYTHON="/home/myuser/.virtualenvs/myproject/bin/python"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
When I run $ /home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd_multi from the command line, it works fine.
Any ideas? I will gladly post any other code you need :)
Thank you!
Maybe you just set a wrong DJANGO_SETTINGS_MODULE:
try DJANGO_SETTINGS_MODULE="project.settings" instead of DJANGO_SETTINGS_MODULE="settings" (or vice versa).
The problem here is that when you run it as your user, the virtualenv is already activated for "myuser", so packages are pulled from /home/myuser/.virtualenvs/myproject/...
When you do sudo /etc/init.d/celeryd start, you are starting celery as root, which probably doesn't have a virtualenv activated in /root/.virtualenvs/ (if such a thing even exists), and thus it looks for Python packages in /usr/lib/..., where your default Python lives and consequently where your celery is not installed.
Your options are to either:
1. Replicate the same virtualenv under the root user and start it like you tried with sudo.
2. Keep the virtualenv where it is and start celery as your user "myuser" (no sudo), without using init scripts.
3. Write a script that will su - myuser -c '/bin/sh /home/myuser/.virtualenvs/myproject/bin/celeryd' to invoke it from init.d as myuser.
4. Install supervisor outside of the virtualenv and let it do the dirty work for you.
Thoughts on each, in order:
1. Avoid using root for anything you don't have to.
2. If you don't need celery to start on boot then this is fine, possibly wrapped in a script.
3. Plain hackish to me, but works if you don't want to invest an additional 30 minutes in something else.
4. Probably the best way to handle ALL of your Python startup needs; highly recommended.