I have a question: how can I generate dumpdata output without this text at the start?
Loading .env environment variables...
Here is an example:
Loading .env environment variables...
[
  {
    "model": "auth.permission",
    "pk": 1,
    "fields": {
      "name": "Can add permission",
      "content_type": 1,
      "codename": "add_permission"
    }
  },
  ...
I can't find a solution, and it is annoying because I want to put this in a shell script:
docker-compose exec django pipenv run python manage.py dumpdata --indent 2 > fixtures/test_dumpdata.json
As one of the comments mentioned, you can also completely bypass stdout redirection by using the -o or --output flag, providing a valid path and filename as the flag's parameter. In your case this would be the recommended way to do it.
More information on that in the docs here:
https://docs.djangoproject.com/en/4.1/ref/django-admin/#cmdoption-dumpdata-output
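To see why --output helps here, a toy stand-in for dumpdata can make the difference concrete (fake_dumpdata and all paths below are made up for the demonstration): a tool that chats on stdout but writes its real payload to the path it is given keeps the two streams from ever mixing.

```shell
# fake_dumpdata stands in for "manage.py dumpdata": it prints a banner on
# stdout (like pipenv does) but writes the clean JSON to the -o path.
fake_dumpdata() {
  echo "Loading .env environment variables..."               # noise on stdout
  printf '[{"model": "auth.permission", "pk": 1}]\n' > "$1"  # payload to the output path
}
fake_dumpdata /tmp/test_dumpdata.json
```

With "> file" redirection the banner would have landed in the file; with an output path it stays on the terminal.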
Additionally, if you just want to do this one time, you can go into the Docker container itself.
What is happening is that stdout is being redirected to the file you specified, but because you're running the command from the host, there's extra verbosity from Docker in the stdout.
docker exec -it <container_name> bash
python manage.py dumpdata ...
Also, in your specific case, you will need to activate your virtual environment before running dumpdata.
Furthermore, you may automate this by creating a script that dumps the data inside the Docker container, and invoking that from the host as you were before (I believe; I'm currently unable to test this last bit).
Thanks to Swift. To do it, I used the -o flag with an output path.
Here is the finished script:
#!/bin/bash
cd ..
docker-compose exec <container_name> pipenv run python manage.py dumpdata -o fixtures/dumpdata_test.json --indent 2
pipenv run is important if you are using pipenv; -o OUTPUT sets the destination, and --indent 2 changes the single-line output to nicely formatted JSON. Swift mentioned django-admin; my solution still uses manage.py.
Some Background
Recently I had a problem where my Django Application was using the base settings file despite DJANGO_SETTINGS_MODULE being set to a different one. It turned out the problem was that gunicorn wasn't inheriting the environment variable and the solution was to add -e DJANGO_SETTINGS_MODULE=sasite.settings.production to my Dockerfile CMD entry where I call gunicorn.
The Problem
I'm having trouble with how I should handle the SECRET_KEY in my application. I am setting it in an environment variable; I previously had it stored in a JSON file, but that seemed less secure (correct me if I'm wrong, please).
The other part of the problem is that when using gunicorn it doesn't inherit the environment variables that are set on the container normally. As I stated above I ran into this problem with DJANGO_SETTINGS_MODULE. I imagine that gunicorn would have an issue with SECRET_KEY as well. What would be the way around this?
My Current Approach
I set the SECRET_KEY in an environment variable and load it in the django settings file. I set the value in a file "app-env" which contains export SECRET_KEY=<secretkey>, the Dockerfile contains RUN source app-env in order to set the environment variable in the container.
Follow Up Questions
Would it be better to set the environment variable SECRET_KEY with the Dockerfile command ENV instead of sourcing a file? Is it acceptable practice to hard code a secret key in a Dockerfile like that (seems like it's not to me)?
Is there a "best practice" for handling secret keys in Dockerized applications?
I could always go back to JSON if it turns out to be just as secure as environment variables. But it would still be nice to figure out how people handle SECRET_KEY and gunicorn's issue with environment variables.
Code
Here's the Dockerfile:
FROM python:3.6
LABEL maintainer x#x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=sasite.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY sasite sasite
COPY templates templates
COPY logs logs
COPY scripts scripts
RUN source app-env
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=sasite.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "sasite.wsgi:application"]
I'll start with why it doesn't work as is, and then discuss the options you have to move forward:
During the build process of a container, a single RUN instruction is run as its own standalone container. Only changes to the filesystem of that container's write layer are captured for subsequent layers. This means that your source app-env command runs and exits, and likely makes no changes on disk making that RUN line a no-op.
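The same effect can be reproduced locally without Docker: each RUN behaves like a separate shell process, so an export made in one is invisible to the next (FOO here is just a placeholder variable for the demo).

```shell
# Two consecutive "RUN" steps, simulated as two separate shells:
sh -c 'export FOO=from_step_1'                    # analogue of: RUN source app-env
RESULT=$(sh -c 'echo "FOO is: ${FOO:-unset}"')    # analogue of the next RUN step
echo "$RESULT"
```

The second shell never sees FOO, just as a later RUN never sees variables sourced by an earlier one.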
Docker allows you to specify environment variables at build time using the ENV instruction, which you've done with the DJANGO_SETTINGS_MODULE variable. I don't necessarily agree that SECRET_KEY should be specified here, although it might be okay to put a value needed for development in the Dockerfile.
Since the SECRET_KEY variable may be different for different environments (staging and production) then it may make sense to set that variable at runtime. For example:
docker run -d -e SECRET_KEY=supersecretkey mydjangoproject
The -e option is short for --env. Additionally, there is --env-file and you can pass in a file of variables and values. If you aren't using the docker cli directly, then your docker client should have the ability to specify these there as well (for example docker-compose lets you specify both of these in the yaml)
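For reference, an --env-file is just plain KEY=value lines, one per line; a sketch (the file name and values below are made up):

```shell
# Write a sample env file in the format docker's --env-file expects:
cat > /tmp/django.env <<'EOF'
SECRET_KEY=supersecretkey
DJANGO_SETTINGS_MODULE=sasite.settings.production
EOF
# At runtime you would then pass it to docker (not executed here):
#   docker run -d --env-file /tmp/django.env mydjangoproject
```

Keeping the file out of version control gives you per-environment secrets without baking them into the image.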
In this specific case, since you have something inside the container that knows what variables are needed, you can call that at runtime. There are two ways to accomplish this. The first is to change the CMD to this:
CMD source app-env && /usr/local/bin/gunicorn --config config/gunicorn.conf --log-config config/logging.conf -e DJANGO_SETTINGS_MODULE=sasite.settings.production_test -w 4 -b 0.0.0.0:8001 sasite.wsgi:application
This uses the shell form of CMD rather than the exec form. This means that the entire argument to CMD will be run inside /bin/sh -c "...".
The shell will handle running source app-env and then your gunicorn command.
If you ever needed to change the command at runtime, you'd have to remember to specify source app-env && where needed, which brings me to the other approach: using an ENTRYPOINT script.
The ENTRYPOINT feature in Docker allows you to handle any necessary startup steps inside the container when it is first started. Consider the following entrypoint script:
#!/bin/bash
cd /app && source app-env && cd - && exec "$@"
This will explicitly cd to the location where app-env is, source it, cd back to whatever the oldpwd was, and then execute the command. Now, it is possible for you to override both the command and working directory at runtime for this image and have any variables specified in the app-env file to be active. To use this script, you need to ADD it somewhere in your image and make sure it is executable, and then specify it in the Dockerfile with the ENTRYPOINT directive:
ADD entrypoint.sh /entrypoint.sh
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
With the entrypoint strategy, you can leave your CMD as-is without changing it.
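The source-then-exec behavior of that entrypoint can be sandboxed outside Docker; here the app-env file is a throwaway created just for the demonstration, and a subshell with env stands in for exec "$@" so the demo shell keeps running:

```shell
# Create a throwaway app-env like the one the entrypoint sources:
cat > /tmp/app-env <<'EOF'
export SECRET_KEY=demo-secret
EOF
# Source the vars, then hand off to the "command" (env, printing the result):
OUTPUT=$( . /tmp/app-env; env | grep '^SECRET_KEY=' )
echo "$OUTPUT"
```

Any command run after the source sees the variables, which is exactly what exec "$@" relies on.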
I am running a Play 2.2.3 web application on AWS Elastic Beanstalk, using SBT's ability to generate Docker images. Uploading the image from the EB administration interface usually works, but sometimes it gets into a state where I consistently get the following error:
Docker container quit unexpectedly on Thu Nov 27 10:05:37 UTC 2014:
Play server process ID is 1
This application is already running (Or delete /opt/docker/RUNNING_PID file).
And deployment fails. I cannot get out of this state by doing anything other than terminating the environment and setting it up again. How can I prevent the environment from getting into this state?
Sounds like you may be running into the infamous PID 1 issue. Docker uses a new PID namespace for each container, which means the first process gets PID 1. PID 1 is a special ID which should be used only by processes designed for it. Could you try using supervisord instead of having the Play framework run as the primary process, and see if that resolves your issue? Hopefully supervisord handles Amazon's termination commands better than the Play framework does.
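Another workaround, suggested by the error message itself, is to clear a stale RUNNING_PID in a startup wrapper before Play launches. A minimal sketch (the /tmp path simulates the real /opt/docker/RUNNING_PID, and the launch line is an assumed path, shown commented out):

```shell
# Clear a pid file left over from a previous container, if any:
PID_FILE=/tmp/RUNNING_PID           # real path would be /opt/docker/RUNNING_PID
touch "$PID_FILE"                   # simulate a stale file for this demo
rm -f "$PID_FILE"                   # remove it before launching Play
# exec /opt/docker/bin/<your-app>   # the actual launch would follow here
```

Since rm -f succeeds whether or not the file exists, the wrapper is safe on clean starts too.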
@dkm I was having the same issue with my dockerized Play app. I package my apps as standalone for production using the sbt clean dist command. This produces a .zip file that you can deploy to some folder in your docker container, like /var/www/xxxx.
Get a bash shell into your container: $ docker run -it <your image name> /bin/bash
Example: docker run -it centos/myapp /bin/bash
Once the app is there, you'll have to create an executable bash script. I called mine startapp, and the contents should be something like this:
Create the script file in the docker container:
$ touch startapp && chmod +x startapp
$ vi startapp
Add the execute command & any required configurations:
#!/bin/bash
/var/www/<your app name>/bin/<your app name> -Dhttp.port=80 -Dconfig.file=/var/www/pointflow/conf/<your app conf. file>
Save the startapp script. Then, from a new terminal, commit your changes to your container's image so they will be available from here on out:
Get the running container's current ID:
$ docker ps
Commit/Save the changes
$ docker commit <your running containerID> <your image's name>
Example: docker commit 1bce234 centos/myappsname
Now for the grand finale, you can docker stop or exit out of the running container's bash. Next, start the Play app using the following docker command:
$ docker run -d -p 80:80 <your image's name> /bin/sh startapp
Example: docker run -d -p 80:80 centos/myapp /bin/sh startapp
Run docker ps to see if your app is running. You should see something similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19eae9bc8371 centos/myapp:latest "/bin/sh startapp" 13 seconds ago Up 11 seconds 0.0.0.0:80->80/tcp suspicious_heisenberg
Open a browser and visit your new dockerized app
Hope this helps...
I'm trying to run IPython on a production ubuntu server. I want to control it with upstart.
I have a bash script that properly invokes it in the foreground but it doesn't work when invoked through upstart. I'm not sure how to debug the problem other than piping the upstart script's output to a file, which just confirms that the IPython console dashboard properly shows up.
I'm using django-extensions with the following configuration:
IPYTHON_ARGUMENTS = [
'--ext', 'django_extensions.management.notebook_extension',
'--pylab=inline',
'--profile=myprofile',
]
My bash script is:
#!/bin/bash
set -e
cd /home/ubuntu/myproject
exec venv/bin/python /home/ubuntu/myproject/manage.py shell_plus --notebook
Any help is appreciated
No idea what the reason could be.
Have you had a look at Hydra, which has been designed to launch multiple IPython servers?
I want to run my development django server at startup so I defined following cron job:
@reboot screen -d -m django-admin.py runserver 192.168.0.28:8000
But it didn't work.
What is really interesting, when I copy/paste directly to terminal and execute it works just fine.
I even tried something like this:
@reboot cd /home/ubuntu && /usr/bin/screen -d -m /usr/bin/python /usr/local/bin/django-admin.py runserver 192.168.0.28:8000 &> /home/ubuntu/cron.err
I did that to be sure I wasn't relying on commands undefined in cron's environment or on wrong paths, and I examined the contents of the cron.err file, but it's empty.
And (of course) when I fire this directly from the console it works immediately.
Please help.
Does it work if you run it from cron at a specific time? For example:
50 12 2 8 * /usr/bin/screen -dmS set_from_cron
I want my custom Django command to be executed every minute. However, python /path/to/project/myapp/manage.py mycommand doesn't work from cron, while running python manage.py mycommand from the project directory works perfectly.
How can I achieve this ? I use /etc/crontab with:
* * * * * root python /path/to/project/myapp/manage.py mycommand
I think the problem is that cron is going to run your scripts in a "bare" environment, so your DJANGO_SETTINGS_MODULE is likely undefined. You may want to wrap this up in a shell script that first defines DJANGO_SETTINGS_MODULE
Something like this:
#!/bin/bash
export DJANGO_SETTINGS_MODULE=myproject.settings
./manage.py mycommand
Make it executable (chmod +x) and then set up cron to run the script instead.
Edit
I also wanted to say that you can "modularize" this concept a little bit and make it such that your script accepts the manage commands as arguments.
#!/bin/bash
export DJANGO_SETTINGS_MODULE=myproject.settings
./manage.py "$@"
Now, your cron job can simply pass "mycommand" or any other manage.py command you want to run from a cron job.
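The argument passthrough can be checked with a stand-in wrapper; the echo below replaces the real manage.py invocation, and the wrapper path and settings module are placeholders:

```shell
# Build a demo wrapper that forwards its arguments like the script above:
cat > /tmp/manage_wrapper.sh <<'EOF'
#!/bin/sh
export DJANGO_SETTINGS_MODULE=myproject.settings
echo "would run: manage.py $@"
EOF
chmod +x /tmp/manage_wrapper.sh
OUT=$(/tmp/manage_wrapper.sh mycommand)
echo "$OUT"
```

Whatever the cron line passes after the wrapper's name reaches manage.py unchanged.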
cd /path/to/project/myapp && python manage.py mycommand
By chaining your commands like this, python will not be executed unless cd correctly changes the directory.
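The short-circuit behavior is easy to see with any failing cd (the directory names below are arbitrary):

```shell
# The second command runs only when the first succeeds:
A=$(cd /definitely/not/a/dir 2>/dev/null && echo ran) || true   # empty: cd failed
B=$(cd /tmp && echo ran)                                        # "ran": cd succeeded
echo "A='$A' B='$B'"
```

This is why chaining with && protects manage.py from running against the wrong working directory.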
If you want your Django life to be a lot simpler, use django-command-extensions within your project:
http://code.google.com/p/django-command-extensions/
You'll find a command named "runscript" so you simply add the command to your crontab line:
* * * * * root python /path/to/project/myapp/manage.py runscript mycommand
And such a script will execute with the Django context environment.
This is what I have done recently in one of my projects. (I maintain venvs for every project I work on, so I am assuming you have venvs.)
* * * * * /path/to/venvs/bin/python /path/to/app/manage.py command_name
This worked perfectly for me.
How to schedule Django custom commands on an AWS EC2 instance?
Step 1
First, you need to write a .cron file.
Step 2
Write your schedule in the .cron file.
MyScript.cron
* * * * * /home/ubuntu/kuzo1/venv/bin/python3 /home/ubuntu/Myproject/manage.py transfer_funds >> /home/ubuntu/Myproject/cron.log 2>&1
Where * * * * * means the script will run every minute; you can change this to suit your needs (https://crontab.guru/#*_*_*_*_*). /home/ubuntu/kuzo1/venv/bin/python3 is the Python interpreter from the virtual environment, /home/ubuntu/Myproject/manage.py transfer_funds runs the Django custom command, and >> /home/ubuntu/Myproject/cron.log 2>&1 appends the output to a log file where you can check your cron runs.
Step 3
Install the cron file:
$ crontab MyScript.cron
Step 4
Some useful commands:
1. $ crontab -l (list the currently installed cron jobs)
2. $ crontab -r (remove the installed cron jobs)
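The ">> file 2>&1" tail of the cron line above appends both stdout and stderr to the log; you can check that in a plain shell (the log path here is a throwaway):

```shell
# A command that writes to both streams, redirected the same way cron does:
LOG=/tmp/cron_demo.log
rm -f "$LOG"
sh -c 'echo normal-output; echo error-output >&2' >> "$LOG" 2>&1
cat "$LOG"
```

Both lines end up in the log, which is why errors from the Django command are captured too.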
The runscript extension isn't well documented. Unlike a Django command, this one can go anywhere in your project, but it requires a scripts folder, and the .py file requires a run() function.
If it's a standalone script, you need to do this:
from django.conf import settings
from django.core.management import setup_environ  # note: removed in newer Django versions (1.6+)
setup_environ(settings)
# your code here which uses django code, like a django model
If it's a Django command, it's easier: https://coderwall.com/p/k5p6ag
In management/commands/exporter.py:
from django.core.management.base import BaseCommand, CommandError
class Command(BaseCommand):
    args = ''
    help = 'Export data to remote server'

    def handle(self, *args, **options):
        # do something here
        pass
And then, in the command line:
$ python manage.py exporter
Now, it's easy to add a new cron task to a Linux system, using crontab:
$ crontab -e
or $ sudo crontab -e if you need root privileges
In the crontab file, to run this command every 15 minutes, add something like this:
# m h dom mon dow command
*/15 * * * * python /var/www/myapp/manage.py exporter