Docker delete file on stop of container - django

I have a Django app that also uses Celery. Celery creates a .pid file, but if I restart my app with docker-compose up, Celery fails to start because the old .pid file inside the container does not get deleted.
What can I do to solve this issue?
Is there maybe a way on the Linux side (Debian 9) to remove the file at restart or shutdown of the container?
Snippet of my Docker config:
celery:
  image: echo_echo
  command: bash -c "celery worker -A echo -l INFO --pidfile=/tmp/celery/celery.pid"
  volumes:
    - echo:/echo
  ...
Thanks and br

Got the issue fixed: I simply left the pid file option empty, like so:
--pidfile= --schedule=/var/run/celerybeat-schedule
and also for the worker(s):
--pidfile= --schedule=/var/run/celery-schedule
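For anyone who would rather keep the pid file, a minimal sketch (reusing the compose snippet from the question; the /tmp/celery path is an assumption) is to delete any stale file just before the worker starts:

celery:
  image: echo_echo
  # remove any pid file left over from a previous run, then start the worker
  # (the path is assumed to match the snippet in the question)
  command: bash -c "rm -f /tmp/celery/celery.pid && celery worker -A echo -l INFO --pidfile=/tmp/celery/celery.pid"
  volumes:
    - echo:/echo

Since the file is removed at startup rather than at shutdown, a container that was killed without cleaning up can no longer block the next docker-compose up.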

Related

Attempting to restart Celery processes via Supervisor results in error

I am running Supervisor/Celery on an Amazon AWS server. Attempting to deploy a new application version eventually fails because the Celery processes are not started. I have taken a look at the supervisord.conf file to ensure that the programs are included, which they are. At the end of the supervisord.conf file I have the following include:
[include]
files=celeryd.conf
files=flower.conf
I try to restart celery with
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-default celeryd-slowtasks
celeryd-default and celeryd-slowtasks being the names of the programs listed in celeryd.conf. I get the following error:
celeryd-default: ERROR (no such process)
celeryd-slowtasks: ERROR (no such process)
celeryd-default: ERROR (no such process)
celeryd-slowtasks: ERROR (no such process)
If I run
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart all
I get
flower: stopped
httpd: stopped
httpd: started
flower: started
without any mention of celery. Any idea how to start figuring this issue out?
Check /opt/python/etc/supervisord.conf; you are probably including a folder that you don't expect to be included.
Also ensure that the instance of supervisor that is running is actually using the config file you expect.
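One detail worth double-checking against the snippet in the question (an observation, not something stated in the answer above): supervisord reads only one files= key per [include] section, so with two separate files= lines only the last one takes effect and celeryd.conf would never be loaded. The combined form looks like this:

[include]
; list every file on a single space-separated files= line
files = celeryd.conf flower.conf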

docker restart container failed: "already in use", but there's no more docker image

I first got my nginx docker image:
docker pull nginx
Then I started it:
docker run -d -p 80:80 --name webserver nginx
Then I stopped it:
docker stop webserver
Then I tried to restart it:
$docker run -d -p 80:80 --name webserver nginx
docker: Error response from daemon: Conflict. The container name "/webserver" is already in use by container 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74. You have to remove (or rename) that container to be able to reuse that name..
See 'docker run --help'.
Well, it's an error. But in fact there's nothing in the container list now:
docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Why did restarting the nginx image fail? How can I fix it?
It is because:
you have used the --name switch, and
the container is stopped but not removed.
You can find it stopped with:
docker ps -a
You can simply start it again using the command below:
docker start webserver
EDIT: Alternatives
If you want to start the container with the command below each time,
docker run -d -p 80:80 --name webserver nginx
then use one of the following:
Method 1: use the --rm switch, i.e., the container is destroyed automatically as soon as it is stopped:
docker run -d -p 80:80 --rm --name webserver nginx
Method 2: remove it explicitly after stopping the container, before running the command you are currently using:
docker stop <container name>
docker rm <container name>
As the error says:
You have to remove (or rename) that container to be able to reuse that name.
This leaves you two options.
You may delete the container that is using the name "webserver" using the command
docker rm 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74
and retry.
Or you may use a different name in the run command. This is not recommended, since you no longer need the old container.
It's better to remove the unwanted container and reuse the name.
While the great answers are correct, they didn't actually solve the problem I was facing.
How to:
Safely automate starting a named Docker container regardless of its prior state.
The solution is to wrap the docker run command with an additional check, and either do a run or a stop + run (effectively a restart with the new image) based on the result.
This achieves both of my goals:
Avoids the error
Allows me to periodically update the image (say new build) and restart safely
#!/bin/bash
# Adapt the following 3 parameters to your specific case
NAME=myname
IMAGE=myimage
RUN_OPTIONS='-d -p 8080:80'

ContainerID="$(docker ps --filter name="$NAME" -q)"
if [[ ! -z "$ContainerID" ]]; then
    echo "$NAME already running as container $ContainerID: stopping ..."
    docker stop "$ContainerID"
fi
echo "Starting $NAME ..."
exec docker run --rm --name "$NAME" $RUN_OPTIONS "$IMAGE"
Now I can run (or stop + start if already running) the $NAME docker container in an idempotent way, without worrying about this possible failure.

docker-compose.yml file behaves differently on ECS than local docker-compose

I have the following minimal docker-compose.yml:
worker:
  working_dir: /app
  image: <my-repo>.dkr.ecr.us-east-1.amazonaws.com/ocean-boiler:latest
  cpu_shares: 4096
  mem_limit: 524288000
  command: /bin/bash -c "bin/delayed-job --pool=*:1"
When I run it locally using docker-compose, love and happiness.
When I ask ECS to run it remotely, I get the following:
ecs-cli up
=>
time="2016-05-03T11:40:00-07:00" level=info msg="Stopped container..." container="<cid-redacted>/worker" desiredStatus=STOPPED lastStatus=STOPPED taskDefinition="ecscompose-spud:73"
Then we check the fallout using ps:
ecs-cli ps
=>
<cid-redacted>/worker STOPPED Reason: DockerStateError: [8] System error: exec: "/bin/bash -c \"bin/delayed-job --pool=*:1\"": stat /bin/bash -c "bin/delayed-job --pool=*:1": no such file or directory ecscompose-spud:73
I've been down the rabbit hole of not referring to any files without complete paths. My Docker instance functions as intended whether I run it locally or on a remote machine; however, ordering it about with ecs-cli seems to be sad-panda business.
Just running it locally with docker-compose up works as intended... Any help would be appreciated!
EDIT: Finally fixed it. Using command: worked categorically badly for me; my Docker containers now contain the command needed to run, and my advice is to avoid the use of command: unless you really need it.
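In other words, the command moved out of docker-compose and into the image itself. A minimal sketch of that idea (the base image, working directory and file layout below are assumptions, not taken from the question):

# Hypothetical Dockerfile for the worker image: the run command is baked in
# via CMD, so neither docker-compose nor ECS needs a command: entry anymore.
FROM ruby:2.3
WORKDIR /app
COPY . /app
CMD ["bin/delayed-job", "--pool=*:1"]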

How to customize the docker run command on Elastic Beanstalk?

Here's the thing: I need to tell Docker not to containerize the container's networking, because it needs to connect to a MongoDB that is inside a VPN (enterprise private DB).
There is a Docker option that lets me do exactly that: --net=host. Reference here.
So, for example, when running the container on my local machine, I will do something like:
docker run --rm -it --net=host [image-name]:[version] bash -il
And that command will do the trick. Thanks to that, I can connect to the "private" MongoDB.
So, my question is: is there a way to customize the docker run command of a Single Docker Environment on Elastic Beanstalk so I can add --net=host?
I have tried using container_commands in the config.yml file to add that instruction, but I don't think that does what I need. Here is a snippet:
container_commands:
  00-test_command:
    command: bundle exec thin --net=host
  01-networking-fix:
    command: "docker run --rm -it --net=host [image-name]:[version] bash -il"
I ended up fixing it with two container commands:
container_commands:
  00_fix_networking:
    command: sed -i 's/docker run -d/docker run --net=host -d/' /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh
  01_fix_docker_ip:
    command: sed -i 's/server $EB_CONFIG_NGINX_UPSTREAM_IP/server localhost/' /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
Update:
I also had to fix the Upstart script. Unfortunately, I didn't write down what I did because I didn't end up needing to alter the docker run command. You would do a files directive for (I think) /etc/init/docker. AWS edits the Nginx configuration in the same manner as in 01flip.sh in that file as well.
Explanation:
In the 64bit Amazon Linux 2015.03 v2.0.2 running Docker 1.7.1 platform version, the file you need to edit is /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh. This file is now far more complex than Samar's version so I didn't want to put the actual contents in there. However, the change is basically the same. There's the line that starts with
docker run -d
I fixed it with a container command:
container_commands:
  00_fix_networking:
    command: sed -i 's/docker run -d/docker run --net=host -d/' /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh
This successfully adds the --net=host argument but now there's another problem. The system ends up with an invalid Nginx directive. Using --net=host means that when you run docker inspect <container id> there is no IP address in the NetworkSettings. AWS uses this to create the server directive for Nginx and ends up generating server :<some port you chose> (before adding --net=host it would look like server <ip>:<port>). I needed to patch that file, too. It's generated in /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh.
  01_fix_docker_ip:
    command: sed -i 's/server $EB_CONFIG_NGINX_UPSTREAM_IP/server localhost/' /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
While Elastic Beanstalk is generally well suited for applications that work with a standard set of configurations, it's difficult to customize and keep things updated along with the updates AWS provides to EB stacks. Having said that, I've done something like the below, which is a bit hacky but works fine.
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh":
    mode: "000755"
    owner: root
    group: root
    encoding: plain
    content: |
      # script content of original 04run.sh along with modification on the docker run cmd
      # e.g. I injected multiple ports here
      docker run -d \
        "${EB_CONFIG_DOCKER_ENV_ARGS[@]}" \
        "${EB_CONFIG_DOCKER_VOLUME_MOUNTS[@]}" \
        "${EB_CONFIG_DOCKER_ENTRYPOINT_ARGS[@]}" \
        "${PORT_ARGS[@]}" \
        $EB_CONFIG_DOCKER_IMAGE_STAGING \
        "${EB_CONFIG_DOCKER_COMMAND_ARGS[@]}" 2>&1 | tee /tmp/docker_run.log | tee $EB_CONFIG_DOCKER_STAGING_APP_FILE
This is not very neat; at the least, I have to make sure that it does not break with updates to Elastic Beanstalk. The above is for the Docker 1.5 stack, but you can do something similar with the version you're running.
Note that the latest version of the AWS stack (with Docker 1.7.1) has a slightly different pre-deploy setup. You'll need to update the file at the location: /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh
commands:
  00001_add_privileged:
    cwd: /tmp
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
or, for example, if you want to pass args to your Docker image:
commands:
  00001_modify_docker_run:
    cwd: /tmp
    command: 'sed -i "s/\$EB_CONFIG_DOCKER_IMAGE_STAGING/\$EB_CONFIG_DOCKER_IMAGE_STAGING -gzip -enable-url-source/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'

AWS EB, Play Framework and Docker: Application Already running

I am running a Play 2.2.3 web application on AWS Elastic Beanstalk, using SBT's ability to generate Docker images. Uploading the image from the EB administration interface usually works, but sometimes it gets into a state where I consistently get the following error:
Docker container quit unexpectedly on Thu Nov 27 10:05:37 UTC 2014:
Play server process ID is 1
This application is already running (Or delete /opt/docker/RUNNING_PID file).
And deployment fails. I cannot get out of this by doing anything other than terminating the environment and setting it up again. How can I avoid the environment getting into this state?
Sounds like you may be running into the infamous PID 1 issue. Docker uses a new pid namespace for each container, which means the first process gets PID 1. PID 1 is a special ID which should be used only by processes designed for it. Could you try using Supervisord instead of having Play run as the primary process, and see if that resolves your issue? Hopefully, Supervisord handles Amazon's termination commands better than the Play framework.
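A different workaround worth noting (not part of the answer above; the image and app names below are only placeholders): Play 2.x lets you point the pid file somewhere harmless, such as /dev/null, so a stale RUNNING_PID baked into the image can never block startup:

# Hypothetical run command: disable Play's RUNNING_PID file entirely so a
# leftover /opt/docker/RUNNING_PID cannot stop the app from booting
docker run -d -p 80:80 <your image name> bin/<your app name> -Dpidfile.path=/dev/null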
@dkm I was having the same issue with my dockerized Play app. I package my apps as standalone for production using the sbt clean dist command. This produces a .zip file that you can deploy to some folder in your docker container, like /var/www/xxxx.
Get a bash shell into your container: $ docker run -it <your image name> /bin/bash
Example: docker run -it centos/myapp /bin/bash
Once the app is there, you'll have to create an executable bash script. I called mine startapp, and the contents should be something like this:
Create the script file in the docker container:
$ touch startapp && chmod +x startapp
$ vi startapp
Add the execute command & any required configurations:
#!/bin/bash
/var/www/<your app name>/bin/<your app name> -Dhttp.port=80 -Dconfig.file=/var/www/pointflow/conf/<your app conf. file>
Save the startapp script. Then, from a new terminal, you must commit your changes to your container's image so it will be available from here on out:
Get the running container's current ID:
$ docker ps
Commit/Save the changes
$ docker commit <your running containerID> <your image's name>
Example: docker commit 1bce234 centos/myappsname
Now for the grand finale you can docker stop or exit out of the running container's bash. Next start the play app using the following docker command:
$ docker run -d -p 80:80 <your image's name> /bin/sh startapp
Example: docker run -d -p 80:80 centos/myapp /bin/sh startapp
Run docker ps to see if your app is running. You should see something similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19eae9bc8371 centos/myapp:latest "/bin/sh startapp" 13 seconds ago Up 11 seconds 0.0.0.0:80->80/tcp suspicious_heisenberg
Open a browser and visit your new dockerized app
Hope this helps...