Running Docker as a Service - Environment Variables - amazon-web-services

I am attempting to run my docker container on my Linux server and configure it as a systemd unit so it manages itself.
My /etc/systemd/system/system.service file contains the following:
[Unit]
Description=Your Container Name
After=docker.service
Requires=docker.service
StartLimitInterval=200
StartLimitBurst=10
[Service]
TimeoutStartSec=0
Restart=always
RestartSec=2
ExecStartPre=-/usr/bin/docker exec %n stop
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/bash -c 'docker login -u AWS -p $(aws ecr get-login-password --region eu-west-1) 0123456789.dkr.ecr.eu-west-1.amazonaws.com'
ExecStartPre=/usr/bin/docker pull 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest
ExecStart=/usr/bin/docker run --rm --name %n 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest -e env_var1=abc -e env_var2=def
[Install]
WantedBy=multi-user.target
This has proven problematic: when I restart the service and check its status, it shows this error:
● docker.name.service - name
Loaded: loaded (/etc/systemd/system/docker.name.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Thu 2022-03-10 19:28:06 UTC; 6min ago
Process: 11029 ExecStart=/usr/bin/docker run --rm --name %n 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest -e env_var1=abc -e env_var2=def (code=exited, status=127)
Process: 11018 ExecStartPre=/usr/bin/docker pull 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest (code=exited, status=0/SUCCESS)
Process: 10984 ExecStartPre=/usr/bin/bash -c docker login -u AWS -p $(aws ecr get-login-password --region eu-west-1) 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest (code=exited, status=0/SUCCESS)
Process: 10973 ExecStartPre=/usr/bin/docker rm %n (code=exited, status=1/FAILURE)
Process: 10951 ExecStartPre=/usr/bin/docker exec %n stop (code=exited, status=1/FAILURE)
Main PID: 11029 (code=exited, status=127)
Process: 8174 ExecStart=/usr/bin/docker run --rm --name %n 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest -e env_var1=abc -e env_var2=def (code=exited, status=127)
Removing the docker options -e env_var1=abc -e env_var2=def and restarting the service allows it to start correctly. How do I get these environment variables passed to the docker container from the service? It is critical that they are.

docker run considers everything after the image name to be a command that's passed to the container, overriding whatever is configured in the Dockerfile with CMD.
To provide environment variables to the container itself, your -e options need to appear before the image name:
ExecStart=/usr/bin/docker run --rm --name %n -e env_var1=abc -e env_var2=def 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest
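If the variable list grows, an optional variant (a sketch, untested; the /etc/your-container.env path is just an example) is to keep the values in a file and pass it with docker's --env-file flag, which likewise has to come before the image name:
# /etc/your-container.env -- one KEY=VALUE per line
env_var1=abc
env_var2=def
ExecStart=/usr/bin/docker run --rm --name %n --env-file /etc/your-container.env 0123456789.dkr.ecr.eu-west-1.amazonaws.com/your-container:latest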

Related

Running GPU Monitoring on GCP in a container optimized OS

The title has most of the question, but more context is below.
I tried following the directions found here:
https://cloud.google.com/compute/docs/gpus/monitor-gpus
I modified the code a bit, but haven't been able to get it working. Here's the abbreviated cloud config I've been running that should show the relevant parts:
- path: /etc/scripts/gpumonitor.sh
  permissions: "0644"
  owner: root
  content: |
    #!/bin/bash
    echo "Starting script..."
    sudo mkdir -p /etc/google
    cd /etc/google
    sudo git clone https://github.com/GoogleCloudPlatform/compute-gpu-monitoring.git
    echo "Downloaded Script..."
    echo "Starting up monitoring service..."
    sudo systemctl daemon-reload
    sudo systemctl --no-reload --now enable /etc/google/compute-gpu-monitoring/linux/systemd/google_gpu_monitoring_agent.service
    echo "Finished Script..."
- path: /etc/systemd/system/install-monitoring-gpu.service
  permissions: "0644"
  owner: root
  content: |
    [Unit]
    Description=Install GPU Monitoring
    Requires=install-gpu.service
    After=install-gpu.service
    [Service]
    User=root
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/bin/bash /etc/scripts/gpumonitor.sh
    StandardOutput=journal+console
    StandardError=journal+console
runcmd:
  - systemctl start install-monitoring-gpu.service
Edit:
It turned out to be best to build a docker container with the monitoring script in it and run that container from my config script, passing the GPU into the docker container as shown in the following link:
https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus
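For reference, the docker invocation on Container-Optimized OS ends up looking roughly like this (a sketch based on that page; the image name is hypothetical, and the /var/lib/nvidia paths are where the COS driver installer places the libraries):
docker run --rm \
  --volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 \
  --volume /var/lib/nvidia/bin:/usr/local/nvidia/bin \
  --device /dev/nvidia0:/dev/nvidia0 \
  --device /dev/nvidia-uvm:/dev/nvidia-uvm \
  --device /dev/nvidiactl:/dev/nvidiactl \
  gcr.io/your-project/gpu-monitoring:latest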

How to automatically respawn a process (wso2am)?

I have an upstart script as follows:
# Ubuntu upstart file at /etc/init/wso2am.conf
#!upstart
description "wso2am"
pre-start script
mkdir -p /var/log/wso2am/
end script
respawn
respawn limit 15 5
start on runlevel [2345]
stop on runlevel [06]
script
# Not sure why $HOME is needed, but we found that it is:
export JAVA_HOME="/usr/lib/jvm/jdk1.8.0_111"
#exec /usr/local/bin/node $JAVA_HOME/node/notify.js 13002 >> /var/log/node.log 2>&1
end script
And my service file is also created as:
# this is /usr/lib/systemd/system/wso2am.service
# (or /lib/systemd/system/wso2am.service dependent on
# your linux distribution flavor )
[Unit]
Description=wso2am server daemon
Documentation=https://docs.wso2.com/
After==network.target wso2am.service
[Service]
# see man systemd.service
User=tel
Group=tel
TimeoutStartSec=0
Type=simple
KillMode=process
ExecStart= /bin/bash -lc '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh --start'
RemainAfterExit=true
ExecStop = /bin/bash -lc '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh --stop'
StandardOutput=journal
Restart = always
RestartSec=2
[Install]
WantedBy=default.target
I try to kill the wso2am process:
ps -ef | grep wso2am
kill -9 <process_id>
But I can't see the process automatically respawn/restart. How can I check the auto-respawn mechanism in Ubuntu?
You can achieve this through systemd with your wso2am.service file modified as follows.
[Unit]
Description=wso2am server daemon
Documentation=https://docs.wso2.com/
After=network.target
[Service]
ExecStart=/bin/sh -c '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh start'
ExecStop=/bin/sh -c '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh stop'
ExecReload=/bin/sh -c '/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/bin/wso2server.sh restart'
PIDFile=/home/tel/Documents/vz/wso2am-2.1.0/wso2am-2.1.0/wso2carbon.pid
User=tel
Group=tel
Type=forking
Restart=always
RestartSec=2
StartLimitInterval=60s
StartLimitBurst=3
StandardOutput=journal
[Install]
WantedBy=multi-user.target
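After saving the modified unit file, reload systemd and (re)start the service so the changes take effect (standard systemd commands, shown here for completeness):
sudo systemctl daemon-reload
sudo systemctl enable --now wso2am.service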
Now, to find the wso2am process, use the command below.
ps -ef | grep java
Then pick the PID for the wso2am java process and kill it.
kill -9 <wso2_server_PID>
Immediately run
ps -ef | grep java
again and confirm that the process is gone. Then, within 2 seconds (since we specified RestartSec=2), you will see the wso2 server process back up and running with a different PID, confirming that the wso2 instance respawns on failure.
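You can also confirm the respawns from systemd itself rather than polling ps (standard commands):
systemctl status wso2am.service   # shows the current Main PID and restart state
journalctl -u wso2am.service -f   # follow the unit's log while you kill the process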

Not able to start 2 tasks using Dockerfile CMD

I have a question about the Dockerfile CMD command. I am trying to set up a server that needs to run 2 commands in the docker container at startup. I am able to run either one of the services just fine on its own, but if I script it to run the 2 services at the same time, it fails. I have tried all sorts of variations of nohup, &, and Linux task backgrounding, but I haven't been able to solve it.
Here is my project where I am trying to achieve this:
https://djangofan.github.io/mountebank-with-ui-node/
#entryPoint.sh
#!/bin/bash
nohup /bin/bash -c "http-server -p 80 /ui" &
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
jobs -l
It displays this output, but the ports are not listening:
djangofan#MACPRO ~/workspace/mountebank-container (master)*$ ./run-container.sh
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878 djangofan/mountebank-example "/bin/bash -c /scripts/entryPoint.sh" Less than a second ago Up Less than a second 0.0.0.0:2525->2525/tcp, 0.0.0.0:4546->4546/tcp, 0.0.0.0:5555->5555/tcp, 2424/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:2424->80/tcp nervous_lalande
[1]- 5 Running nohup /bin/bash -c "http-server -p 80 /ui" &
[2]+ 6 Running nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
And here is my Dockerfile:
FROM node:8-alpine
ENV MOUNTEBANK_VERSION=1.14.0
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN npm install -g http-server
RUN npm install -g mountebank@${MOUNTEBANK_VERSION} --production
EXPOSE 2525 2424 4546 5555 9000
ADD imposters /mb/
ADD ui /ui/
ADD *.sh /scripts/
# these work when run one or the other
#CMD ["http-server", "-p", "80", "/ui"]
#CMD ["mb", "--port", "2525", "--configfile", "/mb/imposters.ejs", "--allowInjection"]
# this doesn't work yet
CMD ["/bin/bash", "-c", "/scripts/entryPoint.sh"]
One process inside the docker container has to run in the foreground (not in background mode), because the container keeps running only while its main process is running.
The /scripts/entryPoint.sh should be:
#!/bin/bash
nohup /bin/bash -c "http-server -p 80 /ui" &
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection"
Everything else is fine in your Dockerfile.
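A small variation on the same idea (a sketch, not required for the fix) is to exec the foreground process so it replaces the shell and receives signals such as SIGTERM directly when the container is stopped:
#!/bin/bash
# background the UI server, then replace this shell with mountebank
nohup /bin/bash -c "http-server -p 80 /ui" &
exec mb --port 2525 --configfile /mb/imposters.ejs --allowInjection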

systemctl strange error: Invalid arguments

Here's my service file:
[Unit]
Description=Daphne Interface
[Service]
ExecStartPre=cd /home/git/hsfzmun/server
ExecStart=/bin/bash/ -c "cd /home/git/hsfzmun/server && /home/git/virtualenvs/hsfzmun/bin/daphne -b 0.0.0.0 -p 8001 -v2 config.asgi:channel_layer"
Restart=always
KillSignal=SIGQUIT
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target
When I execute sudo systemctl start daphnei I get:
Failed to start daphnei.service: Unit daphnei.service is not loaded properly: Invalid argument.
See system logs and 'systemctl status daphnei.service' for details.
And the result of systemctl status daphnei.service:
* daphnei.service - Daphne Interface
Loaded: error (Reason: Invalid argument)
Active: inactive (dead) (Result: exit-code) since Mon 2017-02-13 19:55:10 CST; 13min ago
Main PID: 16667 (code=exited, status=1/FAILURE)
What's wrong? I am using Ubuntu Server 16.04.
Generally, to debug the exact cause of an "Invalid argument" error, you can use:
sudo systemd-analyze verify daphnei.service
or, in the case of a user's local service:
systemd-analyze --user verify daphnei.service
Maybe you've figured it out by now, but there's an extra / after /bin/bash in your ExecStart line.
I may have a slightly newer version of systemd -- when I tried it, the output of systemctl status included this message:
Executable path specifies a directory, ignoring: /bin/bash/ -c "cd /home/git/hsfzmun/server && /home/git/virtualenvs/hsfzmun/bin/daphne -b 0.0.0.0 -p 8001 -v2 config.asgi:channel_layer"
Also, you might consider using a WorkingDirectory= line in the service instead of the cd ... && prefix.
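For illustration, a corrected unit might look roughly like this (a sketch; Type=notify and NotifyAccess are omitted here because whether daphne supports sd_notify depends on your setup):
[Unit]
Description=Daphne Interface
[Service]
WorkingDirectory=/home/git/hsfzmun/server
ExecStart=/home/git/virtualenvs/hsfzmun/bin/daphne -b 0.0.0.0 -p 8001 -v2 config.asgi:channel_layer
Restart=always
KillSignal=SIGQUIT
[Install]
WantedBy=multi-user.target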

Unable to connect to docker container

I set up two swarm manager nodes (mgr1, mgr2). But when I try to connect to the container, it throws an error message.
[root@ip-10-3-2-24 ec2-user]# docker run --restart=unless-stopped -h mgr1 --name mgr1 -d -p 3375:2375 swarm manage --replication --advertise 10.3.2.24:3375 consul://10.3.2.24:8500/
[root@ip-10-3-2-24 ec2-user]# docker exec -it mgr1 /bin/bash
rpc error: code = 2 desc = "oci runtime error: exec failed: exec: \"/bin/bash\": stat /bin/bash: no such file or directory"
It's happening in both the servers(mgr1, mgr2). I'm also running consul container on each node and able to connect to the consul containers.
/bin/bash might not be available in that container. You may use sh instead, with any of the following:
docker exec -it mgr1 sh
docker exec -it mgr1 /bin/sh
docker exec -it mgr1 bash
docker attach mgr1
UPDATE: Based on the comments, busybox is a very lightweight Linux-based image, and some of the above work perfectly fine:
bash $ sudo docker exec -it test1 bash
rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"bash\\\": executable file not found in $PATH\"\n"
bash $ sudo docker exec -it test1 sh
/ # exit
bash $ sudo docker exec -it test1 /bin/sh
/ # exit
bash $ sudo docker attach test1
/ # exit
bash $
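If you are not sure which shells an image actually ships, one quick way to check (a sketch; docker export works even when the container has no shell at all) is to list the container's filesystem from the outside:
docker export mgr1 | tar -tf - | grep -E '(^|/)(sh|bash|ash)$'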