Startup scripts are not behaving the way that I expected them to.
I have a .sh file in a storage bucket and have included a startup-script-url meta tag with the value gs://bucket-name/start-script.sh
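For reference, the metadata was set roughly like this (the instance name here is just a placeholder):

gcloud compute instances add-metadata my-instance \
  --metadata startup-script-url=gs://bucket-name/start-script.sh

The serial console output during boot looks like this: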
[  OK  ] Started Google Compute Engine Accounts Daemon.
         Starting Google Compute Engine Startup Scripts...
[  OK  ] Started Google Compute Engine Shutdown Scripts.
[  OK  ] Started Google Compute Engine Startup Scripts.
[  OK  ] Started Docker Application Container Engine.
[  OK  ] Started Wait for Google Container Registry (GCR) to be accessible.
[  OK  ] Reached target Google Container Registry (GCR) is Online.
[  OK  ] Started Containers on GCE Setup.
[ 8.001248] konlet-startup[1018]: 2018/03/08 20:23:56 Starting Konlet container startup agent
The script below is not executed as expected. I also tried the startup-script metadata key with something as simple as echo "hello", but that doesn't work either. I have full Cloud API access scopes enabled.
If I copy the contents of the file below and paste them into the SSH terminal, it works perfectly.
Could really use some help =D
start-script.sh
#! /bin/bash
image_name=gcr.io/some-image:version-2
docker_images=$(docker inspect --type=image $image_name)
array_count=${#docker_images[0]}
# Check if docker image is available
while ((array_count == 2));
do
docker_images=$(docker inspect --type=image ${image_name})
array_count=${#docker_images[0]}
if (($array_count > 2)); then
break
fi
done
################################
#
# Docker image now injected
# by google compute engine
#
################################
echo "docker image ${image_name} available"
container_ids=($(docker container ls | grep ${image_name} | awk '{ print $1}'))
for (( i=0; i < ${#container_ids[@]}; i++ ));
do
# run command for each container
container_id=${container_ids[i]}
echo "running commands for container: ${container_ids[i]}"
# start cloud sql proxy
docker container run -d -p $HOST_PORT:$APPLICATION_PORT \
${container_ids[i]} \
./cloud_sql_proxy \
-instances=$PG_INSTANCE_CONNECTION_NAME=tcp:$PG_PORT \
-credential_file=$PG_CREDENTIAL_FILE_LOCATION
# HTTP - Start unicorn webserver
docker exec -d \
${container_ids[i]} \
bundle exec unicorn -c config/unicorn.rb
done
Okay... after some scenario testing... found out that startup scripts do not run if you use the "Deploy a container image to this VM instance" option. Hope this saves you from tearing your hair out haha.
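If you do use that option, the container startup agent visible in the boot log above (konlet) has its own journal unit, which you can inspect with something like:

sudo journalctl -u konlet-startup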
The startup-script does always run, even when you use the "Deploy a container image to this VM instance" option.
You can use sudo journalctl -u google-startup-scripts.service -f to check the script's output.
You can use sudo google_metadata_script_runner -d --script-type startup to debug the script.
2021.11.09 update: the command is now sudo google_metadata_script_runner startup.
doc: https://cloud.google.com/compute/docs/instances/startup-scripts
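To confirm the script actually reached the instance, you can also query the metadata server from inside the VM (the attribute name matches whichever key you set):

curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/startup-script-url"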
Startup scripts for Container-Optimised OS must be configured differently. Use the user-data metadata tag, and pass it a cloud-config configuration. The example from the docs is below.
#cloud-config
bootcmd:
- fsck.ext4 -tvy /dev/DEVICE_ID
- mkdir -p /mnt/disks/MNT_DIR
- mount -t ext4 -O ... /dev/DEVICE_ID /mnt/disks/MNT_DIR
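For the container use case in this question, a user-data value along these lines would also run commands at boot (a sketch only; the image name and ports are placeholders):

#cloud-config
runcmd:
- docker run -d -p 8080:8080 gcr.io/some-image:version-2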
I had a similar issue after I removed execute permissions from files in /tmp. Keep in mind that startup scripts are copied to /tmp/ and are then run from there.
Related
I'm running questdb in docker with this command:
docker run -p 9000:9000 -p 8812:8812 -d questdb/questdb
How can I check the log output of this service? It's not really convenient for me to write the database logs to disk, but I would like to check for troubleshooting.
When you run this with the -d flag, you are running in detached mode, so logs will not be visible in stdout. To check the logs for this container, run
docker logs <container_id>
You can find out the container ID using:
docker ps
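To stream the logs instead of dumping them once, add -f, and you can skip looking up the ID by filtering on the image (the image name here simply mirrors the question):

docker logs -f $(docker ps -q --filter ancestor=questdb/questdb)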
I have a docker container running in a small AWS instance with limited disk space. The logs were getting bigger, so I used the commands below to delete the evergrowing log files:
sudo -s -H
find /var -name "*json.log" | grep docker | xargs -r rm
journalctl --vacuum-size=50M
Now I want to see what's the behaviour of one of the running docker containers, but it claims the log file has disappeared (from the rm command above):
ubuntu@x-y-z:~$ docker logs --follow name_of_running_docker_1
error from daemon in stream: Error grabbing logs: open /var/lib/docker/containers/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3-json.log: no such file or directory
I would like to be able to see again what's going on in the running container, so I tried:
sudo touch /var/lib/docker/containers/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3-json.log
Then I ran docker logs --follow again, but while interacting with the software that should produce logs, I can see that nothing is being written.
Is there any way to rescue the printing into the log file again without killing (rebooting) the containers?
Is there any way to rescue the printing into the log file again without killing (rebooting) the containers?
Yes, but it's more of a trick than a real solution. You should never interact with /var/lib/docker data directly. As per Docker docs:
part of the host filesystem [which] is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem.
For this trick to work, you need to configure the Docker daemon to keep containers alive during daemon downtime before you first run your container, for example by setting /etc/docker/daemon.json with:
{
"live-restore": true
}
This requires a daemon restart, such as sudo systemctl restart docker.
Then create a container and delete its .log file:
$ docker run --name myhttpd -d httpd:alpine
$ sudo rm $(docker inspect myhttpd -f '{{ .LogPath }}')
# Docker is not happy
$ docker logs myhttpd
error from daemon in stream: Error grabbing logs: open /var/lib/docker/containers/xxx-json.log: no such file or directory
Restart the daemon (with live restore); this will cause Docker to re-take management of the container and re-create the log file. However, any logs generated before the log file was deleted are lost.
$ sudo systemctl restart docker
$ docker logs myhttpd # works! and log file is created back
Note: this is not a documented or official Docker feature, simply a behavior I observed in my own experiments with Docker 19.03. It may not work with other Docker versions.
With live restore enabled, the container process keeps running even though the Docker daemon is stopped. On daemon restart, it probably re-attaches to the still-running process's stdout and stderr and redirects the output to the log file (hence re-creating it).
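To avoid landing in this situation again, you can cap the json-file logs instead of deleting them by hand, for example with something like this in /etc/docker/daemon.json (the sizes are only an example, and it applies to containers created after the change):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}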
I'm deploying a web scraping application composed of Scrapy spiders that scrape content from websites as well as screenshot webpages with the Splash javascript rendering service. I want to deploy the whole application to a single Ec2 instance. But for the application to work I must run a splash server from a docker image at the same time I'm running my spiders. How can I run multiple processes on an Ec2 instance? Any advice on best practices would be most appreciated.
Total noob question. I found the best way to run a Splash server and Scrapy spiders on an Ec2 instance after configuration is via a bash script scheduled to run with a cronjob. Here is the bash script I came up with:
#!/bin/bash
# Change to proper directory to run Scrapy spiders.
cd /home/ec2-user/project_spider/project_spider
# Activate my virtual environment.
source /home/ec2-user/venv/python36/bin/activate
# Create a shell variable to store date at runtime
LOGDATE=$(date +%Y%m%dT%H%M%S);
# Spin up splash instance from docker image.
sudo docker run -d -p 8050:8050 -p 5023:5023 scrapinghub/splash --max-timeout 3600
# Scrape first site and store dated log file in logs directory.
scrapy crawl anhui --logfile /home/ec2-user/project_spider/project_spider/logs/anhui_spider/anhui_spider_$LOGDATE.log
...
# Spin down splash instance via docker image.
sudo docker rm $(sudo docker stop $(sudo docker ps -a -q --filter ancestor=scrapinghub/splash --format="{{.ID}}"))
# Exit virtual environment.
deactivate
# Send an email to confirm cronjob was successful.
# Note that sending email from Ec2 is difficult and you cannot use 'MAILTO'
# in your cronjob without setting up something like postfix or sendmail.
# Using Mailgun is an easy way around that.
curl -s --user 'api:<YOURAPIHERE>' \
https://api.mailgun.net/v3/<YOURDOMAINHERE>/messages \
-F from='<YOURDOMAINADDRESS>' \
-F to=<RECIPIENT> \
-F subject='Cronjob Run Successfully' \
-F text='Cronjob completed.'
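To schedule the script, a crontab entry along these lines works (the path and schedule are placeholders; this runs it daily at 03:00 and appends stdout/stderr to a log file):

0 3 * * * /home/ec2-user/run_spiders.sh >> /home/ec2-user/cron.log 2>&1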
I'm packaging a project into a Docker Jetty image and I'm trying to access the logs, but there are no logs.
Dockerfile
FROM jetty:9.2.10
MAINTAINER Me "me@me.com"
ADD ./target/abc-1.0.0 /var/lib/jetty/webapps/ROOT
EXPOSE 8080
Bash script to start docker image:
docker pull me/abc
docker stop abc
docker rm abc
docker run --name='abc' -d -p 10908:8080 -v /var/log/abc:/var/log/jetty me/abc:latest
The image is running, but I'm not seeing any jetty logs in /var/log.
I've tried a docker run -it jetty bash, but not seeing any jetty logs in /var/log either.
Am I missing a parameter to make jetty output logs or does it output it somewhere other than /var/log/jetty?
Why you aren't seeing logs
2 things to note:
Running docker run -it jetty bash will start a new container instead of connecting you to your existing daemonized container.
And it would invoke bash instead of starting jetty in that container, so it won't help you to get logs from either container.
So this interactive container won't help you in any case.
But also...
JettyLogs are disabled anyways
Also, you won't see the logs in the standard location (say, if you tried to use docker exec to read the logs, or to collect them in a volume), quite simply because the Jetty Dockerfile disables logging entirely.
If you look at the jetty:9.2.10 Dockerfile, you will see this line:
&& sed -i '/jetty-logging/d' etc/jetty.conf \
Which nicely removes the entire line referencing the jetty-logging.xml default logging configuration.
What to do then?
Reading logs with docker logs
Docker gives you access to the container's standard output.
After you did this:
docker run --name='abc' -d -p 10908:8080 -v /var/log/abc:/var/log/jetty me/abc:latest
You can simply do this:
docker logs abc
And be greeted with something similar to this:
Running Jetty:
2015-05-15 13:33:00.729:INFO::main: Logging initialized #2295ms
2015-05-15 13:33:02.035:INFO:oejs.SetUIDListener:main: Setting umask=02
2015-05-15 13:33:02.102:INFO:oejs.SetUIDListener:main: Opened ServerConnector@73ec519{HTTP/1.1}{0.0.0.0:8080}
2015-05-15 13:33:02.102:INFO:oejs.SetUIDListener:main: Setting GID=999
2015-05-15 13:33:02.106:INFO:oejs.SetUIDListener:main: Setting UID=999
2015-05-15 13:33:02.133:INFO:oejs.Server:main: jetty-9.2.10.v20150310
2015-05-15 13:33:02.170:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:/var/lib/jetty/webapps/] at interval 1
2015-05-15 13:33:02.218:INFO:oejs.ServerConnector:main: Started ServerConnector@73ec519{HTTP/1.1}{0.0.0.0:8080}
2015-05-15 13:33:02.219:INFO:oejs.Server:main: Started #3785ms
Use docker help logs for more details.
Customize
Obviously your other option is to revert what the default Dockerfile for jetty is doing, or to create your own dockerized Jetty.
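A minimal sketch of that second option, assuming the official image's layout (JETTY_HOME under /usr/local/jetty) and that re-adding the line the base image stripped is enough for this Jetty version:

FROM jetty:9.2.10
# Re-add the logging config that the base image's Dockerfile removed from jetty.conf
RUN echo "etc/jetty-logging.xml" >> /usr/local/jetty/etc/jetty.conf \
 && mkdir -p /var/lib/jetty/logs
ADD ./target/abc-1.0.0 /var/lib/jetty/webapps/ROOT
EXPOSE 8080

With that, Jetty's stderr/stdout log files should end up under the Jetty base logs/ directory (e.g. /var/lib/jetty/logs), which is what you would bind-mount instead of /var/log/jetty.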
I was working with Docker on an AWS instance and it was working fine. Then one day Docker stopped working. When I restarted it with service docker start, it started, but service docker status returned the message "docker dead but pidfile exists" and docker commands did not execute. When I inspected the log file, it showed the following messages:
msg="+job serveapi(unix:///var/run/docker.sock)"
msg="Listening for HTTP on unix (/var/run/docker.sock)"
msg="There are no more loopback devices available."
msg="loopback mounting failed"
To get Docker started, I removed the pid file /var/run/docker.pid, removed /var/run/docker.sock, and also removed /var/lock/subsys/docker, then restarted Docker. But no luck: it still gives the same error on start, "docker dead but pidfile exists".
Please help.
This ticket could be related to the loopback issue:
https://github.com/docker/docker/issues/7058
So, please check output from losetup -l and ls -l /dev/loop*
EDIT: If ls -l /dev/loop* returns an error, the most likely cause is the GitHub ticket I mentioned, and then you would need something like:
#!/bin/bash
for i in {0..6}
do
mknod -m0660 /dev/loop$i b 7 $i
done
(taken from the mentioned issue)
Also, if you only want to restart, you may need to unmount /var/lib/docker/devicemapper or any mounted volume of type aufs.
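A rough sequence for that restart path (mount points differ depending on your storage driver):

# See which Docker-related mounts are still around
mount | grep -E 'devicemapper|aufs|/var/lib/docker'
# Unmount anything left over, then restart the daemon
sudo umount /var/lib/docker/devicemapper
sudo service docker restart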