AWS CloudWatch log stream is empty while Docker container logs are full - amazon-web-services

I'm creating a cron Docker image based on the Ubuntu image. I wanted to follow this tutorial, but cron won't have PID 1, nor 2, nor even 10. I found a solution: run cron in the foreground, check it with a healthcheck, and view the command's output using tail -f.
Locally everything works perfectly, but when I deploy it to ECS Fargate the logs are empty.
Main issue
docker logs -f cron displays output, but
the logs in CloudWatch are empty.
Additional information
I created a Dockerfile with the following ENTRYPOINT and CMD:
COPY crontab /etc/crontab
COPY script.sh /root/script.sh
ENTRYPOINT ["tini", "--", "/usr/bin/docker-entrypoint.sh"]
CMD prepare.sh tail -f /var/log/script.log
prepare.sh content:
# some preparation commands...
cron -L 15
exec "$#"
docker-entrypoint.sh also ends with exec "$@".
crontab content:
*/3 * * * * root /root/script.sh >> /var/log/script.log
So, the last command is tail -f /var/log/script.log

In order for your container's logs to appear in CloudWatch Logs, you need to set up your Task Definition to use the awslogs log driver.
Documentation on how to do this can be found here.
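For illustration, a minimal logConfiguration block inside the task definition's container definition could look like the sketch below; the log group name, region, and stream prefix are placeholder values you would replace with your own (and the log group usually has to exist already, or the task needs permission to create it):
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/cron-task",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "cron"
    }
}
Also keep in mind that on Fargate the awslogs driver only captures what the container writes to stdout/stderr, which is exactly why the tail -f trick in the question matters.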

Related

How can I see a service's logs when running it from Docker?

I'm running questdb in docker with this command:
docker run -p 9000:9000 -p 8812:8812 -d questdb/questdb
How can I check the log output of this service? I'd rather not write the database logs to disk, but I would like to check them for troubleshooting.
When you run this with the -d flag, you are running in detached mode, so the logs will not be visible on stdout. To check the logs for this container, run
docker logs <container_id>
You can find out the container ID using:
docker ps
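If you don't want to look up the container ID by hand, a small sketch (assuming only one questdb/questdb container is running) is to combine the two commands and follow the output live:
# Follow the logs of the running questdb container
docker logs -f $(docker ps -q --filter ancestor=questdb/questdb)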

how to reconnect to a docker logs --follow where the log file was deleted

I have a docker container running in a small AWS instance with limited disk space. The logs were getting bigger, so I used the commands below to delete the ever-growing log files:
sudo -s -H
find /var -name "*json.log" | grep docker | xargs -r rm
journalctl --vacuum-size=50M
Now I want to see the behaviour of one of the running Docker containers, but it complains that the log file has disappeared (because of the rm command above):
ubuntu@x-y-z:~$ docker logs --follow name_of_running_docker_1
error from daemon in stream: Error grabbing logs: open /var/lib/docker/containers/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3-json.log: no such file or directory
I would like to be able to see again what's going on in the running container, so I tried:
sudo touch /var/lib/docker/containers/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3-json.log
Then I ran docker logs --follow again, but while interacting with the software that should produce logs, I can see that nothing is happening.
Is there any way to rescue the printing into the log file again without killing (rebooting) the containers?
Is there any way to rescue the printing into the log file again without killing (rebooting) the containers?
Yes, but it's more of a trick than a real solution. You should never interact with /var/lib/docker data directly. As per Docker docs:
part of the host filesystem [which] is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem.
For this trick to work, you need to configure your Docker daemon to keep containers alive during daemon downtime, before first running your container. For example, by setting your /etc/docker/daemon.json with:
{
"live-restore": true
}
This requires a daemon restart, e.g. sudo systemctl restart docker.
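As a quick sanity check (not part of the original steps), you can ask the daemon whether live restore is actually active after the restart:
# Should print "true" once live-restore is enabled
docker info --format '{{ .LiveRestoreEnabled }}'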
Then create a container and delete its .log file:
$ docker run --name myhttpd -d httpd:alpine
$ sudo rm $(docker inspect myhttpd -f '{{ .LogPath }}')
# Docker is not happy
$ docker logs myhttpd
error from daemon in stream: Error grabbing logs: open /var/lib/docker/containers/xxx-json.log: no such file or directory
Restart the daemon (with live restore enabled); this causes Docker to re-take management of our container and recreate the log file. However, any logs generated before the log file was deleted are lost.
$ sudo systemctl restart docker
$ docker logs myhttpd # works, and the log file is created back
Note: this is not a documented or official Docker feature, simply a behaviour I observed in my own experiments with Docker 19.03. It may not work with other Docker versions.
With live restore enabled, our container process keeps running even though the Docker daemon is stopped. On daemon restart, it probably re-reads from the still-alive process's stdout and stderr and redirects the output to our log file (hence re-creating it).
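As a side note, not part of the trick above: the original problem of ever-growing *-json.log files is usually better handled by letting the json-file log driver rotate them. A sketch of such a /etc/docker/daemon.json (the size and file count are example values, and the options only apply to containers created after the change):
{
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "3"
  }
}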

docker restart container failed: "already in use", but there's no more docker image

I first got my nginx docker image:
docker pull nginx
Then I started it:
docker run -d -p 80:80 --name webserver nginx
Then I stopped it:
docker stop webserver
Then I tried to restart it:
$ docker run -d -p 80:80 --name webserver nginx
docker: Error response from daemon: Conflict. The container name "/webserver" is already in use by container 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74. You have to remove (or rename) that container to be able to reuse that name..
See 'docker run --help'.
Well, it's an error. But in fact there's nothing in the container list now:
docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Why did restarting the nginx image fail? How do I fix it?
It is because:
you have used the --name switch, and
the container is stopped but not removed.
You can find the stopped container with
docker ps -a
You can simply start it again using the command below:
docker start webserver
EDIT: Alternatives
If you want to start the container with below command each time,
docker run -d -p 80:80 --name webserver nginx
then use one of the following:
Method 1: use the --rm switch, i.e. the container is destroyed automatically as soon as it is stopped:
docker run -d -p 80:80 --rm --name webserver nginx
Method 2: explicitly stop and remove the container before running the command you are currently using:
docker stop <container name>
docker rm <container name>
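A shorter equivalent of method 2 (my addition, not from the original answer) is to force-remove the container in one step; docker rm -f also works while the container is still running:
docker rm -f webserver
docker run -d -p 80:80 --name webserver nginx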
As the error says:
You have to remove (or rename) that container to be able to reuse that name
This leaves you two options.
You may delete the container that is using the name "webserver" using the command
docker rm 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74
and retry.
Or you may use a different name in the run command. This is not recommended, since you no longer need the old container.
It's better to remove the unwanted container and reuse the name.
While the great answers are correct, they didn't actually solve the problem I was facing.
How To:
Safely automate starting a named Docker container regardless of its prior state
The solution is to wrap the docker run command with an additional check and either do a run or a stop + run (effectively restart with the new image) based on the result.
This achieves both of my goals:
Avoids the error
Allows me to periodically update the image (say new build) and restart safely
#!/bin/bash
# Adapt the following 3 parameters to your specific case
NAME=myname
IMAGE=myimage
RUN_OPTIONS='-d -p 8080:80'
# Look up the ID of a running container with that name (empty if none)
ContainerID="$(docker ps --filter name="$NAME" -q)"
if [[ -n "$ContainerID" ]]; then
    echo "$NAME already running as container $ContainerID: stopping ..."
    docker stop "$ContainerID"
fi
# Start a fresh container; --rm removes it automatically once it stops
echo "Starting $NAME ..."
exec docker run --rm --name "$NAME" $RUN_OPTIONS "$IMAGE"
Now I can run (or stop + start, if it is already running) the $NAME Docker container in an idempotent way, without worrying about this possible failure.
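For example, you could save the script under an arbitrary name such as restart-container.sh and call it from a deploy hook or cron entry:
chmod +x restart-container.sh
./restart-container.sh   # first run: just starts the container
./restart-container.sh   # later runs: stops the old container, then starts a fresh one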

Crontab visible in logs but still doesn't seem to run?

I'm on an AWS server. I wrote a crontab and placed it on the server under /etc/cron.d. The contents of the crontab are the following:
SHELL=/bin/bash
PATH=/sbin:/bin:usr/sbin:/usr/bin
MAILTO=root
HOME=/
*/5 * * * * root <full-path-to-write-command> >> <full-path-to-txt-output-file>
After running sudo service crond restart, I check the logs by doing sudo tail -f /var/log/cron.
I can observe the cronjob in the logs:
<date-time-stamp> ip-<ip-address> CROND[12930]: (root) CMD (<full-path-to-write-command> >> <full-path-to-txt-output-file>)
However, when I check <full-path-to-txt-output-file>, I don't see the file being written to.
What could be the problem, if I can see that the cron job is executing? Thanks.
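One debugging step worth trying here (my suggestion, not part of the original question): the >> redirection only captures stdout, so if the command fails, its error output is mailed to root or lost instead of landing in the file. Capturing stderr as well often reveals the problem:
*/5 * * * * root <full-path-to-write-command> >> <full-path-to-txt-output-file> 2>&1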

Where does dockerized jetty store its logs?

I'm packaging a project into a Docker Jetty image and I'm trying to access the logs, but there are no logs.
Dockerfile
FROM jetty:9.2.10
MAINTAINER Me "me@me.com"
ADD ./target/abc-1.0.0 /var/lib/jetty/webapps/ROOT
EXPOSE 8080
Bash script to start docker image:
docker pull me/abc
docker stop abc
docker rm abc
docker run --name='abc' -d -p 10908:8080 -v /var/log/abc:/var/log/jetty me/abc:latest
The image is running, but I'm not seeing any jetty logs in /var/log.
I've tried a docker run -it jetty bash, but I'm not seeing any jetty logs in /var/log there either.
Am I missing a parameter to make jetty output logs, or does it output them somewhere other than /var/log/jetty?
Why you aren't seeing logs
Two things to note:
Running docker run -it jetty bash will start a new container instead of connecting you to your existing daemonized container.
And it would invoke bash instead of starting jetty in that container, so it won't help you to get logs from either container.
So this interactive container won't help you in any case.
But also...
Jetty logs are disabled anyway
Also, you won't see the logs in the standard location (say, if you tried to use docker exec to read the logs, or to collect them in a volume), quite simply because the Jetty Dockerfile disables logging entirely.
If you look at the jetty:9.2.10 Dockerfile, you will see this line:
&& sed -i '/jetty-logging/d' etc/jetty.conf \
Which nicely removes the entire line referencing the jetty-logging.xml default logging configuration.
What to do then?
Reading logs with docker logs
Docker gives you access to the container's standard output.
After you did this:
docker run --name='abc' -d -p 10908:8080 -v /var/log/abc:/var/log/jetty me/abc:latest
You can simply do this:
docker logs abc
And be greeted with something similar to this:
Running Jetty:
2015-05-15 13:33:00.729:INFO::main: Logging initialized #2295ms
2015-05-15 13:33:02.035:INFO:oejs.SetUIDListener:main: Setting umask=02
2015-05-15 13:33:02.102:INFO:oejs.SetUIDListener:main: Opened ServerConnector@73ec519{HTTP/1.1}{0.0.0.0:8080}
2015-05-15 13:33:02.102:INFO:oejs.SetUIDListener:main: Setting GID=999
2015-05-15 13:33:02.106:INFO:oejs.SetUIDListener:main: Setting UID=999
2015-05-15 13:33:02.133:INFO:oejs.Server:main: jetty-9.2.10.v20150310
2015-05-15 13:33:02.170:INFO:oejdp.ScanningAppProvider:main: Deployment monitor [file:/var/lib/jetty/webapps/] at interval 1
2015-05-15 13:33:02.218:INFO:oejs.ServerConnector:main: Started ServerConnector@73ec519{HTTP/1.1}{0.0.0.0:8080}
2015-05-15 13:33:02.219:INFO:oejs.Server:main: Started #3785ms
Use docker help logs for more details.
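A few handy variations of the same command (standard docker logs flags, shown as a sketch): follow the output live, limit it to the last lines, or save it to a file on the host:
docker logs -f abc                  # follow the log output live
docker logs --tail 100 abc          # only the last 100 lines
docker logs abc > jetty.log 2>&1    # save stdout and stderr to a file on the host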
Customize
Obviously, your other option is to revert what the default jetty Dockerfile is doing, or to create your own dockerized Jetty.