Docker image creation for AWS logs agent - ERROR - amazon-web-services

Hi, I would like to create a Docker image with the AWS logs agent service. I have written the following script.
My Dockerfile:
FROM ubuntu:latest
ENV AWS_REGION ap-northeast-1
RUN apt-get update && apt-get install -y curl python python-pip \
&& rm -rf /var/lib/apt/lists/*
COPY awslogs.conf ./
RUN curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
RUN chmod +x ./awslogs-agent-setup.py
RUN ./awslogs-agent-setup.py --non-interactive --region ${AWS_REGION} --configfile ./awslogs.conf
RUN apt-get purge curl -y
RUN mkdir /var/log/awslogs
WORKDIR /var/log/awslogs
CMD /bin/sh /var/awslogs/bin/awslogs-agent-launcher.sh
********************* END OF FILE *********************
While building the image I get the following error:
Step 1 of 5: Installing pip ...DONE
Step 2 of 5: Downloading the latest CloudWatch Logs agent bits ... DONE
Step 5 of 5: Setting up agent as a daemon ...Traceback (most recent call last):
File "/awslogs-agent-setup.py", line 1272, in <module>
main()
File "/awslogs-agent-setup.py", line 1268, in main
setup.setup_artifacts()
File "/awslogs-agent-setup.py", line 827, in setup_artifacts
self.setup_daemon()
File "/awslogs-agent-setup.py", line 773, in setup_daemon
self.setup_agent_nanny()
File "/awslogs-agent-setup.py", line 764, in setup_agent_nanny
self.setup_cron_jobs()
File "/awslogs-agent-setup.py", line 734, in setup_cron_jobs
with open (nanny_cron_path, "w") as cron_fragment:
IOError: [Errno 2] No such file or directory: '/etc/cron.d/awslogs'
The command '/bin/sh -c python /awslogs-agent-setup.py -n -r eu-west-1 -c ./awslogs.conf.dummy' returned a non-zero code: 1
Please help me to fix this.

You are just missing the cron package in your Dockerfile. It doesn't matter whether cron is installed on your host system; the setup script needs it inside the image.
FROM ubuntu:latest
ENV AWS_REGION ap-northeast-1
RUN apt-get update && apt-get install -y curl python cron python-pip \
&& rm -rf /var/lib/apt/lists/*
COPY awslogs.conf ./
RUN curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
RUN chmod +x ./awslogs-agent-setup.py
RUN ./awslogs-agent-setup.py --non-interactive --region ${AWS_REGION} --configfile ./awslogs.conf
RUN apt-get purge curl -y
RUN mkdir /var/log/awslogs
WORKDIR /var/log/awslogs
CMD /bin/sh /var/awslogs/bin/awslogs-agent-launcher.sh
After adding the cron package to the Dockerfile, everything works fine:
Step 1 of 5: Installing pip ...DONE
Step 2 of 5: Downloading the latest CloudWatch Logs agent bits ... DONE
Step 5 of 5: Setting up agent as a daemon ...DONE
------------------------------------------------------
- Configuration file successfully saved at: /var/awslogs/etc/awslogs.conf
- You can begin accessing new log events after a few moments at https://console.aws.amazon.com/cloudwatch/home?region=us-west-2#logs:
- You can use 'sudo service awslogs start|stop|status|restart' to control the daemon.
- To see diagnostic information for the CloudWatch Logs Agent, see /var/log/awslogs.log
- You can rerun interactive setup using 'sudo python ./awslogs-agent-setup.py --region us-west-2 --only-generate-config'
------------------------------------------------------
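If you want to sanity-check the result, here is a minimal sketch of building and running the image (the awslogs-agent image name is arbitrary, and this assumes your AWS credentials are exported in the host shell so docker run -e can forward them):
# Build the image from the Dockerfile above
docker build -t awslogs-agent .
# Run it in the background, forwarding credentials from the host environment
# (-e VAR with no value copies that variable from the host shell)
docker run -d --name awslogs-agent \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  awslogs-agent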

Have you checked that a cron daemon is actually installed on your system?
Alternatively, you can try installing the agent manually with pip3.5 install awscli-cwlogs, or run apt-get update && apt-get install -y python-pip libpython-dev first.
You can refer to this question.

Related

Files downloaded with user-data deleted?

I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
sudo yum install python3-pip -y
sudo pip3 install docker-compose
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permission to download them), e.g.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
So it's copying all the files correctly. The thing is that when I SSH into the instance, I don't have any files inside:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All the commands worked as expected: the yum installs, Python, Docker, etc. were all successfully installed, but there are no files.
Are the files deleted after the bootstrap script runs?
thanks!
Try copying the files to a specific path, then look there. As written, we don't know which directory the script is using: user-data scripts run as root (typically with / as the working directory), not as ec2-user, so the files will not land in /home/ec2-user.
Use the following form to copy to a specific path:
aws s3 cp s3://Bucket-name/Object /Path
Alternatively, add pwd to the script and echo its output so you can see which working directory the downloads actually went to.
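For example, here is a minimal sketch of the same download steps rewritten with absolute paths (the /home/ec2-user destination is my assumption; point it wherever you actually want the files):
#!/bin/bash
# Record the directory user data actually runs in (it runs as root, usually from /)
echo "user-data cwd: $(pwd)" >> /var/log/user-data-debug.log
# Download to explicit absolute paths instead of relying on the working directory
mkdir -p /home/ec2-user/content
aws s3 cp s3://jv-pocho/docker-compose.yaml /home/ec2-user/docker-compose.yaml
aws s3 sync s3://jv-pocho/images/ /home/ec2-user/content/images
# Hand the files to ec2-user so they are visible and usable over SSH
chown -R ec2-user:ec2-user /home/ec2-user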

Run docker commands in AWS Lambda Function

Goal
I'm curious to know if it's possible to run docker commands within AWS Lambda function invocations. Specifically, I'm running docker compose up -d to run one-off ECS tasks (see this aws article for more info). I know it's easily possible with AWS CodeBuild, but for my use case, where the workload duration is usually below 10 seconds, it would be more cost-effective to use Lambda.
AFAIK Docker-outside-of-Docker (DooD) is not available, since Lambda hosts cannot be configured to mount the host's Docker daemon socket into the Lambda function's container.
Attempts
I've tried the Docker-in-Docker (DinD) approach below, with no luck:
Lambda custom container image:
ARG FUNCTION_DIR="/function"
FROM python:buster as build-image
ARG FUNCTION_DIR
# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
apt-get install -y \
g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY ./* ${FUNCTION_DIR}
RUN pip install --target ${FUNCTION_DIR} -r requirements.txt
FROM python:buster
ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
RUN chmod 755 /usr/bin/aws-lambda-rie ./entrypoint.sh ./runner_install_docker.sh
RUN sh ./runner_install_docker.sh
ENTRYPOINT [ "./entrypoint.sh" ]
CMD [ "lambda_function.lambda_handler" ]
Contents of runner_install_docker.sh (the script that installs Docker):
#!/bin/bash
apt-get -y update
apt-get install -y \
software-properties-common build-essential \
apt-transport-https ca-certificates gnupg lsb-release curl sudo
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo chmod u+x /usr/bin/*
sudo chmod u+x /usr/local/bin/*
sudo apt-get clean
sudo rm -rf /var/lib/apt/lists/*
sudo rm -rf /tmp/*
When I run docker compose or other docker commands, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Docker isn't available inside the AWS Lambda runtime. Even if you built it into the custom container image, the Lambda function would need to run as a privileged Docker container for Docker-in-Docker to work, which is not something AWS Lambda supports.
Specifically I'm running docker compose up -d to run one-off ECS tasks
Instead of trying to do this with the docker-compose ECS functionality, you need to look at invoking an ECS RunTask command via one of the AWS SDKs.
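As a rough sketch, the equivalent call via the AWS CLI looks like this (the cluster, task definition, subnet, and security group values are placeholders; from inside a Lambda function you would make the same RunTask call through an AWS SDK such as boto3):
# Launch a one-off Fargate task directly, with no docker compose involved
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-task:1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'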

Dockerfile throwing error when I try to run it: "AH00111: Config variable ${APACHE_RUN_DIR} is not defined"

I am trying my hand at Docker.
I am trying to install apache2 into an Ubuntu image.
FROM ubuntu
RUN echo "welcome to yellow pages"
RUN apt-get update
RUN apt-get install -y tzdata
RUN apt-get install -y apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
RUN echo 'Hello, docker' > /var/www/index.html
ENTRYPOINT ["/usr/sbin/apache2"]
CMD ["-D", "FOREGROUND"]
I found a reference online.
I added the line "RUN apt-get install -y tzdata" because the build was prompting for tzdata options and stalling image creation.
Now when I run my image I get the error below:
[Thu Jan 07 09:43:57.213998 2021] [core:warn] [pid 1] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 80 of /etc/apache2/apache2.conf: DefaultRuntimeDir must be a valid directory, absolute or relative to ServerRoot
I am new to Docker and it's a bit of a task for me to understand.
Could anyone help me out?
This seems to be an Apache issue, not a Docker issue. Your conf has an error: the DefaultRuntimeDir directive points (via the undefined ${APACHE_RUN_DIR} variable) at a directory that does not exist in the container. Review your config file and ensure the directories specified there exist in the container.
You can poke around inside the container simply by running:
docker build -t my_image_name .
docker run -it --rm --entrypoint /bin/bash my_image_name
# now you are in your docker container, you can check if your directories exist
Without knowing your config, I would simply add an ENV plus one more RUN (I made this path up; you can change it to whatever you like):
ENV APACHE_RUN_DIR /var/lib/apache/runtime
RUN mkdir -p ${APACHE_RUN_DIR}
As a side note, I would also combine all the RUN instructions into a single one, like this:
RUN echo "welcome to yellow pages" \
&& apt-get update \
&& apt-get install -y tzdata apache2 \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /var/www \
&& echo 'Hello, docker' > /var/www/index.html
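Putting those pieces together, a minimal sketch of the complete Dockerfile could look like this (the APACHE_RUN_DIR path is still made up, and depending on the base image you may need to define further variables such as APACHE_PID_FILE the same way):
FROM ubuntu
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_RUN_DIR /var/lib/apache/runtime
# One layer: install packages, clean apt lists, create the runtime dir and test page
RUN apt-get update \
&& apt-get install -y tzdata apache2 \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p ${APACHE_RUN_DIR} \
&& echo 'Hello, docker' > /var/www/index.html
ENTRYPOINT ["/usr/sbin/apache2"]
CMD ["-D", "FOREGROUND"]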

sdkman does not install java in a dockerfile

I have this Dockerfile:
# We are going to start from the jhipster image
FROM jhipster/jhipster
# install as root
USER root
### Setup docker cli (don't need docker daemon) ###
# Install some packages
RUN apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
# Add Docker's official GPG key:
RUN ["/bin/bash", "-c", "set -o pipefail && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -"]
# Add a stable repository
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Setup aws credentials as environment variables
ENV AWS_ACCESS_KEY_ID "change it!"
ENV AWS_SECRET_ACCESS_KEY "change it!"
# noninteractive install for tzdata
ARG DEBIAN_FRONTEND=noninteractive
# set timezone for tzdata
ENV TZ=America/Sao_Paulo
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install the latest version of Docker Engine - Community and also aws cli
RUN apt-get update && apt-get install docker-ce docker-ce-cli containerd.io awscli -y
# change back to default user
USER jhipster
# install skd and java version 1.8
RUN curl -s "https://get.sdkman.io" | bash
RUN bash $HOME/.sdkman/bin/sdkman-init.sh
RUN bash -c "sdk install java 8.0.222.j9-adpt"
When I build an image from this Dockerfile, it fails on the last step with the message:
/bin/sh: 1: sdk: not found
When I install it on my local machine, sdkman (sdk) runs in bash. But this script calls it from sh, not bash. How can I make it call sdkman (sdk) from sh? What I actually want is to install a specific Java version through sdkman (sdk). Is there another way to do it?
For the sdk command to be available you need to run source sdkman-init.sh in the same shell.
Here is a working sample with Java 11 on CentOS:
FROM centos:latest
ARG CANDIDATE=java
ARG CANDIDATE_VERSION=11.0.6-open
ENV SDKMAN_DIR=/root/.sdkman
# update the image
RUN yum -y upgrade
# install requirements, install and configure sdkman
# see https://sdkman.io/usage for configuration options
RUN yum -y install curl ca-certificates zip unzip openssl which findutils && \
update-ca-trust && \
curl -s "https://get.sdkman.io" | bash && \
echo "sdkman_auto_answer=true" > $SDKMAN_DIR/etc/config && \
echo "sdkman_auto_selfupdate=false" >> $SDKMAN_DIR/etc/config
# Source sdkman to make the sdk command available and install candidate
RUN bash -c "source $SDKMAN_DIR/bin/sdkman-init.sh && sdk install $CANDIDATE $CANDIDATE_VERSION"
# Add candidate path to $PATH environment variable
ENV JAVA_HOME="$SDKMAN_DIR/candidates/java/current"
ENV PATH="$JAVA_HOME/bin:$PATH"
ENTRYPOINT ["/bin/bash", "-c", "source $SDKMAN_DIR/bin/sdkman-init.sh && \"$#\"", "-s"]
CMD ["sdk", "help"]
The problem is that every RUN command in a Dockerfile is executed in a new shell, so the functions defined by sdkman-init.sh are gone by the time the next RUN starts. You need to source the init script and call sdk within the same shell, like this:
RUN bash -c "source $HOME/.sdkman/bin/sdkman-init.sh && sdk install java 8.0.222.j9-adpt"
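The same applies to any later RUN line that needs the JDK. As a quick check (my addition, assuming the install above succeeded), a line like this should print the Java version during the build:
RUN bash -c "source $HOME/.sdkman/bin/sdkman-init.sh && java -version"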

Installing CPhalcon on an AWS Docker image

I have a Docker image that installs Phalcon. Here is the Dockerfile:
FROM ubuntu:trusty
MAINTAINER Fernando Mayo <fernando@tutum.co>, Feng Honglin <hfeng@tutum.co>
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
sudo apt-get -y install supervisor php5-dev libpcre3-dev gcc make php5-mysql git curl unzip apache2 libapache2-mod-php5 mysql-server php5-mysql pwgen php-apc php5-mcrypt php5-curl && \
echo "ServerName localhost" >> /etc/apache2/apache2.conf
# Add image configuration and scripts
ADD start-apache2.sh /start-apache2.sh
ADD start-mysqld.sh /start-mysqld.sh
ADD run.sh /run.sh
RUN chmod 755 /*.sh
ADD my.cnf /etc/mysql/conf.d/my.cnf
ADD supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD supervisord-mysqld.conf /etc/supervisor/conf.d/supervisord-mysqld.conf
ADD php.ini /etc/php5/cli/php.ini
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD 30-phalcon.ini /etc/php5/apache2/conf.d/30-phalcon.ini
ADD 30-phalcon.ini /etc/php5/cli/conf.d/30-phalcon.ini
#RUN rm -rd /var/www/html/*
#RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /var/www/html/cphalcon
#RUN chmod 755 /var/www/html/cphalcon/build/install
#CMD["/var/www/html/cphalcon/build/install"]
RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /usr/local/src/cphalcon
RUN cd /usr/local/src/cphalcon/build && ./install ;\
echo "extension=phalcon.so" > /etc/php5/mods-available/phalcon.ini ;\
php5enmod phalcon
RUN sudo service apache2 stop
RUN sudo service apache2 start
# Remove pre-installed database
RUN rm -rf /var/lib/mysql/*
# Add MySQL utils
ADD create_mysql_admin_user.sh /create_mysql_admin_user.sh
RUN chmod 755 /*.sh
# config to enable .htaccess
RUN a2enmod rewrite
# Copy over private key, and set permissions
ADD .ssh /root/.ssh
# Get aws stuff
RUN curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
RUN unzip awscli-bundle.zip
RUN ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
RUN rm -rd /var/www/html/*
RUN git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/Demo-Server /var/www/html
#Environment variables to configure php
ENV PHP_UPLOAD_MAX_FILESIZE 10M
ENV PHP_POST_MAX_SIZE 10M
# Add volumes for MySQL
VOLUME ["/etc/mysql", "/var/lib/mysql" ]
EXPOSE 80 3306
CMD ["/run.sh"]
When I run this Docker image locally it works fine, but when I run it on Elastic Beanstalk I get the error PHP Fatal error: Class 'Phalcon\Loader' not found. To debug this I checked phpinfo() both locally and on the AWS server. Locally it shows all of the Phalcon files installed, but on AWS I don't get any info about CPhalcon. How can the Docker image install Phalcon correctly when running on my local machine but not on Elastic Beanstalk?