I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
sudo yum install python3-pip -y
sudo pip3 install docker-compose
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permission to download them), e.g.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
so it's copying all the files correctly. The thing is that when I SSH into the instance, I don't see any files:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All commands worked as expected: the yum installs, Python, Docker, etc. were all successful, but there are no files.
Are the files deleted after the bootstrap script runs?
Thanks!
Try copying them to a specific path, then look for them there. As written, we don't know which path the script is using: user-data runs as root, so relative paths resolve against the script's working directory, not /home/ec2-user.
Use the following command with an explicit path:
aws s3 cp s3://Bucket-name/Object /Path
Otherwise, you can do one thing:
run the pwd command to get the current directory and print it with echo, so that you know the present working directory.
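For example, a minimal sketch (the bucket and file names are taken from the question above; the destination directory is an assumption, adjust it to wherever the files should actually live):
# Copy to an explicit, absolute destination instead of the current directory
aws s3 cp s3://jv-pocho/docker-compose.yaml /home/ec2-user/docker-compose.yaml
# And log where the script is actually running from
echo "user-data working directory: $(pwd)"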
Goal
I'm curious to know if it's possible to run docker commands within AWS Lambda Function invocations. Specifically I'm running docker compose up -d to run one-off ECS tasks (see this aws article for more info). I know it's easily possible with AWS CodeBuild but for my use case where the workload duration is usually below 10 seconds, it would be more cost effective to use Lambda.
AFAIK Docker DooD (Docker-outside-of-Docker) is not available, given that Lambda function hosts cannot be configured to mount the host's Docker daemon socket into the Lambda function's container.
Attempts
I've tried the following Docker DinD approach below with no luck:
Lambda custom container image:
ARG FUNCTION_DIR="/function"
FROM python:buster as build-image
ARG FUNCTION_DIR
# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
    apt-get install -y \
        g++ \
        make \
        cmake \
        unzip \
        libcurl4-openssl-dev
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY ./* ${FUNCTION_DIR}
RUN pip install --target ${FUNCTION_DIR} -r requirements.txt
FROM python:buster
ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
RUN chmod 755 /usr/bin/aws-lambda-rie ./entrypoint.sh ./runner_install_docker.sh
RUN sh ./runner_install_docker.sh
ENTRYPOINT [ "./entrypoint.sh" ]
CMD [ "lambda_function.lambda_handler" ]
Contents of runner_install_docker.sh (the script that installs Docker):
#!/bin/bash
apt-get -y update
apt-get install -y \
    software-properties-common build-essential \
    apt-transport-https ca-certificates gnupg lsb-release curl sudo
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo chmod u+x /usr/bin/*
sudo chmod u+x /usr/local/bin/*
sudo apt-get clean
sudo rm -rf /var/lib/apt/lists/*
sudo rm -rf /tmp/*
When I run docker compose or other docker commands, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Docker isn't available inside the AWS Lambda runtime. Even if you built it into the custom container, the Lambda function would need to run as a privileged docker container for docker-in-docker to work, which is not something supported by AWS Lambda.
Specifically I'm running docker compose up -d to run one-off ECS tasks
Instead of trying to do this with the docker-compose ECS functionality, you need to look at invoking an ECS RunTask command via one of the AWS SDKs.
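For example, a minimal sketch using the AWS CLI (the same parameters map onto each SDK's RunTask call; the cluster, task definition, subnet, and security group values below are placeholders):
# Launch a one-off Fargate task directly, without docker compose
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-task-def:1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"
From a Lambda function you would make the equivalent SDK call (e.g. run_task in boto3); RunTask returns as soon as the task is scheduled, so the Lambda invocation itself stays well within your 10-second budget.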
I'm having trouble getting a Docker container to update the images (like .png files) on my local system.
The container has a script in the Dockerfile that copies the images into a folder in the container. Then that folder gets copied into a new directory that is set up as a shared volume. This process is split between the Dockerfile for this container and a "command" entry in the docker-compose.yaml.
Everything seems to run fine; following the output of the copy command looks right. I don't get any errors, and once the command stops running the container stops.
I've tried destroying the container and image completely and recreating it, but I still see the old images in the application. I'm guessing that it's not overwriting the existing images, but I don't know why.
Dockerfile:
FROM ubuntu:18.04
# Install packages
RUN apt-get update
RUN apt-get install -y apt-utils
RUN apt-get install -y curl unzip
# Install AWS CLI
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
# Load AWS Access Keys
COPY ./config /root/.aws/
COPY ./credentials /root/.aws/
RUN chmod 600 /root/.aws/config /root/.aws/credentials
# Download media
RUN mkdir -p /var/media
RUN aws s3 cp --recursive s3://images/ /var/media
Snippet from the docker-compose.yaml:
version: '3.1'
services:
  client:
    [[other stuff here]]
  media:
    image: [[our own image stored with AWS]]
    command: 'cp -R -v -f /var/media/images/ /var/media-access/'
    volumes:
      - ./src/media:/var/media-access
  [[other stuff here]]
Any advice is appreciated!
I am trying to mount an EFS file system onto the /efs folder of my AWS EC2 instance, and I am using the following commands.
RUN sudo chmod -R 777 /efs
RUN sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 172.XX.X.XX:/ /efs
I am also getting this error message:
command '/bin/bash -c sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 172.XX.X.XX:/ /efs' returned a non-zero code: 32
I am doing this in my Dockerfile, using Ubuntu, and I am really hoping someone can help me with this.
Thank you!
Please take a look at the code sample below; it is confirmed working on Ubuntu 18. Note that the mount cannot happen inside a Dockerfile: docker build containers are not privileged, which is why your mount command exits with code 32. Run the mount on the EC2 host instead (for example, from user data):
# EFS
echo "INSTALLING EFS START"
cd /tmp/
sudo apt-get -y install binutils
git clone https://github.com/aws/efs-utils
cd efs-utils
./build-deb.sh
sudo apt-get -y install ./build/amazon-efs-utils*deb
cd /tmp/
sudo mkdir -p /mnt/efs
sudo mount -t efs ${ElasticFileSystemId}:/ /mnt/efs/
echo "INSTALLING EFS END"
I need to have LibreOffice installed on my web server. Since I'm using autoscaling with AWS Elastic Beanstalk, I need to install it on deployment. To do so, I am using .ebextensions files, but I can't get it to work. This is my config file in the .ebextensions folder:
commands:
  01-download-libreoffice:
    command: wget http://download.documentfoundation.org/libreoffice/stable/6.0.2/rpm/x86_64/LibreOffice_6.0.2_Linux_x86-64_rpm.tar.gz
  02-untar:
    command: sudo tar -xvf LibreOffice_6.0.2_Linux_x86-64_rpm.tar.gz
  03-install:
    command: |
      if [ ${APP_ENV} == "production" ]; then
        cd LibreOffice_6.0.2.1_Linux_x86-64_rpm/RPMS
        sudo yum localinstall *.rpm
      fi
  04-symlink:
    command: sudo ln -fs /opt/libreoffice6.0/program/soffice /usr/bin/soffice
I tried running these commands myself on my EC2 instance one after another as the root user, and everything worked. The only thing I might suspect: when I run the localinstall command, I need to confirm (there is a [y/n] prompt) to start the installation.
If this were the problem, I would still expect to find the zipped LibreOffice file on my server, or even the untarred LibreOffice files, but I can't find anything when I SSH into the EC2 instance after deployment.
There is no error message on deployment. Also, I can see that other .ebextensions scripts are running fine since some processes are running as asked in these scripts.
Any idea where the problem could be?
If it can be of any help, here is how I manage to install LibreOffice on my EC2 instances on deployment. This will install LibreOffice 5.4 in /opt/libreoffice5.4.
The following code is placed in this file: .ebextensions/01-libreoffice-setup.config
packages:
  yum:
    libXinerama.x86_64: []
    cups-libs: []
    dbus-glib: []
commands:
  01-download-libreoffice:
    command: wget http://download.documentfoundation.org/libreoffice/stable/5.4.6/rpm/x86_64/LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz
    cwd: /tmp
    test: "[ ! -f /tmp/LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz ]"
  02-untar:
    command: sudo tar -xvf LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz
    cwd: /tmp
    test: "[ ! -d /tmp/LibreOffice_5.4.6.2_Linux_x86-64_rpm ]"
  03-install:
    command: sudo yum localinstall *.rpm -y
    cwd: /tmp/LibreOffice_5.4.6.2_Linux_x86-64_rpm/RPMS
    test: "[ ! -d /opt/libreoffice5.4 ]"
I have a Dockerfile that installs Phalcon into a Docker image. Here is the Dockerfile:
FROM ubuntu:trusty
MAINTAINER Fernando Mayo <fernando@tutum.co>, Feng Honglin <hfeng@tutum.co>
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
    sudo apt-get -y install supervisor php5-dev libpcre3-dev gcc make php5-mysql git curl unzip apache2 libapache2-mod-php5 mysql-server php5-mysql pwgen php-apc php5-mcrypt php5-curl && \
    echo "ServerName localhost" >> /etc/apache2/apache2.conf
# Add image configuration and scripts
ADD start-apache2.sh /start-apache2.sh
ADD start-mysqld.sh /start-mysqld.sh
ADD run.sh /run.sh
RUN chmod 755 /*.sh
ADD my.cnf /etc/mysql/conf.d/my.cnf
ADD supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD supervisord-mysqld.conf /etc/supervisor/conf.d/supervisord-mysqld.conf
ADD php.ini /etc/php5/cli/php.ini
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD 30-phalcon.ini /etc/php5/apache2/conf.d/30-phalcon.ini
ADD 30-phalcon.ini /etc/php5/cli/conf.d/30-phalcon.ini
#RUN rm -rd /var/www/html/*
#RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /var/www/html/cphalcon
#RUN chmod 755 /var/www/html/cphalcon/build/install
#CMD["/var/www/html/cphalcon/build/install"]
RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /usr/local/src/cphalcon
RUN cd /usr/local/src/cphalcon/build && ./install ;\
    echo "extension=phalcon.so" > /etc/php5/mods-available/phalcon.ini ;\
    php5enmod phalcon
RUN sudo service apache2 stop
RUN sudo service apache2 start
# Remove pre-installed database
RUN rm -rf /var/lib/mysql/*
# Add MySQL utils
ADD create_mysql_admin_user.sh /create_mysql_admin_user.sh
RUN chmod 755 /*.sh
# config to enable .htaccess
RUN a2enmod rewrite
# Copy over private key, and set permissions
ADD .ssh /root/.ssh
# Get aws stuff
RUN curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
RUN unzip awscli-bundle.zip
RUN ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
RUN rm -rd /var/www/html/*
RUN git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/Demo-Server /var/www/html
#Environment variables to configure php
ENV PHP_UPLOAD_MAX_FILESIZE 10M
ENV PHP_POST_MAX_SIZE 10M
# Add volumes for MySQL
VOLUME ["/etc/mysql", "/var/lib/mysql" ]
EXPOSE 80 3306
CMD ["/run.sh"]
When I run this Docker image locally it works fine, but when I run it on Elastic Beanstalk I get the error: PHP Fatal error: Class 'Phalcon\Loader' not found. To debug this I checked phpinfo() both locally and on the AWS server. Locally it shows all of the phalcon files installed, but on AWS I don't get any info about CPhalcon. How could the Docker image install Phalcon correctly when running on my local machine but not on Elastic Beanstalk?