ECS Docker container won't start

I have a Docker container with this Dockerfile:
FROM node:8.1
RUN rm -fR /var/lib/apt/lists/*
RUN echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee /etc/apt/sources.list.d/webupd8team-java.list
RUN echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee -a /etc/apt/sources.list.d/webupd8team-java.list
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
RUN apt-get update
RUN echo debconf shared/accepted-oracle-license-v1-1 select true | \
debconf-set-selections
RUN echo debconf shared/accepted-oracle-license-v1-1 seen true | \
debconf-set-selections
RUN apt-get install -y oracle-java8-installer
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN mkdir -p /app
WORKDIR /app
# Install app dependencies
COPY package.json /app/
RUN npm install
# Bundle app source
COPY . /app
# Environment Variables
ENV PORT 8080
# start the SSH daemon service
RUN service ssh start
# create a non-root user & a home directory for them
RUN useradd --create-home --shell /bin/bash tunnel-user
# set their password
RUN echo 'tunnel-user:93wcBjsp' | chpasswd
# Copy the SSH key to authorized_keys
COPY tunnel.pub /app/
RUN mkdir -p /home/tunnel-user/.ssh
RUN cat tunnel.pub >> /home/tunnel-user/.ssh/authorized_keys
# Set permissions
RUN chown -R tunnel-user:tunnel-user /home/tunnel-user/.ssh
RUN chmod 0700 /home/tunnel-user/.ssh
RUN chmod 0600 /home/tunnel-user/.ssh/authorized_keys
# allow the tunnel-user to SSH into this machine
RUN echo 'AllowUsers tunnel-user' >> /etc/ssh/sshd_config
EXPOSE 8080
EXPOSE 22
CMD [ "npm", "start" ]
My ECS task has this definition. I'm using a role that has the AmazonEC2ContainerServiceforEC2Role policy attached.
When I try to start it as a task in my ECS cluster I get this error:
CannotStartContainerError: API error (500): driver failed programming external connectivity on endpoint ecs-ssh-4-ssh-8cc68dbfaa8edbdc0500 (387e024a87752293f51e5b62de9e2b26102d735e8da500c8e7fa5e1b4b4f0983): Error starting userland proxy: listen tcp 0.0.0
How do I fix this?
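The "Error starting userland proxy" part of the message generally means Docker could not bind the mapped host port because something already listens on it. A minimal diagnostic sketch, assuming shell access to the ECS container instance; checking port 22 first is an assumption (the host's own sshd is a common culprit), since the log line above is truncated before the port number:
# On the EC2 host backing the ECS cluster, see what already listens
# on the ports the task maps (22 and 8080 per the EXPOSE lines above):
sudo ss -tlnp | grep -E ':(22|8080) '
# If the host sshd holds :22, mapping container port 22 to host port 22
# fails with exactly this userland-proxy error; remapping to another
# host port (e.g. 2222 -> 22) in the task definition avoids the clash.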

Related

Trigger Workflow Job with commands in Linux (Cloud-init)

I'm trying to deploy my GitHub repository on my EC2 instance with a Terraform and cloud-init file. For the GitHub part I use GitHub Actions, where I have already made a workflow file for Node.js that runs correctly. Every time I create a new instance with the Terraform file and cloud-init, the workflow should build again to create a _work folder, which is important for further processes. My question is: how can I trigger this workflow build with Linux commands in YAML?
Cloud-init File:
#cloud-config
runcmd:
- apt-get update && apt-get install -y expect
- mkdir react
- cd react
- curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
- tar xzf ./actions-runner-linux-x64-2.301.1.tar.gz
- yes "" | ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
- sudo ./svc.sh install
- sudo ./svc.sh start
- yes "" | sudo apt install nginx
- cd _work
- cd react-deploy-aws
- cd react-deploy-aws
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server {listen 80 default_server;server_name _;location / {root /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws/build;try_files \$uri /index.html;}}" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws/build
I found this GitHub page, but I can't figure out how to apply it:
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions
The trigger should go right after this command (see the sketch below):
- yes "" | sudo apt install nginx

GitHub Actions Self-hosted Runner sibling container: permission denied... Docker daemon socket unix:///var/run/docker.sock

I need to execute on GPU hardware, so I have to create a self-hosted runner for GitHub Actions to execute my code. The self-hosted runner is hosted on my local machine (Ubuntu 20.04).
I'm running the self-hosted runner container locally with -v, binding the socket using: docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -e GITHUB_OWNER=<xxx> -e GITHUB_REPOSITORY=<xxxx> -e GITHUB_PAT=<xxxx>
This local self-hosted runner executes successfully until I try to build the second "project" container I need for my code. I get a permission issue with the Docker socket when I try to build the container, not when I run it. I'm about 70% certain that the -v bind mount used when running the self-hosted runner enables sibling containers rather than Docker-in-Docker (which I've read is discouraged these days).
Permission error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&target=&ulimits=null&version=1": dial unix /var/run/docker.sock: connect: permission denied
I've tried building the project container with -v /var/run/docker.sock:/var/run/docker.sock in the docker build command, but docker build does not accept -v. I've also tried the following approaches in the "project" Docker container:
Approach 1.
useradd -m cnncontainer && \
usermod -aG sudo cnncontainer && \
echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
curl -sSL https://get.docker.com/ | sh
usermod -aG docker cnncontainer
Approach 2.
sudo groupadd docker && \
sudo usermod -aG docker "$USER" &&\
newgrp docker
docker run hello-world
Approach 3.
sudo usermod -aG docker $USER
sudo setfacl --modify user:$USER:rw /var/run/docker.sock
docker run hello-world
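In the sibling-container setup, the usual root cause is that the bind-mounted socket keeps its host group ID, which matches no group the container user belongs to. A minimal entrypoint fragment that aligns them, as a sketch; it assumes the non-root user is github, as in the Dockerfile below, and that the final ./run.sh is launched through sg so the new membership is effective immediately (group changes otherwise apply only to new login sessions):
# Fragment for entrypoint.sh: align with the host socket's group ID.
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)
getent group "$SOCK_GID" >/dev/null || sudo groupadd -g "$SOCK_GID" sockgrp
SOCK_GRP=$(getent group "$SOCK_GID" | cut -d: -f1)
sudo usermod -aG "$SOCK_GRP" github
# Launch the runner under the refreshed group membership:
sg "$SOCK_GRP" -c ./run.sh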
GitHub actions self-hosted runner Dockerfile:
FROM debian:buster
#tensorflow/tensorflow:2.3.4-gpu - this image doesn't work either
ARG RUNNER_VERSION="2.298.2"
ENV GITHUB_PERSONAL_TOKEN ""
ENV GITHUB_OWNER ""
ENV GITHUB_REPOSITORY ""
RUN apt-get update \
&& apt-get install -y \
curl \
sudo \
git \
jq \
tar \
gnupg2 \
apt-transport-https \
ca-certificates \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -m github && \
usermod -aG sudo github && \
echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
#setup docker runner
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker github
USER github
WORKDIR /home/github
#install github actions cli
RUN curl -O -L https://github.com/actions/runner/releases/download/v$RUNNER_VERSION/actions-runner-linux-x64-$RUNNER_VERSION.tar.gz
RUN tar xzf ./actions-runner-linux-x64-$RUNNER_VERSION.tar.gz
RUN sudo ./bin/installdependencies.sh
COPY --chown=github:github entrypoint.sh ./entrypoint.sh
RUN sudo chmod u+x ./entrypoint.sh
ENTRYPOINT ["/home/github/entrypoint.sh"]```
Self-hosted runner entrypoint.sh:
#!/bin/sh
registration_url="https://api.github.com/repos/${GITHUB_OWNER}/${GITHUB_REPOSITORY}/actions/runners/registration-token"
echo "Requesting registration URL at '${registration_url}'"
payload=$(curl -sX POST -H "Authorization: token ${GITHUB_PAT}" ${registration_url})
export RUNNER_TOKEN=$(echo $payload | jq .token --raw-output)
./config.sh \
--name $(hostname) \
--token ${RUNNER_TOKEN} \
--url https://github.com/${GITHUB_OWNER}/${GITHUB_REPOSITORY} \
--work ${RUNNER_WORKDIR} \
--unattended \
--replace
remove() {
./config.sh remove --unattended --token "${RUNNER_TOKEN}"
}
trap 'remove; exit 130' INT
trap 'remove; exit 143' TERM
./run.sh "$*" & #changed from run.sh
### BEGIN
sudo systemctl start docker
sudo systemctl enable docker
export RUNNER_ALLOW_RUNASROOT=true
export AGENT_TOOLSDIRECTORY=/opt/hostedtoolcache
mkdir actions-runner
sudo mkdir /opt/hostedtoolcache
cd actions-runner
# Make /actions-runner/_work
mkdir _work
# Link /opt/hostedtoolcache as /actions-runner/_work/_tool
ln -s /opt/hostedtoolcache _work/_tool
### END
wait $!
Dockerfile I want to run in/with the self-hosted runner:
FROM tensorflow/tensorflow:2.3.4-gpu
RUN mkdir -p /app
COPY . main.py /app/
WORKDIR /app
RUN sudo apt install -y make && sudo apt-get install python3-pip -y
RUN pip install -r requirements.txt
RUN sudo usermod -aG docker $USER
RUN sudo setfacl --modify user:$USER:rw /var/run/docker.sock
RUN docker run hello-world
CMD [ "main.py" ]
ENTRYPOINT [ "python" ]

How to run command inside Docker container

I'm new to Docker and I'm trying to understand the following setup.
I want to debug my docker container to see if it is receiving AWS credentials when running as a task in Fargate. It is suggested that I run the command:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
But I'm not sure how to do so.
The setup uses Gitlab CI to build and push the docker container to AWS ECR.
Here is the dockerfile:
FROM rocker/tidyverse:3.6.3
RUN apt-get update && \
apt-get install -y openjdk-11-jdk && \
apt-get install -y liblzma-dev && \
apt-get install -y libbz2-dev && \
apt-get install -y libnetcdf-dev
COPY ./packrat/packrat.lock /home/project/packrat/
COPY initiate.R /home/project/
COPY hello.Rmd /home/project/
RUN install2.r packrat
RUN which nc-config
RUN Rscript -e 'packrat::restore(project = "/home/project/")'
RUN echo '.libPaths("/home/project/packrat/lib/x86_64-pc-linux-gnu/3.6.3")' >> /usr/local/lib/R/etc/Rprofile.site
WORKDIR /home/project/
CMD Rscript initiate.R
Here is the gitlab-ci.yml file:
image: docker:stable
variables:
  ECR_PATH: XXXXX.dkr.ecr.eu-west-2.amazonaws.com/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
services:
  - docker:dind
stages:
  - build
  - deploy
before_script:
  - docker info
  - apk add --no-cache curl jq py-pip
  - pip install awscli
  - chmod +x ./build_and_push.sh
build-rmarkdown-task:
  stage: build
  script:
    - export REPO_NAME=edelta/rmarkdown_report
    - export BUILD_DIR=rmarkdown_report
    - export REPOSITORY_URL=$ECR_PATH$REPO_NAME
    - ./build_and_push.sh
  when: manual
Here is the build and push script:
#!/bin/sh
$(aws ecr get-login --no-include-email --region eu-west-2)
docker pull $REPOSITORY_URL || true
docker build --cache-from $REPOSITORY_URL -t $REPOSITORY_URL ./$BUILD_DIR/
docker push $REPOSITORY_URL
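As an aside, the aws ecr get-login subcommand only exists in AWS CLI v1 (which pip install awscli provides); it was removed in v2. The v2 equivalent, as a sketch using the same region and the registry host from ECR_PATH, would be:
aws ecr get-login-password --region eu-west-2 | docker login --username AWS --password-stdin XXXXX.dkr.ecr.eu-west-2.amazonaws.com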
I'd like to run this command on my docker container:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
How do I run this command on container startup in Fargate?
To run a command inside a Docker container, you need to be inside the container.
Step 1: Find the container ID or name of the container you want to debug:
docker ps
A list of containers will be displayed; pick one of them.
Step 2: Run the following command:
docker exec -it <containerName/containerId> bash
After a few seconds you will be inside the container with an interactive bash shell.
For more info, read https://docs.docker.com/engine/reference/commandline/exec/
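Combining the two steps for this specific check, a one-liner sketch (the container name is a placeholder; the single quotes keep the variable from expanding on the host instead of inside the container):
docker exec -it <containerName> sh -c 'curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI'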
Short answer: just replace the CMD.
CMD ["sh", "-c", "curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI && Rscript initiate.R"]
Long answer: you need to replace the CMD of the Dockerfile, as it currently only runs the Rscript.
You have two options: add an entrypoint or change the CMD; for CMD, see above.
Create entrypoint.sh and run the curl only when you want to debug.
#!/bin/sh
if [ "${IS_DEBUG}" = "true" ]; then
  echo "Container running in debug mode"
  curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
  # uncomment the line below if you still want to execute the R script
  # exec "$@"
else
  exec "$@"
fi
Changes required on the Dockerfile side:
WORKDIR /home/project/
ENV IS_DEBUG=true
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["Rscript", "initiate.R"]
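To exercise the toggle, something like the following works locally; in Fargate, the same IS_DEBUG variable would be set in the task definition's environment section. The image name here is a placeholder, not something from the pipeline above:
# Debug run: prints the credentials response instead of starting the report
docker run --rm -e IS_DEBUG=true my-rmarkdown-image
# Normal run: falls through to the CMD (Rscript initiate.R)
docker run --rm -e IS_DEBUG=false my-rmarkdown-image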

Cloud sql proxy not working from docker container

My application runs in a Docker container deployed with Google Compute Engine instance groups with autoscaling enabled.
The problem I am facing is connecting to the MySQL instance from the autoscaled compute instances; it is not working as expected.
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y software-properties-common && \
...install other extensions
RUN curl -sS https://getcomposer.org/installer | \
php -- --install-dir=/usr/bin/ --filename=composer
COPY . /var/www/html
WORKDIR /var/www/html
RUN composer install
ADD nginx.conf/default /etc/nginx/sites-available/default
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
RUN mkdir /cloudsql
RUN chmod 777 /cloudsql
RUN chmod 777 -R storage bootstrap/cache
EXPOSE 80
CMD service php7.1-fpm start && nginx -g "daemon off;" && ./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=file.json &
The last part, ./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=file.json &, is not executed when I run my container.
If I run ./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=file.json & inside the container (entering it via a docker command), it works, but when I close the terminal it stops working again.
I even tried to run it in the background, but no luck.
Anyone have an idea?
This has been fixed as follows:
Create a start.sh file and move all the commands into it.
After starting the SQL proxy, sleep 10, then start nginx and PHP.
Now it works as expected.
Dockerfile
FROM ubuntu:16.04
...other command
ADD start.sh /
RUN chmod +x /start.sh
EXPOSE 80
CMD ["/start.sh"]
and this is start.sh file
#!/bin/sh
# start.sh
./cloud_sql_proxy -dir=/cloudsql -instances=<connectionname>=tcp:0.0.0.0:3306 -credential_file=<file>.json &
sleep 10
service php7.1-fpm start
nginx -g "daemon off;"

Installing CPhalcon on an AWS Docker image

I have a Dockerfile that installs Phalcon into a Docker image. Here is the Dockerfile:
FROM ubuntu:trusty
MAINTAINER Fernando Mayo <fernando@tutum.co>, Feng Honglin <hfeng@tutum.co>
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
sudo apt-get -y install supervisor php5-dev libpcre3-dev gcc make php5-mysql git curl unzip apache2 libapache2-mod-php5 mysql-server php5-mysql pwgen php-apc php5-mcrypt php5-curl && \
echo "ServerName localhost" >> /etc/apache2/apache2.conf
# Add image configuration and scripts
ADD start-apache2.sh /start-apache2.sh
ADD start-mysqld.sh /start-mysqld.sh
ADD run.sh /run.sh
RUN chmod 755 /*.sh
ADD my.cnf /etc/mysql/conf.d/my.cnf
ADD supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD supervisord-mysqld.conf /etc/supervisor/conf.d/supervisord-mysqld.conf
ADD php.ini /etc/php5/cli/php.ini
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD 30-phalcon.ini /etc/php5/apache2/conf.d/30-phalcon.ini
ADD 30-phalcon.ini /etc/php5/cli/conf.d/30-phalcon.ini
#RUN rm -rd /var/www/html/*
#RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /var/www/html/cphalcon
#RUN chmod 755 /var/www/html/cphalcon/build/install
#CMD["/var/www/html/cphalcon/build/install"]
RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /usr/local/src/cphalcon
RUN cd /usr/local/src/cphalcon/build && ./install ;\
echo "extension=phalcon.so" > /etc/php5/mods-available/phalcon.ini ;\
php5enmod phalcon
RUN sudo service apache2 stop
RUN sudo service apache2 start
# Remove pre-installed database
RUN rm -rf /var/lib/mysql/*
# Add MySQL utils
ADD create_mysql_admin_user.sh /create_mysql_admin_user.sh
RUN chmod 755 /*.sh
# config to enable .htaccess
RUN a2enmod rewrite
# Copy over private key, and set permissions
ADD .ssh /root/.ssh
# Get aws stuff
RUN curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
RUN unzip awscli-bundle.zip
RUN ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
RUN rm -rd /var/www/html/*
RUN git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/Demo-Server /var/www/html
#Environment variables to configure php
ENV PHP_UPLOAD_MAX_FILESIZE 10M
ENV PHP_POST_MAX_SIZE 10M
# Add volumes for MySQL
VOLUME ["/etc/mysql", "/var/lib/mysql" ]
EXPOSE 80 3306
CMD ["/run.sh"]
When I run this Docker image locally it works fine, but when I run it on Elastic Beanstalk I get the error: PHP Fatal error: Class 'Phalcon\Loader' not found. To debug this I checked phpinfo() both locally and on the AWS server. Locally it shows all of the phalcon files installed, but on AWS I don't get any info about CPhalcon. How could the Docker image install Phalcon correctly when running on my local machine but not on Elastic Beanstalk?
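A quick way to narrow this down, as a sketch assuming shell access to the Elastic Beanstalk host (e.g. via eb ssh) and with the container name as a placeholder:
# Is the extension compiled and enabled in the deployed container?
docker exec -it <container> php -m | grep -i phalcon
# Which ini files does PHP actually load? The CLI and Apache SAPIs read
# different conf.d directories, as the two 30-phalcon.ini copies above suggest.
docker exec -it <container> php --ini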