Run docker commands in AWS Lambda Function - amazon-web-services

Goal
I'm curious to know if it's possible to run docker commands within AWS Lambda function invocations. Specifically, I'm running docker compose up -d to run one-off ECS tasks (see this aws article for more info). I know it's easily possible with AWS CodeBuild, but for my use case, where the workload duration is usually below 10 seconds, it would be more cost-effective to use Lambda.
AFAIK Docker DooD (Docker-outside-of-Docker) is not an option, since Lambda function hosts cannot be configured to mount the host's docker daemon socket into the Lambda function's container.
Attempts
I've tried the Docker DinD (Docker-in-Docker) approach below with no luck:
Lambda custom container image:
ARG FUNCTION_DIR="/function"
FROM python:buster as build-image
ARG FUNCTION_DIR
# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
apt-get install -y \
g++ \
make \
cmake \
unzip \
libcurl4-openssl-dev
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY ./* ${FUNCTION_DIR}
RUN pip install --target ${FUNCTION_DIR} -r requirements.txt
FROM python:buster
ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
RUN chmod 755 /usr/bin/aws-lambda-rie ./entrypoint.sh ./runner_install_docker.sh
RUN sh ./runner_install_docker.sh
ENTRYPOINT [ "./entrypoint.sh" ]
CMD [ "lambda_function.lambda_handler" ]
Contents of runner_install_docker.sh (script that installs docker):
#!/bin/bash
apt-get -y update
apt-get install -y \
software-properties-common build-essential \
apt-transport-https ca-certificates gnupg lsb-release curl sudo
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo chmod u+x /usr/bin/*
sudo chmod u+x /usr/local/bin/*
sudo apt-get clean
sudo rm -rf /var/lib/apt/lists/*
sudo rm -rf /tmp/*
When I run docker compose or other docker commands, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Docker isn't available inside the AWS Lambda runtime. Even if you built it into the custom container, the Lambda function would need to run as a privileged docker container for docker-in-docker to work, which is not something supported by AWS Lambda.
Specifically I'm running docker compose up -d to run one-off ECS tasks
Instead of trying to do this with the docker-compose ECS integration, you should look at invoking the ECS RunTask API via one of the AWS SDKs.
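For example, the Lambda handler itself can call RunTask through boto3. A minimal sketch, assuming a Fargate task; the cluster name, task definition, subnet and security group IDs below are placeholders, not values from the question:
import boto3

ecs = boto3.client("ecs")

def lambda_handler(event, context):
    # Start a one-off task instead of shelling out to docker compose.
    response = ecs.run_task(
        cluster="my-cluster",                   # placeholder cluster name
        taskDefinition="my-task-definition",    # placeholder task definition
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],      # placeholder
                "securityGroups": ["sg-0123456789abcdef0"],   # placeholder
                "assignPublicIp": "ENABLED",
            }
        },
    )
    return {"taskArn": response["tasks"][0]["taskArn"]}
The Lambda's execution role would also need ecs:RunTask permission (plus iam:PassRole for the task's roles), and since RunTask only starts the task and returns immediately, the Lambda invocation itself finishes quickly.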

Related

GitHub Actions Self-hosted Runner sibling container: permission denied... Docker daemon socket unix:///var/run/docker.sock

I need to execute on GPU hardware, so I have to create a self-hosted runner for GitHub Actions to execute my code. The self-hosted runner is hosted on my local machine (Ubuntu 20.04).
I'm running the self-hosted runner container locally with -v, binding the socket using: docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -e GITHUB_OWNER=<xxx> -e GITHUB_REPOSITORY=<xxxx> -e GITHUB_PAT=<xxxx>
This local self-hosted runner executes successfully until I try to build the second "project" container I need for my project code. I get a permission issue with the docker sock when I try to build the container, not when I run it. I'm about 70% certain that the -v socket binding used when running the self-hosted runner enables sibling containers rather than Docker-in-Docker (which I've read isn't cool anymore).
Permission error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&target=&ulimits=null&version=1": dial unix /var/run/docker.sock: connect: permission denied
I've tried adding -v /var/run/docker.sock:/var/run/docker.sock to the docker build command, but docker build doesn't accept -v. I've also tried the following approaches in the "project" docker container:
Approach 1.
useradd -m cnncontainer && \
usermod -aG sudo cnncontainer && \
echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
curl -sSL https://get.docker.com/ | sh
usermod -aG docker cnncontainer
Approach 2.
sudo groupadd docker && \
sudo usermod -aG docker "$USER" &&\
newgrp docker
docker run hello-world
Approach 3.
sudo usermod -aG docker $USER
sudo setfacl --modify user:$USER:rw /var/run/docker.sock
docker run hello-world
GitHub Actions self-hosted runner Dockerfile:
FROM debian:buster
#tensorflow/tensorflow:2.3.4-gpu - this image doesn't work either
ARG RUNNER_VERSION="2.298.2"
ENV GITHUB_PERSONAL_TOKEN ""
ENV GITHUB_OWNER ""
ENV GITHUB_REPOSITORY ""
RUN apt-get update \
&& apt-get install -y \
curl \
sudo \
git \
jq \
tar \
gnupg2 \
apt-transport-https \
ca-certificates \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -m github && \
usermod -aG sudo github && \
echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
#setup docker runner
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker github
USER github
WORKDIR /home/github
#install github actions cli
RUN curl -O -L https://github.com/actions/runner/releases/download/v$RUNNER_VERSION/actions-runner-linux-x64-$RUNNER_VERSION.tar.gz
RUN tar xzf ./actions-runner-linux-x64-$RUNNER_VERSION.tar.gz
RUN sudo ./bin/installdependencies.sh
COPY --chown=github:github entrypoint.sh ./entrypoint.sh
RUN sudo chmod u+x ./entrypoint.sh
ENTRYPOINT ["/home/github/entrypoint.sh"]```
Self-hosted runner entrypoint.sh:
#!/bin/sh
registration_url="https://api.github.com/repos/${GITHUB_OWNER}/${GITHUB_REPOSITORY}/actions/runners/registration-token"
echo "Requesting registration URL at '${registration_url}'"
payload=$(curl -sX POST -H "Authorization: token ${GITHUB_PAT}" ${registration_url})
export RUNNER_TOKEN=$(echo $payload | jq .token --raw-output)
./config.sh \
--name $(hostname) \
--token ${RUNNER_TOKEN} \
--url https://github.com/${GITHUB_OWNER}/${GITHUB_REPOSITORY} \
--work ${RUNNER_WORKDIR} \
--unattended \
--replace
remove() {
./config.sh remove --unattended --token "${RUNNER_TOKEN}"
}
trap 'remove; exit 130' INT
trap 'remove; exit 143' TERM
./run.sh "$*" & #changed from run.sh
### BEGIN
sudo systemctl start docker
sudo systemctl enable docker
export RUNNER_ALLOW_RUNASROOT=true
export AGENT_TOOLSDIRECTORY=/opt/hostedtoolcache
mkdir actions-runner
sudo mkdir /opt/hostedtoolcache
cd actions-runner
# Make /actions-runner/_work
mkdir _work
# Link /opt/hostedtoolcache as /actions-runner/_work/_tool
ln -s /opt/hostedtoolcache _work/_tool
### END
wait $!
Dockerfile I want to run in/with the self-hosted runner:
FROM tensorflow/tensorflow:2.3.4-gpu
RUN mkdir -p /app
COPY . main.py /app/
WORKDIR /app
RUN sudo apt install -y make && sudo apt-get install python3-pip -y
RUN pip install -r requirements.txt
RUN sudo usermod -aG docker $USER
RUN sudo setfacl --modify user:$USER:rw /var/run/docker.sock
RUN docker run hello-world
CMD [ "main.py" ]
ENTRYPOINT [ "python" ]

How to run command inside Docker container

I'm new to Docker and I'm trying to understand the following setup.
I want to debug my docker container to see if it is receiving AWS credentials when running as a task in Fargate. It is suggested that I run the command:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
But I'm not sure how to do so.
The setup uses Gitlab CI to build and push the docker container to AWS ECR.
Here is the Dockerfile:
FROM rocker/tidyverse:3.6.3
RUN apt-get update && \
apt-get install -y openjdk-11-jdk && \
apt-get install -y liblzma-dev && \
apt-get install -y libbz2-dev && \
apt-get install -y libnetcdf-dev
COPY ./packrat/packrat.lock /home/project/packrat/
COPY initiate.R /home/project/
COPY hello.Rmd /home/project/
RUN install2.r packrat
RUN which nc-config
RUN Rscript -e 'packrat::restore(project = "/home/project/")'
RUN echo '.libPaths("/home/project/packrat/lib/x86_64-pc-linux-gnu/3.6.3")' >> /usr/local/lib/R/etc/Rprofile.site
WORKDIR /home/project/
CMD Rscript initiate.R
Here is the gitlab-ci.yml file:
image: docker:stable
variables:
  ECR_PATH: XXXXX.dkr.ecr.eu-west-2.amazonaws.com/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
services:
  - docker:dind
stages:
  - build
  - deploy
before_script:
  - docker info
  - apk add --no-cache curl jq py-pip
  - pip install awscli
  - chmod +x ./build_and_push.sh
build-rmarkdown-task:
  stage: build
  script:
    - export REPO_NAME=edelta/rmarkdown_report
    - export BUILD_DIR=rmarkdown_report
    - export REPOSITORY_URL=$ECR_PATH$REPO_NAME
    - ./build_and_push.sh
  when: manual
Here is the build and push script:
#!/bin/sh
$(aws ecr get-login --no-include-email --region eu-west-2)
docker pull $REPOSITORY_URL || true
docker build --cache-from $REPOSITORY_URL -t $REPOSITORY_URL ./$BUILD_DIR/
docker push $REPOSITORY_URL
I'd like to run this command on my docker container:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
How do I run this command on container startup in Fargate?
To run a command inside a docker container, you need to be inside the docker container.
Step 1: Find the container ID / container name that you want to debug:
docker ps
A list of containers will be displayed; pick one of them.
Step 2: Run the following command:
docker exec -it <containerName/containerId> bash
Wait a few seconds and you will be inside the docker container with an interactive bash shell.
For more info read https://docs.docker.com/engine/reference/commandline/exec/
Short answer: just replace the CMD.
CMD ["sh", "-c", "curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI && Rscript initiate.R"]
Long answer: you need to replace the CMD of the Dockerfile, as it currently only runs the Rscript.
You have two options: add an entrypoint or change the CMD. For the CMD option, check above.
For the entrypoint option, create an entrypoint.sh that runs the curl only when you want to debug:
#!/bin/sh
if [ "${IS_DEBUG}" = true ]; then
  echo "Container running in debug mode"
  curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
  # uncomment the line below if you still want to execute the R script.
  # exec "$@"
else
  exec "$@"
fi
Changes required on the Dockerfile side:
WORKDIR /home/project/
ENV IS_DEBUG=true
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD Rscript initiate.R

I am not able to execute the next commands after the localstack --host command

FROM ubuntu:18.04
RUN apt-get update -y && \
apt-get install -y apt-utils && \
apt-get install -y python3-pip python3-dev \
pypy-setuptools
COPY . .
WORKDIR .
RUN pip3 install boto3
RUN pip3 install awscli
RUN apt-get install libsasl2-dev
ENV HOST_TMP_FOLDER=/tmp/localstack
RUN apt-get install -y git
RUN apt-get install -y npm
RUN mkdir -p .localstacktmp
ENV TMPDIR=.localstacktmp
RUN pip3 install localstack[full]
RUN SERVICES=s3,lambda,es DEBUG=1 localstack start --host
WORKDIR ./boto3Tools
ENTRYPOINT [ "python3" ]
CMD [ "script.py" ]
You can't start services in a Dockerfile.
In your case what's happening is that your Dockerfile is running RUN localstack start. That goes ahead and starts up the selected set of services and stays running, waiting for connections. Meanwhile, the docker build is waiting for the command you launched to finish before it moves on to the next step.
The usual answer to this is to start servers and clients in separate containers (or start a server in a container and run clients directly from your host). In this case, there is already a localstack/localstack Docker image and a prebuilt Docker Compose setup, so you can just run it:
curl -LO https://github.com/localstack/localstack/raw/master/docker-compose.yml
docker-compose up
The localstack GitHub repo has more information on using it.
If you wanted to use a Boto-based application with this, the easiest way is to add it to the same docker-compose.yml file (or, conversely, add Localstack to the Compose setup you already have). At that point you can use normal Docker inter-container communication to reach the mock AWS, but you have to configure this in your code:
s3 = boto3.client('s3',
                  endpoint_url='http://localstack:4566')
You have to make similar changes anyway to use localstack, so the only difference is the hostname you're setting.
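As a sketch of that, you can make the endpoint configurable so the same code runs against Localstack inside Compose and against real AWS elsewhere; the AWS_ENDPOINT_URL variable name here is just an illustrative choice, not something Localstack or boto3 requires:
import os
import boto3

# e.g. AWS_ENDPOINT_URL=http://localstack:4566 set in docker-compose.yml;
# when the variable is unset, boto3 falls back to the real AWS endpoints.
endpoint_url = os.environ.get("AWS_ENDPOINT_URL")

s3 = boto3.client("s3", endpoint_url=endpoint_url)
print([b["Name"] for b in s3.list_buckets().get("Buckets", [])])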

How to run the bash when we trigger docker run command without -it?

I have a Dockerfile as follows:
FROM centos
RUN mkdir work
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip
RUN pip install pandas
RUN pip install boto3
RUN pip install pynt
WORKDIR ./work
CMD ["bash"]
where I am installing some basic dependencies.
Now when I run
docker run imagename
it does nothing, but when I run
docker run -it imageName
I land in the bash shell. But I want to get into the bash shell as soon as I trigger the run command, without any extra parameters.
I am using this docker container in AWS CodeBuild, where I can't specify any parameters like -it, but I want to execute my code in the docker container itself.
Is it possible to modify the CMD/ENTRYPOINT in such a way that when running the docker image I land right inside the container?
I checked your container; it will not even build due to the missing pip. So I modified it a bit so that it at least builds:
FROM centos
RUN mkdir glue
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip python3-pip
RUN pip3 install pandas
RUN pip3 install boto3
RUN pip3 install pynt
WORKDIR ./glue
Build it using, e.g.:
docker build . -t glue
Then you can run command in it using for example the following syntax:
docker run --rm glue bash -c "mkdir a; ls -a; pwd"
I use --rm as I don't want to keep the container.
Hope this helps.
We cannot log in to the docker container directly.
If you want to run specific commands when the container starts in detached mode, you can put them in the CMD or ENTRYPOINT of the Dockerfile.
If you want to get into the shell directly, you can run
docker run -it imageName
or
docker run imageName bash -c "ls -ltr;pwd"
and it will return the output.
If you have triggered the run command without the -it params, then you can get into the container using:
docker exec -it <containerName/containerId> bash
and you will land in the shell.
Now, if you are using AWS CodeBuild custom images and are wondering how commands can be submitted to the container, you have to put your commands into the build_spec.yml file, under the pre_build, build, or post_build phases, and those commands will be run in the docker container.
build_spec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - pip install boto3 #or any prebuild configuration
  build:
    commands:
      - spark-submit job.py
  post_build:
    commands:
      - rm -rf /tmp/*
More about build_spec here

Unable to resolve AWS account to use when running CDK in a docker container

I tried to run cdk inside a docker container. Everything works fine until I try to deploy using the command:
cdk deploy myStack --profile testing --require-approval never
Error
❌ MyStack failed: Error: Unable to resolve AWS account to use. It must be either configured when you define your CDK or through the environment
I have created both the config and credentials files under the docker container's /root/.aws/ folder, since that matches ~/.aws.
I use this setup on my laptop and it works fine. On my laptop, those two files are under /Users/<my user name>/.aws.
My Dockerfile:
FROM openjdk:8-jdk-slim
ARG MAVEN_VERSION=3.6.3
ARG USER_HOME_DIR="/root"
ARG SHA=c35a1803a6e70a126e80b2b3ae33eed961f83ed74d18fcd16909b2d44d7dada3203f1ffe726c17ef8dcca2dcaa9fca676987befeadc9b9f759967a8cb77181c0
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN apt-get update && \
apt-get install -y \
curl procps \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
&& echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
RUN apt-get update
RUN apt-get -y install curl gnupg
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get -y install nodejs
RUN npm install
RUN node -v
RUN npm -v
RUN npm install -g aws-cdk
RUN mkdir /usr/local/TestingCDK;
COPY ./src /usr/local/TestingCDK/src/
COPY pom.xml /usr/local/TestingCDK/
COPY cdk.json /usr/local/TestingCDK/
RUN cd /usr/local/TestingCDK/ && mvn compile
RUN mkdir ~/.aws
RUN cd ~ && pwd
COPY config /root/.aws/
COPY credentials /root/.aws/
CMD cdk doctor ; cat ~/.aws/config ; cd /usr/local/TestingCDK/ ; cdk deploy myStack --profile myProfile --require-approval never
You should pass the keys and other variables into the container and set AWS_ environment variables instead. To name a few:
AWS_SECRET_ACCESS_KEY
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
see here:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
Saving and copying your access/secret keys into the container is a very bad practice.