Running 32-bit windows program in AWS Lambda - amazon-web-services

I have a 32-bit Windows program that I want to run with Wine in a container on AWS Lambda.
According to this question on running 64-bit Wine in AWS Lambda and this blog post on running 32-bit binaries in Lambda, it seems possible.
So far my Dockerfile looks like this:
FROM ubuntu:20.04
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -yq \
        wget \
        software-properties-common \
        gnupg2 \
        xvfb \
        python3 \
        python3-pip
# Install the runtime interface client
RUN pip3 install \
    awslambdaric
RUN dpkg --add-architecture i386
RUN wget -nc https://dl.winehq.org/wine-builds/winehq.key
RUN apt-key add winehq.key
RUN add-apt-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ focal main'
RUN apt-get update && \
    apt-get install -y \
        winehq-stable \
        winetricks \
        winbind
RUN apt-get clean -y
RUN apt-get autoremove -y
ENV WINEDEBUG=fixme-all
ENV WINEARCH=win32
ENV DISPLAY=""
ENV WINEPREFIX="/tmp/wineprefix"
RUN winecfg
RUN winetricks msxml6
WORKDIR /lambda-poc
COPY handler.py ./
COPY hello.exe ./
COPY qemu-i386-static ./
ENTRYPOINT [ "/usr/bin/python3", "-m", "awslambdaric" ]
CMD [ "handler.lambda_handler" ]
The Lambda handler calls hello.exe via: qemu-i386-static /usr/bin/wine hello.exe
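For context, the handler is just a thin wrapper around that command. A minimal sketch of such a handler.py (the exact paths and the 60-second timeout are assumptions based on the Dockerfile above, not from the original post):

```python
import subprocess

# Assumed paths from the Dockerfile: qemu-i386-static emulates 32-bit x86,
# running 32-bit Wine, which in turn runs the Windows binary.
CMD = ["/lambda-poc/qemu-i386-static", "/usr/bin/wine", "/lambda-poc/hello.exe"]

def lambda_handler(event, context):
    # WINEPREFIX is set to /tmp/wineprefix because /tmp is the only
    # writable path inside the Lambda runtime.
    result = subprocess.run(CMD, capture_output=True, text=True, timeout=60)
    return {
        "returncode": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }
```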
This works when I run it on my computer or on an EC2 instance, but in Lambda it just hangs and the invocation times out. I tried increasing the timeout to 10 minutes, with the same result.
I also tried turning on Wine logging with WINEDEBUG="+all": on my local computer I get several megabytes of logs, but nothing at all when running in Lambda.
What is wrong?

Related

Run docker commands in AWS Lambda Function

Goal
I'm curious to know whether it's possible to run docker commands within AWS Lambda function invocations. Specifically, I'm running docker compose up -d to launch one-off ECS tasks (see this AWS article for more info). I know it's easily possible with AWS CodeBuild, but for my use case, where the workload duration is usually below 10 seconds, it would be more cost-effective to use Lambda.
AFAIK Docker-outside-of-Docker (DooD) is not an option, since there is no way to mount the host's Docker daemon socket into the Lambda function's container.
Attempts
I've tried the following Docker DinD approach below with no luck:
Lambda custom container image:
ARG FUNCTION_DIR="/function"
FROM python:buster as build-image
ARG FUNCTION_DIR
# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
    apt-get install -y \
        g++ \
        make \
        cmake \
        unzip \
        libcurl4-openssl-dev
RUN mkdir -p ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY ./* ${FUNCTION_DIR}
RUN pip install --target ${FUNCTION_DIR} -r requirements.txt
FROM python:buster
ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
RUN chmod 755 /usr/bin/aws-lambda-rie ./entrypoint.sh ./runner_install_docker.sh
RUN sh ./runner_install_docker.sh
ENTRYPOINT [ "./entrypoint.sh" ]
CMD [ "lambda_function.lambda_handler" ]
Contents of runner_install_docker.sh (the script that installs Docker):
#!/bin/bash
apt-get -y update
apt-get install -y \
    software-properties-common build-essential \
    apt-transport-https ca-certificates gnupg lsb-release curl sudo
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo chmod u+x /usr/bin/*
sudo chmod u+x /usr/local/bin/*
sudo apt-get clean
sudo rm -rf /var/lib/apt/lists/*
sudo rm -rf /tmp/*
When I run docker compose or other docker commands, I get the following error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Docker isn't available inside the AWS Lambda runtime. Even if you built it into the custom container image, the Lambda function would need to run as a privileged container for Docker-in-Docker to work, and AWS Lambda does not support that.
Specifically I'm running docker compose up -d to run one-off ECS tasks
Instead of trying to do this with the docker-compose ECS functionality, you should invoke the ECS RunTask API via one of the AWS SDKs.
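A minimal boto3 sketch of that approach (the cluster name, task definition, and subnet IDs are placeholders you'd replace with your own):

```python
def run_task_kwargs(cluster, task_definition, subnets):
    # Parameters for the ECS RunTask API; a one-off Fargate task needs a
    # network configuration with at least one subnet.
    return {
        "cluster": cluster,
        "taskDefinition": task_definition,
        "launchType": "FARGATE",
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                "assignPublicIp": "ENABLED",
            }
        },
    }

def launch_one_off_task(cluster, task_definition, subnets):
    import boto3  # AWS SDK for Python
    return boto3.client("ecs").run_task(**run_task_kwargs(cluster, task_definition, subnets))
```

This replaces the whole docker compose up -d flow: Lambda only makes the control-plane call, and ECS runs the container.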

Using Kaniko cache with Google Cloud Build for Google Cloud Kubernetes Deployments

We have been using Google Cloud Build via build triggers for our GitHub repository, which holds a C++ application that is deployed via a Google Cloud Kubernetes cluster.
Our build configuration comes from a Dockerfile located in our GitHub repository.
Everything is working as expected, but our builds take 55+ minutes. I would like to add Kaniko cache support as suggested [here], but the Google Cloud documentation only shows how to add it via a YAML file, as below:
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=XXh
How shall I achieve Kaniko builds with a Dockerfile-based trigger?
FROM --platform=amd64 ubuntu:22.10
ENV GCSFUSE_REPO gcsfuse-stretch
RUN apt-get update && apt-get install --yes --no-install-recommends \
        ca-certificates \
        curl \
        gnupg \
    && echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" \
        | tee /etc/apt/sources.list.d/gcsfuse.list \
    && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
    && apt-get update \
    && apt-get install --yes gcsfuse \
    && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
RUN \
    sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
    apt-get update && \
    apt-get -y upgrade && \
    apt-get install -y build-essential && \
    apt-get install -y gcc && \
    apt-get install -y software-properties-common && \
    apt install -y cmake && \
    apt-get install -y make && \
    apt-get install -y clang && \
    apt-get install -y mesa-common-dev && \
    apt-get install -y git && \
    apt-get install -y xorg-dev && \
    apt-get install -y nasm && \
    apt-get install -y byobu curl git htop man unzip vim wget && \
    rm -rf /var/lib/apt/lists/*
# Update and upgrade repo
RUN apt-get update -y -q && apt-get upgrade -y -q
COPY . /app
RUN cd /app
RUN ls -la
# Set environment variables.
ENV HOME /root
ENV WDIR /app
# Define working directory.
WORKDIR /app
RUN cd /app/lib/glfw && cmake -G "Unix Makefiles" && make && apt-get install libx11-dev
RUN apt-cache policy libxrandr-dev
RUN apt install libxrandr-dev
RUN cd /app/lib/ffmpeg && ./configure && make && make install
RUN cmake . && make
# Define default command.
CMD ["bash"]
Any suggestions are quite welcome.
As I mentioned in the comment, you can only add Kaniko via your cloudbuild.yaml file, as that's also the only option shown in this GitHub link, but you can add the --dockerfile argument to point at your Dockerfile's path.
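Concretely, a cloudbuild.yaml for a Dockerfile-based repository could look like this (the destination image name and cache TTL are placeholders, and --dockerfile assumes the Dockerfile sits at the repository root):

```yaml
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --dockerfile=Dockerfile
  - --cache=true
  - --cache-ttl=XXh
```

Point the build trigger at this file instead of the Dockerfile, and Kaniko will build from the same Dockerfile while reusing cached layers.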

"Timeout in polling result file" error when executing a Dataflow flex-template job

I've tried a lot of different things found online, but I'm still unable to solve the below timeout error:
2021-11-27T14:51:21.844520452Z Timeout in polling result file: gs://...
when submitting a Dataflow flex-template job. The job goes into the Queued state and, after 14 min {x} sec, moves to the Failed state with the above log message. My Dockerfile is as follows:
FROM gcr.io/dataflow-templates-base/python3-template-launcher-base
ARG WORKDIR=/dataflow/template
RUN mkdir -p ${WORKDIR}
WORKDIR ${WORKDIR}
COPY requirements.txt .
COPY test-beam.py .
# Do not include `apache-beam` in requirements.txt
ENV FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE="${WORKDIR}/requirements.txt"
ENV FLEX_TEMPLATE_PYTHON_PY_FILE="${WORKDIR}/test-beam.py"
# Setting Proxy
ENV http_proxy=http://proxy-web.{company_name}.com:80 \
    https_proxy=http://proxy-web.{company_name}.com:80 \
    no_proxy=127.0.0.1,localhost,.{company_name}.com,{company_name}.com,.googleapis.com
# Company Cert
RUN apt-get update && apt-get install -y curl \
    && curl http://{company_name}.com/pki/{company_name}%20Issuing%20CA.pem -o - | tr -d '\r' > /usr/local/share/ca-certificates/{company_name}.crt \
    && curl http://{company_name}.com/pki/{company_name}%20Root%20CA.pem -o - | tr -d '\r' > /usr/local/share/ca-certificates/{company_name}-root.crt \
    && update-ca-certificates \
    && apt-get remove -y --purge curl \
    && apt-get autoremove -y \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
# Set pip config to point to Company Cert
RUN pip config set global.cert /etc/ssl/certs/ca-certificates.crt
# Install apache-beam and other dependencies to launch the pipeline
RUN pip install --no-cache-dir --upgrade pip \
    && pip install --no-cache-dir apache-beam[gcp]==2.32.0 \
    && pip install --no-cache-dir -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE \
    # Download the requirements to speed up launching the Dataflow job.
    && pip download --no-cache-dir --dest /tmp/dataflow-requirements-cache -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE
# Since we already downloaded all the dependencies, there's no need to rebuild everything.
ENV PIP_NO_DEPS=True
ENV http_proxy= \
    https_proxy= \
    no_proxy=
ENTRYPOINT ["/opt/google/dataflow/python_template_launcher"]
And requirements.txt:
numpy
setuptools
scipy
wavefile
I know the Python script used above, test-beam.py, works, as it executes successfully locally using a DirectRunner.
I have gone through many SO posts and GCP's own troubleshooting guide here aimed at this error, but to no avail. As you can see from my Dockerfile, I have done the following in it:
Installing apache-beam[gcp] separately and not including it in my requirements.txt file.
Pre-downloading all dependencies using pip download --no-cache-dir --dest /tmp/dataflow-requirements-cache -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE.
Setting ENTRYPOINT ["/opt/google/dataflow/python_template_launcher"] explicitly as it seems this is not set in the base image gcr.io/dataflow-templates-base/python3-template-launcher-base as found by executing docker inspect on it (am I correct about this?).
Unsetting company proxy at the end as it seems to be the cause of timeout issues seen in job logs from previous runs.
What am I missing? How can I fix this issue?

Docker, WSL2 & vs code - bad git/ssh path

I set up my WSL2, Docker, and VS Code environment this weekend.
I am running into an issue when attempting to use git:
root@bb7f765df0d6:/var/www/html# git clone git@github.com:hsimah/my-repo.git
Cloning into 'my-repo'...
fatal: cannot run C:/Windows/System32/OpenSSH/ssh.exe: No such file or directory
fatal: unable to fork
Dockerfile:
FROM wordpress:latest
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
&& apt-get -y install git \
&& apt-get autoremove -y \
&& apt-get clean -y \
&& rm -rf /var/lib/apt/lists/*
ENV DEBIAN_FRONTEND=dialog
If I remove the git install from the Dockerfile and instead run apt-get update && apt-get install git inside the container, there is no issue. In that case, git uses my host SSH keys (loaded via the ssh-agent service on Windows) and can pull and push both from the terminal and from VS Code itself.
There are no errors or warnings in the log.
Okay, I posted a few minutes too soon.
I checked the git config: VS Code was pulling my Windows config into the workspace; it's a known issue.
The unblocking answer is to change this setting to your container's ssh location (/usr/bin/ssh):
core.sshcommand=C:/Windows/System32/OpenSSH/ssh.exe
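Applied from inside the container, that looks like this (assuming the container's ssh binary is at /usr/bin/ssh):

```shell
# Override the Windows ssh path inherited from the host's git config.
git config --global core.sshCommand /usr/bin/ssh

# Print the setting back to confirm it took effect.
git config --global core.sshCommand
```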

macOS, Dockerfile mounting a folder cannot change locale

I'm trying to mount a folder with my Dockerfile instead of copying it at build time. We use git for development, and I don't want to rebuild the image every time I make a change for testing.
My Dockerfile is now as follows:
#set base image
FROM centos:centos7.2.1511
MAINTAINER Alex <alex@app.com>
#install yum dependencies
RUN yum -y update \
    && yum -y install yum-plugin-ovl \
    && yum -y install epel-release \
    && yum -y install net-tools \
    && yum -y install gcc \
    && yum -y install python-devel \
    && yum -y install git \
    && yum -y install python-pip \
    && yum -y install openldap-devel \
    && yum -y install gcc gcc-c++ kernel-devel \
    && yum -y install libxslt-devel libffi-devel openssl-devel \
    && yum -y install libevent-devel \
    && yum -y install openldap-devel \
    && yum -y install net-snmp-devel \
    && yum -y install mysql-devel \
    && yum -y install python-dateutil \
    && yum -y install python-pip \
    && pip install --upgrade pip
# Create the DIR
#RUN mkdir -p /var/www/itapp
# Set the working directory
#WORKDIR /var/www/itapp
# Copy the app directory contents into the container
#ADD . /var/www/itapp
# Install any needed packages specified in requirements.txt
#RUN pip install -r requirements.txt
# Make port available to the world outside this container
EXPOSE 8000
# Define environment variable
ENV NAME itapp
# Run server when the container launches
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
I've commented out the creation and copying of the itapp Django files, as I want to mount them instead (do I need to rebuild the image first?).
Then my command for mounting is:
docker run -it -v /Users/alex/itapp:/var/www/itapp itapp bash
I now get an error:
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
and the dev instance does not run.
How would I also set the working directory to the volume that I'm mounting at runtime?
Try this command; the -w WORKDIR option of docker run sets the working directory inside the container.
docker run -d -v /Users/alex/itapp:/var/www/itapp -w /var/www/itapp itapp
Also, you'll need to map your container port to a host port so you can reach your app, for example from a browser.
To do this, use the following command.
docker run -d -p 8000:8000 -v /Users/alex/itapp:/var/www/itapp -w /var/www/itapp itapp
After this, your app should be running at localhost:8000.
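As for the setlocale warnings in the question: they are unrelated to the mount. The host terminal forwards LANG=en_US.UTF-8 into the container, but the centos:7.2 image doesn't ship that locale. A possible fix (a sketch, assuming the stock glibc-common package provides localedef) is to generate the locale in the Dockerfile:

```
# Generate the en_US.UTF-8 locale so bash stops warning on startup.
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
ENV LANG en_US.UTF-8
```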