yarn on AWS Lambda?

Did anyone manage to get yarn installed and working on AWS Lambda?
(I'm trying to create a function that can build small React projects)
Currently I have yarn installed and working in a Docker image (based on public.ecr.aws/lambda/provided:al2), but when I run it in the Lambda environment I get permission errors because yarn tries to write to the read-only file system.
Dockerfile snippet:
FROM public.ecr.aws/lambda/provided:al2
...
# Update yum
RUN yum -y update
# Install NodeJS
RUN curl -fsSL https://rpm.nodesource.com/setup_16.x \
| bash - && yum -y install nodejs
# Install Yarn
RUN curl -fsSL https://dl.yarnpkg.com/rpm/yarn.repo \
| tee /etc/yum.repos.d/yarn.repo && \
rpm --import https://dl.yarnpkg.com/rpm/pubkey.gpg && \
yum -y install yarn
# Copy Binaries
RUN cp -r \
/usr/share/yarn/bin/* \
/usr/bin/corepack /usr/bin/node /usr/bin/npm /usr/bin/npx \
/opt/bin/
# Copy Libraries
RUN cp -r \
/usr/share/yarn/lib/* \
/usr/lib/node_modules \
/opt/lib/
# Configure NPM & Yarn
RUN npm config set prefix /tmp/npm/
RUN yarn config set prefix /tmp/yarn/
...
Errors:
>yarn install
warning Cannot find a suitable global folder. Tried these: "/usr/local, /home/sbx_user1051/.yarn"
ovl: Error while doing RPMdb copy-up:
[Errno 30] Read-only file system: '/var/lib/rpm/.dbenv.lock'
Could not set cachedir: [Errno 30] Read-only file system: '/var/tmp/yum-sbx_user1051-eC5Oqk'
Error making cache directory: /var/cache/yum/x86_64/2/amzn2-core error was: [Errno 30] Read-only file system: '/var/cache/yum'
error Command failed with exit code 1.
*** edit ***
With the global flag:
>yarn --global-folder /tmp/yarn/ install
warning Skipping preferred cache folder "/home/sbx_user1051/.cache/yarn" because it is not writable.
warning Selected the next writable cache folder in the list, will be "/tmp/.yarn-cache-993".
warning Cannot find a suitable global folder. Tried these: "/usr/local, /home/sbx_user1051/.yarn"
ovl: Error while doing RPMdb copy-up:
[Errno 30] Read-only file system: '/var/lib/rpm/.dbenv.lock'
Could not set cachedir: [Errno 30] Read-only file system: '/var/tmp/yum-sbx_user1051-3FcZxd'
Error making cache directory: /var/cache/yum/x86_64/2/amzn2-core error was: [Errno 30] Read-only file system: '/var/cache/yum'
error Command failed with exit code 1.
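For what it's worth, here is a minimal sketch of what I would try (untested, and the /tmp layout is my own assumption, not something from the question): point every writable location yarn and npm use at /tmp, since that is the only writable path inside the Lambda sandbox, and run the install from a copy of the project under /tmp.
# Sketch: redirect yarn/npm writable paths to /tmp (YARN_CACHE_FOLDER and
# npm_config_cache are standard yarn v1 / npm environment variables).
ENV HOME=/tmp \
    YARN_CACHE_FOLDER=/tmp/.yarn-cache \
    npm_config_cache=/tmp/.npm
# At run time, copy the project somewhere writable before installing:
#   cp -r /var/task/project /tmp/project
#   cd /tmp/project && yarn install --global-folder /tmp/yarn --cache-folder /tmp/.yarn-cache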

Related

pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed - host=registry-1.docker.io

My Docker container works perfectly locally with the default context and the command "docker compose up". I'm trying to run my Docker image on ECS in AWS following this guide - https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I've followed all of the steps in the guide and set the context to my new ECS context (I've tried all three options), but after I run "docker compose up" I get the above error, repeated here in full:
INFO trying next host error="pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" host=registry-1.docker.io
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I've also set the user and added all of the permissions I can think of.
I've looked everywhere and I can't find traction, please help :)
The image is hosted on both AWS ECR and Docker Hub - I've tried both.
Here is my Dockerfile:
FROM php:7.4-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
# RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www
USER $user
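One thing worth double-checking (an assumption on my part, since the docker-compose.yml isn't shown): with the ECS context, the image name in docker-compose.yml must be a fully qualified registry reference that ECS itself can pull, not a locally built tag. For example, with ECR (the account ID, region, and repository name below are placeholders):
aws ecr get-login-password --region us-east-1 \
| docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
# docker-compose.yml should then reference:
#   image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest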

Aws cli not installing in cygwin throwing error

These are the error details. I'm trying to install the awscli through pip; I tried the wget method as well.
$ pip install awscli
Collecting awscli
Using cached awscli-1.25.53-py3-none-any.whl (3.9 MB)
Collecting PyYAML<5.5,>=3.10
Using cached PyYAML-5.4.1.tar.gz (175 kB)
0 [main] python 2066 child_copy: stack write copy failed, 0xFFFF5480..0x100000000, done 4294923504, windows pid 16392, Win32 error 5
5496 [main] python 2066 dofork: child 2067 - pid 11768, exitval 0x103, errno 11
ERROR: Error [Errno 11] Resource temporarily unavailable while executing command pip subprocess to install build dependencies
Installing build dependencies ... error
ERROR: Could not install packages due to an OSError: [Errno 11] Resource temporarily unavailable
Any help will be appreciated.
From Installing or updating the latest version of the AWS CLI - AWS Command Line Interface:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
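Since the question is about Cygwin on Windows, note that those commands are the Linux installer; the same documentation page also describes a Windows MSI installer, which avoids pip and Cygwin entirely (run it from a regular Command Prompt or PowerShell):
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi
aws --version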

Unable to load shared library 'libgdiplus' or one of its dependencies while running lambda function

I am writing an AWS Lambda function in .NET Core 3.1 that uses the Aspose.Slides library. I am publishing the Lambda function as a Docker image on AWS. The function publishes successfully, but when I test it I get the following error:
Aspose.Slides.PptxReadException: The type initializer for 'Gdip' threw an exception.
---> System.TypeInitializationException: The type initializer for 'Gdip' threw an exception.
---> System.DllNotFoundException: Unable to load shared library 'libgdiplus' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibgdiplus: cannot open shared object file: No such file or directory
at System.Drawing.SafeNativeMethods.Gdip.GdiplusStartup(IntPtr& token, StartupInput& input, StartupOutput& output)
at System.Drawing.SafeNativeMethods.Gdip..cctor()
Even though I am installing the libgdiplus package in the Dockerfile, I am still getting the above error.
The Dockerfile is:
FROM public.ecr.aws/lambda/dotnet:core3.1 AS base
FROM mcr.microsoft.com/dotnet/sdk:3.1 as build
WORKDIR /src
COPY ["Lambda.PowerPointProcessor.csproj", "base/"]
RUN dotnet restore "base/Lambda.PowerPointProcessor.csproj"
WORKDIR "/src"
COPY . .
RUN apt-get update && apt-get install -y libc6-dev
RUN apt-get update && apt-get install -y libgdiplus
RUN dotnet build "Lambda.PowerPointProcessor.csproj" --configuration Release --output /app/build
FROM build AS publish
RUN dotnet publish "Lambda.PowerPointProcessor.csproj" \
--configuration Release \
--runtime linux-x64 \
--self-contained false \
--output /app/publish \
-p:PublishReadyToRun=true
FROM base AS final
WORKDIR /var/task
COPY --from=publish /app/publish .
CMD ["Lambda.PowerPointProcessor::Lambda.PowerPointProcessor.Function::FunctionHandler"]
Any help would be much appreciated.
FROM public.ecr.aws/lambda/dotnet:core3.1
WORKDIR /var/task
COPY "bin/Release/netcoreapp3.1/linux-x64" .
RUN yum install -y amazon-linux-extras
RUN amazon-linux-extras install epel -y
RUN yum install -y libgdiplus
CMD ["Lambda.PowerPointProcessor::Lambda.PowerPointProcessor.Function::FunctionHandler"]
This Dockerfile resolved the issue for me; it's working fine.
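If it helps, here is how I would build and smoke-test that image locally before pushing it, assuming the project has already been built so that bin/Release/netcoreapp3.1/linux-x64 exists (the image name and empty test payload are placeholders). The AWS Lambda base images bundle the runtime interface emulator, so the container can be invoked over HTTP:
dotnet publish --configuration Release --runtime linux-x64 --self-contained false
docker build -t pptx-processor .
docker run -p 9000:8080 pptx-processor
# In another terminal, invoke the function through the runtime interface emulator:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'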

Custom Lambda image - getting aws-lambda-ric error when trying to run Lambda

I've built a Node.js project which I need to run in a custom Docker image.
This is my Dockerfile:
FROM public.ecr.aws/lambda/nodejs:14-x86_64
# Create app directory
WORKDIR /usr/src/
RUN yum update && yum install -y git openssh-client vim python py-pip pip jq
RUN yum update && yum install -y automake autoconf libtool dpkg pkgconfig nasm libpng cmake
RUN pip install awscli
# RUN apk --purge -v del py-pip
# RUN rm /var/cache/apk/*
RUN npm install -g yarn
RUN yarn install --frozen-lockfile
# Bundle app source
COPY . .
RUN yarn build
ENTRYPOINT ["npx", "aws-lambda-ric"]
CMD [ "src/executionHandler.runner" ]
But when I call docker run <imagename>
I get the following errors:
tar: curl-7.78.0/tests/data/test1131: Cannot open: No such file or directory
tar: curl-7.78.0: Cannot mkdir: Permission denied
tar: curl-7.78.0/tests/data/test971: Cannot open: No such file or directory
tar: curl-7.78.0: Cannot mkdir: Permission denied
tar: Exiting with failure status due to previous errors
./scripts/preinstall.sh: line 28: cd: curl-7.78.0: No such file or directory
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! aws-lambda-ric@2.0.0 preinstall: `./scripts/preinstall.sh`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the aws-lambda-ric@2.0.0 preinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-10-13T09_11_31_626Z-debug.log
Install for [ 'aws-lambda-ric@latest' ] failed with code 1
The base image I use was taken from official AWS images repository.
How can I resolve this permissions issue?
This isn't really a permissions issue.
According to https://gallery.ecr.aws/lambda/nodejs, the base image you're using (public.ecr.aws/lambda/nodejs) should come with the entire runtime pre-installed. I think the issue is that your entrypoint uses npx, which is a tool for running local npm packages, while the base image has the package installed globally. If npx can't find the package in the local package.json, it tries to install it. That is both unnecessary, since aws-lambda-ric is already installed globally, and not possible in the stripped-down public image, into which you have installed some of the prerequisites like cmake and autoconf, but not libcurl.
I suspect
ENTRYPOINT ["aws-lambda-ric"]
without npx and without all the extraneous development packages will work fine with this image.
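A minimal sketch of that idea (untested; the handler path and base image are taken from the question). A slight variant is to not override ENTRYPOINT at all: the AWS base image already ships an entrypoint that starts the runtime interface client, and it defines LAMBDA_TASK_ROOT, so it is enough to copy the app there and set CMD to the handler:
FROM public.ecr.aws/lambda/nodejs:14-x86_64
COPY . ${LAMBDA_TASK_ROOT}
WORKDIR ${LAMBDA_TASK_ROOT}
RUN npm install -g yarn && yarn install --frozen-lockfile && yarn build
# No ENTRYPOINT override and no npx: the base image already runs the runtime interface client.
CMD [ "src/executionHandler.runner" ]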

AWSEBCLI No such file or directory

I'm attempting to run awsebcli from inside a Docker image based on amazonlinux.
The Dockerfile is like this:
FROM amazonlinux:latest
ENV PATH "$PATH:/root/.local/bin"
ADD . /myfiles
WORKDIR /myfiles
#copy credentials
RUN cp -R .aws ~
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python get-pip.py --user --no-warn-script-location
RUN pip install awsebcli --upgrade --user
CMD eb --version
This just returns:
ERROR: OSError - [Errno 2] No such file or directory
What have I missed?
This was a dumb issue.
I had named the Elastic Beanstalk config file just "config" (like .aws/config).
It was supposed to be called .elasticbeanstalk/config.yml.
I had the same problem, and it turned out that my configuration file .elasticbeanstalk/config.yml contained the following line:
sc: git
This makes the eb tool search for a .git directory, which was not present in my case since I only wanted to deploy a zip file.
The error message is far from clear!
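For reference, a minimal .elasticbeanstalk/config.yml sketch (the application name, region, and artifact path are placeholders); leaving out "sc: git" and pointing deploy.artifact at the zip makes the EB CLI deploy that file instead of looking for a .git repository:
global:
  application_name: my-app
  default_region: us-east-1
deploy:
  artifact: build/app.zip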