Include custom fonts in AWS Lambda

Does anyone know of something like ebextensions[2] on EC2, but for AWS Lambda?
The goal is to install custom fonts in the AWS Lambda execution environment.
There are many ways to provide fonts to libraries and tools, but the easiest way would be to install them at the OS level.
Also asked on the AWS forum:
https://forums.aws.amazon.com/thread.jspa?messageID=807139#807139
[2] How do I install specific fonts on my AWS EC2 instance?

Here's what I just got to work for custom fonts on AWS Lambda with pandoc/xelatex.
I created a fonts directory in my project and placed all of my fonts there. Also in that directory I created a fonts.conf file that looks like this:
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <dir>/var/task/fonts/</dir>
  <cachedir>/tmp/fonts-cache/</cachedir>
  <config></config>
</fontconfig>
And then in my (node.js based) handler function, before shelling out to call pandoc, I set an env var to tell fontconfig where to find the fonts:
process.env.FONTCONFIG_PATH='/var/task/fonts'
After doing that I can refer to a font, like Bitter, in my template by name (just Bitter) and then pandoc/xelatex/fontconfig/whatever knows which version of the font to use (like Bitter-Bold.otf vs Bitter-Italic.otf) based on the styling that any bit of text is supposed to have.
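For illustration, here is a minimal sketch of the kind of pandoc call this enables (the file names are my assumptions, not from the original answer):
# assumes pandoc and xelatex are bundled with the function; /tmp is Lambda's only writable path
export FONTCONFIG_PATH=/var/task/fonts
pandoc /tmp/input.md --pdf-engine=xelatex -V mainfont="Bitter" -o /tmp/output.pdf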
I figured this out based on the tips in this project for getting RSVG to work with custom fonts on Lambda: https://github.com/claudiajs/rsvg-convert-aws-lambda-binary/blob/master/README.md#using-custom-fonts

A lot of the answers on the subject of using fonts on Lambda were a bit incomplete.
My scenario required using a custom font in conjunction with ImageMagick. I checked out this branch with ImageMagick and FreeType support and worked through the README. The key for my use case was that the Lambda, or the Lambda layer used by the function, needed FreeType support to access my fonts. I'm using a TTF.
After deploying the lambda layer, in my Lambda function's directory I did the following:
At the root of my lambda, create a fonts directory.
In the fonts directory, add the TTF of your font.
I'm using the Serverless Framework, so once deployed this directory will be located at /var/task/fonts.
Also in the fonts directory, include the following fonts.conf:
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <dir>/var/task/fonts/</dir>
  <cachedir>/tmp/fonts-cache/</cachedir>
  <config></config>
</fontconfig>
Finally, in your serverless.yml add the following directory so that your fonts and fonts.conf will be included in the lambda.
package:
  include:
    - fonts/**
Because FreeType is now available via the Lambda layer, any fonts in the fonts directory will be accessible. I could also have downloaded the needed fonts dynamically, but I chose to include them in my Lambda function package.
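As a quick sanity check (my addition, not part of the original answer), you can confirm fontconfig sees the bundled fonts by shelling out to fc-list:
export FONTCONFIG_PATH=/var/task/fonts
fc-list   # should list the TTFs from /var/task/fonts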

The official AWS response on the forum post is still correct.
Currently, it is not possible to customize Lambda environment. If you want additional packages you can build on Amazon Linux and put them into the zip file you upload.
That's the extent to which you can "install" anything in the Lambda environment.
the easiest way would be to install them at the OS level.
Arguably so, but that's simply not how Lambda works.

Package your assets along with the code, or have the function fetch them from S3. This is how we generate PDFs with custom fonts on Lambda.
Tools like the Serverless Framework will do this for you automatically (uploading code + dependency assets).
When you deploy, it creates a zip file with your code, dependencies, and anything else you have in the folder, then automatically uploads it to S3 and deploys it with the help of CloudFormation.
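For the fetch-from-S3 variant, a minimal sketch (the bucket and prefix are placeholders; /tmp is the only writable path in Lambda):
aws s3 cp s3://my-bucket/fonts/ /tmp/fonts/ --recursive
export FONTCONFIG_PATH=/tmp/fonts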

I followed the instructions on this gist and it worked like a charm (although for me, in the "Configuring fontconfig" and "Installing and caching fonts" sections, /tmp/…/fontconfig really seemed to mean /var/task/<MyLambda>/headless-chrome/fontconfig).

Lambda extracts the layer contents into the /opt directory when setting up the execution environment for the function, so fonts.conf should point at /opt/.fonts, and the TTF fonts should be placed there.
In addition, Lambda needs access to the fontconfig libraries, such as libfontconfig.so, libexpat.so and libfreetype.so. These files can be found in phantom-lambda-fontconfig-pack.
This seems to work in a Node.js Lambda; with a Python Lambda it still did not work for me.
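For reference, a hypothetical way to assemble such a layer zip (the file names are placeholders); layers are extracted under /opt, and /opt/lib is on the runtime's library search path:
mkdir -p layer/.fonts layer/lib
cp MyFont.ttf layer/.fonts/
cp libfontconfig.so libexpat.so libfreetype.so layer/lib/
(cd layer && zip -r ../fonts-layer.zip .)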
In the end, I created a Lambda container image based on the Dockerfile in rchauhan9/image-scraper-lambda-container and added the following code after the "RUN apk add chromium..." line:
ENV NOTO_TC="https://github.com/googlefonts/noto-cjk/raw/main/Sans/Variable/OTF/NotoSansCJKhk-VF.otf"
ENV NOTO_SC="https://github.com/googlefonts/noto-cjk/raw/main/Sans/Variable/OTF/NotoSansCJKsc-VF.otf"
ENV NOTO_JP="https://github.com/googlefonts/noto-cjk/raw/main/Sans/Variable/OTF/NotoSansCJKjp-VF.otf"
ENV NOTO_KR="https://github.com/googlefonts/noto-cjk/raw/main/Sans/Variable/OTF/NotoSansCJKkr-VF.otf"
RUN apk --no-cache add \
      fontconfig \
      wget \
    && mkdir -p /usr/share/fonts \
    && wget -q "${NOTO_TC}" -P /usr/share/fonts \
    && wget -q "${NOTO_SC}" -P /usr/share/fonts \
    && wget -q "${NOTO_JP}" -P /usr/share/fonts \
    && wget -q "${NOTO_KR}" -P /usr/share/fonts \
    && fc-cache -fv
ENV LANG="C.UTF-8"
It works with CJK fonts. Reference: Section "Building a Custom Image for Python" in New for AWS Lambda – Container Image Support
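For completeness, building and pushing such an image follows the usual Lambda container workflow (the account ID, region, and repository name below are placeholders):
docker build -t image-scraper .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag image-scraper:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/image-scraper:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/image-scraper:latest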

Related

Can't import packages from layers in AWS Lambda

I know this question exists in several places, but even by following different guides/answers I still can't get it to work, and I have no idea what I'm doing wrong. I have a Python Lambda function on AWS where I need to do an import requests. This is my approach so far.
Create a .zip file of the packages. Locally I do:
pip3 install requests -t ./
zip -r okta_layer.zip .
Upload the .zip file to a Lambda layer:
I go to the AWS console, go to Lambda layers, and create a new layer from this .zip file.
I go to my Lambda Python function and add the layer to the function directly from the console. I can now see the layer under "Layers" for the Lambda function. But when I run the function, it still complains:
Unable to import module 'lambda_function': No module named 'requests'
I solved the problem. Apparently the .zip file needs a "python" folder inside, and all the packages have to live inside that "python" folder.
I had put all the packages directly at the root of the zip, without a "python" folder.
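Put together, the corrected packaging sequence looks like this:
mkdir python
pip3 install requests -t python/
zip -r okta_layer.zip python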

How do I download files within a SageMaker notebook instance programmatically?

We have a notebook instance within Sagemaker which contains many Jupyter Python scripts. I'd like to write a program which downloads these various scripts each day (i.e. so that I could back them up). Unfortunately I don't see any reference to this in the AWS CLI API.
Is this achievable?
It's not exactly what you want, but it looks like version control can fit your needs. You can use GitHub (if you already use it) or CodeCommit (free private repos). Details, plus additional approaches such as syncing a target directory with an S3 bucket: https://aws.amazon.com/blogs/machine-learning/how-to-use-common-workflows-on-amazon-sagemaker-notebook-instances/
Semi-automatic way:
!conda install -y -c conda-forge zip
!zip -r -X folder.zip folder-to-zip
Then download that zip file.
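For the S3 sync approach mentioned above, a minimal sketch from a notebook cell (the bucket name is a placeholder):
!aws s3 sync . s3://my-notebook-backup/notebooks/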

AWS Elastic Beanstalk - .ebextensions

My app currently uses a folder called "Documents" that is located in the root of the app. This is where it stores supporting docs, temporary files, uploaded files etc. I'm trying to move my app from Azure to Beanstalk and I don't know how to give permissions to this folder and sub-folders. I think it's supposed to be done using .ebextensions but I don't know how to format the config file. Can someone suggest how this config file should look? This is an ASP.NET app running on Windows/IIS.
Unfortunately, you cannot use .ebextensions to set permissions to files/folders within your deployment directory.
If you look at the event hooks for an elastic beanstalk deployment:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-windows-ec2.html#windows-container-commands
You'll find that commands run before the EC2 app and web server are set up, and
container_commands run after the EC2 app and web server are set up, but before your application version is deployed.
The solution is to use a wpp.targets file to set the necessary ACLs.
The following SO post is the most useful:
Can Web Deploy's setAcl provider be used on a sub-directory?
Given below is a sample .ebextensions config file that creates a directory and a file, modifies the permissions, and adds some content to the file:
====== .ebextensions/custom_directory.config ======
commands:
  01_create_directory:
    command: mkdir C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory
  02_set_permissions:
    command: cacls C:\inetpub\AspNetCoreWebApps\backgroundtasks\mydirectory /t /e /g username:W
files:
  "C:/inetpub/AspNetCoreWebApps/backgroundtasks/mydirectory/mytestfile.txt":
    content: |
      This is my Sample file created from ebextensions
.ebextensions files go into the root of the application source code, in a directory called .ebextensions. For more information on how to use ebextensions, please go through the documentation here.
Place a file 01_fix_permissions.config inside .ebextensions folder.
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo chown -R ec2-user:ec2-user tmp/
Following that you can set your folder permissions as you want.
See this answer on Serverfault.
There are platform hooks that you can use to run scripts at various points during deployment, which can get you around the shortcomings of the .ebextensions Commands and Platform Commands that Napoli describes.
There seems to be some debate on whether or not this setup is officially supported, but judging by comments made on the AWS GitHub, it does not seem to be explicitly prohibited.
I can see where Napoli's answer could be the more standard MS way of doing things, but wpp.targets looks like hot trash IMO.
The general scheme of that answer is to use Commands/Platform commands to copy a script file into the appropriate platform hook directory (/opt/elasticbeanstalk/hooks or C:\Program Files\Amazon\ElasticBeanstalk\hooks\ ) to run at your desired stage of deployment.
I think its worth noting that differences exist between platforms and versions such as Amazon Linux 1 and Linux 2.
I hope this helps someone. It took me a day to gather that info and what's on this page and pick what I liked best.
Edit 11/4 - I would like to note that I saw some inconsistencies with the files .ebextensions directive when trying to place scripts directly into the platform hook directories during repeated deployments. Specifically, the files directive failed to correctly move the backup copies named .bak/.bak1/etc. I would suggest using a container command to copy (with overwriting) from another directory into the desired hook directory to overcome this issue.
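A minimal sketch of that container-command scheme on Linux (the script name and hook stage are illustrative):
container_commands:
  01_install_hook:
    command: cp -f scripts/49_change_permissions.sh /opt/elasticbeanstalk/hooks/appdeploy/pre/ && chmod +x /opt/elasticbeanstalk/hooks/appdeploy/pre/49_change_permissions.sh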

AWS Lambda Error: Unzipped size must be smaller than 262144000 bytes

I am developing a Lambda function which uses the ResumeParser library, written in Python 2.7. When I deploy the function, including the library, to AWS, it throws the following error:
Unzipped size must be smaller than 262144000 bytes
Perhaps you did not exclude development packages, which made your file grow that big.
In my case (for NodeJS), I was missing the following in my serverless.yml:
package:
  exclude:
    - node_modules/**
    - venv/**
See if there are similar for Python or your case.
This is a hard limit which cannot be changed:
AWS Lambda Limit Errors
Functions that exceed any of the limits listed in the previous limits tables will fail with an exceeded limits exception. These limits are fixed and cannot be changed at this time. For example, if you receive the exception CodeStorageExceededException or an error message similar to "Code storage limit exceeded" from AWS Lambda, you need to reduce the size of your code storage.
You need to reduce the size of your package. If you have large binaries, place them in S3 and download them on bootstrap. Likewise for dependencies: you can pip install or easy_install them from an S3 location, which will be faster than pulling from the PyPI repos.
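As an illustration of the download-on-bootstrap idea (the bucket, object, and wheel names are hypothetical, and the wheel URL must be publicly readable or presigned):
aws s3 cp s3://my-bucket/big-binary /tmp/big-binary
pip install https://my-bucket.s3.amazonaws.com/mypackage-1.0-py3-none-any.whl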
The best solution to this problem is to deploy your Lambda function using a Docker container that you've built and pushed to AWS ECR. Lambda container images have a limit of 10 GB.
Here's an example using Python-flavored AWS CDK:
from aws_cdk import aws_lambda as _lambda

self.lambda_from_image = _lambda.DockerImageFunction(
    scope=self,
    id="LambdaImageExample",
    function_name="LambdaImageExample",
    code=_lambda.DockerImageCode.from_image_asset(
        directory="lambda_funcs/LambdaImageExample"
    ),
)
An example Dockerfile contained in the directory lambda_funcs/LambdaImageExample alongside my lambda_func.py and requirements.txt:
FROM amazon/aws-lambda-python:latest
LABEL maintainer="Wesley Cheek"
# Amazon Linux names the development package python3-devel (not python3-dev)
RUN yum update -y && \
    yum install -y python3 python3-devel python3-pip gcc && \
    rm -Rf /var/cache/yum
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY lambda_func.py ./
CMD ["lambda_func.handler"]
Run cdk deploy and the Lambda function will be automagically bundled into an image along with its dependencies specified in requirements.txt, pushed to an AWS ECR repository, and deployed.
This Medium post was my main inspiration
Edit:
(More details about this solution can be found in my Dev.to post here)
A workaround that worked for me:
Install pyminifier:
pip install pyminifier
Go to the library folder that you want to zip. In my case I wanted to zip the site-packages folder in my virtual env. So I created a site-packages-min folder at the same level where site-packages was. Run the following shell script to minify the python files and create identical structure in the site-packages-min folder. Zip and upload these files to S3.
#!/bin/bash
# Minify each .py file from site-packages into a parallel
# site-packages-min tree, falling back to a plain copy if pyminifier fails.
for f in $(find site-packages -name '*.py')
do
  ori=$f
  res=${f/site-packages/site-packages-min}
  filename=$(basename "$res")
  echo "$filename"
  path=${res%$filename}
  mkdir -p "$path"
  touch "$res"
  pyminifier --destdir="$path" "$ori" >> "$res" || cp "$ori" "$res"
done
HTH
As stated by Greg Wozniak, you may just have imported useless directories like venv and node_modules.
package.exclude is now deprecated and was removed in Serverless Framework 4; you should use package.patterns instead:
package:
  patterns:
    - '!node_modules/**'
    - '!venv/**'
In case you're using CloudFormation: in your template YAML file, make sure your CodeUri property includes only your necessary code files and does not contain things like the .aws-sam directory (which is big).

How to set up and use EC2 CLI on Mac?

I am stuck using the Amazon EC2 CLI.
I have downloaded the Command Line Tools from
http://aws.amazon.com/developertools/351.
I placed the bin and lib folder into my Amazon project folder: /Users/Invictus/EC2
I downloaded the cert-xxxx.pem and pk-xxx.pem into the same folder.
Created a .bash_profile in the same folder.
I tried to execute ec2-describe-images -o amazon after I moved to cd /Users/Invictus/EC2.
The system does not recognise the command: command not found.
If I try to execute the same command inside the bin folder, the result is the same.
My .bash_profile:
export EC2_HOME=~/.EC2
export PATH=$PATH:$EC2_HOME/bin
export EC2_PRIVATE_KEY=`ls $EC2_HOME/pk-*.pem`
export EC2_CERT=`ls $EC2_HOME/cert-*.pem`
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home/
Where did I make a mistake?
My aim is to connect to the launched instance and be able to execute commands there from my local machine.
I have Java installed.
The newer AWS unified CLI tools are much, much easier to set up. All you need is Python, which comes built in on every Mac.
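For example, a minimal setup at the time looked like this (today the bundled installer from AWS is the recommended route):
sudo pip install awscli
aws configure        # prompts for access key, secret key, region, and output format
aws ec2 describe-images --owners amazon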
Here are a few things I can think of:
Your .bash_profile should be in /Users/Invictus/, not /Users/Invictus/EC2. Move it to your home directory, log off and log back in (or restart your machine), and see if it picks up the right path.
Instead of ec2-describe-images, can you run it as "./ec2-describe-images" - does that work? If not, can you check the permissions on that script?
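For the second point, a quick way to inspect and fix the script's permissions (paths follow the layout described in the question):
ls -l /Users/Invictus/EC2/bin/ec2-describe-images
chmod +x /Users/Invictus/EC2/bin/*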