I'm trying to create an AWS Lambda function that generates thumbnails of my uploaded images.
My script runs fine locally. I followed this tutorial to deploy my function, but I have a problem with the Pillow library: when I test the function I see the following log:
I found this post with the same issue, but in my case I can't run commands on the machine.
You must include libjpeg.so in your Lambda package, but it will also require some tweaking with the patchelf utility. Assuming that you prepare the Lambda package via "pip install module-name -t" (rather than via virtualenv), do the following:
cd into/your/local/lambda/package/dir
cp -L $(ldd PIL/_imaging.so|grep libjpeg|awk '{print $3}') PIL/
patchelf --set-rpath PIL PIL/_imaging.so
# zip, deploy and test the package
This script works for Pillow version 3.2.0.
Regarding patchelf: under Ubuntu it can be 'apt install'ed, but under other Linuxes it may need to be built from source.
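If you do have to build patchelf from source, a rough sketch (assuming the upstream NixOS/patchelf repository and its standard autotools build) looks like this:
git clone https://github.com/NixOS/patchelf.git
cd patchelf
./bootstrap.sh   # generates the configure script
./configure
make
sudo make install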
The problem here is that Pillow uses native libraries that must be built for the exact correct environment.
I solved this by installing my requirements in a Docker container that closely replicates the AWS Lambda environment, lambci/lambda. I used the build-python3.8 variant.
I installed my requirements there and zipped up the entire contents of the /var/lang/lib/python3.8/site-packages/ directory along with my Lambda function file.
I tried this with a standard Amazon Linux Docker image and it didn't work. Only the lambci/lambda image worked for me.
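As a rough sketch of that kind of build (here using pip's -t flag to install into a local folder rather than zipping site-packages directly; requirements.txt and lambda_function.py are placeholder names for your own files):
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.8 \
    pip install -r requirements.txt -t package/
# zip the installed packages together with the handler
cd package && zip -r9 ../function.zip . && cd ..
zip -g function.zip lambda_function.py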
Recently I have been using AWS EFS for Python libraries and packages instead of Lambda layers (because of the well-known limitations of Lambda layers).
I have integrated EFS with the Lambda function and everything is in order. I am using camelot for parsing tables, I have installed every library I need (such as camelot, fitz and ...) on EFS, and I can use the installed libraries without problems. The problem I have is with Ghostscript. As you know, when you want to use camelot with flavor='lattice', the ghostscript package is needed. Unfortunately, when I use the custom AWS layer for Ghostscript (in my case: arn:aws:lambda:eu-west-3:764866452798:layer:ghostscript:9)
it gives back the error:
OSError: Ghostscript is not installed. You can install it using the instructions here: https://camelot-py.readthedocs.io/en/master/user/install-deps.html
my runtime on lambda is : python 3.8
Is there any way I can use a Ghostscript layer on Lambda (besides the ARN I shared), or any way to install the Ghostscript package on EFS so it can be used from Lambda?
Thanks in advance for your time.
I'm using AWS Lambda, which involves creating an archive of my node.js script, including the node_modules folder and uploading that to their infrastructure to run.
This works fine, except when it comes to node modules with native bindings (using node-gyp). Because the binding was compiled and the project archived on my local computer (OS X), it is not compatible with AWS's (Amazon Linux) servers.
How can I cross-compile/install a node module (specifically, node-sqlite3) so when I upload it to another server arch it runs?
While not really a solution to your problem, a very easy workaround could be to simply compile the native addons on a Linux machine.
For your particular situation, I would use Vagrant. Vagrant can create virtual machines and configure them within seconds.
Find an OS image that resembles Amazon's Linux distro (Fedora, CentOS, others that use yum as package manager - see Wiki)
Use a simple configuration script that, when run by Vagrant on machine startup, will run npm install (optionally it might also remove the node_modules folder first to ensure a clean installation)
For extra comfort, the script can also create the zip file for deployment
Once the installation finishes, the script will shutdown the VM to avoid unnecessary consumption of system resources
Deploy!
It might require some tuning if the linked libraries are not in the same place on the target machine, but generally this seems to me like the best and quickest solution.
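A minimal sketch of that workflow (the box name bento/centos-7, the zip name and the handler file are only illustrative assumptions; /vagrant is Vagrant's default synced project folder, and the box is assumed to have Node.js, npm and zip provisioned):
vagrant init bento/centos-7
vagrant up
# build the native modules and package everything inside the VM
vagrant ssh -c 'cd /vagrant && rm -rf node_modules && npm install --production && zip -r lambda.zip index.js node_modules'
vagrant halt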
While installing the app using Vagrant might be sufficient in some cases, I have found it necessary to build the app on Linux which is as close to Lambda's Amazon Linux AMI as possible.
You can read the original answer here: https://stackoverflow.com/a/34019739/303184
Steps to make it work:
Spawn a new EC2 instance. Make sure it is based on exactly the same image as your AWS Lambda runtime. You can review the Lambda environment details here: http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html. In our case, it was the Amazon Linux AMI called amzn-ami-hvm-2015.03.0.x86_64-gp2.
Install nvm and use it to install the same version of Node.js as on the AWS Lambda. At the time of writing this, it was v0.10.36. You can refer to http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html again to find out.
You will probably need to install git & the g++ compiler on the EC2 instance. You can do this by running
sudo yum install git gcc-c++
Finally, clone your app to your new EC2 instance and install your app's dependencies:
nvm use 0.10.36
npm install --production
You can then easily download the node_modules using scp or such.
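For example (key, hostname and paths are placeholders for your own values):
scp -i ~/.ssh/my-key.pem -r ec2-user@<ec2-public-dns>:/home/ec2-user/my-app/node_modules ./node_modules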
Along the same lines as Robert's answer: when I had to work in a different OS from my Mac, I used VM software such as Oracle's free virtualizer VirtualBox to get Linux on my Mac at no cost. Or sign up for a new AWS account; you get a micro instance free for a year. Use that to get your Linux box and do whatever you need there.
AWS has a page describing how to deal with native NPM modules: https://aws.amazon.com/blogs/compute/nodejs-packages-in-lambda/
This is a continuation of How do you install phantomjs on AWS lambda? I've figured out how to get phantomjs running on an AWS Lambda, but when I use it to generate PDFs (using the html-pdf Node.js library), the content is missing text. If I create a Docker container using FROM node:10.16.0-jessie, the PDFs render fine. If I create a Docker container using FROM amazonlinux:2.0.20190508 (which I think is similar to the AWS Lambda container), the text is missing from my PDFs.
I've fixed this problem in amazonlinux:2.0.20190508 by running yum install fontconfig. But I don't know how to do the equivalent of yum install fontconfig inside a real Lambda. If you look at the link above, you'll see that an answer there attempts to provide that information, but for whatever reason it still doesn't work correctly. I believe the reason is that there's still a missing step on how to get the fontconfig install properly extracted from the amazonlinux:2.0.20190508 container.
In summary, here is my question: After I run yum install fontconfig in amazonlinux:2.0.20190508, how do I extract it from the container and package it up so that an AWS Lambda can use it?
By the way, I'm sure there are other answers that seem to be answering this question, but the AWS lambda built-in dependencies change so frequently, none of those answers work anymore.
In my case, I did this:
Create a Dockerfile with the following content:
FROM amazonlinux:2.0.20190508
RUN yum -y install fontconfig freetype
and build it as example:latest
Run Docker and mount a folder:
docker run -v D:\dockerFiles:/mnt --rm -it example:latest
Find the libraries inside the container under /lib64:
libbz2.so.1 libexpat.so.1 libfontconfig.so.1 libfreetype.so.6 libpng15.so.15
Copy them to /mnt/lib. Take the lib folder from D:\dockerFiles and zip it.
Create an AWS Lambda layer from the zip and add it to your Lambda function.
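A sketch of the copy-and-package step (paths mirror the example above; the layer name is only an illustration, and the libraries should be picked up because layers are extracted under /opt and /opt/lib is on the library path):
# inside the running container
mkdir -p /mnt/lib
cp /lib64/libbz2.so.1 /lib64/libexpat.so.1 /lib64/libfontconfig.so.1 /lib64/libfreetype.so.6 /lib64/libpng15.so.15 /mnt/lib/
# back on the host: zip the lib folder and publish it as a layer
zip -r fontconfig-layer.zip lib
aws lambda publish-layer-version --layer-name fontconfig --zip-file fileb://fontconfig-layer.zip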
I am learning AWS SageMaker, which is supposed to be a serverless compute environment for machine learning. In this type of serverless compute environment, who is supposed to ensure software package consistency and update the versions?
For example, I ran the demo program that comes with SageMaker, deepar_synthetic. Its second cell executes the following: !conda install -y s3fs
However, I got the following warning message:
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.4.10
latest version: 4.5.4
Please update conda by running
$ conda update -n base conda
Since it is serverless compute, am I still supposed to update the software packages myself?
Another example is as follows. I wrote a few simple lines to find out the package versions in Jupyter notebook:
import platform
import tensorflow as tf
print(platform.python_version())
print(tf.__version__)
However, I got the following warning messages:
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
The prints still worked and I got the results shown below:
3.6.4
1.4.0
I am wondering what I have to do to make the packages consistent so that I don't get the warning messages. Thanks.
Today, SageMaker Notebook Instances are managed EC2 instances, but users still have full control over the Notebook Instance as root. You have full capabilities to install missing libraries through the Jupyter terminal.
To access a terminal, open your Notebook Instance to the home page and click the drop-down on the top right: “New” -> “Terminal”.
Note: By default, conda installs to the root environment.
You can follow the instructions at https://conda.io/docs/user-guide/tasks/manage-environments.html on how to install libraries in a particular conda environment.
In general you will need the following commands:
conda env list
which lists all of your conda environments
source activate <conda environment name>
e.g. source activate python3
conda list | grep <package>
e.g. conda list | grep numpy
which lists the currently installed version of the package
pip install numpy
Or
conda install numpy
Note: Periodically the SageMaker team releases new versions of libraries onto the Notebook Instances. To get the new libraries, you can stop and start your Notebook Instance.
If you have recommendations on libraries you would like to see by default, you can create a forum post under https://forums.aws.amazon.com/forum.jspa?forumID=285 . Alternatively, you can bootstrap your Notebook Instances with Lifecycle Configurations to install custom libraries. More details here: https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateNotebookInstanceLifecycleConfig.html
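As a rough illustration of that last option, an on-start Lifecycle Configuration script might look like the sketch below (the environment name tensorflow_p36 and the s3fs package are just examples taken from this question; adapt them to your case):
#!/bin/bash
set -e
# run as the notebook user so the package lands in the chosen conda environment
sudo -u ec2-user -i <<'EOF'
source activate tensorflow_p36
pip install --upgrade s3fs
source deactivate
EOF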
I am creating a simple AWS Lambda function using the M2Crypto library. I followed the steps for creating a deployment package from here. The Lambda function works perfectly on an EC2 Linux instance (AMI).
This is my Function definition:
CloudOAuth.py
from M2Crypto import BIO, RSA, EVP
def verify(event, context):
    pem = "-----BEGIN PUBLIC KEY-----\n{0}\n-----END PUBLIC KEY-----".format("hello")
    bio = BIO.MemoryBuffer(str.encode(pem))
    print(bio)
    return
Deployment Package structure:
When I run the Lambda, I get the issue below. I also tried including libcrypto.so.10 from the /lib64 directory, but it didn't help.
Issue when running Lambda
/var/task/M2Crypto/_m2crypto.so: symbol sk_deep_copy, version libcrypto.so.10 not defined in file libcrypto.so.10 with link time reference
Python: 2.7
M2Crypto: 0.27.0
I would guess that M2Crypto was built with a different version of OpenSSL than the one on Lambda. See the relevant code. If not (the upstream maintainer speaking here), please file a bug at https://gitlab.com/m2crypto/m2crypto/issues
I just want to add some more details to mcepl's answer. The most important point is that the OpenSSL version on AWS Lambda and the one in the environment where you build your M2Crypto library (in my case EC2) must match.
To check the OpenSSL version on Lambda, print it from your handler:
import ssl
print(ssl.OPENSSL_VERSION)
To check the OpenSSL version in your build environment, use:
$ openssl version
Once they match, it works.
Don't hesitate to downgrade or upgrade OpenSSL on your build environment to match the Lambda environment. I had to downgrade OpenSSL on EC2 to match the Lambda runtime environment.
sudo yum -y downgrade openssl-devel-1.0.1k openssl-1.0.1k
Hope it will help anyone trying to use M2Crypto :)
Copying my answer from a similar question here:
AWS Lambda runs code on an old version of Amazon Linux (amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2), as mentioned in the official documentation:
https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
So to run code that depends on shared libraries, it needs to be compiled in the same environment so it can link correctly.
What I usually do in such cases is create a virtualenv inside a Docker container. The virtualenv can then be packaged with the Lambda code.
Please note that if you need to install anything using yum (in the Docker container), you must use the same release server as the Amazon Linux version:
yum --releasever=2017.03 install ...
The virtualenv can be built on an EC2 instance as well instead of a Docker container (though I find the Docker method easier). Just make sure that the AMI used for the EC2 instance is the same as the one used by Lambda.
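A rough sketch of the Docker approach (the image tag, package names and paths are assumptions to adapt; the point is only to run pip inside an Amazon-Linux-like container with the matching release server):
docker run --rm -v "$PWD":/app -w /app amazonlinux:2017.03 /bin/bash -c '
  yum --releasever=2017.03 -y install python27-pip gcc   # package names may differ; the pip binary may be pip-2.7
  pip install virtualenv
  virtualenv env
  env/bin/pip install -r requirements.txt
'
# then zip the contents of env/lib/python2.7/site-packages together with your handler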