I am having trouble trying to add matplotlib as a layer to my Python 2.7 AWS Lambda function.
On the Lambda execution environment, I am trying to install the necessary libraries and create a layer as described here.
Things I've tried:
First, I pip installed matplotlib into a virtual environment and copied the contents of the site-packages under lib and lib64. When the Lambda function is executed, I get a No module named pkg_resources exception. I also tried installing with the --target option to install all dependencies into the same folder. The result was the same.
I read here that it may be due to an outdated setuptools package. When I ran pip install --upgrade setuptools and then tried to install matplotlib, I started getting the following exception:
pkg_resources.DistributionNotFound: The 'pip==9.0.3' distribution was not found and is required by the application
Finally I thought of installing matplotlib with
sudo yum install python-matplotlib
and then collect the required packages as described here. But this did not make matplotlib importable from within the python shell, so I guess it won't work as a Lambda layer.
Thanks for any help.
P.S.: At AWS re:Invent, exactly this was demoed, but there are no details from the session :/
I encountered similar issues with other modules such as crypto and my own custom modules. I discovered the problem is really a lack of good documentation.
In my case, I was zipping up all the dependencies from the target directory (using the --target option), so all the dependency directories were at the top level of the zip file. That works fine for a straight Lambda deployment. But when you want to use a layer, the layer is deployed into the /opt folder of your Lambda container, so you need to create your zip file with a top-level directory named 'python' so that your dependencies end up at /opt/python/.
mkdir python && cd python && pip install pyopenssl crypto --target . && cd .. && zip -r9 ./lambda_layer.zip python/
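Once that zip exists, publishing it as a layer is one CLI call; a minimal sketch with the AWS CLI (the layer name and runtime list are just illustrative placeholders):
aws lambda publish-layer-version --layer-name my-python-deps --zip-file fileb://lambda_layer.zip --compatible-runtimes python2.7 python3.6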
It does appear in the documentation but it is brief and VERY easy to miss. This page helped me: https://medium.com/@adhorn/getting-started-with-aws-lambda-layers-for-python-6e10b1f9a5d
Good luck!
Related
I have a python application that has flask dependency.
All I need is to create an RPM out of this application and with this RPM I should be able to install the dependencies to another machine.
Things I have tried,
Created a setup.py file,
from setuptools import setup, find_packages

setup(
    name='sample-package',
    version='1.0.0.0',
    author="Niranj Rajasekaran",
    author_email="nrajasekaran@test.com",
    package_dir={'': 'src/py'},
    namespace_packages=['main'],
    packages=find_packages('src/py/'),
    install_requires=['Flask']
)
Ran this command
python setup.py bdist_rpm
Got two RPMs in dist/, one noarch and the other src.
I tried to install the noarch RPM using this:
yum install {generated-file}.rpm
I am able to get the sample-package-1.0.0.0.egg file in site-packages, but not Flask.
Two questions,
Is my approach correct?
If so what is something that I am missing?
bdist_rpm lacks a lot of functionality and IMO is not very well maintained. E.g., pyp2rpm is much better for converting existing PyPI modules. But your module does not seem to be on PyPI, so you need to specify the requirement to bdist_rpm manually, because it cannot retrieve this information from setup.py.
Run:
python setup.py bdist_rpm --requires python-flask
This will produce an rpm file which requires the python-flask package. For more recent RHEL/Fedora it would be python3-flask.
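If you would rather not pass the flag on every build, the same requirement can be recorded in setup.cfg, which bdist_rpm reads its options from; a sketch under that assumption:
[bdist_rpm]
requires = python-flask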
I am using a Lambda function for SearchFacesByImage, and I am following this doc: https://aws.amazon.com/blogs/machine-learning/build-your-own-face-recognition-service-using-amazon-rekognition/
where for the comparison I am using this:
from PIL import Image
And I am getting this error
Unable to import module 'lambda_function': No module named PIL
Even though the documentation clearly outlines the steps to manually create the zip artifact for your Lambda function, this solution is not very scalable. I've been using a very small package called juniper to seamlessly package Python Lambda functions.
In your particular case, these are the steps you need to take:
Assuming this is your folder structure:
.
├── manifest.yml
├── src
│ ├── requirements.txt
│ ├── lambda_function.py
In the requirements.txt you would include only the dependencies of your lambda function, in this case, the PIL library.
Pillow==6.0.0
Now, you just have to create a small file to tell juniper what to include in the zip file. The manifest.yml would look like:
functions:
  reko:
    requirements: ./src/requirements.txt
    include:
      - ./src/lambda_function.py
Now you need to pip install juniper in your local environment. Execute the cli command:
juni build
Juniper will create: ./dist/reko.zip. That file will have your source code as well as any dependency you include in your requirements.txt file.
By default juniper uses docker containers and the build command will use python3.6. You can override that default.
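Once ./dist/reko.zip exists, pushing it to AWS is one more CLI call; a sketch, assuming the Lambda function has already been created and is named reko:
aws lambda update-function-code --function-name reko --zip-file fileb://dist/reko.zip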
You are getting this error because PIL (for Python 2.x) or Pillow (for Python 3.x) is not among the standard libraries available in the Python Lambda environment.
To use such a library, you have to make a custom deployment package containing all the libraries you need as well as the Python code you want to deploy. This package can be made easily either in Docker or by using an EC2 instance.
Here is the procedure for making that deployment package on EC2:
Suppose you have your file named CreateThumbnail.py
If your source code is on a local host, copy it over to EC2.
scp -i key.pem /path/to/my_code.py ec2-user@public-ip-address:~/CreateThumbnail.py
Connect to a 64-bit Amazon Linux instance via SSH.
ssh -i key.pem ec2-user@public-ip-address
Install Python 3.6 and virtualenv using the following steps:
a) sudo yum install -y gcc zlib zlib-devel openssl openssl-devel
b) wget https://www.python.org/ftp/python/3.6.1/Python-3.6.1.tgz
c) tar -xzvf Python-3.6.1.tgz
d) cd Python-3.6.1 && ./configure && make
e) sudo make install
f) sudo /usr/local/bin/pip3 install virtualenv
Create a virtual environment using the virtualenv that was installed via pip3, and activate it:
/usr/local/bin/virtualenv ~/shrink_venv
source ~/shrink_venv/bin/activate
Install libraries in the virtual environment
pip install Pillow
pip install boto3
Add the contents of the lib and lib64 site-packages directories to your .zip file. Note that the following steps assume you used the Python 3.6 runtime; if you used 2.7, update the paths accordingly.
cd $VIRTUAL_ENV/lib/python3.6/site-packages
zip -r9 ~/CreateThumbnail.zip *
Note: to include all hidden files, use the following option:
zip -r9 ~/CreateThumbnail.zip .
Add your python code to the .zip file
cd ~
zip -g CreateThumbnail.zip CreateThumbnail.py
Now CreateThumbnail.zip is your custom deployment package; just copy it to S3 and upload it to your Lambda.
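For example, something like this should work (a sketch; the bucket name is a placeholder, and it assumes the Lambda function already exists):
aws s3 cp ~/CreateThumbnail.zip s3://my-deployment-bucket/CreateThumbnail.zip
aws lambda update-function-code --function-name CreateThumbnail --s3-bucket my-deployment-bucket --s3-key CreateThumbnail.zip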
This example is taken from official AWS documentation at
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html
I also ran into this exact same problem. There are two approaches you can take here: manual versus automated packaging and deployment.
The manual approach involves creating the correct virtualenv, installing the dependencies in that virtual environment, and then zipping everything up and uploading it to AWS.
To automate things, I always prefer to use the Serverless Framework to package and deploy Lambda functions. Specifically, the python-requirements plugin helps with packaging, but I do have to specify the following to tell the framework to build within a Docker container and not strip any libraries:
custom:
  pythonRequirements:
    dockerizePip: true
    strip: false
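For completeness, the plugin itself also has to be declared in serverless.yml; a sketch, assuming the plugin in question is the commonly used serverless-python-requirements:
plugins:
  - serverless-python-requirements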
As most of the answers here already allude to, the AWS Lambda execution environment includes only the Python built-in packages and boto3, nothing else.
To include external packages you need to bundle them yourself, either by building them and including them in your function upload, or by packaging them as layers. Also remember that the packages themselves need to be built for Amazon Linux.
If you're using python3.7, then you can use this publicly available layer for pillow:
https://github.com/keithrozario/Klayers
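Attaching such a layer is a single configuration change; a sketch with a placeholder ARN (look up the real region- and version-specific ARN in the Klayers repository):
aws lambda update-function-configuration --function-name my-function --layers arn:aws:lambda:<region>:<account-id>:layer:<layer-name>:<version>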
I downloaded the .zip from the py2neo GitHub, placed it in the site-packages folder, and ran
pip install py2neo
Everything looks like it's in the right place (I compared to a Windows setup and both contain the same files in the same places), but when I run a .py file I get:
ImportError: No module named batch
It sounds like your paths aren't set up correctly. To install, I would recommend simply running the pip install py2neo line without first downloading the zip, allowing pip to pull py2neo from PyPI. Alternatively, if you are trying to avoid using a network connection from your server, run python setup.py install from within a copy of the GitHub repository.
Note: You will want to checkout the latest release branch from the GitHub repository before installing. At the time of writing, this is named release/1.6.4.
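Putting the source-install route together, a rough sketch (run it inside whatever local copy of the repository you already have; the branch name is the one mentioned above):
cd py2neo
git checkout release/1.6.4
python setup.py install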
I am trying to install django-dash to run one of the dashboard examples and see what it's like.
I am on Windows running Python 2.7 and Django 1.6.5. I know the usual approach is to download pip and then install the package using pip. However, I am on a work computer with no administrative rights, so I can't access my Internet Option Settings to find my proxy URL and follow the instructions below:
Proxy problems
If you work in an office, you might be behind an HTTP proxy. If so, set the environment variables http_proxy and https_proxy. Most Python applications (and other free software) respect these. Example syntax:
http://proxy_url:port
http://username:password@proxy_url:port
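For reference, applying that syntax just means exporting the two variables before running pip, roughly like this (a sketch; in a Windows cmd prompt, set plays the role of export):
export http_proxy=http://username:password@proxy_url:port
export https_proxy=http://username:password@proxy_url:port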
I had the same issue when trying to install Django but was able to get it to work by moving the django directory under Python27/Lib/site-packages. Is there something similar I can do with django-dash?
I also tried downloading the sources and running python setup.py install. I received the following error:
File "setup.py", line 3, in <module>
    from setuptools import setup, find_packages
ImportError: No module named setuptools
Link to django-dash: http://django-dash.readthedocs.org/en/latest/
Yes, you can probably get the sources from the Python Package Index (PyPI).
Once you have them, uncompress the files and install them manually (this will depend on your OS).
On Linux systems:
python setup.py build
python setup.py install
Here's the full reference
EDIT: Note that when manually installing those packages, you must also install any missing dependencies, e.g. setuptools in your case.
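Since the question mentions having no administrative rights, it may also help that distutils supports per-user installs; a sketch, assuming Python's per-user site-packages directory is enabled:
python setup.py install --user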
Preface
I am so new to ssh/unix protocols that I hope I don't offend anybody.
Context
I am using the cores at my university, and do not have root access. Thus, when I install python modules, I resort to the answer on these two related stack overflow posts:
1) How to install python modules without root access?
2) How to install python packages without root privileges?
In the second post, Col Panic highly recommends getting pip or easy_install on the cores, and if they are not already there, "you should politely ask the admins to add it, explaining the benefit to them (they won't be bothered anymore by requests for individual packages)."
Following that piece of advice, I requested that the admins put easy_install on all the cores. They did, and after some proverbial futzing around with export, PATH, and PYTHONPATH, I was able to get numpy and scipy on the cores and import them into the IPython environment.
Unfortunately, there were some problems with matplotlib related to this question: ImportError: No module named backend_tkagg
I thought I could just ignore this problem related to SUSE by pickling everything and then plotting it on my laptop.
My Problem
I really do need NetworkX. I wrote down some notes on all the small intricacies I used to install the other packages on my last go, but they failed me this time around. Maybe I am forgetting something that I did last time?
nemo01.65$ easy_install --prefix=/u/walnut/h1/grad/cmarshak/xdrive/xpylocal networkx
TEST FAILED: /u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python3.3/site-packages does
NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not
on PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python3.3/site-packages
and your PYTHONPATH environment variable currently contains:
'/u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python2.7/site-packages'
Here are some of your options for correcting the problem:
* You can choose a different installation directory, i.e., one that is
on PYTHONPATH or supports .pth files
* You can add the installation directory to the PYTHONPATH environment
variable. (It must then also be on PYTHONPATH whenever you run
Python and want to use the package(s) you are installing.)
* You can set up the installation directory to support ".pth" files by
using one of the approaches described here:
https://pythonhosted.org/setuptools/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
My Attempts to Fix This
I really do need networkx; otherwise I have to adjust a bunch of the code that I want to put on the clusters.
1) I typed in:
export PYTHONPATH=/u/walnut/h1/grad/cmarshak/xdrive/xpylocal/lib/python3.3/site-packages
into the bash environment. No luck...
2) I asked another grad student for some help. He suggested I install pip via easy_install, which I did, and then use:
pip install --user networkx
When I type in:
find ./local/lib/python2.7/site-packages/ | grep net
I get a ton of files that are all from the networkx library. Unfortunately, there are still some problems with dependencies.
THANK YOU IN ADVANCE FOR YOUR HELP. Really enjoy learning new things from your answers.
It looks like there are multiple versions of pip floating around (cf. pip: dealing with multiple Python versions?). Try installing pip using a specific version of easy_install. For example, this gave me a pip2.7:
walnut.39$ easy_install-2.7 -U --user pip
Searching for pip
Reading https://pypi.python.org/simple/pip/
Best match: pip 1.5.6
Processing pip-1.5.6-py2.7.egg
pip 1.5.6 is already the active version in easy-install.pth
Installing pip script to /u/walnut/h1/grad/rcompton/.local/bin
Installing pip2.7 script to /u/walnut/h1/grad/rcompton/.local/bin
Installing pip2 script to /u/walnut/h1/grad/rcompton/.local/bin
Using /net/walnut/h1/grad/rcompton/.local/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg
Processing dependencies for pip
Finished processing dependencies for pip
walnut.40$
Then use pip2.7
walnut.40$ pip2.7 install --user networkx
Also, for non-root package installations, I've got the following lines in my .bashrc:
export PYTHONPATH=$PYTHONPATH:$HOME/.local/lib/python2.7/site-packages
export PATH=$PATH:~/.local/bin
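With those exports in place, a quick sanity check that the user-level networkx install is visible (a sketch; it just prints the installed version):
python2.7 -c "import networkx; print(networkx.__version__)"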