aws lambda django app deployed with zappa - python import precedence - django

We have a Django application we deploy on AWS Lambda using Zappa, and we use pipenv to manage the project's Python packages. Some packages we use (e.g. cryptography) need to be compiled with the same configuration as the Lambda machine. To do that, I've generated wheels for those packages on a similar machine and included them in a subfolder of the project directory. So here is our deployment process now:
install packages with pipenv (which also installs those special packages)
extract the precompiled wheels into a special directory
run the zappa deploy command
After this, we have two versions of those packages: one installed via pipenv, and one extracted manually from the precompiled wheels. The reason is that I still want the project to run locally (using the packages installed via pipenv, not the precompiled ones). So I want local runs of the project to use the pipenv-installed packages, but Lambda to use the extracted, precompiled versions.
I've added the directory where we keep the precompiled packages to the PYTHONPATH environment variable, but as far as I can see, Zappa puts all Python packages installed via pipenv in the root folder of the project, and that folder takes precedence over anything added via PYTHONPATH in the import resolution order.
So do I have any options for telling Python to give a specific folder precedence over all others in import resolution?
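One option is to prepend the precompiled-package directory to sys.path at the very top of the Lambda entry point, since sys.path entries are searched in order. A minimal sketch, not Zappa-specific; the directory name vendored_wheels is a hypothetical placeholder, and the environment check relies on AWS_LAMBDA_FUNCTION_NAME, which the Lambda runtime sets:

```python
import os
import sys

def prefer_vendored(vendored_dir):
    """Prepend vendored_dir to sys.path so packages in it win import resolution."""
    if os.path.isdir(vendored_dir) and vendored_dir not in sys.path:
        # sys.path is searched front to back, so index 0 beats the project root
        sys.path.insert(0, vendored_dir)

# On Lambda (AWS_LAMBDA_FUNCTION_NAME is set by the runtime), prefer the
# precompiled copies; locally the branch is skipped and pipenv's packages win.
if os.environ.get("AWS_LAMBDA_FUNCTION_NAME"):
    prefer_vendored(
        os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendored_wheels")
    )
```

Because this only reorders sys.path on Lambda, local runs keep importing the pipenv-installed versions untouched.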

Related

AWS lambda Unable to import module 'lambda_function': No module named PIL

I am using a Lambda function with SearchFacesByImage, following this doc: https://aws.amazon.com/blogs/machine-learning/build-your-own-face-recognition-service-using-amazon-rekognition/
For the comparison I am using:
from PIL import Image
And I am getting this error:
Unable to import module 'lambda_function': No module named PIL
Even though the documentation clearly outlines the steps to manually create the zip artifact for your Lambda function, that approach is not very scalable. I've been using a very small package called juniper to seamlessly package Python Lambda functions.
In your particular case these are the steps you need to take:
Assuming this is your folder structure:
.
├── manifest.yml
├── src
│ ├── requirements.txt
│ ├── lambda_function.py
In the requirements.txt you would include only the dependencies of your lambda function, in this case, the PIL library.
Pillow==6.0.0
Now, you just have to create a small file to tell juniper what to include in the zip file. The manifest.yml would look like:
functions:
  reko:
    requirements: ./src/requirements.txt
    include:
      - ./src/lambda_function.py
Now pip install juniper in your local environment and execute the CLI command:
juni build
Juniper will create: ./dist/reko.zip. That file will have your source code as well as any dependency you include in your requirements.txt file.
By default juniper uses docker containers and the build command will use python3.6. You can override that default.
You are getting this error because PIL (for Python 2.x) and Pillow (for 3.x) are not standard libraries available in the Python Lambda environment.
To use such a library, you have to make a custom deployment package containing all the libraries you need as well as the Python code you want to deploy. This package can be made easily either in Docker or on an EC2 instance.
Here is the procedure for making that deployment package on EC2:
Suppose you have your file named CreateThumbnail.py
If your source code is on a local host, copy it over to EC2:
scp -i key.pem /path/to/my_code.py ec2-user@public-ip-address:~/CreateThumbnail.py
Connect to a 64-bit Amazon Linux instance via SSH:
ssh -i key.pem ec2-user@public-ip-address
Install Python 3.6 and virtualenv using the following steps:
a) sudo yum install -y gcc zlib zlib-devel openssl openssl-devel
b) wget https://www.python.org/ftp/python/3.6.1/Python-3.6.1.tgz
c) tar -xzvf Python-3.6.1.tgz
d) cd Python-3.6.1 && ./configure && make
e) sudo make install
f) sudo /usr/local/bin/pip3 install virtualenv
Create and activate a virtual environment using the virtualenv installed via pip3:
/usr/local/bin/virtualenv ~/shrink_venv
source ~/shrink_venv/bin/activate
Install libraries in the virtual environment
pip install Pillow
pip install boto3
Add the contents of lib and lib64 site-packages to your .zip file. Note that the following steps assume you used Python runtime version 3.6. If you used version 2.7 you will need to update accordingly.
cd $VIRTUAL_ENV/lib/python3.6/site-packages
zip -r9 ~/CreateThumbnail.zip *
Note: to include all hidden files as well, use the following instead:
zip -r9 ~/CreateThumbnail.zip .
Add your python code to the .zip file
cd ~
zip -g CreateThumbnail.zip CreateThumbnail.py
Now CreateThumbnail.zip is your custom deployment package; just copy it to S3 and upload it to your Lambda.
This example is taken from official AWS documentation at
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html
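The zip steps above can also be scripted. A rough Python equivalent (paths are illustrative) of the `zip -r9` over site-packages followed by `zip -g` for the handler:

```python
import os
import zipfile

def build_deployment_package(site_packages, handler_file, out_zip):
    """Zip the contents of site-packages plus the handler at the archive root,
    mirroring the manual `zip -r9` / `zip -g` steps above."""
    with zipfile.ZipFile(out_zip, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(site_packages):
            for name in files:
                path = os.path.join(root, name)
                # Store paths relative to site-packages so imports resolve
                # from the archive root, which is what Lambda expects.
                zf.write(path, os.path.relpath(path, site_packages))
        # Equivalent of `zip -g CreateThumbnail.zip CreateThumbnail.py`
        zf.write(handler_file, os.path.basename(handler_file))
```

The key detail either way is that dependencies sit at the root of the archive, not inside a site-packages/ prefix.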
I also ran into this exact same problem. There are two routes you can take here: manual versus automated packaging and deploying.
The manual route involves creating the correct virtualenv, installing the dependencies into that virtual environment, then zipping everything and uploading it to AWS.
To automate things, I always prefer the Serverless framework to package and deploy Lambda functions. Specifically, the serverless-python-requirements plugin helps with packaging, but I do have to specify the following to tell the framework to build inside a Docker container and not strip any libraries:
custom:
  pythonRequirements:
    dockerizePip: true
    strip: false
As most of the answers here already allude to, the AWS Lambda execution environment includes only the Python built-in packages and boto3, nothing else.
To use external packages you need to include them yourself, either by building them and bundling them into your function upload, or by packaging them as layers. Also remember that the packages themselves need to be built for Amazon Linux.
If you're using python3.7, you can use this publicly available layer for Pillow:
https://github.com/keithrozario/Klayers

confusion in deploying modules with django

Good day.
I'm a newbie to Django and I have a slight confusion:
When deploying my Django app, do I need to deploy it with all the Python 'come-with' modules, or do the hosts already have them installed?
Also, I installed PIL for image manipulation. Would they also have it installed, or do I have to find a way to install it on their servers? Thanks in advance.
do I need to deploy it with all the Python 'come-with' modules
Never do that. It might conflict with the dependencies on the server. Instead, issue the following command to create a dependency file (requirements.txt):
pip freeze > requirements.txt (run this where manage.py is located)
On the server, create a new virtual environment, then copy the Django project over (via git clone or plain old FileZilla) and activate the virtual environment. Change your current working directory to where manage.py is located, then install all the dependencies with:
pip install -r requirements.txt
This will install the required dependencies on the server.
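For context, pip freeze just records pinned name==version lines, one per installed package. A rough stdlib analogue (Python 3.8+, using importlib.metadata; the function name freeze_like is made up for illustration):

```python
from importlib import metadata

def freeze_like():
    """Return sorted 'name==version' lines for installed distributions,
    roughly what `pip freeze > requirements.txt` writes out."""
    return sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in metadata.distributions()
    )
```

Since every line pins an exact version, `pip install -r requirements.txt` on the server reproduces the same package set you developed against.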

Where do you install django-registration-redux module?

I have my own project called "polling". I want to install the django-registration-redux module to create the log-in process. Where should I save the django-registration-redux module? Another way to ask this is: what should the directory structure look like?
Should I save it within the polling directory, or at the same level as polling, since it is a separate project that I'm importing?
From the docs:
Automatic installation via a package manager
Several automatic package-installation tools are available for Python; the recommended one is pip.
Using pip, type:
pip install django-registration-redux
Manual installation from a downloaded package
If you prefer not to use an automated package installer, you can download a copy of django-registration-redux and install it manually... From a command line in that directory, type:
python setup.py install
Bottom line: just run the setup installation process and it will install it to the right place.
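After installing, the package still has to be wired into the project's settings rather than copied into a directory. A sketch based on the django-registration-redux quickstart (check the docs for your version; the surrounding apps shown are the usual Django defaults):

```python
# settings.py fragment (a sketch, not a complete settings file)
INSTALLED_APPS = [
    "django.contrib.auth",
    "django.contrib.contenttypes",
    # ... your other apps, e.g. the "polling" app ...
    "registration",  # the app provided by `pip install django-registration-redux`
]

# How long users have to activate their account via the emailed link
ACCOUNT_ACTIVATION_DAYS = 7
```

With that in place the registration URLs are included from your root urls.py (typically under an accounts/ prefix), and no copy of the package lives inside your project tree at all.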

offline install for py2neo in ubuntu

I downloaded the .zip from the py2neo GitHub repository, placed it in the site-packages folder, and ran
pip install py2neo
Everything looks like it's in the right place (I compared with a Windows setup and they both contain the same files in the same places), but when I run a .py file I get:
ImportError: No module named batch
It sounds like your paths aren't set up correctly. To install, I would recommend simply running pip install py2neo without first downloading the zip, allowing pip to pull py2neo from PyPI. Alternatively, if you are trying to avoid using a network connection from your server, run python setup.py install from within a copy of the GitHub repository.
Note: you will want to check out the latest release branch from the GitHub repository before installing. At the time of writing, this is named release/1.6.4.
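To confirm which copy of the package (if any) Python will actually import, importlib.util.find_spec reports the resolved location without importing it. A small helper (the name where_is is made up for illustration):

```python
import importlib.util

def where_is(module_name):
    """Return the path Python would import module_name from,
    or None if it is not importable on the current sys.path."""
    spec = importlib.util.find_spec(module_name)
    if spec is None:
        return None
    return spec.origin

# e.g. where_is("py2neo") returning None matches the ImportError above,
# while a path outside site-packages means a stale copy is shadowing it.
```

This is a quick way to tell "installed in the wrong place" apart from "not installed at all".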

Is it possible to install a django package without pip?

I am trying to install django-dash to run one of the dashboard examples and see what it's like.
I am on Windows running Python 2.7 and Django 1.6.5. I know the usual approach is to download pip and then install the package using pip. However, I am on a work computer with no administrative rights, so I can't access my Internet Options settings to find my proxy URL to follow the instructions below:
Proxy problems
If you work in an office, you might be behind a HTTP proxy. If so, set the environment variables http_proxy and https_proxy. Most Python applications (and other free software) respect these. Example syntax:
http://proxy_url:port
http://username:password@proxy_url:port
I had the same issue when trying to install Django but was able to get it to work by moving the django directory under Python27/Lib/site-packages. Is there something similar I can do with django-dash?
I also tried downloading the sources and running python setup.py install. I received the following error:
File "setup.py", line 3, in <module>
    from setuptools import setup, find_packages
ImportError: No module named setuptools
Link to django-dash: http://django-dash.readthedocs.org/en/latest/
Yes, you can probably get the sources from the Python Package Index (PyPI).
Once you have them, uncompress the files and install them manually (this will depend on your OS).
On Linux systems:
python setup.py build
python setup.py install
Here's the full reference
EDIT: Note that when manually installing packages like this, you must also install any missing dependencies, e.g. setuptools in your case.
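On a machine without admin rights, python setup.py install --user (or pip install --user once pip is available) targets the per-user site-packages instead of the system one, so no elevation is needed. You can check both the install target and whether setuptools is importable from the stdlib; a small sketch:

```python
import importlib.util
import site

# The asker's `ImportError: No module named setuptools` means this is False
have_setuptools = importlib.util.find_spec("setuptools") is not None

# Per-user install target used by the --user flag; writable without admin rights
user_site = site.getusersitepackages()
```

If have_setuptools is False, installing setuptools (with --user) first is the prerequisite for running any setup.py that imports it.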