How to use Stripe APIs on AWS Lambda in Python

I am building a backend that uses the Stripe API on AWS Lambda, but I can't import the stripe library.
import stripe
This line gives me this error.
{
"errorMessage": "Unable to import module 'lambda_function'"
}
Can anybody help me?

The Stripe Python libraries need to be installed into the same folder as the Python script you are writing.
The pip command to do this is:
pip install --install-option="--prefix=/full/local/path/to/your/python/script" --upgrade stripe
This will actually install the libraries into the "lib" folder in the path you indicated. Copy everything from /full/local/path/to/your/python/script/lib/python2.7/site-packages to /full/local/path/to/your/python/script
Your directory will then look something like this:
./main.py
./requests/
./requests-2.13.0-py2.7.egg-info/
./stripe/
./stripe-1.55.0-py2.7.egg-info/
Zip up those files and then upload that ZIP file to AWS Lambda.
I know this question is over a year old, but it's still unanswered, and is what still turned up when I searched for this same problem, so here is how I solved it.
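For completeness, here is a minimal sketch of a handler module (e.g. lambda_function.py, matching whatever handler you configure in Lambda) that uses the vendored library once it sits next to the handler in the ZIP; the API key and charge values are placeholders, not part of the original question:
import stripe  # resolved from the stripe/ folder zipped alongside this file

stripe.api_key = "sk_test_your_key_here"  # placeholder test key

def lambda_handler(event, context):
    # create a test charge; amount is in the smallest currency unit (cents)
    charge = stripe.Charge.create(
        amount=1000,
        currency="usd",
        source="tok_visa",  # Stripe's built-in test card token
    )
    return {"charge_id": charge.id}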

To add to James Eberhardt's answer: using --target will place the files directly into the desired location.
pip install --target="/full/local/path/to/your/python/script" --upgrade stripe
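If you want to double-check which copy of the library your script actually picks up (purely a sanity check, not part of the original answer), something like this works from the script's directory:
import os
import sys

# make sure the script's own directory is searched first, then confirm where "stripe" comes from
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import stripe
print(stripe.__file__)  # should point into the folder you passed to --target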

Related

Can't import packages from layers in AWS Lambda

I know this question exists in several places, but even after following different guides/answers I still can't get it to work. I have no idea what I'm doing wrong. I have a Python Lambda function on AWS where I need to do an "import requests". This is my approach so far.
Create a .zip of the packages. Locally I do:
pip3 install requests -t ./
zip -r okta_layer.zip .
Upload the .zip to a Lambda layer:
I go to the AWS console, go to Lambda layers, and create a new layer based on this .zip file.
I go to my Python Lambda function and add the layer to the function directly from the console. I can now see the layer under "Layers" for the Lambda function. But when I run the function it still complains:
Unable to import module 'lambda_function': No module named 'requests'
I solved the problem. Apparently the .zip archive needs a "python" folder at its root, and all the packages have to live inside that "python" folder.
I only had the packages sitting directly in the zip, without a "python" folder.
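In other words, the archive has to contain python/requests/... rather than requests/... at the top level. Here is a rough sketch of building such a layer zip from Python (the okta_layer name and the requests package simply mirror the question; everything else is an assumption):
import os
import shutil
import subprocess

build_dir = "layer_build"
target = os.path.join(build_dir, "python")  # Lambda layers expect a top-level "python" folder
os.makedirs(target, exist_ok=True)
subprocess.check_call(["pip3", "install", "requests", "-t", target])
shutil.make_archive("okta_layer", "zip", build_dir)  # okta_layer.zip now has python/ at its root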

Installing Scrapy in Apache Airflow causes INVALID_ARGUMENT

I'm trying to install Scrapy from PyPI using the command below.
gcloud composer environments update $(AIRFLOW_ENVIRONMENT_NAME) \
--update-pypi-packages-from-file requirements.txt \
--location $(AIRFLOW_LOCATION)
requirements.txt looks like this:
google-api-python-client==1.7.*
google-cloud-datastore==1.7.*
Scrapy==2.0.0
After running the gcloud command it fails with INVALID_ARGUMENT, although it installs successfully in my local environment.
gcloud composer environments update xxxx \
--update-pypi-packages-from-file requirements.txt \
--location asia-northeast1
ERROR: (gcloud.composer.environments.update) INVALID_ARGUMENT: Found 1 problem:
1) Error validating key Scrapy. PyPi dependency name is not formatted properly. It must be lowercase and follow the format of 'identifier' specified in PEP-508.
Is there any way to install it?
As the previous answer stated, the error that you are receiving now is quite clear and it's caused by the wrong formatting of the dependency. It should be scrapy==2.0.0 instead of Scrapy==2.0.0 inside the requirements.txt.
I would like to add that, to avoid an installation error once you fix the formatting, you should add one more dependency to your list, namely attrs==19.2.0. I was able to install your requirements in my environment by specifying the following list:
google-api-python-client==1.7.*
google-cloud-datastore==1.7.*
scrapy==2.0.0
attrs==19.2.0
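If you have a longer requirements file, one way to guard against this validation error is to lowercase the package names before running the gcloud command. A small sketch, assuming the file is named requirements.txt and version specifiers should stay untouched:
import re

with open("requirements.txt") as f:
    lines = f.readlines()

with open("requirements.txt", "w") as f:
    for line in lines:
        # lowercase only the package name at the start of each line; comments and blanks pass through
        f.write(re.sub(r"^[A-Za-z0-9._-]+", lambda m: m.group(0).lower(), line))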
Even after you adjust the package name in the requirements.txt file according to PEP-508, formatting it in lowercase as scrapy==2.0.0, the issue will most probably remain and the update process will get stuck with the error:
Failed to install PyPI packages
Generally, this kind of error appears when the PyPI package has external dependencies or is sensitive to system-level libraries that GCP Composer doesn't support.
In this case the vendor recommends two approaches: either use KubernetesPodOperator to build your own custom image and run it in a dedicated Kubernetes pod, or deploy the PyPI package as a local Python library, uploading the shared object libraries for the PyPI dependency to the Airflow /plugins directory; you can find more info here.
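For the KubernetesPodOperator route, a rough sketch of a task that runs Scrapy from a custom image might look like the following; the image name, namespace, spider name, and dag object are placeholders for your own setup, and the import path can differ between Airflow versions:
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

scrape_task = KubernetesPodOperator(
    task_id="run_scrapy_spider",
    name="run-scrapy-spider",
    namespace="default",
    image="gcr.io/your-project/your-scrapy-image:latest",  # custom image with Scrapy preinstalled
    cmds=["scrapy"],
    arguments=["crawl", "your_spider"],
    dag=dag,  # assumes a DAG object defined elsewhere in the file
)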

How to import Spark packages in AWS Glue?

I would like to use the GraphFrames package. If I were running pyspark locally I would use the command:
~/hadoop/spark-2.3.1-bin-hadoop2.7/bin/pyspark --packages graphframes:graphframes:0.6.0-spark2.3-s_2.11
But how would I run an AWS Glue script with this package? I found nothing in the documentation...
You can provide a path to extra libraries packaged into zip archives located in S3.
Please check out this doc for more details.
It's possible to use graphframes as follows:
Download the graphframes Python library package file, e.g. from here. Unzip the .tar.gz and then re-archive it into a .zip. Put it somewhere in S3 that your Glue job has access to.
When setting up your Glue job:
Make sure that your Python Library Path references the zip file
For job parameters, you need {"--conf": "spark.jars.packages=graphframes:graphframes:0.6.0-spark2.3-s_2.11"}
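With the zip referenced in the Python library path and the spark.jars.packages conf in place, the import should work inside the Glue script. A minimal usage sketch (the vertex and edge data are made up purely for illustration):
from pyspark.sql import SparkSession
from graphframes import GraphFrame

spark = SparkSession.builder.getOrCreate()
# toy vertices and edges, just to confirm the package loads
vertices = spark.createDataFrame([("a", "Alice"), ("b", "Bob")], ["id", "name"])
edges = spark.createDataFrame([("a", "b", "knows")], ["src", "dst", "relationship"])
g = GraphFrame(vertices, edges)
g.inDegrees.show()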
Everyone looking for an answer, please read this comment.
In order to use an external package in AWS Glue PySpark or the Python shell:
1)
Clone the repo from the following URL:
https://github.com/bhavintandel/py-packager/tree/master
git clone git@github.com:bhavintandel/py-packager.git
cd py-packager
2)
Add your required packages to requirements.txt. For example:
pygeohash
Update the version and project name in setup.py. For example:
VERSION = "0.1.0"
PACKAGE_NAME = "dependencies"
3) Run the following "command1" to create a .zip package for PySpark, OR "command2" to create egg files for the Python shell:
command1:
sudo make build_zip
Command2:
sudo make bdist_egg
The above commands will generate the package in the dist folder.
4) Finally, upload this package from the dist directory to an S3 bucket. Then go to the AWS Glue job console, edit the job, find the script libraries option, click the folder icon of "Python library path", and select your S3 path.
Then use it in your Glue script:
import pygeohash as pgh
Done!
Also set the --user-jars-first: "true" parameter in the Glue job.
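For reference, once the package imports cleanly in the Glue script, usage is straightforward; a small example with arbitrary coordinates:
import pygeohash as pgh

# encode a latitude/longitude pair into a geohash string
print(pgh.encode(42.6, -5.6))                # full-precision geohash
print(pgh.encode(42.6, -5.6, precision=5))   # shorter, coarser geohash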

Deployment of Django app to AWS Lambda using zappa fails even though Zappa says your app is live at the following link

I came across the amazing serverless AWS Lambda recently and thought it would be great to have my app up there and not have to worry about auto scaling, load balancing and all for apparently a fraction of the cost.
So then I found out about Zappa which takes care of deploying your python app to AWS Lambda for you. Amazing is what I thought.
It's actually, on paper, very easy to do. Just follow the instructions here:
https://github.com/Miserlou/Zappa
Anyway, I followed the instructions with a very basic Django app in a virtualenv that just contained the Django REST framework tutorial.
I tested it locally and it works fine.
Then I set up my S3 bucket and authenticated my credentials with the AWS CLI.
Then I ran the two things you need to deploy:
zappa init
zappa deploy dev
It went through all its processes: packaging into a zip, deploying, and so on.
At the end it said my app was live and gave me a URL to try.
I pasted the URL into the browser and this is what it displayed.
Oh, and my S3 bucket is still empty, and so is my AWS Lambda service.
{
"message": "An uncaught exception happened while servicing this request.",
"traceback": [
"Traceback (most recent call last):\n",
" File \"/var/task/handler.py\", line 395, in handler\n response = Response.from_app(self.wsgi_app, environ)\n",
" File \"/home/donagh/projects/vizzydev/vizzy/visualid/vizzy_django/env/build/Werkzeug/werkzeug/wrappers.py\", line 865, in from_app\n",
" File \"/home/donagh/projects/vizzydev/vizzy/visualid/vizzy_django/env/build/Werkzeug/werkzeug/wrappers.py\", line 57, in _run_wsgi_app\n",
" File \"/home/donagh/projects/vizzydev/vizzy/visualid/vizzy_django/env/build/Werkzeug/werkzeug/test.py\", line 871, in run_wsgi_app\n",
"TypeError: 'NoneType' object is not callable\n"
]
}
If anyone has any ideas I would be very grateful. I would love to get this working; it would be an incredibly powerful resource.
When I get errors related to the Werkzeug wrappers, it is usually because my packages were not installed in my virtual environment.
virtualenv venv
source venv/bin/activate
pip install Django
pip install zappa
# pip install any other packages
# or with a requirements.txt file
pip install -r requirements.txt
Then run the zappa deploy commands.
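One quick way to confirm that the WSGI app itself loads cleanly inside the activated virtualenv before deploying again (a sketch; the settings module name is a placeholder for your project's):
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings")  # placeholder module path

from django.core.wsgi import get_wsgi_application

app = get_wsgi_application()
print(app)  # should print a WSGIHandler instance, not None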

Elastic Beanstalk: incremental push with git

When I try to push incremental changes to my AWS Elastic Beanstalk environment I get the following:
$ git aws.push
Updating the AWS Elastic Beanstalk environment None...
Error: Failed to get the Amazon S3 bucket name
I've already added FULLS3Access to my AWS user's policies.
I had a similar issue today; here are the steps I followed to investigate:
I modified line 133 of .git/AWSDevTools/aws/dev_tools.py to print the exception, like this:
except Exception, e:
print e
* Please be careful with the indentation, since Python is whitespace-sensitive.
Then I ran git aws.push again, and here is the exception that was printed:
BotoServerError: 403 Forbidden
{"Error":{"Code":"SignatureDoesNotMatch","Message":"Signature not yet current: 20150512T181122Z is still later than 20150512T181112Z (20150512T180612Z + 5 min.)","Type":"Sender"},"
The issue was a time difference between the server and my machine. I corrected it and it started working fine.
Basically, the printed exception helps you find the exact root cause; it may be related to the secret key as well.
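If you want to check the clock skew yourself without digging through the eb internals (purely a diagnostic sketch, not part of the original answer), you can compare your local UTC time with the Date header returned by an AWS endpoint:
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
from urllib.request import urlopen

# any HTTPS endpoint that returns a Date header will do; aws.amazon.com is just an example
with urlopen("https://aws.amazon.com") as resp:
    server_time = parsedate_to_datetime(resp.headers["Date"])

skew = (datetime.now(timezone.utc) - server_time).total_seconds()
print("clock skew in seconds:", skew)  # anything beyond a few minutes breaks request signing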
It may have something to do with the boto library (related thread). If you are on Ubuntu/Debian try this:
Remove old version of boto
sudo apt-get remove python-boto
Install newer version
sudo apt-get install python-pip
sudo pip install -U boto
Other systems (e.g. Mac)
Via easy_install
sudo easy_install pip
pip install boto
Or simply build from source
git clone git://github.com/boto/boto.git
cd boto
python setup.py install
Had the same problem a moment ago.
Note:
I just noticed your environment is called "None". Did you follow all the instructions and execute eb config / eb init?
One more try:
Add export PATH=$PATH:<path to unzipped eb CLI package>/AWSDevTools/Linux/ to your PATH and execute AWSDevTools-RepositorySetup.sh; maybe something is wrong with your repository setup (notice the "None" weirdness). Other possible solutions:
Double-check your AWS credentials (maybe you are using different key IDs or the wrong credentials-file format)
Old or mismatching versions of the eb client and Python (check with eb --version and python --version) (the current client is this)
Use Amazon's policy validator to double-check whether your AWS user is allowed to perform all the actions
If none of that helps, I'm out of options. Good luck.