I followed the AWS guide at https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html up to the virtual environment part.
My zip file structure looks like this:
bin
numpy
numpy-1.15.2.dist-info
myscript.py
I get an error when I upload the zip file to AWS Lambda. The error says:
{
    "errorMessage": "Unable to import module 'testingUpload'"
}
All my script file contains is:
import numpy

def lambda_handler(event, context):
    print("This is the test package")
When I upload the zip file without the import numpy line, it works fine:
def lambda_handler(event, context):
    print("This is the test package")
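For reference, a minimal sketch of how such a package is usually assembled (file names taken from the listing above; note that the error references a module testingUpload, so the function's handler setting has to match the script's actual name). numpy ships compiled extensions, so it must be installed for the Lambda platform (Amazon Linux), not for a different OS:

# Hypothetical build steps, run on an Amazon Linux-compatible environment:
pip install numpy --target package/
cd package
zip -r ../deployment.zip .
cd ..
zip -g deployment.zip myscript.py
# In the Lambda configuration, set the handler to: myscript.lambda_handler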
My lambda_function.py file looks like this:
from urllib.request import urlopen
from google.cloud import bigquery
import json

url = "https://data2.unhcr.org/population/get/timeseries?widget_id=286725&sv_id=54&population_group=5460&frequency=day&fromDate=1900-01-01"
bq_client = bigquery.Client()

def lambda_helper(event, context):
    response = urlopen(url)
    data_json = json.loads(response.read())
    bq_client.load_table_from_json(data_json['data']['timeseries'], "xxx.xxxx.tablename")
But every time I zip it up and upload it to my Lambda, I get this error:
{
    "errorMessage": "Handler 'lambda_handler' missing on module 'lambda_function'",
    "errorType": "Runtime.HandlerNotFound",
    "stackTrace": []
}
Is there a reason this error would be thrown even though I clearly have that function written in this module? This is driving me nuts. Thanks for any help!
It should be:
def lambda_handler(event, context):
not
def lambda_helper(event, context):
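In case it helps, the runtime resolves the handler string in the function's configuration as module.function; the error message above implies it is set to lambda_function.lambda_handler, so the module must define exactly that name. A minimal sketch:

# lambda_function.py -- handler configured as "lambda_function.lambda_handler"
def lambda_handler(event, context):
    # Lambda looks up this exact attribute on the module at invoke time;
    # lambda_helper was never found because the names did not match.
    return {"statusCode": 200}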
This is a basic thing and it seems obvious, but I am stuck on this one. I am using boto3, and I have an access key and a secret key.
The upload file route:
@app.route('/up')
def up():
    main = request.files['mainimg']
    bucket = <bucketname>
    if main:
        upload_to_aws(main)
The upload_to_aws function (from GitHub):
import os
import boto3
from werkzeug.utils import secure_filename

def upload_to_aws(file, acl="public-read"):
    filename = secure_filename(file.filename)
    s3 = boto3.client(
        's3',
        aws_access_key_id=os.environ.get('FASO_S3_ACCESS_KEY'),
        aws_secret_access_key=os.environ.get('FASO_S3_SECRET_KEY')
    )
    try:
        s3.upload_fileobj(
            file,
            "fasofashion",
            file.filename,
            ExtraArgs={
                "ACL": acl,
                "ContentType": file.content_type
            }
        )
        print('uploaded')
    except Exception as e:
        # This is a catch-all exception; edit this part to fit your needs.
        print("Something Happened: ", e)
        return e
I keep getting these errors:
Access denied
File must be a string
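Two things in the snippet above may explain the messages, sketched below: filename = secure_filename(file.filename) is computed but never used, while the object key passed to upload_fileobj must be a plain string; and "Access denied" usually points at the credentials or bucket policy (with ACL="public-read", the keys also need the s3:PutObjectAcl permission). Bucket name and environment variables are carried over unchanged; this is a sketch, not a verified fix:

import os
import boto3
from werkzeug.utils import secure_filename

def upload_to_aws(file, acl="public-read"):
    # Use the sanitized name as the S3 object key; unlike the raw
    # file.filename, it is a safe plain string.
    filename = secure_filename(file.filename)
    s3 = boto3.client(
        's3',
        aws_access_key_id=os.environ.get('FASO_S3_ACCESS_KEY'),
        aws_secret_access_key=os.environ.get('FASO_S3_SECRET_KEY')
    )
    s3.upload_fileobj(
        file,           # werkzeug's FileStorage is file-like, so this works
        "fasofashion",
        filename,       # the key must be a str
        ExtraArgs={"ACL": acl, "ContentType": file.content_type}
    )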
I am trying to run a Python script that is present in the AWS Lambda /tmp directory. The script requires some extra dependencies, like boto3, to run. When AWS Lambda runs the file, it gives the following error:
ModuleNotFoundError: No module named 'boto3'
However, when I run this file directly as a Lambda function, it runs without any import errors.
The Lambda code that executes the script in the /tmp directory:
import json
import os
import urllib.parse
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    records = [x for x in event.get('Records', []) if x.get('eventName') == 'ObjectCreated:Put']
    sorted_events = sorted(records, key=lambda e: e.get('eventTime'))
    latest_event = sorted_events[-1] if sorted_events else {}
    info = latest_event.get('s3', {})
    file_key = info.get('object', {}).get('key')
    bucket_name = info.get('bucket', {}).get('name')
    s3 = boto3.resource('s3')
    BUCKET_NAME = bucket_name
    keys = [file_key]
    for KEY in keys:
        local_file_name = '/tmp/' + KEY
        s3.Bucket(BUCKET_NAME).download_file(KEY, local_file_name)
        print("Running Incoming File !! ")
        os.system('python ' + local_file_name)
The /tmp script, which fetches some data from S3 using boto3:
import sys
import boto3
import json

def main():
    session = boto3.Session(
        aws_access_key_id='##',
        aws_secret_access_key='##',
        region_name='##'
    )
    s3 = session.resource('s3')
    # get a handle on the bucket that holds your file
    bucket = s3.Bucket('##')
    # get a handle on the object you want (i.e. your file)
    obj = bucket.Object(key='8.json')
    # get the object
    response = obj.get()
    # read the contents of the file
    lines = response['Body'].read().decode()
    data = json.loads(lines)
    transactions = data['dataset']['fields']
    print(str(len(transactions)))
    return str(len(transactions))

main()
So boto3 is imported in both scripts, but the import only succeeds when the Lambda code itself runs; the /tmp script cannot import boto3.
What can be the reason, and how can I resolve it?
Executing another Python process does not copy Lambda's PYTHONPATH by default:
os.system('python ' + local_file_name)
Rewrite like this:
os.system('PYTHONPATH=/var/runtime python ' + local_file_name)
To find out the complete PYTHONPATH the current Lambda runtime is using, add the following to the first script (the one executed by Lambda):
import sys
print(sys.path)
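As a side note, a minimal sketch of the same fix using subprocess instead of os.system, which makes the inherited environment explicit (run_downloaded_script is a hypothetical helper name, and sys.executable avoids assuming which python is on PATH):

import os
import subprocess
import sys

def run_downloaded_script(local_file_name):
    # Hand the parent's import paths to the child via PYTHONPATH so the
    # child process can import boto3 from the Lambda runtime.
    env = dict(os.environ, PYTHONPATH=os.pathsep.join(sys.path))
    subprocess.run([sys.executable, local_file_name], env=env, check=True)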
I'm trying to verify through a Lambda function whether the public access block of my bucket mypublicbucketname is checked or not. For testing, I created a bucket and unchecked the public access block. So I wrote this Lambda:
import sys
from pip._internal import main
main(['install', '-I', '-q', 'boto3', '--target', '/tmp/', '--no-cache-dir', '--disable-pip-version-check'])
sys.path.insert(0, '/tmp/')
import json
import boto3
import botocore

def lambda_handler(event, context):
    # TODO implement
    print(boto3.__version__)
    print(botocore.__version__)
    client = boto3.client('s3')
    response = client.get_public_access_block(Bucket='mypublicbucketname')
    print("response:>>", response)
I updated to the latest versions of boto3 and botocore:
1.16.40  # for boto3
1.19.40  # for botocore
Even though I uploaded them and the function seems correct, I got this exception:
[ERROR] ClientError: An error occurred (NoSuchPublicAccessBlockConfiguration) when calling the GetPublicAccessBlock operation: The public access block configuration was not found
Can someone explain to me why I get this error?
For future users: S3 raises NoSuchPublicAccessBlockConfiguration when the bucket simply has no public access block configuration set, so if you get the same problem with get_public_access_block(), use this solution:
try:
    response = client.get_public_access_block(Bucket='mypublicbucketname')
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'NoSuchPublicAccessBlockConfiguration':
        print('No Public Access')
    else:
        print("unexpected error: %s" % (e.response))
For put_public_access_block, it works fine.
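For completeness, a minimal sketch of the put_public_access_block side (bucket name carried over from above; the four flags are the ones the S3 API defines):

import boto3

client = boto3.client('s3')
# Turn on all four public-access-block settings for the bucket; after
# this call, get_public_access_block should succeed.
client.put_public_access_block(
    Bucket='mypublicbucketname',
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True
    }
)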
I have a Google Cloud Run instance that looks like this:
import json
import os
import rarfile
from google.cloud import storage
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def index():
    file_id = request.values.get("fileId")
    try:
        storage_client = storage.Client()
        bucket = storage_client.get_bucket("test-bucket")
        blob = bucket.blob(file_id)
        blob.download_to_filename(file_id)
        rf = rarfile.RarFile(file_id)
        rf.extractall()
        return ("", 204)
    except Exception as e:
        return f"Error: {e}", 400
    return ("500 Error", 500)
However, when I trigger the instance, I get the following error:
Error: can only concatenate str (not "RarCannotExec") to str
What is going wrong here? When I download the file and unzip it locally, I run into no problems. Does it have to do with the file system of Cloud Run instances?
EDIT:
I think from the above it is clear that the error comes from the except block where I return Error: {e}. However, from analyzing the logs, it is apparent that the program fails at the rf = rarfile.RarFile(file_id) line. Why that happens, I am still unclear on.
EDIT 2:
I needed to install either unrar or unrar-free in the container. Cheers!
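For anyone landing here later, a minimal sketch of that fix in the service's Dockerfile (the base image, app layout, and main:app module name are assumptions; rarfile shells out to an external unrar tool at extraction time, which is why RarCannotExec surfaces when none is installed):

# Hypothetical Dockerfile for the Cloud Run service above.
FROM python:3.9-slim

# rarfile needs an external extraction tool; unrar-free is in the Debian repos.
RUN apt-get update && apt-get install -y --no-install-recommends unrar-free \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY . .
RUN pip install --no-cache-dir flask rarfile google-cloud-storage gunicorn

CMD ["gunicorn", "--bind", ":8080", "main:app"]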