Missing handler error on Lambda but handler exists in script - amazon-web-services

My lambda_function.py file looks like this:
from urllib.request import urlopen
from google.cloud import bigquery
import json

url = "https://data2.unhcr.org/population/get/timeseries?widget_id=286725&sv_id=54&population_group=5460&frequency=day&fromDate=1900-01-01"
bq_client = bigquery.Client()

def lambda_helper(event, context):
    response = urlopen(url)
    data_json = json.loads(response.read())
    bq_client.load_table_from_json(data_json['data']['timeseries'], "xxx.xxxx.tablename")
But every time I zip it up and upload it to my Lambda I get this error:
{
    "errorMessage": "Handler 'lambda_handler' missing on module 'lambda_function'",
    "errorType": "Runtime.HandlerNotFound",
    "stackTrace": []
}
Is there a reason this error would be thrown even though I clearly have that function written in this module? This is driving me nuts. Thanks for any help!

The function name in your code has to match the function's Handler setting, which is lambda_function.lambda_handler by default (module name, dot, function name). It should be:
def lambda_handler(event, context):
not
def lambda_helper(event, context):
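For context, the Handler setting is parsed as module.function: the runtime imports the module half, then looks the function half up by name, which is exactly where Runtime.HandlerNotFound comes from. A rough sketch of that lookup (resolve_handler is an illustrative helper, not an AWS API):

```python
import importlib

def resolve_handler(handler_setting):
    """Roughly what the Lambda Python runtime does with its Handler setting:
    split 'module.function', import the module, look the function up."""
    module_name, _, func_name = handler_setting.rpartition('.')
    module = importlib.import_module(module_name)
    try:
        return getattr(module, func_name)
    except AttributeError:
        raise RuntimeError(
            "Handler '%s' missing on module '%s'" % (func_name, module_name))

# With 'lambda_function.lambda_helper' configured but the runtime asked for
# lambda_handler, the import succeeds and the lookup fails, producing the
# exact error message in the question.
```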

Related

Describe listener rule count using Boto3

I need to get the listener rule count, but I get null as output, without any error. The overall goal of my project is to get an email notification when listener rules are created on an Elastic Load Balancer.
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client('elbv2')
    response = client.describe_listeners(
        ListenerArns=[
            'arn:aws:elasticloadbalancing:ap-south-1:my_alb_listener',
        ],
    )
    print('response')
Here is the output of my code
Response
null
Response is null because your indentation is incorrect and you never return anything from your handler; note also that print('response') prints the literal string 'response', not the variable. It should be:
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client('elbv2')
    response = client.describe_listeners(
        ListenerArns=[
            'arn:aws:elasticloadbalancing:ap-south-1:my_alb_listener',
        ],
    )
    return response
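One caveat: describe_listeners returns listeners, not their rules, so the response above will not contain a rule count. To count rules you would call describe_rules with the listener's ARN and take the length of its Rules list. A sketch under that assumption, with the counting factored into a plain function so the logic can be checked without AWS access (the ARN is the placeholder from the question):

```python
def count_rules(describe_rules_response):
    """Number of rules in an elbv2 describe_rules response dict."""
    return len(describe_rules_response.get('Rules', []))

def lambda_handler(event, context):
    import boto3  # imported here so count_rules stays usable without boto3
    client = boto3.client('elbv2')
    response = client.describe_rules(
        ListenerArn='arn:aws:elasticloadbalancing:ap-south-1:my_alb_listener',
    )
    return count_rules(response)
```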

Lambda call S3 get public access block using boto3

I'm trying to verify, through a Lambda function, whether the public access block of my bucket mypublicbucketname is checked or not. For testing, I created a bucket and unchecked the public access block. So I wrote this Lambda:
import sys
from pip._internal import main
main(['install', '-I', '-q', 'boto3', '--target', '/tmp/', '--no-cache-dir', '--disable-pip-version-check'])
sys.path.insert(0, '/tmp/')
import json
import boto3
import botocore

def lambda_handler(event, context):
    # TODO implement
    print(boto3.__version__)
    print(botocore.__version__)
    client = boto3.client('s3')
    response = client.get_public_access_block(Bucket='mypublicbucketname')
    print("response:>>", response)
I updated to the latest versions of boto3 and botocore:
1.16.40 #for boto3
1.19.40 #for botocore
Even though I uploaded them and the function seems correct, I got this exception:
[ERROR] ClientError: An error occurred (NoSuchPublicAccessBlockConfiguration) when calling the GetPublicAccessBlock operation: The public access block configuration was not found
Can someone explain why I get this error?
The exception is actually expected here: since the public access block was unchecked, the bucket has no public access block configuration at all, and S3 reports that as NoSuchPublicAccessBlockConfiguration rather than returning a configuration of all-false flags. For future users: if you hit the same problem with get_public_access_block(), use this solution:
try:
    response = client.get_public_access_block(Bucket='mypublicbucketname')
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'NoSuchPublicAccessBlockConfiguration':
        print('No Public Access')
    else:
        print("unexpected error: %s" % (e.response))
put_public_access_block, on the other hand, works fine.
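The same pattern extends to any other error code, since everything the branch needs is in e.response['Error']['Code']. A sketch with that decision factored into a plain function so it can be exercised without AWS access (classify_public_access_error is a hypothetical helper, not a boto3 API):

```python
def classify_public_access_error(error_response):
    """Turn a botocore ClientError's .response dict into a report string."""
    code = error_response['Error']['Code']
    if code == 'NoSuchPublicAccessBlockConfiguration':
        # No configuration on the bucket at all -- nothing is blocked.
        return 'No Public Access'
    return 'unexpected error: %s' % code
```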

Invoking an endpoint in AWS with a multidimensional array

I have deployed a TensorFlow model in SageMaker Studio following this tutorial:
https://aws.amazon.com/de/blogs/machine-learning/deploy-trained-keras-or-tensorflow-models-using-amazon-sagemaker/
The model needs a multidimensional array as input. Invoking it from the notebook itself works:
import numpy as np
import json
data = np.load("testValues.npy")
pred=predictor.predict(data)
But I wasn't able to invoke it from a boto3 client using this code:
import json
import boto3
import numpy as np
import io

client = boto3.client('runtime.sagemaker')
datain = np.load("testValues.npy")
data = datain.tolist()
response = client.invoke_endpoint(EndpointName=endpoint_name, Body=json.dumps(data))
response_body = response['Body']
print(response_body.read())
This throws the Error:
An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (415) from model with message "{"error": "Unsupported Media Type: Unknown"}".
I guess the reason is the JSON media type, but I have no clue how to get it back in shape.
I tried this: https://github.com/aws/amazon-sagemaker-examples/issues/644 but it doesn't seem to change anything.
This fixed it for me: the ContentType was missing.
import json
import boto3
import numpy as np
import io

client = boto3.client('runtime.sagemaker', aws_access_key_id=..., aws_secret_access_key=..., region_name=...)
endpoint_name = '...'
data = np.load("testValues.npy")
payload = json.dumps(data.tolist())
response = client.invoke_endpoint(EndpointName=endpoint_name,
                                  ContentType='application/json',
                                  Body=payload)
result = json.loads(response['Body'].read().decode())
res = result['predictions']
print(res)
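The two essential changes are passing ContentType='application/json' and calling tolist() before json.dumps, because json.dumps cannot serialize an ndarray directly. A minimal round trip of just the serialization step (the small array is a stand-in for testValues.npy):

```python
import json
import numpy as np

data = np.arange(6, dtype=np.float32).reshape(2, 3)  # stand-in for testValues.npy

# json.dumps(data) would raise TypeError; nested Python lists serialize fine.
payload = json.dumps(data.tolist())

# The receiving side mirrors this to rebuild the array.
restored = np.array(json.loads(payload))
print(restored.shape)  # → (2, 3)
```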

Lambda-API gateway : "message": "Internal server error"

I am using AWS CodeStar (Lambda + API Gateway) to build my serverless API. My lambda function works well in the Lambda console but strangely throws this error when I run the code on AWS CodeStar:
"message": "Internal server error"
Kindly help me with this issue.
import json
import os
import bz2
import pprint
import hashlib
import sqlite3
import re
from collections import namedtuple
from gzip import GzipFile
from io import BytesIO
from botocore.vendored import requests
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

def handler(event, context):
    logger.info('## ENVIRONMENT VARIABLES')
    logger.info(os.environ)
    logger.info('## EVENT')
    logger.info(event)
    n = get_package_list()
    n1 = str(n)
    dat = {"total_pack": n1}
    return {'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps(dat)
            }

def get_package_list():
    url = "http://amazonlinux.us-east-2.amazonaws.com/2/core/2.0/x86_64/c60ceaf6dfa3bc10e730c9e803b51543250c8a12bb009af00e527a598394cd5e/repodata/primary.sqlite.gz"
    db_filename = "dbfile"
    resp = requests.get(url, stream=True)
    remote_data = resp.raw.read()
    cached_fh = BytesIO(remote_data)
    compressed_fh = GzipFile(fileobj=cached_fh)
    with open(os.path.join('/tmp', db_filename), "wb") as local_fh:
        local_fh.write(compressed_fh.read())
    package_obj_list = []
    db = sqlite3.connect(os.path.join('/tmp', db_filename))
    c = db.cursor()
    c.execute('SELECT name FROM packages')
    for package in c.fetchall():
        package_obj_list.append(package)
    no_of_packages = len(package_obj_list)
    return no_of_packages
Expected result: it should return an integer (no_of_packages).
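A common cause of a bare "Internal server error" from API Gateway is a handler response that doesn't match the Lambda proxy integration format: an integer statusCode, a headers dict, and a string body. The handler above does build that shape, so it is worth verifying the contract locally first; a sketch with get_package_list stubbed out (the stub value is arbitrary), so the response shape can be tested without the download and SQLite work:

```python
import json

def get_package_list():
    return 42  # stub standing in for the real download-and-count logic

def handler(event, context):
    dat = {"total_pack": str(get_package_list())}
    # Lambda proxy integrations require exactly this shape; anything else
    # surfaces as "Internal server error" on the API Gateway side.
    return {'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps(dat)}

resp = handler({}, None)
print(resp['statusCode'])  # → 200
```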

"errorMessage": "module initialization error"

Using Python, I followed along with a tutorial, and when it came to testing it, the following error popped up:
{
    "errorMessage": "module initialization error"
}
What could I have done wrong?
from __future__ import print_function
import os
from datetime import datetime
from urllib2 import urlopen

SITE = os.environ['site']  # URL of the site to check, stored in the site environment variable, e.g. https://aws.amazon.com
EXPECTED = os.environ['expected']  # String expected to be on the page, stored in the expected environment variable, e.g. Amazon

def validate(res):
    '''Return False to trigger the canary
    Currently this simply checks whether the EXPECTED string is present.
    However, you could modify this to perform any number of arbitrary
    checks on the contents of SITE.
    '''
    return EXPECTED in res

def lambda_handler(event, context):
    print('Checking {} at {}...'.format(SITE, event['time']))
    try:
        if not validate(urlopen(SITE).read()):
            raise Exception('Validation failed')
    except:
        print('Check failed!')
        raise
    else:
        print('Check passed!')
        return event['time']
    finally:
        print('Check complete at {}'.format(str(datetime.now())))
A module initialization error means the module failed before the handler ever ran: here, os.environ['site'] raises a KeyError at import time if the site environment variable isn't set. You don't need any environment variables, though. Just keep it simple:
from __future__ import print_function
import os
from datetime import datetime
from urllib2 import urlopen

def lambda_handler(event, context):
    url = 'https://www.google.com'  # change it to your own
    print('Checking {} at {}...'.format(url, datetime.utcnow()))
    html = urlopen(url).read()
    # do some processing
    return html
Here is another simple example.
from __future__ import print_function

def lambda_handler(event, context):
    first = event.get('first', 0)
    second = event.get('second', 0)
    sum = first + second
    return sum
Here is a sample event which will be used to invoke this Lambda. You can configure the event from the Lambda web interface (or google it).
{
    "first": 10,
    "second": 23
}
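Since lambda_handler is an ordinary function, you can also invoke it locally with that sample event before deploying; the context argument can simply be None (this sketch is written for Python 3, so the __future__ import is dropped):

```python
def lambda_handler(event, context):
    # .get with a default keeps the handler safe when a key is missing
    first = event.get('first', 0)
    second = event.get('second', 0)
    return first + second

print(lambda_handler({"first": 10, "second": 23}, None))  # → 33
```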
In my case, I had missed adding logging_config.ini to the Lambda function.
I guess you would face a similar error whenever the Lambda function can't find a referenced file or package.
Thanks to the new Cloud9 IDE integration, I was able to create the file on the fly.