Internal Server Error when querying endpoint - amazon-web-services

I have a very simple Lambda function that I created in AWS. Please see below.
import json

print('Loading function')

def lambda_handler(event, context):
    # 1. Parse out query string params
    userChestSize = event['userChestSize']
    print('userChestSize= ' + userChestSize)

    # 2. Construct the body of the response object
    transactionResponse = {}
    transactionResponse['userChestSize'] = userChestSize
    transactionResponse['message'] = 'Hello from Lambda'

    # 3. Construct the HTTP response object
    responseObject = {}
    responseObject['statusCode'] = 200
    responseObject['headers'] = {}
    responseObject['headers']['Content-Type'] = 'application/json'
    responseObject['body'] = json.dumps(transactionResponse)

    # 4. Return the response object
    return responseObject
Then I created a simple API with a GET method. It generated an endpoint link for me to test my Lambda. So when I use my link https://abcdefgh.execute-api.us-east-2.amazonaws.com/TestStage?userChestSize=30
I get
{"message": "Internal server error"}
The CloudWatch log has the following error:
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 7, in lambda_handler
    userChestSize = event['userChestSize']
KeyError: 'userChestSize'
What am I doing wrong? I followed the basic instructions to create the Lambda and the API Gateway.

event['userChestSize'] does not exist. I suggest logging the entire event object so you can see what is actually in the event.
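A minimal sketch of both ideas, assuming the API uses a Lambda proxy integration (in which case query string parameters arrive under event['queryStringParameters'], not at the top level of the event):

import json

def lambda_handler(event, context):
    # Log the full event so its actual shape shows up in CloudWatch
    print(json.dumps(event))

    # With a Lambda proxy integration, query string params live under
    # 'queryStringParameters'; guard against the key being absent or None
    params = event.get('queryStringParameters') or {}
    userChestSize = params.get('userChestSize')

    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'userChestSize': userChestSize,
                            'message': 'Hello from Lambda'})
    }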

Related

Linking dialogflow to aws dynamodb

I am new to AWS and following a tutorial to read data from my DynamoDB table into Dialogflow to use as a response, but I get a "Webhook call failed" error:
UNAVAILABLE, State: URL_UNREACHABLE, Reason: UNREACHABLE_5xx, HTTP
status code: 500.
Someone suggested it is due to the Lambda not working (the function is linked to an API Gateway whose URL I posted as the webhook in Dialogflow). When I ran the function I got this response:
{
  "errorMessage": "'queryResult'",
  "errorType": "KeyError",
  "requestId": "d51caf49-1cf2-4428-b42a-d8dbe9cdb4f8",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 18, in lambda_handler\n    distance=event['queryResult']['parameters']['distance']\n"
  ]
}
The error and my code are attached below. Someone said it has to do with how I am accessing keys in my Python dictionary, but I fail to understand why.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('places')

def findPlaces(distance):
    response = table.get_item(Key={'distance': distance})
    if 'Item' in response:
        return {'fulfillmentText': response['Item']['place']}
    else:
        return {'fulfillmentText': 'No Places within given distance'}

def lambda_handler(event, context):
    distance = event['queryResult']['parameters']['distance']
    return findPlaces(int(distance))
I also need help with how I would store data from Dialogflow responses back to DynamoDB. Thank you.
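As with the KeyError question above, the likely cause is the shape of the event. If the function sits behind an API Gateway proxy integration, the Dialogflow webhook JSON arrives as a string under event['body'] rather than as top-level keys — a minimal sketch under that assumption:

import json

def lambda_handler(event, context):
    # With a proxy integration the webhook payload is a JSON string in
    # event['body']; parse it before looking for 'queryResult'
    body = json.loads(event.get('body') or '{}')
    distance = body['queryResult']['parameters']['distance']
    return findPlaces(int(distance))

For writing data back to DynamoDB, put_item is the usual starting point — a sketch against the same 'places' table, with illustrative attribute names:

def savePlace(distance, place):
    # put_item inserts, or replaces, the item with this key
    table.put_item(Item={'distance': distance, 'place': place})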

ERROR - 'Credentials' object has no attribute 'signer_email'

I have a gRPC service deployed on Google Cloud Run which I want to call from Composer.
I have assigned the roles/iam.serviceAccountTokenCreator role to the service account which my composer worker nodes are running under, and I'm not mounting any custom service key files or setting the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Using the JWT_GOOGLE authentication option in the Airflow gRPC hook, I get the following error:
[2022-05-31 14:20:16,082] {grpc.py:90} INFO - Calling gRPC service
[2022-05-31 14:20:16,097] {taskinstance.py:1152} ERROR - 'Credentials' object has no attribute 'signer_email'
Traceback (most recent call last):
  File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 985, in _run_raw_task
    result = task_copy.execute(context=context)
  File "/usr/local/lib/airflow/airflow/providers/grpc/operators/grpc.py", line 95, in execute
    for response in responses:
  File "/usr/local/lib/airflow/airflow/providers/grpc/hooks/grpc.py", line 136, in run
    with self.get_conn() as channel:
  File "/usr/local/lib/airflow/airflow/providers/grpc/hooks/grpc.py", line 104, in get_conn
    jwt_creds = google_auth_jwt.OnDemandCredentials.from_signing_credentials(credentials)
  File "/opt/python3.6/lib/python3.6/site-packages/google/auth/jwt.py", line 695, in from_signing_credentials
    kwargs.setdefault("issuer", credentials.signer_email)
AttributeError: 'Credentials' object has no attribute 'signer_email'
[2022-05-31 14:20:16,100] {taskinstance.py:1196} INFO - Marking task as FAILED. dag_id=example_dag, task_id=example_task, execution_date=20220531T135709, start_date=20220531T142015, end_date=20220531T142016
[2022-05-31 14:20:23,826] {local_task_job.py:102} INFO - Task exited with return code 1
Does anyone have any idea how/why my credentials aren't including the field I need?
Found a solution to this after discussing with Google Cloud. Essentially, it looks like the JWT_GOOGLE authentication method isn't set up for GCE service accounts, so I went down the CUSTOM authentication route instead:
import google.auth.transport.grpc
import google.auth.transport.requests
import google.oauth2.credentials
import google.oauth2.id_token
from airflow.providers.grpc.operators.grpc import GrpcOperator

def connection_func(conn):
    """Custom connection function for gRPC authentication.

    Args:
        conn: Airflow Connection object

    Returns:
        An instantiated gRPC channel for making calls to our remote service.
    """
    request = google.auth.transport.requests.Request()

    if not str(conn.host).startswith("https://"):
        audience = f"https://{conn.host}"
    else:
        audience = conn.host

    token = google.oauth2.id_token.fetch_id_token(request, audience)
    creds = google.oauth2.credentials.Credentials(token)

    base_url = conn.host
    if conn.port:
        base_url = f"{base_url}:{conn.port}"

    channel = google.auth.transport.grpc.secure_authorized_channel(
        creds, None, base_url
    )
    return channel

return GrpcOperator(
    ...
    custom_connection_func=connection_func,
)
This uses the approach seen here to fetch an ID token for a given audience, then create a set of credentials from there and finally instantiate the gRPC secure channel for use in the operator.
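For context, a rough sketch of wiring this into a DAG — the task id, connection id, and stub class/method names below are all hypothetical placeholders for your own generated gRPC stubs:

from datetime import datetime

from airflow import DAG
from airflow.providers.grpc.operators.grpc import GrpcOperator

# Hypothetical: substitute the stub class generated from your .proto
from my_service_pb2_grpc import MyServiceStub

with DAG(dag_id="example_dag", start_date=datetime(2022, 5, 1),
         schedule_interval=None) as dag:
    call_service = GrpcOperator(
        task_id="example_task",
        grpc_conn_id="my_grpc_conn",   # hypothetical Airflow connection
        stub_class=MyServiceStub,
        call_func="MyMethod",          # hypothetical RPC method name
        custom_connection_func=connection_func,
    )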

Lambda Python request athena error OutputLocation

I'm working with AWS Lambda, and I would like to run a simple query in Athena and store the results in S3.
My code:
import boto3

def lambda_handler(event, context):
    query_1 = "SELECT * FROM test_athena_laurent.stage limit 5;"
    database = "test_athena_laurent"
    s3_output = "s3://athena-laurent-result/lambda/"

    client = boto3.client('athena')
    response = client.start_query_execution(
        QueryString=query_1,
        ClientRequestToken='string',
        QueryExecutionContext={
            'Database': database
        },
        ResultConfiguration={
            'OutputLocation': 's3://athena-laurent-result/lambda/'
        }
    )
    return response
It works in Spyder (Python 2.7), but in AWS I get this error:
Parameter validation failed:
Invalid length for parameter ClientRequestToken, value: 6, valid range: 32-inf: ParamValidationError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 18, in lambda_handler
    'OutputLocation': 's3://athena-laurent-result/lambda/'
I think it doesn't understand my path, but I don't know why.
Thanks
ClientRequestToken (string) --
A unique case-sensitive string used to ensure the request to create the query is idempotent (executes only once). If another StartQueryExecution request is received, the same response is returned and another query is not created. If a parameter has changed, for example, the QueryString , an error is returned. [Boto3 Docs]
This field is autopopulated if not provided.
If you do provide a string value for ClientRequestToken, ensure its length is within the limits of 32 to 128 characters.
Per @Tomalak's point, ClientRequestToken is a string. However, per the documentation just linked, you don't need it anyway when using the SDK:
This token is listed as not required because AWS SDKs (for example the AWS SDK for Java) auto-generate the token for users. If you are not using the AWS SDK or the AWS CLI, you must provide this token or the action will fail.
So, I would refactor as such:
import boto3

def lambda_handler(event, context):
    query_1 = "SELECT * FROM some_database.some_table limit 5;"
    database = "some_database"
    s3_output = "s3://some_bucket/some_tag/"

    client = boto3.client('athena')
    response = client.start_query_execution(
        QueryString=query_1,
        QueryExecutionContext={
            'Database': database
        },
        ResultConfiguration={
            'OutputLocation': s3_output
        }
    )
    return response
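One further note: start_query_execution only submits the query and returns a QueryExecutionId immediately; the results land in the OutputLocation as CSV. If you want the Lambda itself to wait for completion and read rows back, a minimal polling sketch (reusing the client and response from the refactor above, timeout handling omitted):

import time

execution_id = response['QueryExecutionId']

# Poll until the query leaves the QUEUED/RUNNING states
while True:
    status = client.get_query_execution(QueryExecutionId=execution_id)
    state = status['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(1)

if state == 'SUCCEEDED':
    result = client.get_query_results(QueryExecutionId=execution_id)
    rows = result['ResultSet']['Rows']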

aws boto3 client Stubber help stubbing unit tests

I'm trying to write some unit tests for AWS RDS. Currently, the start/stop RDS API calls have not yet been implemented in moto. I tried just mocking out boto3, but ran into all sorts of weird issues. I did some googling and found http://botocore.readthedocs.io/en/latest/reference/stubber.html
So I have tried to implement the example for RDS, but the code appears to behave like a normal client even though I have stubbed it. I'm not sure what's going on, or whether I am stubbing correctly.
import boto3
from botocore.stub import Stubber

from LambdaRdsStartStop.lambda_function import lambda_handler
from LambdaRdsStartStop.lambda_function import AWS_REGION

def tests_turn_db_on_when_cw_event_matches_tag_value(self, mock_boto):
    client = boto3.client('rds', AWS_REGION)
    stubber = Stubber(client)
    response = {u'DBInstances': [some copy pasted real data here], extra_info_about_call: extra_info}
    stubber.add_response('describe_db_instances', response, {})

    with stubber:
        r = client.describe_db_instances()
        lambda_handler({u'AutoStart': u'10:00:00+10:00/mon'}, 'context')
So the mocking WORKS for the first line inside the stubber, and the value of r is returned as my stubbed data. But when I get into my lambda_handler method inside lambda_function.py and use the client there, it behaves like a normal, unstubbed client:
lambda_function.py
def lambda_handler(event, context):
    rds_client = boto3.client('rds', region_name=AWS_REGION)
    rds_instances = rds_client.describe_db_instances()
error output:
File "D:\dev\projects\virtual_envs\rds_sloth\lib\site-packages\botocore\auth.py", line 340, in add_auth
raise NoCredentialsError
NoCredentialsError: Unable to locate credentials
You will need to patch boto3 in the module where it is called by the routine under test. Also, Stubber responses are consumed on each call, so each stubbed call requires its own add_response, as below:
import boto3
from botocore.stub import Stubber
from unittest import mock

from LambdaRdsStartStop.lambda_function import lambda_handler, AWS_REGION

def tests_turn_db_on_when_cw_event_matches_tag_value(self, mock_boto):
    client = boto3.client('rds', AWS_REGION)
    stubber = Stubber(client)

    # Response data should match the shapes in the AWS documentation,
    # otherwise botocore's response parsing raises further errors
    response = {u'DBInstances': [{'DBInstanceIdentifier': 'rds_response1'},
                                 {'DBInstanceIdentifier': 'rds_response2'}]}
    stubber.add_response('describe_db_instances', response, {})
    stubber.add_response('describe_db_instances', response, {})

    # Patch boto3 in the module where lambda_handler calls it, so the
    # handler receives the stubbed client instead of a real one
    with mock.patch('LambdaRdsStartStop.lambda_function.boto3') as mock_boto3:
        with stubber:
            mock_boto3.client.return_value = client
            r = client.describe_db_instances()  # first add_response consumed here
            response = lambda_handler({u'AutoStart': u'10:00:00+10:00/mon'}, 'context')  # second consumed here
            # assert r == response
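Stubber can also assert on expected parameters and simulate AWS error responses, which is useful for testing failure paths — a small sketch against the same stubbed client (the error code here is illustrative):

from botocore.stub import ANY

# Fail the next stop_db_instance call with a simulated AWS error,
# while asserting that some DBInstanceIdentifier is passed
stubber.add_client_error(
    'stop_db_instance',
    service_error_code='InvalidDBInstanceState',
    service_message='Instance is not in the available state.',
    expected_params={'DBInstanceIdentifier': ANY},
)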

linkedin api - python - get_connections()

I am working on a simple Python scraping script, trying to get connections from LinkedIn using their API without a redirect_uri. I have worked before with APIs that don't require a redirect URL, or that accept just https://localhost. I have the consumer_key, consumer_secret, user_token, and user_secret. Here's the code I am using, from https://github.com/ozgur/python-linkedin:
from linkedin import linkedin

RETURN_URL = ''
url = 'https://api.linkedin.com/v1/people/~'

# Instantiate the developer authentication class
authentication = linkedin.LinkedInDeveloperAuthentication(
    CONSUMER_KEY, CONSUMER_SECRET,
    USER_TOKEN, USER_SECRET,
    RETURN_URL, linkedin.PERMISSIONS.enums.values())

# Pass it in to the app...
application = linkedin.LinkedInApplication(authentication)

print application.get_profile()  # works
print application.get_connections()
And here's the error I get:
Traceback (most recent call last):
  File "getContacts.py", line 20, in <module>
    print application.get_connections()
  File "/home/imane/Projects/prjL/env/local/lib/python2.7/site-packages/linkedin/linkedin.py", line 219, in get_connections
    raise_for_error(response)
  File "/home/imane/Projects/prjL/env/local/lib/python2.7/site-packages/linkedin/utils.py", line 63, in raise_for_error
    raise LinkedInError(message)
linkedin.exceptions.LinkedInError: 403 Client Error: Forbidden for url: https://api.linkedin.com/v1/people/~/connections: Unknown Error
This is my first question here, so excuse me if I didn't make it clear enough, and thank you for helping me out.
Here's what I tried with python-oauth2:
import oauth2 as oauth
import requests
url = 'https://api.linkedin.com/v1/people/~'
params = {}
token = oauth.Token(key=USER_TOKEN, secret=USER_SECRET)
consumer = oauth.Consumer(key=CONSUMER_KEY, secret=CONSUMER_SECRET)
# Set our token/key parameters
params['oauth_token'] = token.key
params['oauth_consumer_key'] = consumer.key
oauth_request = oauth.Request(method="GET", url=url, parameters=params)
oauth_request.sign_request(oauth.SignatureMethod_HMAC_SHA1(), consumer, token)
signed_url = oauth_request.to_url()
response = requests.get(signed_url)
Connections API calls are a restricted endpoint as of March 2015. It's possible you're using sample code or documentation written at a time when anyone could access those endpoints. You are receiving a 403 response because your application legitimately does not have the permission required to make that request.