Linking dialogflow to aws dynamodb - amazon-web-services

I am new to AWS and I am following a tutorial to read data from my DynamoDB table into Dialogflow to use as a response. The webhook call fails with this error:
UNAVAILABLE, State: URL_UNREACHABLE, Reason: UNREACHABLE_5xx, HTTP
status code: 500.
Someone suggested it is due to the Lambda function not working (the function is linked to an API Gateway endpoint that I set as the webhook in Dialogflow). When I ran the function directly, I got this response:
{
  "errorMessage": "'queryResult'",
  "errorType": "KeyError",
  "requestId": "d51caf49-1cf2-4428-b42a-d8dbe9cdb4f8",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 18, in lambda_handler\n    distance=event['queryResult']['parameters']['distance']\n"
  ]
}
The error and my code are attached below. Someone said it has to do with how I am accessing keys in my Python dictionary, but I fail to understand it.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('places')

def findPlaces(distance):
    response = table.get_item(Key={'distance': distance})
    if 'Item' in response:
        return {'fulfillmentText': response['Item']['place']}
    else:
        return {'fulfillmentText': 'No Places within given distance'}

def lambda_handler(event, context):
    distance = event['queryResult']['parameters']['distance']
    return findPlaces(int(distance))
I also need help with how I would store data from Dialogflow responses back to DynamoDB. Thank you.
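One common cause of a KeyError like this: when API Gateway uses the Lambda proxy integration, the request JSON is not the event itself but a string under event['body'], so event['queryResult'] does not exist at the top level. A minimal sketch of a defensive extraction (the sample payload below is fabricated for illustration, not a real Dialogflow request):

```python
import json

def extract_distance(event):
    # With a non-proxy integration, Dialogflow's JSON is the event itself;
    # with the Lambda proxy integration, it arrives as a string in event['body'].
    body = event.get('body')
    payload = json.loads(body) if isinstance(body, str) else event
    return payload['queryResult']['parameters']['distance']

# Hypothetical shape of a proxy-integration event:
sample_event = {'body': json.dumps(
    {'queryResult': {'parameters': {'distance': '5'}}})}
print(extract_distance(sample_event))  # prints 5
```

For writing data back, the usual sketch is a put_item call on the same table resource, e.g. table.put_item(Item={'distance': 5, 'place': 'park'}), where the attribute names must match your table's schema.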

Related

Is Snomed Integration For AWS Medical Comprehend Actually Working?

According to a recent update, the AWS Comprehend Medical service should now be returning SNOMED CT categories along with other medical terms.
I am running this in a Python 3.9 Lambda:
import json
import boto3

def lambda_handler(event, context):
    clinical_note = "Patient X was diagnosed with insomnia."
    cm_client = boto3.client("comprehendmedical")
    response = cm_client.infer_snomedct(Text=clinical_note)
    print(response)
I get the following response:
{
  "errorMessage": "'ComprehendMedical' object has no attribute 'infer_snomedct'",
  "errorType": "AttributeError",
  "requestId": "560f1d3c-800a-46b6-a674-e0c3c3cc719f",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 7, in lambda_handler\n    response = cm_client.infer_snomedct(Text=clinical_note)\n",
    "  File \"/var/runtime/botocore/client.py\", line 643, in __getattr__\n    raise AttributeError(\n"
  ]
}
So either I am missing something (probably something obvious) or maybe the method is not actually available yet? Anything to set me on the right path would be welcome.
This is most likely due to the default boto3 version bundled with the Lambda runtime being out of date.

Internal Server Error when querying endpoint

I have a very simple Lambda function that I created in AWS. Please see below.
import json

print('Loading function')

def lambda_handler(event, context):
    # 1. Parse out query string params
    userChestSize = event['userChestSize']
    print('userChestSize= ' + userChestSize)

    # 2. Construct the body of the response object
    transactionResponse = {}
    transactionResponse['userChestSize'] = userChestSize
    transactionResponse['message'] = 'Hello from Lambda'

    # 3. Construct http response object
    responseObject = {}
    responseObject['statusCode'] = 200
    responseObject['headers'] = {}
    responseObject['headers']['Content-Type'] = 'application/json'
    responseObject['body'] = json.dumps(transactionResponse)

    # 4. Return the response object
    return responseObject
Then I created a simple API with a GET method. It generated an endpoint link for me to test my Lambda. So when I use the link https://abcdefgh.execute-api.us-east-2.amazonaws.com/TestStage?userChestSize=30
I get
{"message": "Internal server error"}
Cloud Log has the following error
'userChestSize': KeyError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 7, in lambda_handler
userChestSize = event['userChestSize']
KeyError: 'userChestSize'
What am I doing wrong? I followed the basic instructions to create the Lambda and the API Gateway.
event['userChestSize'] does not exist. I suggest logging the entire event object so you can see what is actually in the event.
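With the Lambda proxy integration that the API Gateway console sets up by default, GET query strings arrive under event['queryStringParameters'], not at the top level of the event. A minimal sketch of a handler that reads them defensively (the sample event at the bottom is fabricated for illustration):

```python
import json

def lambda_handler(event, context):
    # queryStringParameters is None when no query string is sent at all,
    # so fall back to an empty dict before looking up the key.
    params = event.get('queryStringParameters') or {}
    user_chest_size = params.get('userChestSize')
    if user_chest_size is None:
        return {'statusCode': 400,
                'headers': {'Content-Type': 'application/json'},
                'body': json.dumps({'message': 'userChestSize is required'})}
    return {'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'userChestSize': user_chest_size,
                                'message': 'Hello from Lambda'})}

print(lambda_handler({'queryStringParameters': {'userChestSize': '30'}}, None))
```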

How to troubleshoot and solve lambda function issue?

import sys
import botocore
import boto3
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    rds = boto3.client('rds')
    lambdaFunc = boto3.client('lambda')
    print('Trying to get Environment variable')
    try:
        funcResponse = lambdaFunc.get_function_configuration(
            FunctionName='RDSInstanceStart'
        )
        # print(funcResponse)
        DBinstance = funcResponse['Environment']['Variables']['DBInstanceName']
        print('Starting RDS service for DBInstance : ' + DBinstance)
    except ClientError as e:
        print(e)
    try:
        response = rds.start_db_instance(
            DBInstanceIdentifier=DBinstance
        )
        print('Success :: ')
        return response
    except ClientError as e:
        print(e)
    return {
        'message': "Script execution completed. See CloudWatch logs for complete output"
    }
I have a running RDS instance (db.t2.micro, MSSQL Server) that I start and stop every day using a Lambda function. It was working fine previously, but today it unexpectedly failed: the RDS instance is no longer started automatically by the Lambda function. I checked the error log, but it looks like the usual daily log with nothing wrong in it. I am unable to troubleshoot and solve the issue. Can anyone tell me about this issue?
FYI, a shortened version would be:
import boto3
import os

def lambda_handler(event, context):
    rds_client = boto3.client('rds')
    response = rds_client.start_db_instance(DBInstanceIdentifier=os.environ['DBInstanceName'])
    print(response)
You can see the logs of each Lambda invocation in CloudWatch, or via AWS Lambda -> Monitoring -> View logs in CloudWatch. This opens a page with the logs of each invocation.
If there are no logs, it means the Lambda is not being invoked.
You can check whether the roles and policies assigned to the Lambda are correct.
You should print the response of the API you use to start the DB (e.g. start-db-instance). The response will be printed to the CloudWatch log.
https://docs.aws.amazon.com/cli/latest/reference/rds/start-db-instance.html
For later automation you might want to create a metric filter on the Lambda's CloudWatch Logs for a certain keyword, such as:
"\"DBInstanceStatus\": \"starting\""
An alarm can be created as well, with a threshold of, say, < 1. If the keyword is not found in a log, the metric will push no value (you can customize this under the Advanced option), the alarm will go into INSUFFICIENT_DATA, and you can set up an SNS notification for INSUFFICIENT_DATA.
You can also tweak the alarm to treat missing data as bad, so that it transitions to the ALARM state when the metric filter does not match the incoming logs.

returning JSON response from AWS Glue Pythonshell job to the boto3 caller

Is there a way to send a JSON response (a dictionary of outputs) from an AWS Glue Python shell job, similar to returning a JSON response from AWS Lambda?
I am calling a Glue pythonshell job like below:
response = glue.start_job_run(
    JobName='test_metrics',
    Arguments={
        '--test_metrics': 'test_metrics',
        '--s3_target_path_key': 's3://my_target',
        '--s3_target_path_value': 's3://my_target_value'})
print(response)
The response I get is a 200 stating that the Glue start_job_run call was a success. From the documentation, all I see is that the result of a Glue job is either written to S3 or to some other database.
I tried adding return {'result':'some_string'} at the end of my Glue Python shell job to test whether it works, with the code below.
import sys
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv,
                          ['JOB_NAME',
                           's3_target_path_key',
                           's3_target_path_value'])
print("Target path key is: ", args['s3_target_path_key'])
print("Target Path value is: ", args['s3_target_path_value'])

return {'result': "some_string"}
But it throws the error SyntaxError: 'return' outside function.
Glue is not designed to return a response, because it is expected to run long-running operations, and blocking on a response for a long-running task is not the right approach in itself. Instead, you may use a launch job (service 1) -> execute job (service 2) -> get result (service 3) pattern: the executing job sends its JSON result on to whatever service it launches next. For example, if you launch a Lambda from the Glue job, you can send the JSON response to it.

Lambda Python request athena error OutputLocation

I'm working with AWS Lambda and I would like to make a simple query in Athena and store my data in S3.
My code :
import boto3

def lambda_handler(event, context):
    query_1 = "SELECT * FROM test_athena_laurent.stage limit 5;"
    database = "test_athena_laurent"
    s3_output = "s3://athena-laurent-result/lambda/"

    client = boto3.client('athena')
    response = client.start_query_execution(
        QueryString=query_1,
        ClientRequestToken='string',
        QueryExecutionContext={
            'Database': database
        },
        ResultConfiguration={
            'OutputLocation': 's3://athena-laurent-result/lambda/'
        }
    )
    return response
It works in Spyder on Python 2.7, but in AWS I have this error:
Parameter validation failed:
Invalid length for parameter ClientRequestToken, value: 6, valid range: 32-inf: ParamValidationError
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 18, in lambda_handler
'OutputLocation': 's3://athena-laurent-result/lambda/'
I think it doesn't understand my path, and I don't know why.
Thanks
ClientRequestToken (string) --
A unique case-sensitive string used to ensure the request to create the query is idempotent (executes only once). If another StartQueryExecution request is received, the same response is returned and another query is not created. If a parameter has changed, for example, the QueryString , an error is returned. [Boto3 Docs]
This field is autopopulated if not provided.
If you are providing a string value for ClientRequestToken, ensure it is within length limits from 32 to 128.
Per @Tomalak's point, ClientRequestToken is a string. However, per the documentation I just linked, you don't need it anyway when using the SDK.
This token is listed as not required because AWS SDKs (for example the AWS SDK for Java) auto-generate the token for users. If you are not using the AWS SDK or the AWS CLI, you must provide this token or the action will fail.
So, I would refactor as such:
import boto3

def lambda_handler(event, context):
    query_1 = "SELECT * FROM some_database.some_table limit 5;"
    database = "some_database"
    s3_output = "s3://some_bucket/some_tag/"

    client = boto3.client('athena')
    response = client.start_query_execution(
        QueryString=query_1,
        QueryExecutionContext={
            'Database': database
        },
        ResultConfiguration={
            'OutputLocation': s3_output
        }
    )
    return response