AWS Lambda cutting my payload in lambda invocation - amazon-web-services

I've been frying my brain all day over a weird problem. I'm trying to send a .docx document encoded in base64. Initially I thought AWS API Gateway was cutting my request body, but invoking directly via the awscli shows that Lambda itself is cutting part of the body payload. Let me show you.
Here is a snippet of my Lambda function:
def lambda_handler(event, context):
    if event["httpMethod"] == "GET":
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from PDFTron! Please send base64 doc to use this Lambda function!')
        }
    elif event["httpMethod"] == "POST":
        try:
            output_path = "/tmp/"
            body = json.loads(str(event["body"]))  # <--- the error is thrown here
            base64str = body["file"]["data"]
            filename = body["file"]["filename"]
            base64_bytes = base64decode(base64str)
            input_filename = filename.split('.')[0] + "docx"
You can check the full file in lambda_function.py.
OK, now I invoke my function via the awscli with this command line:
aws lambda invoke \
--profile lab \
--region us-east-1 \
--function-name pdf_sign \
--cli-binary-format raw-in-base64-out \
--log-type Tail \
--query 'LogResult' \
--output text \
--payload file://payload.json response.json | base64 -d
payload.json
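For reference, here is a sketch of how a payload.json with the shape the handler expects might be generated. The document name and the API-Gateway-style wrapper (a "body" holding a JSON string with file.filename and file.data) are assumptions inferred from the handler code above, not from the original post:

import base64
import json

# Build an API-Gateway-shaped event: the handler reads event["httpMethod"]
# and json.loads(event["body"]), so "body" must be a JSON *string*.
with open("mydoc.docx", "rb") as f:  # hypothetical input document
    data = base64.b64encode(f.read()).decode()

event = {
    "httpMethod": "POST",
    "body": json.dumps({"file": {"filename": "mydoc.docx", "data": data}}),
}

with open("payload.json", "w") as f:
    json.dump(event, f)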
And I receive the error below. The decoded log first dumps the payload my function received, and only the tail end of the base64 body is there (trimmed here for readability):
C/ZcP1zcsiKiso3qnIUFO0Bk9/LjB7EOzkNAA7GgFDYu2A7RzzmPege9ijNyW/K0 [... base64 trimmed ...] RX4GAGN1c3RvbVhtbC9pdGVtMS54bWxQSwUGAAAAABEAEQBdBAAAQn8GAAAA'}}}
[ERROR] Runtime.MarshalError: Unable to marshal response: Object of type KeyError is not JSON serializable
Traceback (most recent call last):
END RequestId: a26b93bd-bcf2-4072-99da-29ffcc3bc350
REPORT RequestId: a26b93bd-bcf2-4072-99da-29ffcc3bc350 Duration: 11.92 ms Billed Duration: 12 ms Memory Size: 1024 MB Max Memory Used: 93 MB
When I add a debug print to my code, I can confirm that the payload is cut: the error log contains only the final part of my payload (e.g. RX4GAGN1c3RvbVhtbC9pdGVtMS54bWxQSwUGAAAAABEAEQBdBAAAQn8GAAAA'}}}).
Reading the AWS docs about Lambda quotas and limits, the invocation payload limit is 6 MB, far larger than my payload size (232 KB):
Resource: Invocation payload (request and response)
Quota: 6 MB each for request and response (synchronous); 256 KB (asynchronous)
Source: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
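As a quick sanity check against that quota, the payload file size can be printed locally before invoking; a minimal sketch:

import os

# 6 MB is the synchronous invocation limit quoted above.
size_kb = os.path.getsize("payload.json") / 1024
print(f"payload.json is {size_kb:.0f} KB (limit: 6144 KB synchronous)")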
I really think I'm doing everything by the rules. Can someone tell me where I'm going wrong or what I've misunderstood?
I expect the full payload of my request to arrive intact in my Python function.

How did you log the event in your Lambda function?
It seems like the problem is this line:
body = json.loads(str(event["body"]))
You are using the CLI, and based on the docs the event will be sent as a string, so you need to JSON-parse the event object first:
output_path = "/tmp/"
e = json.loads(event)  # parse the event string into a dict first
body = e["body"]       # keep the body as a dict so it can be indexed by key
base64str = body["file"]["data"]
filename = body["file"]["filename"]
base64_bytes = base64decode(base64str)
input_filename = filename.split('.')[0] + ".docx"  # include the dot so the extension survives
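Since the same handler is reached both through API Gateway (where event["body"] arrives as a JSON string) and through a direct CLI invoke (where the payload file becomes the event itself, already parsed into a dict), a defensive parse can cover both shapes. This is a minimal sketch under those assumptions; extract_body is a hypothetical helper, not part of the original lambda_function.py:

import json

def extract_body(event):
    # Hypothetical helper. Direct invoke: the payload file *is* the event,
    # so "body" may already be a dict. API Gateway proxy integration:
    # event["body"] is a JSON string that still needs parsing.
    body = event.get("body", event)
    if isinstance(body, str):
        body = json.loads(body)
    return body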

Related

Getting error while testing AWS Lambda function: "Invalid database identifier"

Hi, I'm getting an error while testing my Lambda function:
{
    "errorMessage": "An error occurred (InvalidParameterValue) when calling the DescribeDBInstances operation: Invalid database identifier: <RDS instance id>",
    "errorType": "ClientError",
    "stackTrace": [
        " File \"/var/task/lambda_function.py\", line 25, in lambda_handler\n db_instances = rdsClient.describe_db_instances(DBInstanceIdentifier=rdsInstanceId)['DBInstances']\n",
        " File \"/var/runtime/botocore/client.py\", line 391, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
        " File \"/var/runtime/botocore/client.py\", line 719, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
    ]
}
And here is my Lambda code:
import json
import boto3
import logging
import os
#Logging
LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)
#Initialise Boto3 for RDS
rdsClient = boto3.client('rds')
def lambda_handler(event, context):
    # log input event
    LOGGER.info("RdsAutoRestart Event Received, now checking if event is eligible. Event Details ==> %s", event)
    # Input event from the SNS topic originated from RDS event notifications
    snsMessage = json.loads(event['Records'][0]['Sns']['Message'])
    rdsInstanceId = snsMessage['Source ID']
    stepFunctionInput = {"rdsInstanceId": rdsInstanceId}
    rdsEventId = snsMessage['Event ID']
    # Retrieve RDS instance ARN
    db_instances = rdsClient.describe_db_instances(DBInstanceIdentifier=rdsInstanceId)['DBInstances']
    db_instance = db_instances[0]
    rdsInstanceArn = db_instance['DBInstanceArn']
    # Filter on the Auto Restart RDS Event. Event code: RDS-EVENT-0154.
    if 'RDS-EVENT-0154' in rdsEventId:
        # log input event
        LOGGER.info("RdsAutoRestart Event detected, now verifying that instance was tagged with auto-restart-protection == yes")
        # Verify that instance is tagged with auto-restart-protection tag. The tag is used to classify instances that are required to be terminated once started.
        tagCheckPass = 'false'
        rdsInstanceTags = rdsClient.list_tags_for_resource(ResourceName=rdsInstanceArn)
        for rdsInstanceTag in rdsInstanceTags["TagList"]:
            if 'auto-restart-protection' in rdsInstanceTag["Key"]:
                if 'yes' in rdsInstanceTag["Value"]:
                    tagCheckPass = 'true'
                    # log instance tags
                    LOGGER.info("RdsAutoRestart verified that the instance is tagged auto-restart-protection = yes, now starting the Step Functions Flow")
                else:
                    tagCheckPass = 'false'
                    # log instance tags
                    LOGGER.info("RdsAutoRestart Event detected, now verifying that instance was tagged with auto-restart-protection == yes")
        if 'true' in tagCheckPass:
            # Initialise StepFunctions Client
            stepFunctionsClient = boto3.client('stepfunctions')
            # Start StepFunctions WorkFlow
            # StepFunctionsArn is stored in an environment variable
            stepFunctionsArn = os.environ['STEPFUNCTION_ARN']
            stepFunctionsResponse = stepFunctionsClient.start_execution(
                stateMachineArn=stepFunctionsArn,
                name=event['Records'][0]['Sns']['MessageId'],
                input=json.dumps(stepFunctionInput)
            )
    else:
        LOGGER.info("RdsAutoRestart Event detected, and event is not eligible")
    return {
        'statusCode': 200
    }
I'm trying to Stop an Amazon RDS database which starts automatically after 7 days. I'm following this AWS document: Field Notes: Stopping an Automatically Started Database Instance with Amazon RDS | AWS Architecture Blog
Can anyone help me?
The error message says: Invalid database identifier: <RDS instance id>
It seems to be coming from this line:
db_instances = rdsClient.describe_db_instances(DBInstanceIdentifier=rdsInstanceId)['DBInstances']
The error message is saying that the rdsInstanceId variable contains <RDS instance id>, which seems to be an example value rather than a real value.
In looking at the code on Field Notes: Stopping an Automatically Started Database Instance with Amazon RDS | AWS Architecture Blog, it is asking you to create a test event that includes this message:
"Message": "{\"Event Source\":\"db-instance\",\"Event Time\":\"2020-07-09 15:15:03.031\",\"Identifier Link\":\"https://console.aws.amazon.com/rds/home?region=<region>#dbinstance:id=<RDS instance id>\",\"Source ID\":\"<RDS instance id>\",\"Event ID\":\"http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/USER_Events.html#RDS-EVENT-0154\",\"Event Message\":\"DB instance started\"}",
If you look closely at that line, it includes this part to identify the Amazon RDS instance:
dbinstance:id=<RDS instance id>
I think that you are expected to modify the provided test event to fill in your own values for anything in <angled brackets> (such as the Instance Id of your Amazon RDS instance), as sketched below.
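To illustrate, here is a sketch of the filled-in SNS Message for a hypothetical instance named database-1 in eu-west-1; both values are placeholders for your own:

import json

# Hypothetical values: substitute your own region and DBInstanceIdentifier.
message = {
    "Event Source": "db-instance",
    "Event Time": "2020-07-09 15:15:03.031",
    "Identifier Link": "https://console.aws.amazon.com/rds/home?region=eu-west-1#dbinstance:id=database-1",
    "Source ID": "database-1",  # must match a real RDS instance id
    "Event ID": "http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/USER_Events.html#RDS-EVENT-0154",
    "Event Message": "DB instance started",
}

# The SNS "Message" field carries this dict as an escaped JSON string.
print(json.dumps(message))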

Why my 'AWS Lambda Invoke Function' task in Azure DevOps Build Pipeline doesn't fail if the Lambda returns 400?

I have this python code inside Lambda:
# This script will run as a Lambda function on AWS.
import time, json

cmdStatus = "Failed"
message = ""
statusCode = 200

def lambda_handler(event, context):
    time.sleep(2)
    if (cmdStatus == "Failed"):
        message = "Command execution failed"
        statusCode = 400
    elif (cmdStatus == "Success"):
        message = "The script execution is successful"
        statusCode = 200
    else:
        message = "The cmd status is: " + cmdStatus
        statusCode = 500
    return {
        'statusCode': statusCode,
        'body': json.dumps(message)
    }
and I am invoking this Lambda from Azure DevOps Build Pipeline - AWS Lambda Invoke Function.
As you can see in the above code, I have intentionally set cmdStatus to "Failed" to make the Lambda fail, but when executed from the Azure DevOps Build Pipeline the task succeeds. Strange.
How can I make the pipeline fail in this case? Please help.
Thanks
I have been working with a similar issue myself and it looks like a bug in the task itself. It was reported in 2019 and nothing has happened since, so I wouldn't hold out much hope.
https://github.com/aws/aws-toolkit-azure-devops/issues/175
My workaround to this issue was to instead use the AWS CLI task with
Command: lambda
Subcommand: invoke
Options and Parameters: --function-name {nameOfYourFunction} response.json
Followed immediately by a bash task with an inline bash script
cat response.json
if grep -q "errorType" "response.json"; then
    echo "An error was found"
    exit 1
fi
echo "No error was found"

AWS (GovCloud) Lambda Destination Not Triggering

I am working in AWS GovCloud and I have the following configuration in AWS Lambda:
A Lambda function which decodes a payload
A Kinesis Stream set as a trigger for the aforementioned function
A Lambda Destination (we have tried Lambda functions as well as SQS, SNS)
No matter the configuration, I cannot seem to get Lambda to trigger the destination function (or queue in the event of SQS).
Here is the current Lambda function. I have tried many permutations of the result/return payload, to no avail.
import base64
import json

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    for record in event['Records']:
        payload = base64.b64decode(record['kinesis']['data']).decode('utf-8', 'ignore')
        print("Success")
        result = {
            "statusCode": 202,
            "headers": {
                # 'Content-Type': 'application/json',
            },
            "body": '{payload}'
        }
    return json.dumps(result)
I then send a message to Kinesis with the AWS CLI (I have noted that "Test" in the console does not observe destinations, as per Jared Short).
# repeated every 0.1 s via watch:
aws kinesis put-records --stream-name test-stream --records Data=SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IGZyb20gdGhlIEFXUyBDTEkh,PartitionKey=partitionkey1
{
"FailedRecordCount": 0,
"Records": [
{
"SequenceNumber": "49619938447946944252072058244333476686328287240252293122",
"ShardId": "shardId-000000000000"
}
]
}
Using Cloudwatch metrics and logs I am able to observe the function being triggered by the messages sent to Kinesis every .1 second.
The metrics charts indicate a success (as I expect).
Here is an example log from Cloudwatch:
START RequestId: 0cf3fb87-06e6-4e35-9de8-b30147e7be9d Version: $LATEST
Loading function
Success
END RequestId: 0cf3fb87-06e6-4e35-9de8-b30147e7be9d
REPORT RequestId: 0cf3fb87-06e6-4e35-9de8-b30147e7be9d Duration: 1.27 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB Init Duration: 113.64 ms
START RequestId: e663fa4a-2d0b-42d6-9e38-599712b71101 Version: $LATEST
Success
END RequestId: e663fa4a-2d0b-42d6-9e38-599712b71101
REPORT RequestId: e663fa4a-2d0b-42d6-9e38-599712b71101 Duration: 1.04 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB
START RequestId: b1373bbe-d2c6-49fb-a71f-dcedaf9210eb Version: $LATEST
Success
END RequestId: b1373bbe-d2c6-49fb-a71f-dcedaf9210eb
REPORT RequestId: b1373bbe-d2c6-49fb-a71f-dcedaf9210eb Duration: 0.98 ms Billed Duration: 1 ms Memory Size: 128 MB Max Memory Used: 51 MB
START RequestId: e0382653-9c33-44d6-82a7-a82f0f416297 Version: $LATEST
Success
END RequestId: e0382653-9c33-44d6-82a7-a82f0f416297
REPORT RequestId: e0382653-9c33-44d6-82a7-a82f0f416297 Duration: 1.05 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB
START RequestId: f9600ef5-419f-4271-9680-7368ccc5512d Version: $LATEST
Success
However, viewing the CloudWatch logs/metrics for the destination Lambda function or SQS queue clearly shows that the destination is not being triggered.
Over the course of troubleshooting, I have over-provisioned IAM policies to the Lambda function execution role so I am fairly confident that it is not an IAM related issue. Additionally, both functions are sharing the same execution role.
One thing I am not clear on after reviewing AWS documentation and third-party information is the criteria by which AWS determines success or failure for a given function. I am currently researching the invocation docs in search of what might be wrong here, but my interpretation is that AWS considers our function successful, based on the above CloudWatch metrics showing a 100% success rate.
Does anyone know what I am doing wrong or how to try to troubleshoot the destination trigger for lambda?
Edit: As pointed out, the code is not correct for multiple record events. This is a function of senseless troubleshooting/changes to the code to get the Destination to trigger. Even something as simple as this does not invoke the destination.
import base64
import json

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    # for record in event['Records']:
    #     payload = base64.b64decode(record['kinesis']['data']).decode('utf-8', 'ignore')
    #     print("Success")
    #     result = {
    #         "statusCode": 202,
    #         "headers": {
    #             'Content-Type': 'application/json',
    #         },
    #         "body": '{"Success":True, payload}'
    #     }
    return { "result": "OK" }
So, the question: Can someone demonstrate it is possible to have a Kinesis Stream Event Source Trigger a Lambda Function which successfully triggers a Lambda destination in AWS Govcloud?

AWS SAM Incorrect region

I am using AWS SAM to test my Lambda functions in the AWS cloud.
This is my code for testing Lambda:
# Set "running_locally" flag if you are running the integration test locally
running_locally = True
def test_data_extraction_validate():
if running_locally:
lambda_client = boto3.client(
"lambda",
region_name="eu-west-1",
endpoint_url="http://127.0.0.1:3001",
use_ssl=False,
verify=False,
config=botocore.client.Config(
signature_version=botocore.UNSIGNED,
read_timeout=10,
retries={'max_attempts': 1}
)
)
else:
lambda_client = boto3.client('lambda',region_name="eu-west-1")
####################################################
# Test 1. Correct payload
####################################################
with open("payloads/myfunction/ok.json","r") as f:
payload = f.read()
# Correct payload
response = lambda_client.invoke(
FunctionName="myfunction",
Payload=payload
)
result = json.loads(response['Payload'].read())
assert result['status'] == True
assert result['error'] == ""
This is the command I am using to start AWS SAM locally:
sam local start-lambda -t template.yaml --debug --region eu-west-1
Whenever I run the code, I get the following error:
botocore.exceptions.ClientError: An error occurred (ResourceNotFound) when calling the Invoke operation: Function not found: arn:aws:lambda:us-west-2:012345678901:function:myfunction
I don't understand why it's trying to invoke a function located in us-west-2 when I explicitly told the code to use the eu-west-1 region. I also tried an AWS profile with a hardcoded region: same error.
When I switch the running_locally flag to False and run the code without AWS SAM, everything works fine.
===== Updated =====
The list of env variables:
# env | grep 'AWS'
AWS_PROFILE=production
My AWS configuration file:
# cat /Users/alexey/.aws/config
[profile production]
region = eu-west-1
My AWS Credentials file
# cat /Users/alexey/.aws/credentials
[production]
aws_access_key_id = <my_access_key>
aws_secret_access_key = <my_secret_key>
region=eu-west-1
Make sure you are actually running the correct local endpoint! In my case the problem was that I had started the lambda client with an incorrect configuration previously, and so my invocation was not invoking what I thought it was. Try killing the process on the port you have specified: kill $(lsof -ti:3001), run again and see if that helps!
This also assumes that you have built the function FunctionName="myfunction" correctly (make sure your function is spelt correctly in the template file you use during sam build)
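If the region still looks wrong after that, it can help to print what boto3 actually resolved before invoking. A small diagnostic sketch using the profile and local endpoint from this question:

import boto3

session = boto3.Session(profile_name="production")
print(session.region_name)  # should print eu-west-1

lambda_client = session.client(
    "lambda",
    region_name="eu-west-1",
    endpoint_url="http://127.0.0.1:3001",
)
print(lambda_client.meta.region_name)   # region the client will sign/route with
print(lambda_client.meta.endpoint_url)  # confirms the local SAM endpoint is used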

AWS Lambda reading Athena database file and writing S3 to no avail

I need help, because the file is not being written to the S3 bucket.
What did I do:
import time
import boto3

query = 'SELECT * FROM db_lambda.tb_inicial limit 10'
DATABASE = 'db_lambda'
output = 's3://bucket-lambda-test1/result/'

def lambda_handler(event, context):
    client = boto3.client('athena')
    # Execution
    response = client.start_query_execution(
        QueryString=query,
        QueryExecutionContext={
            'Database': DATABASE
        },
        ResultConfiguration={
            'OutputLocation': output,
        }
    )
    return response
IAM role created with:
AmazonS3FullAccess
AmazonAthenaFullAccess
CloudWatchLogsFullAccess
AmazonVPCFullAccess
AWSLambda_FullAccess
When running the Lambda, the output is:
Response:
{
  "statusCode": 200,
  "body": "\"Hello from Lambda!\""
}
Request ID:
"f2dd5cd2-070c-41ea-939f-d4909ce39fd0"
Function logs:
START RequestId: f2dd5cd2-070c-41ea-939f-d4909ce39fd0 Version: $LATEST
END RequestId: f2dd5cd2-070c-41ea-939f-d4909ce39fd0
REPORT RequestId: f2dd5cd2-070c-41ea-939f-d4909ce39fd0 Duration: 0.84 ms Billed Duration: 1 ms Memory Size: 128 MB Max Memory Used: 52 MB
How I did the test: I configured and saved a new test event in the Lambda console (Configure test event -> Create new test event) with an empty payload:
{}
The "Hello from Lambda" message is the default code in a Lambda function. It would appear that you did not click 'Deploy' before testing the function. Clicking Deploy will save the Lambda code.
Also, once you get it running, please note that start_query_execution() will simply start the Athena query. You will need to use get_query_results() to obtain the results.
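To illustrate that second point, here is a sketch of the usual start-then-poll pattern using the names from the question; the polling loop is one common approach, not the only one:

import time
import boto3

client = boto3.client('athena')

def run_query(query, database, output):
    # Start the query; Athena writes the result CSV to the output location.
    execution = client.start_query_execution(
        QueryString=query,
        QueryExecutionContext={'Database': database},
        ResultConfiguration={'OutputLocation': output},
    )
    query_id = execution['QueryExecutionId']

    # Poll until the query finishes; start_query_execution alone does not wait.
    while True:
        state = client.get_query_execution(QueryExecutionId=query_id)['QueryExecution']['Status']['State']
        if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
            break
        time.sleep(1)

    if state != 'SUCCEEDED':
        raise RuntimeError("Query ended in state " + state)

    # Fetch rows directly (the CSV also remains in the S3 output location).
    return client.get_query_results(QueryExecutionId=query_id)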