How to get S3 trigger details inside Lambda function? - amazon-web-services

I created a Lambda function that is triggered by an S3 PutObject event. My S3 bucket looks like this:
s3://my-bucket-name/directory_1/...
s3://my-bucket-name/directory_2/...
s3://my-bucket-name/directory_3/...
Another application is creating folders like directory_1 and so on. Once data is created in such a directory, the application also creates a file called _SUCCESS, so one of the directories looks like this:
s3://my-bucket-name/directory_1/data.txt
s3://my-bucket-name/directory_1/_SUCCESS
Now I want to trigger my Lambda function as soon as this _SUCCESS file is created, so I added an S3 trigger to the Lambda as follows:
Event type: ObjectCreatedByPut
Bucket: my-bucket-name
Prefix: directory
Suffix: _SUCCESS
Currently, I can see that the Lambda function is triggered properly whenever a new _SUCCESS file is created in any directory. But I also want to know the exact key of the _SUCCESS file that triggered the function. How can I do this?
For example, if s3://my-bucket-name/directory_1/_SUCCESS triggers my Lambda, I should be able to get the full path of this file inside the Lambda function.

The S3 notification event structure is described in detail here.
For example, to get the bucket name from the event (assuming one record in the event):
bucket_name = event['Records'][0]['s3']['bucket']['name']
Similarly for key:
key_name = event['Records'][0]['s3']['object']['key']
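Putting it together, a minimal handler sketch that logs the full path of the object that fired the trigger could look like this (the variable names and the print are just illustrative); note that S3 URL-encodes keys in the event, so decoding them with urllib is usually advisable:

import urllib.parse

def lambda_handler(event, context):
    for record in event['Records']:
        bucket_name = record['s3']['bucket']['name']
        # Keys arrive URL-encoded in the event (e.g. spaces become '+'), so decode them
        key_name = urllib.parse.unquote_plus(record['s3']['object']['key'])
        print(f"Triggered by s3://{bucket_name}/{key_name}")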

Related

Lambda function is not triggered by S3 put event trigger. There is no CloudWatch log for the event either

I created a Lambda function to process a CSV file when it is uploaded to S3.
My trigger:
S3: bucket
arn:aws:s3:::bucket
Details
Event type: ObjectCreatedByPut
Notification name: ~notification name~
Suffix: .csv
I have had thousands of files uploaded and processed.
I wanted to process one file again due to some issues with the data, so I deleted the file and uploaded it again. I had done that multiple times previously and it worked, but recently it stopped working.
My code is from the blueprint.
def lambda_handler(event, context):
    for record in event['Records']:
        bucket_name = record['s3']['bucket']['name']
        path_to_file = record['s3']['object']['key']
I checked the CloudWatch logs under the log group, but there is no log created when I put the file. I am not sure what is going on.
I gave full access to CloudWatch Logs just to be sure. Still nothing.
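One way to verify that the trigger is still attached to the bucket is to read its notification configuration; a minimal boto3 sketch, assuming the bucket is literally named bucket as in the trigger above:

import boto3

s3 = boto3.client('s3')

# Lists the Lambda notifications currently attached to the bucket;
# an empty list means the trigger was removed or never saved.
config = s3.get_bucket_notification_configuration(Bucket='bucket')
for cfg in config.get('LambdaFunctionConfigurations', []):
    print(cfg['LambdaFunctionArn'], cfg['Events'], cfg.get('Filter'))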

AWS S3 put event Lambda create DynamoDB entry with correct owner Cognito

I am using the Amplify Storage client JS library in order to put a file on S3.
this.amplify.storage().put(
  this.s3ImagePath,
  this.s3ImageFile,
  this._storageOptions)
On S3 I have created a trigger for PUT events in order to call a Lambda.
The Lambda inserts an entry into DynamoDB.
My problem is that I want to put the correct owner on this newly created DynamoDB entry.
Inside the Lambda I call the headObject like this:
const headResult = await S3.headObject({Bucket: bucketName, Key: key }).promise();
console.log('headResult ', headResult);
I would expect headResult to contain the correct owner who uploaded the file and created the event on the S3 bucket.
I am able to find from the S3 event:
ownerIdentity: { principalId: 'A2N3MUDHGLOEZI' },
but I am not sure how to use this in order to find the correct owner and insert the correct value on the new DynamoDB entry.
I also found this, which could help:
userIdentity:
{ principalId: 'AWS:AROASQSLQMXWXMMOH3FEA:CognitoIdentityCredentials' },
Since I am new to the serverless paradigm, could you please tell me how I am supposed to insert a new entry into DynamoDB with the correct owner when the Lambda is triggered by an S3 PUT event?
As a workaround, I am thinking of removing the S3 PUT event that calls the Lambda and instead calling a mutation from the client when the upload of the file finishes. But I consider this a workaround and not the correct approach to the problem.
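For reference, a rough sketch of the flow described above (S3 PUT → Lambda → DynamoDB put_item); the table name Images and the attribute names are invented for illustration, and the owner stored here is simply whatever principalId the event carries, which is not necessarily the Cognito identity of the uploader:

import boto3
import urllib.parse

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Images')  # hypothetical table name

def lambda_handler(event, context):
    for record in event['Records']:
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        # The event only exposes the caller's principalId, not the Cognito identity id
        owner = record['userIdentity']['principalId']
        table.put_item(Item={
            'imageKey': key,   # assumed partition key
            'owner': owner,
        })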

AWS Lambda and S3 trigger event data

I have a Lambda function that I need to run every time there is a change in my S3 bucket. I have added the trigger and it is working just fine, but I was wondering if there is any way to limit the scope the Lambda function runs over... for example, instead of running over the entire bucket, can it run only on the folder (inside the bucket) where the change has been made, or something like that?
You can specify rules:
- s3:
    bucket: photos
    event: s3:ObjectCreated:*
    rules:
      - prefix: uploads/
      - suffix: .jpg
See the functions/events/s3 section in the yml definition.
Per this AWS announcement, you can add prefix or suffix restrictions for S3 event triggers.
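If the notification is configured directly on the bucket rather than through a framework, the same prefix/suffix filtering can be expressed with boto3; a sketch, where the bucket name and function ARN are placeholders (note this call replaces the bucket's entire notification configuration):

import boto3

s3 = boto3.client('s3')

s3.put_bucket_notification_configuration(
    Bucket='photos',  # placeholder bucket name
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:my-function',
            'Events': ['s3:ObjectCreated:*'],
            # Only keys under uploads/ ending in .jpg will invoke the function
            'Filter': {'Key': {'FilterRules': [
                {'Name': 'prefix', 'Value': 'uploads/'},
                {'Name': 'suffix', 'Value': '.jpg'},
            ]}},
        }],
    },
)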

Passing payload through AWS S3/Lambda Trigger

I am new to the AWS platform. I have invoked a Lambda function through the AWS CLI.
aws lambda invoke --function-name CFT ... --payload file://${DATA_TMP}/affl_ftp_config.json ${DATA_LOG}/outfile.txt
Here, the payload is a JSON file:
{
  "s3_bucket": "fanatics.dev.internal.confidential",
  ....
  "date": "20160813"
}
This JSON file is being used as the event object in my Lambda handler.
Is it possible to have this behavior configured when a S3 file is uploaded and it automatically triggers a Lambda function?
For e.g.,
I upload a file to an S3 bucket, and that triggers a Lambda function with the JSON payload shown above.
No, you can't.
The Lambda function triggered by an S3 upload provides information about the new object (region, bucket, key, version-id if the bucket is versioned) but does not provide the object payload.
See the documented S3 Event Message Structure. This is what a Lambda function invoked by S3 will receive.
So, the Lambda function invoked by the S3 event must then fetch the object from S3 in order to access the payload.
So, either your existing lambda function will need to be modified, or you'll need a new lambda function to respond to the event, fetch the payload, and then call the original function.
Note also that if these events are triggered by overwrites of existing objects, then you will want versioning enabled on your bucket and you'll want to use GetObjectVersion to fetch the payload with the explicit versionId in the event, because GetObject (without specifying the version) may return stale data on overwrites.
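A rough sketch of that pattern, reusing the function name CFT from the question (everything else here is illustrative), could look like this:

import urllib.parse
import boto3

s3 = boto3.client('s3')
lambda_client = boto3.client('lambda')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        version_id = record['s3']['object'].get('versionId')

        # Fetch the uploaded JSON file; pin the exact version when available
        # so overwrites of existing keys don't return stale data.
        kwargs = {'Bucket': bucket, 'Key': key}
        if version_id:
            kwargs['VersionId'] = version_id
        payload = s3.get_object(**kwargs)['Body'].read()

        # Forward the file contents as the payload of the original function.
        lambda_client.invoke(FunctionName='CFT', Payload=payload)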
Yes you can. S3 is one of the Lambda triggers. Please read more details here

AWS S3 Put event is not working in lower level directory

I store application logs in AWS S3 in the following format:
/MyBucket/TestApplication/Year/Month/Date/mylogs.log
I enabled the event on the S3 bucket "MyBucket".
See the event settings
But the event is not fired when new logs are written to the log file "mylogs.log".
We use Event Type: ObjectCreated (All) on the S3 bucket, and then the Lambda function is triggered for objects created in lower-level directories of the bucket.
You must use Event Type: ObjectCreated (All) on the S3 bucket for the Lambda function to trigger. If it is still not working, check the role policy that you have defined on the Lambda. This role must have a policy allowing it to read/write S3.