I am new to the AWS platform. I have invoked a Lambda function through the AWS CLI:
aws lambda invoke --function-name CFT ... --payload file://${DATA_TMP}/affl_ftp_config.json ${DATA_LOG}/outfile.txt
Here, the payload is a JSON file:
{
"s3_bucket": "fanatics.dev.internal.confidential",
....
"date": "20160813"
}
This JSON file is used as the event object in my Lambda handler.
Is it possible to configure this so that uploading a file to S3 automatically triggers the Lambda function?
For example:
I upload a file to an S3 bucket, and that upload triggers a Lambda function with the JSON payload shown above.
No, you can't.
The Lambda function triggered by an S3 upload provides information about the new object (region, bucket, key, version-id if the bucket is versioned) but does not provide the object payload.
See the documented S3 Event Message Structure. This is what a Lambda function invoked by S3 will receive.
So the Lambda function invoked by the S3 event must then fetch the object from S3 in order to access the payload.
Either your existing Lambda function will need to be modified, or you'll need a new Lambda function that responds to the event, fetches the payload, and then calls the original function.
Note also that if these events are triggered by overwrites of existing objects, you will want versioning enabled on the bucket, and you will want to fetch the payload with GetObjectVersion, passing the explicit versionId from the event, because GetObject (without specifying a version) may return stale data after an overwrite.
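As a sketch of that second approach, a minimal bridging handler might look like the following (the function name CFT and the event fields come from the question and the documented S3 event structure; everything else is illustrative):

import json
import boto3
from urllib.parse import unquote_plus

s3 = boto3.client('s3')
lambda_client = boto3.client('lambda')

def handler(event, context):
    # The S3 event tells us where the new object is, but not its contents.
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = unquote_plus(record['object']['key'])  # keys arrive URL-encoded

    # Use the explicit versionId when present so overwrites don't return stale data.
    get_args = {'Bucket': bucket, 'Key': key}
    version_id = record['object'].get('versionId')
    if version_id:
        get_args['VersionId'] = version_id

    payload = s3.get_object(**get_args)['Body'].read()

    # Forward the object contents as the payload of the original function.
    lambda_client.invoke(FunctionName='CFT', Payload=payload)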
Yes, you can: S3 is one of the supported Lambda triggers. You can read more details in the documentation.
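If the goal is just to wire up the S3 trigger itself (the payload limitation from the previous answer still applies), a rough boto3 sketch might look like this (the ARNs are placeholders; the bucket and function names are taken from the question):

import boto3

lambda_client = boto3.client('lambda')
s3 = boto3.client('s3')

# Allow S3 to invoke the function (ARNs below are placeholders).
lambda_client.add_permission(
    FunctionName='CFT',
    StatementId='s3-invoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::fanatics.dev.internal.confidential',
)

# Wire the bucket notification to the function for object-created events.
s3.put_bucket_notification_configuration(
    Bucket='fanatics.dev.internal.confidential',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:REGION:ACCOUNT_ID:function:CFT',
            'Events': ['s3:ObjectCreated:*'],
        }],
    },
)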
Related
I created a Lambda function that is triggered by the S3 PutObject event. My S3 bucket looks like this:
s3://my-bucket-name/directory_1/...
s3://my-bucket-name/directory_2/...
s3://my-bucket-name/directory_3/...
Another application creates folders like directory_1 and so on. Once data has been created in a directory, the application also creates a file called _SUCCESS, so one of the directories looks like this:
s3://my-bucket-name/directory_1/data.txt
s3://my-bucket-name/directory_1/_SUCCESS
Now I want to trigger my Lambda function as soon as this _SUCCESS file is created, so I added an S3 trigger in Lambda as follows:
Event type: ObjectCreatedByPut
Bucket: my-bucket-name
Prefix: directory
Suffix: _SUCCESS
Currently, I can see that the Lambda function is triggered properly whenever a new _SUCCESS file is created in any directory. But I also want to know the exact key of the _SUCCESS file that triggered the function. How can I do this?
For example, if s3://my-bucket-name/directory_1/_SUCCESS triggers my Lambda, I should be able to get the full path of that file inside the Lambda function.
The S3 notification event structure is described in detail here.
For example, to get the bucket name from the event (assuming one record in the event):
bucket_name = event['Records'][0]['s3']['bucket']['name']
Similarly for key:
key_name = event['Records'][0]['s3']['object']['key']
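Note that the object key arrives URL-encoded in the event (spaces become +, and special characters are percent-encoded), so it is worth decoding it before use. A minimal handler sketch:

from urllib.parse import unquote_plus

def handler(event, context):
    for record in event['Records']:
        bucket_name = record['s3']['bucket']['name']
        # The key is URL-encoded in the notification, so decode it first.
        key_name = unquote_plus(record['s3']['object']['key'])
        # e.g. prints s3://my-bucket-name/directory_1/_SUCCESS
        print(f's3://{bucket_name}/{key_name}')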
I am working on a serverless project and I only have access to the AWS CLI, so I want to get the trigger information for a function, i.e. its event source. Since I am using an SNS topic to trigger the function, I want to get the topic information and its ARN. I tried different options, such as:
list-event-source-mappings, which returns an empty array
get-function, which doesn't include that information
Is there a way to get the trigger information for a function with the AWS CLI?
In this case, I believe the only way to get that information is the get-policy API call, as that returns the resource-based policy (a.k.a. the trigger) which allows the other service to invoke the Lambda function.
The list-event-source-mappings API returns only the stream-based event sources in the region, such as:
Kinesis
DynamoDB
SQS
So, for example, if I have a Lambda function configured to be invoked from SNS, the policy returned would be similar to:
aws lambda get-policy --function-name arn:aws:lambda:us-east-1:111122223333:function:YOUR_LAMBDA_NAME_HERE --query Policy --output text | jq '.Statement[0].Condition.ArnLike["AWS:SourceArn"]'
OUTPUT:
"arn:aws:sns:REGION:111122223333:TOPIC_NAME"
Note that this assumes the Lambda function's policy has only that one statement. If you know the specific statement id, you should be able to select it in jq using a filter.
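The same lookup can also be done without jq, for example with a small boto3 sketch (the statement id sns-invoke is a hypothetical name; use whatever id your policy actually contains):

import json
import boto3

lambda_client = boto3.client('lambda')

# get_policy returns the resource-based policy as a JSON string.
policy = json.loads(
    lambda_client.get_policy(FunctionName='YOUR_LAMBDA_NAME_HERE')['Policy']
)

for statement in policy['Statement']:
    # Select the statement by its id instead of assuming it is the first one.
    if statement['Sid'] == 'sns-invoke':
        print(statement['Condition']['ArnLike']['AWS:SourceArn'])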
I am using the Amplify Storage JS client library to put a file on S3.
this.amplify.storage().put(
this.s3ImagePath,
this.s3ImageFile,
this._storageOptions)
On S3, I have created a trigger for PUT events in order to call a Lambda function.
The Lambda inserts an entry on DynamoDB.
My problem is that I want to set the correct owner on this newly created DynamoDB entry.
Inside the Lambda I call the headObject like this:
const headResult = await S3.headObject({Bucket: bucketName, Key: key }).promise();
console.log('headResult ', headResult);
I would expect headResult to contain the correct owner, i.e. the user who uploaded the file and caused the event on the S3 bucket.
In the S3 event I can find:
ownerIdentity: { principalId: 'A2N3MUDHGLOEZI' },
but I am not sure how to use this to find the correct owner and insert the correct value into the new DynamoDB entry.
I also found this, which could help:
userIdentity:
{ principalId: 'AWS:AROASQSLQMXWXMMOH3FEA:CognitoIdentityCredentials' },
Since I am new to the serverless paradigm, could you please tell me how I am supposed to insert a new entry into DynamoDB when the Lambda is triggered by an S3 PUT event?
As a workaround, I am thinking of removing the S3 PUT event that calls the Lambda to create the entry, and instead calling a mutation from the client when the file upload finishes. But I consider this a workaround and not the correct approach to the problem.
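For what it's worth, a minimal sketch of what the S3-triggered insert could look like, taking the owner from the userIdentity field shown above (the table and attribute names are hypothetical, and resolving the principalId to an application-level owner is exactly the open question here):

import boto3
from urllib.parse import unquote_plus

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Uploads')  # hypothetical table name

def handler(event, context):
    record = event['Records'][0]
    key = unquote_plus(record['s3']['object']['key'])
    # Looks like 'AWS:AROA...:CognitoIdentityCredentials' in the event;
    # mapping this back to a Cognito user is the unresolved part.
    principal_id = record['userIdentity']['principalId']
    table.put_item(Item={'fileKey': key, 'owner': principal_id})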
I am calling the AWS CloudWatch Events putRule and putTargets APIs through the AWS SDK to create a CloudWatch rule and attach a target to it. My target is a Lambda function. The rule gets created and the target gets attached to it, but when the rule fires on its schedule, the target Lambda function is not triggered. Looking further, I found that the event source under the Lambda function is not added, which is why it does not trigger. If I create the rule and target through the AWS console, the event source gets created and everything works, but not through the API.
You'll need to call the Lambda add-permission API after adding the target.
That is (via boto3 for me):
create the lambda
create the rule
create the targets
call lambda add-permission with the lambda arn
See the boto3 documentation or the CLI docs.
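A minimal boto3 sketch of those steps (the rule name, schedule, and ARNs are illustrative):

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

function_arn = 'arn:aws:lambda:us-east-1:111122223333:function:MY_FUNCTION'

# Create the rule and attach the Lambda function as its target.
rule_arn = events.put_rule(
    Name='my-schedule-rule',
    ScheduleExpression='rate(5 minutes)',
)['RuleArn']
events.put_targets(
    Rule='my-schedule-rule',
    Targets=[{'Id': '1', 'Arn': function_arn}],
)

# Without this permission the rule fires but never invokes the function.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='events-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn,
)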
It is possible to add event sources via the AWS SDK. I faced the same issue; the code below shows the solution in Java.
AddPermissionRequest addPermissionRequest = new AddPermissionRequest();
addPermissionRequest.setStatementId("12345ff"); //any unique string would go
addPermissionRequest.withSourceArn(ruleArn);
addPermissionRequest.setAction("lambda:InvokeFunction");
addPermissionRequest.setPrincipal("events.amazonaws.com");
addPermissionRequest.setFunctionName("name of your lambda function");
AWSLambdaAsyncClient lambdaClient = new AWSLambdaAsyncClient();
lambdaClient.withRegion(Regions.US_EAST_1); //region of your lambda's location
lambdaClient.addPermission(addPermissionRequest);
I had the same issue here, and I solved it by doing what @Anvita Shukla suggested.
This worked fine when I did the following:
create the Lambda (I created this in the web console)
And with the SDK:
create the rule object
create the target object
put request of the rule
put request of the target
get the response object of the rule request to retrieve the rule ARN
create the permission object (as @Anvita Shukla said) and set the rule ARN
add the permission via the Lambda client object
In the AWS Lambda page I can see my Lambdas with their associated trigger events, and in the AWS CloudWatch Events page I can see the created rules.
I wrote this in Java. If you want, I can share the code.
I fixed it. You need to add a permission to the Lambda function with the CloudWatch rule as the SourceArn after calling putTargets. For example:
var lambdaPermission = {
FunctionName: 'cloudwatch-trigger',
StatementId : timestamp.toString(),
Action: 'lambda:InvokeFunction',
Principal: 'events.amazonaws.com',
SourceArn: 'arn:aws:events:ap-southeast-1:XXXXXX:rule/schedule_auto_1'
};
lambda.addPermission(lambdaPermission, function(err, data) {
if (err) {
console.log("Error", err);
} else {
console.log("Success", data);
console.log("add permisson done");
}
});
As far as I understand, this is currently not possible through the SDK; CloudWatch event sources can only be added to Lambdas through the console, as you said, or using the CLI. If I'm wrong I would love to know what is possible, but the documentation here seems to agree.
http://docs.aws.amazon.com/lambda/latest/dg/with-scheduled-events.html
I store application logs in AWS S3 in the following format:
/MyBucket/TestApplication/Year/Month/Date/mylogs.log
I enabled event notifications on the S3 bucket "MyBucket" (see the event settings).
But the event is not fired when new logs arrive in the log file "mylogs.log".
We use Event Type: ObjectCreated (All) on the S3 bucket; the Lambda function is then triggered even for objects created in lower-level directories of the bucket.
You must use Event Type: ObjectCreated (All) on the S3 bucket; then the Lambda function will trigger. If it still does not work, check the role policy you have defined on the Lambda function. That role must have a policy allowing it to read/write S3.
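For reference, a minimal statement along those lines in the role's policy might look like this (the bucket name and the exact set of actions are assumptions to adapt):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::MyBucket/*"
    }
  ]
}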