Trigger Lambda function when S3 buckets get created - amazon-web-services

I want to create a Lambda function using Java or Python so that whenever a new S3 bucket is created, it enables default encryption (AES256) and server-access logging on that bucket, if they were not enabled when the bucket was created.

You can use a CloudWatch Events rule here.
Go to CloudWatch and, under Events, select Rules.
Create a new rule.
Select Event Pattern, then Specific operation(s), and select CreateBucket. (API-call events like CreateBucket are delivered via CloudTrail, so you need a trail logging S3 API calls for the rule to fire.)
In the next column, click Add target and select the Lambda function you want to trigger (create one if you don't already have it).
Now write your Lambda in Java or Python and use the SDK to make whatever changes you require.
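As a rough sketch, the Lambda handler could look like this in Python with boto3; the event shape assumes the rule matched CloudTrail's CreateBucket API call, and the log target bucket is a placeholder you would replace:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

LOG_TARGET_BUCKET = 'my-access-logs-bucket'  # placeholder: your log destination

def lambda_handler(event, context):
    # CloudWatch Events delivers CloudTrail API-call events under 'detail'
    bucket = event['detail']['requestParameters']['bucketName']

    # Enable default encryption only if none is configured yet
    try:
        s3.get_bucket_encryption(Bucket=bucket)
    except ClientError as e:
        if e.response['Error']['Code'] != 'ServerSideEncryptionConfigurationNotFoundError':
            raise
        s3.put_bucket_encryption(
            Bucket=bucket,
            ServerSideEncryptionConfiguration={
                'Rules': [{'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}}]
            },
        )

    # Enable server-access logging if it is not already on
    logging_status = s3.get_bucket_logging(Bucket=bucket)
    if 'LoggingEnabled' not in logging_status:
        s3.put_bucket_logging(
            Bucket=bucket,
            BucketLoggingStatus={
                'LoggingEnabled': {
                    'TargetBucket': LOG_TARGET_BUCKET,
                    'TargetPrefix': f'{bucket}/',
                }
            },
        )
```

The function's execution role will need s3:GetEncryptionConfiguration, s3:PutEncryptionConfiguration, s3:GetBucketLogging, and s3:PutBucketLogging permissions.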

Related

How to do AWS VM incremental scan functionality?

Can anyone help me with how to do an incremental scan on AWS using the Python SDK? I want to scan the S3 bucket of an Amazon EC2 instance. The first time it should scan completely; after that, it should scan only what has changed since the previous scan.
From what I've read, it can be done by:
using logs
consuming changes (event-driven)
using the REST API
But I could not figure out how.
There is no "incremental scan" capability in Amazon S3.
You would need to list all existing objects. However, you could use the LastModified date to determine which objects were created since a particular point in time.
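For example, a minimal boto3 sketch of the LastModified approach (the bucket name and cutoff time are placeholders):

```python
import boto3
from datetime import datetime, timezone

s3 = boto3.client('s3')
cutoff = datetime(2023, 1, 1, tzinfo=timezone.utc)  # placeholder: time of your last scan

# List every object, keeping only those modified since the cutoff
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='my-bucket'):  # placeholder bucket name
    for obj in page.get('Contents', []):
        if obj['LastModified'] > cutoff:
            print(obj['Key'])  # object created or changed since the cutoff
```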
Alternatively, you could create an Amazon S3 Event to trigger an AWS Lambda function whenever a new object is created. You could then write code in the Lambda function that extracts the names of the new objects from the event parameter and does something with them. This is effectively 'incremental', but one new object at a time.
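A minimal sketch of that event-driven approach, assuming a standard S3 event notification payload:

```python
def lambda_handler(event, context):
    # Each S3 event notification can carry one or more records
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        # Do something with the new object, e.g. scan it
        print(f'New object: s3://{bucket}/{key}')
```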

How to trigger a specific AWS Lambda version from AWS DynamoDB Streams

I have a DynamoDB table that triggers a Lambda function by enabling DynamoDB Streams. This was set up from the DynamoDB console. I would, however, like to be able to point the trigger to a specific version/alias of the Lambda function. Most other AWS services let you specify the Lambda ARN, where you can tag the version or alias on the end, like arn:aws:lambda:<region>:<account-id>:function:<function-name>:<version/alias>.
However, when adding a trigger to the DynamoDB table, it only allows you to select the Lambda function name from a list, and there seems to be no way to use a version/alias.
There also does not seem to be a CLI/API command to do the same.
Has anyone had any success doing this?
We can attach a different trigger to each alias we have for a Lambda function. To do this, go to the Lambda console, select the function, and create a new alias.
After the alias is created, you will have the option to attach new triggers.
On this page, press the + Add trigger button and search for DynamoDB. After selecting DynamoDB, you are prompted to select the table whose stream you want to use.
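If you prefer to wire this up programmatically, the event source mapping API does accept a qualified function name, so you can point the stream at an alias. A minimal boto3 sketch, where the stream ARN and alias name are placeholders:

```python
import boto3

lambda_client = boto3.client('lambda')

lambda_client.create_event_source_mapping(
    # placeholder: your table's DynamoDB stream ARN
    EventSourceArn='arn:aws:dynamodb:us-east-1:123456789012:table/my-table/stream/2023-01-01T00:00:00.000',
    FunctionName='my-function:prod',  # function-name:alias targets the alias
    StartingPosition='LATEST',
)
```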
Another approach that might be useful is as follows.
Open the AWS Console, search for the DynamoDB service, and open it.
Click your table and open the Triggers option.
There you can see the Lambdas that are linked with the stream. Click the Lambda whose version/alias you want to change, then click the Edit/Test trigger button.
You will be redirected to the Lambda service page, where you can deploy the specific version of the Lambda you want. The stream will then call that specific version.
Short way:
Open the AWS Console, search for the Lambda service, and open the function that is triggered by the stream.
Deploy the specific version you need.
Hope that helps!

Enable/disable or delete AWS Lambda trigger on s3:ObjectCreated:* using Python

I have a Lambda function that triggers on any new object created (s3:ObjectCreated:*) in bucket A-prod. The Lambda processes the file and saves the result in B-prod.
Now I want to restore this data on QA, and I am writing a script for that; I don't want to process the files again. In my case I will copy B-prod to B-qa and A-prod to A-qa via a Python script.
It would be great if I could disable the Lambda trigger temporarily, but AWS no longer allows disabling it (the option is greyed out in the console).
I can delete the trigger from the AWS console but can't find how to do it from Python or the CLI.
[Screenshot: Lambda function trigger in the AWS Console]
"can't find how to do it from python or cli."
In boto3, use put_bucket_notification_configuration to set an empty notification configuration.
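A minimal sketch, using the bucket name from the question; saving the existing configuration first lets you restore the trigger afterwards:

```python
import boto3

s3 = boto3.client('s3')

# Save the current configuration so the trigger can be restored later
saved = s3.get_bucket_notification_configuration(Bucket='A-prod')
saved.pop('ResponseMetadata', None)  # drop metadata before re-applying

# Disable: replace with an empty configuration (this removes the trigger)
s3.put_bucket_notification_configuration(
    Bucket='A-prod',
    NotificationConfiguration={},
)

# ... run the copy script here ...

# Re-enable: restore the saved configuration
s3.put_bucket_notification_configuration(
    Bucket='A-prod',
    NotificationConfiguration=saved,
)
```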
For anyone looking for a temporary disable from AWS console without deleting the trigger, here's a work-around. Note that this is not meant as a permanent solution. I had to disable temporarily and it works.
Go to S3 > Buckets > your-bucket > Properties
Scroll down to "Event Notifications"
You should see your Lambda Function in Destination,
Choose Edit,
Uncheck "All Object Create Events",
Check another event type that will not occur in your process, such as "Object Tagging",
Save.
When finished, re-enable it by checking "All Object Create Events" and unchecking the other event.

AWS Lambda function and Athena to create partitioned table

Here are my requirements: every day I receive a CSV file into an S3 bucket. I need to partition that data and store it as Parquet to eventually map a table. I was thinking about using an AWS Lambda function that is triggered whenever a file is uploaded, but I'm not sure what the steps are to do that.
There are (as usual in AWS!) several ways to do this; the first two that come to mind are:
using a CloudWatch Events rule, with an S3 PutObject (object-level) API call as the trigger and a Lambda function that you have already created as the target;
starting from the Lambda function, it is slightly easier to add suffix-filtered triggers, e.g. for any .csv file, by going to the function configuration in the Console and, in the Designer section, adding a trigger, then choosing S3 and the options you want, e.g. bucket, event type, prefix, suffix.
In both cases, you will need to write the Lambda function to do the work you have described, and it will need IAM access to the bucket to pull the files and process them.
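As an illustration of what the function body might look like, here's a sketch using the AWS SDK for pandas (awswrangler), assuming it is attached as a Lambda layer and the event has the S3-notification shape from the second option; the output path, partition column, and Glue database/table names are all placeholders:

```python
import awswrangler as wr  # assumes the AWS SDK for pandas Lambda layer

def lambda_handler(event, context):
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']

    # Read the incoming CSV straight from S3
    df = wr.s3.read_csv(f's3://{bucket}/{key}')

    # Write partitioned Parquet and register/update the Glue table,
    # which makes the data queryable from Athena
    wr.s3.to_parquet(
        df=df,
        path='s3://my-parquet-bucket/my_table/',  # placeholder output location
        dataset=True,
        partition_cols=['dt'],   # placeholder partition column
        database='my_db',        # placeholder Glue database
        table='my_table',        # placeholder table name
    )
```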

Executing Lambda on S3 bucket ObjectCreated event in CloudFormation

I have a requirement to launch a number of Lambda functions on the ObjectCreated event in a number of S3 buckets. But the architecture of my application requires modularity, so I have to create two different templates: one for bucket creation and another for the Lambdas. As I see it, one way to achieve this is by using the SNS service.
SNS
We create the SNS topic in the bucket-creation template and wire the ObjectCreated event to it through the NotificationConfiguration property of the S3 bucket. In the Lambda template we can subscribe the Lambda to the above-mentioned SNS topic, and the Lambda function will be called on the S3 ObjectCreated event.
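For reference, that SNS wiring looks roughly like this in boto3 (the topic and function ARNs are placeholders; in practice the same resources would be expressed in the two CloudFormation templates):

```python
import boto3

s3 = boto3.client('s3')
sns = boto3.client('sns')

TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:object-created'             # placeholder
LAMBDA_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:my-function'   # placeholder

# Bucket-template side: send ObjectCreated events to the topic
s3.put_bucket_notification_configuration(
    Bucket='my-bucket',  # placeholder
    NotificationConfiguration={
        'TopicConfigurations': [{
            'TopicArn': TOPIC_ARN,
            'Events': ['s3:ObjectCreated:*'],
        }]
    },
)

# Lambda-template side: subscribe the function to the topic
sns.subscribe(TopicArn=TOPIC_ARN, Protocol='lambda', Endpoint=LAMBDA_ARN)
```

You would also need the topic policy to allow s3.amazonaws.com to publish, and a Lambda permission allowing sns.amazonaws.com to invoke the function.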
But again, the architecture does not allow using SNS.
Possible way
Is it at all possible to do this without using SNS and without compromising on modularity, i.e. keeping two separate templates for buckets and Lambdas and using their notification configuration in a third template to complete the chain?
Final Question
I cannot use SNS and I want modularity: how can I call my Lambda functions on the S3 event? Is it even possible with my restrictions?
Thank you.
You could trigger your functions straight from S3 using events in the bucket properties. http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
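For instance, a minimal boto3 sketch of that direct wiring (names and ARNs are placeholders); the same two resources map to the bucket's NotificationConfiguration property and an AWS::Lambda::Permission resource in CloudFormation:

```python
import boto3

lambda_client = boto3.client('lambda')
s3 = boto3.client('s3')

FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:my-function'  # placeholder

# Allow S3 to invoke the function (must exist before the notification is set)
lambda_client.add_permission(
    FunctionName='my-function',  # placeholder
    StatementId='s3-invoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::my-bucket',  # placeholder bucket ARN
)

# Point the bucket's ObjectCreated events at the function
s3.put_bucket_notification_configuration(
    Bucket='my-bucket',  # placeholder
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': FUNCTION_ARN,
            'Events': ['s3:ObjectCreated:*'],
        }]
    },
)
```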
You could also use a CloudWatch Events rule to trigger your functions. To do so (a scripted equivalent follows these steps):
Go to your AWS Console and select Services > CloudWatch.
Select Rules under Events on the left.
Select Create Rule.
Leave Event Pattern selected.
Select Simple Storage Service (S3) from Service Name drop down.
Select Object Level Operations from Event Type drop down.
Select Specific operation(s).
Select PutObject from drop down.
Select Specific bucket(s) by name.
Enter bucket names.
Select + Add target on the right.
Select Lambda function to trigger.
Select Configure details at the bottom of the page.
Enter a rule name.
Finish by selecting Create rule.
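A scripted equivalent of those console steps might look like this in boto3 (the rule name, bucket, and function ARN are placeholders; note that object-level events like PutObject are only delivered if a CloudTrail trail is logging S3 data events):

```python
import json
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:my-function'  # placeholder

# Match PutObject calls on a specific bucket (delivered via CloudTrail)
pattern = {
    'source': ['aws.s3'],
    'detail-type': ['AWS API Call via CloudTrail'],
    'detail': {
        'eventSource': ['s3.amazonaws.com'],
        'eventName': ['PutObject'],
        'requestParameters': {'bucketName': ['my-bucket']},  # placeholder
    },
}

rule_arn = events.put_rule(
    Name='s3-putobject-rule',  # placeholder rule name
    EventPattern=json.dumps(pattern),
)['RuleArn']

events.put_targets(
    Rule='s3-putobject-rule',
    Targets=[{'Id': 'target-lambda', 'Arn': FUNCTION_ARN}],
)

# Allow CloudWatch Events to invoke the function
lambda_client.add_permission(
    FunctionName='my-function',  # placeholder
    StatementId='events-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn,
)
```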