Read AWS S3 Bucket Name from CloudWatch Event

I am working on writing a Lambda function that triggers when a new S3 bucket is created. I have a CloudWatch rule that triggers the Lambda function, and I see the option to pass the whole event to the Lambda function as input. When I do this, how do I get my Lambda function to read the bucket's name from the event and assign the name as the value of a string variable?
Here is what my code looks like:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = event['s3']['bucket']['name']

CloudTrail events for S3 bucket-level operations have a different format than the one posted by @Woodrow. The name of the bucket is actually inside a JSON object called requestParameters, and the whole event is encapsulated within a Records array. See the CloudTrail Log Event Reference.
Truncated version of a CloudTrail event for bucket creation:
"eventSource": "s3.amazonaws.com",
"eventName": "CreateBucket",
"userAgent": "signin.amazonaws.com",
"requestParameters": {
    "CreateBucketConfiguration": {
        "LocationConstraint": "aws-region",
        "xmlns": "http://s3.amazonaws.com/doc/2006-03-01/"
    },
    "bucketName": "my-awsome-bucket"
}
Therefore, your code could look something like this:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        if record['eventName'] == "CreateBucket":
            bucket = record['requestParameters']['bucketName']
            print(bucket)
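Note that a CloudWatch Events (EventBridge) rule matching the CreateBucket API call via CloudTrail delivers the record under the event's detail key rather than inside a Records array. A minimal sketch under that assumption, in case your Lambda is wired to the rule directly:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # CloudWatch Events / EventBridge puts the CloudTrail record under 'detail'
    detail = event.get('detail', {})
    if detail.get('eventName') == 'CreateBucket':
        bucket = detail['requestParameters']['bucketName']
        print(bucket)
        return bucket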

Related

AWS Lambda Publish Messages to SNS in cross region

I have a Lambda function in us-east-1 that needs to publish messages to SNS topics in both the us-east-1 and eu-central-1 regions. Is this possible?
Here is my code snippet. Can someone help with how I can achieve this?
I am surprised to see that the boto3 client documentation says "You can publish messages only to topics and endpoints in the same Amazon Web Services Region"
Link Here:
from __future__ import print_function
import json
import urllib
import boto3

print('Loading message function...')

def send_to_sns(message, context):
    # This function receives JSON input with three fields: the ARN of an SNS topic,
    # a string with the subject of the message, and a string with the body of the message.
    # The message is then sent to the SNS topic.
    #
    # Example:
    #   {
    #       "topic": "arn:aws:sns:REGION:123456789012:MySNSTopic",
    #       "subject": "This is the subject of the message.",
    #       "message": "This is the body of the message."
    #   }
    sns = boto3.client('sns')
    sns.publish(
        TopicArn=message['topic'],
        Subject=message['subject'],
        Message=message['body']
    )
    return ('Sent a message to an Amazon SNS topic.')
My code throws the following error when the SNS topic is in a different region:
{
    "response": {
        "stackTrace": [
            [
                "/var/task/lambda_function.py",
                28,
                "send_to_sns",
                "Message=message['body']"
            ],
            [
                "/var/runtime/botocore/client.py",
                357,
                "_api_call",
                "return self._make_api_call(operation_name, kwargs)"
            ],
            [
                "/var/runtime/botocore/client.py",
                676,
                "_make_api_call",
                "raise error_class(parsed_response, operation_name)"
            ]
        ],
        "errorType": "InvalidParameterException",
        "errorMessage": "An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: TopicArn"
    }
}
You are facing this issue because of a region conflict. With the code below you can check your current region:
my_session = boto3.session.Session()
my_region = my_session.region_name
To solve your problem, create the SNS client for the region the topic lives in. Assuming you are connected to us-east-1 and your SNS topic is in the us-east-2 region:
sns = boto3.client('sns', region_name='us-east-2')
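Putting this together for the original question (one topic in us-east-1 and one in eu-central-1), a minimal sketch; the topic ARNs below are placeholders, not real resources:

import boto3

# One client per region; each publish() must target a topic in that client's region
sns_us = boto3.client('sns', region_name='us-east-1')
sns_eu = boto3.client('sns', region_name='eu-central-1')

def lambda_handler(event, context):
    sns_us.publish(
        TopicArn='arn:aws:sns:us-east-1:123456789012:MyUsTopic',    # placeholder ARN
        Subject='Hello from Lambda',
        Message='Message for the us-east-1 topic'
    )
    sns_eu.publish(
        TopicArn='arn:aws:sns:eu-central-1:123456789012:MyEuTopic',  # placeholder ARN
        Subject='Hello from Lambda',
        Message='Message for the eu-central-1 topic'
    )
    return 'Published to both regions'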

Can I export data from s3 to dynamoDB using lambda?

I want to build a Lambda process that automatically sends data to DynamoDB when objects arrive in S3. But since DynamoDB is not available as a destination, as in the picture below, what should I do? (The permission is admin.)
The best way would be to set up notifications on your S3 bucket to trigger for new objects. The notifications would launch your Lambda function, which would then update DynamoDB.
If you already have objects in your bucket, you could use S3 batch operations to process all of them with your lambda function.
You should understand how to work with a Lambda event source (event trigger). Here the event source is S3: once an object is stored, S3 triggers an event to the Lambda function. To get this to work, you have to add permission on the Lambda for the S3 event. Check it out:
Using Lambda Function with Amazon S3
Now, every object put into S3 will trigger an event to Lambda, effectively telling the Lambda about the newly arrived S3 object. You can inspect the event object from the Lambda code with this sample code:
exports.handler = async (event) => {
    event.Records.forEach((record, i) => {
        if (record.eventName == 'ObjectCreated:Put')
            console.log(record);
    });
    return;
};
Test a file upload to your S3 bucket and go to CloudWatch to check your Lambda log.
Next, if you want to store the file content in DynamoDB, add policies to the Lambda role and write a few more lines in the Lambda function. Here is a sample policy that grants the s3:GetObject and dynamodb:PutItem permissions to the Lambda role:
{
    "Sid": "Stmt1583413548180",
    "Action": [
        "s3:GetObject"
    ],
    "Effect": "Allow",
    "Resource": "your_s3Bucket_ARN"
},
{
    "Sid": "Stmt1583413573162",
    "Action": [
        "dynamodb:PutItem"
    ],
    "Effect": "Allow",
    "Resource": "your_dynamodbTable_ARN"
}
And here is a sample of the Lambda code:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const dynamoDB = new AWS.DynamoDB.DocumentClient();
const TextDecoder = require("util").TextDecoder;

exports.handler = async (event) => {
    // Collect one promise per newly created object
    const records = event.Records
        .filter((record) => record.eventName == 'ObjectCreated:Put')
        .map((record) => fileContent(record.s3));
    if (records.length == 0)
        return "All events completed";
    return Promise.all(records).then(() => {
        console.log("All events completed");
        return "All events completed";
    }).catch((e) => {
        console.log("The tasks error: ", e);
        throw "The tasks error";
    });
};

/* Get the file content and put a new DynamoDB item */
function fileContent(obj) {
    let params = {
        Bucket: obj.bucket.name,
        Key: obj.object.key
    };
    return s3.getObject(params).promise().then((content) => {
        console.log("GetObject succeeded");
        content = new TextDecoder("utf-8").decode(content.Body);
        let Item = {
            Key: obj.object.key,
            dataContent: content
        };
        return dynamoDB.put({
            TableName: 'table_name',
            Item: Item
        }).promise();
    });
}
To summarize the steps:
Add permission for the S3 event on your Lambda function.
Add an IAM policy to the Lambda role for the s3:GetObject and dynamodb:PutItem actions.
Update your Lambda function code to export the S3 file to a DynamoDB item (a rough Python equivalent is sketched below).
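For reference, a rough Python (boto3) equivalent of the Node.js handler above; a minimal sketch that assumes the same placeholder table name table_name:

import boto3

s3 = boto3.client('s3')
table = boto3.resource('dynamodb').Table('table_name')  # placeholder table name

def lambda_handler(event, context):
    for record in event['Records']:
        if record['eventName'] != 'ObjectCreated:Put':
            continue
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        # Read the object body and store it as a DynamoDB item
        body = s3.get_object(Bucket=bucket, Key=key)['Body'].read().decode('utf-8')
        table.put_item(Item={'Key': key, 'dataContent': body})
    return 'All events completed'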

Read and Copy S3 inventory data from SNS topic trigger with AWS lambda function

I am a data analyst and new to AWS Lambda functions. I have an S3 bucket where I store the inventory data from our data lake, which is generated using the Inventory feature under the S3 Management tab.
So let's say the inventory data (reports) looks like this:
s3://my-bucket/allobjects/data/report-1.csv.gz
s3://my-bucket/allobjects/data/report-2.csv.gz
s3://my-bucket/allobjects/data/report-3.csv.gz
Regardless of the file contents, I have an event set up for s3://my-bucket/allobjects/data/ which notifies an SNS topic on any event like GET or PUT. (I can't change this workflow due to strict governance.)
Now, I am trying to create a Lambda function with this SNS topic as a trigger that simply moves the inventory-report files generated by the S3 Inventory feature under
s3://my-bucket/allobjects/data/
and repartitions them as follows:
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-1.csv.gz
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-2.csv.gz
s3://my-object/allobjects/partitiondata/year=2019/month=01/day=29/report-3.csv.gz
How can I achieve this using a Lambda function (Node.js or Python is fine) reading from the SNS topic? Any help is appreciated.
I tried something like this based on some sample code I found online, but it didn't help.
console.log('Loading function');
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';

exports.handler = function(event, context) {
    console.log("\n\nLoading handler\n\n");
    var sns = new AWS.SNS();
    sns.publish({
        Message: 'File(s) uploaded successfully',
        TopicArn: 'arn:aws:sns:_my_ARN'
    }, function(err, data) {
        if (err) {
            console.log(err.stack);
            return;
        }
        console.log('push sent');
        console.log(data);
        context.done(null, 'Function Finished!');
    });
};
The preferred method would be for the Amazon S3 event to trigger the AWS Lambda function directly. But since you cannot alter this part of the workflow, the flow would be:
The Amazon S3 Event will send a message to an Amazon SNS topic.
The AWS Lambda function is subscribed to the SNS topic, so it is triggered and receives the message from S3.
The Lambda function extracts the Bucket and Key, then calls S3 copy_object() to copy the object to another location. (There is no move command. You will need to copy the object to a new bucket/key and then delete the original; see the sketch after the sample code below.)
The content of the event field is something like:
{
    "Records": [
        {
            "EventSource": "aws:sns",
            "EventVersion": "1.0",
            "EventSubscriptionArn": "...",
            "Sns": {
                "Type": "Notification",
                "MessageId": "1c3189f0-ffd3-53fb-b60b-dd3beeecf151",
                "TopicArn": "...",
                "Subject": "Amazon S3 Notification",
                "Message": "{\"Records\":[{\"eventVersion\":\"2.1\",\"eventSource\":\"aws:s3\",\"awsRegion\":\"ap-southeast-2\",\"eventTime\":\"2019-01-30T02:42:07.129Z\",\"eventName\":\"ObjectCreated:Put\",\"userIdentity\":{\"principalId\":\"AWS:AIDAIZCFQCOMZZZDASS6Q\"},\"requestParameters\":{\"sourceIPAddress\":\"54.1.1.1\"},\"responseElements\":{\"x-amz-request-id\":\"...",\"x-amz-id-2\":\"..."},\"s3\":{\"s3SchemaVersion\":\"1.0\",\"configurationId\":\"...\",\"bucket\":{\"name\":\"stack-lake\",\"ownerIdentity\":{\"principalId\":\"...\"},\"arn\":\"arn:aws:s3:::stack-lake\"},\"object\":{\"key\":\"index.html\",\"size\":4378,\"eTag\":\"...\",\"sequencer\":\"...\"}}}]}",
                "Timestamp": "2019-01-30T02:42:07.212Z",
                "SignatureVersion": "1",
                "Signature": "...",
                "SigningCertUrl": "...",
                "UnsubscribeUrl": "...",
                "MessageAttributes": {}
            }
        }
    ]
}
Thus, the name of the uploaded Object needs to be extracted from the Message.
You could use code like this:
import json

def lambda_handler(event, context):
    for record1 in event['Records']:
        message = json.loads(record1['Sns']['Message'])
        for record2 in message['Records']:
            bucket = record2['s3']['bucket']['name']
            key = record2['s3']['object']['key']
            # Do something here with bucket and key
    return {
        'statusCode': 200,
        'body': json.dumps(event)
    }
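For the "do something" part, a minimal sketch of the copy/delete described in the steps above. The destination prefix allobjects/partitiondata/year=/month=/day= is derived from the record's eventTime, and the delete_object call can be dropped if you only want a copy rather than a move:

import json
import urllib.parse
import boto3
from datetime import datetime

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record1 in event['Records']:
        message = json.loads(record1['Sns']['Message'])
        for record2 in message['Records']:
            bucket = record2['s3']['bucket']['name']
            key = urllib.parse.unquote_plus(record2['s3']['object']['key'])
            # Build the partitioned destination key from the event time
            event_time = datetime.strptime(record2['eventTime'], '%Y-%m-%dT%H:%M:%S.%fZ')
            filename = key.split('/')[-1]
            new_key = 'allobjects/partitiondata/year={:04d}/month={:02d}/day={:02d}/{}'.format(
                event_time.year, event_time.month, event_time.day, filename)
            # S3 has no "move": copy to the new key, then delete the original
            s3.copy_object(Bucket=bucket, Key=new_key,
                           CopySource={'Bucket': bucket, 'Key': key})
            s3.delete_object(Bucket=bucket, Key=key)
    return {'statusCode': 200}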

Read only AWS CLI access to strictly CloudWatch billing metrics

I need to provide somebody with read-only AWS CLI access to our CloudWatch billing metrics ONLY. I'm not sure how to do this since CloudWatch doesn't have any specific resources that one can control access to. This means there are no ARNs to specify in an IAM policy, and as a result, any resource designation in the policy is "*". More info regarding CloudWatch ARN limitations can be found here. I looked into using namespaces, but I believe the "aws-portal" namespace is for the console. Any direction or ideas are greatly appreciated.
With the current CloudWatch ARN limitations the IAM policy would look something like this.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudwatch:DescribeMetricData",
                "cloudwatch:GetMetricData"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
As you say, you will not be able to achieve this within CloudWatch. According to the docs:
CloudWatch doesn't have any specific resources for you to control access to... For example, you can't give a user access to CloudWatch data for only a specific set of EC2 instances or a specific load balancer. Permissions granted using IAM cover all the cloud resources you use or monitor with CloudWatch.
An alternative option might be to:
Use scheduled events on a Lambda function to periodically export the relevant billing metrics from CloudWatch to an S3 bucket. For example, using the Python SDK, the Lambda might look something like this:
import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    try:
        bucket_name = "so-billing-metrics"
        filename = '-'.join(['billing', datetime.now().strftime("%Y-%m-%d-%H")])
        region_name = "us-east-1"
        dimensions = {'Name': 'Currency', 'Value': 'USD'}
        metric_name = 'EstimatedCharges'
        namespace = 'AWS/Billing'
        start_time = datetime.now() - timedelta(hours=1)
        end_time = datetime.now()
        # Create CloudWatch client
        cloudwatch = boto3.client('cloudwatch', region_name=region_name)
        # Get billing metrics for the last hour
        metrics = cloudwatch.get_metric_statistics(
            Dimensions=[dimensions],
            MetricName=metric_name,
            Namespace=namespace,
            StartTime=start_time,
            EndTime=end_time,
            Period=60,
            Statistics=['Sum'])
        # Save data to temp file
        with open('/tmp/billingmetrics', 'w') as f:
            # Write header and data
            f.write("Timestamp, Cost, Unit\n")
            for entry in metrics['Datapoints']:
                f.write(",".join([entry['Timestamp'].strftime('%Y-%m-%d %H:%M:%S'),
                                  str(entry['Sum']), entry['Unit']]) + "\n")
        # Upload temp file to S3
        s3 = boto3.client('s3')
        with open('/tmp/billingmetrics', 'rb') as data:
            s3.upload_fileobj(data, bucket_name, filename)
    except Exception as e:
        print(str(e))
        return 0
    return 1
Note: You will need to ensure that the Lambda function has the relevant permissions to write to S3 and read from CloudWatch.
Restrict the IAM User/Role to read-only access to the S3 bucket.
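For that last step, a minimal sketch of a read-only policy scoped to the bucket used in the Lambda example above (so-billing-metrics is the example bucket name; substitute your own):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::so-billing-metrics"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::so-billing-metrics/*"
        }
    ]
}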

Get emails whenever a file is uploaded on s3 bucket using serverless

I want to get emails whenever a file is uploaded to an S3 bucket, as described in the title above. I am using Serverless. The issue is that the event I have created on S3 only gives me a notification in the S3 AWS console, and I don't know how to configure a CloudWatch event on S3 to trigger Lambda. So if someone knows how to trigger events on S3 using CloudWatch, I am all ears.
Here is my code:
import json
import boto3
import botocore
import logging
import sys
import os
import traceback
from botocore.exceptions import ClientError
from pprint import pprint
from time import strftime, gmtime

email_from = '*****#******.com'
email_to = '******#******.com'
#email_cc = '********#gmail.com'
email_subject = 'new event on s3 '
email_body = 'a new file is uploaded'

# setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def sthree(event, context):
    """Send email whenever a file is uploaded to S3"""
    body = {}
    status_code = 200
    try:
        s3 = boto3.client('s3')
        ses = boto3.client('ses')
        response = ses.send_email(
            Source=email_from,
            Destination={'ToAddresses': [email_to]},
            Message={'Subject': {'Data': email_subject},
                     'Body': {'Text': {'Data': email_body}}}
        )
        response = {
            "statusCode": 200,
            "body": json.dumps(body)
        }
        return response
    except ClientError as e:
        # Log SES failures and surface an error response
        logger.error(str(e))
        return {"statusCode": 500, "body": json.dumps({"error": str(e)})}
And here is my serverless.yml file:
service: aws-python # NOTE: update this with your service name

plugins:
  - serverless-external-s3-event

provider:
  name: aws
  runtime: python2.7
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:*
        - "ses:SendEmail"
        - "ses:SendRawEmail"
        - "s3:PutBucketNotification"
      Resource: "*"

functions:
  sthree:
    handler: handler.sthree
    description: send mail whenever a file is uploaded on S3
    events:
      - s3:
          bucket: cartegie-nirmine
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .jpg
      - cloudwatchEvent:
          description: 'CloudWatch Event triggered '
          event:
            source:
              - "aws.S3"
            detail-type:
              - "S3 event Notification"
          enabled: true
If your goal is just to receive email notifications for operations on an S3 bucket, then you don't need Lambda functions for that. For the use case mentioned in the question, you can achieve this using an SNS topic and S3 events. I will mention the steps to follow from the console (though the same can be achieved via the SDK or CLI).
1) Create a Topic using SNS console.
2) Subscribe to the topic. Use email as the communications protocol and provide your email-id.
3) You will get email requesting you to confirm your subscription to the topic. Confirm the subscription.
4) IMPORTANT: Replace the access policy of the topic with the below policy:
{
    "Version": "2008-10-17",
    "Id": "__default_policy_ID",
    "Statement": [
        {
            "Sid": "__default_statement_ID",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "SNS:Publish",
            "Resource": "sns-topic-arn",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:*:*:s3-bucket-name"
                }
            }
        }
    ]
}
Basically you are giving permission for your S3 bucket to publish to the SNS topic.
Replace sns-topic-arn with the ARN of the topic you created above.
Replace s3-bucket-name with the name of the bucket for which you want to receive notifications.
5) Go to the S3 console. Click on your S3 bucket and open the Properties tab.
6) Under Advanced settings, click on the Events card.
7) Click Add Notification and enter the values: select the required S3 events to monitor and the SNS topic you created.
8) Click Save. Now you should start receiving notifications to your email.
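If you would rather wire up steps 5-8 from code than from the console, a minimal boto3 sketch, assuming the topic policy from step 4 is already in place; the bucket name and topic ARN are placeholders:

import boto3

s3 = boto3.client('s3')

# Point the bucket's notifications at the SNS topic (placeholder bucket name and topic ARN)
s3.put_bucket_notification_configuration(
    Bucket='s3-bucket-name',
    NotificationConfiguration={
        'TopicConfigurations': [
            {
                'TopicArn': 'arn:aws:sns:us-east-1:123456789012:sns-topic-name',
                'Events': ['s3:ObjectCreated:*']
            }
        ]
    }
)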