I want to get an email whenever a file is uploaded to an S3 bucket, as described in the title above. I am using Serverless. The issue is that the S3 event I created only shows a notification in the S3 AWS console, and I don't know how to configure a CloudWatch event on S3 to trigger my Lambda. If someone knows how to trigger Lambda from S3 events via CloudWatch, I'm all ears.
Here is my code:
import json
import logging

import boto3
from botocore.exceptions import ClientError

email_from = '*****#******.com'
email_to = '******#******.com'
#email_cc = '********#gmail.com'
email_subject = 'new event on s3'
email_body = 'a new file is uploaded'

# set up simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def sthree(event, context):
    """Send an email whenever a file is uploaded to S3."""
    body = {}
    try:
        ses = boto3.client('ses')
        ses.send_email(
            Source=email_from,
            Destination={'ToAddresses': [email_to]},
            Message={'Subject': {'Data': email_subject},
                     'Body': {'Text': {'Data': email_body}}}
        )
    except ClientError as e:
        logger.error(e)
        return {"statusCode": 500, "body": json.dumps({"error": str(e)})}
    return {"statusCode": 200, "body": json.dumps(body)}
And here is my serverless.yml file:
service: aws-python # NOTE: update this with your service name

plugins:
  - serverless-external-s3-event

provider:
  name: aws
  runtime: python2.7
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
        - "ses:SendEmail"
        - "ses:SendRawEmail"
        - "s3:PutBucketNotification"
      Resource: "*"

functions:
  sthree:
    handler: handler.sthree
    description: send mail whenever a file is uploaded on S3
    events:
      - s3:
          bucket: cartegie-nirmine
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .jpg
      - cloudwatchEvent:
          description: 'CloudWatch Event triggered'
          event:
            source:
              - "aws.S3"
            detail-type:
              - "S3 event Notification"
          enabled: true
If your goal is just to receive email notifications for operations on an S3 bucket, then you don't need a Lambda function at all. For the use case mentioned in the question, you can achieve this with an SNS topic and S3 events. I will list the steps to follow from the console (though the same can be achieved via the SDK or CLI).
1) Create a Topic using SNS console.
2) Subscribe to the topic. Use email as the communications protocol and provide your email-id.
3) You will get email requesting you to confirm your subscription to the topic. Confirm the subscription.
4) IMPORTANT: Replace the access policy of the topic with the below policy:
{
    "Version": "2008-10-17",
    "Id": "__default_policy_ID",
    "Statement": [
        {
            "Sid": "__default_statement_ID",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "SNS:Publish",
            "Resource": "sns-topic-arn",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:*:*:s3-bucket-name"
                }
            }
        }
    ]
}
Basically you are giving permission for your s3-bucket to publish to the SNS topic.
Replace sns-topic-arn with your ARN of the topic you created above.
Replace s3-bucket-name with the name of the bucket for which you want to receive notifications.
5) Go to S3 Console. Click on your S3 bucket and open the Properties tab.
6) Under Advanced settings, click on the Events card.
7) Click Add Notification and enter the values: select the required S3 events to monitor and the SNS topic you created above.
8) Click Save. Now you should start receiving notifications to your email.
My goal is to upload objects to S3. I have been trying with both the smart_open and boto3 libraries, with no success.
I don't know much about configuring IAM policies or access points in S3, and I am finding it very hard to debug and understand how to pass the configuration.
IAM
this is my policy - it should be open and allow PUT. I don't have any access point set.
{
    "Version": "2012-10-17",
    "Id": "Policy1449009487903",
    "Statement": [
        {
            "Sid": "Stmt1449009478455",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::MY_BUCKET/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "https://s3-us-west-2.amazonaws.com/MY_BUCKET/*"
                    ]
                }
            }
        }
    ]
}
boto3
With boto3, I try to open a session, and then upload a file from local disk:
import boto3
session = boto3.Session(
aws_access_key_id = ACCESS_KEY,
aws_secret_access_key = SECRET_KEY,
)
s3 = boto3.resource('s3')
s3.Bucket(S3_BUCKET).upload_file(path_to_my_file_on_disk ,'test.json')
But I got a (very long) error, which ends with:
EndpointConnectionError: Could not connect to the endpoint URL: "https://MY_BUCKET.s3.us-oregon.amazonaws.com/test.json"
Note that this URL is different from the URI of an object shared on S3, which should be:
s3://MY_BUCKET/test.json
Looking at :
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html
I just tried:
import boto3
# Print out bucket names
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
And it yields the same connection error:
EndpointConnectionError: Could not connect to the endpoint URL: "https://s3.us-oregon.amazonaws.com/"
Smart_open
I tried with smart_open like this:
with smart_open.open('s3://{}:{}@{}/{}'.format(ACCESS_KEY, SECRET_KEY, S3_BUCKET, filename), 'wb') as o:
    o.write(json.dumps(template).encode('utf8'))
But here too it fails to connect, and it does not say why.
Reading on Stackoverflow, some threads reported that uploading with Smart_open version >= 5.0.0 could be more complicated - see:
https://github.com/RaRe-Technologies/smart_open/blob/develop/howto.md
So I tried:
session = boto3.Session(
aws_access_key_id= ACCESS_KEY,
aws_secret_access_key= SECRET_KEY)
with smart_open.open(
's3://' + S3_BUCKET + '/robots.txt', mode = 'w', transport_params={'client': session.client('s3')}) as o:
o.write("nothing to see here\n")
o.close()
No success
with smart_open.open(
        's3://' + S3_BUCKET + '/robots.txt',
        'w',
        transport_params={
            'client_kwargs': {
                'S3.Client.create_multipart_upload': {
                    'ServerSideEncryption': 'aws:kms'
                }
            },
            'client': boto3.client('s3')
        }) as o:
    o.write("nothing to see here\n")
No success either.
Can you help me debug and point me in the right direction?
I found a solution for boto3: it turned out I had to specify the correct region when creating the client:
s3 = boto3.client('s3',
                  aws_access_key_id=ACCESS_KEY,
                  aws_secret_access_key=SECRET_KEY,
                  region_name=MY_REGION)
s3.upload_file(path_to_filename, S3_BUCKET, 'test.json')

This worked.
However, with smart_open I could not find a solution:
Ref.
How to use Python smart_open module to write to S3 with server-side encryption
I tried to specify the boto3 session as above, and then:
session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=MY_REGION)

client_kwargs = {'S3.Client.create_multipart_upload': {'ServerSideEncryption': 'AES256'}}

with smart_open.open('s3://{}:{}@{}/{}'.format(ACCESS_KEY, SECRET_KEY, S3_BUCKET, filename),
                     'wb', transport_params={'client_kwargs': client_kwargs}) as o:
    o.write(json.dumps(myfile).encode('utf8'))
Can someone show the correct way for smart_open as well? I am using version 6.3.0.
I post this partial answer in case someone finds it useful; debugging was cumbersome for me, and I am not an expert in AWS IAM either.
I have a Lambda function (east-us-1) that needs to publish messages to SNS topics in both (east-us-1) & (eu-central-1) regions. Is this possible?
Here is my code snippet. Can someone help on how can I achieve this?
I am surprised to see that the boto3 client documentation says "You can publish messages only to topics and endpoints in the same Amazon Web Services Region"
Link Here:
from __future__ import print_function
import json
import urllib
import boto3

print('Loading message function...')

def send_to_sns(message, context):
    # This function receives JSON input with three fields: the ARN of an SNS topic,
    # a string with the subject of the message, and a string with the body of the message.
    # The message is then sent to the SNS topic.
    #
    # Example:
    #   {
    #       "topic": "arn:aws:sns:REGION:123456789012:MySNSTopic",
    #       "subject": "This is the subject of the message.",
    #       "message": "This is the body of the message."
    #   }
    sns = boto3.client('sns')
    sns.publish(
        TopicArn=message['topic'],
        Subject=message['subject'],
        Message=message['body']
    )
    return 'Sent a message to an Amazon SNS topic.'
My code throws the following error when the SNS topic is in a different region:
{
    "response": {
        "stackTrace": [
            [
                "/var/task/lambda_function.py",
                28,
                "send_to_sns",
                "Message=message['body']"
            ],
            [
                "/var/runtime/botocore/client.py",
                357,
                "_api_call",
                "return self._make_api_call(operation_name, kwargs)"
            ],
            [
                "/var/runtime/botocore/client.py",
                676,
                "_make_api_call",
                "raise error_class(parsed_response, operation_name)"
            ]
        ],
        "errorType": "InvalidParameterException",
        "errorMessage": "An error occurred (InvalidParameter) when calling the Publish operation: Invalid parameter: TopicArn"
    }
}
You are facing this issue because of a region conflict. With the code below you can check your current region:

my_session = boto3.session.Session()
my_region = my_session.region_name

To solve your problem, pass the topic's region explicitly when creating the client. Assuming you are connected to us-east-1 and your SNS topic is in us-east-2:

sns = boto3.client('sns', region_name='us-east-2')
I have tried everything but couldn't get any clue about what's wrong with my IAM policy for Cognito-sub / identity-ID based access.
I am using Lambda to get authentication details and then get_object from a per-Cognito-user folder using boto3.
Here's my Lambda code:
import json
import base64
import hashlib
import hmac

import boto3

print('Loading function')

cognito = boto3.client('cognito-idp')
cognito_identity = boto3.client('cognito-identity')

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))

    username = '{substitute_with_my_own_data}'        # authenticated user
    app_client_id = '{substitute_with_my_own_data}'   # cognito client id
    key = '{substitute_with_my_own_data}'             # cognito app client secret key
    cognito_provider = 'cognito-idp.{region}.amazonaws.com/{cognito-pool-id}'

    message = bytes(username + app_client_id, 'utf-8')
    key = bytes(key, 'utf-8')
    secret_hash = base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode()
    print("SECRET HASH:", secret_hash)

    auth_data = {'USERNAME': username, 'PASSWORD': '{substitute_user_password}', 'SECRET_HASH': secret_hash}
    auth_response = cognito.initiate_auth(
        AuthFlow='USER_PASSWORD_AUTH',
        AuthParameters=auth_data,
        ClientId=app_client_id
    )
    print(auth_response)

    # From the response that contains the assumed role, get the temporary
    # credentials that can be used to make subsequent API calls
    auth_result = auth_response['AuthenticationResult']
    id_token = auth_result['IdToken']

    id_response = cognito_identity.get_id(
        IdentityPoolId='{sub_cognito_identity_pool_id}',
        Logins={cognito_provider: id_token}
    )
    # up to this stage, the correct user cognito identity id is returned
    print('id_response = ' + id_response['IdentityId'])

    credentials_response = cognito_identity.get_credentials_for_identity(
        IdentityId=id_response['IdentityId'],
        Logins={cognito_provider: id_token}
    )
    secretKey = credentials_response['Credentials']['SecretKey']
    accessKey = credentials_response['Credentials']['AccessKeyId']
    sessionToken = credentials_response['Credentials']['SessionToken']
    print('secretKey = ' + secretKey)
    print('accessKey = ' + accessKey)
    print('sessionToken = ' + sessionToken)

    # Use the temporary credentials to make a connection to Amazon S3
    s3 = boto3.client(
        's3',
        aws_access_key_id=accessKey,
        aws_secret_access_key=secretKey,
        aws_session_token=sessionToken,
    )

    # Get the object from the bucket and show its content type
    bucket = '{bucket-name}'
    key = 'abc/{user_cognito_identity_id}/test1.txt'
    prefix = 'abc/{user_cognito_identity_id}'
    try:
        response = s3.get_object(
            Bucket=bucket,
            Key=key
        )
        # response = s3.list_objects(
        #     Bucket=bucket,
        #     Prefix=prefix,
        #     Delimiter='/'
        # )
        print(response)
        return response
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your '
              'bucket is in the same region as this function.'.format(key, bucket))
        raise e
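As an aside, the SECRET_HASH computation in the code above is self-contained and easy to get wrong, so here it is pulled out as a standalone function (the function and parameter names are mine, not from the original):

```python
import base64
import hashlib
import hmac

def cognito_secret_hash(username, app_client_id, app_client_secret):
    # SECRET_HASH = Base64( HMAC-SHA256( key=client_secret, msg=username + client_id ) )
    msg = (username + app_client_id).encode('utf-8')
    key = app_client_secret.encode('utf-8')
    digest = hmac.new(key, msg, digestmod=hashlib.sha256).digest()
    return base64.b64encode(digest).decode()
```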
What I have verified:
authentication OK
identity with correct assumed role (printed the cognito identity ID and verified it's the correct authenticated user with the ID)
when I removed the ${cognito-identity.amazonaws.com:sub} condition and granted general access to the authenticated role, I was able to get the object; however, the ${cognito-identity.amazonaws.com:sub} variable does not seem to be detected and matched correctly
So it seems that there's issue with the IAM policy
IAM policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "*/${cognito-identity.amazonaws.com:sub}/*"
                    ]
                }
            }
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/",
                "arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/*"
            ]
        }
    ]
}
I tried listing the bucket and getting/putting objects; all were denied.
I did try playing around with the policies, such as removing the ListBucket condition (which obviously allows access, since I am authenticated), or changing "s3:prefix" to "${cognito-identity.amazonaws.com:sub}/" or "cognito/${cognito-identity.amazonaws.com:sub}/", but I can't make anything work.
Same goes for put or get object.
My S3 folder layout is bucket-name/cognito/{cognito-user-identity-id}/key.
I referred to:
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_cognito-bucket.html
https://aws.amazon.com/blogs/mobile/understanding-amazon-cognito-authentication-part-3-roles-and-policies/
Any insights on where this might be wrong?
I managed to resolve this after changing the GetObject and PutObject policy Resources from
"arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/",
"arn:aws:s3:::bucket-name/cognito/${cognito-identity.amazonaws.com:sub}/*"
to
"arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/",
"arn:aws:s3:::bucket-name/*/${cognito-identity.amazonaws.com:sub}/*"
and it works. I don't quite get why the original version denied access, since my bucket has the cognito prefix right after the bucket root, but this is resolved now.
I want to build a Lambda process that automatically sends data to DynamoDB when an object arrives in S3. But DynamoDB is not offered as a Lambda destination (in the destination settings I attached a screenshot of), so what should I do? (The permission is admin.)
The best way would be to setup notifications on your s3 bucket to trigger for new object. The notifications would launch your lambda function, which would then update dynamodb.
If you already have objects in your bucket, you could use S3 batch operations to process all of them with your lambda function.
You should understand how to work with Lambda event sources (event triggers). Here the event source is S3: once an object is stored, S3 triggers an event to the Lambda function. To make this work, you have to grant S3 permission to invoke your Lambda. Check this out:
Using Lambda Function with Amazon S3
Now, every object put into S3 will trigger an event to Lambda, telling it about the newly arrived S3 object. You can inspect the event object from the Lambda code, as in this sample:
exports.handler = async (event) => {
    event.Records.forEach((record) => {
        if (record.eventName == 'ObjectCreated:Put')
            console.log(record);
    });
    return;
};
Upload a file to your S3 bucket and check your Lambda log in CloudWatch.
Next, if you want to store the file content in DynamoDB, add the s3:GetObject and dynamodb:PutItem permissions to the Lambda role and extend the function. A sample policy fragment:
{
    "Sid": "Stmt1583413548180",
    "Action": [
        "s3:GetObject"
    ],
    "Effect": "Allow",
    "Resource": "your_s3Bucket_ARN"
},
{
    "Sid": "Stmt1583413573162",
    "Action": [
        "dynamodb:PutItem"
    ],
    "Effect": "Allow",
    "Resource": "your_dynamodbTable_ARN"
}
And this is the sample of lambda code :
let s3 = new AWS.S3();
let dynamoDB = new AWS.DynamoDB.DocumentClient();
let TextDecoder = require("util").TextDecoder;
exports.handler = async (event) => {
let records = [];
event.Records.forEach((record, i)=>{
if (record.eventName == 'ObjectCreated:Put')
records.push(fileContent());
if (i == event.Records.length-1)
if (records.length > 0)
return Promise.all(records).then(()=>{
console.log("All events completed")
return "All events completed";
}).catch((e)=>{
console.log("The tasks error: ",e)
throw "The tasks error";
})
else
return "All events completed";
})
}
/* Get the file content and put new dynamodb item */
function fileContent(obj) {
let params = {
Bucket: obj.bucket.name,
Key: obj.object.key
}
return s3.getObject(params).promise().then((content)=>{
console.log("GetObject succeeded");
content = new TextDecoder("utf-8").decode(content.Body);
let Item = {
Key: obj.object.key,
dataContent: content
}
return dynamoDB.put({
TableName: 'table_name',
Item:Item
}).promise();
})
}
Well, let me summarize the steps:
Add permission for s3 event on your lambda function
Add the IAM policy to lambda role for actions s3:GetObject and dynamodb:PutItem
Update your lambda function code to export the s3 file to dynamodb item
I need to provide somebody with read-only AWS CLI access to our CloudWatch billing metrics ONLY. I'm not sure how to do this, since CloudWatch doesn't have any specific resources that one can control access to. This means there are no ARNs to specify in an IAM policy, and as a result, any resource designation in the policy is "*". More info regarding CloudWatch ARN limitations can be found here. I looked into using namespaces, but I believe the "aws-portal" namespace is for the console. Any direction or ideas are greatly appreciated.
With the current CloudWatch ARN limitations, the IAM policy would look something like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:ListMetrics"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
As you say, you will not be able to achieve this within CloudWatch. According to the docs:
CloudWatch doesn't have any specific resources for you to control access to... For example, you can't give a user access to CloudWatch data for only a specific set of EC2 instances or a specific load balancer. Permissions granted using IAM cover all the cloud resources you use or monitor with CloudWatch.
An alternative option might be to:
Use Scheduled events on a lambda function to periodically export relevant billing metrics from Cloudwatch to an S3 bucket. For example, using the Python SDK, the lambda might look something like this:
import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    try:
        bucket_name = "so-billing-metrics"
        filename = '-'.join(['billing', datetime.now().strftime("%Y-%m-%d-%H")])
        region_name = "us-east-1"
        dimensions = {'Name': 'Currency', 'Value': 'USD'}
        metric_name = 'EstimatedCharges'
        namespace = 'AWS/Billing'
        start_time = datetime.now() - timedelta(hours=1)
        end_time = datetime.now()

        # Create CloudWatch client
        cloudwatch = boto3.client('cloudwatch', region_name=region_name)

        # Get billing metrics for the last hour
        metrics = cloudwatch.get_metric_statistics(
            Dimensions=[dimensions],
            MetricName=metric_name,
            Namespace=namespace,
            StartTime=start_time,
            EndTime=end_time,
            Period=60,
            Statistics=['Sum'])

        # Save data to temp file
        with open('/tmp/billingmetrics', 'w') as f:
            # Write header and data rows
            f.write("Timestamp, Cost, Unit\n")
            for entry in metrics['Datapoints']:
                f.write(",".join([entry['Timestamp'].strftime('%Y-%m-%d %H:%M:%S'),
                                  str(entry['Sum']), entry['Unit']]) + "\n")

        # Upload temp file to S3
        s3 = boto3.client('s3')
        with open('/tmp/billingmetrics', 'rb') as data:
            s3.upload_fileobj(data, bucket_name, filename)
    except Exception as e:
        print(str(e))
        return 0
    return 1
Note: You will need to ensure that the Lambda function has the relevant permissions to write to S3 and read from CloudWatch.
Restrict the IAM User/Role to read only access to the S3 bucket.
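For that last step, a minimal read-only policy scoped to the bucket might look like the sketch below (the bucket name matches the example above; adjust it to yours):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::so-billing-metrics",
                "arn:aws:s3:::so-billing-metrics/*"
            ]
        }
    ]
}
```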