Read-only AWS CLI access to CloudWatch billing metrics only

I need to provide somebody with read-only AWS CLI access to our CloudWatch billing metrics ONLY. I'm not sure how to do this, since CloudWatch doesn't have any specific resources that one can control access to. This means there are no ARNs to specify in an IAM policy, so any resource designation in the policy is "*". More info regarding CloudWatch ARN limitations can be found here. I looked into using namespaces, but I believe the "aws-portal" namespace is for the console. Any direction or ideas are greatly appreciated.
With the current CloudWatch ARN limitations the IAM policy would look something like this.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudwatch:ListMetrics",
                "cloudwatch:GetMetricData",
                "cloudwatch:GetMetricStatistics"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

As you say, you will not be able to achieve this within CloudWatch. According to the docs:
CloudWatch doesn't have any specific resources for you to control access to... For example, you can't give a user access to CloudWatch data for only a specific set of EC2 instances or a specific load balancer. Permissions granted using IAM cover all the cloud resources you use or monitor with CloudWatch.
An alternative option might be to:
Use a scheduled event on a Lambda function to periodically export the relevant billing metrics from CloudWatch to an S3 bucket. For example, using the Python SDK, the Lambda might look something like this:
import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    try:
        bucket_name = "so-billing-metrics"
        filename = '-'.join(['billing', datetime.now().strftime("%Y-%m-%d-%H")])
        region_name = "us-east-1"
        dimensions = {'Name': 'Currency', 'Value': 'USD'}
        metric_name = 'EstimatedCharges'
        namespace = 'AWS/Billing'
        start_time = datetime.now() - timedelta(hours=1)
        end_time = datetime.now()

        # Create CloudWatch client (billing metrics live in us-east-1)
        cloudwatch = boto3.client('cloudwatch', region_name=region_name)

        # Get billing metrics for the last hour
        metrics = cloudwatch.get_metric_statistics(
            Dimensions=[dimensions],
            MetricName=metric_name,
            Namespace=namespace,
            StartTime=start_time,
            EndTime=end_time,
            Period=60,
            Statistics=['Sum'])

        # Save data to temp file
        with open('/tmp/billingmetrics', 'w') as f:
            # Write header and data
            f.write("Timestamp, Cost, Unit\n")
            for entry in metrics['Datapoints']:
                f.write(",".join([entry['Timestamp'].strftime('%Y-%m-%d %H:%M:%S'),
                                  str(entry['Sum']), entry['Unit']]) + "\n")

        # Upload temp file to S3
        s3 = boto3.client('s3')
        with open('/tmp/billingmetrics', 'rb') as data:
            s3.upload_fileobj(data, bucket_name, filename)
    except Exception as e:
        print(str(e))
        return 0
    return 1
Note: You will need to ensure that the Lambda function has the relevant permissions to write to the S3 bucket and read from CloudWatch.
Restrict the IAM User/Role to read-only access to the S3 bucket.
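If it helps, here is a minimal boto3 sketch of that last step, assuming the bucket name from the Lambda above; the IAM user name "billing-reader" and the policy name are placeholders:

import json
import boto3

iam = boto3.client('iam')

# Inline policy limiting the user to read-only access on the metrics bucket
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::so-billing-metrics",
                "arn:aws:s3:::so-billing-metrics/*"
            ]
        }
    ]
}

iam.put_user_policy(
    UserName='billing-reader',                      # placeholder user
    PolicyName='so-billing-metrics-read-only',      # placeholder policy name
    PolicyDocument=json.dumps(read_only_policy)
)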

AWS Cloudtrail Event for S3 Bucket in Terraform

I had quite a hard time setting up an automation with Beanstalk and CodePipeline...
I finally got it running; the main issue was the S3 CloudWatch event to trigger the start of the CodePipeline. I had missed the CloudTrail part, which is necessary, and I couldn't find that in any documentation.
So the current Setup is:
S3 file gets uploaded -> a CloudWatch Event triggers the Codepipeline -> Codepipeline deploys to ElasticBeanstalk env.
As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:
resource "aws_cloudtrail" "example" {
# ... other configuration ...
name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
is_multi_region_trail = true
s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
event_selector {
read_write_type = "WriteOnly"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_name}/file.zip"]
}
}
}
But this only creates a new trail. The problem is that AWS allows a maximum of 5 trails per region. In the AWS console you can add multiple data events to one trail, but I couldn't manage to do this in Terraform. I tried to use the same name, but this just raises an error:
"Error creating CloudTrail: TrailAlreadyExistsException: Trail codepipeline-source-trail already exists for customer: XXXX"
I tried my best to explain my problem. Not sure if it is understandable.
In a nutshell: I want to add an S3 data event to an existing CloudTrail trail with Terraform.
Thx for help,
Daniel
As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:
You do not need multiple CloudTrail trails to invoke a CloudWatch Event. You can create service-specific rules as well.
Create a CloudWatch Events rule for an Amazon S3 source (console)
Create a CloudWatch Events rule that invokes CodePipeline as a target. Let's say you created this event rule:
{
    "source": [
        "aws.s3"
    ],
    "detail-type": [
        "AWS API Call via CloudTrail"
    ],
    "detail": {
        "eventSource": [
            "s3.amazonaws.com"
        ],
        "eventName": [
            "PutObject"
        ]
    }
}
You then add CodePipeline as a target for this rule, and eventually CodePipeline deploys to the Elastic Beanstalk environment.
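For reference, a rough boto3 sketch of creating that rule and attaching CodePipeline as a target; the rule name, pipeline ARN and role ARN below are placeholders, and the role must allow events.amazonaws.com to call codepipeline:StartPipelineExecution:

import json
import boto3

events = boto3.client('events')

event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"]
    }
}

# Create (or update) the rule with the pattern shown above
events.put_rule(
    Name='s3-source-to-codepipeline',               # placeholder rule name
    EventPattern=json.dumps(event_pattern),
    State='ENABLED'
)

# Point the rule at the pipeline
events.put_targets(
    Rule='s3-source-to-codepipeline',
    Targets=[{
        'Id': 'codepipeline',
        'Arn': 'arn:aws:codepipeline:eu-west-1:123456789012:my-pipeline',        # placeholder
        'RoleArn': 'arn:aws:iam::123456789012:role/events-invoke-codepipeline'   # placeholder
    }]
)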
Have you tried adding multiple data_resource blocks to your current trail instead of adding a new trail with the same name?
resource "aws_cloudtrail" "example" {
# ... other configuration ...
name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
is_multi_region_trail = true
s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
event_selector {
read_write_type = "WriteOnly"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_A}/file.zip"]
}
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_B}/fileB.zip"]
}
}
}
You should be able to add up to 250 data resources (across all event selectors in a trail) and up to 5 event selectors to your current trail (see the CloudTrail quota limits).
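If you ever need to do the same thing outside Terraform, here is a minimal boto3 sketch that attaches several S3 data resources to one existing trail; the trail name and object ARNs are placeholders, and note that put_event_selectors replaces the trail's existing event selectors, so include everything you want to keep:

import boto3

cloudtrail = boto3.client('cloudtrail')

cloudtrail.put_event_selectors(
    TrailName='codepipeline-source-trail',
    EventSelectors=[{
        'ReadWriteType': 'WriteOnly',
        'IncludeManagementEvents': True,
        'DataResources': [{
            'Type': 'AWS::S3::Object',
            'Values': [
                'arn:aws:s3:::bamboo-deploy-bucket/project_A/file.zip',    # placeholder object ARNs
                'arn:aws:s3:::bamboo-deploy-bucket/project_B/fileB.zip'
            ]
        }]
    }]
)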

S3 public read access restricted by IP range for object uploaded by third-party

I am trying to accomplish the following scenario:
1) Account A uploads a file to an S3 bucket owned by Account B. At upload I grant full control to the bucket owner, Account B:
s3_client.upload_file(
    local_file,
    bucket,
    remote_file_name,
    ExtraArgs={'GrantFullControl': 'id=<AccountB_CanonicalID>'}
)
2) Account B defines a bucket policy that limits the access to the objects by IP (see below)
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowIPs",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucketB/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        <CIDR1>,
                        <CIDR2>
                    ]
                }
            }
        }
    ]
}
I get Access Denied if I try to download the file as an anonymous user, even from within the specified IP range. If at upload I add public-read permission for everyone, then I can download the file from any IP:
s3_client.upload_file(
    local_file, bucket,
    remote_file_name,
    ExtraArgs={
        'GrantFullControl': 'id=AccountB_CanonicalID',
        'GrantRead': 'uri="http://acs.amazonaws.com/groups/global/AllUsers"'
    }
)
Question: is it possible to upload the file from Account A to Account B but still restrict public access by an IP range?
This is not possible. According to the documentation:
Bucket Policy – For your bucket, you can add a bucket policy to grant other AWS accounts or IAM users permissions for the bucket and the objects in it. Any object permissions apply only to the objects that the bucket owner creates. Bucket policies supplement, and in many cases, replace ACL-based access policies.
However, there is a workaround for this scenario. The problem is that the owner of the uploaded file is Account A. We need to upload the file in such a way that the owner of the file is Account B. To accomplish this we need to:
In Account B, create a role for a trusted entity (select "Another AWS account" and specify Account A). Add upload permission for the bucket.
In Account A, create a policy that allows the AssumeRole action and, as the resource, specify the ARN of the role created in step 1.
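If you prefer to script those two steps, here is a minimal boto3 sketch; the account IDs, role name, policy names and user name are all placeholders, step 1 runs with Account B credentials and step 2 with Account A credentials:

import json
import boto3

iam = boto3.client('iam')

# Step 1 (run as Account B): role that Account A may assume, allowed to upload to bucketB
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::<AccountA_ID>:root"},
        "Action": "sts:AssumeRole"
    }]
}
iam.create_role(
    RoleName='cross-account-upload',
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)
iam.put_role_policy(
    RoleName='cross-account-upload',
    PolicyName='upload-to-bucketB',
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::bucketB/*"
        }]
    })
)

# Step 2 (run as Account A): allow the uploading user to assume that role
iam.put_user_policy(
    UserName='uploader',                            # placeholder user in Account A
    PolicyName='assume-cross-account-upload',
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<AccountB_ID>:role/cross-account-upload"
        }]
    })
)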
To upload the file with boto3 you can use the following code. Note the use of cachetools to deal with the limited TTL of the temporary credentials.
import logging
import sys

import boto3
from cachetools import cached, TTLCache

CREDENTIALS_TTL = 1800
credentials_cache = TTLCache(1, CREDENTIALS_TTL - 60)

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(message)s')
logger = logging.getLogger()


def main():
    local_file = sys.argv[1]
    bucket = '<bucket_from_account_B>'
    client = _get_s3_client_for_another_account()
    client.upload_file(local_file, bucket, local_file)
    logger.info('Uploaded %s to %s' % (local_file, bucket))


@cached(credentials_cache)
def _get_s3_client_for_another_account():
    sts = boto3.client('sts')
    response = sts.assume_role(
        RoleArn='<arn_of_role_created_in_step_1>',
        RoleSessionName='cross-account-upload',  # required by assume_role
        DurationSeconds=CREDENTIALS_TTL
    )
    credentials = response['Credentials']
    credentials = {
        'aws_access_key_id': credentials['AccessKeyId'],
        'aws_secret_access_key': credentials['SecretAccessKey'],
        'aws_session_token': credentials['SessionToken'],
    }
    return boto3.client('s3', 'eu-central-1', **credentials)


if __name__ == '__main__':
    main()

Read AWS s3 Bucket Name from CloudWatch Event

I am working on writing a Lambda function that triggers when a new S3 bucket is created. I have a CloudWatch Events rule that triggers the Lambda function. I see the option to pass the whole event to the Lambda function as input. When I do this, how do I get my Lambda function to read the bucket's name from the event and assign the name as the value of a string variable?
Here is what my code looks like:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = event['s3']['bucket']['name']
CloudTrail events for S3 bucket-level operations have a different format than the one posted by @Woodrow. Actually, the name of the bucket is within a JSON object called requestParameters. Moreover, the whole event is encapsulated within a Records array. See the CloudTrail Log Event Reference.
Truncated version of CloudTrail event for bucket creation
"eventSource": "s3.amazonaws.com",
"eventName": "CreateBucket",
"userAgent": "signin.amazonaws.com",
"requestParameters": {
"CreateBucketConfiguration": {
"LocationConstraint": "aws-region",
"xmlns": "http://s3.amazonaws.com/doc/2006-03-01/"
},
"bucketName": "my-awsome-bucket"
}
Therefore, your code could look something like:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        if record['eventName'] == "CreateBucket":
            bucket = record['requestParameters']['bucketName']
            print(bucket)
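To sanity-check the parsing, you can feed the handler a stripped-down event shaped like the truncated record above (this assumes your AWS credentials and region are configured, since the module creates a boto3 client at import time):

# Minimal CloudTrail-shaped test event with only the fields the handler reads
sample_event = {
    "Records": [
        {
            "eventSource": "s3.amazonaws.com",
            "eventName": "CreateBucket",
            "requestParameters": {"bucketName": "my-awsome-bucket"}
        }
    ]
}

lambda_handler(sample_event, None)   # prints: my-awsome-bucket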

How to define Resource Policy for CloudWatch Logs with CloudFormation?

When I configure DNS query logging with Route53, I can create a resource policy for Route53 to log to my log group. I can confirm this policy with the CLI command aws logs describe-resource-policies and see something like:
{
    "resourcePolicies": [
        {
            "policyName": "test-logging-policy",
            "policyDocument": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"route53.amazonaws.com\"},\"Action\":[\"logs:CreateLogStream\",\"logs:PutLogEvents\"],\"Resource\":\"arn:aws:logs:us-east-1:xxxxxx:log-group:test-route53*\"}]}",
            "lastUpdatedTime": 1520865407511
        }
    ]
}
The CLI also has a put-resource-policy command to create one of these. I also see that Terraform has a resource, aws_cloudwatch_log_resource_policy, which does the same.
So the question: How do I do this with CloudFormation???
You can't use the CloudWatch console to create or edit a resource policy. You must use the CloudWatch API, one of the AWS SDKs, or the AWS CLI.
There is no CloudFormation support for creating a resource policy right now, but you can create a custom Lambda resource to do this.
https://gist.github.com/sudharsans/cf9c52d7c78a81818a4a47872982bd76
CloudFormation Custom resource:
AddResourcePolicy:
  Type: Custom::AddResourcePolicy
  Version: '1.0'
  Properties:
    ServiceToken: arn:aws:lambda:us-east-1:872673965194:function:test-lambda-deploy-Lambda-15R963QKCI80A
    CloudWatchLogsLogGroupArn: !GetAtt LogGroup.Arn
    PolicyName: "testpolicy"
lambda:
import cfnresponse
import boto3

client = boto3.client('logs')

def PutPolicy(arn, policyname):
    response = client.put_resource_policy(
        policyName=policyname,
        policyDocument="....",  # JSON policy granting the service access to the log group ARN
    )
    return

def handler(event, context):
    ......
    if event['RequestType'] == "Delete":
        DeletePolicy(PolicyName)
    if event['RequestType'] == "Create":
        PutPolicy(CloudWatchLogsLogGroupArn, PolicyName)
    responseData['Data'] = "SUCCESS"
    status = cfnresponse.SUCCESS
    .....
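To make the elided policyDocument concrete, here is a standalone boto3 sketch of the same put_resource_policy call, reusing the Route53 policy from the describe-resource-policies output in the question (the account ID and log group name come from there):

import json
import boto3

logs = boto3.client('logs')

# Allow Route53 to create log streams and put log events into the log group
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "route53.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:us-east-1:xxxxxx:log-group:test-route53*"
    }]
}

logs.put_resource_policy(
    policyName='test-logging-policy',
    policyDocument=json.dumps(policy_document)
)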
4 years later, this still doesn't seem to work through CloudFormation, although there is apparently support for this included now (the AWS::Logs::ResourcePolicy resource type).

Get emails whenever a file is uploaded on s3 bucket using serverless

I want to get emails whenever a file is uploaded to an S3 bucket, as described in the title above. I am using Serverless. The issue is that the event I have created on S3 only gives me a notification in the S3 AWS console, and I don't know how to configure a CloudWatch event on S3 to trigger Lambda. So if someone knows how to trigger events on S3 using CloudWatch, I am all ears.
Here is my code:
import json
import boto3
import botocore
import logging
import sys
import os
import traceback
from botocore.exceptions import ClientError
from pprint import pprint
from time import strftime, gmtime

email_from = '*****#******.com'
email_to = '******#******.com'
#email_cc = '********#gmail.com'
email_subject = 'new event on s3 '
email_body = 'a new file is uploaded'

#setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def sthree(event, context):
    """Send email whenever a file is uploaded to S3"""
    body = {}
    status_code = 200
    try:
        s3 = boto3.client('s3')
        ses = boto3.client('ses')
        response = ses.send_email(
            Source=email_from,
            Destination={'ToAddresses': [email_to,],},
            Message={'Subject': {'Data': email_subject}, 'Body': {'Text': {'Data': email_body}}}
        )
    except ClientError as e:
        logger.error(e)
        status_code = 500
    response = {
        "statusCode": status_code,
        "body": json.dumps(body)
    }
    return response
and here is my serverless.yml file
service: aws-python # NOTE: update this with your service name

plugins:
  - serverless-external-s3-event

provider:
  name: aws
  runtime: python2.7
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - s3:*
        - "ses:SendEmail"
        - "ses:SendRawEmail"
        - "s3:PutBucketNotification"
      Resource: "*"

functions:
  sthree:
    handler: handler.sthree
    description: send mail whenever a file is uploaded on S3
    events:
      - s3:
          bucket: cartegie-nirmine
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
            - suffix: .jpg
      - cloudwatchEvent:
          description: 'CloudWatch Event triggered '
          event:
            source:
              - "aws.S3"
            detail-type:
              - "S3 event Notification"
          enabled: true
If your goal is just to receive email notifications of operations on an S3 bucket, then you don't need Lambda functions for that. For the use case mentioned in the question, you can achieve it using an SNS topic and S3 events. I will mention the steps to follow from the console (the same can be achieved via the SDK or CLI; see the boto3 sketch after these steps).
1) Create a Topic using SNS console.
2) Subscribe to the topic. Use email as the communications protocol and provide your email-id.
3) You will get email requesting you to confirm your subscription to the topic. Confirm the subscription.
4) IMPORTANT: Replace the access policy of the topic with the below policy:
{
    "Version": "2008-10-17",
    "Id": "__default_policy_ID",
    "Statement": [
        {
            "Sid": "__default_statement_ID",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "SNS:Publish",
            "Resource": "sns-topic-arn",
            "Condition": {
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:s3:*:*:s3-bucket-name"
                }
            }
        }
    ]
}
Basically, you are giving permission for your S3 bucket to publish to the SNS topic.
Replace sns-topic-arn with the ARN of the topic you created above.
Replace s3-bucket-name with the name of the bucket for which you want to receive notifications.
5) Go to S3 Console. Click on your S3 bucket and open the Properties tab.
6) Under Advanced settings, click on the Events card.
7) Click Add Notification and enter the values: select the required S3 events to monitor and the SNS topic you created.
8) Click Save. Now you should start receiving notifications to your email.
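And here is a rough boto3 sketch of the same steps, if you prefer the SDK over the console; the topic name and email address are placeholders, and the bucket name is the one from your serverless.yml:

import json
import boto3

sns = boto3.client('sns')
s3 = boto3.client('s3')

bucket = 'cartegie-nirmine'

# 1) create the topic
topic_arn = sns.create_topic(Name='s3-upload-notifications')['TopicArn']

# 2) email subscription (steps 2-3: must be confirmed from the inbox)
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='someone@example.com')

# 4) allow the bucket to publish to the topic
sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName='Policy',
    AttributeValue=json.dumps({
        "Version": "2008-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "SNS:Publish",
            "Resource": topic_arn,
            "Condition": {"ArnLike": {"aws:SourceArn": "arn:aws:s3:*:*:" + bucket}}
        }]
    })
)

# 5-8) S3 event notification to the topic for objects created under uploads/
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        'TopicConfigurations': [{
            'TopicArn': topic_arn,
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [{'Name': 'prefix', 'Value': 'uploads/'}]}}
        }]
    }
)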