AWS billing with python

I want to extract my current billing from AWS using the boto3 library for Python, but I couldn't find any API call that does so.
When I try the previous version (boto2) with the FPS connection and its get_account_balance() method, I wait for a response and never get a reply.
What's the correct way to do this?

You can get the current bill of your AWS account by using the CostExplorer API.
Below is an example:
import boto3

client = boto3.client('ce', region_name='us-east-1')
response = client.get_cost_and_usage(
    TimePeriod={
        'Start': '2018-10-01',
        'End': '2018-10-31'
    },
    Granularity='MONTHLY',
    Metrics=[
        'AmortizedCost',
    ]
)
print(response)

I use the CloudWatch API to extract billing information. The "AWS/Billing" namespace has everything you need.
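For example, here is a minimal sketch that reads the EstimatedCharges metric from that namespace (billing metrics are only published to us-east-1, and, in my experience, only after "Receive Billing Alerts" has been enabled in the account's billing preferences):

import datetime
import boto3

# Billing metrics only exist in us-east-1.
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

now = datetime.datetime.utcnow()
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    StartTime=now - datetime.timedelta(days=1),
    EndTime=now,
    Period=86400,            # one day
    Statistics=['Maximum']   # running total for the current billing period
)
print(response['Datapoints'])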

captainblack has the right idea. Note, however, that the end date is exclusive: in the example provided, you would only retrieve cost data for 2018-10-01 through 2018-10-30. The end date needs to be the first day of the following month if you want the cost data for the entire month:
import boto3

client = boto3.client('ce', region_name='us-east-1')
response = client.get_cost_and_usage(
    TimePeriod={
        'Start': '2018-10-01',
        'End': '2018-11-01'
    },
    Granularity='MONTHLY',
    Metrics=[
        'AmortizedCost',
    ]
)
print(response)

Related

Can I create Slack subscriptions to an AWS SNS topic?

I'm trying to create an SNS topic in AWS and subscribe a Lambda function to it that will send notifications to Slack apps/users.
I read this article -
https://aws.amazon.com/premiumsupport/knowledge-center/sns-lambda-webhooks-chime-slack-teams/
which describes how to do it using this Lambda code:
#!/usr/bin/python3.6
import urllib3
import json

http = urllib3.PoolManager()

def lambda_handler(event, context):
    url = "https://hooks.slack.com/services/xxxxxxx"
    msg = {
        "channel": "#CHANNEL_NAME",
        "username": "WEBHOOK_USERNAME",
        "text": event['Records'][0]['Sns']['Message'],
        "icon_emoji": ""
    }
    encoded_msg = json.dumps(msg).encode('utf-8')
    resp = http.request('POST', url, body=encoded_msg)
    print({
        "message": event['Records'][0]['Sns']['Message'],
        "status_code": resp.status,
        "response": resp.data
    })
The problem with that implementation is that I have to create a Lambda function for every user.
I want to subscribe multiple Slack apps/users to one SNS topic.
Is there a way of doing that without creating a Lambda function for each one?
You really DON'T need Lambda. SNS and Slack alone are enough.
I found a way to integrate AWS SNS with Slack WITHOUT AWS Lambda or AWS Chatbot. With this approach you can confirm the subscription easily.
This video shows all the steps clearly:
https://www.youtube.com/watch?v=CszzQcPAqNM
Steps to follow:
Create a Slack channel or use an existing one.
Create a workflow, selecting Webhook as the trigger.
Create a variable named "SubscribeURL". The name is very important.
Add that variable to the message body of the workflow, then publish the workflow and get the URL.
Add that URL as a subscription of the SNS topic. You will see the subscription URL in the Slack channel.
Follow the URL and complete the subscription.
Go back to the workflow and change the "SubscribeURL" variable to "Message".
Then publish a message to SNS; you will see the message in the Slack channel.
I would say you should go for a for loop and make a list of all the users. Either state them manually in the Lambda or fetch them with an API call to Slack, e.g. this one: https://api.slack.com/methods/users.list
#!/usr/bin/python3.6
import urllib3
import json

http = urllib3.PoolManager()

def lambda_handler(event, context):
    userlist = ["name1", "name2"]
    for user in userlist:
        url = "https://hooks.slack.com/services/xxxxxxx"
        msg = {
            "channel": "#" + user,  # not sure if the hash has to be here
            "username": "WEBHOOK_USERNAME",
            "text": event['Records'][0]['Sns']['Message'],
            "icon_emoji": ""
        }
        encoded_msg = json.dumps(msg).encode('utf-8')
        resp = http.request('POST', url, body=encoded_msg)
        print({
            "message": event['Records'][0]['Sns']['Message'],
            "status_code": resp.status,
            "response": resp.data
        })
Another option is to set up email addresses for the Slack users, see:
https://slack.com/help/articles/206819278-Send-emails-to-Slack
Then you can just add those emails as subscribers to the SNS topic. You can filter the messages each receiver gets with a subscription filter policy.
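For example, here is a minimal sketch of subscribing one such email address with a filter policy via boto3; the topic ARN, the Slack email address, and the "team" message attribute are hypothetical placeholders:

import json
import boto3

sns = boto3.client('sns')

# Hypothetical topic ARN and Slack channel email address.
response = sns.subscribe(
    TopicArn='arn:aws:sns:us-east-1:123456789012:my-topic',
    Protocol='email',
    Endpoint='my-channel-aaaa1111@myworkspace.slack.com',
    Attributes={
        # Only deliver messages whose "team" message attribute equals "backend".
        'FilterPolicy': json.dumps({'team': ['backend']})
    },
    ReturnSubscriptionArn=True
)
print(response['SubscriptionArn'])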

How to use NextToken in Boto3

The code below was created to export all the findings from Security Hub to an S3 bucket using a Lambda function. The filters are set to export only CIS AWS Foundations Benchmark findings. There are more than 20 accounts added as members in Security Hub. The issue I'm facing is that even though I'm using the NextToken configuration, the output doesn't have information about all the accounts; instead, it just displays one account's data, seemingly at random.
Can somebody look into the code and let me know what the issue could be, please?
import boto3
import json
from botocore.exceptions import ClientError
import time
import glob

client = boto3.client('securityhub')
s3 = boto3.resource('s3')
storedata = {}
_filter = Filters = {
    'GeneratorId': [
        {
            'Value': 'arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark',
            'Comparison': 'PREFIX'
        }
    ],
}

def lambda_handler(event, context):
    response = client.get_findings(
        Filters={
            'GeneratorId': [
                {
                    'Value': 'arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark',
                    'Comparison': 'PREFIX'
                },
            ],
        },
    )
    results = response["Findings"]
    while "NextToken" in response:
        response = client.get_findings(Filters=_filter, NextToken=response["NextToken"])
        results.extend(response["Findings"])
    storedata = json.dumps(response)
    print(storedata)
    save_file = open("/tmp/SecurityHub-Findings.json", "w")
    save_file.write(storedata)
    save_file.close()
    for name in glob.glob("/tmp/*"):
        s3.meta.client.upload_file(name, "xxxxx-security-hubfindings", name)
I am also now getting a TooManyRequestsException error.
The problem is in this code that paginates the security findings results:
while "NextToken" in response:
response = client.get_findings(Filters=_filter,NextToken=response["NextToken"])
results.extend(response["Findings"])
storedata = json.dumps(response)
print(storedata)
The value of storedata after the while loop has completed is the last page of security findings, rather than the aggregate of the security findings.
However, you're already aggregating the security findings in results, so you can use that:
save_file = open("/tmp/SecurityHub-Findings.json", "w")
save_file.write(json.dumps(results))
save_file.close()
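As an aside, here is a minimal sketch of the same aggregation using boto3's get_findings paginator, plus a retry configuration that can soften the TooManyRequestsException throttling; the retry settings shown are an assumption, not part of the original code:

import boto3
from botocore.config import Config

# Retries with backoff help with throttling (TooManyRequestsException).
client = boto3.client(
    'securityhub',
    config=Config(retries={'max_attempts': 10, 'mode': 'adaptive'})
)

results = []
paginator = client.get_paginator('get_findings')
pages = paginator.paginate(
    Filters={
        'GeneratorId': [
            {
                'Value': 'arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark',
                'Comparison': 'PREFIX'
            },
        ],
    }
)
for page in pages:
    results.extend(page['Findings'])
print(len(results))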

Creating a CloudWatch Metrics from the Athena Query results

My Requirement
I want to create a CloudWatch metric from Athena query results.
Example
I want to create a metric like user_count for each day.
In Athena, I would write a SQL query like this:
select date, count(distinct user) as count from users_table group by 1
In the Athena editor I can see the result, but I want to see these results as a metric in CloudWatch:
CloudWatch-Metric-Name ==> user_count
Dimensions ==> Date, count
If I have this CloudWatch metric and these dimensions, I can easily create a monitoring dashboard and send alerts.
Can anyone suggest a way to do this?
You can use CloudWatch custom widgets, see "Run Amazon Athena queries" in Samples.
It's somewhat involved, but you can use a Lambda for this. In a nutshell:
Setup your query in Athena and make sure it works using the Athena console.
Create a Lambda that:
Runs your Athena query
Pulls the query results from S3
Parses the query results
Sends the query results to CloudWatch as a metric
Use EventBridge to run your Lambda on a recurring basis
Here's an example Lambda function in Python that does step #2. Note that the Lambda function will need IAM permissions to run queries in Athena, read the results from S3, and then put a metric into CloudWatch.
import time
import boto3

query = 'select count(*) from mytable'
DATABASE = 'default'
bucket = 'BUCKET_NAME'
path = 'yourpath'

def lambda_handler(event, context):
    # Run the query in Athena
    client = boto3.client('athena')
    output = "s3://{}/{}".format(bucket, path)
    # Execution
    response = client.start_query_execution(
        QueryString=query,
        QueryExecutionContext={
            'Database': DATABASE
        },
        ResultConfiguration={
            'OutputLocation': output,
        }
    )
    # The S3 file name uses the QueryExecutionId, so
    # grab it here so we can pull the S3 file.
    qeid = response["QueryExecutionId"]
    # Occasionally Athena hasn't written the file
    # before the Lambda tries to pull it out of S3, so pause a few seconds.
    # Note: you are charged for the time the Lambda is running.
    # A more elegant but more complicated solution would try to get the
    # file first and then sleep.
    time.sleep(3)
    # Get the query result from S3.
    s3 = boto3.client('s3')
    objectkey = path + "/" + qeid + ".csv"
    # Load the object body
    file_content = s3.get_object(
        Bucket=bucket,
        Key=objectkey)["Body"].read()
    # Split the file into lines
    lines = file_content.decode().splitlines()
    # Get the second line in the file (the first is the CSV header)
    count = lines[1]
    # Remove double quotes
    count = count.replace("\"", "")
    # Convert the string to an int since CloudWatch wants a numeric value
    count = int(count)
    # Post the query result as a CloudWatch metric
    cloudwatch = boto3.client('cloudwatch')
    response = cloudwatch.put_metric_data(
        MetricData=[
            {
                'MetricName': 'MyMetric',
                'Dimensions': [
                    {
                        'Name': 'DIM1',
                        'Value': 'dim1'
                    },
                ],
                'Unit': 'None',
                'Value': count
            },
        ],
        Namespace='MyMetricNS'
    )
    return response
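For the last step (running the Lambda on a schedule with EventBridge), here is a minimal sketch; the rule name, schedule expression, Lambda ARN, and statement ID are hypothetical placeholders:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Hypothetical names/ARNs - replace with your own.
rule_name = 'athena-metric-daily'
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:athena-to-cloudwatch'

# Create (or update) a rule that fires once a day.
rule = events.put_rule(Name=rule_name, ScheduleExpression='rate(1 day)', State='ENABLED')

# Point the rule at the Lambda function.
events.put_targets(
    Rule=rule_name,
    Targets=[{'Id': 'athena-to-cloudwatch', 'Arn': function_arn}]
)

# Allow EventBridge to invoke the Lambda.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='athena-metric-daily-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)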

How to query the CostController API for cost forecast using boto3

I am trying to query the Cost Controller API of AWS for the cost forecast using boto3. Here is the code:
import boto3
client = boto3.client('ce', region_name='us-east-1', aws_access_key_id=key_id, aws_secret_access_key=secret_key)
# the args dict holds the request filters
data = client.get_cost_forecast(**args)
The result is:
AttributeError: 'CostExplorer' object has no attribute 'get_cost_forecast'
But the actual documentation for the API says that it provides the get_cost_forecast() function.
There is now a get_cost_forecast method; you can refer to the documentation below to get a cost forecast:
Boto3 CostForecast
e.g.
import boto3

client = boto3.client('ce')
response = client.get_cost_forecast(
    TimePeriod={
        'Start': 'string',
        'End': 'string'
    },
    Metric='BLENDED_COST'|'UNBLENDED_COST'|'AMORTIZED_COST'|'NET_UNBLENDED_COST'|'NET_AMORTIZED_COST'|'USAGE_QUANTITY'|'NORMALIZED_USAGE_AMOUNT',
    Granularity='DAILY'|'MONTHLY'|'HOURLY',
    PredictionIntervalLevel=123
)
So, I figured out that the version of botocore I was using (1.8.45) does not support the get_cost_forecast() method. An upgrade to version 1.9.71 is needed. I hope this helps other people facing this issue.
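For reference, here is a minimal sketch of a concrete call with a recent botocore; the dates, metric, and granularity are example values (the start date must be today or later):

import boto3

client = boto3.client('ce', region_name='us-east-1')

# Forecast unblended cost over an example future window.
response = client.get_cost_forecast(
    TimePeriod={
        'Start': '2019-02-01',  # example dates; Start must not be in the past
        'End': '2019-03-01'
    },
    Metric='UNBLENDED_COST',
    Granularity='MONTHLY',
    PredictionIntervalLevel=80  # optional: 80% prediction interval
)
print(response['Total']['Amount'], response['Total']['Unit'])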

AWS S3 to Google Cloud Storage Transfer not working with Python Client Library because "precondition check failed"

I tried testing out a cloud transfer job that would transfer an object from AWS S3 to GCS (as a one-off task), but I keep getting googleapiclient.errors.HttpError: <HttpError 400 when requesting https://storagetransfer.googleapis.com/v1/transferJobs?alt=json returned "Precondition check failed.">.
Here is the code:
import argparse
import datetime
import json
from pprint import pprint

import googleapiclient.discovery

def main(description, project_id, year, month, day, hours, minutes,
         source_bucket, access_key, secret_access_key, sink_bucket):
    """Create a one-off transfer from Amazon S3 to Google Cloud Storage."""
    storagetransfer = googleapiclient.discovery.build('storagetransfer', 'v1')
    # Edit this template with desired parameters.
    # Specify times below using US Pacific Time Zone.
    transfer_job = {
        'description': description,
        'status': 'ENABLED',
        'projectId': project_id,
        'schedule': {
            'scheduleStartDate': {
                'day': day,
                'month': month,
                'year': year
            },
            'scheduleEndDate': {
                'day': day,
                'month': month,
                'year': year
            },
            'startTimeOfDay': {
                'hours': hours,
                'minutes': minutes
            }
        },
        'transferSpec': {
            'awsS3DataSource': {
                'bucketName': source_bucket,
                'awsAccessKey': {
                    'accessKeyId': access_key,
                    'secretAccessKey': secret_access_key
                }
            },
            'gcsDataSink': {
                'bucketName': sink_bucket
            }
        }
    }
    result = storagetransfer.transferJobs().create(body=transfer_job).execute()
    print('Returned transferJob: {}'.format(
        json.dumps(result, indent=4)))

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('description', help='Transfer description.')
    parser.add_argument('project_id', help='Your Google Cloud project ID.')
    parser.add_argument('date', help='Date YYYY/MM/DD.')
    parser.add_argument('time', help='Time (24hr) HH:MM.')
    parser.add_argument('source_bucket', help='Source bucket name.')
    parser.add_argument('access_key', help='Your AWS access key id.')
    parser.add_argument('secret_access_key', help='Your AWS secret access '
                        'key.')
    parser.add_argument('sink_bucket', help='Sink bucket name.')
    args = parser.parse_args()
    date = datetime.datetime.strptime(args.date, '%Y/%m/%d')
    time = datetime.datetime.strptime(args.time, '%H:%M')
    main(
        args.description,
        args.project_id,
        date.year,
        date.month,
        date.day,
        time.hour,
        time.minute,
        args.source_bucket,
        args.access_key,
        args.secret_access_key,
        args.sink_bucket)
And here is the command I used to execute it:
python cloud_transfer_test.py "cloudtransfer" "MY_PROJECT_ID" "2018/04/12" "09:17" "SOURCE_BUCKET" "ACCESS_KEY" "SECRET_ACCESS_KEY" "SINK_BUCKET"
*Note: ACCESS_KEY and SECRET_ACCESS_KEY actually had values, but I'm omitting them from this post. Also, the date and time values are configured so that the job runs as soon as I execute the script.
I suspect your problem will turn out to be the permissions for the not-obviously-needed xxx@storage-transfer-service.iam.gserviceaccount.com service account.
You can find out the name of the account you need using the googleServiceAccounts.get call in the API explorer:
https://cloud.google.com/storage/transfer/reference/rest/v1/googleServiceAccounts/get
The needed permissions are documented at https://cloud.google.com/storage/docs/access-control/iam-transfer
Be careful setting these; when I had this problem I had set them, but apparently not correctly, and after I went back to look at them it still took me a couple of tries before it worked. If they are not set correctly on the destination bucket/project, you will get the precondition failed error.
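To look up that service account programmatically instead of through the API explorer, here is a minimal sketch using the same discovery client (PROJECT_ID is a placeholder for your Google Cloud project ID):

import googleapiclient.discovery

storagetransfer = googleapiclient.discovery.build('storagetransfer', 'v1')

# Returns the Storage Transfer Service account for the project, e.g.
# project-<PROJECT_NUMBER>@storage-transfer-service.iam.gserviceaccount.com
account = storagetransfer.googleServiceAccounts().get(projectId='PROJECT_ID').execute()
print(account['accountEmail'])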