AWS Costing API?

I am trying to identify the API that handles reporting for AWS instances.
Specifically, how can the total hours and cost be identified for all instances, or for just one instance?
I looked at the XHR tab in the browser's developer tools and identified two APIs that return this data, but I think there should be some way to get it from the AWS SDK.
Any help would be appreciated. Thanks.

You will need to turn on the Detailed Billing Report. This will then send billing information to Amazon S3.
The billing files show every specific charge incurred by your account, broken down by resource, tag (needs configuration), region, etc.
Please note that this level of detail is only available after you have activated Detailed Billing Reports; before activation, only high-level information is available.
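Once enabled, the report lands as zipped CSV files in the bucket you chose, and you can pull them with the SDK. A minimal sketch with boto3; the account id, bucket, month, and key pattern are assumptions to verify against your own bucket:

```python
# Sketch: fetch and open a monthly detailed billing file from the S3 bucket
# you configured for billing reports. Account id, bucket, and month are
# placeholders; verify the exact key naming in your own bucket.
import io
import zipfile

import boto3

ACCOUNT_ID = "123456789012"           # hypothetical account id
BILLING_BUCKET = "my-billing-bucket"  # bucket configured for billing reports
MONTH = "2016-01"                     # placeholder billing period

s3 = boto3.client("s3")
key = (f"{ACCOUNT_ID}-aws-billing-detailed-line-items-"
       f"with-resources-and-tags-{MONTH}.csv.zip")

body = s3.get_object(Bucket=BILLING_BUCKET, Key=key)["Body"].read()
with zipfile.ZipFile(io.BytesIO(body)) as zf:
    csv_text = zf.read(zf.namelist()[0]).decode("utf-8")

# Each line item carries the resource id, usage type, usage quantity
# (e.g. instance hours), and cost, so you can aggregate per instance.
print(csv_text.splitlines()[0])  # header row
```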

Most features in the AWS console are directly or indirectly accessing the same documented, exposed APIs that are accessed by the SDKs and CLI.
Most, but not all.
Some features, particularly reporting and graphing-type features -- like these billing/cost reports -- are console-only features. CloudWatch graphs and CloudFront graphs and reports are other examples that come to mind. There is no access provided to these other than what's provided in the console.
In each case, the raw underlying data is generally accessible through the documented APIs, but not necessarily the data in its aggregated form as presented on the screen or for download -- you'd need to do your own analysis/aggregation/summary, etc.
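As a concrete example of raw data that is reachable through a documented API: if you enable billing alerts on the account, CloudWatch exposes an AWS/Billing EstimatedCharges metric (in us-east-1 only) that you can query and aggregate yourself. A sketch with boto3:

```python
# Sketch: read the raw EstimatedCharges datapoints for the last day.
# Requires billing alerts to be enabled; the metric lives in us-east-1.
import datetime

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.datetime.utcnow()

resp = cw.get_metric_statistics(
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    StartTime=now - datetime.timedelta(days=1),
    EndTime=now,
    Period=3600,          # hourly buckets
    Statistics=["Maximum"],
)

# Any further aggregation or summarizing is up to you.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])
```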

Related

Alerts when approaching daily quota on Google mapping API (geocoding, places, directions, etc.)

How can I get alerted when we reach a certain percentage of the daily mapping API usage quotas?
We want to set up alerts that warn us when we reach 85% of the daily quotas that we set up for Google API usage (i.e. geocoding, places, and directions APIs).
One source mentioned that this can be done with Google Cloud Platform's alerting functionality, but I don't see the mapping APIs among the alerting options.
Thanks.
I understand that you want to set up alerts for when Google mapping API usage reaches 85% of its daily quota.
This can be done with Google Cloud Platform's alerting functionality. To create an alerting policy for a Google mapping API, follow the steps in [1], which also has more background.
[1] https://support.woolpert.io/hc/en-us/articles/360045341333-How-to-set-and-use-service-level-alerts-on-Google-Maps-Platform
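If you would rather script it than click through the console, the same kind of policy can in principle be created with the Cloud Monitoring client library. This is an untested sketch: the project id, service name, quota limit, and the exact quota metric/filter are assumptions to confirm in the Metrics Explorer first.

```python
# Untested sketch: an alert policy on daily quota usage for one Maps API,
# using the google-cloud-monitoring client library. All names below are
# placeholders; confirm the metric and service name for your own APIs.
from google.cloud import monitoring_v3

client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="Geocoding API daily quota at 85%",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[{
        "display_name": "geocoding quota usage above 85%",
        "condition_threshold": {
            # Assumed filter: daily allocation-quota usage for a consumed API.
            "filter": (
                'metric.type="serviceruntime.googleapis.com/quota/allocation/usage" '
                'resource.type="consumed_api" '
                'resource.label.service="geocoding-backend.googleapis.com"'
            ),
            "comparison": monitoring_v3.ComparisonType.COMPARISON_GT,
            "threshold_value": 85_000,  # 85% of a hypothetical 100,000/day quota
            "duration": {"seconds": 0},
        },
    }],
)

client.create_alert_policy(
    name="projects/my-gcp-project",  # hypothetical project
    alert_policy=policy,
)
```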

Best Way to Monitor Customer Usage of AWS Lambda

I have newly created an API service that is going to be deployed as a pilot to a customer. It has been built with AWS API Gateway, AWS Lambda, and AWS S3. With a SaaS pricing model, what's the best way for me to monitor this customer's usage and cost? At the moment, I have made a unique API Gateway, Lambda function, and S3 bucket specific to this customer. Is there a good way to create a dashboard that allows me (and perhaps the customer) to detail this monitoring?
An additional question: what's the best way to streamline this process when expanding to multiple customers? Each customer would have a unique API token; is there a better approach than the naive way of creating unique AWS resources per customer?
I am new to this (a college student), but any insights/resources would go a long way. Thanks.
Full disclosure: I work for Lumigo, a company that does exactly that.
Regarding your question: as #gusto2 said, there are many tools you can use, and the best tool depends on your specific requirements.
The main difference between the tools is the level of configuration that you need to apply.
CloudWatch default metrics - the first tool you should use. This is an out-of-the-box solution that provides many metrics for your services, such as duration, number of invocations and errors, and memory. You can query metrics over different time windows and aggregators (p99, average, max, etc.).
This tool is great for basic monitoring.
Its greatest strength is also its limitation: it provides monitoring that is common to all services, so nothing is tailored to serverless applications.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
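For example, a sketch (using boto3) that reads one of those default Lambda metrics, the invocation count over the last day; the function name is a placeholder:

```python
# Sketch: sum Lambda invocations for one function over the last 24 hours
# from the default AWS/Lambda metrics.
import datetime

import boto3

cw = boto3.client("cloudwatch")
now = datetime.datetime.utcnow()

resp = cw.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "customer-a-api"}],  # placeholder
    StartTime=now - datetime.timedelta(days=1),
    EndTime=now,
    Period=3600,
    Statistics=["Sum"],
)
print(sum(p["Sum"] for p in resp["Datapoints"]))
```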
CloudWatch custom metrics - the other end of the scale. These let you publish any metric data yourself and monitor it, giving much more precise measurements: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
This is a great tool if you know exactly what you want to monitor and you are already familiar with your architecture's limitations and pain points.
And, of course, you can configure alarms over this data, as sketched below.
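A sketch of such an alarm over a hypothetical custom metric; the namespace, metric name, threshold, and SNS topic are all placeholders:

```python
# Sketch: alarm when a custom usage metric exceeds a threshold in an hour.
import boto3

cw = boto3.client("cloudwatch")
cw.put_metric_alarm(
    AlarmName="customer-a-high-usage",
    Namespace="MyApp/Usage",            # hypothetical custom namespace
    MetricName="ApiCalls",
    Dimensions=[{"Name": "CustomerId", "Value": "customer-a"}],
    Statistic="Sum",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=10000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:usage-alerts"],  # hypothetical topic
)
```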
Lumigo - a 3rd-party company (again, as a disclosure, this is my workplace). Provides out-of-the-box monitoring created specifically for serverless applications, such as an abnormal number of invocations, costs, etc. This tool also provides troubleshooting capabilities to enable deeper observability.
Of course, there are more 3rd-party tools you can find online. All are great; just find the one that best suits your requirements.
Is there a good way to create a dashboard
There are multiple ways and options depending on your scale, the amount of data, and your requirements. You could start small and simple, but check whether each option stays feasible as you grow.
You can start with CloudWatch. You can monitor basic metrics, create dashboards, and even share them with other accounts.
naive way of making unique AWS resources per customer
To start, I would consider creating custom CloudWatch metrics with the customer id as a dimension and publishing the metrics from the Lambda functions, as sketched below.
It looks simple, but you should do the math and a PoC on the number of requested datapoints and on the dashboards, to prevent a nasty surprise on the bill.
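A minimal sketch of that idea from inside a Lambda handler; the namespace, metric name, and the way the customer id is derived from the request are assumptions:

```python
# Sketch: publish a per-customer usage metric from a Lambda handler.
import boto3

cw = boto3.client("cloudwatch")

def handler(event, context):
    # Hypothetical: derive the customer from the API Gateway request context.
    customer_id = event["requestContext"]["identity"]["apiKey"]
    cw.put_metric_data(
        Namespace="MyApp/Usage",
        MetricData=[{
            "MetricName": "ApiCalls",
            "Dimensions": [{"Name": "CustomerId", "Value": customer_id}],
            "Value": 1,
            "Unit": "Count",
        }],
    )
    return {"statusCode": 200, "body": "ok"}
```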
Another option is sending metrics/events to DynamoDB; using atomic update operations you can build some basic aggregations directly (a kind of naïve stream processing), as sketched below.
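A sketch of the DynamoDB variant, keeping a per-customer, per-day counter with an atomic ADD; the table name and key schema are placeholders:

```python
# Sketch: naive per-customer, per-day counter via DynamoDB's atomic ADD.
import datetime

import boto3

table = boto3.resource("dynamodb").Table("usage-counters")  # hypothetical table

def record_call(customer_id: str) -> None:
    today = datetime.date.today().isoformat()
    table.update_item(
        Key={"customer_id": customer_id, "day": today},
        UpdateExpression="ADD api_calls :one",
        ExpressionAttributeValues={":one": 1},
    )
```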
When you scale to a lot of events and clients, you may need serious API analytics, but that is a different topic.

Filtering the detailed bill generated by AWS account according to Region in aws

I am working on a project whose resources are all in 'me-south-1'.
I have other resources in other regions.
I need to send a detailed bill to the client.
Could anyone suggest how I can filter the bill by region?
For detailed billing with the ability to filter, the best approach is to use Cost Explorer.
With this service you can apply a range of filters (including region), and this can also be done programmatically.
Be aware that the Cost Explorer API charges $0.01 per request.
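A sketch of the programmatic route with boto3's Cost Explorer client; the dates and grouping are placeholders, and remember that each call is billed:

```python
# Sketch: one month of costs filtered to me-south-1, grouped by service.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "REGION", "Values": ["me-south-1"]}},
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```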

Access management for AWS-based client-side SDK

I'm working on a client-side SDK for my product (based on AWS). The workflow is as follows:
The user of the SDK somehow uploads data to some S3 bucket.
The user somehow saves a command on some queue in SQS.
One of the workers on EC2 polls the queue, executes the operation, and sends a notification via SNS. This point seems clear.
As you might have noticed, there are quite a few unclear points about access management here. Is there any common practice for providing access to AWS services (S3 and SQS in this case) to 3rd-party users of such an SDK?
Options I see at the moment:
We create an IAM user for users of the SDK, with access to specific S3 resources and write permission for SQS.
We create an additional server/layer between AWS and the SDK that writes messages to SQS on behalf of users, and that provides one-time, short-lived links for the SDK to write data directly to S3.
The first option seems OK, but I worry that I'm missing some obvious issues. The second seems to have a scalability problem: if this layer goes down, the whole system stops working.
P.S.
I tried my best to explain the situation, but I'm afraid the question might still lack some context. If you want more clarification, don't hesitate to write a comment.
I recommend you look closely at Temporary Security Credentials in order to limit customer access to only what they need, when they need it.
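A minimal sketch of that approach with boto3 and STS: your backend vends short-lived credentials that are scoped down by a session policy. The role ARN, bucket, and queue are hypothetical:

```python
# Sketch: vend short-lived, narrowly scoped credentials to one SDK user.
# The session policy can only restrict, never expand, what the role allows.
import json

import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/sdk-client",  # hypothetical role
    RoleSessionName="sdk-user-42",
    DurationSeconds=900,  # 15 minutes
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "s3:PutObject",
             "Resource": "arn:aws:s3:::uploads-bucket/user-42/*"},
            {"Effect": "Allow", "Action": "sqs:SendMessage",
             "Resource": "arn:aws:sqs:us-east-1:123456789012:commands"},
        ],
    }),
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```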
Keep in mind with any solution to this kind of problem, it depends on your scale, your customers, and what you are ok exposing to your customers.
With your first option, letting the customer directly use IAM or temporary credentials reveals to them that AWS is under the hood (they can easily see requests leaving their system). It also leaves the potential for them to make their own AWS requests using those credentials, beyond what your code can validate and control.
Your second option is better since it addresses this: by making your server the only point of contact for AWS, you can perform input validation and similar checks before sending customer-provided data to AWS. It also lets you replace the implementation easily without affecting customers. As for availability/scalability concerns, that's what EC2 (and similar services) are for.
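The "one-time short-living link" from the second option is typically an S3 pre-signed URL. A minimal sketch, with a hypothetical bucket and key:

```python
# Sketch: the server vends a short-lived pre-signed URL so the SDK can PUT
# an object directly to S3 without holding any AWS credentials itself.
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "uploads-bucket", "Key": "user-42/data.bin"},  # hypothetical
    ExpiresIn=300,  # link valid for 5 minutes
)
# The SDK client then uploads with a plain HTTP PUT to this URL.
```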
Again, all of this depends on your scale and your customers. For a toy application where you have a very small set of customers, simpler may be better for the purposes of getting something working sooner (rather than building & paying for a whole lot of infrastructure for something that may not be used).

Can I monitor the usage of individual directories with AWS CloudWatch?

I'm developing a platform where users will in effect have their own site within a directory of mine. Each user site will consist of a package of PHP scripts plus the template/image files for that site's custom layout, and each will be connected to its own Amazon RDS instance. I need to track the resource usage of each directory so that I can bill each user for the resources they have used. Would it be possible to set up custom metrics with CloudWatch so that I can calculate costs?
You should be able to use CloudWatch to do this; however, it might not be the most efficient place to put this information if you are going to bill or report on it. I think you are better off computing the data yourself and then storing it in a database of your own. That way you have easy access to the data and can do things with it that may not work well in the context of CloudWatch.
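A sketch of that compute-and-store approach, summing per-directory disk usage and recording it in your own database (SQLite here for simplicity; the directory layout is hypothetical):

```python
# Sketch: record each user directory's disk usage in a local database,
# ready for your own billing or reporting queries.
import datetime
import os
import sqlite3

SITES_ROOT = "/var/www/sites"  # hypothetical layout: one directory per user

def dir_bytes(path: str) -> int:
    """Total size of all files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

db = sqlite3.connect("usage.db")
db.execute("""CREATE TABLE IF NOT EXISTS usage
              (user TEXT, day TEXT, bytes INTEGER)""")

today = datetime.date.today().isoformat()
for user in os.listdir(SITES_ROOT):
    size = dir_bytes(os.path.join(SITES_ROOT, user))
    db.execute("INSERT INTO usage VALUES (?, ?, ?)", (user, today, size))
db.commit()
```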