I'm looking for a list of all the AWS products and their pricing information, and I came across the Bulk API offer index file. This file contains the links to the pricing info for each AWS service (such as SNS). However, this file contains many products. I don't understand the difference between a service and a product; shouldn't there be only one product per service?
In this context, a service is an AWS service, like SNS, SQS, S3, etc. The products represent each individual usage and billing component associated with that service, such as (in the case of SNS) various types of message delivery events (per event), or outbound/inbound data transfer (per GB).
Note that billing components are defined for each individual product. When there is no charge for a particular product -- such as is sometimes the case for certain classes of data transfer -- there is still a billing component, but it happens to be priced as $0.00 per unit.
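To see the service-to-product relationship concretely, here is a minimal sketch that fetches the offer index file (the URL below is the documented entry point for the Price List Bulk API) and then counts the products inside one service's offer file:

```python
# Sketch: list each service's offer file, then count the products in one offer.
import json
import urllib.request

BASE = "https://pricing.us-east-1.amazonaws.com"
INDEX_URL = BASE + "/offers/v1.0/aws/index.json"

with urllib.request.urlopen(INDEX_URL) as resp:
    index = json.load(resp)

# "offers" maps each service code (e.g. AmazonSNS) to its offer file URL.
for code, offer in index["offers"].items():
    print(code, "->", offer["currentVersionUrl"])

# One service's offer file contains many products, keyed by SKU.
with urllib.request.urlopen(BASE + index["offers"]["AmazonSNS"]["currentVersionUrl"]) as resp:
    sns_offer = json.load(resp)
print(len(sns_offer["products"]), "products in the AmazonSNS offer")
```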
I have newly created an API service that is going to be deployed as a pilot to a customer. It has been built with AWS API Gateway, AWS Lambda, and AWS S3. With a SaaS pricing model, what's the best way for me to monitor this customer's usage and cost? At the moment, I have made a unique API Gateway, Lambda function, and S3 bucket specific to this customer. Is there a good way to create a dashboard that allows me (and perhaps the customer) to see this monitoring in detail?
An additional question: what's the best way to streamline this process when expanding to multiple customers? Each customer would have a unique API token; what's a better approach than the naive way of making unique AWS resources per customer?
I am new to this (a college student), but any insights or resources would go a long way. Thanks.
Full disclosure: I work for Lumigo, a company that does exactly that.
Regarding your question: as @gusto2 said, there are many tools you can use, and the best tool depends on your specific requirements.
The main difference between the tools is the level of configuration that you need to apply.
CloudWatch default metrics - The first tool you should use. This is an out-of-the-box solution that provides many metrics for your services, such as duration, number of invocations, errors, and memory usage. You can view metrics over different time windows and aggregations (p99, average, max, etc.).
This tool is great for basic monitoring.
Its greatest strength is also its limitation: it provides monitoring that is common to all services, so nothing is tailored to serverless applications. https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html
CloudWatch custom metrics - The other end of the scale: much more precise metrics. This lets you publish any metric data and monitor it: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
This is a great tool if you know exactly what you want to monitor and you are already familiar with your architecture's limitations and pain points.
And, of course, you can configure alarms over this data: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
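As a rough sketch of what publishing a custom metric and alarming on it looks like with boto3 (the namespace, metric name, and threshold below are made up for illustration):

```python
# Sketch: publish a custom metric data point, then alarm on it.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish one data point for a custom metric in an illustrative namespace.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "ProcessedOrders",
        "Value": 1.0,
        "Unit": "Count",
    }],
)

# Alarm if the metric sums above 1000 over a 5-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="MyApp-ProcessedOrders-High",
    Namespace="MyApp",
    MetricName="ProcessedOrders",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
)
```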
Lumigo - a 3rd-party company (again, as a disclosure, this is my workplace). Provides out-of-the-box monitoring created specifically for serverless applications, such as an abnormal number of invocations, costs, etc. This tool also provides troubleshooting capabilities for deeper observability.
Of course, there are more 3rd-party tools you can find online. All are great; just find the one that best suits your requirements.
Is there a good way to create a dashboard
There are multiple ways and options depending on your scale, amount of data, and requirements, so you can start small and simple, but check whether each option is feasible first.
You can start with CloudWatch. You can monitor basic metrics, create dashboards, and even share them with other accounts.
naive way of making unique AWS resources per customer
To start, I would consider creating custom CloudWatch metrics with the customer ID as a dimension and publishing the metrics from the Lambda functions.
It looks simple, but you should do the math and a PoC on the number of requested data points and dashboards to prevent a nasty surprise on the bill.
Another option is sending metrics/events to DynamoDB; using atomic operations you could build some basic aggregations directly (a kind of naive stream processing).
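A minimal sketch of that idea, assuming a hypothetical customer_usage table keyed by customer_id and day:

```python
# Sketch: per-customer usage counters in DynamoDB via an atomic ADD update.
import boto3

dynamodb = boto3.client("dynamodb")

def record_event(customer_id: str, day: str, count: int = 1) -> None:
    # ADD is atomic, so concurrent Lambda invocations will not lose updates.
    dynamodb.update_item(
        TableName="customer_usage",  # assumed table name
        Key={
            "customer_id": {"S": customer_id},
            "day": {"S": day},  # e.g. "2021-03-15"
        },
        UpdateExpression="ADD event_count :inc",
        ExpressionAttributeValues={":inc": {"N": str(count)}},
    )

record_event("customer-42", "2021-03-15")
```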
When scaling to many events and clients, you may need some serious API analytics, but that may be a different topic.
I am designing an application that runs in multiple regions, say R1 and R2.
Files are submitted to a multi-region cloud storage bucket. A PUT event in the bucket publishes a notification that either triggers a Cloud Function directly or goes to a Pub/Sub topic.
I want 80% of processing to be done by R1, and 20% by R2.
Approach 1:
Have 2 Cloud functions: CF-R1, CF-R2.
How do I ensure that 80% of storage bucket notifications trigger CF-R1 & 20% trigger CF-R2?
Approach 2:
Have a Pub/Sub topic that captures notifications from the storage bucket.
Is it possible to configure CF-R1 & CF-R2 on the topic so that I can split traffic?
Or is there any other approach to handle this scenario?
Approach 1: Use a Load balancer with URL maps
You could use a Cloud Function or Cloud Run behind a load balancer with a URL map (announced in June in this blog post - see documentation).
If you use the load balancer, you can send the notification to the balancer directly or via Pub/Sub with a push subscription.
Note that the load balancer is a separate product and you must take a close look at usage and price.
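As a sketch of the Pub/Sub-to-balancer wiring (the project, topic, subscription, and endpoint names below are placeholders):

```python
# Sketch: a push subscription that delivers bucket notifications to the
# load balancer's endpoint, where the URL map routes them onward.
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path("my-project", "bucket-notifications")
subscription_path = subscriber.subscription_path("my-project", "to-load-balancer")

push_config = pubsub_v1.types.PushConfig(
    push_endpoint="https://lb.example.com/process"  # placeholder balancer URL
)

subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": topic_path,
        "push_config": push_config,
    }
)
```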
Approach 2: Several Pub/Sub subscriptions with a filter
I think the second option could be viable. It is a bit of a crazy setup for your case, but it will work.
Google now has, in beta, the option to apply a filter to a Pub/Sub topic when you create a subscription.
Then you can have a Cloud Function (or Cloud Run) reacting to the Pub/Sub notifications it receives on its own subscription.
With this beta feature, you can filter by message attributes (equals ==, not equals !=, and hasPrefix).
The trick here is to have enough information to distribute the messages between the functions evenly because you cannot change the filter after you create the subscription.
If you can pass that information from your app, or as part of the filename, this is an easy way to do it.
If not, I guess the crc32 might have enough information for the filter you need.
But this filter has a 128 character limit that you hit with this:
hasPrefix(attributes.crc32,"A") OR hasPrefix(attributes.crc32,"B") OR hasPrefix(attributes.crc32,"C") OR hasPrefix(attributes.crc32,"D") OR hasPrefix(attributes.crc32,"E")
With the filter above you cover almost 10% of the possible CRC32 values. Not bad for some simple cases, but not good for you, since you would have to configure a lot of subscriptions.
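For what it's worth, here is a sketch of creating one such filtered subscription with the Python client (project, topic, and subscription names are placeholders; the filter syntax follows the Pub/Sub docs):

```python
# Sketch: a filtered subscription for CF-R2 that matches roughly 10% of
# messages (crc32 attribute values starting with A through E).
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path("my-project", "bucket-notifications")

subscriber.create_subscription(
    request={
        "name": subscriber.subscription_path("my-project", "cf-r2-sub"),
        "topic": topic_path,
        # Filters cannot be changed after the subscription is created.
        "filter": (
            'hasPrefix(attributes.crc32,"A") OR hasPrefix(attributes.crc32,"B") OR '
            'hasPrefix(attributes.crc32,"C") OR hasPrefix(attributes.crc32,"D") OR '
            'hasPrefix(attributes.crc32,"E")'
        ),
    }
)
```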
I am working on a project and all the project resources are in 'me-south-1'.
I also have resources in other regions.
I need to send a detailed bill to the client.
Could anyone suggest how I can filter it according to the region?
For detailed billing (with the ability to filter) the best approach is to use Cost Explorer.
By using this service you can apply a range of filters (including region); this can also be done programmatically.
Be aware that the Cost Explorer API charges $0.01 per request.
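As a sketch of the programmatic route with boto3 (the dates are illustrative; REGION is one of Cost Explorer's documented filter dimensions):

```python
# Sketch: last month's unblended cost for me-south-1, grouped by service.
# Remember each Cost Explorer API request is billed at $0.01.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-02-01", "End": "2021-03-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {"Key": "REGION", "Values": ["me-south-1"]}},
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```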
A company's data warehouse receives orders from multiple ordering systems. This data needs to be stored, and sales commission needs to be paid based on the sales made in each state. A mapping table is present which associates each state with a sales manager. How would you implement such a solution? Which AWS services would you use? What are the major design decisions you would take to ensure that a payment is accurately tracked?
You would need a database to store the information, such as Amazon RDS for MySQL. It doesn't sound like the data volume or usage justifies a Data Warehouse solution like Amazon Redshift.
You'll also need to run some application logic somewhere, presumably on an Amazon EC2 instance.
The design of the application, including ensuring that the "payment is accurately tracked", is totally your responsibility. AWS provides the infrastructure for such a system, but not the software application.
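To make the mapping-table idea concrete, here is a minimal sketch using sqlite3 as a local stand-in for RDS MySQL; the table layout, sample rows, and the 5% commission rate are assumptions, not part of the question:

```python
# Sketch: join orders to the state-to-manager mapping and compute commissions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, state TEXT, amount REAL);
    CREATE TABLE state_manager (state TEXT PRIMARY KEY, manager TEXT);
    INSERT INTO orders VALUES (1, 'TX', 100.0), (2, 'TX', 50.0), (3, 'CA', 200.0);
    INSERT INTO state_manager VALUES ('TX', 'alice'), ('CA', 'bob');
""")

rows = conn.execute("""
    SELECT m.manager, SUM(o.amount) * 0.05 AS commission
    FROM orders o JOIN state_manager m ON o.state = m.state
    GROUP BY m.manager
""").fetchall()

for manager, commission in rows:
    print(manager, round(commission, 2))
```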
Is there some way to purchase a product on Amazon via the API?
Currently I'm buying several products on a daily basis, where each product can be delivered to a different address, and each time I have to go through the checkout phase on Amazon (many clicks).
According to my searches (for example Programmatically make Amazon purchase?) it seems that there is no way to purchase a product via the API and I understand the reasons for that.
However, I wonder if there is some other way to automate the process of ordering multiple products on Amazon.
Another way to approach it would be to automate the browser with Selenium. Of course, this would require updating the code every time the Amazon website changes.
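For illustration only, here is a minimal Selenium sketch; the product URL and the element id are assumptions, and exactly the kind of thing that breaks when the site changes:

```python
# Sketch: open a product page and click its add-to-cart button.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://www.amazon.com/dp/EXAMPLE-ASIN")  # placeholder product page
    # "add-to-cart-button" is an assumed element id; verify it in the page source.
    driver.find_element(By.ID, "add-to-cart-button").click()
    # ...continue through checkout the same way, one locator and click at a time...
finally:
    driver.quit()
```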