AWS EC2 Cost comparison in one place? [closed] - amazon-web-services

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
We currently have more than twenty AWS accounts, and each account has 6-10 On-Demand EC2 instances (both Linux and Windows, of different sizes and types), around 100 instances altogether. I am looking at cost-saving options such as Compute Savings Plans, EC2 Instance Savings Plans, and Reserved Instances, but I am unable to compare all the different options and their estimates side by side.
Compute and EC2 estimates are given as recommendations under Billing -> Cost Explorer, but you need to go through each account, then select an option (Compute or EC2 Savings Plan), then a payment option, then a term of 1 or 3 years before it displays the estimate.
I want to see all 100 instances and their prices on one page, if possible, like this:
under a Compute Savings Plan for 1 and 3 years, with full, partial, or no upfront payment
under an EC2 Instance Savings Plan for 1 and 3 years, with full, partial, or no upfront payment
under Reserved Instances for 1 and 3 years, with full, partial, or no upfront payment
Is there any easier way to get this done?
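Since the console only shows one combination at a time, the same estimates can be pulled programmatically through the Cost Explorer API and collected into a single table across all accounts. A minimal sketch in Python: the `get_savings_plans_purchase_recommendation` call and its option values come from the Cost Explorer API, but the profile-per-account setup and the exact summary fields read out here are assumptions you may need to adjust.

```python
import itertools

# The 12 Savings Plan combinations the console makes you click through
# one at a time: 2 plan types x 2 terms x 3 payment options.
PLAN_TYPES = ["COMPUTE_SP", "EC2_INSTANCE_SP"]
TERMS = ["ONE_YEAR", "THREE_YEARS"]
PAYMENTS = ["NO_UPFRONT", "PARTIAL_UPFRONT", "ALL_UPFRONT"]

def sp_combinations():
    """Every Savings Plan option the question wants to see side by side."""
    return list(itertools.product(PLAN_TYPES, TERMS, PAYMENTS))

def fetch_recommendations(profile_name):
    """Pull one estimate per combination for one account.

    Requires boto3 and credentials for the given profile; run this once
    per account (or assume a role into each) and concatenate the rows.
    """
    import boto3  # imported here so the pure helpers above work without it
    ce = boto3.Session(profile_name=profile_name).client("ce")
    rows = []
    for plan, term, payment in sp_combinations():
        resp = ce.get_savings_plans_purchase_recommendation(
            SavingsPlansType=plan,
            TermInYears=term,
            PaymentOption=payment,
            LookbackPeriodInDays="THIRTY_DAYS",
        )
        # Field names below follow the API response shape; verify against
        # your boto3 version.
        summary = resp.get("SavingsPlansPurchaseRecommendation", {}).get(
            "SavingsPlansPurchaseRecommendationSummary", {}
        )
        rows.append((profile_name, plan, term, payment,
                     summary.get("EstimatedMonthlySavingsAmount")))
    return rows
```

Reserved Instance estimates can be added the same way via `get_reservation_purchase_recommendation`, and the combined rows dumped to CSV for the one-page view.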

This is probably the best you can get in terms of what you are looking for:
https://calculator.s3.amazonaws.com/index.html

From What is AWS Compute Optimizer? - AWS Compute Optimizer:
AWS Compute Optimizer is a service that analyzes the configuration and utilization metrics of your AWS resources. It reports whether your resources are optimal, and generates optimization recommendations to reduce the cost and improve the performance of your workloads. Compute Optimizer also provides graphs showing recent utilization metric history data, as well as projected utilization for recommendations, which you can use to evaluate which recommendation provides the best price-performance trade-off. The analysis and visualization of your usage patterns can help you decide when to move or resize your running resources, and still meet your performance and capacity requirements.
Compute Optimizer provides a console experience, and a set of APIs that allows you to view the findings of the analysis and recommendations for your resources across multiple AWS Regions. You can also view findings and recommendations across multiple accounts, if you opt in the management account of an organization. The findings from the service are also reported in the consoles of the supported services, such as the Amazon EC2 console.

https://ec2instances.github.io/ is a tiny open-source tool that pulls pricing data from the AWS pricing JSON API and presents it as a table.
Disclaimer: I built this (for myself).

Related

What is the closest latency efficient AWS region for GCP us-central1 region? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 1 year ago.
I am evaluating a multi-cloud setup and would like to find the lowest-latency AWS region for GCP's us-central1 region; my options in AWS are US East (2) and US West (2). I was unable to find any service or guide that provides this mapping.
A couple of Stack Overflow answers used manual scripts to find this mapping. Is there any resource or better way to map an AWS region to a GCP region?
I'm not aware of any tool that measures multi-cloud latency. In GCP there is a tool called Performance Dashboard, in the Network Intelligence menu, where you can choose up to 5 GCP regions and measure latency across Google's network over a window from 1 hour to 6 weeks. The latency graph shows the median latency across all VMs deployed on Google Cloud: the vertical axis shows median latency in milliseconds (ms), and the horizontal axis shows the change over time. I hope this information gives you an overview to help make your decision and compare with AWS.
You could use the results in the aforementioned Performance Dashboard and make the simplifying assumption that one of Google's "east" regions is close to AWS's east, and similarly for "west". Additionally, if you don't have an account yet, there is a public dashboard which shows inter-region latency and throughput in Google Cloud, and you could apply the same sort of assumption.
But ultimately, unless and until a public inter-cloud latency tool becomes available, your best bet is probably to fire up some VMs and take some measurements. John Hanley's caveat that what you see "today" may not be what you see "tomorrow" is definitely worth keeping in mind.
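If you go the measure-it-yourself route, a TCP handshake timer against regional service endpoints gives a rough first approximation before you invest in full VMs on both sides. A minimal sketch (the endpoint hostnames are illustrative; run it from a VM in us-central1 for a meaningful comparison):

```python
import socket
import statistics
import time

def tcp_latency_ms(host, port=443, samples=5):
    """Median TCP handshake time in milliseconds, a rough proxy for RTT."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # Timing the three-way handshake approximates one round trip.
        with socket.create_connection((host, port), timeout=5):
            times.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(times)

if __name__ == "__main__":
    # Candidate AWS regional endpoints for the two options in the question.
    for host in ("ec2.us-east-2.amazonaws.com", "ec2.us-west-2.amazonaws.com"):
        print(host, round(tcp_latency_ms(host), 1), "ms")
```

This only measures the network path from wherever the script runs, so the numbers are as caveat-laden as any "today vs. tomorrow" measurement; repeat at different times of day before drawing conclusions.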

Billing for multiple users of google cloud APIs on one GCP project [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 1 year ago.
I have one GCP project and multiple users/service accounts that use the Google Cloud APIs (e.g. Cloud Text-to-Speech, Cloud Speech-to-Text, etc.). In the metrics overview for each API it is possible to see how often an API has been called and by whom, but in the billing overview it is not possible to identify which user/service account caused a specific amount of cost. So my question is: is it possible to identify the different users/service accounts in the actual billing costs?
Normally, one would use labels to distinguish between different users, but unfortunately labels are not supported for those APIs (see list of currently supported services: https://cloud.google.com/resource-manager/docs/creating-managing-labels#label_support)
Additionally, each user/service account has a separate Cloud Run instance connected to it that runs a server listening for incoming requests and forwarding them to the corresponding API. Would this approach somehow facilitate the mapping from user to costs in one GCP project?
Metrics and billing are 2 different things.
Google provides metrics so you can follow and understand the usage of a service in your project.
Billing is at the project level: whatever the user or service account, YOU pay. How you re-bill the service to your users is not Google's concern.
So the solution here is to use the metrics to get the data and then distribute the cost according to the API usage.
Similarly, a Cloud Run label will give you details in the BigQuery billing export, but Google will still charge you for all your services.
Ultimately, if the services/customers are independent, you could create a project per customer, and thus have 1 free tier per project (where applicable) and, above all, 1 bill per project, and thus per customer!
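The "distribute the cost according to the API usage" step can be as simple as a proportional split. A minimal sketch, assuming you have already pulled per-account call counts from the metrics (the account names are placeholders):

```python
def split_bill(total_cost, calls_by_account):
    """Distribute a project-level bill proportionally to API call counts."""
    total_calls = sum(calls_by_account.values())
    if total_calls == 0:
        # Nothing was called; nobody owes a share.
        return {account: 0.0 for account in calls_by_account}
    return {account: total_cost * calls / total_calls
            for account, calls in calls_by_account.items()}
```

For example, `split_bill(100.0, {"svc-a": 3000, "svc-b": 1000})` returns `{"svc-a": 75.0, "svc-b": 25.0}`. This assumes every call costs the same, which holds within one API tier but not across APIs; for mixed APIs, run the split per API and sum the shares.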

AWS Pricing VS Google-Cloud-Platform Pricing [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I want to host my website (PHP/MySQL) on a cloud platform. My website is new, and I don't expect much traffic for now.
So I tried to compare the lowest-configuration costs of cloud services between GCP and AWS. The lowest configuration cost according to the Google Cloud Platform pricing calculator is as follows:
Google Compute Engine (f1-micro): $4.09
Google Cloud SQL (D0 Instance): $11.30
Datastore (1GB): $0.18
Total: $15.57
(For details, have a look on this link: https://goo.gl/wJZikT )
Meanwhile, the lowest configuration cost according to the AWS pricing calculator is:
Amazon EC2 (t1.micro): $14.64
Amazon RDS (db.t1.micro with 1GB of storage): $18.42
Amazon S3: $0.11
Total: $33.17
(For details, have a look on this link http://goo.gl/Pe7dFt )
My question is: how can there be such a big difference in the cost of cloud services between Google Cloud Platform and AWS? Is there anything wrong with my estimate? If so, please share with me a link to the minimal configuration on AWS.
Thanks.
There are 2 main reasons for the difference:
Google's micro VM uses a shared core, not a dedicated one. Cores seem to be the most expensive part of a VM if you look at prices for both AWS and GCE.
Google provides its sustained-use discount on a monthly basis (an effective 30% discount for a full month of use), while Amazon forces you to commit for a year with an upfront payment to get any discount.
Both factors above allow the Google VM to come in at a lower cost.
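For intuition on where the 30% figure comes from: under the sustained-use schedule GCE documented at the time, each successive quarter of the month was billed at a decreasing fraction of the base rate (100%, 80%, 60%, 40%), which averages out to 70% of the base rate for a full month. A sketch of that arithmetic (the tier values reflect the old GCE pricing page and may not match current pricing):

```python
# Old GCE sustained-use tiers: each quarter of the month is billed at a
# decreasing fraction of the base hourly rate.
TIERS = [1.00, 0.80, 0.60, 0.40]  # rate multiplier per 25% of the month

def effective_multiplier(fraction_of_month):
    """Average rate multiplier for a VM running this fraction of a month."""
    billed = 0.0
    for i, rate in enumerate(TIERS):
        lo, hi = i * 0.25, (i + 1) * 0.25
        if fraction_of_month <= lo:
            break
        # Bill only the part of this tier the VM actually ran through.
        billed += (min(fraction_of_month, hi) - lo) * rate
    return billed / fraction_of_month
```

A full month gives a 0.70 multiplier (the 30% discount), and crucially the discount scales down gracefully: half a month still earns a 10% discount, with no year-long commitment.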
I have tried VMs on GCE with shared cores and did not have any problems. If you use Google monitoring, you can track how much core sharing is actually affecting you via the CPU steal metric. This article from Stackdriver explains it really well.
Side note: Stackdriver has since been acquired by Google and is what Google uses for monitoring VMs.

AWS EC2 billed hours per instance in a given time period [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
My CIO is asking me for a monthly "per instance" breakdown of EC2 charges, as some of our EC2 instances are run on behalf of specific customers. Does anyone know how to accomplish this?
I can use Java, Python, or the AWS command-line tools if necessary, but a reporting tool or service is preferable.
You need to tag resources associated with a particular customer (for example EC2 instances, RDS) and enable the Detailed Billing Report.
Log into the My Account area of the console and go to the Billing Preferences area. Enable Monthly Report, Programmatic Access and Detailed Billing Report.
AWS will start to aggregate your billing to a nominated S3 bucket as CSV files and break it down by tags. There will be a charge for the storage on S3.
Aggregation by tags only starts when you turn it on, so you won't get a full month's data until the next report.
More details here and here on how to set up and analyse the data.
Tag the instances; the tags will then be reflected in your bills.
There is a fairly new tool open-sourced by Netflix called Ice, which allows you to visualize the billing details retrieved via the AWS reports generated into your S3 buckets.
You might also want to check the answers over at serverfault to a similar question.
The first thing is to enable detailed billing export to an S3 bucket (see here).
Then I wrote a simplistic server in Python that retrieves your detailed bill and breaks it down per service type and usage type (see it on this GitHub repo).
That way you can check at any time what your costs are, which services cost you the most, etc.
If you tag your EC2 instances, S3 buckets, etc., they will also show up on a dedicated line.
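Once the detailed billing CSVs land in S3, the per-customer breakdown the question asks for is a small aggregation job. A minimal sketch; the column names follow the detailed billing report convention of prefixing tag columns with `user:`, but the tag key `Customer` and the `UnBlendedCost` column are assumptions to check against your actual report:

```python
import csv
import io
from collections import defaultdict

def costs_by_tag(report_text, tag_column="user:Customer",
                 cost_column="UnBlendedCost"):
    """Sum line-item costs per tag value from a detailed billing CSV."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report_text)):
        cost = row.get(cost_column) or "0"
        # Untagged line items are grouped under their own bucket so
        # nothing silently disappears from the total.
        totals[row.get(tag_column) or "(untagged)"] += float(cost)
    return dict(totals)

# Illustrative rows in the detailed-billing-report column layout.
sample = """ProductName,UsageType,user:Customer,UnBlendedCost
Amazon Elastic Compute Cloud,BoxUsage:t3.micro,acme,1.25
Amazon Elastic Compute Cloud,BoxUsage:t3.micro,acme,0.75
Amazon Simple Storage Service,TimedStorage-ByteHrs,,0.10
"""
```

Running `costs_by_tag(sample)` groups the two tagged EC2 line items under `acme` and the storage line under `(untagged)`; in practice you would stream the CSV straight from the S3 bucket instead of a string.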
I work for Cloudability, and our tool is built to do exactly that. It collects AWS billing and usage data, as well as your tags, from all of your accounts and puts it into a custom reporting interface. It's completely point-and-click, so you don't have to mess around with writing scripts or building spreadsheets.
A lot of companies are using it to do exactly what you're talking about: splitting up costs/usage by instance, department, project, client, etc.
You can check it out at https://cloudability.com

Moving wordpress to Amazon Web Services [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I'm planning to move my website, which runs multiple WordPress installs, to Amazon Web Services. My daily visitors number about 22,000, sometimes over 90k, and then the site crashes! My current hosting company charges me nearly $100 including support; without support it would cost $50. The average bandwidth is about 400 GB.
Can I ask, please: how much will it cost me, and how can I get started with Amazon Web Services?
Kind regards
Start out by looking at the different types of hosting that Amazon offers and which one is the correct fit for your site. Amazon EC2 (Elastic Compute Cloud) provides the servers you can get hosted in the cloud.
The costs differ depending on how much storage space and bandwidth you need; there is a helpful cost guide on the EC2 page, with different pricing for the different types of servers. On-Demand and Spot Instances can be brought up and down on the fly, while if you need a server running constantly, you can make a down payment for a Reserved Instance.
You can calculate your fees depending on your current usage from the tools AWS provides. http://calculator.s3.amazonaws.com/calc5.html
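As a back-of-the-envelope check before opening the calculator, the dominant line items are usually instance hours and data transfer out. A sketch with placeholder unit prices (these are illustrative, not current AWS rates; plug in the values the calculator gives you):

```python
# Placeholder unit prices: look up current values in the AWS calculator.
INSTANCE_HOURLY_USD = 0.0464   # assumed rate for a small instance
TRANSFER_OUT_PER_GB_USD = 0.09  # assumed data-transfer-out rate

def monthly_estimate(hours=730, transfer_gb=400):
    """Back-of-the-envelope monthly cost: compute plus data transfer out."""
    return hours * INSTANCE_HOURLY_USD + transfer_gb * TRANSFER_OUT_PER_GB_USD
```

With one small instance running all month and the 400 GB of transfer mentioned in the question, this lands around $70 under these assumed prices, in the same ballpark as the $50-100 currently being paid, before accounting for storage, backups, or a second instance for traffic spikes.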
This is also a good article on getting started with WordPress on AWS:
http://wp.tutsplus.com/tutorials/scaling-caching/deploy-your-wordpress-blog-to-the-cloud/
AWS also provides a Free Tier of services, provided you stay under a certain amount of usage; that is detailed at http://aws.amazon.com/free/ . I also found this YouTube video on setting up EC2 instances very helpful: http://www.youtube.com/watch?v=JPFoDnjR8e8 . From what I understand, unless your WordPress install gets a crazy number of hits, you will probably fall under the Free Tier.