AWS web application - rough estimate

I'm trying to make a rough estimate of how much it would cost per hour to run a web application on AWS. I understand that this depends on the type of web application, network capacity, throughput, etc. Very roughly, how many concurrent sessions can a medium or large server manage? Let's say the number of clients at any given time is 8,000 - roughly, what would that cost?
Thank you!

Nobody can answer your question as asked. A rough estimate could be anywhere between 10 and 10k USD. What I suggest is to use the AWS calculator to build your own estimate: add some EC2 instances and some data transfer out, and you should see a ballpark figure.
https://calculator.s3.amazonaws.com/index.html
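For a very rough back-of-envelope, the shape of the calculation looks like the sketch below. Every number in it (sessions per server, hourly rate, transfer volume and price) is a hypothetical placeholder, not a real quote; take actual prices from the calculator above.

    # Back-of-envelope hourly cost sketch. Every figure below is a
    # hypothetical assumption for illustration; take real prices from
    # the AWS calculator.
    CONCURRENT_SESSIONS = 8000
    SESSIONS_PER_SERVER = 1000       # assumed capacity of one app server
    HOURLY_RATE = 0.10               # assumed on-demand USD per instance-hour
    GB_OUT_PER_SESSION_HOUR = 0.005  # assumed data transfer out per session-hour
    TRANSFER_PRICE_PER_GB = 0.09     # assumed USD per GB transferred out

    servers = -(-CONCURRENT_SESSIONS // SESSIONS_PER_SERVER)  # ceiling division
    compute = servers * HOURLY_RATE
    transfer = CONCURRENT_SESSIONS * GB_OUT_PER_SESSION_HOUR * TRANSFER_PRICE_PER_GB
    print(f"~{servers} servers, roughly ${compute + transfer:.2f} per hour")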

Related

How to adjust and measure network performance on AWS

Lately I have been struggling to understand what my network speed (downlink) is between nodes on AWS (in a multi-homed cluster, with computers in different regions).
I see a lot of fluctuation when I measure it, both with a script I have written (based on this link and SCP) and with iperf.
I believe this is due to network utilization, which changes rapidly (mostly between regions), but I still don't understand from the AWS documentation what performance I am actually paying for - a minimum and maximum downlink rate, for example (aws instances).
At first I tried the T2 type, and since it has burstable CPU performance, I thought the NIC performance might be bursty as well, so I moved to the M4 type - but I got the same problems with M4.
Is there any way to know my NIC downlink rate based on the instance type and size?
*I have asked a similar question on the AWS forum, but I haven't gotten a response (https://forums.aws.amazon.com/thread.jspa?threadID=296389).
There is no way to get a better indication than your own measuring. AWS does not publish anything specific about this performance, except for the larger instance types where network performance is explicitly stated, e.g. the m5.12xlarge being rated at 10 Gbps. Most likely network performance does have a burst component on the smaller instance types.
There are pages with other people's benchmarks, but you won't find any official answer for any of this.
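If you just want the advertised rating rather than a measurement, newer versions of the EC2 API expose it via DescribeInstanceTypes. A minimal boto3 sketch (assuming boto3 is installed and credentials are configured; note the rating is a coarse label, not a guaranteed rate):

    # Print the advertised network performance rating per instance type.
    # The rating is a coarse label such as "Moderate" or "10 Gigabit",
    # not a guaranteed minimum downlink rate.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instance_types(
        InstanceTypes=["t2.micro", "m4.large", "m5.12xlarge"])
    for it in resp["InstanceTypes"]:
        print(it["InstanceType"], "->", it["NetworkInfo"]["NetworkPerformance"])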

Estimate AWS cost

The company I work for is planning to use AWS to host a new website for a client. Their old website had roughly 75,000 sessions and 250,000 page views per year. We haven't used AWS before, and I need to give a rough cost estimate to my project manager.
This new website is going to be mostly content-driven with a CMS backend (probably WordPress), plus a cost calculator for their services. Can anyone give me a rough idea of the cost to host this kind of website on AWS?
I have used the Simple Monthly Calculator with a single Linux t2.small on a 3-year upfront term, which gave me around $470.
(Forgive my English.)
The only way to know the cost is to know the actual services you will consume (Amazon EC2, Amazon EBS, database, etc). It is not possible to give an accurate "guess" of these requirements because it really does depend upon the application and usage patterns.
It is normally recommended that you implement the system and run it for a while before committing to Reserved Instances so that you have a chance to measure performance and test a few different instance types.
Be careful using T2 instances for production workloads. They are very powerful instances, but if the CPU credits run out, the available CPU is throttled (see the sketch below for one way to monitor this).
Bottom line: Implement, measure, test. Then you'll know what is right for your needs.
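If you do try T2 instances, you can watch the CPUCreditBalance metric while you measure, to see how close you are to being throttled. A minimal boto3 sketch (the instance ID is a placeholder):

    # Fetch the recent CPU credit balance of a T2 instance from CloudWatch.
    # "i-0123456789abcdef0" is a placeholder instance ID.
    from datetime import datetime, timedelta
    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUCreditBalance",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(hours=6),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])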
Take Note
When you are new to AWS you get a one-year free tier on a single t2.micro.
Just pointing that out; looking at your requirements, you may not need much more than that.
One load balancer and one app server should be fine (just use Route 53 to serve some static pages from S3 while upgrading or scaling).
Email subscriptions and the processing of some documents can be handled with AWS Lambda, SNS and SQS, which may further reduce the cost (you may be able to reduce the server size and do all the heavy lifting from Lambda); see the sketch below.
A simple web page with 3,000 requests/month can be handled by a t2.micro, which is almost free for one year, as mentioned in the note above.
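As a sketch of the Lambda + SNS + SQS idea (the queue URL and payload shape are hypothetical), a handler that takes an SNS notification and forwards the work to a queue could look like this:

    # Hypothetical Lambda handler: triggered by an SNS subscription, it
    # forwards each message to an SQS queue for background processing so
    # the web server itself stays small.
    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/doc-processing"  # placeholder

    def handler(event, context):
        for record in event["Records"]:
            message = record["Sns"]["Message"]
            sqs.send_message(QueueUrl=QUEUE_URL,
                             MessageBody=json.dumps({"doc": message}))
        return {"forwarded": len(event["Records"])}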
You don't have a lot of details in your question. AWS has a wide variety of services that you could be using in that scenario. To accurately estimate costs, you should gather these details:
What will the AWS storage be used for? A database, applications, file storage?
How big will the objects be? Each type of storage has different limits on individual object size, so estimate your largest object.
How long will you store these objects? This will help you determine static, persistent or container storage.
What is the total size of the storage you need? Again, different products have different limits.
How often do you need to do backup snapshots? Where will you store them?
Every cloud vendor has a detailed calculator to help you determine costs. However, to use them effectively you need to have all of these questions answered and you need to understand what each product is used for. If you would like to get a quick estimate of costs, you can use this calculator by NetApp.
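Once those questions are answered, the arithmetic itself is simple. A sketch with hypothetical sizes and prices (substitute your own answers and current figures from the pricing pages):

    # Rough monthly storage cost. All sizes and per-GB prices below are
    # hypothetical placeholders; use your own figures and current AWS prices.
    storage_gb = {"database (EBS)": 100, "files (S3)": 500, "snapshots": 200}
    usd_per_gb_month = {"database (EBS)": 0.10, "files (S3)": 0.023, "snapshots": 0.05}

    total = 0.0
    for kind, gb in storage_gb.items():
        cost = gb * usd_per_gb_month[kind]
        total += cost
        print(f"{kind}: {gb} GB -> ${cost:.2f}/month")
    print(f"total: ${total:.2f}/month")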

Can I improve performance of my GCE small instance?

I'm using cloud VPS instances to host very small private game servers. On Amazon EC2, I get good performance on their micro instance (1 vCPU [single hyperthread on a 2.5GHz Intel Xeon], 1GB memory).
I want to use Google Compute Engine though, because I'm more comfortable with their UX and billing. I'm testing out their small instance (1 vCPU [single hyperthread on a 2.6GHz Intel Xeon], 1.7GB memory).
The issue is that even when I configure near-identical instances running the same game with the same settings, the AWS EC2 instances perform much better than the GCE ones. To give you an idea, while the game isn't Minecraft, I'll use it as an example. On the AWS EC2 instances, successive world chunks load perfectly fine as players approach the edge of a chunk. On the GCE instances, even on more powerful machine types, chunks fail to load after players travel a certain distance, and they must disconnect from and re-log into the server to continue playing.
I can provide more information if necessary, but I'm not sure what is relevant. Any advice would be appreciated.
Diagnostic protocols to evaluate this scenario may be more complex than you want to deal with. My first thought is that this shared-core machine type might have some limitations in consistency. Here are a couple of strategies:
1) Try working backwards from a larger machine. Since you only pay for a minimum of 10 minutes, you can cheaply check whether performance is better on higher-level machine types before backing into the smaller instance. If you have consistent performance problems no matter the size of the box, then I'm guessing it's something to do with the nature of your application and the nature of their virtualization technology.
2) Try measuring the consistency of the performance. I get that it is unacceptable, but is it unacceptable based on how long the instance has been running? The nature of the workload? The time of day? If the performance is sometimes good but sometimes bad, then it's probably once again related to the type of your workload and their virtualization strategy.
Something Amazon is famous for is consistency. They work very hard to manage the consistency of their performance; it shouldn't spike up or down.
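One way to make strategy 2 concrete is to time a fixed unit of work repeatedly on both providers and compare the spread. A minimal sketch (the workload below is a stand-in; replace it with something that mimics your game server, e.g. chunk generation):

    # Time a fixed CPU-bound task repeatedly and report the spread.
    # The task is a stand-in for the real server workload.
    import statistics
    import time

    def task():
        return sum(i * i for i in range(200_000))

    samples = []
    for _ in range(60):
        start = time.perf_counter()
        task()
        samples.append(time.perf_counter() - start)
        time.sleep(1)  # spread samples out to catch intermittent throttling

    print(f"mean {statistics.mean(samples) * 1000:.1f} ms, "
          f"stdev {statistics.stdev(samples) * 1000:.1f} ms, "
          f"max {max(samples) * 1000:.1f} ms")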
My best guess, without all the details, is that you are using a very small disk. GCE throttles disk performance based on its size. You have two options: attach a larger disk or use PD-SSD.
See here for details on GCE Disk Performance - https://cloud.google.com/compute/docs/disks
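To illustrate the size-based throttling: persistent disk performance scales with disk size, so a rough estimate is a per-GB rate times the disk size. The rates below are illustrative placeholders; the real per-GB numbers are in the page linked above.

    # Illustrative only: GCE persistent disk performance scales with size.
    # The per-GB rates are placeholders; take real rates from
    # https://cloud.google.com/compute/docs/disks
    READ_MBPS_PER_GB = 0.12  # assumed pd-standard sustained read MB/s per GB
    READ_IOPS_PER_GB = 0.75  # assumed pd-standard read IOPS per GB

    for size_gb in (10, 100, 500):
        print(f"{size_gb:4d} GB disk -> ~{size_gb * READ_MBPS_PER_GB:.1f} MB/s, "
              f"~{size_gb * READ_IOPS_PER_GB:.0f} read IOPS")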
Please post back if this helps.
Anthony F. Voellm (aka Tony the #p3rfguy)
Google Cloud Performance Team

Is there a way to realistically model or estimate AWS usage?

This question is specifically about AWS and S3, but it could apply to other cloud services as well.
Amazon charges for S3 by storage (which is easily estimated: the amount of data stored times the price per GB).
But it also charges for requests, which is really hard to estimate. A page with one image stored in S3 technically generates one request per user per visit, but caching reduces that. Furthermore, how can I understand the costs with 1,000 users?
Are there tools that will extrapolate data of the current usage to give me estimates?
As you mention, it depends on a lot of different factors. Calculating the cost per GB is not that hard, but estimating the number of requests is a lot more difficult.
There are no tools that I know of that will calculate the AWS S3 costs based on historic access logs or the like. These calculations would also not be that accurate.
What you can best do is calculate the costs based on the worst-case scenario. In this calculation, you assume that nothing will be cached and that you will get peak requests all the time. In 99% of cases, the real costs will turn out lower than that worst-case figure.
If the outcome of that calculation is acceptable pricing wise, you're good to go. If it is way more than your budget allows, then you should think about various ways you could lower these costs (caching being one of them).
Cost calculation beforehand is purely to indicate whether the project or environment can realistically stay within budget. It's not meant to provide a 100% accurate estimate up front. The most important thing is to keep track of the costs after everything has been deployed: set up billing/budget alerts and check for possible savings.
The AWS pricing calculator should help you get started: https://calculator.aws/
Besides using the calculator, I tend to prefer the actual pricing pages of each individual service and to work out the numbers in a spreadsheet. This gives me a more in-depth overview of the actual costs.
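A worst-case sketch in that spirit, assuming zero caching so every visit fetches every object; all traffic figures and prices are placeholders that only show the shape of the calculation:

    # Worst-case S3 request cost: assume no caching at all, so every
    # visit fetches every object. All figures are hypothetical placeholders.
    users_per_month = 1000
    visits_per_user = 10
    objects_per_page = 5      # images etc. served from S3
    avg_object_mb = 0.5

    get_requests = users_per_month * visits_per_user * objects_per_page
    transfer_gb = get_requests * avg_object_mb / 1024

    USD_PER_1000_GETS = 0.0004  # assumed GET request price
    USD_PER_GB_OUT = 0.09       # assumed data-transfer-out price

    cost = get_requests / 1000 * USD_PER_1000_GETS + transfer_gb * USD_PER_GB_OUT
    print(f"{get_requests} GETs, {transfer_gb:.1f} GB out -> "
          f"${cost:.2f}/month worst case")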

As an experiment I want to work a bit with AWS. How much might I expect to pay?

I'm about to go to PyCon, and while I have my hosting at Webfaction, one of the tutorials (JKM) asks for students to have AWS instances. I've been trying to figure out what some minimum-charge examples might look like. I'll have a LAMP server with Django and a requisite amount of storage, but next to no traffic.
Anyone have some guidance/advice? My Google searches and a look here did not turn up much useful info.
It depends on how long you need to run your instance. A small Linux instance costs 8.5 cents per hour. If you spend a week at PyCon and keep your instance running the entire week, it would cost $14.28 (168 hours x $0.085). You probably won't need it while you are asleep, so you can turn it off when you are done each day. If you only need it for an hour, it will cost you 8.5 cents.
Here's more details on the pricing if you need a bigger server or you need a windows server instead:
http://aws.amazon.com/ec2/#pricing
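Using the $0.085/hour figure from this answer, the saving from stopping the instance overnight is easy to work out (the 8-hours-a-day schedule is an assumed usage pattern):

    # Weekly cost at the 8.5 cents/hour rate quoted above.
    # The 8-hour daily schedule is an assumed usage pattern.
    RATE = 0.085  # USD per instance-hour
    print(f"running 24/7 for a week:    ${RATE * 24 * 7:.2f}")  # $14.28
    print(f"running 8 h/day for a week: ${RATE * 8 * 7:.2f}")   # $4.76

A small difference at this scale, but the stop-when-idle habit matters once you move to larger instance types.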
I think the AWS calculator might help also for estimating cost.
See http://calculator.s3.amazonaws.com/calc5.html
Also try here for a comparison of various on-demand services (plus rough calculations of how much it would cost to roll it yourself): https://secure.slicify.com/Calculator.aspx
(full disclosure - it's a page on my site).