I have been asked to give a report on the energy consumption of an m5.12xlarge instance. I am wondering how to come up with a factual, approximate number here. Has anyone come across this issue anywhere?
An important concept of the Cloud is that server utilization is often much higher than in a traditional data center.
In a typical data center, a server runs 24 hours a day but is only used for a few hours, and is often not fully utilized even during those hours.
In the Cloud, when an instance is not required it can be turned off, which means that the capacity is available for somebody else to use. Instance size can also be selected so that it is big enough for the desired workload, without having to get large servers for a potential future workload.
Thus, there is significantly less wastage in the cloud compared to on-premises servers. I mention this because somebody has presumably asked you to measure the environmental impact of a choice of computing infrastructure, and it's not really accurate to directly compare on-premises vs the Cloud since on-premises is typically over-provisioned and under-utilized.
A good article on this topic: Cloud Computing, Server Utilization, & the Environment | AWS News Blog
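As a starting point for an approximate figure, one common approach is a back-of-the-envelope estimate: take a rough power draw for the underlying host, apportion it by the share of vCPUs the instance uses, then scale by utilization and the data centre's PUE. The sketch below does exactly that; every constant in it (host wattage, host vCPU count, PUE, utilization) is an illustrative assumption, not a published AWS figure.

```python
# Rough, illustrative estimate of the energy an m5.12xlarge might draw.
# All constants below are assumptions for demonstration, not AWS-published data.

HOST_VCPUS = 96          # assume the underlying m5 host exposes 96 vCPUs
INSTANCE_VCPUS = 48      # m5.12xlarge has 48 vCPUs
HOST_POWER_WATTS = 350.0 # assumed average draw of the underlying host
PUE = 1.2                # assumed data-centre Power Usage Effectiveness
UTILIZATION = 0.5        # assumed average utilization of the instance

# Apportion host power by vCPU share, then scale by utilization and PUE.
instance_watts = HOST_POWER_WATTS * (INSTANCE_VCPUS / HOST_VCPUS) * UTILIZATION * PUE
kwh_per_month = instance_watts * 24 * 30 / 1000

print(f"Estimated draw: {instance_watts:.0f} W, ~{kwh_per_month:.0f} kWh/month")
```

Swap in whatever host wattage, utilization and PUE figures you can justify for your report; the structure of the calculation stays the same.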
I am completely new to the implementation of cloud solutions. I've just started taking AWS training courses.
But I already have a very fundamental question about the flow of development in cloud projects:
How do you go about developing solutions without incurring costs? I know there is a free tier, but in practice you need many elements that aren't free. Especially when working with infrastructure-as-code approaches (e.g. CloudFormation), costs can be incurred every time you try out the templates.
Is there something like a sandbox mode, or how else do you go about this in practice?
Outside of the AWS Free Tier you will be billed for creating services.
The best way to keep costs as low as possible is to combine the lowest-priced settings (such as instance class) with removing resources once you're finished with them. This will still cost something, but many services have moved to per-second billing (usually with a minimum charge of one minute), so the cost stays low.
Additionally, with some services (such as EC2, ECS and Fargate) you can make use of Spot capacity and sometimes pay as little as 10% of the On-Demand price, which helps reduce the cost of these resources.
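As a minimal sketch of requesting Spot capacity for a single EC2 instance with boto3 (the AMI ID is a placeholder you would replace with one valid in your region):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch one t3.micro as a Spot instance instead of On-Demand.
# ami-12345678 is a placeholder; substitute a real AMI ID for your region.
response = ec2.run_instances(
    ImageId="ami-12345678",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
print(response["Instances"][0]["InstanceId"])
```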
To ensure you can recreate resources whenever you need them, use infrastructure as code to roll them out again on demand (CloudFormation and Terraform are great options for this).
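For example, a small script can stand a CloudFormation stack up while you experiment and tear it down as soon as you're done, so nothing keeps billing overnight. A minimal sketch (the stack name and template path are placeholders):

```python
import boto3

cfn = boto3.client("cloudformation")
STACK_NAME = "sandbox-experiment"  # placeholder stack name

# Create the stack from a local template, then wait until creation completes.
with open("template.yaml") as f:
    cfn.create_stack(StackName=STACK_NAME, TemplateBody=f.read())
cfn.get_waiter("stack_create_complete").wait(StackName=STACK_NAME)

# ... experiment with the resources ...

# Delete everything the stack created so it stops incurring charges.
cfn.delete_stack(StackName=STACK_NAME)
cfn.get_waiter("stack_delete_complete").wait(StackName=STACK_NAME)
```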
Finally, be on the lookout for AWS conferences; they are a great way to pick up AWS credits for attending, which can be applied against most AWS services to offset your bill.
We have 2 Elastic VMs (Linux, currently DS2v2) behind an Azure Load Balancer. We are doing HTTP POSTs from our local LAN into the Load Balancer, but we seem to be getting throttled. We have tried changing the size of the VMs, adding additional premium SSDs, and running multiple threads on our end; none of it made a difference.
What we did do, though, was have the Elastic engine ingest all of the log files from the Linux boxes, and the index rate jumped quite high while it was ingesting them. So we are assuming that it's not really the Linux Elastic boxes that are throttling us.
We do have Kibana installed on the boxes and, as a baseline, we're just using the "Cluster Indexing Rate" for both our POSTs to the box and the local ingestion of the log files.
We do understand that, yes, there is going to be some latency and overhead since we are now involving the internet, but not at the rates we are currently seeing. (We have a 1 Gb pipe to the internet and it's nowhere near capacity, so we can at least rule out the path out of our company.)
The question is, where else can we look to determine where we might be getting throttled?
"Much slower" performance is a bit subjective and hard to pin down, but here is some information on factors that may affect it.
Azure Compute requests may be throttled at the subscription level and on a per-region basis. If you are getting an API throttling error, you could refer to this document to troubleshoot throttling issues and to the best practices for avoiding being throttled.
Some factors, such as the CPU and storage limits that differ between Azure VM sizes, may affect how quickly the VM can process incoming data. You could change to a size with more CPU and a premium SSD disk, or move the Azure resources to a region closer to your location. You could refer to this article.
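To help pin down where the throttling happens, it can also be useful to measure the raw indexing throughput you achieve from your LAN and compare it with the same bulk request run locally on the VM against localhost. A minimal sketch (the endpoint, index name and batch sizes are placeholders):

```python
import json
import time
import requests

ES_URL = "https://your-load-balancer:9200"  # placeholder endpoint
INDEX = "throughput-test"                   # placeholder index name
BATCH_SIZE = 1000
BATCHES = 20

# Build one _bulk request body of simple documents (NDJSON format).
doc = json.dumps({"message": "x" * 200})
action = json.dumps({"index": {"_index": INDEX}})
body = "\n".join(f"{action}\n{doc}" for _ in range(BATCH_SIZE)) + "\n"

start = time.time()
for _ in range(BATCHES):
    r = requests.post(
        f"{ES_URL}/_bulk",
        data=body,
        headers={"Content-Type": "application/x-ndjson"},
        verify=False,  # only for testing against self-signed certificates
    )
    r.raise_for_status()
elapsed = time.time() - start

print(f"{BATCH_SIZE * BATCHES / elapsed:.0f} docs/sec from this client")
```

If the rate from your LAN is far below the rate the same script achieves on the VM itself, the bottleneck is somewhere on the network path or load balancer rather than in the cluster.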
I'm trying to determine if it makes sense to switch our hosting to EC2 from a dedicated dreamhost server, and if so, what EC2 instance type I should choose to get a good idea of the cost prior to switching. I would like to go low and then bump up if need be.
Current Usage:
dedicated server with 4 GB RAM and 4 CPUs
average disk usage: 783 MB
average bandwidth: 8.5 GB
This is really all the info I get from our dreamhost control panel, so hopefully it's enough to provide some recommendations on where to start.
Using the calculator located here, I'm leaning towards a t2.xlarge. Is that too much? Not enough?
It is not possible for anyone to recommend the 'correct' instance type. This is because it depends on the operation of your particular application. It might be CPU-intensive, RAM-intensive, network-heavy, highly parallel, etc.
Some applications might need to handle occasional spikes of traffic, whereas other applications might be relatively consistent in their load.
The correct way to determine your 'best' instance type is to run tests that simulate the expected application load. If you can create an automated test, then you could run it against many different instance types and compare the performance vs cost.
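As an illustration of the "performance vs cost" comparison: once you have measured requests per second for each candidate type under your own load test, a few lines of arithmetic rank them. The throughput numbers and hourly prices below are made-up placeholders, not benchmarks or current AWS prices.

```python
# Hypothetical load-test results: instance type -> (requests/sec, $ per hour).
# Replace both numbers with your own measurements and current pricing.
results = {
    "t3.large":  (420, 0.0832),
    "m5.large":  (510, 0.0960),
    "c5.xlarge": (950, 0.1700),
}

# Rank by cost per request (cheapest first) and print cost per million requests.
for name, (rps, price) in sorted(
    results.items(), key=lambda kv: kv[1][1] / (kv[1][0] * 3600)
):
    cost_per_million = price / (rps * 3600) * 1_000_000
    print(f"{name}: ${cost_per_million:.3f} per million requests")
```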
Also, many applications are designed to be able to run across multiple instances, so it would be better to test various quantities of servers as well as their instance type.
You might also consider using Amazon EC2 Auto Scaling, which gives the ability to automatically add/remove servers based upon workload. This means that you could use much more powerful instances, but automatically turn some of them off during less-used periods. This affects the cost calculation because the more-powerful instances are more expensive, but you won't be using them all the time.
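If you go down that route, a target-tracking policy is the simplest way to wire this up. A minimal sketch with boto3 (the Auto Scaling group name is a placeholder, and it assumes the group already exists):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU across the group near 50% by adding/removing instances.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",  # placeholder group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```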
Then, you could also consider using Amazon EC2 Spot Instances, which can be up to 90% less cost but might be terminated when the demand for such instances is higher. You can also combine On-Demand and Spot Instances to give additional capacity at a lower cost.
(Spot and Auto Scaling are only really applicable if you are using more than one instance to host your application.)
And finally, if your application only requires one instance, you could also consider Amazon Lightsail, which combines the price of the instance and the network bandwidth into a single, more predictable price.
Bottom line: It depends!
One final word: Most companies consider switching to AWS not purely on a cost basis ("if it makes sense to switch our hosting to EC2 from a dedicated dreamhost server"), but rather on the breadth of features that AWS offers that are not available in a traditional server hosting service. If all you need is "a server", it's probably easiest to consider Amazon Lightsail or keep whatever is currently working for you. The cost saving with AWS won't be dramatic (it might not even be cheaper!), but it will offer you a lot more capabilities if you ever grow beyond just requiring "a server".
I have a web service running on several EC2 boxes. Based on the CloudWatch latency metric, I'd like to scale up additional boxes. But, given that it takes several minutes to spin up an EC2 instance from an AMI (with startup code to download the latest application JAR and apply OS patches), is there a way to have a "cold" server that could instantly be turned on/off?
Not by using Auto Scaling, at least not instantly in the way you describe. You could make it much faster, however, by baking your own AMI that already contains the JAR and the latest OS patches. These AMIs can be generated as part of your build pipeline. In that case, your only real wait time is for the OS and services to start, similar to a "cold" server.
Packer is a tool commonly used for such use cases.
Alternatively, you can manage it yourself by keeping servers stopped and starting them with custom Lambda functions triggered by CloudWatch alarms. But since stopped servers aren't exactly free either (you still pay for their EBS volumes), I would recommend against that for cost reasons.
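If you do go the self-managed route, the Lambda side is only a few lines. A minimal sketch (the instance ID is a placeholder, and it assumes the function is wired to your CloudWatch latency alarm via SNS or EventBridge):

```python
import boto3

ec2 = boto3.client("ec2")

# IDs of the pre-built "cold" instances kept in the stopped state (placeholder).
STANDBY_INSTANCE_IDS = ["i-0123456789abcdef0"]

def handler(event, context):
    """Start the standby instances when the latency alarm fires."""
    ec2.start_instances(InstanceIds=STANDBY_INSTANCE_IDS)
    return {"started": STANDBY_INSTANCE_IDS}
```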
Before you venture into the journey of auto scaling your infrastructure and spend the time and effort, perhaps you should do a bit of analysis on your traffic pattern day over day, week over week and month over month and see if it's even necessary. Try answering some of these questions:
What was the highest traffic your app has ever handled? How did the servers fare under that traffic? How was the user response time?
When does your traffic ramp up or hit its peak? Some apps get traffic during business hours, others in the evening.
What is your current throughput? For example, say you can handle 1k requests/min and two EC2 hosts average 20% CPU. If the requests triple to 3k requests/min, do you see around 60%-70% average CPU? That is a good indication that your app's usage is fairly predictable and can scale linearly by adding more hosts. But if you've never seen traffic burst like that, there's no point over-provisioning.
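To answer these, the historical data is already in CloudWatch. A minimal sketch that pulls a week of hourly average CPU for one instance (the instance ID is a placeholder; the same pattern works for request-count metrics on a load balancer):

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# One week of hourly average CPU for a single instance (placeholder ID).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```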
Unless you have a Zynga-like application where a large amount of traffic can arrive at once, better understanding your traffic pattern and throwing in an additional host as insurance may be all you need. I'm making these assumptions as I don't know the nature of your business.
If you do want to auto scale anyway, one solution would be to containerize your application with Docker or create your own AMI as others have suggested; it will still take a few minutes to boot the instances. The next option is to keep hosts on standby and add them to your load balancers using scripts (or Lambda functions) that watch metrics you define (I'm assuming your app is running behind load balancers), as in the sketch below.
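A minimal sketch of that last step, starting a standby host and registering it with an Application Load Balancer's target group (the instance ID and target group ARN are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

STANDBY_ID = "i-0123456789abcdef0"  # placeholder standby instance
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/my-app/0123456789abcdef"  # placeholder ARN
)

# Start the standby host, wait until it's running, then put it in service.
ec2.start_instances(InstanceIds=[STANDBY_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[STANDBY_ID])
elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": STANDBY_ID}],
)
```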
Good luck.
I'm coming from academia where I had HPC clusters at my disposal. Now I'm trying to deploy something on AWS.
I'm trying to budget for what it would cost, $-wise, to train some standard neural nets on standard data sets so I have an idea what other training will cost. Even ballparkish estimates are appreciated.
I know you can request faster GPUs or more of them, but I don't know the spread of speed vs. cost either; any insight here is also appreciated.
What would it cost to train ResNet-50 (or really any smallish ResNet) on CIFAR-10, a relatively small net on a small data set? (say, 100 epochs with reasonable batch size)
I do not know anything about ResNet or CIFAR, but as far as AWS EC2 pricing goes, it depends on the instance family, type and purchasing option:
On-Demand Instances: Highest cost. Ideal for prototyping and short-lived environments. Pricing
Reserved Instances: Discounts are based on the term of the reservation. There is also a No-Upfront option where you can reserve instances for 1 to 3 years; it does not provide the maximum savings but still cuts costs significantly. Pricing
Spot Instances: Cheapest option of all, but your application should be designed to handle interruptions, as AWS can terminate your instance with little notice. A recent announcement from AWS adds a termination notification for certain types of Spot Instances, which you may want to investigate. Pricing
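To put rough numbers on a training run, you can query current Spot prices for a GPU instance type and multiply by your expected wall-clock time. A minimal sketch (the p3.2xlarge type and the 3-hour estimate are illustrative assumptions, not a benchmark of ResNet-50 on CIFAR-10):

```python
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Recent Spot price history for a single-GPU instance type (assumption).
history = ec2.describe_spot_price_history(
    InstanceTypes=["p3.2xlarge"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.utcnow() - timedelta(hours=1),
)
prices = [float(p["SpotPrice"]) for p in history["SpotPriceHistory"]]
spot_price = min(prices)  # cheapest Availability Zone in the region right now

EXPECTED_HOURS = 3.0  # assumed wall-clock time for your training run
print(f"Spot price: ${spot_price:.3f}/hr")
print(f"Estimated training cost: ${spot_price * EXPECTED_HOURS:.2f}")
```

Running the same lookup for a few instance types (and timing a single epoch on each) gives you the speed-vs-cost spread you asked about without committing to a long run.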