Why not get 2 nano instances instead of 1 micro instance on AWS? - amazon-web-services

I'm choosing instances to run microservices on an AWS EKS cluster.
Reading about it in an article and looking at the AWS docs, it seems that choosing many small instances instead of one larger instance is the better deal.
There seems to be no downside to taking, for instance, 2 t3.nano (2 vCPU / 0.5 GiB each) vs 1 t3.micro (2 vCPU / 1 GiB). The price and the total memory are the same, but the total vCPU count grows with the number of instances.
I assume some processes run on each machine by default, but I found no mention of their impact on the machine's resources or usage. Is it negligible? Is there any advantage to taking one big instance instead?

The issue is whether your computing task can be completed on the smaller instances, and also that instance-to-instance communication carries overhead that intra-instance communication does not.
So it all comes down to fitting your solution, and your requirements, onto the instances.

There is no right answer to this question. The answer depends on your specific workload, and you have to try out both approaches to find out what works best for your case. There are advantages and disadvantages to both approaches.
For example, if the OS takes 200 MB on each instance, you will be left with only 600 MB across both nano instances combined vs 800 MB on the single micro instance.
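A quick sketch of that arithmetic, rounding the instance sizes to 500 MB and 1000 MB and assuming the same illustrative 200 MB of OS overhead per instance:

```python
OS_OVERHEAD_MB = 200  # illustrative per-instance OS footprint, not a measured value

def usable_memory_mb(instance_count, memory_per_instance_mb):
    """Total memory left for workloads after per-instance OS overhead."""
    return instance_count * (memory_per_instance_mb - OS_OVERHEAD_MB)

two_nanos = usable_memory_mb(2, 500)    # 2 x (500 - 200) = 600 MB
one_micro = usable_memory_mb(1, 1000)   # 1000 - 200      = 800 MB
```

The overhead is paid once per instance, so the more instances you split a fixed memory budget across, the less of it is left for your workload.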
When the cluster scales out, initializing 2 nano instances might roughly take twice as much time as initializing one micro instance to provide the same additional capacity to handle the extra load.
Also, as noted by Cargo23, inter-instance communication might increase the latency of your application.

Related

AWS OpenSearch Instance Types - better to have few bigger or more smaller instances?

I am a junior DevOps engineer and have this very basic question.
My team is currently working on provisioning an AWS OpenSearch cluster. Due to the nature of our problem, we require the storage-optimized instances. From the Amazon documentation I found that they recommend a minimum of 3 nodes. The required storage size is known to me; in the OpenSearch Service pricing calculator I found that I can choose either 10 i3.large instances or 5 i3.xlarge ones. I checked the prices; they are the same.
So my question is: when faced with such a problem, do I choose the smaller number of bigger instances or the bigger number of smaller instances? I am particularly interested in the reasoning.
Thank you!
Each VM has some overhead for the OS so 10 smaller instances would have less compute and RAM available for ES in total than 5 larger instances. Also, if you just leave the default index settings (5 primary shards, 1 replica) and actively write to only 1 index at a time, you'll effectively have only 5 nodes indexing data for you (and these nodes will have less bandwidth because they are smaller).
So, I would usually recommend running a few larger instances instead of many smaller ones. There are some special cases where it won't be true (like a concurrent-search-heavy cluster) but for those, I'd recommend going with even larger instances in the first place.
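The per-node overhead argument can be made concrete with a back-of-the-envelope calculation. The i3 RAM figures below (15.25 and 30.5 GiB) are the published instance sizes, but the 1 GiB per-node OS/system overhead is an assumption for illustration, not a measured value:

```python
NODE_OVERHEAD_GIB = 1.0  # assumed OS/system overhead per node

def cluster_usable_ram(nodes, ram_per_node_gib):
    """Total RAM left for OpenSearch after per-node overhead."""
    return nodes * (ram_per_node_gib - NODE_OVERHEAD_GIB)

ten_large   = cluster_usable_ram(10, 15.25)  # 142.5 GiB usable
five_xlarge = cluster_usable_ram(5, 30.5)    # 147.5 GiB usable
```

Same total RAM and price, but the 10-node layout pays the fixed overhead twice as many times.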

Increasing number of vCPUs for a single computation and billing

While studying basic ML algorithms on the MNIST database, I noticed that my netbook is too weak for the purpose. I started a free trial on Google Cloud and successfully set up a VM instance with 1 vCPU. However, it only boosts performance 3x, and I need much more computing power for some specific algorithms.
I want to do the following:
use 1 vCPU for setting up an algorithm
switch to plenty of vCPU to perform a single algorithm
go back to 1 vCPU
Unfortunately, I am not sure how Google will charge me for such a maneuver. I am afraid it will drain the $300 I have on my account. It is my very first day playing with VMs and using the cloud for computing, so I really need good advice from someone with experience.
Question: How do I manage the number of vCPUs on Google Cloud Compute Engine to run single expensive computations?
COSTS
The quick answer is that you pay for what you use: if you use 16 vCPUs for 1 hour, you pay for 16 vCPUs for 1 hour.
To get a rough idea of cost, I would advise you to take a look at the Price Calculator and create your own estimate with the resources you are going to use.
Running a machine with 1 vCPU and 3.75 GB of RAM for one day costs around $0.80 (if it is not a preemptible instance and without any committed-use discounts); a machine with 32 vCPUs and 120 GB of RAM, on the other hand, costs around $25/day.
Remember the rule: while it is running, you are paying for it. You can change the machine type as many times as you want according to your needs, and during the transition you pay only for the persistent disk. It can therefore make sense to switch the machine off whenever you are not using it.
Consider that you will also have to pay for networking and storage, but in your use case those costs are marginal; for example, 100 GB of storage for one day costs $0.13.
Notice that since September 2017 Google has extended per-second billing, with a one-minute minimum, to Compute Engine. I believe this is how most cloud providers work.
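To see how per-second billing with a one-minute minimum makes the burst approach affordable, here is a sketch using the ballpark daily rates quoted above (~$0.80/day for the small machine, ~$25/day for the big one); these are illustrative figures, not current GCP prices:

```python
def cost(rate_per_day, seconds):
    """Cost of running for `seconds`, billed per second with a 60 s minimum."""
    billed = max(seconds, 60)  # one-minute minimum charge
    return rate_per_day * billed / 86400  # 86400 seconds per day

big_for_an_hour = cost(25.0, 3600)   # ~$1.04 for one hour of 32 vCPUs
small_all_day   = cost(0.80, 86400)  # ~$0.80 for a full day of 1 vCPU
```

So an hour-long burst on the big machine costs about as much as a whole day on the small one, which is exactly why the stop/resize/start pattern pays off.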
ADDING VCPU
When the machine is off, you can modify the number of vCPUs and the amount of memory from the edit menu; there is a step-by-step official guide you can follow through the process. You can also change the machine type through the command line, for example setting a custom machine type with 4 vCPUs and 1 GB of memory:
$ gcloud compute instances set-machine-type INSTANCE-NAME --machine-type custom-4-1024
As soon as you are done with your computation, stop the instance and reduce the size of the machine (or leave it off).
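The whole cycle (stop, grow, compute, shrink) boils down to a short sequence of gcloud commands. Here is a sketch that just builds that sequence; the instance name and machine types are placeholders, and `custom-1-3840` is assumed to mean 1 vCPU with 3.75 GB:

```python
def resize_cycle(instance, big_type, small_type="custom-1-3840"):
    """Return the gcloud commands for a stop -> resize -> compute -> shrink cycle."""
    return [
        f"gcloud compute instances stop {instance}",
        f"gcloud compute instances set-machine-type {instance} --machine-type {big_type}",
        f"gcloud compute instances start {instance}",
        # ... run the expensive computation here, then scale back down ...
        f"gcloud compute instances stop {instance}",
        f"gcloud compute instances set-machine-type {instance} --machine-type {small_type}",
    ]

for cmd in resize_cycle("my-vm", "custom-16-16384"):
    print(cmd)
```

Remember that `set-machine-type` only works while the instance is stopped, which is why each resize is bracketed by a stop.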

Which AWS RDS instance to upgrade given following usage pattern

I have been using a t2.medium RDS instance, which regularly exhausts its CPU credit balance. Following is the graph of CPU credit balance over an interval of 6 weeks.
Since the next instance size up, t2.large, offers the same vCPU and ECU figures, does it provide any improvement in processing capability (such as an increased CPU credit earn rate)? What is the best course of action in this scenario, in terms of RDS instance type and other measures (apart from optimizing queries, which I will do, but I need a quick fix so users don't suffer slow response times)?
It's not just CPU credits that should be considered, but also CPU utilisation, memory utilisation, queue depth, etc. It looks like you are running CPU-intensive queries. The credit balance dropping to zero is a serious concern that should be resolved.
With t2 instances, you do NOT get 100% of the CPU.
As recommended by Krishna, I agree that you should try moving to m4.large instead of t2.large.
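A larger t2 does earn credits faster, which is worth quantifying. The toy model below uses the commonly cited t2 figures (t2.medium earns 24 credits/hour with a 576-credit cap, t2.large earns 36 with an 864-credit cap; one credit is one vCPU at 100% for one minute); check the current AWS docs before relying on these numbers:

```python
def balance_after(hours, earn_per_hour, vcpus, utilisation, start=0.0):
    """CPU credit balance after `hours` at a steady utilisation (no floor/cap applied)."""
    spend_per_hour = vcpus * utilisation * 60  # credits burned per hour
    return start + hours * (earn_per_hour - spend_per_hour)

# At a steady 50% on 2 vCPUs, both sizes burn more than they earn,
# but t2.large drains from a fuller tank at a slower rate:
medium = balance_after(10, 24, 2, 0.50, start=576)  # 576 - 360 = 216
large  = balance_after(10, 36, 2, 0.50, start=864)  # 864 - 240 = 624
```

If the sustained load exceeds even the t2.large baseline, the balance still hits zero eventually, which is why a fixed-performance instance like m4.large is the safer fix.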

Can I improve performance of my GCE small instance?

I'm using cloud VPS instances to host very small private game servers. On Amazon EC2, I get good performance on their micro instance (1 vCPU [single hyperthread on a 2.5GHz Intel Xeon], 1GB memory).
I want to use Google Compute Engine though, because I'm more comfortable with their UX and billing. I'm testing out their small instance (1 vCPU [single hyperthread on a 2.6GHz Intel Xeon], 1.7GB memory).
The issue is that even when I configure near-identical instances running the same game with the same settings, the AWS EC2 instances perform much better than the GCE ones. To give you an idea: while the game isn't Minecraft, I'll use it as an example. On the AWS EC2 instances, subsequent world chunks load perfectly fine as players approach the edge of a chunk. On the GCE instances, even on more powerful machine types, chunks fail to load after players travel a certain distance, and they must disconnect from and log back in to the server to continue playing.
I can provide more information if necessary, but I'm not sure what is relevant. Any advice would be appreciated.
Diagnostic protocols to evaluate this scenario may be more complex than you want to deal with. My first thought is that this shared-core machine type might have some consistency limitations. Here are a couple of strategies:
1) Try moving up from the smaller instance. Since you only pay for 10 minutes at a time, you could check whether performance is better on larger machine types. If you have consistent performance problems no matter the size of the box, then I'd guess it's something about the nature of your application and of their virtualization technology.
2) Try measuring the consistency of the performance. I get that it is unacceptable, but is it unacceptable based on how long the instance has been running? The nature of the workload? The time of day? If the performance is sometimes good but sometimes bad, then it's probably, once again, related to the type of your workload and their virtualization strategy.
Something Amazon is famous for is consistency. They work very hard to manage the consistency of their performance; it shouldn't spike up or down.
My best guess here, without all the details, is that you are using a very small disk. GCE throttles disk performance based on size. You have two options: attach a larger disk, or use PD-SSD.
See here for details on GCE Disk Performance - https://cloud.google.com/compute/docs/disks
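The size-based throttling can be sketched as a simple model: sustained throughput scales with provisioned size up to a per-VM cap. The per-GB rate and cap below are placeholders for illustration, not current GCE limits (see the docs linked above for real numbers):

```python
def read_throughput_mbps(size_gb, per_gb=0.12, cap=180.0):
    """Illustrative size-scaled disk throughput: per-GB rate up to a per-VM cap."""
    return min(size_gb * per_gb, cap)

small_disk = read_throughput_mbps(10)    # ~1.2 MB/s -- easily a bottleneck
big_disk   = read_throughput_mbps(500)   # ~60 MB/s
```

Under a model like this, a 10 GB boot disk gets a tiny fraction of the throughput of a 500 GB one, which would explain chunks failing to load even on larger machine types.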
Please post back if this helps.
Anthony F. Voellm (aka Tony the #p3rfguy)
Google Cloud Performance Team

EC2 server, lots of micro instances or fewer larger instances?

I was wondering which would be better: hosting a site on EC2 with many micro instances, or with fewer larger instances such as m1.large. All will sit behind one or a few larger instances acting as load balancers. I'll state my understanding, and anybody who knows better can add to it or correct me if I'm wrong.
Main reason for choosing micro instances is cost. A single micro instance on average gives around 0.35 ECU for $0.02/hour, while one small instance gives 1 ECU for $0.085/hour. If you do the math in $/ECU/hour, a micro instance works out to $0.057/ECU/hour, whereas a small instance is $0.085/ECU/hour. So for the same average computing power, 100 micro instances would be cheaper than 35 small instances.
The main problem with micro instances is their fluctuating performance, but I'm not sure whether this matters less when you have many instances.
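The $/ECU/hour arithmetic above can be checked quickly (these are 2010-era prices, long obsolete, kept only to reproduce the post's numbers):

```python
def cost_per_ecu_hour(price_per_hour, ecu):
    """Normalize an instance's hourly price by its compute units."""
    return price_per_hour / ecu

micro = cost_per_ecu_hour(0.02, 0.35)   # ~$0.057 per ECU-hour
small = cost_per_ecu_hour(0.085, 1.0)   # $0.085 per ECU-hour
```

Per unit of average compute, the micro comes out about a third cheaper, matching the claim that 100 micros beat 35 smalls on price.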
So does anybody have experience benching such setups and see the benefits and drawbacks? I'm trying to choose which way to go.
PS: an article on the subject, http://huanliu.wordpress.com/2010/09/10/amazon-ec2-micro-instances-deeper-dive/
Beware of micro instances; they may bite you. We have our test environment entirely on micro instances. Since it is just a functional test environment, it works smoothly. However, we happened to update an application (well, Jetty 7.5.3) that has a known bug causing higher CPU usage. This rendered those instances useless, as Amazon throttles the available CPU to 2%.
Also, micro instances are EBS-backed. EBS is not advisable (over instance-store) for high-IO workloads like the ones Cassandra and the like require.
If you want to save money and your software is architected to handle interruptions, you may opt for spot instances. They usually cost less than on-demand ones.
If none of these is an issue for you, I would say micro instances are the way to go! :)
Basics questions about micro instances performance
CPU pattern for micro
Stolen CPU on micro
I would say it depends on what kind of architecture your app will have and how reliable it needs to be:
AWS load balancers do not provide instant (maybe real-time is a better word?) auto-scaling, which is a different concept from fail-over. They work with periodic health checks, which adds a small delay because the checks are done via HTTP requests (more overhead if you choose HTTPS).
Depending on the architecture, you will have more points of failure if you choose more instances. To avoid that, your app will need to work asynchronously across instances.
If you choose more instances, you must benchmark and test your application more, to guarantee that those CPU bursts won't affect your app too much.
That's my point of view and it would be a very pleasant discussion between experienced people.