When we use small AWS instances (e.g., d2.xlarge), multiple instances may be allocated to the same physical host. I want to check whether two VM instances are on the same host. Is there a way to get the physical host ID of a VM? With this information, we could check whether two instances share a physical host.
The primary motivation is to improve the reliability of running stateful services in the cloud. We use d2.xlarge instances to run HBase/Kafka workloads in the cloud, and these services require data replication. One physical host can hold up to 8 d2.xlarge instances, so if a single physical node goes down, it may affect multiple VM instances at once and cause data loss.
As far as I know, Amazon won't let you know anything about their underlying infrastructure, and I cannot think of a reason why they should.
But I've found this blog post saying that you can use the CPUID instruction to find out the actual CPU of the underlying physical machine.
From that post:
The “cpuid” instruction is supported by all x86 CPU manufacturers, and it is designed to report the capabilities of the CPU. This instruction is non-trapping, meaning that you can execute it in user mode without triggering a protection trap. In the Xen paravirtualized hypervisor (what Amazon uses), it means that the hypervisor would not be able to intercept the instruction and change the result that it returns. Therefore, the output from “cpuid” is the real output from the physical CPU.
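Executing cpuid directly needs native code, but as a rough, Linux-only approximation of the same idea you could hash CPU-identity fields exposed in /proc/cpuinfo. This is a sketch only: the fields chosen below are illustrative, and on identical instance types they may well match across different physical hosts, so treat the result as a weak hint rather than proof.

```python
import hashlib
import os

# Fields to hash; illustrative only. On identical instance types these
# may match even across different physical hosts, so treat the result
# as a weak hint, not proof of co-location.
_FIELDS = ("model name", "physical id", "apicid")

def host_fingerprint(cpuinfo_text: str) -> str:
    """Hash selected CPU-identity lines from /proc/cpuinfo-style text."""
    kept = [line.strip() for line in cpuinfo_text.splitlines()
            if line.split(":", 1)[0].strip() in _FIELDS]
    return hashlib.sha256("\n".join(kept).encode()).hexdigest()

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print(host_fingerprint(f.read()))
```

Run the same script on two instances and compare the printed digests; identical digests are merely consistent with co-location, not conclusive.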
That said, if you need this information to ensure your instances don't all fail at once, I'd recommend launching instances in different availability zones. That way, even if a whole AZ goes down, you'd still have some instances up and running.
There is no official support from AWS on getting the VM placement info. Some large AWS customers are able to get customized support on this.
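Whatever per-instance host identifier you manage to obtain (a cpuid-based fingerprint, placement data from customized support, etc.), the co-location check itself is simple. A minimal sketch, assuming a mapping from instance ID to some host identifier:

```python
from collections import defaultdict

def colocated_groups(instance_to_host):
    """Return groups of instance IDs that map to the same host ID.

    `instance_to_host` is assumed to look like
    {"i-0abc": "host-fingerprint-1", ...}; only groups with more than
    one instance (i.e. actual co-location) are returned.
    """
    by_host = defaultdict(list)
    for instance, host in instance_to_host.items():
        by_host[host].append(instance)
    return [sorted(group) for group in by_host.values() if len(group) > 1]
```

If all replicas of an HBase region or Kafka partition land inside one returned group, a single host failure can take out every copy at once.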
We are migrating our production environment from DigitalOcean to GCP.
However, because the platform is different, we don't know where to find certain information about our VMs.
Is it possible to get a report showing the number of CPUs, machine type, amount of RAM, SSD capacity, and SSD usage per VM?
Compute Engine lets you export detailed reports of your Compute Engine usage (daily & monthly) to a Cloud Storage bucket using the usage export feature. Usage reports provide information about the lifetime of your resources.
VM instance insights help you understand the CPU, memory, and network usage of your Compute Engine VMs.
As @Dharmaraj mentioned in the comments, GCP introduced a new Observability tab designed to give insights into common scenarios and issues associated with CPU, disk, memory, networking, and live processes. With all of this data in one location, you can easily correlate signals over a given time frame.
Finally, the Stackdriver agent can be installed on GCE VMs, allowing additional metrics like memory monitoring. You can also use Stackdriver's notification and alerting features. However, premium-tier accounts are the only ones that can access agent metrics.
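The usage export feature delivers CSV files to your Cloud Storage bucket; once downloaded, summarizing them is straightforward. A sketch with assumed column names (the real export schema may differ, so adjust the field names to what your reports contain):

```python
import csv
import io
from collections import defaultdict

# Sample rows with assumed column names; a real GCE usage-export CSV
# has its own schema, so adjust "resource" / "quantity" accordingly.
SAMPLE_CSV = """\
resource,zone,quantity,unit
n1-standard-4,us-central1-a,24,hour
n1-standard-4,us-central1-a,24,hour
e2-medium,us-central1-b,12,hour
"""

def usage_by_resource(csv_text):
    """Total the 'quantity' column per resource type."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["resource"]] += float(row["quantity"])
    return dict(totals)
```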
I'm studying for my AWS Associate Architect exam, and I can't find an explanation for this question. Why are Dedicated Hosts more expensive than Dedicated Instances? I understand the main differences between the two; it just doesn't make sense in my head.
This is my perspective: if you ask for a Dedicated Host, you control the entire hardware: CPUs, RAM, sockets, etc. You can use your own license (BYOL). But if you ask for a Dedicated Instance, the hardware is still just for you. Your AWS account is still the only one using that hardware. You have less control over it, but even so, you are locking down a single piece of hardware just for your purposes.
So why are Dedicated Hosts more expensive than Dedicated Instances if, after all, in either case you "own" the hardware? In either case, AWS can't use that hardware for anything else.
you are locking down a single piece of hardware just for your purposes.
A Dedicated Instance does not work like this. Your instance runs on some dedicated hardware, but that hardware is not locked down to you. If you stop/start the instance, you can get other hardware somewhere else. Basically, the hardware is "yours" (you are not sharing it with others) only for the time your instance is running. Stop and start it, and you may get a different physical machine later on (maybe older, maybe newer, maybe with slightly different specs). So your instance is moved around on different physical servers, whichever is not occupied by others at the time.
With Dedicated Host the physical server is basically yours. It does not change, it's always the same physical machine for as long as you are paying.
Dedicated Host
As soon as you 'allocate' a Dedicated Host, you start paying for that whole host.
A host computer is very big. In fact, it has the capacity of the largest instance of the selected family, but it can be divided up into smaller instances of the same family. ("You can run any number of instances up to the core capacity associated with the host.")
Any instances that run on that Host are not charged, since you are already being billed for the Host.
That is why a Dedicated Host is more expensive than a Dedicated Instance -- the charge is for the whole host.
Dedicated Instance
"Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer... Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances."
This means that no other AWS Account will run an instance on the same Host, but other instances (both dedicated and non-dedicated) from the same AWS Account might run on the same Host.
Billing is per-instance, with a cost approximately 10% more than the normal instance charge (but no extra charge if it is the largest instance in the family, since it requires the whole host anyway).
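The billing difference can be made concrete with back-of-the-envelope arithmetic. All prices below are hypothetical, chosen only to illustrate the shape of the comparison, not actual AWS rates:

```python
def hourly_costs(n_instances, on_demand=0.69, premium=0.10, host_price=5.52):
    """Compare Dedicated Instance vs Dedicated Host hourly cost.

    All prices are hypothetical. Dedicated Instances bill per instance
    with a ~10% premium; a Dedicated Host bills a flat rate for the
    whole machine no matter how many instances run on it.
    """
    dedicated_instances = n_instances * on_demand * (1 + premium)
    return dedicated_instances, host_price
```

With only a couple of instances, paying per Dedicated Instance is far cheaper; once you fill the host, the flat host charge undercuts the per-instance premium. That's the trade-off: the host price buys the whole machine whether or not you use it all.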
In simple terms:
Dedicated Instance: The physical machine or underlying hardware is reserved for use for the whole account. You can have instances for different purposes on this hardware.
Dedicated Host: The physical machine or the underlying hardware is reserved for "single use" only, e.g. a certain application. Thus the physical machine may not be entirely used.
The reason a Dedicated Host may be more expensive is that you are reserving resources without actually using all of them. I think it makes sense when you are aiming for compliance requirements.
Think of dedicated hosts for licensing use cases:
Where I want to bring in a license that I already have on my on-premises instance.
Sometimes I have noticed that if I don't apply updated security packages on some EC2 instances on AWS, the instances run slower. I have seen this repeatedly on different machines. Is it possible that Amazon applies some policy to machines that are not updated?
AWS has zero insight into what you run on an Amazon EC2 instance. You are responsible for installing and maintaining the operating system, applications and data. AWS is responsible for providing the platform that enables the virtual machine.
Every Amazon EC2 instance is given resources (CPU, RAM, Network) based on the Instance Type. The same instance type will always receive the same amount of resources, and the resources are not over-subscribed.
Therefore, any slowdown that you might observe would be related to the operating system and software that you are running on the instance. You can use standard monitoring tools to inspect the operating system to investigate what might be happening.
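As a starting point for such an investigation, even the standard library can sample system load (Unix only; for deeper inspection you'd reach for tools like top, vmstat, or the CloudWatch agent). A minimal sketch:

```python
import os
import time

def sample_load(samples=3, interval=0.1):
    """Collect 1-minute load-average readings (Unix only).

    A sustained load far above the instance's vCPU count suggests the
    slowdown originates inside your OS/software, not from AWS.
    """
    readings = []
    for _ in range(samples):
        readings.append(os.getloadavg()[0])  # 1-minute load average
        time.sleep(interval)
    return readings
```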
My scenario is described below; please suggest a solution.
I need to run 17 HTTP REST APIs for 30K users.
I will create 6 AWS instances (slaves) to run the 30K users (6 instances × 5,000 users each).
Each AWS slave instance needs to handle 5K users.
I will create 1 AWS instance (master) to control the 6 AWS slaves.
1) For the master AWS instance, what instance type and storage do I need?
2) For the slave AWS instances, what instance type and storage do I need?
3) The main objective is for a single AWS instance to handle 5,000 (5K) users; what instance type and storage do I need for this, at the lowest cost?
The answer is: I don't know. This is something you need to find out yourself. How many users you can simulate on a given AWS instance depends on the nature of your test: what it is doing, response sizes, the number of post-processors/assertions, etc.
So I would recommend the following approach:
First of all make sure you are following recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure
Start with a single AWS server, e.g. t2.large, and a single virtual user. Gradually increase the load while monitoring the AWS instance's health (CPU, RAM, disk, etc.) using either Amazon CloudWatch or the JMeter PerfMon Plugin. Once one of the monitored metrics is exhausted (e.g. CPU usage exceeds 90%), stop your test and note the number of virtual users at that point (you can use, for example, the Active Threads Over Time listener for this)
Depending on the outcome, either switch to another instance type (e.g. Compute Optimized if CPU is the bottleneck, or Memory Optimized if RAM is) or go for a higher-spec instance of the same tier (e.g. t2.xlarge)
Once you know the number of users you can simulate on a single host, you should be able to extrapolate to other hosts.
The JMeter master host doesn't need to be as powerful as the slave machines; just make sure it has enough memory to handle the incoming results.
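The extrapolation step is simple arithmetic. A sketch, using the per-slave capacity you measured in the ramp-up test (the numbers below come from the question's scenario):

```python
import math

def slaves_needed(total_users, users_per_slave):
    """Number of JMeter slave instances needed for the target load."""
    return math.ceil(total_users / users_per_slave)
```

For the stated scenario, `slaves_needed(30000, 5000)` gives the planned 6 slaves; but if your measurement shows a slave only sustains, say, 3,500 users, the same formula tells you 9 slaves are needed instead.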
I have a web app hosted on an Ubuntu-based Azure classic virtual machine (size DS14). The CPU usage, load, memory, disk I/O and network I/O over the previous 7 days are as follows:
Clearly, there's an opportunity to save money here by scaling my infrastructure up and down dynamically along with changes in load, instead of having a DS14 instance running all the time.
Can someone please outline the steps I'll need to do to enable this? My VM is not part of any availability set as of now.
You could add a classic VM to an availability set. Please refer to this link: Add an existing virtual machine to an availability set.
Note: ARM VMs do not support adding an existing VM to an availability set.
If you want your VMs to support dynamic autoscaling, you need at least two VMs in the same availability set. You could refer to this link: Automatic scale - CPU.
According to your description, you want your VM to resize up and down automatically; I think that is not possible. When a VM is resized, it needs to restart, so your service will be interrupted for a few minutes. For a production environment, this is not acceptable.