I have an Amazon RDS instance.
It has Multi AZ enabled.
Some other specs:
- Instance class: db.t2.micro
- Storage type: General Purpose (SSD)
- IOPS: disabled
- Allocated storage: 60 GB
Lately it's been lagging really hard. Read latency, write latency, and disk queue depth have all increased, while Read Throughput and Write Throughput are unchanged.
Can anyone assist in debugging this?
What's your database engine? If you are using MySQL, run a profiler to find which queries are causing the latency, for example via the slow query log or the Performance Schema.
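If the engine does turn out to be MySQL, one way to do that profiling is to ask the Performance Schema which statement digests accumulate the most time. A minimal sketch, assuming the Performance Schema is enabled and using a hypothetical endpoint and credentials:

```python
# Minimal sketch: list the 10 statement digests that consume the most time.
# Host, user, and password are placeholders; timer columns are in picoseconds.
import pymysql

conn = pymysql.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # hypothetical RDS endpoint
    user="admin",
    password="secret",
)

QUERY = """
SELECT DIGEST_TEXT,
       COUNT_STAR            AS calls,
       AVG_TIMER_WAIT / 1e12 AS avg_seconds,
       SUM_TIMER_WAIT / 1e12 AS total_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10
"""

with conn.cursor() as cur:
    cur.execute(QUERY)
    for digest, calls, avg_s, total_s in cur.fetchall():
        print(f"{total_s:8.2f}s total  {avg_s:8.4f}s avg  x{calls}  {(digest or '')[:80]}")
conn.close()
```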
Can someone help me understand how throughput works with EBS-optimized instances? I am benchmarking EBS disks for an MSSQL server running on top of EC2 instances. Today I ran an IOPS test using the CrystalDiskMark tool, and the perfmon counters show the maximum throughput as the figure assigned to the instance. As an example, I am using an x1e.4xlarge, which has a throughput of ~230 MiB/s, and during the test that is the number I get. I wanted to understand why I am not also getting the EBS volume throughput on top of that, because per the AWS documentation, EBS-optimized instances get a dedicated network path between the EC2 instance and its EBS volumes, so I assumed the instance throughput would be additional to the EBS throughput.
Is there something wrong in how I am benchmarking or reading the stats? Help is highly appreciated.
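For what it's worth, the numbers look like the instance-level dedicated EBS bandwidth is the ceiling rather than any per-volume limit; that bandwidth is shared by all attached volumes, not added on top of each volume's throughput. A quick sanity-check sketch (the 1,750 Mbps figure for x1e.4xlarge is taken from the EC2 instance tables and should be re-confirmed against the current docs):

```python
# Sketch: compare the measured throughput against the instance-level dedicated
# EBS bandwidth. The 1,750 Mbps value for x1e.4xlarge is an assumption to
# re-check against the current EC2 documentation.
DEDICATED_EBS_BANDWIDTH_MBPS = 1_750          # megabits per second (assumed)
measured_mib_per_s = 230.0                    # what CrystalDiskMark reported

# Convert megabits/s -> MiB/s: divide by 8 for bytes, scale 10^6 bytes to 2^20.
cap_mib_per_s = DEDICATED_EBS_BANDWIDTH_MBPS * 1_000_000 / 8 / (1024 * 1024)

print(f"Instance EBS bandwidth cap: ~{cap_mib_per_s:.0f} MiB/s")
print(f"Measured by CrystalDiskMark: {measured_mib_per_s:.0f} MiB/s")
# If measured ~= cap, the instance-level EBS limit (shared by all attached
# volumes) is the bottleneck, not the volume's own provisioned throughput.
```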
I have created a custom Windows AMI with some custom Windows applications. I use this AMI to launch EC2 instances.
I have run into a strange issue:
All the applications run smoothly in the EC2 instance created from the custom AMI.
However, when I create an EC2 instance from the same custom image 24 hours later, the applications' performance deteriorates.
Even opening an application on that instance is much slower than on the instance created 24 hours earlier.
Any suggestions would be really helpful.
This might be caused by the use of a T2 instance. These are burstable instances.
From CPU Credits and Baseline Performance for Burstable Performance Instances - Amazon Elastic Compute Cloud:
Traditional Amazon EC2 instance types provide fixed performance, while burstable performance instances provide a baseline level of CPU performance with the ability to burst above that baseline level. The baseline performance and ability to burst are governed by CPU credits. A CPU credit provides the performance of a full CPU core for one minute.
So, if your Amazon EC2 instance is consuming a lot of CPU, then it might run out of the CPU credit balance, and therefore be limited in the amount of CPU it can use.
You can monitor the CPU credit balance in Amazon CloudWatch. You can also see the historical CPU usage in CloudWatch, or do it within the Windows instance itself using the Task Manager.
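A minimal sketch of pulling that credit balance from CloudWatch with boto3 (the instance ID and region are placeholders):

```python
# Sketch: fetch the last 24 hours of CPUCreditBalance for a T2 instance.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,                 # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
# A balance trending toward zero while CPUUtilization stays high is the
# signature of a burstable instance being throttled to its baseline.
```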
I found the issue. Apparently, whenever a Windows application is launched, Microsoft automatically tries to connect to the internet to check for updates every 24 hours. In my case internet access was turned off, so the updates were not being downloaded and the connection sat in a wait state (15 seconds by default). That is why the applications were slow to launch.
I've got an EBS volume (16 GB) attached to an EC2 instance that has full access to an RDS instance. The thing is, I've moved the DB to the RDS instance, so I no longer use the EC2 instance to store the web application's database. I did this because I was having a lot of problems with EBS credits (they were being consumed very quickly). I thought that by having the DB on a separate instance (RDS), EBS credit consumption would drop to almost zero, because I'm no longer reading or writing on the EBS volume but on RDS. However, the EBS credits keep being consumed (and drop to 0) every time users access the web application, and I don't understand why. Perhaps it's because I still don't fully understand how EBS credit usage works... Can anyone enlighten me on this? Thanks a lot in advance.
You can review volume types including info on their burst credits here. You should also review I/O Characteristics and Monitoring. From that page:
If your I/O latency is higher than you require, check VolumeQueueLength to make sure your application is not trying to drive more IOPS than you have provisioned. If your application requires a greater number of IOPS than your volume can provide, you should consider using a larger gp2 volume with a higher base performance level or an io1 volume with more provisioned IOPS to achieve faster latencies.
You should review that metric and the others the page mentions if this is causing you performance problems. If your IOPS are constantly above your baseline and requests are queuing, you will consume credits as fast as they are earned.
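A minimal sketch of checking those metrics with boto3 (the volume ID and region are placeholders; BurstBalance is reported as a percentage):

```python
# Sketch: check an EBS volume's burst balance and queue length in CloudWatch.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def ebs_metric(metric_name, volume_id="vol-0123456789abcdef0"):  # hypothetical volume
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric_name,
        Dimensions=[{"Name": "VolumeId", "Value": volume_id}],
        StartTime=datetime.utcnow() - timedelta(hours=6),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    return sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])

for name in ("BurstBalance", "VolumeQueueLength"):
    print(name)
    for point in ebs_metric(name):
        print(" ", point["Timestamp"], round(point["Average"], 2))
# BurstBalance falling to 0 while VolumeQueueLength stays elevated means the
# volume is being driven past its baseline IOPS and the credits are draining.
```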
I need to deploy Cassandra on AWS but am confused as to what type of AWS storage is most suitable for Cassandra.
The Datastax documentation here:
http://docs.datastax.com/en/cassandra/3.0/cassandra/planning/planPlanningEC2.html
says that EBS volumes are recommended. At the same time the Datastax AMI documentation:
http://docs.datastax.com/en/cassandra/2.1/cassandra/install/installAMI.html
says that:
Uses RAID0 ephemeral disks for data storage and commit logs.
Launches EBS-backed instances for faster start-up, not database storage.
So which one is the recommended storage type for Cassandra? The EBS storage or the Instance storage?
Many of the newer EC2 instances are EBS-only (http://www.ec2instances.info/). I am not sure when the Cassandra document was written, but EBS disks have improved a lot recently and Amazon launches new types frequently, so you should be able to find what you're looking for with one of them.
You can check https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html?icmpid=docs_ec2_console; the recommended type there is Provisioned IOPS SSD (io1).
To add a reason why AWS is moving to EBS and why it is a good fit for Cassandra data: with ephemeral storage, your data disappears if the instance is terminated (because of a crash or a stop you initiated). With EBS, even when the instance is gone you still have access to your data and can attach the volume to a new instance, as in the sketch below (really useful also when up/down-grading instances).
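A minimal sketch of that re-attach workflow with boto3, assuming the old instance is stopped or the filesystem is unmounted; all IDs and the device name are placeholders:

```python
# Sketch: detach an existing EBS volume and attach it to a new instance,
# for example after up/down-grading the instance type.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VOLUME_ID = "vol-0123456789abcdef0"      # hypothetical
OLD_INSTANCE = "i-0aaaaaaaaaaaaaaaa"     # hypothetical
NEW_INSTANCE = "i-0bbbbbbbbbbbbbbbb"     # hypothetical

# Detach from the old instance and wait until the volume is free.
ec2.detach_volume(VolumeId=VOLUME_ID, InstanceId=OLD_INSTANCE)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])

# Attach to the new instance and wait until it is in use.
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=NEW_INSTANCE, Device="/dev/sdf")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[VOLUME_ID])
```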
I came upon this presentation, which clearly answers the question with a very interesting use case:
https://www.youtube.com/watch?v=1R-mgOcOSd4
To summarize:
EBS has changed a lot since 2011, when major companies like Netflix had problems with it.
EBS with GP2 volumes is now the recommended storage for Cassandra and you should not expect any bottlenecks there.
Datastax have recently updated their documentation to also recommend EBS:
http://docs.datastax.com/en/cassandra/3.0/cassandra/planning/planPlanningEC2.html
No doubt EBS.
Memory-optimized boxes are best suited for Cassandra.
T2
T2 are Burstable Performance Instances that offer a baseline level of CPU performance with the capability to burst above the baseline
M4
M4 instances are the most recent general-purpose instances. The M4 family of instances offers a balance of memory, network, and compute resources, and it is a better option for several applications
C4
These instances are recent additions to the compute-optimized family, featuring the highest-performing processors and the lowest price per unit of compute performance among EC2 instance types.
X1
These instances are best suited for enterprise-class, large-scale, in-memory applications and offer the lowest price for each GiB of RAM among AWS EC2 instance types. The X1 instances are the latest addition to the EC2 memory-optimized instance group and are intended for executing high-scale, in-memory databases and in-memory applications over the AWS cloud.
For pricing and other information, see https://aws.amazon.com/ec2/instance-types/
I'm new to AWS and also to Cassandra. I just read about EBS and S3 storage available in AWS. I was trying to figure out if we have Cassandra installed in EC2, which storage would it use? EBS or S3? Or is there other storage? I'm little confused with this. Please help me understand this.
Thanks
Aravind
You shouldn't run Cassandra on EBS, as recommended by Datastax itself:
"EBS volumes are not recommended for Cassandra data volumes for the following reasons:
EBS volumes contend directly for network throughput with standard packets. This means that EBS throughput is likely to fail if you saturate a network link.
EBS volumes have unreliable performance. I/O performance can be exceptionally slow, causing the system to back load reads and writes until the entire cluster becomes unresponsive.
Adding capacity by increasing the number of EBS volumes per host does not scale. You can easily surpass the ability of the system to keep effective buffer caches and concurrently serve requests for all of the data it is responsible for managing."
http://docs.datastax.com/en/cassandra/1.2/cassandra/architecture/architecturePlanningEC2_c.html
The answer above comes from Cassandra 1.2, a relatively old version. Documentation for newer versions of Cassandra indicate that EBS Optimized instances using GP2 SSD can be used for production workloads.
http://docs.datastax.com/en/cassandra/3.x/cassandra/planning/planPlanningEC2.html
Things that have changed since then are the introduction of EBS-optimized instances, which reduce or eliminate noisy-neighbor throughput problems, and the use of GP2 SSD for EBS storage.
If you are just getting started, I would recommend EBS-optimized instances. The performance should be pretty good, and you gain a critical ability: creating snapshots (see the sketch below). This reduces the risk of your instance becoming unstable, because you have S3-backed volume snapshots for AWS to rebuild the data from if a drive dies.
This reduces the need to set up your Cassandra cluster across regions. One of the concerns you have to design around when using ephemeral storage is a whole region potentially going down, which could wipe out your entire cluster if you didn't build a multi-region cluster. With EBS, this isn't really a concern.
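A minimal sketch of taking such a snapshot with boto3 (the volume ID and tags are placeholders; for a consistent image you would typically flush Cassandra's memtables with `nodetool flush` first):

```python
# Sketch: snapshot a Cassandra data volume with boto3. Snapshots are stored
# in S3 behind the scenes, which is what makes them useful for recovery.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                      # hypothetical
    Description="cassandra data volume - nightly snapshot",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "app", "Value": "cassandra"}],
    }],
)
print("Started snapshot:", snapshot["SnapshotId"])

# Optionally block until it completes (can take a while for large volumes).
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
```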
For Cassandra you need to use EBS. S3 is an object store with an API to store and retrieve objects, but no easy querying mechanism; its use cases include backup and archiving, disaster recovery, static website hosting, etc.
However, you can use S3 for Cassandra backups (see the sketch below).
You can also consider ephemeral (instance store) disks, as Jeff mentions, which come with certain EC2 instance types.
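As an illustration of the S3 backup path mentioned above, a minimal sketch that copies `nodetool snapshot` output to S3 (the bucket name and data directory are assumptions based on Cassandra's default layout):

```python
# Sketch: upload a Cassandra snapshot directory tree to S3. Assumes the
# default /var/lib/cassandra data location and a snapshot already taken
# with `nodetool snapshot`; bucket name is a placeholder.
import os
import boto3

s3 = boto3.client("s3")

BUCKET = "my-cassandra-backups"                # hypothetical bucket
DATA_DIR = "/var/lib/cassandra/data"           # default Cassandra data dir

for root, _dirs, files in os.walk(DATA_DIR):
    if "snapshots" not in root:
        continue                               # only upload snapshot files
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, DATA_DIR)  # keep the on-disk layout as the S3 key
        s3.upload_file(path, BUCKET, key)
        print("uploaded", key)
```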