Amazon RDS - Increase storage size did not increase IOPS - amazon-web-services

We started sending a lot of data to a recently deployed application yesterday, which quickly exhausted the IOPS burst balance of the RDS instance, leaving it stuck at 30 IOPS. (The application was created on Elastic Beanstalk with a PostgreSQL database and 10 GB of SSD storage.)
Following the documentation, I increased the storage to 50 GB in order to get more IOPS (150, I guess). I did this 18 hours ago, but the instance is still limited to 30 IOPS, which creates very high latency in our application...
Any idea how I can get this fixed?

Depending on the type of storage class you use: if you set the change to apply immediately, it should take effect right away; otherwise it will take effect during the next maintenance window.
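For reference, the IOPS you can expect from General Purpose (gp2-style) storage scales with volume size: EBS gp2 documents 3 IOPS per GiB, with a 100-IOPS floor and a 16,000-IOPS ceiling. (The 30 IOPS the question reports suggests the floor did not apply to that older RDS storage, so treat the floor here as an assumption.) A minimal sketch:

```python
def gp2_baseline_iops(size_gib: int, floor: int = 100, ceiling: int = 16000) -> int:
    """Baseline IOPS for a gp2-style volume: 3 IOPS per GiB, clamped
    between a floor and ceiling (EBS gp2 documents 100/16000; the floor
    assumed here may not match older RDS storage classes)."""
    return min(max(3 * size_gib, floor), ceiling)

# 50 GB of gp2-style storage earns a 150 IOPS baseline
print(gp2_baseline_iops(50))   # 150
```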

Related

How to make AWS infrastructure perform comparably to a local server running on a MacBook Pro

I have a web application that is caching data in CSV files, and then in response to an HTTP request, reading the CSV files into memory, constructing a JavaScript Object, then sending that Object as JSON to the client.
When I run this on my local server on my MacBook Pro (2022, Chip: Apple M1 Pro, 16GB Memory, 500GB drive), the 24 CSV files at about 15MB each are all read in about 2.5 seconds, and the subsequent processing takes another 3 seconds, for a total execution time of about 5.5 seconds.
When I deploy this application to AWS, however, I am struggling to create a comparably performant environment.
I am using AWS Elastic Beanstalk to spin up an EC2 instance, and then attaching an EBS volume to store the CSV files. I know that because EBS runs on separate hardware there is potential network latency, but my understanding is that this is typically negligible as far as overall effect on performance.
What I have tried thus far:
Using a Compute focused instance (c5.4xlarge) which is automatically EBS optimized. Then using a Provisioned IOPS (io2) with 1000 GiB storage and 400 IOPS. (Performance, about 10 seconds total)
Using a High Throughput EBS volume, which is supposed to offer greater performance for sequential read jobs (like what I imagined reading a CSV file would be), but that actually performed a little worse than the Provisioned IOPS EBS instance. (Performance, about 11 seconds total)
Can anyone offer any recommendations for which EC2 Instance and EBS Volume should be configured to achieve comparable performance with my local machine? I don't expect to get it matching exactly, but do expect that it can be closer than about twice as slow as the local server.
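As a back-of-the-envelope check on whether the provisioned IOPS can even keep up with the local read rate (assuming EBS's documented 256 KiB maximum I/O size):

```python
# Local read rate: 24 CSV files of ~15 MB each, read in ~2.5 s
local_mb = 24 * 15
local_mb_per_s = local_mb / 2.5           # 144.0 MB/s

# Ceiling for 400 provisioned IOPS, assuming EBS's 256 KiB max I/O size
iops = 400
max_io_kib = 256
ebs_mib_per_s = iops * max_io_kib / 1024  # 100.0 MiB/s

print(local_mb_per_s, ebs_mib_per_s)
```

Even with every I/O at the maximum size, 400 provisioned IOPS caps sequential reads below the local machine's observed rate, which is consistent with the roughly 2x slowdown described.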

Amazon Web Services: Why is my used Elastic Compute Cloud storage at 8.35 GB out of 30 GB?

So I am running a Node.js Discord bot on AWS EC2 (free tier). I want to stay within the Free Tier as much as possible. In the billing section I came across my usage and found that I am using 8.35 GB.
There are 2 instances linked to my account, of which only 1 is running (I used the other one to host an AI app, which is now stopped). Both instances are allocated 30 GB separately. I ran df -h in both instances; one reported 2.8 GB occupied and the other 2 GB occupied. So the quick question is: why is it 8.35 GB when it should be around 5 GB?
I've attached a screenshot of the billing section.
Please help. What is it that I am missing?
I assume you are referring to Amazon EBS Volumes, for which the AWS Free Tier provides 30GB per month for the first 12 months of your account.
This can be one 30GB volume for an entire month, or two 15GB volumes for one month, or one 900GB volume for one day (900 x 1/30 = 30). Hence the term "GB-month", which means "gigabytes for a month".
The fact that you are at 8/30 for the allocation means that your account has consumed 8 GB-months out of the free 30 GB-months.
Don't panic too much -- the cost is only about 10c/GB-month, so a 30GB volume for an entire month would cost $3.
Please note that Amazon EBS Volumes are charged based on provisioned storage. So, as soon as you create the volume, the space has been allocated and your account will be charged for it, even if nothing has yet been stored in the volume.
If you wish to minimise costs, then minimise the size of each volume and minimise the number of volumes. The purpose of the Free Tier is to provide a trial of AWS services -- it is not intended to be enough to run your on-going applications.
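The proration described above can be sketched as a one-line calculator (using the 30-day month the answer assumes for its examples):

```python
def gb_months(size_gb: float, days: float, days_in_month: float = 30) -> float:
    """Prorated EBS usage: provisioned size times the fraction of a month held."""
    return size_gb * days / days_in_month

print(gb_months(30, 30))      # 30.0 -> one 30 GB volume for a full month
print(gb_months(15, 30) * 2)  # 30.0 -> two 15 GB volumes for a month
print(gb_months(900, 1))      # 30.0 -> a 900 GB volume for one day
```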
You are probably looking in the wrong place. When you say "I am using 8.35 GB", I assume you are talking about EBS storage, right? What you see in AWS billing is provisioned storage (that is, what you allocated when you launched the instance). It doesn't matter how much of it you are actually using -- that's what df -h shows you inside the box. It also doesn't matter whether the instance is running or stopped -- you still incur charges for EBS (if it is beyond the Free Tier).
By the way, 8.3GB is around what is usually allocated for each Linux instance, so having 8.3GB for two looks suspiciously small.
UPDATE: It's not 8 GB that you see -- it is 8 GB-Mo. So, if you provisioned 30GB, then on the 8th of the month you will see 8 GB-Mo (I think AWS updates every 4 hours). Therefore, by the end of the month you will have approximately 30 GB-Mo, which is the limit for the Free Tier.

Amazon Elastic Block Store and EC2 drive

I am fairly new with AWS(just have a free AWS EC2 instance to test out AWS stuff) so the question might sound silly.
Today I got a mail that my Amazon Elastic Block Store usage has reached 85% on my free AWS account, which is about 25 GB of the allocated 30 GB.
From what I read today, Amazon EBS is a persistent store used for EC2 instances.
However, I can see in my EC2 instance that df -h shows just 2 GB used and 28 GB available, as this is just my practice instance.
Am I missing some important piece of information here?
EBS devices are block devices.
This means the service does not know how much data you actually store on them -- it only knows how much storage space you allocated. So the results of df -h don't matter; the provisioned size of the volume is all that matters -- that's the basis for billing. The rest of the space (the space you aren't currently using) is still storing something, even if it's just zeroes, but the service is unaware of what you've stored. (Other storage services, like S3 and EFS, bill for actual data stored, because they are not block storage services.)
Now, the free tier allows 30 gigabyte-months of EBS volume usage. You can use more than that, but this is the limit that's provided for free. You'll be billed for any more than this.
A gigabyte-month means 1 gigabyte of block storage space, allocated for 1 month, regardless of how you use it.
Also, 2 gigabytes of allocated storage for 15 days is 1 GB-month.
Also, 10 gigabytes of allocated storage for 3 days is 1 GB-month.
...etc.
The free tier, then, would allow you to have a 30 GB volume for 30 days, or a 60 GB volume for 15 days, or even a 900 GB volume... but you could have it for only 1 day. But to avoid continuing charges, such a volume must be deleted -- not just the files on the volume.
The warning message was correct. If you have a 30 GB volume in place for 26 days, then you have used 26 GB-months of storage, which is 86.7% of the free tier limit of 30 GB-months.
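Checking the 86.7% figure from the warning:

```python
# A 30 GB volume held for 26 of 30 days, as a share of the 30 GB-month free tier
used_gb_months = 30 * 26 / 30              # 26.0 GB-months
pct_of_free_tier = used_gb_months / 30 * 100
print(round(pct_of_free_tier, 1))          # 86.7
```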
For anyone stumbling upon this in 2021:
The Free Tier provides 750 hours of Amazon EC2 Linux t2.micro and 750 hours of Amazon EC2 Windows t2.micro per month. However, the EBS volumes attached to both draw from the same free 30 GB-month EBS pool. So make sure the storage allotted to both VMs sums to 30 GB or less.
There are limits on the type and number of resources you can allocate for each account.
It sounds like in your case, you are allowed to create a total of 30GB of EBS volumes. Once you allocate an EBS volume, in your case 25GB, it counts against that limit, even if it isn't used.
There is a section in the EC2 console (near the top) called "Limits" that will show you what your limits are, and what you are using.
Most limits can be extended with a simple support ticket.

copy files on EC2 slowing down very fast

I'm running a simple test where I copy 6 GB of files from one directory to another.
I'm doing this test on an R4.4xlarge instance on AWS.
The disk is 300 GiB and IOPS are configured to 600 on the specific volume.
What I'm seeing is very weird: in the beginning it copies at a rate of more than 600 MB/sec, and after a few seconds it slows down dramatically to 20 MB/sec.
The volume type is io1, and the OS is Windows.
Any idea what could cause this behaviour?
AWS storage is a big topic. In short, I suggest you go through the following material:
SlideShare: Deep Dive: Maximizing EC2 and EBS Performance
YouTube: Deep Dive: Maximizing EC2 and EBS Performance
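A rough sanity check on the numbers in the question (these I/O sizes are assumptions; the 256 KiB figure is EBS's documented maximum I/O size): the initial 600 MB/s burst is plausibly the Windows write cache absorbing the copy, while the steady ~20 MB/s is in the ballpark of what 600 provisioned IOPS yields at roughly 32 KiB per operation.

```python
# Throughput ceilings implied by 600 provisioned IOPS (io1) at various I/O sizes
iops = 600
ceilings_mib_s = {io_kib: iops * io_kib / 1024 for io_kib in (256, 64, 32)}

for io_kib, mib_s in ceilings_mib_s.items():
    print(f"{io_kib} KiB I/Os -> {mib_s:.2f} MiB/s ceiling")
```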

AWS t2.medium performance issues after adding 32 gb volume

On AWS EC2 t2.medium instance we run a site http://www.pricesimply.com/
Our database is installed on the same machine.
By default we had 8 GB of storage and the site speed was lightning quick.
Then we added a 32 GB General Purpose (SSD) volume.
The only difference between these two volumes:
8 GB default volume - IOPS 24/3000
32 GB new volume - IOPS 96/3000
Volume type and Availability Zone are the same.
The site is MUCH SLOWER now than it was earlier.
Some random ideas:
1) Performance does vary between volumes. Do some benchmarks to see if it's really slower. (Unlikely, but possible.)
2) Perhaps the volume is a red herring -- Maybe your entire dataset was small enough to fit into RAM before, and now that you've expanded and grown, your data doesn't fit, creating constant I/O?
3) If the drive was created from a snapshot, it may be fetching your data from your snapshot in the background, slowing the drive.
Adding an additional disk shouldn't slow down your machine, you'll need to investigate things a bit more to identify the bottleneck.
Poor performing infrastructure generally fits into one of three categories:
CPU: Check your CPU utilization to see if the t2.medium instance is a suitable size. Amazon CloudWatch can show you CPU history.
Memory (RAM): Your application may be short on memory, causing page swaps to disk. You'll need to monitor memory utilization from within your instance. (CloudWatch cannot see memory utilization.)
Disk IO: If you are reading & writing to disk a lot, then this could be your bottleneck. CloudWatch can give you some metrics, especially the Queue Length, which indicates that IO was waiting to be processed.
Once you've identified which of these three factors appears to be the bottleneck, try to improve them:
CPU: Use a larger instance type
Memory: Use an instance type with more RAM
Disk: Use a faster disk
You are using General Purpose (SSD) EBS volumes. These volumes have an IOPS (input/output operations per second) baseline related to volume size. So, your "96/3000" volume gives a guaranteed 96 IOPS (about the speed of a magnetic hard disk) with the ability to burst up to 3000 IOPS if you have enough I/O credits. If you are continually using more than 96 IOPS, you will run out of credits and be limited to 96 IOPS.
See: Amazon EBS Volume Types
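The burst arithmetic behind "running out of credits" can be sketched (EBS documents a 5.4-million-I/O credit bucket for gp2, refilled at the baseline rate of 3 IOPS per GiB):

```python
# gp2 burst model: a bucket of 5.4 million I/O credits, drained at the
# burst rate and refilled at the baseline rate while bursting
BUCKET = 5_400_000

def burst_minutes(baseline_iops: int, burst_iops: int = 3000) -> float:
    """Minutes a full credit bucket lasts while driving I/O at the burst rate."""
    return BUCKET / (burst_iops - baseline_iops) / 60

# A 32 GB volume (96 baseline IOPS) can sustain 3000 IOPS for ~31 minutes
print(round(burst_minutes(96)))   # 31
```

After that window, the volume drops to its 96 IOPS baseline, which matches the sudden slowdown described in the question.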