How to Load Balance RDS on AWS

How can I load balance my relational database on AWS so that I don't have to pay for a large server that I don't need 99% of the time? I am pretty new to AWS, so I am not totally sure whether this is even possible. My app experiences surges (push notifications) that would crash a smaller DB instance class, but the larger one (r5.4xlarge) isn't needed 99% of the time. How can I avoid paying for the larger instance? We are using MySQL.
This is the maximum CPU utilization over the past two weeks for 16 vCPUs and 128 GiB of RAM.
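One common approach (a sketch, not a definitive answer) is to resize the instance on a schedule around known surge windows instead of running the r5.4xlarge around the clock. The helper below is hypothetical; the instance classes, window times, and DB identifier are all assumptions. The `boto3` call it would drive, `rds.modify_db_instance`, is a real API.

```python
# Sketch: pick a DB instance class based on whether we are inside a
# known surge window (e.g., around scheduled push notifications).
# The class names and window times are assumptions, not recommendations.
from datetime import time

SURGE_WINDOWS = [(time(8, 0), time(10, 0))]  # hypothetical push-notification window
BIG_CLASS = "db.r5.4xlarge"                  # handles the surge
SMALL_CLASS = "db.r5.large"                  # enough for the other 99%

def target_class(now: time) -> str:
    """Return the instance class we want at a given time of day."""
    for start, end in SURGE_WINDOWS:
        if start <= now <= end:
            return BIG_CLASS
    return SMALL_CLASS

# A scheduled job (cron, or EventBridge + Lambda) could then apply it:
# import boto3
# rds = boto3.client("rds")
# rds.modify_db_instance(
#     DBInstanceIdentifier="my-db",   # hypothetical identifier
#     DBInstanceClass=target_class(now),
#     ApplyImmediately=True,          # causes a brief interruption
# )
```

Note that modifying the instance class briefly interrupts connections, so this only works if the surge windows are predictable. Since you're on MySQL, Aurora Serverless is also worth evaluating for spiky load.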

Related

MariaDB 10.6.8 runs out of disk space

I have installed MariaDB 10.6.8 on an Amazon Web Services (AWS) RDS instance. The RDS instance has a disk capacity of 150 GB and 16 GB of RAM, and it hosts 3 databases with a total size of 13 GB. This database serves a website that performs hardly any DML operations and predominantly reads data using stored procedures. These stored procedures use temporary tables extensively, and query performance is well under 1 second. The website has only around 10 to 25 concurrent users most of the time, and 30 to 35 at peak.
The problem
When I start the RDS instance, the available disk space is 137 GB (with 13 GB used by the data held by the databases). As the day progresses and users access the website, the available disk space starts shrinking drastically, by about 35 GB in one day (though there are hardly a couple of inserts/updates). If I then restart the RDS instance, the full 137 GB is available again, and as the day progresses the disk space keeps shrinking again. So the issue is: why is the disk space shrinking on its own?
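For scale, the decline rate described above can be projected with simple arithmetic on the poster's numbers (the linear-decline assumption is mine):

```python
# Project when the instance would run out of disk if the observed
# decline continued linearly (an assumption; real usage may plateau).
free_gb = 137           # free space right after a restart
decline_gb_per_day = 35  # observed daily loss

days_until_full = free_gb / decline_gb_per_day
print(round(days_until_full, 1))  # roughly 3.9 days without a restart
```

Since the stored procedures use temporary tables heavily, on-disk internal temporary tables (visible via `SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables'`) and binary log growth are common culprits worth checking first.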

AWS Fargate Task Pricing

I have set up a Task Definition with a maximum allocation of 1024 CPU units and 2048 MiB of memory, with Fargate as the launch type. When I looked at the costs, it was way more expensive than I thought ($1.00 per day, or $0.06 per hour [us-east-1]). What I did was reduce it to 256 units, and I am waiting to see if the costs go down. But how does the task maximum allocation work? Is the task definition's maximum allocation responsible for Fargate provisioning a more powerful server with a higher cost, even if I don't use 100%?
The apps in containers running 24/7 are a NestJS application + Apache (do not ask why) + Redis, and I can see that CPU usage is low but the price is too high for me. Is Fargate the wrong choice for this? Should I go for EC2 instances with ECS?
When you run a task, Fargate provisions a container with the resources you have requested. It's not a question of "use up to this maximum CPU and memory," but rather "use this much CPU and memory." You'll pay for that much CPU and memory for as long as it runs, as per the AWS Fargate pricing. At current rates, the CPU and memory you listed (1024 CPU units, 2048 MiB) would come to $0.04937/hour, or $1.18488/day, or $35.55/month.
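Those figures follow directly from Fargate's per-vCPU and per-GB rates. A quick check of the arithmetic (the rates below are the us-east-1 rates implied by the answer's numbers; check current pricing):

```python
# Reproduce the answer's cost figures from Fargate's per-resource rates.
VCPU_PER_HOUR = 0.04048  # USD per vCPU-hour (us-east-1 at the time)
GB_PER_HOUR = 0.004445   # USD per GB-hour

vcpus = 1.0              # 1024 CPU units = 1 vCPU
memory_gb = 2.0          # 2048 MiB = 2 GB

hourly = vcpus * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR
print(round(hourly, 5))        # 0.04937 per hour
print(round(hourly * 24, 5))   # 1.18488 per day
print(round(hourly * 720, 2))  # 35.55 per 30-day month
```

Note you pay for the task size you request, not what the containers actually consume, which is why dropping the task to 256 CPU units lowers the bill even if utilization was already low.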
Whether Fargate is the right or wrong choice is subjective. It depends what you're optimizing for. If you just want to hand off a container and allow AWS to manage everything about how it runs, it's hard to beat ECS Fargate. OTOH, if you are optimizing for lowest cost, on-demand Fargate is probably not the best choice. You could use Fargate Spot ($10.66/month) if you can tolerate the constraints of spot. Alternatively, you could use an EC2 instance (t3.small @ $14.98/month), but then you'll be responsible for managing everything.
You didn't mention how you're running Redis, which will factor in here as well. If you're running Redis on ElastiCache, you'll incur that cost as well, but you won't have to manage anything. If you end up using an EC2 instance, you could run Redis on the same instance, saving latency and expense, with the trade-off that you'll have to install and operate Redis yourself.
Ultimately, you're making tradeoffs between time saved and money spent on managed services.

Ensuring consistent network throughput from an AWS EC2 instance?

I have created a few AWS EC2 instances; however, sometimes my data throughput (both upload and download) becomes severely limited on certain servers.
For example, I typically get about 15-17 MB/s of throughput from an instance in the US West (Oregon) region. However, sometimes, especially when I transfer a large amount of data in a single day, my throughput drops to 1-2 MB/s. When it happens on one server, the other servers still have typical network throughput.
How can I avoid it? And what can cause this?
If it is due to the amount of data I upload/download, how can I avoid it?
At the moment, I am using t2.micro type instances.
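To put "a large amount of data in a single day" in perspective, the daily volume at the rates described can be estimated (simple arithmetic; it assumes the rate is sustained all day):

```python
# Rough volume moved in a full day of sustained transfer at the
# observed rates (assumes the rate is held continuously).
seconds_per_day = 86_400

normal_gb = 15 * seconds_per_day / 1000    # at 15 MB/s
throttled_gb = 2 * seconds_per_day / 1000  # at 2 MB/s

print(round(normal_gb))     # ~1296 GB/day
print(round(throttled_gb))  # ~173 GB/day
```

Sustaining over a terabyte a day on a burstable t2.micro is well beyond what that instance class is sized for, which is consistent with the answer below.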
Simple answer: don't use micro instances.
AWS is a multi-tenant environment, so resources are shared. When it comes to network performance, larger instance sizes get higher priority; only the largest instances get any sort of dedicated performance.
Micro and nano instances get the lowest priority of all instance types.
This matrix will show you what priority each instance size gets:
https://aws.amazon.com/ec2/instance-types/#instance-type-matrix

Heroku Django application slower after moving postgres database to Amazon free RDS

I have a pilot Django project installed on Heroku using the free tier and the free Postgres database. However due to the size limitation on Heroku, I moved the database to Amazon RDS free tier which offers a lot more space and no row limit.
However after the move I notice a very slow performance in my Django app! Is there a way to reconfigure my setup to make my application/database go faster?
If it's the "free tier" of RDS, you are using a very small database (in terms of CPU and memory), so it shouldn't come as a surprise that it is slow. Specifically, the free tier is a t2.micro, which has one virtual CPU and 1 GB of memory.
Also, your storage type (magnetic, SSD, Provisioned IOPS) may make a substantial difference. You can observe the disk stats in CloudWatch for the RDS instance to see if that's the problem.
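One more factor worth considering (my own assumption about the likely cause, not something the answer above covers): Heroku's free Postgres lives next to the dynos, while an RDS instance in a different network segment or region adds a network round trip to every query, and that cost multiplies by the number of queries per page. The numbers below are illustrative assumptions:

```python
# Sketch: added network round-trip time compounds per query.
# Query counts and latencies here are illustrative assumptions.
def page_db_time_ms(queries: int, rtt_ms: float, exec_ms: float) -> float:
    """Total time a page spends talking to the database."""
    return queries * (rtt_ms + exec_ms)

near_db = page_db_time_ms(40, rtt_ms=0.5, exec_ms=2.0)   # 100.0 ms
far_db = page_db_time_ms(40, rtt_ms=15.0, exec_ms=2.0)   # 680.0 ms
print(near_db, far_db)
```

Heroku's US dynos run in AWS us-east-1, so putting the RDS instance in that same region (and reducing queries per page) shrinks this penalty considerably.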

Computing power of AWS Elastic Beanstalk instances

I have a CPU-intensive application that I'm considering hosting on 1+ AWS Elastic Beanstalk instances. If at all possible, I'd like to throttle it so that I don't go over the "free" utilization of the instances.
So I need to figure out what kind of hardware/virtualized hardware the Beanstalk instances are running on, and compare that to the maximum CPU utilization of the free versions.
So for instance, suppose each Beanstalk instance runs on, say, a 2 GHz CPU, my app performs a specific "supercalc" operation that takes 50 million CPU operations, and the free version only allows me 100 billion operations per day. Then I am limited to 100 billion / 50 million = 2,000 "supercalcs" per day on a free instance. At 2 GHz, the CPU can do 2 GHz / 50 million = 40 supercalcs per second, so my app instance could only run for 2,000 / 40 = 50 seconds before I've "maxed out" the free CPU utilization on the Beanstalk instance.
This is probably not a great example, but hopefully illustrates what I'm trying to achieve. I need to figure out how much I need to throttle my app, or how long my app could run before I max out the Beanstalk CPU utilization, and it really comes down to how beefy the AWS Beanstalk machines are. Thanks in advance!
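The budget arithmetic in that example can be written out directly (all the figures are the question's hypotheticals; note the final step, 2,000 supercalcs at 40 per second, comes to 50 seconds):

```python
# The throttling-budget arithmetic from the question, written out.
# All numbers are the question's hypotheticals, not real AWS limits.
cpu_hz = 2_000_000_000             # hypothetical 2 GHz instance
ops_per_supercalc = 50_000_000     # 50 million operations each
daily_op_budget = 100_000_000_000  # hypothetical 100 billion ops/day cap

supercalcs_per_day = daily_op_budget // ops_per_supercalc       # budget cap
supercalcs_per_second = cpu_hz // ops_per_supercalc             # CPU rate
seconds_to_exhaust = supercalcs_per_day / supercalcs_per_second # time to cap

print(supercalcs_per_day, supercalcs_per_second, seconds_to_exhaust)
```

As the answer below explains, though, EC2 has no such operations budget, so this model doesn't actually apply to Beanstalk.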
Amazon EC2 instances aren't billed on a "CPU utilisation" basis (I think Google App Engine is?) - EC2 instance billing is based on the amount of time the machine is "on", regardless of what it is doing. See the Amazon EC2 Pricing page for the cost of running the different instance sizes in different regions.
There is a special case which is the "Micro" instance - this provides the ability to have short bursts of higher CPU usage than the "small" instance at a lower cost, but if you overuse it you get throttled back for a period (which you don't with a Small). This isn't the same as having an operation limit though, and the price remains the same whether you're throttled or not.
Also note that with Elastic Beanstalk you're also paying for the Elastic Load Balancer, any storage and bandwidth, and any database service you are using.
Given all that though - AWS does have a Free Tier - however, this is only for the first 12 months of a new account. The Free Tier will cover the cost of a micro EC2 instance, Elastic Load Balancer, RDS database and other ancillary services - see the link for more info.
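The time-based billing model described above can be sketched in one line: cost depends only on how long the instance is "on", not how busy it is. The hourly rate here is a placeholder, not a real EC2 price:

```python
# Time-based billing: cost = hours the instance is "on" x hourly rate,
# independent of CPU utilization. The rate below is a placeholder.
def monthly_cost(hourly_rate_usd: float, hours_on: float = 720) -> float:
    """Cost for a 30-day month at a given on-demand hourly rate."""
    return hourly_rate_usd * hours_on

# e.g., an instance billed at a hypothetical $0.02/hour, on all month:
print(round(monthly_cost(0.02), 2))  # 14.4
```

This is why throttling your app's CPU usage saves nothing on EC2; only stopping instances (or using fewer/smaller ones) reduces the bill.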