AWS LightSail RDS - How Much RAM Do I Need

I'm just setting up a high availability WordPress network and I need to decide how much RAM I need for the database instance. On a web server you would run "top" to find out how much RAM each MySQL process uses, then check your config file for the maximum number of processes allowed to run.
How do you calculate how much RAM you will need for a high availability MySQL database running in AWS LightSail? The plans seem very light on RAM. For example, a $20 web server gets 4GB of RAM, whereas a $60 database server gets 1GB of RAM. Why is this, and how many processes will 1GB run?
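One common back-of-the-envelope approach is to add the global buffers to the per-connection buffers multiplied by max_connections, using the values your instance actually reports via SHOW VARIABLES. The sketch below is a minimal illustration of that arithmetic; all of the numbers in it are placeholders, not Lightsail defaults.

```python
# Rough MySQL memory estimate: global buffers + per-connection buffers * max_connections.
# Every value below is an illustrative placeholder -- read the real numbers from
# SHOW VARIABLES on your own instance before trusting the result.

MB = 1024 * 1024

global_buffers = {
    "innodb_buffer_pool_size": 512 * MB,
    "key_buffer_size": 16 * MB,
    "innodb_log_buffer_size": 16 * MB,
}

per_connection_buffers = {
    "sort_buffer_size": 2 * MB,
    "read_buffer_size": 128 * 1024,
    "read_rnd_buffer_size": 256 * 1024,
    "join_buffer_size": 256 * 1024,
    "thread_stack": 256 * 1024,
}

max_connections = 50  # hypothetical cap for a small instance

total = sum(global_buffers.values()) + max_connections * sum(per_connection_buffers.values())
print(f"Worst-case MySQL memory: {total / MB:.0f} MB")
```

With numbers like these, the worst case lands around 700 MB, which suggests why a 1GB managed database plan caps the connection count fairly low: the per-connection buffers times the connection limit have to fit under physical RAM alongside the buffer pool.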

Related

How to Make AWS Infrastructure perform comparable to local server running on a MacBook Pro

I have a web application that is caching data in CSV files, and then in response to an HTTP request, reading the CSV files into memory, constructing a JavaScript Object, then sending that Object as JSON to the client.
When I run this on my Local Server on my Macbook Pro (2022, Chip: Apple M1 Pro, 16GB Memory, 500GB Hard Drive), the 24 CSV files at about 15MB each are all read in about 2.5 seconds, and then the subsequent processing takes another 3 seconds, for a total execution time of about 5.5 seconds.
When I deploy this application to AWS, however, I am struggling to create a comparably performant environment.
I am using AWS Elastic Beanstalk to spin up an EC2 instance and attaching an EBS volume to store the CSV files. I know that because the EBS volume is attached over the network there is some possible latency, but my understanding is that this is typically negligible in terms of overall performance.
What I have tried thus far:
Using a compute-optimized instance (c5.4xlarge), which is automatically EBS-optimized, with a Provisioned IOPS (io2) volume with 1000 GiB of storage and 400 IOPS. (Performance: about 10 seconds total.)
Using a High Throughput EBS volume, which is supposed to offer greater performance for sequential read jobs (like what I imagined reading a CSV file would be), but that actually performed a little worse than the Provisioned IOPS volume. (Performance: about 11 seconds total.)
Can anyone recommend which EC2 instance and EBS volume should be configured to achieve performance comparable to my local machine? I don't expect an exact match, but I do expect to get closer than roughly twice as slow as the local server.
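It may help to separate raw disk time from the JSON-building time before picking hardware: 24 files at roughly 15MB each is only about 360MB, so for a sequential read like this, volume throughput (MB/s) matters more than provisioned IOPS. Below is a minimal timing sketch, assuming the files live under a hypothetical ./cache directory; it only measures how fast the bytes come off the volume.

```python
# Minimal sketch: time raw disk reads of the cached CSV files separately from parsing,
# so EBS throughput can be compared directly with the local SSD.
# The ./cache path is a hypothetical placeholder.
import glob
import time

def benchmark_reads(pattern: str = "./cache/*.csv") -> None:
    start = time.perf_counter()
    total_bytes = 0
    for path in sorted(glob.glob(pattern)):
        with open(path, "rb") as f:
            total_bytes += len(f.read())  # read the whole file, discard contents
    elapsed = time.perf_counter() - start
    mb = total_bytes / (1024 * 1024)
    print(f"Read {mb:.1f} MB in {elapsed:.2f}s ({mb / elapsed:.1f} MB/s)")

if __name__ == "__main__":
    benchmark_reads()
```

If this reports far less than the volume's advertised throughput, the EBS configuration is the bottleneck; if it is fast and the total request time is still ~10 seconds, the time is going into parsing and object construction rather than storage.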

MariaDB 10.6.8 runs out of disk space

I have installed MariaDB 10.6.8 on an Amazon Web Services (AWS) RDS instance. The RDS instance has a disk capacity of 150GB and 16GB of RAM, and it hosts 3 databases with a total size of 13GB. This database serves a website that has hardly any DML operations and predominantly reads data using stored procedures. These stored procs make extensive use of temporary tables, and query performance is well under 1 sec. The website has only around 10 to 25 concurrent users most of the time, and 30 to 35 users at peak.
The problem
When I start the RDS instance, the available disk space is 137GB (with 13GB used by the data held by the databases). As the day progresses and users access the website, the available disk space starts dropping drastically, by about 35GB in one day (even though there are hardly a couple of inserts/updates). If I then restart the RDS instance, the 137GB of disk space is available again, and as the day progresses it keeps shrinking again. So the question is: why is the disk space reducing on its own?
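Given the heavy use of temporary tables in the stored procedures, one plausible culprit (an assumption, not a diagnosis) is that those temp tables are spilling to disk and the space is only reclaimed when the instance restarts. A quick way to test that theory is to compare the temp-table counters MariaDB already exposes; the sketch below uses pymysql with placeholder connection details.

```python
# Sketch: check whether the stored procedures are spilling temporary tables to disk,
# a common cause of free space "disappearing" until a restart.
# Endpoint, user and password are hypothetical placeholders; requires the pymysql package.
import pymysql

conn = pymysql.connect(
    host="your-rds-endpoint.rds.amazonaws.com",  # hypothetical endpoint
    user="admin",
    password="...",
    database="mysql",
)

with conn.cursor() as cur:
    cur.execute(
        "SHOW GLOBAL STATUS WHERE Variable_name IN "
        "('Created_tmp_tables', 'Created_tmp_disk_tables')"
    )
    status = dict(cur.fetchall())

tmp = int(status.get("Created_tmp_tables", 0))
disk = int(status.get("Created_tmp_disk_tables", 0))
print(f"Temporary tables created: {tmp}, spilled to disk: {disk}")
if tmp and disk / tmp > 0.25:
    print("A large share of temp tables hit disk; consider reviewing "
          "tmp_table_size / max_heap_table_size and the procs' result set sizes.")
conn.close()
```

If Created_tmp_disk_tables grows roughly in step with the lost gigabytes over the day, the disk usage is coming from those on-disk temporary tables rather than from the data itself.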

How to Load Balance RDS on AWS

How can I load balance my relational database on AWS so that I don't have to pay for a large server that I don't need 99% of the time? I am pretty new to AWS, so I am not totally sure this is even possible. My app experiences surges (push notifications) that would crash the smaller DB instance class, but the larger one (r5.4xlarge) isn't needed 99% of the time. How can I avoid paying for the larger instance? We are using MySQL.
This is the max CPU utilization over the past 2 weeks for 16 CPUs and 128 GiB of RAM.
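A plain RDS MySQL instance can't be load balanced onto a bigger box on demand, but for predictable surges one commonly suggested pattern is to add a temporary read replica sized for the spike (or to look at Aurora Serverless, which adjusts capacity automatically). The sketch below shows the read-replica approach with boto3; the instance identifiers and region are hypothetical, and it assumes the surge traffic is mostly reads that can be pointed at the replica endpoint.

```python
# Sketch, not a drop-in solution: spin up a large read replica ahead of a known surge
# (e.g. a scheduled push notification) and tear it down afterwards so you only pay
# for the big instance while you need it. Identifiers below are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a temporary read replica sized for the surge.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="myapp-surge-replica",
    SourceDBInstanceIdentifier="myapp-primary",
    DBInstanceClass="db.r5.4xlarge",
)

# ...point read traffic at the replica endpoint during the surge...

# Remove the replica once the surge is over.
rds.delete_db_instance(
    DBInstanceIdentifier="myapp-surge-replica",
    SkipFinalSnapshot=True,
)
```

Writes still go to the primary, so if the surge is write-heavy this only helps indirectly by offloading reads; in that case scaling the primary up and down on a schedule, or Aurora Serverless, is the more direct fit.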

Suitable Google Cloud Compute instance for Website Hosting

I am new to cloud computing but want to use it to host a website I am building. The website will be a data analytics site, where each user will interact with a MySQL database and read data from text files. I want to be able to accommodate about 500 users at a time, and the site will likely have around 1000-5000 users fully scaled. I have chosen GCP and am wondering if the e2-standard-2 VM instance would be enough to get started. I will also be using a GCP HA MySQL server; I am thinking that 2 vCPUs and 5GB of memory will be enough, with 50GB of high availability SSD storage. Any suggestions would be appreciated. Also, is there any other service I will need? Thank you!!
You don't need to get this exactly right up front. On Google Cloud Platform you have real-time monitoring for CPU and RAM usage, so if your website gains more users you can upgrade or downgrade your CPU or RAM with 2 or 3 mouse clicks. Start small and upgrade later if you see CPU or RAM getting close to 100% usage. Start with an N1 micro instance with 600MB of RAM.

What is the number of cores in the aws.data.highio.i3 Elastic Cloud instance given for the 14-day trial period?

I wanted to make some performance calculations, so I need to know the number of cores that this aws.data.highio.i3 instance deployed by Elastic Cloud on AWS has. I know that it has 4 GB of RAM, so if anyone can help me with the number of cores that would be very helpful.
I am working with Elasticsearch deployed on Elastic Cloud, and my use case requires roughly 40 million writes per day, so I would also appreciate suggestions for machines that suit this use case and are I/O optimized.
The instance used by Elastic Cloud for aws.data.highio.i3 in the background is i3.8xlarge, see here. That means it has 32 virtual CPUs or 16 cores, see here.
But you don't own the instance in Elastic Cloud; from the reference hardware page:
Host machines are shared between deployments, but containerization and guaranteed resource assignment for each deployment prevent a noisy neighbor effect.
Each ES process runs on a large multi-tenant server with resources carved out using cgroups, and ES scales the thread pool sizes automatically. You can see the number of times the CPU was throttled by the cgroups by going to Stack Monitoring -> Advanced and scrolling down to the Cgroup CPU Performance and Cgroup CFS Stats graphs.
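Those same cgroup counters are also exposed programmatically through the nodes stats API (under os.cgroup.cpu.stat), which can be handy if you want to track throttling over time rather than eyeball the graphs. A minimal sketch, assuming a hypothetical Elastic Cloud endpoint and placeholder credentials:

```python
# Sketch: read cgroup CPU throttling counters from the Elasticsearch nodes stats API.
# The URL and credentials below are placeholders for your own deployment.
import requests

ES_URL = "https://your-deployment.es.us-east-1.aws.found.io:9243"  # hypothetical
AUTH = ("elastic", "changeme")  # hypothetical credentials

resp = requests.get(f"{ES_URL}/_nodes/stats/os", auth=AUTH, timeout=30)
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    stat = node.get("os", {}).get("cgroup", {}).get("cpu", {}).get("stat", {})
    print(
        node["name"],
        "- throttled", stat.get("number_of_times_throttled"),
        "times across", stat.get("number_of_elapsed_periods"), "CFS periods",
    )
```

A steadily climbing number_of_times_throttled relative to elapsed periods is the signal that the deployment is hitting its carved-out CPU share.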
That being said, if you need full CPU availability all the time, you are better off with the AWS Elasticsearch service or hosting your own cluster.