Reducing Amazon RDS costs - amazon-web-services

I have an Amazon RDS instance of class db.m1.medium. I would like to downgrade it to db.m1.small to save on costs, since it's not being used much.
When I do this, are there any software changes involved? My concern is that settings will get changed when it downgrades. I don't want anything getting corrupted or MySQL settings getting changed.
Please advise. Thanks!

Your RDS settings will not be automatically changed if you change the instance type. However, you should check the monitoring on the db.m1.medium before downgrading to make sure you'd have enough memory in a db.m1.small. You'd be dropping from 3.75GB to 1.7GB.
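If it helps, here is a minimal sketch of that memory check using boto3 (the instance identifier and thresholds are assumptions; adjust them to your setup). It looks at the lowest FreeableMemory CloudWatch reading over the past two weeks and only then requests the class change:

```python
import datetime
import boto3

# Placeholder instance identifier - replace with your own.
DB_ID = "my-db-instance"

cloudwatch = boto3.client("cloudwatch")
rds = boto3.client("rds")

# Lowest freeable memory over the last 14 days (values are in bytes).
now = datetime.datetime.utcnow()
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="FreeableMemory",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": DB_ID}],
    StartTime=now - datetime.timedelta(days=14),
    EndTime=now,
    Period=3600,
    Statistics=["Minimum"],
)
datapoints = stats["Datapoints"]
min_free = min(p["Minimum"] for p in datapoints) if datapoints else 0.0
print(f"Lowest freeable memory in the last 14 days: {min_free / 1024**3:.2f} GiB")

# Rough rule: if the 3.75 GB medium never had less than ~2.5 GB free,
# the working set should fit comfortably within the small's 1.7 GB.
if min_free > 2.5 * 1024**3:
    rds.modify_db_instance(
        DBInstanceIdentifier=DB_ID,
        DBInstanceClass="db.m1.small",
        ApplyImmediately=False,  # apply during the next maintenance window
    )
```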

I wrote a post explaining which parts of RDS are the most expensive and how to plan cost reductions. See if it helps: https://shatteredsilicon.net/blog/2021/06/10/how-to-reduce-rds-costs-on-aws/

Related

AWS Patch Manager - rollback

I am preparing a patching plan for one of my customers. If I am using Patch Manager, should I create an AMI/snapshot before patching in case of failure, and do I need to perform a rollback? Thank you in advance for the clarification :)
It's good practice to have regular snapshots of servers in case anything goes wrong. You can use Lambda or AWS Backup for this.
For patching, you need to set a patch baseline that matches your needs and your OS. This way you reduce the chance of anything going wrong.
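As a rough illustration of the pre-patch backup step, here is a small boto3 sketch (the instance ID is a placeholder) that creates an AMI of the target instance, which also snapshots its attached EBS volumes, before the patch run:

```python
import datetime
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID - replace with the instance you are about to patch.
instance_id = "i-0123456789abcdef0"
stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")

# Create an AMI before patching so you can restore or launch a
# replacement instance if the patch run fails.
image = ec2.create_image(
    InstanceId=instance_id,
    Name=f"pre-patch-{instance_id}-{stamp}",
    Description="Pre-patching backup image",
    NoReboot=True,  # avoid rebooting the instance just to take the image
)
print("Created AMI:", image["ImageId"])
```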

The zone 'projects/*******/zones/northamerica-northeast1-b' does not have enough resources available

I have been unable to restart my VM for 2 hours now; my services are down because of this error:
The zone 'projects/******/zones/northamerica-northeast1-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
I can't rely on Google Cloud being down for hours because of resource shortages. What should I do? I can't afford to change zones; it needs to be in Canada. I also can't afford to change the IP, since it's behind a DNS record. I just need to restart my VM. My business is down...
What's the issue/solution?
Thank you.
I'm glad to see that you solved your issue by trying a different machine type. I was about to suggest trying a different machine type and then checking whether it allowed you to restart your VM.
I also wanted to mention, in case it helps other users, that if trying a non-shared-core machine type or a VM from a different family doesn't help, you can try to recreate your VM in a different zone of the same region (I've been using northamerica-northeast1-a without any issue so far).
However, if you want to prevent this from happening at all after a given restart, I recommend creating a reservation to make sure that these resources stay available to you and don't impact your workload/application.
Finally, I found a link that may interest you: Patterns for scalable apps. It discusses how it's best to deploy your app/workload across different zones so it is more resilient and load-balanced, and you wouldn't need to change your DNS records every time you switch the VM serving the backend.
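For the reservation suggestion, a minimal sketch (assuming the gcloud CLI is installed and authenticated; the reservation name, zone and machine type below are placeholders) could look like this:

```python
import subprocess

# Placeholder names - adjust the reservation name, zone and machine type
# to match the VM you want guaranteed capacity for.
cmd = [
    "gcloud", "compute", "reservations", "create", "vm-restart-reservation",
    "--zone=northamerica-northeast1-b",
    "--vm-count=1",
    "--machine-type=e2-medium",
]

# By default the reservation is "open", so any matching VM in the same
# project and zone can consume it automatically when it starts.
subprocess.run(cmd, check=True)
```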

"Max storage size not supported" When Upgrading AWS RDS

I am using db.m5.4xlarge, but our user count has increased a lot and the server is getting too slow. We want to upgrade RDS to db.m5.8xlarge, but when I try to upgrade it gives me an error: "Max storage size not supported".
I think the reason is that, unlike db.m5.4xlarge, db.m5.8xlarge does not support MySQL. From docs:
Judging from the discussion with you, I think it might actually be more beneficial for you to look at creating read replicas rather than an ever-growing instance.
The problem with growing the instance as you are doing now is that it will simply reach another bottleneck every time, and it remains a single point of failure.
Instead, the following strategy is more appropriate and may end up saving you cost:
Create read replicas in RDS to handle all read-only SQL queries (see the sketch after this list). By doing this you are going to see performance gains over your current setup, and you might even be able to scale down the writer instance.
As your application is read-heavy, look at using caching for your applications to avoid heavy read usage. AWS provides ElastiCache as a managed service, using either Redis or Memcached as the caching engine. This again could save you money, as you won't need as many live reads.
If you choose to include caching too, take a look at these caching strategies to work out how you would want to use it.
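Here is a minimal boto3 sketch of the read-replica part (the instance identifiers and replica class are placeholders; your application still has to route read-only queries to the replica's endpoint):

```python
import boto3

rds = boto3.client("rds")

# Placeholder identifiers - replace with your own instance names.
SOURCE = "my-primary-mysql"
REPLICA = f"{SOURCE}-replica-1"

# Create a read replica to serve read-only SQL queries.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier=REPLICA,
    SourceDBInstanceIdentifier=SOURCE,
    DBInstanceClass="db.m5.2xlarge",  # replicas don't have to match the writer's size
)

# Wait until the replica is available before adding it to the read pool.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier=REPLICA)
print("Replica ready:", REPLICA)
```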

Couchdb with Clouseau plugin is taking more storage than expected

I've been using an AWS instance with CouchDB as a backup of my application's IBM Cloudant database (using replication).
Everything seems to work fine, but I've been noticing a steady increase in volume usage on the AWS instance (it fills up all the time, with the annoying problem of having to grow the volume when the partition runs out of space).
[Screenshot: actual storage use]
The data in the screenshot is using almost 250 GB. I would like to know the possible reason for this issue; my guess is that the Clouseau plugin is using more space to enable the search index queries.
As I'm not an expert with this database, could anyone explain why this is happening and how I could mitigate the issue?
My best regards!
If you are only backing up a Cloudant database to a CouchDB instance via replication, you should not need Clouseau enabled.
Clouseau is only required for search indices; if you are not running search queries on your backup database, you can disable Clouseau there. The indices are not carried over by replication anyway.
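To confirm where the space is going, a quick sketch like the one below (the endpoint and credentials are placeholders) sums the on-disk size CouchDB reports for each database; if that total is much smaller than what the volume shows as used, the remainder is most likely Clouseau's search index files (or old database files awaiting compaction):

```python
import requests

# Placeholder endpoint/credentials - adjust for your CouchDB instance.
COUCH = "http://admin:password@localhost:5984"

# Sum the on-disk file size reported by each database.
total = 0
for db in requests.get(f"{COUCH}/_all_dbs").json():
    info = requests.get(f"{COUCH}/{db}").json()
    size = info.get("sizes", {}).get("file", 0)
    total += size
    print(f"{db}: {size / 1024**3:.2f} GiB on disk")

print(f"Total across databases: {total / 1024**3:.2f} GiB")
```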

not have enough resources available to fulfil the request try a different zone

All of my machines in different zones have the same issue and cannot run:
"Starting VM instance "home-1" failed.
Error:
The zone 'projects/extreme-pixel-208800/zones/us-west1-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later."
I am having the same issue. I emailed Google and found out this has nothing to do with quotas. However, you can try to decrease your instance's resource requirements (e.g. reduce RAM, CPUs, or GPUs). It might work if you are lucky.
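If you want to try shrinking the instance, a rough sketch (assuming the gcloud CLI is installed and authenticated; the VM name, zone and target machine type are placeholders) would be:

```python
import subprocess

# Placeholder VM name, zone and smaller machine type - adjust to your setup.
VM, ZONE, SMALLER_TYPE = "home-1", "us-west1-b", "n1-standard-2"

def gcloud(*args):
    subprocess.run(["gcloud", "compute", "instances", *args, f"--zone={ZONE}"], check=True)

# The machine type can only be changed while the VM is stopped
# (in this scenario the VM already failed to start, so it is stopped).
gcloud("set-machine-type", VM, f"--machine-type={SMALLER_TYPE}")

# A smaller shape needs less free capacity in the zone, so the start
# request is more likely to be satisfied.
gcloud("start", VM)
```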
Secondly, if you email Google again, you will get a reply based on the following template:
Good day! This is XX from Google Cloud Platform Support and I'll be
glad to help you from here. First, my apologies that you’re
experiencing this issue. Rest assured that the team is working hard to
resolve it.
Our goal is to make sure that there are available resources in all
zones. This type of issue is rare, when a situation like this occurs
or is about to occur, our team is notified immediately and the issue
is investigated.
We recommend deploying and balancing your workload across multiple
zones or regions to reduce the likelihood of an outage. Please review
our documentation [1] which outlines how to build resilient and
scalable architectures on Google Cloud Platform.
Again, we want to offer our sincerest apologies. We are working hard
to resolve this and make this an exceptionally rare event. I'll be
keeping this case open for one (1) business day in case you have
additional question related to this matter, otherwise you may
disregard this email for this ticket to automatically close.
All the best,
XXXX Google Cloud Platform Support
[1] https://cloud.google.com/solutions/scalable-and-resilient-apps
So, if you ask me how long you should expect to wait and when this issue is likely to happen:
I waited 1.5 to 3 days on average.
During the weekend (roughly Friday to Sunday), during daytime EST, GCP has a higher probability of resources being unavailable.
Usually, when one instance has this issue, the others do too. For me, retrying in a different region was a waste of time (but maybe I just didn't have any luck).
The error message "The zone 'projects/[...]' does not have enough resources available to fulfill the request. Try a different zone, or try again later." is always in reference to a shortage of resources in a zone.
Google recommends spreading your workload across different zones to reduce the impact of these issues on your workload. Otherwise, there isn't much else to do other than wait or try another zone/region.
I faced this issue yesterday [01/Aug/2020] when my GCP free credit ran out, and the steps below helped me work around it.
I was in the asia-south-c zone and moved to a US zone:
Go to Google Cloud Platform >>> Compute Engine.
Go to Snapshots >>> create a snapshot >>> select your Compute Engine instance.
Once the snapshot is complete, click on it.
You end up on the "snapshot details" page. There, at the top, just click "Create instance". Here you are basically creating an instance with a copy of your disk.
Select your new zone, don't forget to attach GPUs and all your previous settings, and choose a new name.
Click create, and that's it: your image should now be running in the new zone.
No need to worry about losing your configuration either.
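The same steps can be scripted; here is a rough sketch driving the gcloud CLI from Python (the disk, snapshot and VM names and the zones are placeholders, and you would still re-add GPUs and other settings on the new instance):

```python
import subprocess

# Placeholder names - adjust the disk, snapshot, zones and instance name.
SRC_DISK, SRC_ZONE = "home-1", "us-west1-b"
SNAP = "home-1-snap"
NEW_ZONE, NEW_VM = "us-central1-a", "home-1-moved"

def run(*args):
    subprocess.run(["gcloud", "compute", *args], check=True)

# 1. Snapshot the boot disk of the stuck VM.
run("disks", "snapshot", SRC_DISK, f"--zone={SRC_ZONE}", f"--snapshot-names={SNAP}")

# 2. Restore the snapshot as a new disk in a zone that has capacity.
run("disks", "create", f"{NEW_VM}-boot", f"--source-snapshot={SNAP}", f"--zone={NEW_ZONE}")

# 3. Create the replacement VM on that disk.
run("instances", "create", NEW_VM, f"--zone={NEW_ZONE}",
    f"--disk=name={NEW_VM}-boot,boot=yes")
```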