AWS RDS: Can't get Instance Class modification to go through

In the AWS RDS Console, using an IAM user with full permissions, I selected our current RDS instance (a db.t1.micro), clicked "Instance Actions", and chose "Modify". I then changed both the MySQL version to 5.6.37 (the current version is 5.6.34) and the Instance Class to db.t2.small. I also checked the "Apply Immediately" checkbox and applied the modification. However, the modification didn't happen.
Instead, I'm seeing the following in the Maintenance Details for the instance:
Maintenance Window: sat:20:00-sat:20:30
Pending Modifications: DB Instance Class: db.t2.small, Engine Version: 5.6.37
Pending Maintenance: None
I figured maybe the "Apply Immediately" didn't go through, so I decided to just wait for the Maintenance window this Saturday. However, nothing happened on Saturday, and the situation remains the same.
This morning I tried "Modify Instance" again and made absolutely sure I selected "Apply Immediately", but the result is the same.
I also tried to use the command-line interface to upgrade the instance with this command:
aws rds modify-db-instance --db-instance-identifier xxxxx --db-instance-class db.t2.small --apply-immediately
But this gives the following error (perhaps a hint?):
Service rds not available in region US West (Oregon)
The instance I tried to modify is in the US West (Oregon) region.
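One thing worth checking (an assumption on my part; the thread doesn't confirm it): that CLI error looks like what happens when the configured region is set to the display name "US West (Oregon)" rather than the region code us-west-2. Passing the code explicitly would look like this:
aws rds modify-db-instance --db-instance-identifier xxxxx --db-instance-class db.t2.small --apply-immediately --region us-west-2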
Any help is appreciated. I'm willing to use a different method to upgrade the instance, but I'm hoping to avoid having to change all the DB address and login settings on our websites and applications.

I solved this issue by stopping the instance (saving a snapshot) and then starting it again. This cleared out the "Pending Maintenance" but did not actually perform the upgrade. I then went through the "Modify" action again, but this time chose to modify only the instance class. The modification happened right away, and the instance is now the correct class.
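For reference, a rough CLI equivalent of that stop/start/modify sequence (a sketch with placeholder identifiers; stop-db-instance can save a snapshot as part of stopping):
aws rds stop-db-instance --db-instance-identifier xxxxx --db-snapshot-identifier xxxxx-pre-upgrade
aws rds start-db-instance --db-instance-identifier xxxxx
aws rds modify-db-instance --db-instance-identifier xxxxx --db-instance-class db.t2.small --apply-immediately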

Related

GCP VM instance schedule is not starting the attached instance

Last Friday I updated the daily start/stop schedule for an instance (I deleted the previous one and created a new one with different timing).
The instance was not changed. It's a preemptible e2-medium instance.
For some reason the schedule did not start the VM, and I don't see any logs from it either.
I did not change any permissions, but just to be sure I confirmed that the Google APIs Service Agent still has the standard Editor role.
No other changes were made anywhere on this GCP.
I've tried creating other schedules with CRON expressions, different timezones, and different instances, and tried setting the initiation date. None of this worked.
The schedule zone is us-central; the instance zone is us-central1-a.
I've also tried waiting 15 minutes and more.
The problem was indeed caused by a missing permission. I had to grant the compute.instances.start permission to the right account:
service-<my-gcp-numeric-id>@compute-system.iam.gserviceaccount.com <- this one
<my-gcp-numeric-id>@cloudservices.gserviceaccount.com <- not this one
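For reference, a gcloud sketch of granting that permission (the project ID is a placeholder; roles/compute.instanceAdmin.v1 is one predefined role that includes compute.instances.start, and a narrower custom role would also work):
gcloud projects add-iam-policy-binding my-project-id --member="serviceAccount:service-<my-gcp-numeric-id>@compute-system.iam.gserviceaccount.com" --role="roles/compute.instanceAdmin.v1"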
But what's interesting is:
Schedules created previously (a year ago) worked fine.
The above-mentioned account (service-<my-gcp-numeric-id>@...) is not displayed anywhere, even after I gave it permissions.
When I create a schedule in a brand-new project, it complains about that account missing the permission and doesn't let me attach instances; but in the original case there were no error messages.

force-stop RDS instance

I currently have an AWS RDS MariaDB instance stuck at rebooting. In this state, I can neither modify nor stop the instance.
I contacted AWS requesting to have the instance stopped, and they responded: "We cannot STOP the instance on behalf of our customers. It is available to do actions from your end."
I suspect it might have hit this MariaDB bug.
I'd like to try updating some innodb_* parameters to see if the instance can be started.
But there's nothing I can do at the moment because the instance is stuck at rebooting.
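For context, innodb_* settings on RDS are changed through the instance's DB parameter group rather than a my.cnf file. Once the instance responds again, the change would look roughly like this (the parameter group name is a placeholder, and innodb_force_recovery is shown purely as an illustration; check which innodb_* parameters your engine's parameter group actually exposes):
aws rds modify-db-parameter-group --db-parameter-group-name my-mariadb-params --parameters "ParameterName=innodb_force_recovery,ParameterValue=1,ApplyMethod=pending-reboot"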
I don't think I am the only one who has had this kind of issue.
AWS support is not helping at all; the only solution they suggest is deleting the instance and restoring from a backup. Restoring from backup would be my last option. Luckily, this is not our production database. But if restoring from backup is the only option when you hit a MariaDB bug and the instance ends up stuck at rebooting, I'd reconsider whether I should host MariaDB on EC2 instances instead.

How can I connect to a running AWS instance when my dashboard says no instances are running?

I feel like this is a beginner question, but after messing with it for days I'm completely stumped.
I set up an instance on Amazon AWS last year, and I'd like to SSH into the instance to upgrade some software. I am unable to find the original .pem file anywhere, and everything I find to try to solve the problem — including these directions — refers to selecting the running instance on my EC2 Dashboard.
However, when I log in as a root user, it shows there are no running instances. By default it comes up as N. Virginia, but when I check the other US locations none of them show any running resources. My instance's address (the link I use for mySQL and phpMyAdmin, for example) is in the form of ec2-XXX-XXX-XXX-XXX.ca-central-1.compute.amazonaws.com, if that makes any difference.
Any ideas on next steps? I have all the data on the running instance backed up so I can recreate things as necessary. I admit that I'm a beginner with AWS (obviously) but I super-pinky-promise to store my .pem file in a safe place next time...
Your instance is running in the AWS Canada region, as indicated by the region name ca-central-1 in the address, which is why you aren't seeing it in any US region.
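A quick way to confirm this from the CLI, assuming your credentials are configured (the region code is taken straight from the hostname):
aws ec2 describe-instances --region ca-central-1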

Which method to use for updating CA certificates for AWS RDS

I currently need to update the CA certificates for my AWS RDS instance, and as far as I am aware there are two ways to do this: by modifying my DB instance or by applying DB instance maintenance (source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL-certificate-rotation.html).
Does it matter which method I choose? Is one way particularly better than the other/better in some circumstances?
With both methods, the RDS instance needs a reboot (read: outage!).
In our case, the RDS client application (Java-based) had trouble re-establishing its JDBC/SSL connection with the rebooted RDS instance (after the CA upgrade), so we had to manually restart the RDS client application to bring the situation back to normal. Hence, we needed to know exactly when the RDS CA upgrade completed.
The workflow would therefore be like this:
1/ Add CA (2019) to your client application's trust store first!
2/ On the RDS side, use the 'Apply Immediately' option in lower environments (in Production, we also used 'Apply Immediately' but executed it during the approved maintenance window); see the sketch after this list.
3/ Wait for a few minutes for AWS to apply CA and reboot the RDS instance.
4/ Perform post-actions such as restarting your client application (if needed) and running regression tests.
In this way, we were able to limit the outage to a couple of minutes.
Alert: had we chosen the 'Apply during maintenance window' option, we would not have been in control of when AWS would upgrade the RDS CA, because AWS may choose any point in time during the specified maintenance window to perform the upgrade; it is not guaranteed to happen at the start of the window.
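For reference, a rough CLI sketch of steps 1/ and 2/ (the instance identifier, truststore, and certificate file names are placeholders; rds-ca-2019 is the CA identifier covered by the linked rotation guide):
keytool -importcert -alias rds-ca-2019 -file rds-ca-2019-root.pem -keystore truststore.jks
aws rds modify-db-instance --db-instance-identifier xxxxx --ca-certificate-identifier rds-ca-2019 --apply-immediately
Dropping --apply-immediately (or passing --no-apply-immediately) defers the change to the maintenance window instead.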
Hope this helps!
I like to test the update manually by modifying the DB instance in a test environment. Then I check any dependent software and make sure that everything is working.
Then in production I let the modification happen during the maintenance window. Since this change requires a reboot, I let it apply during my 3 a.m. Sunday maintenance window.
So both methods are handy depending on your needs. The end result is identical.

Why do EC2 Spot Instances change from cancelled_terminating to cancelled?

I have been struggling with this for the last 2 days:
A. Trying to create an AWS Spot Instance with the Deep Learning AMI for Linux (free).
B. Upon launching the EC2 instance, it says the Spot Instance request was successfully created, but it fails to create the instance.
C. I am using the Spot Fleet role, and later tried changing it to give this role admin access through policies.
However, the instance is never created, and in the History tab I see the Event Type = fleetRequestChange go from submitted to active to cancelled_terminating within a minute, and later to cancelled.
I have been reading through the documentation but don't see a reason for it to fail. I verified the region and the AMI as well, and tried changing the bid price as well as using the default recommended option. But nothing seems to work.
This is the link I'm referring to: AWS setup for Deep Learning
Please skip the initial portion about getting credits; you can jump directly to the EC2 instance configuration setup.
Kindly help! I have been unable to proceed for the past 2 days.
Thank you!
It worked perfectly fine for me.
Launched the Deep Learning AMI (ami-df77b6a7) in the Oregon region
Spot pricing as documented in the article you referenced
I could ssh into the instance after it launched
One thing you could check... Click the Limits link in your EC2 console to confirm that you can launch this type of instance.
Mine said:
Running On-Demand g2.2xlarge instances: 5
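If the console's History tab doesn't show enough detail, you could also pull the fleet request's event history from the CLI and look for the error code behind the cancellation (the request ID and start time are placeholders):
aws ec2 describe-spot-fleet-request-history --spot-fleet-request-id sfr-xxxxxxxx --start-time 2017-09-01T00:00:00Z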