How can I migrate an RDS serverless cluster to a provisioned one?

I know the question is normally asked the other way around, but it seems important to know how we can go back to a provisioned solution from a serverless cluster.
We will migrate to a Postgres RDS serverless cluster in a few days, and we would like to know how we can go back to a provisioned cluster if something goes wrong with the serverless solution. I didn't find any answers, as DMS doesn't seem to allow serverless sources.
Is there any way to achieve this, other than using pg_dump?
Thank you!

It depends on the version of AWS RDS Aurora Serverless that you are using:
For Serverless v1, just create a snapshot and restore it to a provisioned cluster; snapshots are compatible between provisioned clusters and Serverless v1 (a CLI sketch follows below).
Serverless v2 uses the same clusters as provisioned instances (see the first paragraph of the AWS Docs for Serverless v2). Select the serverless instance of the cluster, click the "Modify" button, and select another "DB instance class" under "Instance configuration" (a CLI sketch follows below). Note: you can also mix serverless and provisioned instances in the same cluster, which allows for more complex migration paths, such as first adding a new instance, removing the old one, and using writer instance promotion.
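As a hedged sketch, the Serverless v1 path via the AWS CLI could look like this (cluster names, snapshot name, and instance class are placeholders; adjust engine and version to your cluster):

    # snapshot the Serverless v1 cluster
    aws rds create-db-cluster-snapshot \
        --db-cluster-identifier my-serverless-v1 \
        --db-cluster-snapshot-identifier v1-to-provisioned
    # restore the snapshot as a provisioned cluster...
    aws rds restore-db-cluster-from-snapshot \
        --db-cluster-identifier my-provisioned \
        --snapshot-identifier v1-to-provisioned \
        --engine aurora-postgresql \
        --engine-mode provisioned
    # ...and give it a writer instance (restored clusters start with no instances)
    aws rds create-db-instance \
        --db-instance-identifier my-provisioned-writer \
        --db-cluster-identifier my-provisioned \
        --db-instance-class db.r5.large \
        --engine aurora-postgresql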
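And for the Serverless v2 path, the console's "Modify" is a single instance-class change; a hedged CLI equivalent (instance name and class are placeholders):

    # switch a Serverless v2 instance (class db.serverless) to a provisioned class
    aws rds modify-db-instance \
        --db-instance-identifier my-writer \
        --db-instance-class db.r5.large \
        --apply-immediately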
Even though I validated these methods with empty databases, you may first want to test them with your own test data.
Please also keep in mind that Serverless v2 was only released in April 2022 (after a long beta-testing period solely with MySQL), so you may want to test it thoroughly before using it in production.

Related

Upgrading from AWS Aurora Serverless v1 to v2 fails

So with Aurora Serverless v2 available, we wanted to upgrade from our Postgres Serverless v1 cluster.
The steps described are basically to take a snapshot, create a new provisioned Aurora cluster (not serverless), upgrade the provisioned cluster to Postgres 13.6, and then clone the new 13.6 cluster into a Serverless v2 one.
However, I get stuck on the last part: when trying to clone it I get "Serverless (incompatible minor version)" and the option to choose "Serverless" is greyed out...
What am I missing?
OK, so the information in the documentation here is very unclear: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.upgrade.html#aurora-serverless-v2.upgrade-from-serverless-v1-procedure
You shouldn't try to migrate the cluster from provisioned to serverless; the option is to convert the DATABASE (writer instance) in the provisioned cluster to Serverless v2!
So, here are the steps that might save someone else a few tries (and hours); a rough CLI equivalent follows the list:
Create a snapshot from the existing Aurora Serverless cluster's overview page (takes about 5 minutes, depending on DB size)
Open Snapshots and choose to restore the snapshot to the highest available PROVISIONED Aurora cluster version (for me it was Postgres 10.20); no other settings need to be changed other than "Provisioned" and the version (this will take several minutes, about 15 for me)
Refresh now and again using the "refresh" button (as the AWS console's auto-refresh isn't very reliable) to see when the cluster is ready (the database doesn't need to be ready, only the cluster!)
Once the Cluster is "available", open the Cluster and click "Modify"
Choose DB engine version 13.6 (at the time of writing, the only Postgres version working with Aurora Serverless v2), then scroll down and click "Continue"
Select "Apply immediately" and click "Modify Cluster"
Back on the Cluster overview page, again wait for the Cluster to upgrade (this will take several minutes, about 20 minutes)
Once the Cluster and database are available, select the DATABASE and choose "Modify"
Select Serverless v2 and then "Continue"
Choose to apply Immediately and modify the database
Wait for it to modify completely and you'll have your new Serverless V2 done!
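For reference, here is a hedged CLI sketch of the same flow (cluster/snapshot names, instance class, and the scaling range are placeholders; 13.6 was the Serverless v2-compatible Postgres version at the time):

    # 1) snapshot the Serverless v1 cluster
    aws rds create-db-cluster-snapshot \
        --db-cluster-identifier sv1-cluster \
        --db-cluster-snapshot-identifier sv1-snap
    # 2) restore as a provisioned cluster on the highest v1-compatible version,
    #    then give it a writer instance (restored clusters start with none)
    aws rds restore-db-cluster-from-snapshot \
        --db-cluster-identifier prov-cluster \
        --snapshot-identifier sv1-snap \
        --engine aurora-postgresql \
        --engine-version 10.20 \
        --engine-mode provisioned
    aws rds create-db-instance \
        --db-instance-identifier prov-writer \
        --db-cluster-identifier prov-cluster \
        --db-instance-class db.r5.large \
        --engine aurora-postgresql
    # 3) upgrade the cluster to the Serverless v2-compatible version
    aws rds modify-db-cluster \
        --db-cluster-identifier prov-cluster \
        --engine-version 13.6 \
        --allow-major-version-upgrade \
        --apply-immediately
    # 4) set a v2 scaling range on the cluster, then switch the writer
    #    to the serverless instance class
    aws rds modify-db-cluster \
        --db-cluster-identifier prov-cluster \
        --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=16
    aws rds modify-db-instance \
        --db-instance-identifier prov-writer \
        --db-instance-class db.serverless \
        --apply-immediately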
Another thing to note is that with Serverless v2 we'll apparently have a cluster with a database instance attached.
I'd assume this is because with Serverless v2 (which is pretty cool!) you can attach additional read-replica instances, which will "off-load" your writer instance, making it faster...
To add on to @Anders' answer: your RDS instance class must be one of the supported instance types, otherwise the modification will fail with an error.
For supported combinations of instance class and database engine version, see the documentation.

Best practices to initialize and populate a serverless PostgreSQL RDS instance from a CloudFormation stack deployment

We are successfully spinning up an AWS CloudFormation stack that includes a serverless RDS PostgreSQL instance. Once the PostgreSQL instance is in place, we're automatically restoring a PostgreSQL database dump (in binary format), created using pg_dump on a local development machine, onto the PostgreSQL instance just created by CloudFormation.
We're using a Lambda function (instantiated by the CloudFormation process) that includes a build of the pg_restore executable in a Lambda layer, and we've also packaged our database dump file within the Lambda.
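For context, the core of such a Lambda boils down to a pg_restore call along these lines (a hedged sketch; the layer path, environment variables, and dump filename are illustrative placeholders, not the poster's actual setup):

    # layers are mounted under /opt, the function package under /var/task
    /opt/bin/pg_restore \
        --host "$DB_HOST" --port 5432 --username "$DB_USER" \
        --dbname mydb --no-owner --clean --if-exists \
        /var/task/dump.custom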
The above approach seems complicated for something that has presumably been solved many times, but Google searches have revealed almost nothing that corresponds to our scenario. We may be thinking about our situation in the wrong way, so please feel free to offer a different approach (e.g., is there a CodePipeline/CodeBuild approach that would automate everything?). Our preference would be to stick with the AWS toolset as much as possible.
This process will be run every time we deploy to a new environment (e.g., development, test, stage, pre-production, demonstration, alpha, beta, production, troubleshooting) potentially per release as part of our CI/CD practices.
Does anyone have advice or a blog post that illustrates another way to achieve our goal?
If you have provisioned everything via IaC (Infrastructure as Code), most of the work is already done: you should be able to replicate your infrastructure to other accounts/stacks via different roles in your AWS credentials and config files by passing the --profile some-profile flag. I would recommend AWS SAM (Serverless Application Model) over plain CloudFormation, though, as I find I only need to write about half the code (roles and policies are mostly created for you) and get much better and faster feedback via the console. I would also recommend sam sync (currently in beta, but worth using) so you don't need to create a change set on code updates; only the code is updated, so deploys take 3-4 seconds. Some AWS SAM examples here (check both videos and patterns): https://serverlessland.com (official AWS site)
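A hedged sketch of those two commands (stack name and profile are placeholders):

    # deploy the same stack to another account/stage via a named profile
    sam deploy --stack-name my-stack --profile some-profile
    # during development, sync code-only changes without creating a change set
    sam sync --stack-name my-stack --watch --profile some-profile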
In terms of the restore of RDS, I'd probably create a base RDS instance, take a snapshot of it, and restore all other RDS instances from snapshots rather than creating them manually every time. You are able to copy snapshots, and in fact automate backups to snapshots cross-account (and cross-region if required), which should be a little cleaner.
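Sketching that snapshot route with the CLI (account IDs, names, and the snapshot ARN are placeholders):

    # share the base cluster snapshot with another account
    aws rds modify-db-cluster-snapshot-attribute \
        --db-cluster-snapshot-identifier base-snap \
        --attribute-name restore \
        --values-to-add 111122223333
    # then, from the target account, restore a new environment from it
    aws rds restore-db-cluster-from-snapshot \
        --db-cluster-identifier new-env-db \
        --snapshot-identifier arn:aws:rds:us-east-1:999988887777:cluster-snapshot:base-snap \
        --engine aurora-postgresql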
There is also a clean solution for replicating data near-instantaneously across AWS accounts using AWS DMS (Database Migration Service) and CDC (Change Data Capture). The process is: you have a source DB, a target DB, and a "replication instance" (e.g. an EC2 micro) that monitors your source DB and replicates it out to different instances, so you are always in sync on data. For example, if you have several devs working on separate "stacks" so as not to pollute each other's logs, you can replicate data and changes out seamlessly from one DB. This works with the source and destination DB both already in AWS, but of course you can also use AWS DMS for migrating a local DB over to AWS; some more info here: https://docs.aws.amazon.com/dms/latest/sbs/chap-manageddatabases.html
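A hedged sketch of wiring up such ongoing replication once the endpoints and replication instance exist (all ARNs and names are placeholders):

    # create an ongoing (full load + CDC) replication task between two endpoints
    aws dms create-replication-task \
        --replication-task-identifier dev-stack-sync \
        --source-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE \
        --target-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:TARGET \
        --replication-instance-arn arn:aws:dms:us-east-1:111122223333:rep:INSTANCE \
        --migration-type full-load-and-cdc \
        --table-mappings file://table-mappings.json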

Can we use AWS Data Migration Service for replication from Aurora Serverless as source?

My DMS replication instance (which is in the same VPC as the Aurora Serverless DB instance) is not able to find the DB while creating an endpoint in DMS.
However, I am able to create a Cloud9 instance in the same VPC as the Aurora Serverless instance and connect to the DB from there.
Am I missing something here, or is it not possible to use AWS DMS for migrating data from Aurora Serverless as a source?
The above issue was resolved by explicitly specifying the connection details for the Aurora Serverless cluster (instead of the dropdown selection). As for the original question of using an Aurora Serverless DB as a source in DMS replication:
Yes, if only one-time replication is required.
No, if ongoing replication is required. For ongoing replication, you must change the value of the binlog_format parameter on the source database. Although Aurora Serverless allows changing the value of this parameter, the change has no actual effect; only a few parameters are supported for change, which are listed here
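For illustration, explicitly specifying the source connection details when creating the DMS endpoint might look like this (a hedged sketch for an Aurora MySQL-compatible source; all values are placeholders):

    aws dms create-endpoint \
        --endpoint-identifier serverless-source \
        --endpoint-type source \
        --engine-name aurora \
        --server-name my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com \
        --port 3306 \
        --username admin \
        --password 'example-password'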

How to make AWS Aurora MySQL into a writable instance

I am working on a project where we are creating a new region for resilience purposes.
As we create the new region for the project, we also plan to create a read replica of the Aurora MySQL DB.
My question here has two parts
If the project in the existing region goes down, how do we make the read replica of Aurora MySQL in the other region the new master, i.e. writable?
I searched through a few videos and Stack Overflow questions, and I understand how to make an RDS MySQL read replica writable, but I don't see how to make such a "read-only" parameter modification in Aurora MySQL!
How could I do this change with a Jenkins Job?
Any help is highly appreciated!
Thanks!
You can use the AWS CLI for this, specifically the:
aws rds promote-read-replica-db-cluster
command, which, contrary to its name, actually applies to Aurora, as stated in the documentation for the command:
Note:
This action only applies to Aurora DB clusters.
You can find more information in the AWS documentation for Aurora, specifically the section on Promoting a Read Replica to Be a DB Cluster.
Seems like you should be able to use the Jenkins AWS plugin, but there are many options; a plain shell step running the AWS CLI also works.
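A hedged sketch of what such a Jenkins shell step could run (cluster identifier and region are placeholders):

    # promote the cross-region replica cluster to a standalone, writable cluster
    aws rds promote-read-replica-db-cluster \
        --db-cluster-identifier my-replica-cluster \
        --region eu-west-1
    # poll until the cluster reports "available" before pointing writes at it
    aws rds describe-db-clusters \
        --db-cluster-identifier my-replica-cluster \
        --region eu-west-1 \
        --query 'DBClusters[0].Status'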

neo4j cluster on Amazon AWS

This might sound like a newbie question, but I have a Neo4j instance running on the Amazon cloud. The instance is set to autoscale at 80% usage: once usage reaches 80%, Amazon will create another Neo4j instance with the same configuration, and will keep adding more each time the latest one reaches 80%.
My questions are -
1) Does this setup on Amazon mean we have a Neo4j cluster in place?
2) Do I need to do anything else in order to have a Neo4j cluster? What I have read is that you need some tool like ZooKeeper to maintain the cluster.
3) Will this current setup on Amazon have both instances as master, or will it be more like a master/slave setup?
Any help, feedback, suggestions would be helpful.
Thanks in advance,
Ravi
Yes, if you are using an Auto Scaling group for Neo4j, you need to set up a cluster. As @stefan-armbruster mentioned, you need the Neo4j Enterprise edition for that; in that case it's a master/slave setup.
Neo4j has its own solution for cluster management, instead of ZooKeeper.
But with AWS and EC2 there are a few open questions about how to properly deploy Neo4j with an Auto Scaling group.
From a configuration-file perspective (a sketch follows this list):
* You need to maintain a unique cluster/server ID for each machine in the cluster.
* You need to know the IP addresses/hostnames of the other machines in the cluster.
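As a hedged sketch of what that means in practice (Neo4j 1.9/2.x HA-era settings; the ID and hosts are placeholders):

    # per-instance HA settings; ha.server_id must be unique per machine
    cat >> conf/neo4j.properties <<'EOF'
    ha.server_id=1
    ha.initial_hosts=10.0.0.1:5001,10.0.0.2:5001,10.0.0.3:5001
    EOF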
Neo4j Enterprise edition features clustering; see the docs on this. With some well-written scripts around that to configure the new instances properly, I don't see a reason why AWS autoscaling should not work.