Migrate to Cloud SQL for PostgreSQL using Database Migration Service - database-migration

I would like to migrate a specific database from my Cloud SQL (PostgreSQL) instance to another Cloud SQL instance using the GCP Database Migration Service.
According to GCP documentation:
Database Migration Service migrates ALL databases under the
source instance other than the following databases:
For Cloud SQL sources: template databases template0 and template1
How can I migrate only one specific database from the Cloud SQL Instance?

Database Migration Service can be used to move a Cloud SQL instance from one place to another.
Based on the requirement described in your question, you can specify the following parameters to identify your source database:
Open the Source database engine drop-down list and select the engine type of your source database.
In the Connection profile name field, enter a name for the connection profile for your source database, such as My Connection Profile.
Keep the auto-generated Connection profile ID.
Enter the connectivity information: if you're replicating from a self-hosted database, enter the Hostname or IP address (domain or IP) and Port to access the host. (The default PostgreSQL port is 5432.)
Database Migration Service uses this information in the connection profile to migrate data from your specific source database to the destination Cloud SQL database instance.
This is a well-documented procedure; you can find more information in the documentation.
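The connectivity details from the steps above can be sketched as the request body you would send when creating a DMS connection profile. The field names below follow the Database Migration Service REST API's PostgreSQL connection profile, but treat them as an assumption and check the current API reference; the host and credentials are placeholders:

```python
# Minimal sketch: assemble the connectivity information from the steps
# above into a connection-profile body for a PostgreSQL source.
# Field names are assumed from the DMS REST API; verify before use.

DEFAULT_POSTGRES_PORT = 5432  # the default PostgreSQL port

def build_connection_profile(name, host, username, port=DEFAULT_POSTGRES_PORT):
    """Assemble a connection-profile body for a PostgreSQL source."""
    if not (1 <= port <= 65535):
        raise ValueError(f"invalid port: {port}")
    return {
        "displayName": name,      # e.g. "My Connection Profile"
        "postgresql": {
            "host": host,         # hostname or IP address of the source
            "port": port,
            "username": username,
        },
    }

profile = build_connection_profile(
    "My Connection Profile", "10.0.0.5", "migration-user")
```

In a real migration you would pass a body like this to the connection-profile creation call (or enter the same values in the console form shown in the steps above).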

Related

Provide steps for Cloud SQL(GCP) Database Restoration from automated Backups

We have SQL backup retention set to 7 days, so we have the last 7 days of backups.
We followed the restore process from https://cloud.google.com/sql/docs/mysql/backup-recovery/restoring.
I have created a new PostgreSQL instance with the same configuration as the running PostgreSQL instance.
I selected the latest backup from the automated backups and restored it into the new SQL instance. When I connected to one of the restored databases using pgAdmin, the data in the restored database and the running database were not the same, i.e. some data from tables is missing.
Please provide steps on how to recover full data from backups
You can try to use Database Migration Service. Select your Azure database as the source (you might need a public IP on your database, or set up a VPN to allow private communication), and then schedule a migration job.
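Before assuming data was lost in the restore, it is worth checking how far the restored database actually diverges from the running one, for example by comparing per-table row counts. A minimal sketch, using sqlite3 as a self-contained stand-in for PostgreSQL (against Cloud SQL you would open two client connections instead):

```python
import sqlite3

def row_counts(conn, tables):
    """Return {table: row count} for the given connection."""
    return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
            for t in tables}

def diff_counts(source_conn, restored_conn, tables):
    """Report tables whose row counts differ between source and restore."""
    src = row_counts(source_conn, tables)
    dst = row_counts(restored_conn, tables)
    return {t: (src[t], dst[t]) for t in tables if src[t] != dst[t]}

# Demo with two in-memory databases standing in for the two instances;
# the restored copy is deliberately missing one row.
source = sqlite3.connect(":memory:")
restored = sqlite3.connect(":memory:")
for conn, n in ((source, 3), (restored, 2)):
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
    conn.executemany("INSERT INTO orders (id) VALUES (?)",
                     [(i,) for i in range(n)])

mismatches = diff_counts(source, restored, ["orders"])
```

Note that an automated backup is a point-in-time snapshot, so rows written after the backup was taken will legitimately be absent from the restore.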

Data Replication from Amazon RDS for MySQL to Amazon Aurora for PostgreSQL

We need to replicate data from an Amazon RDS (MySQL) database to an Aurora PostgreSQL database. Each database is in a different AWS account and region.
The data must be replicated to the PostgreSQL instance every 6 hours, so we need a guarantee that the data is not duplicated, even when a field of an existing record is updated.
Which method, tool, or design is better to do that? (Could be different to AWS.)
You could use one of the following services provided by AWS:
Database Migration Service
Glue
The AWS Database Migration Service supports:
Using an Amazon-Managed MySQL-Compatible Database as a Source for AWS DMS
Using a PostgreSQL Database as a Target for AWS Database Migration Service
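Whichever service moves the data, the "no duplicates even when a field changes" requirement is usually handled by loading with an upsert keyed on the primary key, so re-running the 6-hour job overwrites rather than duplicates. A minimal sketch using sqlite3 so it is self-contained; PostgreSQL's `INSERT ... ON CONFLICT` has the same shape, and in Aurora PostgreSQL you would run it through your client library with a real `customers` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")

def load_batch(conn, rows):
    """Idempotent load: insert new rows, update changed ones by primary key."""
    conn.executemany(
        "INSERT INTO customers (id, email) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET email = excluded.email",
        rows,
    )

# First 6-hour run.
load_batch(conn, [(1, "a@example.com"), (2, "b@example.com")])
# Next run: row 1 changed, row 2 unchanged -- no duplicates either way.
load_batch(conn, [(1, "a+new@example.com"), (2, "b@example.com")])

rows = conn.execute("SELECT id, email FROM customers ORDER BY id").fetchall()
```

The table and column names here are hypothetical; the point is that an upsert keyed on the primary key makes the periodic load safe to repeat.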

How do I migrate data from a local to a remote MySQL instance

I have a local MySQL database and I want to migrate the data inside of it to a remote MySQL database (using RDS on AWS). How can I migrate my data between the two instances?
AWS DMS helps you migrate large, terabyte-scale databases to the AWS
Cloud easily and securely. During migration, the source database
remains fully operational, minimizing downtime.
But judging from your question, you want a homogeneous data migration, and as per the AWS documentation:
If you're performing a homogeneous migration, use your engine’s native
tools, such as MySQL dump or MySQL replication.
Refer to this answer for using SQL Dump on larger data.
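For a homogeneous MySQL-to-RDS move, the native-tools route boils down to a mysqldump from the local instance loaded into the remote one. A minimal sketch that only assembles the two command lines (the hostnames, user, and database name are placeholders; you would run the pair via a shell pipe or subprocess):

```python
def migration_commands(db, local_host, remote_host, user="admin"):
    """Build mysqldump/mysql argument lists for a dump-and-load migration."""
    dump = ["mysqldump", "-h", local_host, "-u", user, "-p",
            "--single-transaction",  # consistent snapshot for InnoDB tables
            db]
    load = ["mysql", "-h", remote_host, "-u", user, "-p", db]
    return dump, load

# remote_host would be your RDS endpoint (placeholder shown here).
dump_cmd, load_cmd = migration_commands(
    "appdb", "localhost", "mydb.example.us-east-1.rds.amazonaws.com")
```

Piped together (`mysqldump ... | mysql ...`), this streams the dump straight into the RDS instance without an intermediate file.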
Use the AWS Database Migration Service available in AWS. You provide your database endpoint, i.e. your on-premises database server's endpoint, set the DB engine parameters to your requirements, and launch the task. It takes 10-15 minutes to migrate your data to the cloud, and from there you can continue accessing your database from AWS itself.
The other method is to take a recent backup of your on-premises database, launch an EC2 instance in AWS, and install the same database engine you use on premises. Copy the backup file from your system to the cloud and bring the database up from it. Then set up an RDS instance of the same engine type as the one you installed on EC2, and connect the endpoints.

DB role in a WSO2 Identity Server Clustered Deployment

I want to set up a cluster of WSO2 Identity Servers for HA. From the documentation I understand that there can be two IS nodes which are load-balanced either through ELB or Apache.
In my case the user-store will be an Active Directory server.
My question is around the database requirements for the cluster. Given that the user store is AD, what kind of a database setup do I need? Do I need a database cluster such as MySQL (which makes sense for HA), and what would it be used for?
Your user store is AD, and the database can be anything; there is no special recommendation for databases. You can use any SQL database, but WSO2 IS has only been tested with the following database types: H2, DB2, MS SQL, MySQL, MySQL Cluster, Oracle, Oracle RAC, PostgreSQL, and Informix. You can use any of these, but embedded H2 is not recommended for production. The database also does not depend on the user store you are using: you can use any type of user store (JDBC, AD, LDAP) with any type of database independently. As you mentioned, it would be better if you can have database-level clustering to achieve HA. If you are using MySQL, you can use MySQL Cluster; more detail is available in the WSO2 article as well.

Amazon RDS - Creating/cloning multiple databases in a single rds instance

We are developing a Java/J2EE application which uses RDS.
We want to create a separate database per customer on a single RDS instance.
We want to create a template SQL schema with tables and some metadata.
When the new customer is created we want to clone the template schema and create a separate db for that customer.
Can you let me know if this is possible using AWS SDK APIs? Or if there is any other way?
Regards,
Dattatray.
The general design for handling individual databases in a multi-tenant application would be as follows:
Have a separate DB for identifying / allocating a specific database to a particular client [ the metadata database ].
During launch / on-boarding of a new client, run the template SQL snippet with a unique DB name for that client, and record this information in the metadata database.
You can dynamically substitute the DB name into the SQL snippet before running the schema for the new client, or use an ORM like Hibernate to create the specified database elements.
Amazon RDS doesn't impose any restriction on the number of databases you can create in a single instance, so you do not need to worry about an upper limit. You do not need to use any of the AWS SDKs or APIs; you just need to concentrate on the app and its connection strings.
Extract from AWS FAQs for RDS :
Q: How many databases or schemas can I run within a DB Instance?
RDS for MySQL: No limit imposed by software
RDS for Oracle: 1 database per instance; no limit on number of schemas per database imposed by software
RDS for SQL Server: 30 databases per instance
RDS for PostgreSQL: No limit imposed by software
You wouldn't need to use any SDK for RDS, as you are not really modifying the instance in any way. The instance will always be running, and you just want to create new database schemas on that instance. This would be done using the SQL connector library you are using in your Java code (or could be scripted in another language such as Perl or Python for example).
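The per-customer flow above can be sketched end to end: a metadata database records which DB belongs to which client, and the template schema is replayed for each new tenant. A minimal sketch using sqlite3 as a stand-in for RDS (the DDL, table names, and `tenant_` naming scheme are hypothetical; in the real app you would run the same statements through your JDBC connection, with `CREATE DATABASE` in place of the in-memory connection):

```python
import sqlite3

# Template schema replayed for every new customer (stand-in DDL).
TEMPLATE_DDL = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)",
]

meta = sqlite3.connect(":memory:")  # the "metadata database"
meta.execute("CREATE TABLE tenants (customer TEXT PRIMARY KEY, db_name TEXT)")

def onboard_customer(customer):
    """Create the customer's database from the template and register it."""
    db_name = f"tenant_{customer}"
    tenant_db = sqlite3.connect(":memory:")  # stand-in for CREATE DATABASE
    for stmt in TEMPLATE_DDL:
        tenant_db.execute(stmt)
    meta.execute("INSERT INTO tenants (customer, db_name) VALUES (?, ?)",
                 (customer, db_name))
    return db_name, tenant_db

db_name, tenant_db = onboard_customer("acme")
tables = [r[0] for r in tenant_db.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```

At lookup time, the application resolves a customer to a connection string via the `tenants` table and opens the connection with its normal SQL connector library, exactly as the answer suggests.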