AWS DMS - Migrate - only schema

We have noticed that if a table is empty in SQL Server, the empty table does not come over via DMS; it only shows up after a record is inserted.
Just checking: is there a way to get the schema only from DMS?
Thanks

You can use the AWS Schema Conversion Tool (SCT) for moving DB objects and schema. It's a free tool from AWS and can be installed on an on-premises server or on EC2. It produces a useful assessment report before you actually migrate the schema and other DB objects, showing how many tables, stored procedures, functions, etc. can be migrated directly, and suggesting possible solutions for the rest.

Related

Is it possible to retrieve the Timestream schema via the API?

The AWS console for Timestream can show you the schema for tables, but I cannot find any way to retrieve this information programmatically. While the schema is dynamically created from the write requests, it would be useful to check the schema to validate that writes are going to generate the same measure types.
As shown in this document, you can run the query below, and it will return the table's schema.
DESCRIBE "database"."tablename"
With the right AWS profile set up in your environment, you can run this query from the CLI:
aws timestream-query query --query-string 'DESCRIBE "db-name"."tb-name"' --no-paginate
This should return the table schema.
You can also retrieve the same data programmatically in TypeScript/JavaScript, Python, Java, or C# by using the Timestream Query client from the AWS SDK.
NOTE:
If the table is empty, this query will return an empty result, so make sure you have data in your db/table before querying for its schema. Good luck!
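The SDK route mentioned above can be sketched in Python with boto3. This is a sketch, not official sample code; the `db-name`/`tb-name` values are placeholders for your own database and table:

```python
def describe_query(database: str, table: str) -> str:
    """Build the DESCRIBE statement for a Timestream table."""
    return f'DESCRIBE "{database}"."{table}"'

def fetch_schema(database: str, table: str):
    """Run DESCRIBE via the Timestream Query API; returns (name, type) pairs."""
    import boto3  # AWS SDK; requires credentials with Timestream query access
    client = boto3.client("timestream-query")
    result = client.query(QueryString=describe_query(database, table))
    # Each row of a DESCRIBE result carries the column name and its Timestream type.
    return [(row["Data"][0]["ScalarValue"], row["Data"][1]["ScalarValue"])
            for row in result["Rows"]]

if __name__ == "__main__":
    for name, ts_type in fetch_schema("db-name", "tb-name"):
        print(name, ts_type)
```

As with the CLI version, this returns nothing for an empty table, so the schema check only works once some data has been written.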

Row level changes captured via AWS DMS

I am trying to migrate a database using AWS DMS. The source is Azure SQL Server and the destination is Redshift. Is there any way to know which rows were updated or inserted? We don't have any audit columns in the source database.
Redshift doesn't track changes, and you would need audit columns to do this at the user level. You may be able to deduce it from Redshift query history and saved data input files, but that will be solution dependent. Query history can be retrieved in a couple of ways, but both require some action. The first is to review the query logs, but these are only retained for a few days; if you need to look back further, you need a process that saves these system tables before the information is lost. The other is to turn on Redshift audit logging to S3, but this must be enabled before you run queries on Redshift. There may also be some logging from DMS that could help, but the bottom-line answer is that row-level change tracking is not something Redshift does by default.
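Both options above can be scripted. In this Python sketch, the retention window, cluster identifier, and bucket name are hypothetical placeholders; it shows a query against the short-lived `stl_query` system log and the boto3 call that turns on audit logging to S3:

```python
def recent_queries_sql(hours: int = 24) -> str:
    """SQL against stl_query, Redshift's short-lived log of executed queries."""
    return (
        "SELECT query, starttime, endtime, trim(querytxt) AS querytxt "
        "FROM stl_query "
        f"WHERE starttime > dateadd(hour, -{hours}, getdate()) "
        "ORDER BY starttime"
    )

def enable_audit_logging(cluster_id: str, bucket: str) -> None:
    """Turn on Redshift audit logging to S3 (only captures queries run afterwards)."""
    import boto3  # AWS SDK; needs credentials allowed to call redshift:EnableLogging
    boto3.client("redshift").enable_logging(
        ClusterIdentifier=cluster_id,
        BucketName=bucket,
        S3KeyPrefix="audit/",
    )
```

Note that neither approach gives true row-level change data; they only tell you which statements ran, from which changed rows might be deduced.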

AWS DMS validation fails

I migrated data from a SQL Server database to Aurora Postgres using AWS DMS. Everything works and the data is migrated correctly, but validation then fails. There are two types of validation errors:
GUIDs in the source database are all uppercase, while in the target they are lowercase:
{'record_id': 'DA7D98E2-06EA-4C3E-A148-3215E1C23384'}
{'record_id': 'da7d98e2-06ea-4c3e-a148-3215e1c23384'}
For some reason, validation fails between a timestamp(4) column in Postgres and a datetime2(4) column in SQL Server. The Postgres time appears to have two extra zeros at the end, but when selecting the data from the table normally, it is exactly the same:
{'created_datetime_utc': '2018-08-24 19:58:28.4900'}
{'created_datetime_utc': '2018-08-24 19:58:28.490000'}
Any ideas how to fix this? I tried creating transformation rules for the columns, but they do not work.
Thank you.
Thanks to this article https://www.sentiatechblog.com/aws-database-migration-service-dms-tips-and-tricks, new mapping rules fixed all the validation issues. These rules cannot be added in the AWS Console, only in the task's table-mapping script.
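To see why these are formatting differences rather than real data differences, note that both reported pairs compare equal once parsed. A standalone Python illustration (not DMS code) using the values from the question:

```python
import uuid
from datetime import datetime

# The record pair reported by DMS validation for the GUID column.
src = {'record_id': 'DA7D98E2-06EA-4C3E-A148-3215E1C23384',
       'created_datetime_utc': '2018-08-24 19:58:28.4900'}
tgt = {'record_id': 'da7d98e2-06ea-4c3e-a148-3215e1c23384',
       'created_datetime_utc': '2018-08-24 19:58:28.490000'}

# GUIDs differ only in case: parsing yields the same UUID value.
assert uuid.UUID(src['record_id']) == uuid.UUID(tgt['record_id'])

# The timestamps differ only in trailing-zero padding of the fractional
# seconds; parsed as datetimes they are identical.
fmt = '%Y-%m-%d %H:%M:%S.%f'
assert (datetime.strptime(src['created_datetime_utc'], fmt)
        == datetime.strptime(tgt['created_datetime_utc'], fmt))
```

This is why the rows look identical when selected normally: only DMS's string-level comparison sees a mismatch, which is what the mapping-rule overrides from the article work around.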

Is it possible to change a database (schema) name in AWS Athena?

I created a database and some tables with data on AWS Athena and would like to rename the database without deleting and re-creating the tables and database. Is there a way to do this? I tried the standard SQL ALTER DATABASE statement, but it doesn't seem to work.
thanks!
I'm afraid there is no way to do this, according to this official forum thread. You would need to remove the database and re-create it. However, since Athena does not store any data by itself, deleting a table or a database won't impact your data stored on S3. Therefore, if you kept all the scripts that create the external tables, re-creating the database should be a fairly quick thing to do.
Athena doesn't support renaming databases. You need to re-create the database with a new name.
You can also use Presto, the open-source engine that Athena is built on, which supports more DDL statements.
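Since Athena databases live in the AWS Glue Data Catalog, the re-creation can also be scripted instead of done by hand. A hedged boto3 sketch; the list of read-only fields to strip from Glue's response is an assumption and may need adjusting for your catalog:

```python
def portable_table_input(table: dict) -> dict:
    """Strip the read-only fields Glue returns so the table can be re-created."""
    read_only = {"DatabaseName", "CreateTime", "UpdateTime", "CreatedBy",
                 "IsRegisteredWithLakeFormation", "CatalogId", "VersionId"}
    return {k: v for k, v in table.items() if k not in read_only}

def copy_database(old_name: str, new_name: str) -> None:
    """Re-create every table from old_name under new_name; S3 data is untouched."""
    import boto3  # AWS SDK; needs Glue read/write permissions
    glue = boto3.client("glue")
    glue.create_database(DatabaseInput={"Name": new_name})
    paginator = glue.get_paginator("get_tables")
    for page in paginator.paginate(DatabaseName=old_name):
        for table in page["TableList"]:
            glue.create_table(DatabaseName=new_name,
                              TableInput=portable_table_input(table))
```

Because the table definitions only point at S3 locations, the copies query the same data as the originals; you can drop the old database once you've verified the new one.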

AWS Data Migration Service (DMS) not moving identity, foreign keys, default values, indexes

I was able to clone one of my SQL Server databases using DMS. It copied clustered indexes and primary key definitions, along with the data.
However, it did not move/copy other constraints (identity, foreign key definitions, default values) or any secondary indexes.
I have scripted out the indexes, default constraints, and foreign keys and executed them successfully. But is there a way to turn on IDENTITY on the respective columns?
It turns out there is no way to do this with AWS DMS, as it does not migrate secondary indexes, foreign keys, or identity columns. You need to do it yourself by generating a script from SSMS or writing your own.
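Before re-applying IDENTITY by hand, it helps to inventory which source columns need it. A sketch that queries the source server's `sys.identity_columns` catalog view; the pyodbc connection is an assumption, and any SQL Server client works:

```python
def identity_columns_sql() -> str:
    """T-SQL listing every identity column, with its seed and increment."""
    return (
        "SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name, "
        "OBJECT_NAME(object_id) AS table_name, "
        "name AS column_name, seed_value, increment_value "
        "FROM sys.identity_columns"
    )

def list_identity_columns(conn_str: str):
    """Fetch the identity columns that must be re-created on the target."""
    import pyodbc  # assumed driver; any SQL Server client library works
    with pyodbc.connect(conn_str) as conn:
        return conn.execute(identity_columns_sql()).fetchall()
```

Note that SQL Server cannot simply switch IDENTITY on for an existing column; each affected column or table has to be rebuilt (which is why scripting from SSMS is the usual route).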
Check this FAQ from Amazon:
Q. Does AWS Database Migration Service migrate the database schema for me?
To quickly migrate a database schema to your target instance you can rely on the Basic Schema Copy feature of AWS Database Migration Service. Basic Schema Copy will automatically create tables and primary keys in the target instance if the target does not already contain tables with the same names. Basic Schema Copy is great for doing a test migration, or when you are migrating databases heterogeneously e.g. Oracle to MySQL or SQL Server to Oracle. Basic Schema Copy will not migrate secondary indexes, foreign keys or stored procedures. When you need to use a more customizable schema migration process (e.g. when you are migrating your production database and need to move your stored procedures and secondary database objects), you can use the AWS Schema Conversion Tool for heterogeneous migrations, or use the schema export tools native to the source engine, if you are doing homogenous migrations like (1) SQL Server Management Studio's Import and Export Wizard, (2) Oracle's SQL Developer Database Export tool or script the export using the dbms_metadata package, (3) MySQL's Workbench Migration Wizard.