How to use MapBooleanAsBoolean in AWS Database Migration Service?

In the latest release, AWS DMS introduced the MapBooleanAsBoolean connection parameter to allow keeping booleans as booleans when migrating from Postgres to Redshift. Unfortunately, the docs are very imprecise about how to use it. I tried adding it as an extra connection attribute on both the source and target endpoints, as both mapBooleanAsBoolean and migrateBooleanAsBoolean, but nothing worked for me. Has anyone been able to make it work?
Link to docs for reference:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReleaseNotes.html

Not sure if you found an answer for this, but adding the extra connection attribute to the source Postgres endpoint:
mapBooleanAsBoolean=true;
worked for me. My target was S3 Parquet files, though.
It can be done via the console.
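If you would rather set it through the API than the console, a minimal boto3 sketch might look like the following. The endpoint ARN below is a placeholder, and note that ExtraConnectionAttributes is passed as a single string, so include any other attributes you already rely on:

import boto3

# Hypothetical ARN - replace with your own source Postgres endpoint's ARN.
SOURCE_ENDPOINT_ARN = "arn:aws:dms:eu-west-1:123456789012:endpoint:EXAMPLE"

dms = boto3.client("dms")

# Attach the extra connection attribute to the source endpoint.
dms.modify_endpoint(
    EndpointArn=SOURCE_ENDPOINT_ARN,
    ExtraConnectionAttributes="mapBooleanAsBoolean=true;",
)

The same attribute string can be pasted into the endpoint's extra connection attributes field in the console.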

Related

Migrate Apache Cassandra to Amazon DynamoDB

I want to migrate the database from Apache Cassandra to Amazon DynamoDB.
I am following this user guide
https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/agents.cassandra.html
When I try to create a clone data centre for extraction, it throws an error.
If you read through that document, you'll find that the conversion tool supports very old versions of Cassandra: 3.11.2, 3.1.1, 3.0, 2.1.20.
There will be a lot of configuration items in your cassandra.yaml that will not be compatible with the conversion tool, including replica_filtering_protection, since that property was not added until C* 3.0.22 / 3.11.8 (CASSANDRA-15907).
You'll need to engage AWS Support to figure out what migration options are available to you. Cheers!

Leveraging AWS Neptune Gremlin Client Library

We're looking to leverage the Neptune Gremlin client library to get automatic load balancing and endpoint refreshes.
There is a blog article here: https://aws.amazon.com/blogs/database/load-balance-graph-queries-using-the-amazon-neptune-gremlin-client/
There is also a repo containing the code here:
https://github.com/awslabs/amazon-neptune-tools/tree/master/neptune-gremlin-client
However, the artifacts aren't published anywhere. Is it still possible to do this? Ideally, we'd avoid vendoring the code into our codebase, since we would then forfeit updates.
The artifacts for several of the tools in that repo can be found here.
https://github.com/awslabs/amazon-neptune-tools/releases/tag/amazon-neptune-tools-1.2

Cannot set consistency level when querying Amazon Keyspaces service from DataGrip

I'm trying to perform inserts on Amazon's Managed Cassandra service from IntelliJ's DataGrip IDE; however, I receive the following error:
Consistency level LOCAL_ONE is not supported for this operation. Supported consistency levels are: LOCAL_QUORUM
This is due to Amazon using the LOCAL_QUORUM consistency level for writes.
I tried to set the consistency level with CONSISTENCY LOCAL_QUORUM; before running other queries, but it returned the following error:
line 1:0 no viable alternative at input 'CONSISTENCY' ([CONSISTENCY])
From my understanding, this is because CONSISTENCY is a cqlsh command and not a CQL command.
I cannot find any way to set the consistency level from within DataGrip so that I can run scripts and populate my tables.
Ultimately, I will use plain cqlsh if I cannot find a solution, but I was hoping to use DataGrip as I find it useful and have many databases already configured. I hope someone can shed some light on the issue; this seems like it should be a basic feature.
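For what it's worth, I understand that outside an IDE the consistency level is normally set on the driver itself rather than in CQL; a rough sketch with the Python cassandra-driver (the contact point, certificate file, credentials, keyspace, and table below are all placeholders for a Keyspaces setup) would be:

from ssl import SSLContext, PROTOCOL_TLSv1_2, CERT_REQUIRED
from cassandra import ConsistencyLevel
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT

# Placeholder endpoint and service-specific credentials for Amazon Keyspaces.
CONTACT_POINT = "cassandra.us-east-1.amazonaws.com"
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
ssl_context.load_verify_locations("sf-class2-root.crt")  # Amazon root certificate
ssl_context.verify_mode = CERT_REQUIRED
auth = PlainTextAuthProvider(username="my-service-user", password="my-service-password")

# The consistency level lives in the driver's execution profile, not in a CQL statement.
profile = ExecutionProfile(consistency_level=ConsistencyLevel.LOCAL_QUORUM)

cluster = Cluster(
    [CONTACT_POINT],
    port=9142,
    ssl_context=ssl_context,
    auth_provider=auth,
    execution_profiles={EXEC_PROFILE_DEFAULT: profile},
)
session = cluster.connect()
session.execute("INSERT INTO my_keyspace.my_table (id, value) VALUES (1, 'hello')")

But I would still prefer to stay inside DataGrip for day-to-day work.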
I am Max from the DataGrip team, and the correct answer is:
It could be a JDBC driver issue where the desired method hasn't been implemented yet, since you're trying to run a pure cqlsh command as SQL. Follow the issue DBE-10638.
It's a DataGrip bug, see https://youtrack.jetbrains.com/issue/DBE-10182 :
Cassandra 'CONSISTENCY' command is not supported
So upvote that bug, and maybe add a comment that it makes DataGrip useless for writing to Amazon Managed Cassandra / Amazon Keyspaces (for Apache Cassandra).
I am using DataGrip version 2020.1.3 (paid license) and encountered the same problem:
I cannot change the consistency level from ONE to LOCAL_QUORUM.
I have already opened an issue and am waiting for the investigation.
In the meantime, I tried many tools and found that DBeaver works; the consistency level can be selected in its configuration GUI.
https://dbeaver.com/download

Installing Sitecore 9.2 in AWS

Can anyone provide an answer to the query below?
I want to install Sitecore 9.2 on AWS; does the installation process require SQL VMs?
Or can someone point me to the right article for this?
Thanks in advance.
From my own experience, Sitecore XM can use AWS RDS for the database; whether that is a good idea you must decide yourself. As for the installation, Sitecore 9 uses contained databases, which can break the SIF installation. You can turn contained database authentication on in AWS RDS, or use a normal database user account, but you need a workaround: for example, first installing on a SQL Server instance and then migrating to RDS, installing manually without SIF, or adjusting SIF.
For more information see:
https://jeroen-de-groot.com/2018/07/19/deploying-sitecore-9-in-aws-rds/
https://sitecore.stackexchange.com/questions/11047/sitecore-9-installation-using-sql-active-directory-user/11063
https://sitecore.stackexchange.com/questions/13859/why-do-we-require-contained-database-for-sitecore-9
https://sheenumalhi.wordpress.com/2019/02/19/sitecore-9-with-aws-rds/

Using impdp/expdp with RDS Oracle on AWS

I'm very new to Amazon Web Services, especially their RDS system. I have set up an Oracle database (11.2) and now want to import a dump we made locally from our server using expdp. Apparently, the ability to use expdp/impdp on AWS is quite new. From what I understand, when creating an Oracle database on RDS, a DATA_PUMP_DIR is automatically created. What is less obvious is how to access this directory and make our local dump available to RDS. I've tried to read the following information on their website: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html But there are a lot of things I don't understand:
Why do I have to set up an EC2 instance when the dump file is actually on my local computer (and I can access the RDS database remotely using SQL*Plus or SQL Developer)?
They often use the 'sys' or 'system' user in their examples, but the security settings documentation for Oracle says that these users are made unavailable on RDS => you cannot connect to the database as SYSDBA.
Could someone please point me to a simple and clear tutorial on how to use impdp on AWS?
Thanks
It is possible to use Data Pump on RDS now.
duduklein's answer was correct when he wrote it, but the RDS docs now have details about using Oracle Data Pump. The doc page URL is unchanged from the link originally posted in the question (nice job, Amazon!), but it now has new content on using Data Pump.
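For illustration only, here is a rough sketch of what the import looks like once the dump file is already sitting in the RDS instance's DATA_PUMP_DIR (for example, copied there with DBMS_FILE_TRANSFER over a database link from an EC2-hosted Oracle instance): the job is driven through the DBMS_DATAPUMP package, submitted here from Python via cx_Oracle. The connection details, schema, and file name are placeholders, so treat the doc page above as the authoritative procedure.

import cx_Oracle

# Placeholder connection details for the RDS master user (SYS/SYSTEM are not available on RDS).
conn = cx_Oracle.connect("admin", "my-password", "my-rds-endpoint:1521/ORCL")
cur = conn.cursor()

# Start a schema-level import of a dump file that is already in DATA_PUMP_DIR.
cur.execute("""
DECLARE
  hdl NUMBER;
BEGIN
  hdl := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(
    handle    => hdl,
    filename  => 'my_dump.dmp',        -- placeholder dump file name
    directory => 'DATA_PUMP_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.METADATA_FILTER(hdl, 'SCHEMA_EXPR', 'IN (''MYSCHEMA'')');
  DBMS_DATAPUMP.START_JOB(hdl);
END;
""")
conn.close()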
It's not possible for now. I have just contacted Amazon (through premium support) about the same issue, and they told me that this is a feature request that was already passed to the RDS team, but there is no estimate of when it will be available.
The only way you can import dump files is by using the "exp" utility instead of "expdp". In that case, you can use the "imp" utility to import the data into RDS.