AWS - Data Migration Service

Trying to migrate RDS MySQL to Redshift. When connecting the AWS Redshift database in the target connection, it throws the error:
Test Endpoint failed: Application-Status: 1020912, Application-Message: N/A, Application-Detailed-Message: N/A
Please help me resolve this.

I resolved my issue by adding a few ingress/egress rules that weren't defined on the security groups attached to the DMS replication instance and the Redshift cluster.
Check this link for further information: AWS DMS endpoint connection to Redshift not working
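In case it's useful, here is a minimal CLI sketch of that kind of rule; the group IDs are placeholders (sg-redshift attached to the cluster, sg-dms attached to the replication instance), and 5439 is the default Redshift port. Security groups allow all egress by default, so the ingress side is usually what's missing:
# allow the DMS replication instance's security group into the Redshift port
aws ec2 authorize-security-group-ingress \
    --group-id sg-redshift \
    --protocol tcp --port 5439 \
    --source-group sg-dms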

For me, using the private IP of Redshift as the server name of the endpoint worked. I don't know why, but it's working now.

AWS - Data Migration Service Endpoint Redshift error (Test failed)
Using the private IP of the Redshift leader node as the server name of the endpoint will work.

In your DMS (Database Migration Service) endpoint configuration, where you enter the Server Name value, avoid the Redshift cluster DNS name or the public IP address. Instead, try using the private IP of the Redshift leader node. It worked for me after I changed to the leader node's private IP.
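If you need to look up that private IP, something like the following should return it (my-cluster is a placeholder for your cluster identifier):
# list the leader node's private IP from the cluster's node metadata
aws redshift describe-clusters \
    --cluster-identifier my-cluster \
    --query "Clusters[0].ClusterNodes[?NodeRole=='LEADER'].PrivateIPAddress"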

Related

Query Setup Error when Testing Endpoint in AWS DMS

I'm getting the following error when trying to test a target connection to a Redshift DB using the AWS DMS tool:
Test Endpoint failed: Application-Status: 1020912, Application-Message: Query setup error
The Redshift cluster is in the same VPC, subnet group, and security group as the replication instance. The security group allows all inbound traffic from itself. Is there somewhere I can get the full logs for the endpoint test, or more details about this error?
A query setup error is not a network error; rather, it relates to errors coming from the SQL engine itself. In my case, AWS support was able to look up the logs, which contained an error line about the database name being wrong.
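If you want to retry and inspect the result yourself, the DMS CLI exposes the same test the console runs; the ARNs below are placeholders:
# kick off the endpoint test
aws dms test-connection \
    --replication-instance-arn <replication-instance-arn> \
    --endpoint-arn <endpoint-arn>
# then poll the outcome; LastFailureMessage sometimes carries more detail than the console shows
aws dms describe-connections \
    --filters Name=endpoint-arn,Values=<endpoint-arn>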

Kafka connector failing to connect to AWS MSK

I am trying to configure MSK Connect in AWS, and the worker logs show the following errors:
[Worker-02003b81ffe0ee9c3] [2022-06-02 14:26:40,955] INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager:235)
[Worker-02003b81ffe0ee9c3] org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1654180000954, tries=1, nextAllowedTryMs=1654180001055) timed out at 1654180000955 after 1 attempt(s)
As per https://aws.amazon.com/premiumsupport/knowledge-center/msk-connector-connect-errors/ I have opened all traffic so the MSK connector can reach the MSK cluster, yet I still see timeout errors.
The connector and the cluster are both in the same subnets and use the same security group ID. I am able to telnet to the broker from a VM in the same subnet.
Note: I have plaintext enabled and no authentication. I have also attached a role with the proper IAM permissions. This is verified.
Adding the solution in case it helps someone.
https://docs.aws.amazon.com/msk/latest/developerguide/mkc-tutorial-setup.html
I had to create a VPC endpoint as mentioned in the doc above, and also associate it with the route tables of the subnets my Kafka cluster uses.
Additionally, make sure your security groups have the correct inbound and outbound rules.
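For reference, the endpoint that tutorial has you create is a gateway endpoint to S3 (MSK Connect pulls the connector plugin from S3). A rough CLI equivalent, with placeholder VPC and route-table IDs and an assumed us-east-1 region:
# create a gateway VPC endpoint to S3 and attach it to the subnets' route tables
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0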

RDS SQL Server TLS/SSL encryption from application servers

I need to encrypt data in transit from application servers to RDS SQL Server with SSL/TLS.
I see AWS gives the option to set force encryption = true in the parameter group, with self-signed certs.
Is there a way to import customer certs into RDS?
Are there any configuration steps to do this on the application servers and on RDS?
I'd appreciate any info on this; I didn't find anything in the AWS knowledge base.
Note: the application servers sit behind a load balancer.
For RDS SQL Server you will need to use the PEM that AWS provides for TLS.
You have a choice of either:
Root certificate
Intermediary and root certificate
The application server will need to have access to this certificate before it can connect to the RDS instance.
Unfortunately, at this time only Aurora supports uploading your own certificates (and then accessing them via ACM); for RDS SQL Server you will need to use the provided one.
For connecting and configuring RDS, there is a specific Using SSL with a Microsoft SQL Server DB Instance page.
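As a rough sketch of the client-side steps (the bundle URL is AWS's documented RDS trust store; the connection-string keys shown are the common driver names, and your driver's may differ):
# fetch the RDS CA bundle onto each application server
curl -sSO https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
# then require encryption in the connection string and trust the bundle, e.g.
#   ADO.NET: Encrypt=True;TrustServerCertificate=False
#   JDBC:    encrypt=true;trustServerCertificate=false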

How to permit Google Cloud Data Fusion to connect to an AWS RDS MySQL database?

I'm getting an error in configuring a database connection in a Google Cloud Data Fusion Pipeline.
"Encountered SQL error while getting query schema: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."
We can't connect from outside the company building, as the company IPs are whitelisted in the AWS security settings. I can query easily using MySQL Workbench inside the company, so I'm guessing I need to add some IPs to our AWS security groups to give Data Fusion access? I can't find a guideline on this. Where can I find the IPs I need to allow in AWS? (Assuming that might fix it.)
I've added a mysql plugin artefact using 'mysql-connector-java-8.0.17.jar', which is referred to by plugin name 'mysql-connector-java'.
Set up a VPN between your GCP VPC and the AWS VPC where your RDS instance resides:
https://cloud.google.com/solutions/using-gcp-apis-from-an-external-network
https://cloud.google.com/solutions/automated-network-deployment-multicloud
A simpler way: create an HAProxy VM with a public IP:
Data Fusion --> HAProxy VM public IP --> AWS RDS private IP
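A minimal HAProxy configuration for that relay could look like this (the RDS hostname is a placeholder; you would also lock the HAProxy VM's firewall down to Data Fusion's egress IPs and allow only the HAProxy VM in the AWS security group):
# /etc/haproxy/haproxy.cfg - plain TCP passthrough to RDS MySQL
defaults
    mode tcp
    timeout connect 10s
    timeout client  1m
    timeout server  1m
frontend mysql_in
    bind *:3306
    default_backend rds_mysql
backend rds_mysql
    server rds1 mydb.abc123.us-east-1.rds.amazonaws.com:3306 check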

AWS Data Pipeline cannot connect to RDS MySQL (connection timed out)

I am stuck making an AWS Data Pipeline that takes data from RDS MySQL to S3.
I have tried the template but failed a lot. Then I made this self-configured pipeline, but still no success. Can anyone point out the problem by looking at the architecture?
[Screenshots omitted: the RDS MySQL details (note: the username in the picture differs because I am using a separate user; the picture shows administrator), the Data Pipeline architecture, the Configuration block, the RDS MySQL database node, the EC2 machine, the SQL Data node (which reads from RDS), the Copy Activity, the S3 Data node (which writes to S3), and the error log.]
I read that it could be an error due to VPC (Virtual Private Cloud) permissions, but I am not sure how to add these settings, as the server is a production server and I am afraid to experiment. Can anyone provide a solid solution please?
As previously mentioned, your EC2 instance is not able to contact the database endpoint. Please use this link to configure the security groups correctly: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html
To test this, spin up an EC2 instance in the subnet and telnet to the database endpoint to ensure the connection is fine. You can then resume the activation of your pipeline.
Commands
# install the telnet client on the instance, then test TCP connectivity to the database endpoint
sudo yum install telnet
telnet hostname port
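If the telnet attempt times out, the usual fix is an ingress rule on the database's security group; a sketch with placeholder group IDs (sg-rds on the RDS instance, sg-ec2 on the EC2 machine the pipeline uses):
# allow the EC2 resource's security group into MySQL's port
aws ec2 authorize-security-group-ingress \
    --group-id sg-rds \
    --protocol tcp --port 3306 \
    --source-group sg-ec2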