AWS Data Pipeline Cannot Connect with RDS Mysql (connection time out) - amazon-web-services

I am stuck on building an AWS Data Pipeline that takes data from RDS MySQL to S3.
I have tried the template but failed a lot. Then I made this self-configured pipeline, but still with no success. Can anyone point out the problem by looking at the architecture?
Here are the RDS MySQL details. NOTE that the username in the picture is different because I am using a separate user; the username shown in the picture is the administrator.
This is the Data Pipeline architecture.
Below are the settings of the first block, i.e. Configuration.
Below are the settings of the RDS MySQL database.
Below are the settings of the EC2 machine.
Below are the settings of the SQL Data node, which I guess gets data from RDS.
Below are the settings of the Copy Activity.
Below are the settings of the S3 Data node, which I guess puts data on S3.
Here is the ERROR LOG
I read that it could be an error due to VPC (Virtual Private Cloud) permissions, but I am not sure how to add these settings, as the server is a production server and I am afraid to perform this test. Can anyone provide a solid solution, please?

As previously mentioned, your EC2 instance is not able to contact the database endpoint. Please use this link to configure the security groups correctly: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html
To test this, spin up an EC2 instance in the subnet and telnet to the database endpoint to ensure the connection is fine. You can then resume activation of your pipeline.
Commands
sudo yum install telnet
telnet hostname port
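If the telnet test times out, the usual fix is a security group rule that lets the pipeline's Ec2Resource reach the RDS instance on the MySQL port. A rough sketch with the AWS CLI is below; both security group IDs are placeholders, so substitute the group attached to your RDS instance and the group used by the pipeline's EC2 resource:
# Allow MySQL (3306) into the RDS security group from the EC2 resource's security group.
# sg-0rds0000000000000 and sg-0ec20000000000000 are hypothetical IDs.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0rds0000000000000 \
    --protocol tcp --port 3306 \
    --source-group sg-0ec20000000000000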

Related

AWS ECS Task can't connect to RDS Database

I'm a new AWS user, and today I got stuck while working on a sample project. I successfully created a Docker container that runs a simple R script that connects to my AWS RDS MySQL database and creates and writes some basic files to it. I built a public ECR repository, pushed my Docker image there, and built an ECS cluster and task, choosing Fargate and using the container image from my repository. My task ran and I could see the R code being executed when I went through the logs, but it was never able to connect to the SQL database and exited afterwards.
I've had to whitelist my own IP address in the security group for the RDS database so that I can connect to it, so I'm aware I probably have to do that for my ECS task to establish that connection too. But won't that IP address constantly change, because I won't have a static IP for the Fargate server that is executing my task? I'm trying to stay on the free tier, so I'm not sure I want to set up an Elastic IP address for this server.
These two articles seem close to, if not the same as, the issue I'm having, but I can't figure out a solution. I haven't found any other info.
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-task-database-connection/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-static-elastic-ip-address/
The end goal is to get this sample project running successfully on a fixed schedule, and then to run actual scripts there to help automate things and make my life easier, so this sample project is a first step towards that. Any help or info on the questions I'm having would be appreciated!
Yes, your task is ephemeral (whether you launch it manually or as part of an ECS service) and its private/public IP address may change over time if it gets replaced. The way to make the connectivity rules stick is to assign a security group to the task (one that, I assume, may need inbound access on a specific port and outbound access to everything) and assign another security group to the RDS DB that allows inbound access on port 3306 from the security group you assigned to the task. This is the trick: the SG will not change, and you are telling RDS to allow ALL traffic coming from that SG. I see the first article you posted doesn't talk about this part (it should).
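As a rough sketch of that setup (the cluster, task, subnet, and security group names below are placeholders, not taken from the question), the task picks up its fixed security group through the awsvpcConfiguration, and the RDS security group then references that group instead of an IP address:
# Run the Fargate task with a fixed security group attached.
aws ecs run-task \
    --cluster sample-cluster \
    --launch-type FARGATE \
    --task-definition sample-task \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0task000000000000],assignPublicIp=ENABLED}"
# Let the RDS security group accept MySQL traffic from that task security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0rds0000000000000 \
    --protocol tcp --port 3306 \
    --source-group sg-0task000000000000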

Connect to RDS (Elastic Beanstalk) with TablePlus?

I have the TablePlus app. I created an Elastic Beanstalk environment, deployed my project, and connected to the database, and everything was good and cool!
I need to connect to the database (MySQL) to import some data into the AWS database, so I did these steps:
Open a new workspace in TablePlus.
Enter the endpoint, username, password, and name of the database, like so:
Press the Test button; after waiting for some time I got this error:
I also changed the port to 5432 and got the same first error.
I changed the port to 3306 and got this error:
Where is the problem?
OK, the way I did it was by following this video:
https://www.youtube.com/watch?v=saX75fTwh0M&ab_channel=AdobeinaMinute
In short, you need to get the details of your instance and set up an SSH connection to it using its hostname (i.e. that of the EC2 instance, not the DB), your EC2 username (usually ec2-user), and your .pem file. Then you get the connection details for the DB and enter them. See the screenshot.
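If you prefer to set the tunnel up yourself instead of through TablePlus's SSH fields, the equivalent command looks roughly like this (the key path, RDS endpoint, and EC2 hostname are placeholders):
# Forward local port 3307 through the EC2 instance to the RDS endpoint.
ssh -i ~/.ssh/my-key.pem -N \
    -L 3307:mydb.abc123xyz.us-east-1.rds.amazonaws.com:3306 \
    ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com
# Then point TablePlus (or the mysql client) at 127.0.0.1:3307.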
I think your problem is that the configuration you created is set to a Redshift connection, which expects network communication different from a MySQL connection.
Can you try to create a MySQL connection instead?
I had the same issue, but my problem was with the security rules in AWS. Perhaps this may help.
Navigate to the console.
Edit the inbound rules of your RDS instance.
Add a new security rule
where: type: 'All traffic', source: Anywhere.
This video gives a good explanation: aws rds setup

AWS RDS Certificate Authority update

I recently received an email regarding a required update to my RDS Certificate Authority.
The instructions on the RDS side seem straightforward: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL-certificate-rotation.html
However on step 4 there was an important message, "When you schedule this operation, make sure that you have updated your client-side trust store beforehand."
I can't seem to find any information about updating my server, which connects to RDS, for the CA update.
My setup is EC2 instances on Elastic Beanstalk.
Does anyone know how/what I am supposed to do?
Thank you.
similar question: Update Amazon RDS SSL/TLS Certificates - Elastic Beanstalk
Basically, installing the certificate is only required when you use an SSL connection from your application to the RDS server. Regardless of the SSL connection, it is recommended to update the certificate of your server, but it is not necessary if you do not use an SSL connection to RDS.
Server-side Usage
When you use an SSL connection, you should change the certificate of the RDS server as soon as possible. Go to the RDS console, where you can find the Certificate update menu in the left-hand menu list. Find your DB cluster, then check and update your SSL certificate right away or reserve the update for the next maintenance window.
Client-side Usage
The details about the SSL certificate are noted in the documentation. From there, you can download the 2019 RDS root CA certificate. The link is below.
https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem
This CA certificate is used to connect to the RDS server, e.g.
mysql -h myinstance.c9akciq32.rds-us-east-1.amazonaws.com \
    --ssl-ca=[full path]rds-ca-2019-root.pem --ssl-mode=VERIFY_IDENTITY
Alternatively, add it to the trusted root CAs for the client OS.
For example, on Windows you can run certmgr.msc, right-click Trusted Root Certification Authorities, and import this certificate. On Mac, open Keychain Access and import this certificate. This is one option.
To change your CA certificate on an Elastic Beanstalk environment in Amazon (AWS), do the following:
Log in to your console (https://console.aws.amazon.com/)
Click Services and search for "RDS"
Inside RDS (RDS is where the databases from Beanstalk live, even though they are directly attached to the Beanstalk environment), click "Certificate update" down in the right corner (there will be a very red notification on the link)
If you have any certificates to upgrade, they will show up here.
Click the RDS instance name (the weird AWS name of the database server), aka "DB identifier"
(Inside this you can see some more info about it under Configuration, for instance your DB username, which could help you identify the instance if you have many and forgot to rename them.)
Click Actions > Upgrade now (this will reboot your instance now) OR Actions > Upgrade at next window (choose this if you have a lot of traffic and many users, so it will be less disruptive, i.e. it will not happen in the middle of the day but at night, according to the maintenance schedule of your location/server)
That's it. You do not need to install anything in your Beanstalk environment.
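If you prefer the CLI over the console, the same rotation can be done with modify-db-instance; the DB instance identifier below is a placeholder:
# Rotate to the 2019 CA immediately (use --no-apply-immediately to wait for the next maintenance window).
aws rds modify-db-instance \
    --db-instance-identifier my-beanstalk-db \
    --ca-certificate-identifier rds-ca-2019 \
    --apply-immediately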
This is how we are managing SSL communication from Elastic Beanstalk to an external RDS PostgreSQL database. We add the following config file to .ebextensions (.ebextensions/rds.config):
commands:
  01-create-folder:
    command: mkdir -p /home/webapp/.postgresql
  02-download-cert:
    command: aws s3 cp s3://rds-downloads/rds-ca-2019-root.pem /home/webapp/.postgresql/root.crt
  03-change-owner:
    command: chown webapp:webapp /home/webapp/.postgresql/root.crt
  04-change-mode:
    command: chmod 400 /home/webapp/.postgresql/root.crt
This config downloads the certificate from the public S3 bucket and places it in the .postgresql folder as the root certificate. We have a Java application, and the JDBC driver successfully connects to RDS with SSL enabled.
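As a quick sanity check (the endpoint, database, and user below are placeholders), you can verify the downloaded certificate from the instance with psql before relying on the JDBC driver, since verify-full forces validation against that root certificate:
# Fails unless the server certificate chains to root.crt and the hostname matches.
psql "host=mydb.abc123xyz.us-east-1.rds.amazonaws.com port=5432 dbname=mydb user=webapp sslmode=verify-full sslrootcert=/home/webapp/.postgresql/root.crt"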

How to set up a front end for AWS DBs without using the Internet

On AWS, I know how to set up a web server with inbound rules allowing HTTP and HTTPS, and a database security group that only connects to the web server. The issue is that I need to create a front end to manage the databases without using Internet access - this will be internal only and precludes the use of a public IP / public DNS. Does anyone know how I would do this?
To further elaborate, some of our AWS accounts are for internal use only - we can log in to the console, use Cygwin to SSH in, see what's there, etc. But these accounts are for development purposes, and in a large enterprise such as this one, they are not allowed an IGW. So - no inbound Internet access is allowed. How do I create an app (e.g., something like phpMyAdmin) in which our manager can easily view and edit the data in the database, given the restriction that this must be done without inbound Internet access?
Host your database on RDS inside a VPC and create a VPN connection between your client network and your VPC.
Host your database on one EC2 instance and upload your front end there as well. The database will be running locally on the EC2 instance, so you can connect the front end to the database directly. Since the database will not have a public DNS name and runs only locally, you can access it only via SSH and the front-end script.
You can check this official documentation from AWS: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html
For the front-end script, you can use https://www.adminer.org/, which is a single-file database management system. It is one simple PHP file; put it on the instance and use it to connect to the locally running database on the EC2 instance.
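If you go the Adminer-on-EC2 route, one way to reach it without any public IP is an SSH local port forward over your existing VPN or internal network path; the key path and private IP below are placeholders:
# Tunnel the instance's web server to your workstation over the internal network,
# then browse http://localhost:8080/adminer.php locally.
ssh -i ~/.ssh/dev-key.pem -N -L 8080:localhost:80 ec2-user@10.0.1.25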

Jasperreports server works fine with actual EC2, but not from an instance taken from the same AMI

I have an instance, EC2-1, which has JasperReports Server installed on it, and I can easily access it through http://IP_ADDRESS1:8081/jasperserver.
Now I have taken an image of EC2-1. Once the AMI was available, I launched a new instance, EC2-2. As usual, I logged in to it using SSH and was able to run the script ./ctrlscript.sh start to access the application. But when I tried to log in to http://IP_ADDRESS2:8081/jasperserver and run the report, I got the error below in jasperserver.log and was unable to get the report:
300 ERROR WebServiceConnector,pool-4-thread-1:139 - Communication error java.net.ConnectException: Connection timed out
320 ERROR AsyncJasperPrintAccessor,pool-4-thread-1:321 - Error during report execution
Can anyone give me some clarification on my understanding of **EC2 vs AMI**? As per my understanding, EC2-1 and EC2-2 should be the same. But in this case, why am I not able to run the reports on EC2-2 when I am still able to run them on EC2-1?
Also, please guide me if I am missing something here. Thank you all.
You are correct that a new EC2 instance that was launched from an AMI taken of the original EC2 instance should include any configuration changes you made on your source instance.
From your description, it sounds like everything about the new EC2 instance is good: you can SSH into it, you can start JasperReports Server, and you can log into the web interface. The problems only begin when you try to run a report -- an important detail, because running a report depends on an external data source.
Test Connection To Data Sources
In the JasperReports Server web UI, find your data source and go to its edit page to test the connection. The Test Connection button should be at the bottom of the edit page for most data sources. For example, in the JDBC UI:
Try to test Jasper's connection to the data source from this page on your new instance.
Verify Networking Rules
This reads to me like a networking error, specifically between the new EC2 JasperReports instance and the report's data source. There is likely a networking rule external to this EC2 instance that existed for your original instance but wasn't updated for the new instance. For example, if you had a security group that allowed inbound traffic from the original instance's CIDR to the data source, and it wasn't updated for the new instance's CIDR, you would see this sort of timeout when JasperReports Server attempts to connect to the data source.
If testing the connection to the data source above failed, check external networking rules on resources such as security groups or VPC network ACLs, and verify that all rules for your original EC2 instance have been updated to also be valid for your new EC2 instance.
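A quick way to confirm this from the new instance (the data source endpoint and security group ID below are placeholders) is to test raw reachability to the data source and then inspect the inbound rules protecting it:
# From EC2-2: check whether the data source answers at all on its port.
telnet mydatasource.abc123xyz.us-east-1.rds.amazonaws.com 3306
# Inspect the inbound rules on the data source's security group to see which sources are allowed.
aws ec2 describe-security-groups --group-ids sg-0datasource0000000 \
    --query "SecurityGroups[].IpPermissions"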