AWS RDS Aurora - How to connect using PgAdmin? - amazon-web-services

Yesterday AWS launched Aurora Serverless for PostgreSQL, but it doesn't seem to have the same configuration options as other RDS databases. For example, I can't make it publicly accessible; it forces me to use a VPC.
Now I have no clue how to make PgAdmin work with this VPC setup. I've tried setting the security group's inbound rules to allow all ports and IPs, but it still won't connect (no server response).
How can I connect to a RDS Database inside a VPC using PgAdmin?
Opening the security group didn't work.

I realize this question is old, but I kept coming back to it as I worked this out.
This solution is similar to @genkilabs's solution, but simpler.
Steps:
Spin up an ec2 micro instance in the same vpc as the database. You will tunnel through this.
Add the security group for your ec2 to the inbound rules of the database's security group.
SSH into the EC2 instance and install psql (and the PostgreSQL client packages) with:
sudo amazon-linux-extras install postgresql10
Verify that you can connect to your database with psql:
psql -h {server} -p 5432 -U {database username} -d {database name}
In PGAdmin create a new server connection
Enter the database host, username, and password as usual.
Go to the SSH Tunnel tab
Turn on SSH tunneling
Enter your EC2 hostname for the tunnel host
Enter your SSH username
Select the identity file and choose the .pem or .cer file for your EC2 instance.
Save and done. You should now be able to connect to the serverless Aurora database from your local PGAdmin.
If you have trouble connecting to the database from the EC2 instance, this guide may be helpful. The same steps apply when connecting from EC2 as from Cloud9.
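For reference, the tunnel pgAdmin sets up is equivalent to a manual SSH port forward. A minimal sketch, with placeholder key file and host names (adjust to your own values):
# placeholder names: forward local port 5433 to the Aurora endpoint through the EC2 instance
ssh -i ~/.ssh/my-ec2-key.pem -N -L 5433:my-cluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com:5432 ec2-user@my-ec2-public-dns
# in a second terminal, connect through the forwarded port
psql -h localhost -p 5433 -U {database username} -d {database name}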

EDIT Sept '22: With Serverless v2 you can now select "Public access" during the initial create and connect directly (provided your VPC and security groups allow it). However, for production / "enterprise" use it is still recommended to connect only through a "bastion" or "jump box".
Officially, you can't...
Per the docs:
You can't give an Aurora Serverless DB cluster a public IP address. You can access an Aurora Serverless DB cluster only from within a virtual private cloud (VPC) based on the Amazon VPC service.
However, connecting to a serverless DB from a non-Amazon product is only officially discouraged; it is not impossible.
The best solution I have found so far is to create an autoscaling cluster of bastion boxes within the same VPC and tunnel through them. The great part about this strategy is that it exposes a standard Postgres-format URL, so it can be used with pgAdmin, Navicat, ActiveRecord, or any other ORM that uses typical connection URLs.
...The bad part is that (so far) it seems to enforce a 30-second timeout on connections, so you'd better wrap up your transactions quickly.
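For illustration only (host, port, and credentials below are placeholders), the tunneled endpoint looks like any ordinary Postgres connection URL, which is why the usual tools and ORMs can use it:
# placeholder values; host and port are wherever your bastion/tunnel listens
postgres://app_user:app_password@localhost:5433/app_db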
If anyone can do better, I'd love to hear how as well.

Related

How to connect to AWS Redis cluster locally?

I have a Redis instance on AWS that I want to connect to using Redis Desktop Manager from my local machine.
I am able to SSH into my EC2 instance and then run redis-cli -h host and connect to it.
But the same is not possible from my local machine.
I am sure there must be a way to monitor my Redis using the GUI. I think that since I can connect to the EC2 instance using the pem file, and can connect to Redis from inside there, there must be a way to combine both and connect to the Redis instance locally via my EC2 instance. Any ideas?
By design, AWS ElastiCache is deployed for use only within AWS. From the docs:
Elasticache is a service designed to be used internally to your VPC. External access is discouraged due to the latency of Internet traffic and security concerns. However, if external access to Elasticache is required for test or development purposes, it can be done through a VPN.
Thus, it can't be accessed directly from outside of your VPC. For this, you need to set up a VPN between your local home/work network and your VPC, or, what is often easier for testing and development, establish an SSH tunnel.
For the SSH tunnel you will need a public proxy/bastion EC2 instance through which the tunnel will be established. There are a number of tutorials on how to do this for different AWS services. The general procedure is the same, whether it is ES, ElastiCache, Aurora Serverless, or RDS Proxy. Some examples:
SSH Tunnels (How to Access AWS RDS Locally Without Exposing it to Internet)
How can I use an SSH tunnel to access Kibana from outside of a VPC with Amazon Cognito authentication?
As @Marcin mentioned, AWS recommends only using ElastiCache within your VPC for latency reasons, but you've got to develop on it somehow... (Please be sure to read @Marcin's answer.)
AWS is a huge mystery, and it's hard to find beginner-to-intermediate resources, so I'll expand upon @Marcin's answer a little for those who might stumble across this.
It's pretty simple to set up what's often referred to as a "jump box" to connect to all sorts of AWS resources. This is just any EC2 instance within the same VPC (network) as the resource you're trying to connect to, in this case the ElastiCache Redis cluster. (If you're running into trouble, just spin up a new instance; a t4g.nano or something similarly small works just fine.)
You'll want to make sure you're in the directory with your key, and then you should be able to run the following command to forward whatever local port you'd like to the remote Redis cluster:
ssh -i ${your_ssh_key_name.pem} ${accessible_ec2_host} -L ${port_to_use_locally}:${inaccessable_redis_or_other_host}:${inaccessable_redis_port}
Then you can use localhost and ${port_to_use_locally} to connect to Redis.
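As a concrete sketch with placeholder values filled in (key file, EC2 host, and cluster endpoint are all assumptions), the tunnel plus a quick local test might look like this:
# placeholder names: forward local port 6379 to the ElastiCache endpoint via the EC2 jump box
ssh -i my-key.pem ec2-user@my-ec2-public-dns -L 6379:my-redis.xxxxxx.0001.use1.cache.amazonaws.com:6379 -N
# then, from another terminal on your local machine
redis-cli -h 127.0.0.1 -p 6379 ping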

How to develop an AWS Web App that uses AWS RDS locally?

Before moving to Amazon Web Services, I was using Google Cloud Platform to develop my application (Cloud SQL, to be specific), and GCP has something called Cloud SQL Proxy that allows me to connect to my Cloud SQL instance from my computer, instead of having to deploy my code to the server and then test it. How can I do the same thing using AWS?
I have a Python environment on Elastic Beanstalk that uses Amazon RDS.
AWS is deny-by-default, so you cannot access an RDS instance outside of the VPC that your application is running in. With that being said, you can connect to the RDS instance via a VPN that can be stood up in EC2 with rules open to the RDS instance. This allows you to connect to the VPN from whatever developer machine and then access the RDS instance as if your dev box were in the VPC. This is my preferred method because it is more secure: only those with access to the VPN have access to the RDS instance. This has worked well for me in a production sense.
The VPN provider that I use is https://aws.amazon.com/marketplace/pp/OpenVPN-Inc-OpenVPN-Access-Server/B00MI40CAE
Alternatively you could open up a hole in your VPC to the RDS instance and make it publicly available. I don't recommend this however because it will leave your RDS instance open to attack as it is publicly exposed.
You can expose your AWS RDS instance to the internet with the proper VPC settings; I have done it before.
But it has some risks.
So usually you would use one of these approaches instead:
Create a local database server and restore a snapshot or dump from your AWS RDS
or use a VPN to connect to the private subnet which holds your RDS instance
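A sketch of the first approach using a plain mysqldump rather than an RDS snapshot (this assumes a MySQL engine, placeholder names, and a host inside the VPC to run the dump from):
# run on a host that can reach the RDS instance, e.g. the Beanstalk EC2 box (placeholder endpoint)
mysqldump -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -u dbuser -p mydb > mydb.sql
# then copy mydb.sql down and restore it into your local MySQL server
mysql -u root -p mydb_local < mydb.sql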
A couple of people have suggested putting your RDS instance in a public subnet and allowing access from the internet.
This is generally considered to be a bad idea, and should be the last resort.
So you have a couple of options for getting access to RDS in a private subnet.
The first option is to set up networking between your local network and your AWS VPC. You can do this with Direct Connect or with a point-to-point VPN. But based on your question, this isn't something you feel comfortable with.
The second option is to set up a bastion server in the public subnet, and use ssh port forwarding to get local access to the RDS over the SSH tunnel.
You don't say whether you are on Linux or Windows, but this can be accomplished on either OS.
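On Linux or macOS, for example, the port forward is a one-liner; on Windows, PuTTY's tunnel settings achieve the same thing. A sketch assuming a MySQL engine, with placeholder names throughout:
# placeholder names: forward local port 3307 to the private RDS endpoint via the bastion
ssh -i bastion-key.pem -N -L 3307:mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com:3306 ec2-user@bastion-public-dns
# then point your client or app at 127.0.0.1:3307
mysql -h 127.0.0.1 -P 3307 -u dbuser -p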
What I did to solve this was:
Go to the Elastic Beanstalk console
Choose your application
Go to Configuration
Click on the endpoint of your database under Databases
Click on the identifier of your DB instance
Under security group rules, click on the security group
Click on the Inbound tab
Click Edit
Change type to All Traffic and source to Anywhere
Save
This way you expose the RDS instance connected to your Elastic Beanstalk application to the internet, which is not recommended, as others suggested, but it is what I was looking for.
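Once that rule is in place, a local client can connect straight to the RDS endpoint shown in the console. A sketch assuming a PostgreSQL engine and placeholder values (use the mysql client instead if your instance runs MySQL):
# placeholder endpoint and credentials
psql -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -p 5432 -U dbuser -d mydb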

problems connecting to AWS DocumentDB

I created a cluster and an instance of DocumentDB in Amazon. When I try to connect from my local shell (macOS), it displays an error message.
When I try with MongoDB Compass Community:
mongodb://Mobify:<My-Password>@docdb-2019-04-07-23-28-45.cluster-cmffegva7sne.us-east-2.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0
It loads for many minutes and in the end it fails.
After solving this problem, I would like to know if it is possible to connect a DocumentDB cluster to an instance in another Availability Zone or region... I have my DocumentDB in Ohio and I have an EC2 instance in São Paulo... is that possible?
Amazon DocumentDB clusters are deployed in a VPC to provide strong network isolation from the Internet. To connect to your cluster from outside of the VPC, please see the following: https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
AWS DocumentDB is hosted in a VPC (virtual private cloud), which has its own specific subnets and security groups; basically, anything that resides in a VPC is not publicly accessible.
DocumentDB is deployed in a VPC. In order to access it, you need to create an EC2 instance or use AWS Cloud9.
Let's access it from an EC2 instance using SSH tunneling.
Create an EC2 instance (preferably Ubuntu) of any configuration and select the same VPC in which your DocumentDB cluster is hosted.
After the EC2 instance is fully initialized, start an SSH tunnel and bind local port 27017 to the DocumentDB cluster host on port 27017.
ssh -i "<ec2-private-key>" -L 27017:docdb-2019-04-07-23-28-45.cluster-cmffegva7sne.us-east-2.docdb.amazonaws.com:27017 ubuntu#<ec2-host> -N
Now your localhost is tunneled to the EC2 instance on port 27017. Connect with mongosh or mongo, enter your cluster password, and you will be logged in and able to execute any queries.
mongosh --sslAllowInvalidHostnames --ssl --sslCAFile rds-combined-ca-bundle.pem --username Mobify --password
Note: the --ssl options are deprecated; use the TLS equivalents by replacing "ssl" with "tls" in the command above.
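For reference, the TLS form of the same command (same username and CA bundle as above) looks like this:
mongosh --tlsAllowInvalidHostnames --tls --tlsCAFile rds-combined-ca-bundle.pem --username Mobify --password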

Access an RDS instance created in Elastic Beanstalk

When you set up a new Elastic Beanstalk cluster you can access your EC2 instance by doing this:
eb ssh
However, it's not clear how to access the RDS instance.
How do you access an RDS in an Elastic Beanstalk context in order to perform CRUD operations?
The RDS command-line can be accessed from anywhere, by adjusting the RDS security group.
Check your AWS VPC configuration.
The security group will need to be adjusted to allow you to connect from a new source/port.
Find the security group ID for the RDS instance.
Find that group in AWS Console > VPC > Security Groups.
Adjust the Inbound and Outbound Rules accordingly.
You need to allow access to/from the IP or security group that needs to connect to the RDS.
FROM: https://stackoverflow.com/a/37200075/1589379
After that, all that remains is configuring whatever local DB tool you would like to use to operate on the database.
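For example, once your IP is allowed, a local client can talk to the RDS endpoint directly. A minimal sketch, assuming a MySQL engine and placeholder values (the endpoint is shown on the RDS console):
# placeholder endpoint and user
mysql -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u dbuser -p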
EDIT:
Of additional note, if the Elastic Beanstalk environment is configured to use RDS, the EC2 instances will have environment variables set with the information needed to connect to the RDS instance.
This means that you can import those variables into any code that needs access.
Custom environment variables may also be set in Elastic Beanstalk Environment Configuration, and these too may be included this way.
PHP
define('RDS_HOSTNAME', getenv('RDS_HOSTNAME'));
// connect with your database layer of choice, e.g. mysqli, using the Beanstalk-provided variables
$db = new mysqli(RDS_HOSTNAME, getenv('RDS_USERNAME'), getenv('RDS_PASSWORD'), getenv('RDS_DB_NAME'), (int) getenv('RDS_PORT'));
Linux CommandLine
mysql --host=$RDS_HOSTNAME --port=$RDS_PORT -u $RDS_USERNAME -p$RDS_PASSWORD
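On the instance itself, a quick way to confirm those variables are present is a plain shell check (nothing Beanstalk-specific about it):
env | grep '^RDS_'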
RDS is a managed database service, which means you can only access it through database calls.
If it is a MySQL database, you can access it from your EC2 instance with the mysql client like this:
mysql -u user -p -h rds.instance.endpoint.region.rds.amazonaws.com
or configure your app with the connection settings it needs.
Make sure that you set up security groups correctly so that your EC2/other service has access to your RDS instance.
Update:
If you want what you are asking for, then you should use an EC2 instance with a MySQL server on it. It would cost about the same (even though a fraction of performance is lost in comparison), and you can turn an EC2 instance off when you are not using it.

Deploy Django using MySQL to AWS EC2 and RDS

I'm trying out AWS EC2 and RDS. I followed this tutorial and it worked, but the tutorial is missing the database migration step. https://www.youtube.com/watch?v=YJoOnKiSYws So, could someone point me in the right direction to continue with the database migration?
I've tried https://support.cloud.engineyard.com/entries/21009887-Access-Your-Database-Remotely-Through-an-SSH-Tunnel but it didn't work for me because of an RDS permission issue.
I've also tried the instructions on Amazon, http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html but it just confused me even more.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.NonRDSRepl.html
I've posted a message on the AWS forum, but no one answered, and I was hoping that someone here can help me out. I just need a plain, simple example such as:
step 1. From the local PC, export MySQL with mysqldump (done)
step 2. Upload the local mysqldump.sql file to EC2 (I don't know how to do this)
step 3. Import mysqldump.sql on EC2 into RDS (I don't know how to do this)
step 4. Connect the Django web app to the new MySQL database that has the data dump (I don't know how)
I really appreciate your help.
BTW, the data is in MySQL on my local computer running OS X Mavericks (I hope that info is helpful).
If you are using RDS you don't have SSH access so creating an SSH tunnel is not an option.
What's missing from your question is whether you are in a VPC or not, so I'll assume you are not in a VPC. In essence:
On the RDS security group, add an ingress rule for the EC2 security group that the EC2 instance you are going to use to access the database is in. On the RDS console this shows up as the actual EC2 security group name.
(To get to security groups, click RDS -> Security Groups -> Create DB Security Group; you first need to create a DB Security Group, or you can use the default DB Security Group.)
Dump your database on the server that runs your original MySQL database. Hopefully this is the same server that is in the EC2 security group that you allowed as an ingress on your RDS security group above: mysqldump -uroot -p <database-name> > database-name.sql
Load the database from the EC2 instance in the EC2 security group allowed by the RDS security group:
mysql -uroot -h<RDS-hostname> -p < database-name.sql
The RDS hostname is something like this: database-name.xxxxxxxxxx.us-east-1.rds.amazonaws.com
You don't have to connect to the RDS database to load a dump into it. But if you want to connect to it you can just run: mysql -uroot -p -h<RDS-hostname>
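If the dump was made on your local machine instead (step 1 in the question), you can copy it up to the EC2 instance with scp before loading it (key file and host below are placeholders):
# placeholder key and host
scp -i my-ec2-key.pem mysqldump.sql ec2-user@ec2-public-dns:~/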
In case you are in a VPC, make sure that the EC2 instance you are running the commands from is in the same VPC and subnet as your RDS instance. You have to create a VPC subnet group for your VPC on the RDS console; this subnet group has to span two different Availability Zones if you are running a multi-AZ RDS instance. Other than that, the procedure is the same.