Before moving to Amazon Web Services, I was using Google Cloud Platform to develop my application (Cloud SQL, to be specific), and GCP has something called the Cloud SQL Proxy that let me connect to my Cloud SQL instance from my own computer instead of having to deploy my code to the server and then test it. How can I do the same thing with AWS?
I have a Python environment on Elastic Beanstalk that uses Amazon RDS.
AWS is deny by default, so you cannot access an RDS instance from outside the VPC that your application is running in. With that being said, you can connect to the RDS instance via a VPN that can be stood up on EC2 with rules open to the RDS instance. This lets you connect to the VPN from whatever developer machine you're on and then access the RDS instance as if your dev box were in the VPC. This is my preferred method because it is more secure: only those with access to the VPN have access to the RDS instance. This has worked well for me in a production setting.
The VPN provider that I use is https://aws.amazon.com/marketplace/pp/OpenVPN-Inc-OpenVPN-Access-Server/B00MI40CAE
Alternatively, you could open up a hole in your VPC to the RDS instance and make it publicly available. I don't recommend this, however, because it will leave your RDS instance open to attack since it is publicly exposed.
You can expose your AWS RDS instance to the internet with the proper VPC settings; I have done it before.
But it has some risks,
so you can usually use one of these approaches instead:
Create a local database server and restore your data from your AWS RDS instance (see the sketch below), or
use a VPN to connect to the private subnet that holds your RDS instance.
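An RDS snapshot itself can only be restored to another RDS instance, so for a local copy a logical dump is the usual substitute. A rough sketch, assuming a MySQL engine and that the endpoint is reachable from your machine (for example over the VPN mentioned above); the hostnames, user names and database name are placeholders:
mysqldump -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p mydatabase > dump.sql
mysql -h 127.0.0.1 -u root -p mydatabase < dump.sql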
A couple of people have suggested putting your RDS instance in a public subnet and allowing access from the internet.
This is generally considered to be a bad idea, and should be the last resort.
So you have a couple of options for getting access to RDS in a private subnet.
The first option is to set up networking between your local network and your AWS VPC. You can do this with Direct Connect, or with a point-to-point VPN. But based on your question, this isn't something you feel comfortable with.
The second option is to set up a bastion server in the public subnet, and use SSH port forwarding to get local access to RDS over the SSH tunnel.
You don't say whether you're on Linux or Windows, but this can be accomplished on either OS.
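As a rough sketch (the key name, bastion address and RDS endpoint are placeholders, and the SSH user depends on the AMI, e.g. ec2-user on Amazon Linux), the tunnel looks something like this:
ssh -i your-key.pem -N -L 3306:mydb.abc123.us-east-1.rds.amazonaws.com:3306 ec2-user@bastion-public-ip
Your local client then connects to 127.0.0.1:3306 as if the database were running on your own machine.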
What I did to solve it was:
Go to the Elastic Beanstalk console
Choose your application
Go to Configuration
Click on the endpoint of your database under Databases
Click on the identifier of your DB instance
Under Security group rules, click on the security group
Click on the Inbound tab
Click Edit
Change the type to All Traffic and the source to Anywhere
Save
This way you can expose the RDS instance connected to your Elastic Beanstalk application to the internet. As others have suggested, this is not recommended, but it is what I was looking for.
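If you do go this route, a slightly safer variant is to restrict the source to your own IP rather than Anywhere. The console steps above correspond roughly to this AWS CLI call (the security group ID, port and IP are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3306 --cidr 203.0.113.25/32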
I have a Redis instance on AWS that I want to connect to using Redis Desktop Manager from my local machine.
I am able to SSH into my EC2 instance and then run redis-cli -h host and connect to it.
But the same is not possible from my local machine.
I am sure there must be a way to monitor my Redis instance using the GUI, and I figure that if I can connect to the EC2 instance using a .pem file, and I can connect to Redis from inside there, there must be a way to combine both and connect to the Redis instance locally via my EC2 instance? Any ideas?
By design, AWS ElastiCache is deployed for use only within AWS. From the docs:
Elasticache is a service designed to be used internally to your VPC. External access is discouraged due to the latency of Internet traffic and security concerns. However, if external access to Elasticache is required for test or development purposes, it can be done through a VPN.
Thus, it can't be accessed directly from outside of your VPC. For this, you need to set up a VPN between your local home/work network and your VPC, or, what is often easier for testing and development, establish an SSH tunnel.
For the SSH tunnel you will need a public proxy/bastion EC2 instance through which the tunnel will be established. There are a number of tutorials on how to do this for different AWS services. The general procedure is the same, whether it is Elasticsearch, ElastiCache, Aurora Serverless or RDS Proxy. Some examples:
SSH Tunnels (How to Access AWS RDS Locally Without Exposing it to Internet)
How can I use an SSH tunnel to access Kibana from outside of a VPC with Amazon Cognito authentication?
As @Marcin mentioned, AWS recommends only using ElastiCache within your VPC for latency and security reasons, but you've got to develop on it somehow... (Please be sure to read @Marcin's answer.)
AWS is a huge mystery, and it's hard to find beginner-to-intermediate resources, so I'll expand on @Marcin's answer a little for those who might stumble across this.
It's pretty simple to set up what's often referred to as a "jump box" to connect to all sorts of AWS resources - this is just any EC2 instance that's within the same VPC (network) as the resource you're trying to connect to - in this case the ElastiCache Redis cluster. (If you're running into trouble, just spin up a new instance - a t4g.nano or something super small works just fine.)
You'll want to make sure you're in the directory with your key, but then you should be able to run the following command to forward whatever local port you'd like to the remote Redis cluster:
ssh -i ${your_ssh_key_name.pem} ${accessible_ec2_host} -L ${port_to_use_locally}:${inaccessible_redis_or_other_host}:${inaccessible_redis_port}
Then you can use localhost and ${port_to_use_locally} to connect to Redis.
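For example, assuming you forwarded local port 6379, you can point Redis Desktop Manager at 127.0.0.1:6379, or test the tunnel from a terminal:
redis-cli -h 127.0.0.1 -p 6379 ping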
Is there an alternative to AWS's security groups in the Google Cloud Platform?
Following is the situation which I have:
A basic Node.js server running in Cloud Run as a Docker image.
A PostgreSQL database at GCP.
A Redis instance at GCP.
What I want to do is make a "security group" of sorts, so that my PostgreSQL DB and Redis instance can only be accessed from my Node.js server and nowhere else. I don't want them to be publicly accessible via an IP.
What we do in AWS is that only services that are part of a security group can access each other.
I'm not very sure, but I guess in GCP I need to make use of firewall rules (not sure at all).
If I'm correct, could someone please guide me as to how to go about this? And if I'm wrong, could someone suggest the correct method?
GCP has firewall rules for its VPC that work similarly to AWS security groups. More details can be found here. You can place your PostgreSQL database, Redis instance and Node.js server inside the GCP VPC.
Make the Node.js server available to the public via DNS.
Set the default-allow-internal rule so that only the services present in the VPC can access each other (blocking public access to the DB and Redis).
As an alternative approach, you could also keep all three servers public and only allow the Node.js server's IP address to access the DB and Redis servers, but the solution above is recommended.
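As a rough sketch with the gcloud CLI (the rule name, network, ports and source range are placeholders), an internal-only rule could look like this:
gcloud compute firewall-rules create allow-internal-db --network=my-vpc --allow=tcp:5432,tcp:6379 --source-ranges=10.128.0.0/9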
Security groups in AWS are instance-attached, firewall-like components. So, for example, you can have an SG at the instance level, similar to configuring iptables on regular Linux.
On the other hand, Google firewall rules work at the network level. In terms of "granularity", I'd say security groups can only be replaced at the instance level, so your alternatives there are to use one of the following on the instance itself:
firewalld
nftables
iptables
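For example, a minimal iptables sketch on the database VM (the app server's IP and the port are placeholders) might be:
sudo iptables -A INPUT -p tcp --dport 5432 -s 10.128.0.5 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5432 -j DROP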
The other thing is that in AWS you also have network ACLs, which apply at the subnet level rather than to individual instances. Those subnet-level rules are closer to Google firewall rules, but security groups still give you a bit more granularity, since you can have different security groups per instance, while in GCP firewall rules are defined per network. At that level, protection should come from the firewall rules on the network.
Thanks @amsh for the solution to the problem. There were a few more things that needed to be done, though, so I'll list them here in case anyone needs them in the future (a rough gcloud sketch of the key steps follows the list):
Create a VPC network and add a subnet for a particular region (e.g. us-central1).
Create a VPC connector from the Serverless VPC Access section for the created VPC network, in the same region.
In Cloud Run, add the created VPC connector in the Connections section.
Create the PostgreSQL and Redis instance in the same region as that of the created VPC network.
In the Private IP section of these instances, select the created VPC network. This will create a Private IP for the respective instances in the region of the created VPC network.
Use this Private IP in the Node.js server to connect to the instance and it'll be good to go.
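As a rough gcloud sketch of steps 2 and 3 above (the connector name, network, region, IP range, service and image are placeholders):
gcloud compute networks vpc-access connectors create my-connector --region=us-central1 --network=my-vpc --range=10.8.0.0/28
gcloud run deploy my-service --image=gcr.io/my-project/my-image --region=us-central1 --vpc-connector=my-connector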
Common Problems you might face:
Error while creating the VPC Connector: Ensure the IP range of the VPC connector and the VPC network do not overlap.
Different regions: Ensure all instances are in the same region as the VPC network, or they won't connect via the Private IP.
Avoid changing the firewall rules: The firewall rules must not be changed unless you need them to perform differently than they normally do.
Instances in different regions: If the instances are spread across different regions, use VPC network peering to establish a connection between them.
If one has a publicly accessible RDS database on AWS and instead wants to use a bastion EC2 instance to access it and perform database functions (anyone on the internet should still be able to use the app and perform database functions in accordance with the features the app provides), how should one go about making this shift? I have tried searching the internet, but I often get loads of information with terminology that isn't easy to digest. Any assistance would be greatly appreciated.
Again, I want the general public to be able to use the app's provided DB functions, but not be able to access the database directly.
A typical 3-tier architecture is:
A Load Balancer across public subnets, which sends traffic to...
Multiple Amazon EC2 instances in private subnets, preferably provisioned through Amazon EC2 Auto Scaling, which can scale based on demand and can also replace failed instances, which are all talking to...
A Database in a private subnet, preferably in Multi-AZ mode, which means that a failure in the database or in an Availability Zone will not lose any data
However, your application may not require this much infrastructure. For low-usage applications, you could just use:
An Amazon EC2 instance as your application server running in a public subnet
An Amazon RDS database in a private subnet, with a security group configured to permit access from the Amazon EC2 instance
Users would connect to your application server. The application server would connect to the database. Users would have no direct access to the database.
However, YOU might require access to the database for administration and testing purposes. Since the database is in a private subnet, it is not reachable from the Internet. To provide you with access, you could launch another Amazon EC2 instance in a public subnet, with a security group configured to permit you to access the instance. This instance "sticks out" on the Internet, and is thus called a Bastion server (named after the part of a castle wall that sticks out to allow archers to fire on invaders climbing the castle wall).
You can use port forwarding to connect to the Bastion server and then through to the database. For example:
ssh -i key.pem ec2-user@BASTION-IP -L 3306:DATABASE-DNS-NAME:3306
This configures the SSH connection to forward localhost:3306 to port 3306 on the named database server. This allows your local machine to talk to the database via the Bastion server.
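Once the tunnel is up, your usual client just points at localhost; for example, assuming a MySQL engine and a master user named admin:
mysql -h 127.0.0.1 -P 3306 -u admin -p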
You will need to create private subnets for this and update the DB subnet group accordingly so it contains private subnets only. Moreover, in the DB security group, add the bastion and app instances' security groups as the source for the DB port.
For example, if you're using the MySQL engine, allow port 3306 for the target instances' security group IDs.
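As a hypothetical CLI sketch of that rule (the security group IDs are placeholders), granting the bastion's security group access to the DB port:
aws ec2 authorize-security-group-ingress --group-id sg-0aaa1111222233334 --protocol tcp --port 3306 --source-group sg-0bbb5555666677778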
I have an instance and an S3 bucket in AWS (which I'm limiting to a range of IPs). I want to create a VPN and be able to authenticate myself when logging into that VPN to get to that instance.
To simplify: I'm trying to set up a dev environment for my site. I want to make sure I can limit access to that instance, and I want to use a service to authenticate anybody wanting to get to that instance. Is there a way to do all of this in AWS?
Have you looked at AWS Client VPN:
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/what-is.html
This allows you to create a managed VPN server in your VPC which you can connect to using any OpenVPN client. You could then allow traffic from this VPN to your instance using security group rules.
Alternatively you can achieve the same effect using OpenVPN on an EC2 server, available from the marketplace:
https://aws.amazon.com/marketplace/pp/B00MI40CAE/ref=mkt_wir_openvpn_byol
It requires a bit more setup but works just fine, and is perfect if AWS Client VPN isn't available in your region yet.
Both of these approaches ensure that your EC2 instance remains in a private subnet and is not accessible directly from the internet. Also, the OpenVPN client for Mac works just fine.
In trying to move a website to operate via Elastic Beanstalk (EB), I chose the t2 series of EC2 instances, and in doing so was forced to create a Virtual Private Cloud (VPC). This site connects to a MySQL database via RDS, and I'm not having any luck getting the EB-deployed site to access the database.
I've tried reviewing this:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html?icmpid=docs_elasticbeanstalk_console
The above link starts by saying this "works great for development and testing environments, but is not ideal for a production environment", which confuses me, as it doesn't say what would be better in its place - I need a database connected to the site!
It has all sorts of information, and I tried several of the things it suggested regarding connecting to an existing database (not creating a new one). Step 6 of the "To modify the ingress rules on your RDS instance's security group" section mentions accessing the ingress tab, which doesn't exist for me.
I've tried editing the security group associated with the database via the RDS dashboard under "Security groups", but it does not list the security groups that are associated with the VPC or with the EC2 instance launched by EB. I tried adding the IP addresses and Elastic IPs, and still can't get the site to see the database.
I'm at a loss. Can anyone explain how to connect an EB-deployed EC2 instance to an RDS database through the VPC required by t2 instances?
The statement that "this works great for development and testing environments, but is not ideal for a production environment" is just referring to having Elastic Beanstalk create the RDS instance for you. This can be done by configuring the "Database" section when creating a new EB environment.
The downside of letting EB create the RDS instance for you is that your web instances and database instance will be strongly coupled, and if you ever terminate your EB environment, your database will also be terminated, including all of your snapshots.
However, I think you're taking the "external" part of "external database" too literally. Your RDS instance should definitely be within the same VPC as your web instance. However, you should create it and connect your web instance to it manually. Connecting to the database involves setting five environment variables (listed below) and configuring the security group to allow connections from the web instance to the database.
The environment variables you'll need to set on your web instance are as follows:
RDS_HOSTNAME=instancename.region.rds.amazonaws.com
RDS_DB_NAME=databasename
RDS_PASSWORD=databasepassword
RDS_USERNAME=databaseuser
RDS_PORT=5432
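If you create the RDS instance yourself, one way to set these is with the EB CLI (all values here are placeholders; use port 3306 for MySQL or 5432 for PostgreSQL, matching your engine):
eb setenv RDS_HOSTNAME=mydb.abc123.us-east-1.rds.amazonaws.com RDS_DB_NAME=mydb RDS_USERNAME=admin RDS_PASSWORD=changeme RDS_PORT=3306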