AWS - How to Connect Elastic Beanstalk to Private RDS Instance - amazon-web-services

Can someone please clearly explain in a step-by-step guide and in simple terms from start to finish how to properly setup a private RDS instance that connects to:
an Elastic Beanstalk instance, where the environment is a load-balancing, auto-scaling web server environment using PHP as its platform.
MySQL Workbench
Side note, the EB and RDS instance(s) are all in the same VPC. I suppose in reality this may be more of a how to properly setup and connect IAM profiles and roles question.
In essence, I want to restrict all internet access to/from the RDS instance, while still allowing my EB instance or other resources, i.e. other EC2 instances (all located in the same VPC), to connect to the RDS instance, while also allowing me to connect with a DB tool like MySQL Workbench.
Elastic Beanstalk Security Questions:
Instance Profile: How should I set up/configure this role and its associated policy?
Service Profile: How should I set up/configure this role and its associated policy?
RDS Security Questions:
VPC Security Groups: How should I set up/configure the security group(s) to allow access from the EB instance, other specified resources (EC2), and MySQL Workbench?

Related

Elastic Beanstalk in VPC

I made and deployed my Django application in AWS Elastic Beanstalk. It has a connection to a Postgres DB in RDS, set up through the Elastic Beanstalk console.
When I click Configuration -> Network in Elastic Beanstalk, I see: "This environment is not part of a VPC."
How can I make it a part of a VPC? Thanks!
NOTE: very new to this ;)
You have to recreate the Elastic Beanstalk environment and pick the VPC during the creation. It is not possible to move an existing environment into a VPC.
But, unless you have access to EC2-Classic, the EC2 servers that were launched are already in a VPC. They are just in the default VPC. But as far as Elastic Beanstalk is concerned, it seems oblivious to this.
I am not sure if there are any features that are exclusively available to VPC environments. My suggestion is to try to use your current environment, and if you happen to recreate the environment later for some other reason, then you can try picking a VPC and see if it offers anything new.
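If you do recreate the environment, the VPC can be specified at creation time. A minimal sketch with the EB CLI (the environment name, VPC, subnet, and security-group IDs are placeholders; exact flag names can vary by CLI version, so check eb create --help):
eb create my-app-env \
    --vpc.id vpc-0abc1234567890def \
    --vpc.ec2subnets subnet-0priv111,subnet-0priv222 \
    --vpc.elbsubnets subnet-0pub111,subnet-0pub222 \
    --vpc.securitygroups sg-0app1234567890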
As already explained by @stefansundin, you can't move an existing EB environment into a custom VPC. You have to create a new one.
These are general steps to consider:
Create a custom VPC with public and private subnets as described in the docs: VPC with public and private subnets (NAT). NAT is needed for the instances and RDS in the private subnets to communicate with the internet, but no inbound internet traffic will be allowed. This ensures that your instances and RDS are not accessible from the outside.
Create a new RDS instance, external to EB. This is good practice, as otherwise the lifetime of your RDS is coupled to the EB environment. A starting point is the following AWS documentation: Launching and connecting to an external Amazon RDS instance in a default VPC
Create a new EB environment and customize its settings to use the VPC. Pass the RDS endpoint to the EB instances using environment variables. Depending on how you want to handle the RDS password, there are a few options, ranging from environment variables (low security) through SSM Parameter Store (free) to AWS Secrets Manager (not free).
Setting this all up correctly can be difficult for someone new to AWS, but with patience and practice it can be done. Thus, I would recommend starting with the default VPC, as you have now. Then, once you are comfortable working with an external RDS, think about creating a custom VPC as described.
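As a rough sketch of steps 2 and 3 with the AWS CLI (all names, subnet, and security-group IDs below are placeholders; a DB subnet group covering the private subnets is assumed to exist, and the endpoint value comes from aws rds describe-db-instances once the instance is available):
# create the RDS instance in the private subnets, not publicly accessible
aws rds create-db-instance \
    --db-instance-identifier my-app-db \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'change-me' \
    --db-subnet-group-name my-app-private-subnets \
    --vpc-security-group-ids sg-0db1234567890 \
    --no-publicly-accessible

# hand the endpoint to the Elastic Beanstalk environment as environment variables
aws elasticbeanstalk update-environment \
    --environment-name my-app-env \
    --option-settings \
      Namespace=aws:elasticbeanstalk:application:environment,OptionName=RDS_HOSTNAME,Value=my-app-db.xxxxxxxx.us-east-1.rds.amazonaws.com \
      Namespace=aws:elasticbeanstalk:application:environment,OptionName=RDS_PORT,Value=3306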

Why do hostnames of connections to Aurora Serverless come from outside the VPC?

I have a php website running in Beanstalk behind a load balancer.
The website is connecting to a MySQL compatible database running as Aurora Serverless.
Both the Elastic Beanstalk instance and Aurora are set up in the same VPC.
The VPC CIDR is 10.10.0.0/24
The elastic beanstalk instance has local IP 10.10.0.18
The serverless Aurora cluster is using VPC endpoints in the two subnets of the VPC and their IP addresses are 10.10.0.30 and 10.10.0.75.
Even though Aurora Serverless is limited to only accepting connections from within the VPC, out of habit I have still only granted my user permission if they are coming from the VPC.
So for instance I have granted permissions to 'user'@'10.10.0.%'
When my website tries to connect to the database however it gets permission denied because it is trying to access it with a user that was not granted permission because the host is not in the 10.10.0.0/24 subnet.
Here are some of the errors that I am getting:
Access denied for user 'user'@'10.1.17.79' (using password: YES)
Access denied for user 'user'@'10.1.18.17' (using password: YES)
Access denied for user 'user'@'10.1.19.1' (using password: YES)
Access denied for user 'user'@'10.1.19.177' (using password: YES)
As you can see, none of those hosts are within my VPC.
Is this because the cluster is running in its own VPC, linked to mine via the private links?
And if so, is my only option to use % as the host for the users I grant privileges to?
Personally I would like to have locked it down to only my VPC just in case Serverless Aurora opens up for connections from the internet in the future.
Don't whitelist specific IP addresses like that for RDS. Especially with Aurora Serverless, where the node IPs can change at a moment's notice as it scales, you will find there is no way to know the true IP address of the node.
Remember that all the RDS database services technically run within an AWS-managed VPC; you do, however, get an ENI attached to your VPC that allows you to connect to the instance. This lets you communicate as if the resource were actually created within your VPC.
The best way to enhance security is going to be through security groups and NACLs, combined with using TLS and encryption at rest. Finally ensure your passwords are strong and rotated frequently.
The RDS Best Practices should help you to dive into other practices you can follow to enhance your security.
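For example, a sketch of the security-group approach combined with forcing TLS on the wildcard-host user (both group IDs and the cluster endpoint are placeholders; sg-0db... is the group attached to the Aurora cluster and sg-0eb... the group used by the Elastic Beanstalk instances):
# only allow port 3306 from the EB instances' security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0db1234567890 \
    --protocol tcp --port 3306 \
    --source-group sg-0eb1234567890

# keep % as the MySQL host, but require TLS on that user
mysql -h my-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
    -e "ALTER USER 'user'@'%' REQUIRE SSL;"
That way only traffic from the EB security group ever reaches port 3306, even though the MySQL grant itself uses % as the host.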

Access an RDS instance created in Elastic Beanstalk

When you set up a new Elastic Beanstalk cluster you can access your EC2 instance by doing this:
eb ssh
However, it's not clear how to access the RDS instance.
How do you access an RDS in an Elastic Beanstalk context in order to perform CRUD operations?
The RDS instance can be accessed from anywhere (for example from the command line) by adjusting the RDS security group.
Check your AWS VPC configuration. The security group will need to be adjusted to allow you to connect from a new source/port.
Find the security group ID for the RDS instance.
Find that group in AWS Console > VPC > Security Groups.
Adjust the Inbound and Outbound Rules accordingly. You need to allow access to/from the IP or security group that needs to connect to the RDS instance.
FROM: https://stackoverflow.com/a/37200075/1589379
After that, all that remains is configuring whatever local DB tool you would like to use to operate on the database.
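For MySQL Workbench against a private RDS instance, the usual pattern is an SSH tunnel through an instance that lives in the same VPC (the key file, RDS endpoint, and instance address below are placeholders):
# forward local port 3307 to the RDS endpoint via an EC2/EB instance in the VPC
ssh -i ~/.ssh/my-eb-key.pem -N \
    -L 3307:my-db.xxxxxxxx.us-east-1.rds.amazonaws.com:3306 \
    ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com
Then point Workbench (or any other client) at 127.0.0.1:3307; Workbench can also manage the tunnel itself via its "Standard TCP/IP over SSH" connection method.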
EDIT:
Of additional note: if the Elastic Beanstalk environment is configured to use RDS, the EC2 instances will have environment variables set with the information needed to connect to the RDS instance.
This means that you can read those variables in any code that needs access.
Custom environment variables may also be set in the Elastic Beanstalk environment configuration, and these, too, can be read the same way.
PHP
// Elastic Beanstalk sets the RDS_* variables when an RDS instance is attached to the environment
define('RDS_HOSTNAME', getenv('RDS_HOSTNAME'));
$dsn = 'mysql:host=' . RDS_HOSTNAME . ';port=' . getenv('RDS_PORT') . ';dbname=' . getenv('RDS_DB_NAME');
$db = new PDO($dsn, getenv('RDS_USERNAME'), getenv('RDS_PASSWORD'));
Linux CommandLine
mysql --host=$RDS_HOSTNAME --port=$RDS_PORT -u $RDS_USERNAME -p$RDS_PASSWORD
RDS is a managed database service, which means that you can only access it through database connections and queries.
If it is a MySQL database, you can access it from your EC2 instance through the mysql client like this:
mysql -u user -p -h rds.instance.endpoint.region.rds.amazonaws.com
or configure your app with the settings it needs to connect.
Make sure that you set up security groups correctly so that your EC2/other service has access to your RDS instance.
Update:
If you want direct access to the database host itself, then you should run your own MySQL server on an EC2 instance instead. It would cost about the same (even though a fraction of performance is lost in comparison), and an EC2 instance can be stopped when you are not using it.

Connecting Kubernetes minions to classic (non-VPC) AWS resources

I'm looking to spin up a Kubernetes cluster on AWS that will access resources (e.g. RDS, ElastiCache) that are not on a VPC.
I was able to set up access to RDS by enabling ClassicLink on the kubernetes-vpc VPC, but this required commenting out the creation of one of Kubernetes' route tables (which conflicted with ClassicLink's route tables), which breaks some of Kubernetes networking. ElastiCache is more difficult, as it looks like its access is only grantable via classic EC2 security groups, which can't be associated with a VPC EC2 instance, AFAICT.
Is there a way to do this? I'd prefer not to use a NAT instance to provide access to ElastiCache.

(AWS) Can't launch RDS in my chosen VPC

I'm following AWS's instructions Scenario 2: VPC with Public and Private Subnets and am having issues at the point I try to launch a DB server.
When I launch my instance, all is fine and I am able to assign it to my newly created VPC. However, when it comes to launching the RDS instance, the only VPC available (at step 4, Configure Advanced Settings) is the default VPC (i.e. not the one I created per their instructions).
Has anyone any idea about this or indeed how to resolve it?
RDS requires a little more setup than an EC2 instance if you want to launch it within a VPC.
Specifically, you need to create:
a DB subnet group within the VPC
a VPC security group for the RDS instance
The documentation is a little buried in the AWS RDS documents. It can be found here:
Creating a DB Instance in a VPC
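Roughly, with the AWS CLI (the VPC and subnet IDs are placeholders for the ones created in the Scenario 2 walkthrough):
# a DB subnet group spanning the private subnets of the custom VPC
aws rds create-db-subnet-group \
    --db-subnet-group-name my-vpc-db-subnets \
    --db-subnet-group-description "DB subnet group for my custom VPC" \
    --subnet-ids subnet-0priv111 subnet-0priv222

# a VPC security group for the RDS instance
aws ec2 create-security-group \
    --group-name my-vpc-rds-sg \
    --description "RDS access from the app tier" \
    --vpc-id vpc-0abc1234567890def
Once the DB subnet group exists, the custom VPC shows up as an option when launching the RDS instance.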