Amazon RDS Aurora master/replica access restrictions

I am running a DB cluster with two instances on Amazon RDS Aurora. One instance is the master, the other is a read-only replica. The purpose of the replica is to let a third-party application access certain tables of the database for reporting. The reporting tool therefore accesses the read-only cluster endpoint, which works perfectly fine. To achieve zero-downtime maintenance, AWS may promote the "replica" to "master" at any time. That's pretty cool and does not affect the reporting tool, because it accesses the cluster-ro endpoint, which always routes traffic to the current read-only replica.
However, this means I have to enable the "Publicly accessible: Yes" flag on both instances so that the reporting tool (which is located outside the VPC) has access to all instances, because I cannot predict which instance becomes the master or replica, correct?
I'd prefer that the "master" instance (whichever instance that is) can only be accessed from inside the VPC. How can I achieve that?
My understanding is that every change I make on the "master" instance is automatically applied to the replica(s), including adding/removing security groups, for example. So if I open the firewall to allow the reporting tool access to the replica(s), the same IP addresses can also access the normal cluster endpoint and instance (not only the cluster-ro endpoint). How can I prevent that?

You will need to build something custom for this, unfortunately. A few options to consider from a design point of view:
An Aurora cluster shares the security group settings across all instances, as you called out. If you want custom settings per instance, consider making your whole cluster VPC-only and then placing either ALBs or EC2 proxies in front that forward requests to your DB instance(s). You can then run multiple of these "proxies" and associate a separate security group with each of them.
One big callout with this sort of architecture is that you need to make sure failovers are handled cleanly. Your proxies should always talk to the cluster endpoints and never to the instance endpoints, as instances can change from READER to WRITER behind the scenes. For example, ALBs do not let you create listeners that forward requests to a DNS name; they only work with IPs. This means you would need additional infrastructure to monitor the IPs of readers and writers and keep the ALB updated.
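For illustration, a minimal boto3 sketch of such a monitor, assuming an IP-type target group (the same elbv2 API applies whether an ALB or an NLB sits in front); the endpoint, target group ARN and port are placeholders:

    # Hypothetical sketch: keep an ELBv2 target group in sync with the IPs
    # behind the Aurora reader endpoint. All names below are placeholders.
    import socket
    import boto3

    READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"
    TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/aurora-ro/abc"
    DB_PORT = 3306

    elbv2 = boto3.client("elbv2")

    def sync_targets():
        # Resolve the IPs currently behind the reader endpoint.
        current_ips = {info[4][0] for info in socket.getaddrinfo(READER_ENDPOINT, DB_PORT)}

        # IPs already registered with the target group.
        health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
        registered_ips = {t["Target"]["Id"] for t in health["TargetHealthDescriptions"]}

        stale = registered_ips - current_ips
        missing = current_ips - registered_ips
        if stale:
            elbv2.deregister_targets(
                TargetGroupArn=TARGET_GROUP_ARN,
                Targets=[{"Id": ip, "Port": DB_PORT} for ip in stale],
            )
        if missing:
            elbv2.register_targets(
                TargetGroupArn=TARGET_GROUP_ARN,
                Targets=[{"Id": ip, "Port": DB_PORT} for ip in missing],
            )

You would run something like this on a schedule (or on failover events) so the load balancer never points at a stale instance IP.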
EC2 proxies are a better option for such a design, with the caveat of added cost. I can go into more details if you have specific questions about this setup. This is just a summary of the approach, not a prod-ready design.
On the same note, why can't you use read-restricted DB users instead and keep the cluster public (with SSL enabled, of course)?
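For illustration, a hedged sketch of what such a read-restricted user could look like, assuming the MySQL-compatible edition of Aurora; host, credentials and schema names are placeholders:

    # Hedged sketch: create a SELECT-only user for the reporting tool.
    # Host, credentials and schema names below are placeholders.
    import pymysql

    conn = pymysql.connect(
        host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",
        user="admin",
        password="change-me",
        ssl={"ca": "/path/to/rds-ca-bundle.pem"},  # enforce SSL, as suggested above
    )
    with conn.cursor() as cur:
        cur.execute("CREATE USER 'reporting'@'%' IDENTIFIED BY 'strong-password'")
        # Grant read access only to the schema the reporting tool needs.
        cur.execute("GRANT SELECT ON reportdb.* TO 'reporting'@'%'")
    conn.commit()

Even if the reporting tool's IP range can reach the writer endpoint, this user can only read the tables it was granted.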

Related

AWS: How to set up disaster recovery for ec2 instances in 2 VPCs?

What I have:
One VPC with 2 EC2 Ubuntu instances in it: one with phpMyAdmin, another one with a MySQL database. I am able to connect from one instance to the other.
What I need to achieve:
Set up disaster recovery for those instances, so that in case of networking issues, or if the first VPC is not available for any reason, all requests sent to the first VPC are redirected to the second one. If I got it right, this can be achieved with VPC endpoints, but I cannot find any guide on how to proceed with this. (I have 2 VPCs with 2 EC2 instances in each of them.)
Edit:
Currently I have 2 VPCs with 2 EC2 instances in each of them.
Yes, ideally I need to have 2 databases running and sync the data between them. Right now it is just 2 separate DB instances with no sync.
The first EC2 instance in each VPC has the web app running. So external requests to the web app should be sent to the first VPC if it is available, and to the second VPC if something is wrong with the first one. Same with the DBs: if the DB instance in the first VPC is available, web app requests should update data in this DB. If not, requests should access the data from the second DB instance.
Traditionally, Disaster Recovery (DR) involves having a secondary copy of 'everything' (eg servers in a different data center). Then, if something goes wrong, failover would involve pointing to the secondary copy.
However, the modern cloud emphasises High Availability rather than Disaster Recovery. An HA architecture actually has multiple systems continually running in separate Availability Zones (AZs) (which are effectively Data Centers). When something goes wrong, the remaining systems continue to service requests without needing to 'failover' to alternate infrastructure. Then, additional infrastructure is brought online to make up for the failed portion.
High Availability can also operate at multiple levels. For example:
High Availability for the database would involve running the database in an Amazon RDS "Multi-AZ" configuration. There is one 'primary' database that is servicing requests, but the data is being continually copied to a 'secondary' database in a different AZ. If the database or AZ should fail, then the secondary database takes over as the primary database. No data is lost. (See the sketch after this list.)
High Availability for web apps running on Amazon EC2 instances involves using a Load Balancer to distribute requests to Amazon EC2 instances running in multiple AZs. If an instance or AZ should fail, then the Load Balancer will continue serving traffic to the remaining instances. Auto Scaling would automatically launch new instances to make up for the lost capacity.
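For illustration, a minimal boto3 sketch of launching a Multi-AZ database, as mentioned in the first point above; all identifiers and credentials are placeholders:

    # Hedged sketch: launch an RDS instance with a standby in a second AZ.
    # Identifiers and credentials below are placeholders.
    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="app-db",
        Engine="mysql",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=20,
        MasterUsername="admin",
        MasterUserPassword="change-me",
        MultiAZ=True,  # synchronous standby replica in another AZ; automatic failover
    )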
To compare:
Disaster Recovery is about having a second set of infrastructure that isn't being used. When something fails, the second set of infrastructure is 'switched on' and traffic is redirected there.
High Availability is all about continually handling loads across multiple Data Centers (AZs). When something fails, it keeps going and new infrastructure is launched. There should be no 'outage period'.
You might think that running multiple EC2 instances simultaneously to provide High Availability is more expensive. However, each instance would only need to handle a portion of the load. A single 'Large' instance costs the same as two 'Medium' instances, so splitting the workload between multiple instances does not need to cost more.
Also, please note that VPCs are logical network configurations. A VPC can have multiple Subnets, and each Subnet can be in a different AZ. Therefore, there is no need for two VPCs -- one is perfectly sufficient.
VPC Endpoints are not relevant for DR or HA. They are a means of connecting from a VPC to AWS Services, and operate across multiple AZs already.
See also:
High availability is not disaster recovery - Disaster Recovery of Workloads on AWS: Recovery in the Cloud
High Availability Application Architectures in Amazon VPC (ARC202) | AWS re:Invent 2013 - YouTube
In addition to the previous answers, you might want to take a look at migrating your DBs to RDS or Aurora.
It would provide HA for your DB tier via multi-AZ configuration, and you would not have to figure out how to sync the data between the databases.
That being said, you also have to decide what level of availability is acceptable for you:
multi AZ - data & services span across multiple data centers in one region -> if the whole region goes down, your application goes down.
multi region - data & services span across multiple data centers in multiple regions -> a single region failure won't put you out of business, but it requires some more bucks & effort to configure
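For the multi-region case, one option is an Aurora global database. A hedged boto3 sketch, with all identifiers as placeholders:

    # Hedged sketch: promote an existing Aurora cluster to a multi-Region
    # global database. All identifiers below are placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Wrap the existing cluster in a global cluster...
    rds.create_global_cluster(
        GlobalClusterIdentifier="app-global",
        SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:app-cluster",
    )

    # ...then add a read-only secondary cluster in another Region.
    rds_west = boto3.client("rds", region_name="us-west-2")
    rds_west.create_db_cluster(
        DBClusterIdentifier="app-cluster-west",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="app-global",
    )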

Bastion Server for all AWS instances

I have more than 30 production Windows servers across all AWS regions. I would like to connect to all servers from one base bastion host. Can anyone please let me know which is a good choice? How can I set up one bastion host to communicate with all servers in different regions and different VPCs? Any advice would be appreciated.
First of all, I would question what you are trying to achieve with a single-bastion design. For example, if all you want is to execute automation commands or patches, it would be significantly more efficient (and cheaper) to use AWS Systems Manager Run Command or AWS Systems Manager Patch Manager, respectively. With AWS Systems Manager you get a managed service that offers advanced management capabilities with the highest security principles built in. Additionally, with SSM almost all access permissions can be regulated via IAM permission policies.
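For illustration, a minimal boto3 sketch of Run Command against a set of Windows instances; the instance IDs and the PowerShell command are placeholders:

    # Hedged sketch: run a command on many Windows instances at once via
    # Systems Manager instead of a bastion. Instance IDs are placeholders.
    import boto3

    ssm = boto3.client("ssm")
    resp = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"],
        DocumentName="AWS-RunPowerShellScript",
        Parameters={"commands": ["Get-Service | Where-Object Status -eq 'Running'"]},
    )
    print(resp["Command"]["CommandId"])  # use this ID to poll for results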
Still, if you need to set up a bastion host for some other purpose not supported by SSM, the answer involves several steps.
Firstly, since you are dealing with multiple VPCs (across regions), one way to connect them all and access them from your bastion's VPC is to set up an inter-region Transit Gateway. Among other things, you would need to make sure that your VPC (and subnet) CIDR ranges do not overlap; otherwise, it will be impossible to arrange the routing tables properly.
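A hedged boto3 sketch of the peering step; all IDs and regions are placeholders, and route tables still have to be configured separately:

    # Hedged sketch: peer two transit gateways across Regions.
    # All IDs below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")
    ec2.create_transit_gateway_peering_attachment(
        TransitGatewayId="tgw-11111111111111111",      # the bastion Region's TGW
        PeerTransitGatewayId="tgw-22222222222222222",  # the remote Region's TGW
        PeerAccountId="123456789012",
        PeerRegion="us-east-1",
    )
    # The peer side must then accept the attachment with
    # accept_transit_gateway_peering_attachment.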
Secondly, you need to ensure that access from your bastion is allowed in the inbound rules of your targets' security groups. Since you are dealing with peered VPCs, you will need to allow inbound access based on CIDR ranges.
Finally, you need to decide how you will secure access to your Windows bastion host. As with almost all use cases relying on Amazon EC2 instances, I would stress keeping all instances in private subnets. From private subnets you can still reach the internet via NAT Gateways (or NAT instances) while staying protected from unauthorized external access attempts. With your bastion in a private subnet, you can use SSM's port-forwarding capability to establish a session from your local machine, so you can connect even though the bastion itself never sits in a public subnet.
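For illustration, a hedged sketch of such a port-forwarding session, wrapped in Python for consistency; it assumes the AWS CLI and the Session Manager plugin are installed locally, and the instance ID is a placeholder:

    # Hedged sketch: open an RDP port-forwarding tunnel to a bastion in a
    # private subnet via Session Manager. Requires the AWS CLI plus the
    # Session Manager plugin locally; the instance ID is a placeholder.
    import subprocess

    subprocess.run([
        "aws", "ssm", "start-session",
        "--target", "i-0123456789abcdef0",
        "--document-name", "AWS-StartPortForwardingSession",
        "--parameters", '{"portNumber":["3389"],"localPortNumber":["56789"]}',
    ])
    # RDP to localhost:56789 now reaches the bastion on port 3389.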
Overall, the answer to your question involves a lot of complexity and components that will definitely incur charges on your AWS account. So it would be wise to consider what practical problem you are trying to solve (not shared in the question). Afterwards, you can evaluate whether there is an applicable managed service, like SSM, already provided by AWS. Finally, from a security perspective, granting access to all instances from a single bastion might not be best practice: if your bastion is compromised for whatever reason, you have basically compromised all of your instances across all regions.
Hope this gives you a slightly better understanding of your potential solution.

Coordinating multiple VMs in a VPC

I'm using a CloudFormation stack that deploys 3 EC2 VMs. Each needs to be configured to be able to discover the other 2, either via IP or hostname, doesn't matter.
Amazon's private internal DNS seems very unhelpful, because it's based on the IP address, which can't be known at provisioning time. As a result, I can't configure the nodes with just what I know at CloudFormation stack time.
As far as I can tell, I have a couple of options. All of them seem to me more complex than necessary - are there other options?
Use Route53, set up a private DNS hosted zone, make an entry for each of the VMs which is attached to their network interface, and then by naming the entries, I should know ahead of time the private DNS I assign to them.
Stand up yet another service to have the 3 VMs "phone home" once initialized, which could then report back to them who is ready.
Come up with some other VM-based shell magic, and do something goofy like using nmap to scan the local subnet for machines alive on a certain port.
On other clouds I've used (like GCP) when you provision a VM it gets an internal DNS name based on its resource name in the deploy template, which makes this kind of problem extremely trivial. Boy I wish I had that.
What's the best approach here? (1) seems straightforward, but requires people using my stack to have extra permissions they don't really need. (2) is extra resource usage that's kinda wasted. (3) Seems...well goofy.
Use Route53, set up a private DNS hosted zone, make an entry for each of the VMs which is attached to their network interface, and then by naming the entries
This is the best solution, but there's a simpler implementation.
Give each of your machines a "resource name".
In the CloudFormation stack, create an AWS::Route53::RecordSet resource that associates a hostname based on that "resource name" with the EC2 instance via its logical ID.
Inside your application, use the resource-name-based hostname to access the other instance(s).
An alternative may be to use an Application Load Balancer, with your application instances in separate target groups. The various EC2 instances then send all traffic through the ALB, so you only have one reference that you need to propagate (and it can be stored in the UserData for the EC2 instance). But that's a lot more work.
This assumes that you already have the private hosted zone set up.
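For illustration, a hedged sketch of the shape of that RecordSet resource, expressed here as a Python dict; the logical IDs, zone parameter and domain name are placeholders:

    # Hedged sketch: the CloudFormation template fragment for one node,
    # shown as a Python dict. "Node1Instance", "PrivateZoneId" and the
    # domain name are placeholders.
    import json

    template_fragment = {
        "Resources": {
            "Node1Record": {
                "Type": "AWS::Route53::RecordSet",
                "Properties": {
                    "HostedZoneId": {"Ref": "PrivateZoneId"},  # existing private zone
                    "Name": "node1.app.internal.",             # known ahead of time
                    "Type": "A",
                    "TTL": "60",
                    # Resolve the instance's private IP via its logical ID.
                    "ResourceRecords": [{"Fn::GetAtt": ["Node1Instance", "PrivateIp"]}],
                },
            }
        }
    }
    print(json.dumps(template_fragment, indent=2))

Because the record names are derived from the logical IDs, every node can be configured with its peers' hostnames at stack-creation time.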
I think what you are talking about is known as service discovery.
If you deploy the EC2 instances in the same subnet in the same VPC, with the same security group that allows the port they want to communicate over, they will be "discoverable" to each other.
You can then take this a step further: if autoscaling is on the group and machines die and respawn, they can write their IPs into a registry (e.g. DynamoDB) so that other machines will know where to find them.
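A hedged sketch of that registry pattern; the table name, node name and key schema are placeholder assumptions, and the metadata call assumes IMDSv1-style access is enabled:

    # Hedged sketch: each instance registers its private IP in a DynamoDB
    # table at boot, then reads its peers back. Table/node names and the
    # "node" partition key are placeholder assumptions.
    import urllib.request
    import boto3

    table = boto3.resource("dynamodb").Table("node-registry")

    # Private IP from the EC2 instance metadata service.
    my_ip = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/local-ipv4"
    ).read().decode()

    table.put_item(Item={"node": "node1", "ip": my_ip})

    # Discover the other nodes.
    peers = [item["ip"] for item in table.scan()["Items"] if item["node"] != "node1"]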

How to properly connect AWS Lambda to RDS in VPC?

I am trying to build a GraphQL API with the Serverless Framework on AWS Lambda, using Apollo-Server-Lambda etc. I need to use a PostgreSQL RDS instance that is not publicly available.
I can get lambdas up and running and responding to requests when not in a VPC. I can get a Postgres RDS database up and running and connected to pgAdmin (when in publicly available mode).
The problem is that once I make the RDS Postgres instance 'non-public' and try to get all these pieces talking to each other, I fail.
I have tried a multitude of different approaches.
This is regularly portrayed as magic. This gist is wonderfully written: https://gist.github.com/reggi/dc5f2620b7b4f515e68e46255ac042a7
I could not get access to Postgres from my lambdas using this. So, my first question:
Do I need a NAT gateway for incoming (ingress) API calls to lambdas in a VPC?
My current understanding is that maybe I only need a NAT gateway for my lambdas to make outgoing calls to APIs outside AWS or to services like S3. Is this correct?
Next up: I have made a security group for my lambdas and added this security group to the inbound list of the security group that was created for RDS. My understanding is that this is how the lambdas should gain access to RDS, but I have not had such luck. Maybe this is related to public vs. non-public subnets? Maybe it is related to my understanding of the necessity of a NAT?
Basically, the only visibility I have is Lambdas timing out after 20 or 30 seconds (depending on my limit) when they try to connect to Postgres in private mode. CloudWatch logs reveal nothing else.
Lastly, for now 😂: what is the best way to connect my dev machine to Postgres once it is 'not public'? I have my machine's IP listed for inbound TCP/IP on port 5432 in the RDS security group, but that does not seem to give me the access I was hoping for. Do I really need a VPN connected to the VPC? What's the simplest way?
I have done this tutorial with basic alterations for Postgres https://docs.aws.amazon.com/lambda/latest/dg/vpc-rds.html
I have read and considered answers from this question & more
Allow AWS Lambda to access RDS Database
I have had many successful deployments with the Serverless Framework, with many variations of serverless.yml config to try these options; otherwise I would show a specific config I thought was failing. But the broader issue is that I can't seem to grasp exactly how all these VPCs, security groups, routing tables, etc. are supposed to interact.
Any help greatly appreciated!
Obviously, Lambda needs to be set up to run inside the same VPC, but I'm assuming you already have that.
You need to:
Create a security group (SG) and associate it with the Lambda function.
Now, open the SG associated with the RDS instance (not the one you created above).
Inside the RDS SG, go to the "Inbound" tab and click "Edit".
Select "PostgreSQL" in the Type column. In the Source column, select "Custom" in the select dropdown and enter the Lambda SG ID in the input text (if you start typing "sg-", it will show you all your SGs).
Does it work?
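For reference, the same inbound rule can be added via the API. A minimal boto3 sketch, with both security group IDs as placeholders; note that the source is the Lambda SG rather than a CIDR:

    # Hedged sketch: allow the Lambda SG to reach the RDS SG on the
    # PostgreSQL port. Both SG IDs below are placeholders.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0aaaaaaaaaaaaaaaa",  # the RDS instance's SG
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [{
                "GroupId": "sg-0bbbbbbbbbbbbbbbb",  # the Lambda function's SG
                "Description": "PostgreSQL from Lambda",
            }],
        }],
    )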
Make sure your Lambda function is in the VPC and that the security group allows connections from IP addresses within the VPC's subnets. The number of available IP addresses will affect how many Lambda functions can run concurrently. Also make sure that the Lambda function's role has permission to create and manage the network interfaces it needs in the VPC (the AWSLambdaVPCAccessExecutionRole policy should do the job for you).

Limit Amazon AWS Access of Specific EC2/RDS Instances

I have an Amazon EC2 instance and a corresponding RDS instance that I want to keep private. I'd like to keep it so that only myself and the sysadmin can access these instances. I don't want to provide access to other developers.
However, one of my developers is working on a project right now where he needs to create/configure his own EC2/RDS instance. I could have my sysadmin perform this work, but I'd rather have the developer do it for the sake of expediency.
Is there any way to configure a group/role/policy in a way that allows me to keep my current instances private from the new developer, but would allow him to create his own EC2 and RDS instances?
Your question appears to be mixing several security concepts, such as 'private', 'group/role/policy' and 'firewalls'.
An Amazon EC2 instance has several layers of security:
First, there is the ability to log in to the EC2 instance. This is managed by you, typically by creating users on the instance (in either Linux or Windows) and associating a password or public/private key pair. Only people who have login credentials will be able to access the instance.
Second, there is the ability to reach the instance. Security Groups control which ports are open from which IP address range. Therefore, you could configure a security group to only make the instance accessible from your own IP address or your own private network. Your instance might also be in a private subnet that has no Internet connectivity. This again restricts access to the instance.
A person can therefore only log in to an instance if they have login credentials, if the security group(s) permit access on the protocol being used (RDP or SSH), and if the instance is reachable by the user from the Internet or private network.
Similarly, an Amazon Relational Database Service (RDS) instance is protected by:
Login credentials: A master user login is created when the database is launched, but additional users can be added via normal CREATE USER database commands
Security Groups: As with EC2 instances, security groups control what ports are open to a particular range of IP addresses
Network security: As with EC2 instances, an RDS instance can be placed in a private subnet, which is not accessible from the Internet.
Please note that none of the above controls involve Identity and Access Management (IAM) users/groups/roles/policies, which are used to grant access to AWS services, such as the ability to launch an Amazon EC2 instance or an Amazon RDS instance.
So, the fact that you have existing Amazon EC2 and Amazon RDS instances has no impact on the security of any other instances that you choose to launch. If a user cannot access your existing services, then launching more services will not change that situation.
If you wish to give another person the ability to launch new EC2/RDS instances, you can do this by applying an appropriate policy on their IAM User entity. However, you might want to be careful about how much permission you give them, because you might also be granting them the ability to delete your existing instances, change the master password, create and restore snapshots (thereby potentially accessing your data) and change network configurations (potentially exposing your instances to the Internet).
When granting IAM permissions to somebody, it is recommended that you grant least privilege, which means you should only give them the permissions they need and nothing more. If you are unsure about how much permission to give them or how to configure these permissions, you would be wise to have your System Administrator create the instances on their behalf. That way, you are fully aware of what has been done and you have not potentially exposed your systems.
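For illustration, a hedged sketch of the rough shape such a least-privilege policy could take; the action list is an assumption for a launch-only developer, and a real policy would scope Resource ARNs and add Conditions:

    # Hedged sketch: a policy letting a developer launch (but not delete)
    # EC2/RDS resources. Illustrative only; tighten Resource and add
    # Conditions before real use.
    import json

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:RunInstances",
                    "ec2:Describe*",
                    "rds:CreateDBInstance",
                    "rds:Describe*",
                ],
                "Resource": "*",
            }
        ],
    }
    print(json.dumps(policy, indent=2))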
OK, the best explanation of how things work is in John Rotestein's response. But here are a few practical suggestions (to be considered as a complement to John's response):
You can create separate subnets and give your developers permission to launch instances in only one of the subnets using IAM policies. However, your developers can still reach your instance, so you must configure OS/DB/application-level restrictions.
If your company does NOT use a shared gateway to the internet, you can define Network ACLs to limit access to your exclusive network to your own IP address. If you use a shared gateway, you will not be able to use this solution.
In the second case, one way to limit access is to put your instance in a private subnet and create a bastion host in your public subnet to be used only by you (this must be configured for your RDS instances too). The bastion host will be reachable by your developers, but you can use a specific key pair that only you have access to. Just keep in mind that your instances and RDS will not be available to the internet.
But I think the simplest solution would be to create different VPCs: one for your team and another for the development team. In this solution you can restrict your developers' access to all resources in the "main" VPC. Of course, this also means no internet connection to your instances.