I'm trying to get a better understanding of AWS organization patterns.
Suppose I define the term "application stack" as a set of interconnected AWS resources (e.g. a Java microservice behind an ELB + DynamoDB for persistence). I then need some way of isolating independent stacks. Each application would get its own DynamoDB table or Kinesis stream, so there is no need for cross-stack resource sharing, but the microservices do need to communicate with each other.
A priori, I could see either of two organizational methods being used:
Create a VPC for each independent stack (1 VPC per 1 Application)
Create a single "production" VPC in which each stack resides in a separate private subnet.
There could be hundreds of these independent "stacks" within the organization, so there is the potential for resource exhaustion if there is a hard limit on VPC count. But other than resource scarcity, what are the decision criteria around creating a new VPC or using a pre-existing VPC for each stack? Are there strong positive or negative consequences to either approach?
Thank you in advance for your consideration and response.
Subnets and IP addresses are a limited commodity within your VPC. If you exhaust a VPC's address space, it is difficult to recover. Also, by default, all subnets can talk to all other subnets, so there may be security concerns. Any limit on the number of VPCs is a soft limit and can be increased by AWS Support.
For these reasons, separate distinct projects at the VPC level. Never mix projects within a VPC. That's just asking for trouble.
Also, if your production projects are going to include non-VPC-applicable resources, such as IAM users, DynamoDB tables, SQS queues, etc., then I also recommend isolating those projects within their own AWS account (at the production level).
This way, you're not looking at a list of DynamoDB tables that includes tables from different projects.
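As a small aside on the soft limit mentioned above, here is a minimal boto3 sketch (assuming credentials for the account you are auditing; the quota name string is an assumption, so inspect the list output if it differs) that compares the number of VPCs in a region against the current limit:

```python
import boto3

REGION = "eu-west-1"  # illustrative region

ec2 = boto3.client("ec2", region_name=REGION)
quotas = boto3.client("service-quotas", region_name=REGION)

# Count the VPCs that already exist in this account/region.
vpc_count = len(ec2.describe_vpcs()["Vpcs"])

def find_quota(quota_list):
    # "VPCs per Region" is the expected quota name; adjust if your output differs.
    for q in quota_list["Quotas"]:
        if q["QuotaName"] == "VPCs per Region":
            return q["Value"]
    return None

# Applied quotas only exist once an increase has been granted;
# otherwise fall back to the service's default quota.
limit = (find_quota(quotas.list_service_quotas(ServiceCode="vpc"))
         or find_quota(quotas.list_aws_default_service_quotas(ServiceCode="vpc")))

print(f"{vpc_count} VPCs used out of a limit of {limit} in {REGION}")
```

If the count approaches the limit, an increase can be requested through Service Quotas or AWS Support.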
I am in the process of coming up with a multi-region, high-availability (active-active) architecture for my product. A simplified version of our stack is that we use Lambda to implement our microservices, which are exposed as APIs using API Gateway. These microservices integrate with downstream services or databases like DynamoDB and Aurora RDS. So the flow is:
Route 53 >> API Gateway >> Lambda >> Downstream service/Database
I am trying to figure out the best mechanism to configure Route 53 such that it detects when any of the services in the stack fails and routes incoming requests to another region. E.g. if the Lambda service in region-1 fails, it is easy because I would create health check records pointing to these Lambdas, and once they are not reachable Route 53 will itself route the next requests to region-2.
However, if a downstream resource that Lambda depends on (e.g. RDS) fails, how will Route 53 know this, so as to route the next requests to region-2?
Appreciate any pointers on this.
It depends a bit on your envisioned failover setup.
Let us assume you have two regions: region1 and region2.
Now you could have two failure scenarios:
Lambda fails in region1 => you failover to Lambda in region2
RDS fails in region1 => you failover to RDS in region2
In both cases you need to ask yourself what you want to do. If, for example, in case 1 you connect from Lambda in region2 to RDS in region1, then high cross-region transfer costs may occur, so you may want to trigger a failover of RDS to region2 in any case.
Note: Generally it is advisable not to connect Lambda directly to RDS, but to use RDS Proxy instead (to avoid hammering the database with connections, slowing it down, etc.): https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html
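As an illustration of that note, here is a minimal sketch of a Lambda handler going through an RDS Proxy endpoint (the environment variable names and the pymysql dependency are assumptions; the point is that the function connects to the proxy exactly as it would to the database itself):

```python
import os
import pymysql  # packaged with the Lambda deployment; not in the default runtime

# Illustrative configuration, injected via environment variables.
PROXY_ENDPOINT = os.environ["DB_PROXY_ENDPOINT"]  # e.g. myproxy.proxy-xxxx.eu-west-1.rds.amazonaws.com
DB_USER = os.environ["DB_USER"]
DB_NAME = os.environ["DB_NAME"]
DB_PASSWORD = os.environ["DB_PASSWORD"]           # better: fetch from Secrets Manager

def handler(event, context):
    # The proxy pools and reuses connections, so bursts of concurrent
    # invocations do not exhaust the database's connection limit.
    conn = pymysql.connect(
        host=PROXY_ENDPOINT,
        user=DB_USER,
        password=DB_PASSWORD,
        database=DB_NAME,
        connect_timeout=5,
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            return {"ok": cur.fetchone()[0] == 1}
    finally:
        conn.close()
```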
Generally, with RDS these regional failovers are much more complicated (I can expand on that if needed). It is also not simply a matter of pointing an IP at another region, because you usually need to promote the database (cluster) in the other region to a designated writer node so that it can accept write operations.
For the databases you mentioned (DynamoDB, Aurora), there is a solution, though: use their global replication features.
A simpler solution could be - depending on your application - to use DynamoDB Global Tables (see https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html). However, DynamoDB is clearly not a relational database, so it may not fit all cases. Nevertheless, DynamoDB generally works very well with Lambda and is also easier for cross-region replication. Note: if you encrypt your data using an AWS KMS CMK (recommended), then you need to have this key available in all regions where you plan to use global tables (see https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html).
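For illustration, adding a replica region to an existing table with boto3 might look like the following sketch (the table name and regions are placeholders, and the table is assumed to meet the global tables prerequisites, e.g. streams enabled):

```python
import boto3

# Assumes a table "orders" already exists in eu-west-1 and satisfies the
# global tables (version 2019.11.21) requirements.
dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

dynamodb.update_table(
    TableName="orders",
    ReplicaUpdates=[
        {"Create": {"RegionName": "us-east-1"}},  # adds a replica in the second region
    ],
)

# Once the replica is ACTIVE, the application in us-east-1 reads and writes
# its local replica, and DynamoDB replicates changes in both directions.
```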
Another solution could be Amazon Aurora Global Database (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html) - the database is available in multiple regions and failover is thus easier (cf. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-disaster-recovery.html).
In the Aurora case you have to detect a region failure yourself (e.g. you could have a Lambda in both regions that regularly tries to connect to the currently active cluster for writing) and automatically promote the cluster in the other region to primary if the original one is not available.
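A rough sketch of such a detection Lambda, running in the secondary region, could look like this (all identifiers and endpoints are illustrative; the detach-and-promote call is the unplanned-failover path and should be wrapped in far more safeguards, retries and alarms in practice):

```python
import os
import boto3
import pymysql

# Illustrative placeholders for the global cluster and its endpoints.
PRIMARY_WRITER_ENDPOINT = "mydb.cluster-xxxx.eu-west-1.rds.amazonaws.com"
GLOBAL_CLUSTER_ID = "my-global-cluster"
SECONDARY_CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:mydb-secondary"

def handler(event, context):
    """Scheduled probe (e.g. every minute) that checks the primary writer."""
    try:
        conn = pymysql.connect(
            host=PRIMARY_WRITER_ENDPOINT,
            user="healthcheck",
            password=os.environ["DB_PASSWORD"],  # fetch from Secrets Manager in a real setup
            connect_timeout=3,
        )
        conn.close()
        return {"primary": "healthy"}
    except Exception:
        # The primary region looks unreachable: detach the secondary cluster
        # from the global cluster so it becomes a standalone, writable cluster.
        rds = boto3.client("rds", region_name="us-east-1")
        rds.remove_from_global_cluster(
            GlobalClusterIdentifier=GLOBAL_CLUSTER_ID,
            DbClusterIdentifier=SECONDARY_CLUSTER_ARN,
        )
        return {"primary": "unreachable", "action": "promoted secondary"}
```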
Do not forget: you need to test the failover regularly, otherwise it is almost guaranteed that it will not work when you need it.
Generally, running databases across regions implies transfer costs and additional resource costs compared to a single region - not only during failover, but whenever data is written.
With this configuration, I recommend failing over the entire stack (to another Region), rather than failing over individual tiers (components) of the architecture. (This is what you seem to be saying in your question, but just making sure we are on the same page).
Your question comes down to how to configure the health check, and specifically how to implement shallow versus deep (checking dependencies like RDS) health checks.
There is an AWS Well-Architected lab that covers these concepts: Implementing Health Checks and Managing Dependencies to Improve Reliability.
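To make the shallow-versus-deep distinction concrete, here is a minimal sketch of a deep health-check Lambda that could sit behind an API Gateway route targeted by a Route 53 health check (the table name is a placeholder; you would probe whichever dependencies matter to your stack):

```python
import boto3

# Illustrative dependency; replace with your own resources.
TABLE_NAME = "orders"
dynamodb = boto3.client("dynamodb")

def handler(event, context):
    """Deep health check exposed via API Gateway, e.g. GET /health.
    A Route 53 health check targets this endpoint; a non-2xx response
    marks the whole regional stack unhealthy so DNS fails over."""
    try:
        # Exercise the actual dependency, not just the Lambda runtime:
        # a cheap strongly consistent read proves the table is reachable.
        dynamodb.get_item(
            TableName=TABLE_NAME,
            Key={"pk": {"S": "health-check"}},
            ConsistentRead=True,
        )
        return {"statusCode": 200, "body": "ok"}
    except Exception as exc:
        # Any dependency failure turns into an unhealthy signal for Route 53.
        return {"statusCode": 503, "body": f"dependency failure: {exc}"}
```

Be aware that deep health checks can also trigger failover on transient dependency errors, so tune their thresholds and failure counts carefully.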
AWS ECS allows for new services to automatically use Service Discovery, as mentioned in the documentation here:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-configure-servicediscovery.html
If I understand correctly, there should always be a namespace and a service created in Cloud Map before a service instance can register itself in it. After registering, the service instance can then be discovered using DNS records, which are kept in Route 53, which is a global service. The namespace has its own private zone, and applications in VPCs associated with this zone can query the records and discover the service needed, regardless of the region they are in.
However, if I understand correctly, Cloud Map resources themselves are regional.
Let's consider the following scenario: there is a Cloud Map namespace and service X defined in region A. For redundancy reasons I would like instances of service X to run in region A, but also in region B. However, when configuring service discovery for ECS in region B, it is not possible to use a namespace from region A.
How can then CloudMap service discovery be used in a multi-region environment? Shall corresponding namespaces be created in both regions?
Redundancy can be built within a single region. I have not yet seen a regulator that expects more than what is offered by multiple Availability Zones in a single region, but if you still wanted to achieve what you are asking, you would need to perform some kind of VPC network peering: https://docs.aws.amazon.com/vpc/latest/peering/peering-scenarios.html#peering-scenarios-full
I have no experience with how Cloud Map behaves in this context, though. Assuming DNS resolution is possible, it would supposedly still work. But AWS services are best (cheaper, more stable, lower latency) when used within each region, targeting their region-specific API: https://docs.aws.amazon.com/general/latest/gr/cloud_map.html
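If you do create corresponding namespaces in both regions, discovery simply becomes a per-region API call. A minimal boto3 sketch (namespace and service names are placeholders for illustration):

```python
import boto3

def discover(region, namespace="prod.local", service="service-x"):
    # Cloud Map is regional, so the query targets one region's API at a time.
    sd = boto3.client("servicediscovery", region_name=region)
    resp = sd.discover_instances(NamespaceName=namespace, ServiceName=service)
    return [i["Attributes"] for i in resp["Instances"]]

# Each region keeps its own namespace and registrations, so a multi-region
# setup queries them separately (or merges the results in the caller).
print(discover("eu-west-1"))
print(discover("us-east-1"))
```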
The documentation says a "zone" could be mapped to a different cluster for different projects, but is it true that a zone may map to different clusters across projects?
I've never seen a zone mapping differ across projects. Also, since each zone provides different machine types, I'm not even sure a zone could be mapped to different clusters across projects.
If it does, is there a way to find out which cluster my zone is mapped to, like there is in AWS?
Thanks!
A cluster, as defined, is simply a set of physical servers, networks, disks and cooling; in short, a datacenter. It's impossible to know which one you are on - it's Google's internal management.
A zone sits on top of one or several clusters. If the initial cluster (i.e. datacenter) is too small, Google may choose to expand it, or, if that's not possible, to add another one. But from the user's point of view, this is invisible!
Google tries to locate all the projects of the same organization in the same cluster, especially for security and performance reasons in the case of VPC peering or Shared VPC. However, this is not guaranteed. And because you don't know the placement, you can't check it.
For example, if 2 projects are on 2 different clusters in the same region, there is no issue. But if you create a VPC peering between them, it is not optimized. To solve this, Google can migrate Compute Engine instances from one cluster to another, even without stopping the VM (this is called "live migration"); you are not able to see anything of this VM placement.
Generally the cluster is consistent for a project. In the case of huge resource usage it could be different (HPC, for example, or workloads requiring 10k+ CPUs), but Googlers should have more detail in this case if you are a big CPU consumer.
I tried to create a GKE regional cluster in europe-west3 with the N2 CPU type, which is only available in 2 of the 3 zones, and I got this error:
I would like to know what is more recommended when one DB instance should be shared across different AWS regions. Is it better to use cross-region read replicas, or to use a read replica in the region of origin + AWS Global Accelerator?
Is there a "best practice" solution for global applications?
I am not experienced with AWS and most of these things are pretty new to me, so I know that my question may look amateur.
From what I have read, I think that one centralized read replica is the better solution, due to latency between regions, but if that were the case, why would anyone use cross-region replicas at all?
If your application is hosted in a region, e.g. eu-west-1, the best read performance will always come when it is reading data from eu-west-1.
If you happen to have customers in us-east-1, you have to choose between one of 3 options:
Edge Location
You reduce the latency using edge locations, i.e. CloudFront or Global Accelerator. These improve latency by using the AWS backbone to route to your origins. This is faster than going over the public internet, but the application remains in the original region (in this case eu-west-1). You also maintain only one copy of the application.
Latency-based routing
This option brings the application closer to the user. By using either Route 53 with latency-based records or Global Accelerator, you can have your domains resolve to the location that has the lowest latency for each user. You would have your central region (where the read/write primary lives) and then create cross-region replicas. This provides the best read performance, as reads are done locally (rather than across regions).
In the example, eu-west-1 is the primary region with cross-region replicas in us-east-1. Latency between regions is only observed in the time it takes to write to the read/write primary (in the original region, unless you use Aurora read replica write forwarding). This is by far the most complex and costly option, but it provides the best performance overall.
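For reference, a minimal boto3 sketch of the latency-based records described above (the hosted zone ID, record name, endpoints and health check IDs are illustrative placeholders):

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000"  # placeholder hosted zone

def latency_record(region, endpoint, health_check_id):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "CNAME",
            "SetIdentifier": region,           # one record set per region
            "Region": region,                  # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": endpoint}],
            "HealthCheckId": health_check_id,  # unhealthy region drops out of resolution
        },
    }

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            latency_record("eu-west-1", "app-eu.example.com", "hc-eu-id"),
            latency_record("us-east-1", "app-us.example.com", "hc-us-id"),
        ]
    },
)
```

Because each regional record references a health check, an unhealthy region is removed from DNS resolution and traffic shifts to the remaining region.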
Do nothing
If you do nothing, requests use the public internet to route to a host; users who are further away from your application will see higher latency, but this is the cheapest option.
Summary
You essentially need to decide how important cross-region support is. If it is simply because your user base is in a distant region, then ensuring you're as close to them as possible is key. You would not need to think about replicas if your users are concentrated in one geographical region.
Remember you can always enhance your infrastructure when demand increases from other geographical regions.
I wonder if it is possible to run a single EKS cluster within one AWS account and give access to it (entirely, or to specific namespaces) to another account?
Here's a scenario:
In my company we have multiple customers and host their systems within AWS. We'd like to set up an AWS Organizations structure with sub-accounts per customer (+ maybe separate accounts for prod and test). Some of the customers are already being migrated to Kubernetes, so we need an EKS cluster. Now, setting up separate clusters for each customer would not be cost effective - partially because it would cost over 100 USD for each control plane, partially because we would need separate node groups for each customer, which would reduce the benefits of scale.
For this reason I thought of setting up a single EKS cluster and giving the sub-accounts created for customers access to it.
Can I achieve this? And how to do it relatively simple?
Follow these steps:
Create a separate namespace for each customer rather than creating a separate cluster.
Define resource quotas at the namespace level to manage the resources.
Create RBAC Roles and RoleBindings to control access at the namespace level for each customer.
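For illustration, here is a minimal sketch of those three steps using the official Kubernetes Python client (namespace names, quota values and the group name are placeholders; the group would be mapped to each customer's IAM role via the cluster's aws-auth ConfigMap):

```python
from kubernetes import client, config

def onboard_customer(customer: str):
    """Create an isolated namespace, a resource quota, and namespace-scoped RBAC
    for one customer on a shared EKS cluster (illustrative sketch only)."""
    config.load_kube_config()  # or load_incluster_config() when run inside the cluster

    core = client.CoreV1Api()
    rbac = client.RbacAuthorizationV1Api()

    # 1. Namespace per customer instead of a cluster per customer.
    core.create_namespace(body={"metadata": {"name": customer}})

    # 2. Resource quota so one customer cannot starve the others.
    core.create_namespaced_resource_quota(
        namespace=customer,
        body={
            "metadata": {"name": f"{customer}-quota"},
            "spec": {"hard": {"requests.cpu": "4", "requests.memory": "8Gi", "pods": "30"}},
        },
    )

    # 3. Role + RoleBinding scoped to the namespace; the bound group is the one
    #    you map the customer's IAM role to in the aws-auth ConfigMap.
    rbac.create_namespaced_role(
        namespace=customer,
        body={
            "metadata": {"name": f"{customer}-edit"},
            "rules": [{
                "apiGroups": ["", "apps"],
                "resources": ["pods", "services", "deployments"],
                "verbs": ["get", "list", "watch", "create", "update", "delete"],
            }],
        },
    )
    rbac.create_namespaced_role_binding(
        namespace=customer,
        body={
            "metadata": {"name": f"{customer}-edit-binding"},
            "subjects": [{"kind": "Group", "name": f"{customer}-group",
                          "apiGroup": "rbac.authorization.k8s.io"}],
            "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                        "kind": "Role", "name": f"{customer}-edit"},
        },
    )
```

This keeps the customers on shared node groups while limiting what each one can see and consume to their own namespace.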