Kubernetes on AWS with multiple accounts? - amazon-web-services

I wonder if it is possible to run a single EKS cluster within one AWS account and give another account access to it (either the entire cluster or specific namespaces)?
Here's a scenario:
In my company we have multiple customers and host their systems within AWS. We'd like to set up an AWS Organizations structure with subaccounts per customer (and maybe separate accounts for prod and test). Some of the customers are already being migrated to Kubernetes, so we need an EKS cluster. Setting up a separate cluster for each customer would not be cost effective: partly because each control plane would cost over 100 USD, and partly because we would need separate node groups for each customer, which would reduce the benefits of scale.
For this reason I thought of setting a single EKS cluster and give access to it to subaccounts created for customers.
Can I achieve this? And how to do it relatively simple?

Follow these steps
You can create a separate namespace for each customer rather than creating a separate cluster.
Define resource quotas at the namespace level to manage resources.
Create RBAC Roles and RoleBindings to control access at the namespace level for each customer.
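The steps above can be sketched as Kubernetes manifests. All names (`customer-a`, `customer-a-devs`, etc.) are placeholders, and the quota values and permission lists are illustrative assumptions, not recommendations:

```yaml
# Namespace per customer (name is a placeholder)
apiVersion: v1
kind: Namespace
metadata:
  name: customer-a
---
# ResourceQuota capping what the namespace can consume (example values)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: customer-a-quota
  namespace: customer-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Role granting day-to-day access within this namespace only
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: customer-a-dev
  namespace: customer-a
rules:
- apiGroups: ["", "apps", "batch"]
  resources: ["pods", "deployments", "services", "jobs", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# RoleBinding tying the Role to a group asserted at authentication time
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: customer-a-dev-binding
  namespace: customer-a
subjects:
- kind: Group
  name: customer-a-devs
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: customer-a-dev
  apiGroup: rbac.authorization.k8s.io
```

On EKS specifically, cross-account access is then typically granted by mapping an IAM role from the customer's subaccount to the `customer-a-devs` group via the `aws-auth` ConfigMap in `kube-system`, so users who can assume that role land in the namespace-scoped group above.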

Related

Separate Billing in AWS Account for two different EC2 Instances

I have two EC2 instances in a single AWS account, each running a different application service. Now I want to separate the billing for that account, so I can see the exact spend and charges for each application and manage my account accordingly for accounting purposes.
Is this possible? If not, can anyone suggest a better way to achieve it?
I considered using separate AWS accounts for the two services, but that would be hard to manage, so I'd rather avoid that option.
For comparison, Google Cloud lets you handle different billing accounts within the same Google Cloud account, so I thought a similar concept might be available on AWS as well.
Thanks in advance for any help or suggestions.
You can't get separate bills for different sets of resources within the same AWS account. However, you can break down the costs for different sets of resources using tags. By applying a distinct tag (or set of tags) to the resources you allocate to each application, you can get a per-application cost breakdown in billing reports and Cost Explorer. See the documentation for details and setup steps - https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
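As a sketch of the tagging approach, here is a minimal CloudFormation fragment that gives each application's resources a distinct value under a shared tag key. All names and the `CostCenter` key are hypothetical; the key must also be activated as a cost allocation tag in the Billing console before it shows up in Cost Explorer:

```yaml
# Illustrative CloudFormation fragment: one distinct tag value per application.
# Activate the "CostCenter" key as a cost allocation tag in the Billing
# console so the breakdown appears in billing reports and Cost Explorer.
Resources:
  AppAInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678        # placeholder AMI ID
      InstanceType: t3.micro
      Tags:
        - Key: CostCenter
          Value: application-a
  AppBInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678        # placeholder AMI ID
      InstanceType: t3.micro
      Tags:
        - Key: CostCenter
          Value: application-b
```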
I believe it's the same with Google Cloud as well - although you can have multiple billing accounts within your Google account (the hierarchy in GC is different from AWS, and an AWS account is more similar to a GC project than a GC account), a project within your account must have exactly one billing account and does not support multiple.
By this -
Now, I want to make billing separate for that particular account.
do you mean you already have two different AWS accounts? If so, you can get per-account billing details if the accounts become part of the same Organization. Check the AWS Organizations docs for more info.
With Organizations, you can view the bills of all member accounts from one account if it is part of the org. Consolidated billing also lets the accounts share volume discounts on services.

Add specific GKE labels (not Kubernetes ones) to a node pool

We have two dev teams working on different parts of our product. Both teams share the same cluster, each having their working version of the code deployed in a separate namespace so they can test without interfering with each other.
Now we want each team to have its own budget for the testing environment, so we need the usage cost for each one. From what I know about GCP, the only way to track the costs of each resource is to attach labels to them. The development cluster we have already carries a GKE label that is shared across all resources created by the cluster.
The problem is that, since both teams use the same cluster, they share the same GKE labels. So I would like to have one node pool for each team, with specific labels on each one.
I couldn't find anything that would allow me to do that, so I decided to ask here.
Creating a separate cluster for each team would be overkill.
You can use the GKE usage metering feature. It can break down resource usage by Kubernetes namespaces and labels.
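As a sketch of how the cluster could be organized for that, each team's workloads would live in their own namespace and carry a distinguishing label that usage metering can aggregate by. All names and the `team` label key are assumptions for illustration:

```yaml
# Each team gets its own namespace; usage metering can then break down
# consumption per namespace (names are placeholders).
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# Workloads additionally carry a label that metering can aggregate by.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-a-app
  namespace: team-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: team-a-app
  template:
    metadata:
      labels:
        app: team-a-app
        team: team-a   # cost-tracking label (assumed key naming)
    spec:
      containers:
      - name: app
        image: gcr.io/example/app:latest   # placeholder image
```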

Is there a distributed multi-cloud service mesh solution that's available? Something that cuts across GCP, AWS, Azure and even on-premise setup?

Is there a distributed multi-cloud service mesh solution that is available? A distributed service mesh that cuts across GCP, AWS, Azure and even on-premise setup?
Nathan Aw (Singapore)
Yes, it is possible with Istio's multicluster single-mesh model.
According to the Istio documentation:
Multiple clusters
You can configure a single mesh to include multiple clusters. Using a multicluster deployment within a single mesh affords the following capabilities beyond that of a single cluster deployment:
Fault isolation and fail over: cluster-1 goes down, fail over to cluster-2.
Location-aware routing and fail over: Send requests to the nearest service.
Various control plane models: Support different levels of availability.
Team or project isolation: Each team runs its own set of clusters.
A service mesh with multiple clusters
Multicluster deployments give you a greater degree of isolation and availability but increase complexity. If your systems have high availability requirements, you likely need clusters across multiple zones and regions. You can canary configuration changes or new binary releases in a single cluster, where the configuration changes only affect a small amount of user traffic. Additionally, if a cluster has a problem, you can temporarily route traffic to nearby clusters until you address the issue.
You can configure inter-cluster communication based on the network and the options supported by your cloud provider. For example, if two clusters reside on the same underlying network, you can enable cross-cluster communication by simply configuring firewall rules.
Single mesh
The simplest Istio deployment is a single mesh. Within a mesh, service names are unique. For example, only one service can have the name mysvc in the foo namespace. Additionally, workload instances share a common identity since service account names are unique within a namespace, just like service names.
A single mesh can span one or more clusters and one or more networks. Within a mesh, namespaces are used for tenancy.
Hope it helps.
An alternative could be using this tool with a single Kubernetes cluster that spans all of your selected cloud providers at the same time, without the hassle of managing each of them separately.

AWS VPC vs Subnet for Application Wrapping

I'm trying to get a better understanding of AWS organization patterns.
Suppose I define the term "application stack" as a set of interconnected AWS resources (e.g. a Java microservice behind an ELB + DynamoDB for persistence); then I need some way of isolating independent stacks. Each application would get its own DynamoDB table or Kinesis stream, so there is no need for cross-stack resource sharing. But the microservices do need to communicate with each other.
A priori, I could see either of two organizational methods being used:
Create a VPC for each independent stack (1 VPC per application)
Create a single "production" VPC in which each stack resides in a separate private subnet.
There could be hundreds of these independent "stacks" within the organization, so there's the potential for resource exhaustion if there is a hard limit on VPC count. But other than resource scarcity, what are the decision criteria for creating a new VPC versus using a pre-existing VPC for each stack? Are there strong positive or negative consequences to either approach?
Thank you in advance for consideration and response.
Subnets and IP addresses are a limited commodity within your VPC. The number of IP addresses cannot be increased within your VPC once you hit that limit. Also, by default, all subnets can talk to other subnets, so there may be security concerns. Any limit on the number of VPCs is a soft limit and can be increased by AWS Support.
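To make the IP-exhaustion point concrete, here is a small sketch using Python's standard `ipaddress` module showing how many usable addresses a subnet actually gives you, given that AWS reserves five addresses in every subnet:

```python
import ipaddress

# AWS reserves 5 addresses in every subnet: the network address, the VPC
# router, the DNS server, one reserved for future use, and the broadcast
# address.
AWS_RESERVED_PER_SUBNET = 5

def usable_ips(cidr: str) -> int:
    """Usable IP addresses in an AWS subnet with the given CIDR block."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

print(usable_ips("10.0.1.0/24"))  # 251, not the 256 the /24 suggests
print(usable_ips("10.0.0.0/28"))  # 11 (a /28 is the smallest subnet AWS allows)
```

With many stacks packed into one VPC, those per-subnet losses and the fixed VPC CIDR add up quickly, which is part of why a VPC per stack is attractive.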
For these reasons, separate distinct projects at the VPC level. Never mix projects within a VPC. That's just asking for trouble.
Also, if your production projects are going to include non-VPC-applicable resources, such as IAM users, DynamoDB tables, SQS queues, etc., then I also recommend isolating those projects within their own AWS account (at the production level).
This way, you're not looking at a list of DynamoDB tables that includes tables from different projects.

AWS Regional Disaster Recovery

I would like to create a mirror image of my existing production environment in another AWS region for disaster recovery.
I know I don't need to recreate resources such as IAM roles, as IAM is a "global" service (correct me if I am wrong).
Do I need to recreate key pairs in the other region?
How about launch configurations and Route 53 record sets?
Launch configurations you will have to replicate in the other region, as the AMIs, security groups, subnets, etc. will all be different. Some instance types are not available in all regions, so you will have to check that as well.
Route 53 is another global service, but you will probably have to adjust your records to take advantage of a multi-region architecture. If you have the same setup in two different regions, you will probably want to implement latency-based or geolocation routing to send traffic to the closest region. Here's some info on that.
As for key pairs, they are per-region. But I read somewhere that you could create an AMI from your instance, copy it to the new region, and launch an instance from it, and as long as you use the same key name your existing key will work - take that with a grain of salt though, as I haven't tried it nor seen it documented anywhere.
Here's the official AWS info for migrating.