In my organization, someone removed a forwarding rule that allowed the Autopilot nodes to communicate with the GKE-managed control plane.
We are using Google Cloud and Terraform to manage our infrastructure and policies.
I assume we have only one resource, Cloud Composer, which has these firewall rules.
I want to prevent deletion of these forwarding rules so that the Autopilot nodes will have no trouble communicating with the GKE-managed control plane in the future.
Can someone let me know the following:
What is the organization policy related to this issue?
What is the default value to set for this organization policy?
Related
I'm trying to create an AWS EKS cluster with Pulumi, and it seems two components exist:
@pulumi/eks, providing a Cluster component
@pulumi/aws, providing an eks.Cluster resource
@pulumi/eks seems to be higher level, but I cannot find documentation specifying the concrete difference between the two, or whether one is preferred depending on the use case.
What's the difference between those two components?
The Cluster component in @pulumi/eks is a component resource built on top of the eks.Cluster resource from @pulumi/aws and other resources to simplify provisioning of EKS clusters. Its goal is to make common scenarios achievable with a handful of lines of code, as opposed to the involved model of raw AWS resources.
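To make that concrete, here is a minimal sketch with @pulumi/eks; the cluster name and sizing values are placeholders, not anything prescribed by the package:

```typescript
import * as eks from "@pulumi/eks";

// One component resource creates the control plane, a default node group,
// the IAM roles, and the aws-auth wiring that you would otherwise assemble
// yourself from raw @pulumi/aws resources.
const cluster = new eks.Cluster("example", {
    instanceType: "t3.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
});

// Export the generated kubeconfig for kubectl or a @pulumi/kubernetes provider.
export const kubeconfig = cluster.kubeconfig;
```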
You can find some usage examples in:
AWS Crosswalk: AWS Elastic Kubernetes Service
Easily Create and Manage AWS EKS Kubernetes Clusters with Pulumi.
I suggest you start with @pulumi/eks and see if it works well for you.
I've been looking into Argo as a GitOps-style CD system. It looks really neat. That said, I am not understanding how to use Argo across multiple GCP projects. Specifically, the plan is to have environment-dependent projects (i.e. prod, stage, dev). It seems like Argo is not designed to orchestrate deployment across environment-dependent clusters, or is it?
Your question is mainly about security management. There are several possibilities, at several levels of security and from several points of view.
1. Project segregation
The simplest and most secure way is to have Argo running in each project, with no bridge between the environments. There is no security risk and no risk of deploying to the wrong project. Default project segregation (VPC and IAM roles) is sufficient.
But it means deploying and maintaining the same app on several clusters, and paying for several clusters (dev, staging, and prod CD aren't used at the same frequency).
In terms of security, you can use the Compute Engine default service account for authorization, or you can rely on Workload Identity (the preferred way), as in the sketch below.
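A hypothetical Pulumi TypeScript sketch of that Workload Identity setup; the project ID, namespace, Kubernetes service account name, and role are placeholders chosen for illustration:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

// Google service account that Argo will run as in this environment's project.
const argoSa = new gcp.serviceaccount.Account("argo-deployer", {
    accountId: "argo-deployer",
    displayName: "Argo CD deployer",
});

// Bind the Kubernetes service account to the Google service account so the
// pods can impersonate it via Workload Identity, without exported keys.
new gcp.serviceaccount.IAMMember("argo-workload-identity", {
    serviceAccountId: argoSa.name,
    role: "roles/iam.workloadIdentityUser",
    member: "serviceAccount:my-env-project.svc.id.goog[argocd/argocd-application-controller]",
});

// Give the Google service account the rights it needs inside the project.
new gcp.projects.IAMMember("argo-deploy-rights", {
    project: "my-env-project",
    role: "roles/container.developer",
    member: pulumi.interpolate`serviceAccount:${argoSa.email}`,
});
```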
2. Namespace segregation
The other way is to have only one project with a cluster deployed in it and a Kubernetes namespace per delivery project. This way, you can reuse the same cluster for all the projects in your company.
You still have to update and maintain Argo in each namespace, but cluster administration is easier because the nodes are shared.
In terms of security, you can use Workload Identity per namespace (and thus have one service account per namespace authorized in the delivery project) and keep the permissions segregated.
Here, the trade-off is private IP access. If your deployments need to reach private IPs inside the delivery project (for testing purposes, or to reach a private K8s master), you have to set up VPC peering (and you are limited to 25 peerings per project) or use a Shared VPC.
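If you go the peering route, remember that a peering only becomes active once it has been created from both sides. A hypothetical Pulumi TypeScript sketch, with placeholder network self-links:

```typescript
import * as gcp from "@pulumi/gcp";

// Placeholder self-links: the CD cluster's VPC and one delivery project's VPC.
const cdNetwork = "projects/cd-project/global/networks/cd-vpc";
const deliveryNetwork = "projects/delivery-project/global/networks/delivery-vpc";

// Each side of the peering is its own resource; both must exist for the
// peering to become ACTIVE.
new gcp.compute.NetworkPeering("cd-to-delivery", {
    network: cdNetwork,
    peerNetwork: deliveryNetwork,
});
new gcp.compute.NetworkPeering("delivery-to-cd", {
    network: deliveryNetwork,
    peerNetwork: cdNetwork,
});
```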
3. Service account segregation
The last solution isn't recommended, but it's the easiest to maintain. You have only one GKE cluster for all the environments, and only one namespace with Argo deployed in it. Through configuration, you can tell Argo to use a specific service account to access each delivery project, either with service account key files (not a recommended solution) stored in GKE secrets or in Secret Manager, or (better) by using service account impersonation.
Here too, you have one service account authorized per delivery project, and the peering issue is the same if private IP access is required in the delivery project.
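A hypothetical sketch of the impersonation grant in Pulumi TypeScript (service account emails and project IDs are placeholders): Argo's identity is allowed to mint tokens for the delivery project's service account, so no key file has to be exported.

```typescript
import * as gcp from "@pulumi/gcp";

// Placeholder emails: the SA Argo runs as, and a deployer SA in one delivery project.
const argoSaEmail = "argo@cd-project.iam.gserviceaccount.com";
const deliverySaEmail = "deployer@delivery-project.iam.gserviceaccount.com";

// Token Creator on the delivery SA is what makes impersonation possible.
new gcp.serviceaccount.IAMMember("argo-impersonates-delivery", {
    serviceAccountId: `projects/delivery-project/serviceAccounts/${deliverySaEmail}`,
    role: "roles/iam.serviceAccountTokenCreator",
    member: `serviceAccount:${argoSaEmail}`,
});
```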
I came across kops, an open-source Kubernetes project, and EKS, the AWS managed Kubernetes service. Both of these products allow installation of a Kubernetes cluster. However, I wonder why one would pick EKS over kops, or vice versa, if one has not run either of them before.
This question does not ask which one is better, but rather asks for a comparison.
The two are largely the same. At the time of writing, the following are the differences I'm aware of between the two offerings:
EKS:
Fully managed control plane from AWS - you have no control over the masters
Native AWS IAM authentication with the cluster
VPC-level networking for pods, meaning you can use things like security groups at the cluster/pod level
kops:
Support for more Kubernetes features, such as API server options
Auto-provisioned nodes use the built-in kops nodeup tool
More flexibility over Kubernetes versions; EKS only has a few versions available right now
Another significant difference is that EKS is an AWS product, so you need an AWS account, whereas kops lets you run Kubernetes not only on AWS but also on GCE and DigitalOcean.
I am trying to enable some admission controllers on EKS. How do you see the existing admission controllers and enable new ones?
I don't believe this is possible at this time. The control plane is managed by Amazon, and it's not possible to modify it.
If you need a Kubernetes cluster in AWS with this kind of option, use kops.
According to the documentation of both kops and AWS, the dedicated kops user needs the IAMFullAccess permission to operate properly.
Why is this permission needed?
Is there a way to avoid (i.e. restrict) this, given that it is a bit too intrusive to create a user with such a permission?
Edit: one could assume that this specific permission is needed to attach the respective roles to the master and node instances;
therefore, perhaps the question/challenge becomes how to:
not use IAMFullAccess
sync with the node creation/bootstrapping process and attach the above roles (perhaps create a cluster on pre-configured instances? No idea whether kops provides for that).
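For reference, the setup the kops docs describe boils down to a dedicated user with a handful of AWS managed policies attached. A hedged Pulumi TypeScript sketch; the policy list below is the classic documented set (newer kops versions ask for a couple more), and the user name is arbitrary:

```typescript
import * as aws from "@pulumi/aws";

// Dedicated kops user. IAMFullAccess is on the list because kops itself
// creates the IAM roles and instance profiles it attaches to masters and nodes.
const kopsUser = new aws.iam.User("kops", { name: "kops" });

const requiredPolicies = [
    "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
    "arn:aws:iam::aws:policy/AmazonRoute53FullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/IAMFullAccess",
    "arn:aws:iam::aws:policy/AmazonVPCFullAccess",
];

// Attach each managed policy to the user.
requiredPolicies.forEach((arn, i) => {
    new aws.iam.UserPolicyAttachment(`kops-policy-${i}`, {
        user: kopsUser.name,
        policyArn: arn,
    });
});
```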
As far as I understand the kops design, it's meant to be an end-to-end tool for provisioning k8s clusters. If you want to provision your nodes separately and deploy k8s on them, I would suggest using another tool, such as kubespray or kubeadm:
https://github.com/kubernetes-incubator/kubespray
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/