Istio - SPIFFE Trust Federation

In the following Istio documentation page - https://istio.io/latest/docs/ops/deployment/deployment-models/#trust-between-meshes - it mentions using SPIFFE federation to import a trust bundle to a mesh.
I can't seem to find any other documentation that states how to do this, or if it is even possible. Does anyone have any insight as to how to federate either two Istio clusters using SPIFFE Federation, or an Istio cluster and a different SPIFFE endpoint such as SPIRE?
Thanks!

Related

How can a Google Cloud Logging user be limited to view logs from a specific deployment?

My company currently has a legacy GCP project with multiple deployments running in the same Kubernetes namespace. Before time can be found to separate the deployments into their own projects, I would like to give certain users access to the (Cloud Logging) logs of specific deployments, e.g. team_A should only be able to see the logs of deployment_A in the default namespace.
Google has IAM conditions, but I cannot find the right resource name or type to use. There's a big list, but am I missing something? Is this not possible?
You can use RBAC Authorization for this kind of fine-tuned access control:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control
For example, you can create a custom ClusterRole with only pods/log as the resource, the core ("") apiGroup, and get as the verb. Then you create a RoleBinding in the default namespace that binds the custom ClusterRole to your users/group.
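A minimal sketch of what that could look like (the group name here is hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-log-reader
rules:
- apiGroups: [""]           # "" is the core API group
  resources: ["pods/log"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-pod-log-reader
  namespace: default          # binding is scoped to the default namespace
subjects:
- kind: Group
  name: team-a@example.com    # hypothetical Google group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-log-reader
  apiGroup: rbac.authorization.k8s.io

Note that this grants read access to the logs of all pods in the namespace; RBAC cannot scope down to a single deployment's pods unless you split them into separate namespaces.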
If you are using Google Groups, you may also want to check this documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/google-groups-rbac

How to debug Istio Authorization Policy?

It is not very straightforward to test the AuthorizationPolicy CRD as per https://istio.io/latest/docs/reference/config/security/authorization-policy/#AuthorizationPolicy-Action. I want to make sure that the AuthorizationPolicy I wrote ALLOWs the requests I want to allow and DENYs those I don't. But there are multiple hops of workloads in my cluster, so when a request fails I have no idea where to look to debug the authorization rules.
Previous Research
I found this Debugging Authorization article, but it was written for Istio 1.0. I am using Istio 1.9, and there are some differences in the Istio architecture.
Edit
I have a Kubeflow app deployment guide that uses the old authorization policy (see ClusterRbacConfig in it). I want to preserve the original role-based access control policy, but achieve it with the new AuthorizationPolicy CRD.
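For reference, the kind of policy being tested might look like this minimal sketch (the name, namespace, workload label, and service account are all hypothetical):

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-from-frontend        # hypothetical name
  namespace: backend               # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: backend                 # hypothetical workload label
  action: ALLOW
  rules:
  - from:
    - source:
        # hypothetical caller identity (SPIFFE-style principal)
        principals: ["cluster.local/ns/frontend/sa/frontend-sa"]
    to:
    - operation:
        methods: ["GET"]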

AWS EKS: Assign multiple Service Accounts to Deployment/Pod

I'm using Kubeless on AWS EKS. The Kubeless installation manifest contains some CRDs and a Deployment object, and the Deployment already has a Service Account attached. I have created another Service Account in the kube-system namespace that has some AWS IAM roles attached. I also want to attach this newly created Service Account (used for IAM roles) to the Kubeless Deployment by modifying the Kubeless manifest file.
I want to have two Service Accounts attached to the Deployment: the one that comes with Kubeless and the other for AWS IAM. Any help would be appreciated. Thanks
This is not possible. If you look at the API documentation for PodSpec v1 core, you can see that serviceAccountName expects a string, not an array or object. This is because using a ServiceAccount resource creates a 1:1 relationship between your pod and authentication against the API server.
You will either need to:
1. Split your workload into multiple pods/deployments, to which you can apply different service accounts.
2. Combine your service account capabilities into a single account and apply it exclusively to this pod.
I recommend #2 (see the sketch below).
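To illustrate option 2, the single combined service account is referenced once in the pod template; this is a rough sketch, not the actual Kubeless manifest, and all names and images are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeless-controller          # hypothetical stand-in for the Kubeless deployment
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubeless-controller
  template:
    metadata:
      labels:
        app: kubeless-controller
    spec:
      serviceAccountName: combined-sa   # a single SA carrying both sets of permissions
      containers:
      - name: controller
        image: example/kubeless-controller:latest   # hypothetical image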

Give AWS IAM Role to a pod running in GKE (Google Kubernetes Engine)

I would like to move a pod from AWS hosted K8s cluster to GKE (Google). The problem is that on a GKE instance I don't have the AWS metadata in order to assume an IAM role (obviously).
But I guess I can do something similar to kube2iam to allow the pods to assume roles as if they were running inside AWS; meaning, run a DaemonSet that simulates access to the metadata endpoint for the pods.
I already have a VPN set up between the clouds.
Has anyone done this already?
I haven’t tried that yet. But keep in mind that in GKE, IAM roles are associated with accounts (user accounts/service accounts), not with resources (pods/nodes).
Also, kube2iam looks more like a security solution than a compatibility solution. Once you have the credentials from the kube2iam node, you still have the compatibility issues.
I think a better solution would be to use API calls and deal with the authentication.
A newer and possibly better option for your use case is the GKE Workload Identity feature that Google announced in June of this year: https://www.google.com/amp/s/cloudblog.withgoogle.com/products/containers-kubernetes/introducing-workload-identity-better-authentication-for-your-gke-applications/amp/
It lets you bind a GCP IAM SA to a K8s SA in a namespace. Then, any pod created with that K8s SA in that namespace will automatically have temporary credentials for the bound IAM SA mounted, and the gcloud SDK auto-authenticates when executing gcloud commands from the pod.
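As a rough sketch (project, service account names, and namespace are hypothetical), the Kubernetes ServiceAccount is annotated with the Google service account it should impersonate, after the corresponding IAM policy binding has been created on the GCP side:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa                      # hypothetical K8s service account
  namespace: default
  annotations:
    # Binds this KSA to a GCP IAM service account. The IAM side also needs a
    # roles/iam.workloadIdentityUser binding for the member
    # "my-project.svc.id.goog[default/app-ksa]" (created with gcloud).
    iam.gke.io/gcp-service-account: app-gsa@my-project.iam.gserviceaccount.com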

How does Istio implement this spec point of SPIFFE?

In the SPIFFE specification it is stated that
Since a workload in its early stages may have no prior knowledge of
its identity or whom it should trust, it is very difficult to secure
access to the endpoint. As a result, the SPIFFE Workload Endpoint
SHOULD be exposed through a local endpoint, and implementers SHOULD
NOT expose the same endpoint instance to more than one host.
Can you please explain what is meant by this, and how Istio implements it?
Istio mesh services adopt the SPIFFE standard through Istio's security mechanisms, using the same identity document (SVID). Istio Citadel is the key component for securely provisioning the various identities and providing credential management.
In the near future it should be feasible to use a node agent within the Istio mesh to provision certificates via the Envoy secret discovery service (SDS) API; this approach is very similar to the SPIRE design.
The key concepts of the SPIRE design, as described in the official documentation, are quoted below:
SPIRE consists of two components, an agent and a server.
The server provides a central registry of SPIFFE IDs, and the
attestation policies that describe which workloads are entitled to
assume those identities. Attestation policies describe the properties
that the workload must exhibit in order to be assigned an identity,
and are typically described as a mix of process attributes (such as a
Linux UID) and infrastructure attributes (such as running in a VM that
has a particular EC2 label).
The agent runs on any machine (or, more formally, any kernel) and
exposes the local workload API to any process that needs to retrieve a
SPIFFE ID, key, or trust bundle. On *nix systems, the Workload API is
exposed locally through a Unix Domain Socket. By verifying the
attributes of a calling workload, the workload API avoids requiring
the workload to supply a secret to authenticate.
SPIRE promises to become a major contributor to workload authentication mechanisms; however, it is still at a development stage, with production deployments intended in the future.
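To illustrate the "Unix Domain Socket" part of the quoted design, a workload pod typically reaches the SPIRE agent's Workload API by mounting the agent's socket directory from the node; this is only a sketch, and the paths and names are hypothetical, depending on how the agent is deployed:

apiVersion: v1
kind: Pod
metadata:
  name: spiffe-aware-workload        # hypothetical workload
spec:
  containers:
  - name: app
    image: example/app:latest        # hypothetical image
    volumeMounts:
    - name: spire-agent-socket
      mountPath: /run/spire/sockets  # Workload API socket appears here inside the container
      readOnly: true
  volumes:
  - name: spire-agent-socket
    hostPath:
      path: /run/spire/sockets       # directory where the SPIRE agent exposes its UDS on the node
      type: Directory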