It is not very straightforward to test the AuthorizationPolicy CRD as described at https://istio.io/latest/docs/reference/config/security/authorization-policy/#AuthorizationPolicy-Action. I want to make sure that the AuthorizationPolicy I wrote ALLOWs the requests I want to allow and DENIES those I don't. But there are multiple hops of workloads in my cluster, so when a request fails, I have no idea where to look to debug the authorization rules.
Previous Research
I found the Debugging Authorization article, but it was written for Istio 1.0. I am using Istio 1.9, and there are some differences in the Istio architecture between those versions.
Edit
I have a Kubeflow app deployment guide that uses the old authorization policy (see ClusterRbacConfig in it). I want to preserve the original role-based access control policy, but achieve it with the new AuthorizationPolicy CRD.
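For reference, here is a minimal sketch of the kind of policy under test. The namespace, labels, and service account below are illustrative, not taken from the Kubeflow guide.

```yaml
# Illustrative AuthorizationPolicy: only the "frontend" service account may call
# GET /api/* on workloads labelled app=backend; all other requests to those
# workloads are rejected, because an ALLOW policy denies anything it does not match.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo                    # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: backend                   # hypothetical workload label
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/demo/sa/frontend"]   # hypothetical service account
    to:
    - operation:
        methods: ["GET"]
        paths: ["/api/*"]
```

A quick way to test such a policy is to send the same request once from the allowed service account and once from any other workload: the first should succeed, and the second should typically come back with HTTP 403 from the sidecar of the selected workload, which also tells you which hop enforced the decision.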
I’m looking at EKS architecture patterns for a multi-tenant SaaS application and found some great resources in the AWS SaaS Factory space. However, I have a couple of questions that I couldn’t find answers to in those resources.
The proposed application will broadly have the following components:
Landing and Tenant Registration App (React SPA)
Admin UI to manage tenants (React SPA)
Application UI (the product - React)
Core/Support Micro-services (Lambda)
Tenant App Micro-services (Go/Java)
RDS - Postgres per tenant
I’m currently leaning towards EKS with Fargate, with namespaces used for tenant isolation.
Questions:
Are namespaces the right way to go for tenant separation (as opposed to a separate cluster/VPC per tenant)? (See the namespace-isolation sketch below.)
Regarding tenant data (RDS) isolation, if I go with namespace-based isolation, what’s the best way to go about it?
Apologies if the question isn’t clear; happy to provide further clarification if needed.
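Regarding the first question, the usual starting point for namespace-per-tenant isolation is a default-deny ingress policy in each tenant namespace. A minimal sketch with a hypothetical tenant namespace follows; note that NetworkPolicy enforcement depends on the CNI/policy engine available in the cluster, which is worth verifying for a Fargate-only setup.

```yaml
# Hypothetical per-tenant isolation sketch: each tenant gets its own namespace and a
# NetworkPolicy that only admits ingress traffic originating from the same namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    tenant: tenant-a
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-same-namespace-only
  namespace: tenant-a
spec:
  podSelector: {}              # applies to every pod in the tenant namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}          # allow traffic only from pods in this same namespace
```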
Question 1: Could you please let me know whether AWS Global Accelerator has the capability to route requests to different ALBs based on DNS (base URL)?
Answer: Unfortunately Global Accelerator does not have the ability to do smart routing based on URL.
Question 2: Do we require a separate AWS Global Accelerator for each customer?
Answer: That is correct.
Question 3: Do we have any other solution to access the application in isolation for each customer?
Answer: Not in isolation. It is possible to use one ALB with host-header rules on your listeners, with each rule going to its own target group. This way you control traffic by creating a target group for each customer. However, it does not fulfill your isolation requirement.
Question 4: Do you suggest having different pods for each customer, with requests routed to different target groups based on path?
Answer: Yes, that is the option I mentioned above; you can use path-based or host-header routing depending on the URLs. If the URLs are completely different, use host-header rules; if they only differ by path, path-based rules would be best.
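If the cluster uses the AWS Load Balancer Controller, the host-header option above maps naturally onto a single Ingress with one host rule per customer, each backed by its own Service (and therefore its own target group). A minimal sketch with hypothetical hostnames and service names:

```yaml
# Hedged sketch: one ALB, one host rule per customer, each rule resolving to its own
# target group via a per-customer Service. Hostnames and service names are hypothetical,
# and the annotations assume the AWS Load Balancer Controller is installed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant-routing
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: customer-a.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-customer-a
            port:
              number: 80
  - host: customer-b.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-customer-b
            port:
              number: 80
```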
Hope these answers help to resolve #sheeni's queries.
My company currently has a legacy GCP project with multiple deployments running in the same Kubernetes namespace. Before time can be found to separate the deployments into their own projects, I would like to give certain users access to the (Cloud Logging) logs of specific deployments, e.g. team_A should only be able to see the logs of deployment_A in the default namespace.
Google has IAM Conditions, but I cannot find the right resource name or type to use. There's a big list, but am I missing something? Is this not possible?
You can use RBAC Authorization for this kind of fine-grained access control:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control
For example, you can create a custom ClusterRole with only pods/log as the resource, the core apiGroup, and get as the verb. Then you create a RoleBinding in the default namespace that binds the custom ClusterRole to your users or group.
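A minimal sketch of that setup follows; the group name is illustrative, and note that this governs log access through the Kubernetes API (e.g. kubectl logs) rather than the Cloud Logging console.

```yaml
# Read-only access to pod logs in the default namespace, bound to one team.
# The group name is illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-log-reader
rules:
- apiGroups: [""]              # "" is the core API group
  resources: ["pods/log"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-pod-log-reader
  namespace: default           # the RoleBinding scopes the ClusterRole to this namespace
subjects:
- kind: Group
  name: team-a@example.com     # illustrative group (e.g. a Google Group)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-log-reader
  apiGroup: rbac.authorization.k8s.io
```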
If you are using Google Groups, you may also want to check this documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/google-groups-rbac
What benefits, if any, exist for selecting the host endpoint amazonaws.com versus api.aws for connecting programmatically to an AWS service?
If possible, it seems best to use amazonaws.com when you can leverage the FIPS-enabled endpoint (service-fips.region.amazonaws.com), but that seems to be the only difference I have found.
For example, the service endpoints for Lambda in the us-east-2 region include the following:
lambda.us-east-2.amazonaws.com
lambda-fips.us-east-2.amazonaws.com
lambda.us-east-2.api.aws
To ask this in another way... would you ever use lambda.us-east-2.api.aws, and if so, why?
In the following Istio documentation page - https://istio.io/latest/docs/ops/deployment/deployment-models/#trust-between-meshes - it mentions using SPIFFE federation to import a trust bundle to a mesh.
I can't seem to find any other documentation that states how to do this, or if it is even possible. Does anyone have any insight as to how to federate either two Istio clusters using SPIFFE Federation, or an Istio cluster and a different SPIFFE endpoint such as SPIRE?
Thanks!
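For context, one mesh-level setting that is involved when trusting another trust domain is sketched below as a hypothetical IstioOperator fragment. It only tells the local mesh to accept identities issued under the listed trust domains for authorization purposes; it does not by itself exchange or import the remote trust bundle, which is the part that does not seem to be documented end to end.

```yaml
# Hypothetical IstioOperator fragment. trustDomainAliases makes the local mesh treat
# identities from the listed trust domains as equivalent to its own; distributing the
# remote CA / trust bundle itself is a separate concern not covered here.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    trustDomain: cluster-a.example.com
    trustDomainAliases:
    - cluster-b.example.com
```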
In the SPIFFE specification it is stated that
Since a workload in its early stages may have no prior knowledge of
its identity or whom it should trust, it is very difficult to secure
access to the endpoint. As a result, the SPIFFE Workload Endpoint
SHOULD be exposed through a local endpoint, and implementers SHOULD
NOT expose the same endpoint instance to more than one host.
Can you please explain what is meant by this and how Istio implements it?
Istio mesh services adopt the SPIFFE standard through Istio's security mechanisms, using the same identity document format, the SVID. Istio Citadel is the key component for securely provisioning the various identities and providing credential management.
A node agent can also be used within an Istio mesh to deliver certificates to workloads via the Envoy secret discovery service (SDS) API, and this approach is very similar to the SPIRE design.
The key concepts of the SPIRE design, as described in the official documentation, are quoted below:
SPIRE consists of two components, an agent and a server.
The server provides a central registry of SPIFFE IDs, and the
attestation policies that describe which workloads are entitled to
assume those identities. Attestation policies describe the properties
that the workload must exhibit in order to be assigned an identity,
and are typically described as a mix of process attributes (such as a
Linux UID) and infrastructure attributes (such as running in a VM that
has a particular EC2 label).
The agent runs on any machine (or, more formally, any kernel) and
exposes the local workload API to any process that needs to retrieve a
SPIFFE ID, key, or trust bundle. On *nix systems, the Workload API is
exposed locally through a Unix Domain Socket. By verifying the
attributes of a calling workload, the workload API avoids requiring
the workload to supply a secret to authenticate.
SPIRE promises to become a major building block for workload authentication mechanisms; however, it is still at a development stage, with production deployments expected in the future.