I am very new to Istio authorization policies and need some help setting them up.
Here is the scenario:
I have a namespace called namespace1 which has 4 microservices running in it. For context, let's call them A, B, C, and D. All 4 microservices have istio-sidecar injection enabled.
I have a namespace called namespace2 which has 2 microservices running in it. For context, let's call them E and F. Both microservices have istio-sidecar injection enabled.
Now I have deployed a Memcached service (following the "Memcached using mcrouter" guide) to a namespace called memcached. All of the Memcached pods also have istio-sidecar injection enabled.
Now I have a scenario where I have to allow calls to the memcached services only from microservices B and C in namespace1, and deny calls from A and D in namespace1 along with calls coming from any other namespace. Is it possible to achieve this using Istio authorization policies?
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: authorization-policy-deny-all
  namespace: memcached
spec:
  selector:
    matchLabels:
      app: activator
  action: DENY
  rules:
  - from:
    - source:
        notNamespaces: ["namespace1"]
This is the best I could come up with: allowing only calls from namespace1 and denying calls from all other namespaces. I could not figure out how to deny calls from the A and D microservices in namespace1.
Here's one setup that might work.
---
spec:
  selector:
    matchLabels:
      app: activator
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/namespace1/sa/memcached--user
1. Have B and C use a different service account than the rest of the services; let's say its name is memcached--user. (Depending on the roles needed by B and C, you might even want a separate service account for each service.)
2. Define an AuthorizationPolicy that allows access from the principal corresponding to the service account used by B and C, as in the spec above.
3. Make sure mTLS is enabled. As stated in the docs, the principals field requires mTLS to be enabled; see the sketch after this list.
4. Make sure the selector is configured properly.
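For point 3, here is a minimal sketch of enforcing mTLS for the memcached namespace with a PeerAuthentication resource (untested; adjust the namespace and mode to your setup):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: memcached
spec:
  mtls:
    mode: STRICT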
I hope this solves your issue.
You can also use principals for allowing access, as in the example from the Istio documentation on AuthorizationPolicy. So, analogously, something like this should be possible:
- from:
  - source:
      principals: ["cluster.local/ns/namespace1/sa/B", "cluster.local/ns/namespace1/sa/C"]
According to the docs, the evaluation order is:
1. If there are any DENY policies that match the request, deny the request.
2. If there are no ALLOW policies for the workload, allow the request.
3. If any of the ALLOW policies match the request, allow the request.
4. Deny the request.
So if you have an ALLOW policy for memcached that allows access from B and C (rule 3), then requests to memcached from other sources will be denied: rule 2 no longer applies once the workload has an ALLOW policy, so those requests fall through to rule 4.
(untested)
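Putting the pieces together, a complete ALLOW policy could look like the sketch below; the app: memcached label is an assumption, so use whatever labels actually select your memcached pods:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: memcached-allow-b-c
  namespace: memcached
spec:
  selector:
    matchLabels:
      app: memcached  # assumption: match the labels on your memcached pods
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/namespace1/sa/B", "cluster.local/ns/namespace1/sa/C"]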
I have a service account which I am trying to use across multiple pods installed in the same namespace.
One of the pods is created by Airflow KubernetesPodOperator.
The other is created via Helm through Kubernetes deployment.
In the Airflow deployment, I see the IAM role being assigned and DynamoDB tables being created, listed, etc. However, in the second Helm chart deployment (or in a test pod, created as shown here), I keep getting an AccessDenied error for CreateTable in DynamoDB.
I can see the AWS role ARN being assigned to the service account, the service account being applied to the pod, and the corresponding token file being created, but I still see the AccessDenied exception:
arn:aws:sts::1234567890:assumed-role/MyCustomRole/aws-sdk-java-1636152310195 is not authorized to perform: dynamodb:CreateTable on resource
ServiceAccount
Name:                mypipeline-service-account
Namespace:           abc-qa-daemons
Labels:              app.kubernetes.io/managed-by=Helm
                     chart=abc-pipeline-main.651
                     heritage=Helm
                     release=ab-qa-pipeline
                     tier=mypipeline
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::1234567890:role/MyCustomRole
                     meta.helm.sh/release-name: ab-qa-pipeline
                     meta.helm.sh/release-namespace: abc-qa-daemons
Image pull secrets:  <none>
Mountable secrets:   mypipeline-service-account-token-6gm5b
Tokens:              mypipeline-service-account-token-6gm5b
P.S.: The client code is the same for both the KubernetesPodOperator pod and the Helm chart deployment, i.e. the same Docker image. Other attributes like nodeSelector, tolerations, and volume mounts are also the same.
The describe pod output for both is similar, with just some name and label changes.
The KubernetesPodOperator pod has QoS class Burstable while the Helm chart one is BestEffort.
Why do I get AccessDenied in the Helm deployment but not in the KubernetesPodOperator one? How can I debug this issue?
Whenever we get an AccessDenied exception, there are two possible reasons:
- You have assigned the wrong role
- The assigned role doesn't have the necessary permissions
In my case, the latter was the issue. The permissions assigned to a particular role can be quite granular.
For example, in my case, the DynamoDB tables the role can create/describe are limited to those starting with a specific prefix, not all DynamoDB tables.
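As an illustration, a prefix-scoped statement could look like the following CloudFormation-style YAML (a hypothetical sketch; the myprefix- prefix and the action list are invented for the example):

- Effect: Allow
  Action:
  - dynamodb:CreateTable
  - dynamodb:DescribeTable
  Resource:
  - arn:aws:dynamodb:us-east-1:1234567890:table/myprefix-*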
So it is always advisable to check the IAM role permissions whenever you get this error.
As stated in the question, be sure to check the service account using the awscli image.
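For example, a throwaway pod along these lines (a sketch; the pod name is invented, the rest mirrors the question) shows which identity the service account actually resolves to:

apiVersion: v1
kind: Pod
metadata:
  name: awscli-test
  namespace: abc-qa-daemons
spec:
  serviceAccountName: mypipeline-service-account
  restartPolicy: Never
  containers:
  - name: awscli
    image: amazon/aws-cli
    # the image's entrypoint is `aws`, so these args run `aws sts get-caller-identity`
    args: ["sts", "get-caller-identity"]

kubectl logs awscli-test -n abc-qa-daemons should then print the assumed-role ARN the pod is really using.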
Keep in mind that there is a credentials provider chain in the AWS SDKs which determines the credentials used by the application. In most cases the DefaultAWSCredentialsProviderChain is used, and its order is given below. Ensure that the SDK is picking up the intended provider (in our case, WebIdentityTokenCredentialsProvider):
super(new EnvironmentVariableCredentialsProvider(),
new SystemPropertiesCredentialsProvider(),
new ProfileCredentialsProvider(),
WebIdentityTokenCredentialsProvider.create(),
new EC2ContainerCredentialsProviderWrapper());
Additionally, you might also want to set the AWS SDK classes to DEBUG level in your logger to see which credentials provider is being picked up and why.
To check whether the service account is applied to a pod, describe the pod and check whether the AWS environment variables are set on it: AWS_REGION, AWS_DEFAULT_REGION, AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE.
If they are not, describe the service account and check whether it has the eks.amazonaws.com/role-arn annotation.
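For instance, using the names from the question (substitute your own pod name):

kubectl describe pod <pod-name> -n abc-qa-daemons | grep AWS_
kubectl describe serviceaccount mypipeline-service-account -n abc-qa-daemons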
I am using GCP Cloud Endpoints with a Cloud Run backend service. My problem is that the backend is configured with a default timeout of 15 seconds. That's why I would like to set the OpenAPI "x-google-backend" deadline parameter to increase the timeout for the endpoint (https://cloud.google.com/endpoints/docs/openapi/openapi-extensions).
Currently I am using the following grpc service configuration for my endpoint.
https://cloud.google.com/endpoints/docs/grpc/grpc-service-config
The OpenAPI extension is not supported for this kind of configuration. Now I am looking for a way to combine the gRPC configuration with OpenAPI. I have read that it is possible to publish several configuration files for one endpoint.
OK, this kind of configuration works well:
type: google.api.Service
config_version: 3
name: ${cloud_run_hostname_endpoint}
title: ${endpoint_title}
apis:
- name: my_endpoint_name
usage:
  rules:
  # No APIs can be called without an API key
  - selector: "*"
    allow_unregistered_calls: false
backend:
  rules:
  - selector: "*"
    address: grpcs://${cloud_run_hostname_backend}
    deadline: 300.0
The deadline parameter is accepted.
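For reference, the gRPC API descriptor and this service configuration are deployed together in one call; with placeholder file names it looks like this:

gcloud endpoints services deploy api_descriptor.pb api_config.yaml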
I am using CloudFormation to provision Lambda and RDS on AWS, but I don't know how to add a database proxy to the Lambda; the option is visible in the Lambda console.
Does CloudFormation support adding this? I can't see it in the Lambda or DB proxy templates.
The exact configuration I use in CloudFormation template is:
MyLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
    - Version: '2012-10-17'
      Statement:
      - Effect: Allow
        Action:
        - rds-db:connect
        Resource:
        - <rds_proxy_arn>/*
where <rds_proxy_arn> is the ARN of the proxy, but with the service changed from rds to rds-db and the resource type changed from db-proxy to dbuser. For example, if your proxy's ARN is arn:aws:rds:us-east-1:123456789012:db-proxy:prx-0123456789abcdef01, the whole line should be arn:aws:rds-db:us-east-1:123456789012:dbuser:prx-0123456789abcdef01/*.
After deploying, we can see a new entry added under Database Proxies in the console.
As per the CloudFormation/Lambda documentation, there is no option to specify the DB proxy for a Lambda.
I don't see an option to add an RDS proxy while creating a Lambda function in the low-level HTTP API either. Not sure why.
As per the following GitHub issue, it seems this is not required to connect Lambda to an RDS proxy: https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/750
You merely need to provide the new connection details to the Lambda (e.g. using environment variables).
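For example, the proxy endpoint could be passed to the function through environment variables in the same template (a sketch; DB_HOST and DB_PORT are hypothetical variable names and <rds_proxy_endpoint> is a placeholder for your proxy's endpoint):

MyLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    Environment:
      Variables:
        DB_HOST: <rds_proxy_endpoint>  # the proxy's endpoint, not the DB instance's
        DB_PORT: "5432"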
After talking with AWS support: the screen in the AWS console for adding a proxy to a Lambda only grants the IAM permissions below to the function. That means it is optional.
Allow: rds-db:connect
Allow: rds-db:*
I've created a Cloud Armor security policy but it does not have a default rule. I am confused because the documentation contradicts this.
https://cloud.google.com/compute/docs/reference/rest/beta/securityPolicies
A list of rules that belong to this policy. There must always be a default rule (rule with priority 2147483647 and match "*"). If no rules are provided when creating a security policy, a default rule with action "allow" will be added.
$ gcloud beta compute security-policies describe healthcheck
---
creationTimestamp: ''
description: ''
fingerprint: ...
id: '.....'
kind: compute#securityPolicy
labelFingerprint: .....
name: healthcheck
rules:
- action: deny(404)
  description: Block requests to /health
  kind: compute#securityPolicyRule
  match:
    expr:
      expression: request.path.matches('/health')
  preview: false
  priority: 1000
selfLink: https://www.googleapis.com/compute/....
Based on my tests, the default behaviour seems to be Allow. Is this default rule hidden, or am I missing something?
The policy was created with Terraform, but I don't think that matters.
The answer to your question lies in the fact that there are different ways to create a Cloud Armor policy. For example, if you create a policy through the Cloud Console, you are required to choose the default rule type prior to creating the policy.
In your case, the policy was created using Terraform. Terraform will create a policy in effectively the same way as if you were to use gcloud commands from the Cloud Shell. Using something like Terraform or using gcloud commands will permit a Cloud Armor policy to be created without a default rule specified.
If a Cloud Armor policy is created without a rule specified (default or otherwise), then an “Allow” rule will be automatically added. This is the behavior documented in the REST resource link you shared. One thing to note: it may take a few minutes before the default “Allow” rule is visible. In my testing it took at least 2 minutes to become visible in the Console and through:
gcloud compute security-policies describe [POLICY_NAME]
Typically during Cloud Armor policy creation, a default rule is specified with the desired behavior (step #2). The example you shared appears not to have fully updated in the console, and thus does not show the default “Allow” rule. However, based on the description of your setup, a default “Allow” rule would have been applied during the policy creation by Terraform.
You can always choose to change the behavior of the default rule from “Allow” to “Deny-404” (or “Deny-502”), using the command:
gcloud compute security-policies rules update 2147483647 --security-policy [POLICY_NAME] --action "deny-404"
(2147483647 is the default rule priority, max int32)
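Once it appears, the automatically added default rule typically shows up in the describe output like this (a sketch based on the documented default; your output may differ slightly):

- action: allow
  description: Default rule, higher priority overrides it
  match:
    config:
      srcIpRanges:
      - '*'
    versionedExpr: SRC_IPS_V1
  preview: false
  priority: 2147483647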
How do you deploy a web page architecture from a GCP Deployment Manager YAML that includes static files in a storage bucket and a load balancer with a backend bucket connected to that storage?
We need the load balancer so that we can connect it to GCP Cloud CDN.
I think you need to create the resources based on Google's APIs in the Deployment Manager YAML script.
As I understand it, you need to connect a load balancer with a backend bucket, and connect the latter to a storage bucket. I will assume the bucket creation is not necessary.
So the resources you need are compute.beta.backendBucket and compute.v1.urlMap. The YAML file will look kind of like this:
resources:
- type: compute.beta.backendBucket
  name: backendbucket-test
  properties:
    bucketName: already-created-bucket
- type: compute.v1.urlMap
  name: urlmap-test
  properties:
    defaultService: $(ref.backendbucket-test.selfLink)
    hostRules:
    - hosts: ["*"]
      pathMatcher: "allpaths"
    pathMatchers:
    - name: "allpaths"
      defaultService: $(ref.backendbucket-test.selfLink)
      pathRules:
      - service: $(ref.backendbucket-test.selfLink)
        paths: ["/*"]
Note that the names are completely up to you. Also note the ref (reference) entries used to link the backendBucket created in the first step to the urlMap of the second one.
It is worth mentioning that you will probably need more resources for a complete solution (specifically the frontend part of the load balancer).
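For instance, an HTTP frontend could be added with two more resources like these (untested sketch; the names are again up to you):

- type: compute.v1.targetHttpProxy
  name: httpproxy-test
  properties:
    urlMap: $(ref.urlmap-test.selfLink)
- type: compute.v1.globalForwardingRule
  name: forwardingrule-test
  properties:
    target: $(ref.httpproxy-test.selfLink)
    IPProtocol: TCP
    portRange: "80"

Since you mention Cloud CDN, note that the backendBucket resource also accepts an enableCdn: true property.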
Hope it can help in some way,
Cheers!
You can follow this guide from Google on how to create a load balancer to serve static content from a bucket. Note that the bucket and its content must already exist; the content will not be created by DM.
Follow the gcloud steps, not the console steps. For each step, find the corresponding API call and create a separate resource in your Deployment Manager config.