Istio authorization policy and JWT check

Hello everyone,
I am playing with Istio and security based on a JWT token.
I am able to deny access to services based on simple token elements (i.e. when the field is a key with a simple value). I would like to know whether rules can be created when the field value is an array. After many tries I have not been able to figure out if it is possible.
For example, consider a JWT with the following payload:
{
  "sub": "user",
  "aud": "mgu",
  "scope": [
    "openid",
    "address",
    "email",
    "phone",
    "profile"
  ],
  "iss": "http://host.docker.internal:8001",
  "exp": 1670583577,
  "iat": 1670579977,
  "authorities": [
    "AUTH1",
    "AUTH2"
  ],
  "jti": "mgu-proxy"
}
I would like to allow access when one of the authorities is AUTH1.
I have tried keys of the form
key: request.auth.claims[authorities][*]
key: request.auth.claims[authorities]
but neither of them works.
Does someone know how to do it, if it is possible at all?
Thanks for any help :)
PS: here is my AuthorizationPolicy.yaml file
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
spec:
  action: ALLOW
  selector:
    matchLabels:
      istio: ingressgateway
  rules:
  - when:
    - key: request.auth.audiences
      values: ["mgu"]
    - key: request.auth.claims[jti]
      values: ["mgu-proxy"]
and I have tried with (and many others...)
- key: request.auth.claims[authorities][*]
  values: ["AUTH1"]
and
- key: request.auth.claims[authorities]
  values: ["AUTH1"]
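For what it's worth, the Istio authorization docs describe list-typed claims being matched with the plain request.auth.claims[...] key (the condition matches when the list contains the value), and the [*] wildcard does not appear in the documented condition syntax. A sketch of the rule written that way, not verified against the setup above:

  rules:
  - when:
    - key: request.auth.claims[jti]
      values: ["mgu-proxy"]
    - key: request.auth.claims[authorities]
      values: ["AUTH1"]   # per the docs, should match when the list claim contains AUTH1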

Related

Disable Istio default retry on error code 503

I am not sure whether we are talking about the same issue, so I will try to explain the scenario I am testing. I have two services, appservice-A and appservice-B; both are in the same namespace "mynamespace" and both have separate VirtualServices, called VS-A and VS-B.
In my application there is a call from appservice-A to appservice-B, and I want to disable the retry when appservice-B throws a 503 error, so that appservice-A does not retry. What I did was set the retry attempts to 0 in VS-B, expecting that appservice-A would not retry if appservice-B throws a 503 error. This is not working for me.
I tested again with your sample by setting attempts to 0 in VS-B, but the retries do not change; the proxy config still shows 2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  name: appservice-B
  namespace: mynamespace
spec:
  gateways:
  - istio-system/ingress-gateway
  hosts:
  - appservice-B.mynamespace.svc.cluster.local
  - appservice-B.mycompany.com
  http:
  - match:
    - uri:
        prefix: /
    retries:
      attempts: 0
    route:
    - destination:
        host: appservice-B.mynamespace.svc.cluster.local
        port:
          number: 8080
        subset: v1
This is the proxy-config routing rule output generated from appservice-A:
istioctl proxy-config route appservice-A-6dbb74bc88-dffb8 -n mynamespace -o json
"routes": [
  {
    "name": "default",
    "match": {
      "prefix": "/"
    },
    "route": {
      "cluster": "outbound|8080||appservice-B.mynamespace.svc.cluster.local",
      "timeout": "0s",
      "retryPolicy": {
        "retryOn": "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes",
        "numRetries": 2,
        "retryHostPredicate": [
          {
            "name": "envoy.retry_host_predicates.previous_hosts"
          }
        ],
        "hostSelectionRetryMaxAttempts": "5",
        "retriableStatusCodes": [
          503
        ]
      },
      "maxStreamDuration": {
        "maxStreamDuration": "0s",
        "grpcTimeoutHeaderMax": "0s"
      }
    },
    "decorator": {
      "operation": "appservice-B.mynamespace.svc.cluster.local:8080/*"
    }
  }
],
"includeRequestAttemptCount": true
So I suspect whether I am even trying the right thing for my scenario. If this is the right approach, is there any workaround, such as changing the retriable status codes or response headers, so that Istio won't retry appservice-B from appservice-A when appservice-B returns a 503 error code?
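One thing worth checking: the VirtualService above is bound only to istio-system/ingress-gateway, so the in-mesh route used by appservice-A's sidecar may never pick up the attempts: 0 policy and keeps Istio's default retries. A minimal sketch that also binds the route to the reserved mesh gateway (an assumption, untested against this setup):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: appservice-B
  namespace: mynamespace
spec:
  gateways:
  - istio-system/ingress-gateway
  - mesh                      # also apply this route (and its retry policy) to sidecar-to-sidecar traffic
  hosts:
  - appservice-B.mynamespace.svc.cluster.local
  - appservice-B.mycompany.com
  http:
  - match:
    - uri:
        prefix: /
    retries:
      attempts: 0             # disable retries for in-mesh calls as well
    route:
    - destination:
        host: appservice-B.mynamespace.svc.cluster.local
        port:
          number: 8080
        subset: v1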

Redis fault injection using Istio and EnvoyFilter

I am trying to inject a 2s delay into a Redis instance (which is not in the cluster) using Istio.
So, first I create an ExternalName Kubernetes Service in order to reach the external Redis, so that Istio knows about this service. This works. However, when I create the EnvoyFilter to add the fault, I don't see a redis_proxy filter in istioctl pc listeners <pod-name> -o json for a pod in the same namespace (and the delay is not introduced either).
apiVersion: v1
kind: Namespace
metadata:
  name: chaos
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: Service
metadata:
  name: redis-proxy
  namespace: chaos
spec:
  type: ExternalName
  externalName: redis-external.bla
  ports:
  - name: tcp
    protocol: TCP
    port: 6379
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: redis-proxy-filter
  namespace: chaos
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        portNumber: 6379
        filterChain:
          filter:
            name: "envoy.filters.network.redis_proxy"
    patch:
      operation: MERGE
      value:
        name: "envoy.filters.network.redis_proxy"
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          faults:
          - fault_type: DELAY
            fault_enabled:
              default_value:
                numerator: 100
                denominator: HUNDRED
            delay: 2s
Can someone give an idea? Thanks.
I tried your YAML on my local Istio 1.8.2. Here are a few changes that might help you (a combined sketch follows the listener output below).
Set PILOT_ENABLE_REDIS_FILTER in the istiod env vars; otherwise the filter name will be "name": "envoy.filters.network.tcp_proxy".
Add a match context:
match:
  context: SIDECAR_OUTBOUND
Use the Redis protocol:
ports:
- name: redis-proxy
  port: 6379
  appProtocol: redis
I can see the following
% istioctl pc listener nginx.chaos --port 6379 -o json
[
  {
    "name": "0.0.0.0_6379",
    "address": {
      "socketAddress": {
        "address": "0.0.0.0",
        "portValue": 6379
      }
    },
    "filterChains": [
      {
        "filters": [
          {
            "name": "envoy.filters.network.redis_proxy",
            "typedConfig": {
              "@type": "type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy",
              "statPrefix": "outbound|6379||redis-proxy.chaos.svc.cluster.local",
              "settings": {
                "opTimeout": "5s"
              },
              "latencyInMicros": true,
              "prefixRoutes": {
                "catchAllRoute": {
                  "cluster": "outbound|6379||redis-proxy.chaos.svc.cluster.local"
                }
              },
              "faults": [
                {
                  "faultEnabled": {
                    "defaultValue": {
                      "numerator": 100
                    }
                  },
                  "delay": "2s"
                }
              ]
            }
          }
        ]
      }
    ],
    "deprecatedV1": {
      "bindToPort": false
    },
    "trafficDirection": "OUTBOUND"
  }
]
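Putting those three changes together, a minimal sketch of the revised Service and EnvoyFilter could look like the following. It keeps the redis-external.bla ExternalName and chaos namespace from the question, and assumes a Kubernetes/Istio version that honors appProtocol and an istiod with PILOT_ENABLE_REDIS_FILTER=true (untested):

apiVersion: v1
kind: Service
metadata:
  name: redis-proxy
  namespace: chaos
spec:
  type: ExternalName
  externalName: redis-external.bla
  ports:
  - name: redis-proxy
    port: 6379
    appProtocol: redis   # lets Istio generate a redis_proxy filter for this port
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: redis-proxy-filter
  namespace: chaos
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_OUTBOUND          # only patch outbound sidecar listeners
      listener:
        portNumber: 6379
        filterChain:
          filter:
            name: "envoy.filters.network.redis_proxy"
    patch:
      operation: MERGE
      value:
        name: "envoy.filters.network.redis_proxy"
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          faults:
          - fault_type: DELAY
            fault_enabled:
              default_value:
                numerator: 100
                denominator: HUNDRED
            delay: 2s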

Not authorized to perform sts:AssumeRoleWithWebIdentity - 403

I have been trying to run an external-dns pod using the guide provided by the k8s-sig group. I have followed every step of the guide and am getting the error below.
time="2021-02-27T13:27:20Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 87a3ca86-ceb0-47be-8f90-25d0c2de9f48"
I had created the AWS IAM policy using Terraform, and it was created successfully. Everything else has been spun up via Terraform, except the IAM role for the service account, for which I used eksctl.
But then I got hold of this article, which says that creating the AWS IAM policy using the awscli would eliminate this error. So I deleted the policy created with Terraform and recreated it with the awscli. Yet it throws the same error.
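For reference, recreating the policy with the awscli amounts to something like the following (the policy name and file path are placeholders):

aws iam create-policy \
  --policy-name AllowExternalDNSUpdates \
  --policy-document file://external-dns-policy.json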
Below is my external-dns YAML file.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  # If you're using Amazon EKS with IAM Roles for Service Accounts, specify the following annotation.
  # Otherwise, you may safely omit it.
  annotations:
    # Substitute your account ID and IAM service role name below.
    eks.amazonaws.com/role-arn: arn:aws:iam::268xxxxxxx:role/eksctl-ats-Eks1-addon-iamserviceaccoun-Role1-WMLL93xxxx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.7.6
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=xyz.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
        - --provider=aws
        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id=Z0471542U7WSPZxxxx
      securityContext:
        fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files
I am scratching my head, as there is no proper solution to this error anywhere on the net. Hoping to find a solution to this issue in this forum.
The end result should show something like the line below and fill up records in the hosted zone.
time="2020-05-05T02:57:31Z" level=info msg="All records are already up to date"
I also struggled with this error.
The problem was in the definition of the trust relationship.
You can see in some official AWS tutorials (like this one) the following setup:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:<my-namespace>:<my-service-account>"
        }
      }
    }
  ]
}
Option 1 for failure
My problem was that I passed a wrong value for my-service-account at the end of ${OIDC_PROVIDER}:sub in the Condition part.
Option 2 for failure
After the previous fix I still faced the same error. It was solved by following this AWS tutorial, which shows the output of using eksctl with the command below:
eksctl create iamserviceaccount \
--name my-serviceaccount \
--namespace <your-ns> \
--cluster <your-cluster-name> \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--approve
When you look at the output in the trust relationship tab in the AWS web console, you can see that an additional condition was added with the postfix :aud and the value sts.amazonaws.com.
So this needs to be added after the "${OIDC_PROVIDER}:sub" condition.
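A sketch of what the StringEquals block ends up containing with both conditions (placeholders as in the snippet above):

"Condition": {
  "StringEquals": {
    "${OIDC_PROVIDER}:sub": "system:serviceaccount:<my-namespace>:<my-service-account>",
    "${OIDC_PROVIDER}:aud": "sts.amazonaws.com"
  }
}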
I was able to get help from the Kubernetes Slack (shout out to #Rob Del), and this is what we came up with. There's nothing wrong with the k8s RBAC from the article; the issue is the way the IAM role is written. I am using Terraform v0.12.24, but I believe something similar to the following .tf should work for Terraform v0.14:
data "aws_caller_identity" "current" {}

resource "aws_iam_role" "external_dns_role" {
  name = "external-dns"
  assume_role_policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Federated": format(
            "arn:aws:iam::${data.aws_caller_identity.current.account_id}:%s",
            replace(
              "${aws_eks_cluster.<YOUR_CLUSTER_NAME>.identity[0].oidc[0].issuer}",
              "https://",
              "oidc-provider/"
            )
          )
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
          "StringEquals": {
            format(
              "%s:sub",
              trimprefix(
                "${aws_eks_cluster.<YOUR_CLUSTER_NAME>.identity[0].oidc[0].issuer}",
                "https://"
              )
            ) : "system:serviceaccount:default:external-dns"
          }
        }
      }
    ]
  })
}
The above .tf assumes you created your EKS cluster using Terraform and that you use the RBAC manifest from the external-dns tutorial.
I have a few possibilities here.
Before anything else, does your cluster have an OIDC provider associated with it? IRSA won't work without it.
You can check that in the AWS console, or via the CLI with:
aws eks describe-cluster --name {name} --query "cluster.identity.oidc.issuer"
First
Delete the iamserviceaccount, recreate it, remove the ServiceAccount definition from your ExternalDNS manifest (the entire first section), and re-apply the manifest.
eksctl delete iamserviceaccount --name {name} --namespace {namespace} --cluster {cluster}
eksctl create iamserviceaccount --name {name} --namespace {namespace} --cluster {cluster} \
  --attach-policy-arn {policy-arn} --approve --override-existing-serviceaccounts
kubectl apply -n {namespace} -f {your-externaldns-manifest.yaml}
It may be that there is some conflict going on, as you have overwritten what you created with eksctl create iamserviceaccount by also specifying a ServiceAccount in your ExternalDNS manifest.
Second
Upgrade your cluster to v1.19 (if it's not there already):
eksctl upgrade cluster --name {name} will show you what will be done;
eksctl upgrade cluster --name {name} --approve will do it
Third
Some documentation suggests that in addition to setting securityContext.fsGroup: 65534, you also need to set securityContext.runAsUser: 0.
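As a sketch, assuming the pod-level securityContext from the manifest above, that would look like:

      securityContext:
        fsGroup: 65534   # lets ExternalDNS read the Kubernetes and AWS token files
        runAsUser: 0     # some setups reportedly also require running as root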
I've been struggling with a similar issue after following the setup suggested here.
I ended up with the exceptions below in the deploy logs.
time="2021-05-10T06:40:17Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 3fda6c69-2a0a-4bc9-b478-521b5131af9b"
time="2021-05-10T06:41:20Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 7d3e07a2-c514-44fa-8e79-d49314d9adb6"
In my case, it was an issue with the wrong service account name being mapped to the newly created role.
Here is a step-by-step approach to get this done without many hiccups.
Create the IAM Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Create the IAM role and the service account for your EKS cluster.
eksctl create iamserviceaccount \
    --name external-dns-sa-eks \
    --namespace default \
    --cluster aecops-grpc-test \
    --attach-policy-arn arn:aws:iam::xxxxxxxx:policy/external-dns-policy-eks \
    --approve \
    --override-existing-serviceaccounts
Create a new hosted zone.
aws route53 create-hosted-zone --name "hosted.domain.com." --caller-reference "grpc-endpoint-external-dns-test-$(date +%s)"
Deploy ExternalDNS, after creating the ClusterRole and ClusterRoleBinding for the previously created service account.
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns-sa-eks
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
      # If you're using kiam or kube2iam, specify the following annotation.
      # Otherwise, you may safely omit it.
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::***********:role/eksctl-eks-cluster-name-addon-iamserviceacco-Role1-156KP94SN7D7
    spec:
      serviceAccountName: external-dns-sa-eks
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.7.6
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=hosted.domain.com. # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
        - --provider=aws
        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id=my-hostedzone-identifier
      securityContext:
        fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files
Update the Ingress resource with the domain name and reapply the manifest.
For ingress objects, ExternalDNS will create a DNS record based on the host specified for the ingress object.
- host: myapp.hosted.domain.com
Validate the new records created.
BASH-3.2$ aws route53 list-resource-record-sets --output json \
    --hosted-zone-id "/hostedzone/Z065*********" --query "ResourceRecordSets[?Name == 'hosted.domain.com..']|[?Type == 'A']"
[
  {
    "Name": "myapp.hosted.domain.com..",
    "Type": "A",
    "AliasTarget": {
      "HostedZoneId": "ZCT6F*******",
      "DNSName": "****************.elb.ap-southeast-2.amazonaws.com.",
      "EvaluateTargetHealth": true
    }
  }
]
In our case this issue occurred when using the Terraform module to create the EKS cluster and eksctl to create the iamserviceaccount for the aws-load-balancer-controller. It all works fine the first go-round. But if you do a terraform destroy, you need to do some cleanup, like deleting the CloudFormation stack created by eksctl. Somehow things got crossed, and CloudTrail was passing along a resource role that was no longer valid. So check the annotation of the service account to ensure it's valid, and update it if necessary. Then, in my case, I deleted and redeployed the aws-load-balancer-controller.
%> kubectl describe serviceaccount aws-load-balancer-controller -n kube-system
Name: aws-load-balancer-controller
Namespace: kube-system
Labels: app.kubernetes.io/managed-by=eksctl
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::212222224610:role/eksctl-ch-test-addon-iamserviceaccou-Role1-JQL4R3JM7I1A
Image pull secrets: <none>
Mountable secrets: aws-load-balancer-controller-token-b8hw7
Tokens: aws-load-balancer-controller-token-b8hw7
Events: <none>
%>
%> kubectl annotate --overwrite serviceaccount aws-load-balancer-controller eks.amazonaws.com/role-arn='arn:aws:iam::212222224610:role/eksctl-ch-test-addon-iamserviceaccou-Role1-17A92GGXZRY6O' -n kube-system
In my case, I was able to attach the OIDC role with the Route 53 permissions policy, and that resolved the error.
https://medium.com/swlh/amazon-eks-setup-external-dns-with-oidc-provider-and-kube2iam-f2487c77b2a1
I then used that with the external-dns service account instead of the cluster role.
annotations:
  # Substitute your account ID and IAM service role name below.
  eks.amazonaws.com/role-arn: arn:aws:iam::<account>:role/external-dns-service-account-oidc-role
For me the issue was that the trust relationship was (correctly) set up using one partition, whereas the ServiceAccount was annotated with a different partition, like so:
...
"Principal": {
  "Federated": "arn:aws-us-gov:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
},
...

kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::{{ .Values.aws.account }}:role/{{ .Values.aws.roleName }}
Notice arn:aws:iam vs arn:aws-us-gov:iam
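A sketch of the annotation with the partition made consistent with the trust policy (Helm values as in the snippet above):

kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws-us-gov:iam::{{ .Values.aws.account }}:role/{{ .Values.aws.roleName }}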

Google Cloud Endpoints API key is not verified

I'm now developing a REST API with Cloud Endpoints and App Engine.
I would like to implement API key authentication, but it does not work.
Without the 'key=${API KEY}' query parameter, requests are rejected as expected:
# curl -X POST https://hogehoge.com/test -d '{"key":"value"}'
{
  "code": 16,
  "message": "Method doesn't allow unregistered callers (callers without established identity). Please use API Key or other form of API consumer identity to call this API.",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "service_control"
    }
  ]
}
But any key is granted access to the backend:
# curl -X POST https://hogehoge.com/test?key=aaa -d '{"key":"value"}'
POST is sent.
Of course, an API key generated via API management also works:
# curl -X POST https://hogehoge.com/test?key=${realkey} -d '{"key":"value"}'
POST is sent.
The Cloud Endpoints definition file is:
swagger: "2.0"
info:
  title: "xxxxxxxxx"
  description: "xxxxxxxxx"
  version: "1.0.0"
host: "hogehoge.com"
schemes:
  - "https"
security: []
paths:
  "/test":
    post:
      description: "test"
      operationId: "test"
      security:
        - api_key: []
      parameters:
        - name: body
          in: body
          required: true
          schema:
            $ref: '#/definitions/testRequest'
      responses:
        201:
          description: "Success"
          schema:
            $ref: '#/definitions/testResponse'
definitions:
  testRequest:
    type: object
    required:
      - data
    properties:
      data:
        type: object
        required:
          - key
        properties:
          token:
            type: string
            example: value
            maxLength: 20
  testResponse:
    type: string
securityDefinitions:
  api_key:
    type: "apiKey"
    name: "key"
    in: "query"
What I expect is that only a key generated via API management will be granted access.
Let me know how to solve this issue.
Thanks.
It seems that the Service Control API might not be enabled on your project.
In order to check that, you can run
gcloud services list --enabled
If servicecontrol.googleapis.com is not listed in the result of the previous command, you should run
gcloud services enable servicecontrol.googleapis.com
Furthermore, you could check that you have all the required services for Endpoints enabled. You can see how to do this in the documentation
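As a sketch, enabling the services the Endpoints documentation lists as required looks like this (assuming the target project is already the active gcloud project):

gcloud services enable servicemanagement.googleapis.com
gcloud services enable servicecontrol.googleapis.com
gcloud services enable endpoints.googleapis.com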

How to set up Istio RBAC based on groups from JWT claims?

I have a service with an AuthenticationPolicy and Istio RBAC enabled (the authorization context is set to use groups from the JWT claim). However, it seems Istio does not take the groups attribute from the JWT claim into account when a call is made.
As the IDP I use Dex, and I have set the corresponding AuthnPolicy for it.
I have set the authorization context as follows:
apiVersion: "config.istio.io/v1alpha2"
kind: authorization
metadata:
  name: requestcontext
  namespace: istio-system
spec:
  subject:
    user: source.user | request.auth.claims["email"] | ""
    groups: request.auth.claims["groups"] | ""
    properties:
      namespace: source.namespace | ""
      service: source.service | ""
      iss: request.auth.claims["iss"] | ""
      sub: request.auth.claims["sub"] | ""
  action:
    namespace: destination.namespace | ""
    service: destination.service | ""
    method: request.method | ""
    path: request.path | ""
    properties:
      version: request.headers["version"] | ""
I have enabled RBAC and created a ServiceRole. I've added a ServiceRoleBinding with the subject set to a specific group called "admins":
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRoleBinding
metadata:
  name: service-admin-binding
spec:
  subjects:
  - group: "admins"
  roleRef:
    kind: ServiceRole
    name: "service-admin"
When a call is made without a token, the AuthnPolicy works: a 401 with a proper message is returned. A call with a valid JWT results in 403 permission denied, as the group was not matched. It works fine when I change the subject to all users ( - user: "*") instead of a group.
The groups claim in the fetched JWT, after decoding, is just an array of strings:
"groups": [
  "admins"
]
If I add a first non-empty operator with the hardcoded value "admins" in the authorization context (groups: request.auth.claims["groups"] | "admins"), it works of course, but that indicates the groups are empty during the mixer adapter's resolving phase.
If I set groups in the authorization context to be taken from request.auth.token["groups"], as mentioned in the docs, mixer fails with an error:
(...)'requestcontext.authorization.istio-system': failed to evaluate expression for field 'Subject'; failed to evaluate expression for field 'Subject.Groups': unknown attribute request.auth.token'.
When I took a look at the attribute vocabulary docs, it does not mention a token attribute on request.auth, and I could not find it in the code either. However, there is request.auth.claims, which I'm trying to use.
How can I set up the authentication policy together with RBAC so that it works with groups from the JWT? Additionally, is it possible to log/debug mixer while it resolves the authorization phase, to see what exactly is evaluated?
Answered by Piotr Mścichowski in the comments:
I got a response on Google Groups mentioning that groups as an array of strings are not supported yet, and neither are groups in the subject of a role binding (which can be worked around with properties):
Yangmin Zhu:
We're currently working on adding more documents for Istio 1.0. If you're using the most recent daily release, you could try it with the following steps:
1) We introduced a new global custom resource to control the RBAC behavior in the mesh: RbacConfig. You could apply one like this to enable RBAC for the "default" namespace.
2) We made some changes to ServiceRole.Constraints and ServiceRoleBinding.Properties regarding which keys are supported. See this PR for an overview of the supported keys. Regarding your ServiceRoleBinding, you could use the following config to check against the claim from the JWT (note: the group field is not used and not supported; instead you can specify it in properties):
apiVersion: "config.istio.io/v1alpha2"
kind: ServiceRoleBinding
metadata:
  name: service-admin-binding
spec:
  subjects:
  - properties:
      request.auth.claims[groups]: "admins"
  roleRef:
    kind: ServiceRole
    name: "service-admin"
I think you don't need any special setting to make the authentication policy work with RBAC; if you can successfully finish this task, it should work with RBAC automatically.
You could turn on debug logging of the Envoy proxy of your service. For RBAC there is a specific logging group named "rbac" in Envoy; you can access the Envoy admin page locally (by default it's http://127.0.0.1:15000/logging).
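As a sketch, bumping that logger from inside the pod could look like this (assumes curl is available in the istio-proxy container):

kubectl exec <pod-name> -c istio-proxy -- curl -X POST "http://127.0.0.1:15000/logging?rbac=debug"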
Limin Wang:
We currently haven't supported JWT claims that are non-strings. If your JWT group claim is set to a single string (instead of an array), it will just work:
"group": "admin"
Also, "group" under "subject" is not supported at the moment. But as Yangmin suggested, you can use custom "properties" instead.
subjects:
- properties:
    request.auth.claims[groups]: "admins"
Thanks for bringing this issue to our attention; we plan to make improvements to support such use cases in future releases.