I have Kubernetes with Istio installed.
I am trying to rate-limit external traffic to a host (for example, checkip.amazonaws.com). The limit should apply to all services in a namespace (konta in the example). All the pods already have the sidecar proxy injected.
I used the following configuration, but without success.
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: quotahandler
  namespace: konta
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: requestcountquota.instance.konta
      maxAmount: 5
      validDuration: 60s
      rateLimitAlgorithm: ROLLING_WINDOW
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: konta
spec:
  compiledTemplate: quota
  params:
    dimensions:
      # source: "unknown"
      source: request.headers["x-forwarded-for"] | "unknown"
      # destination: destination.labels["app"] | destination.service.name | "unknown"
      # destinationVersion: destination.labels["version"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: konta
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcountquota
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: konta
spec:
  quotaSpecs:
  - name: request-count
    namespace: konta
  services:
  - service: '*'  # binds *all* services in the namespace to request-count
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: konta
spec:
  # quota only applies if you are not logged in:
  # match: match(request.headers["cookie"], "user=*") == false
  match: match(destination.service.host, "checkip.amazonaws.com") == true
  actions:
  - handler: quotahandler
    instances:
    - requestcountquota
I am testing with a simple curl pod.
kubectl run -i --tty get-ip-address --image=dwdraju/alpine-curl-jq --restart=Never -n konta
and
curl checkip.amazonaws.com
Note: my egress traffic is NOT passing through the Istio egress gateway.
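Two things may be worth checking here, neither of which appears in the configuration above. First, Mixer policy enforcement is disabled by default in recent Istio 1.x releases, so quota rules are silently ignored until checks are turned on:

# Check whether policy enforcement is on ("true" here means quota rules are ignored)
kubectl -n istio-system get cm istio -o jsonpath="{@.data.mesh}" | grep disablePolicyChecks

# Enable it if needed (istioctl 1.4+)
istioctl manifest apply --set values.global.disablePolicyChecks=false

Second, for the rule's match on destination.service.host to see "checkip.amazonaws.com", the host must be known to the mesh; otherwise the request leaves through the passthrough cluster. A minimal ServiceEntry sketch, assuming plain HTTP on port 80:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: checkip  # hypothetical name for illustration
  namespace: konta
spec:
  hosts:
  - checkip.amazonaws.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL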
I need help connecting a private EKS cluster (from now on, Cluster 1) through AWS App Mesh and Cloud Map to another public/private cluster (from now on, Cluster 2).
I have managed to get Cluster 2 to connect to Cluster 1 through the mesh, running a 'curl core.app.svc.cluster.local:8080'; but I can't do it the other way around.
To clarify: if I run 'curl core.app.svc.cluster.local:9000' I get a connection error, because there is nothing on that port.
I have created endpoints for App Mesh on the private networks of Cluster 1, and the security group of Cluster 1 has access to Cluster 2 on port 8080.
I have also created a virtual router and a virtual service for Cluster 2.
In short, I've created the same thing for both clusters.
The fact is that if, from inside a pod of Cluster 1, I run 'curl front.app.svc.cluster.local:8080', it does not make any connection. I have checked /etc/resolv.conf and the DNS server is set, but the result is:
curl: (6) Could not resolve host: front.app.svc.cluster.local:8080
If I run 'traceroute front.app.svc.cluster.local:8080' it responds with:
traceroute: bad address 'front.app.svc.cluster.local:8080'
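Two hedged notes before the settings below. First, traceroute does not accept a 'host:port' argument, so its 'bad address' output is expected regardless; the meaningful symptom is the curl DNS failure. Second, the awsName of a backend virtual service has to resolve to some IP inside the calling cluster before the Envoy sidecar can intercept and route the traffic. If nothing in Cluster 1 answers DNS for front.app.svc.cluster.local, one common workaround is a selector-less placeholder Service whose only job is to give the name a ClusterIP (a sketch, assuming namespace app exists in Cluster 1 and no Service named front is present there):

# Hypothetical placeholder so the virtual service name resolves in Cluster 1
apiVersion: v1
kind: Service
metadata:
  name: front
  namespace: app
spec:
  ports:
  - port: 8080
    protocol: TCP

With the name resolving, the sidecar's App Mesh configuration (not the placeholder) decides where the traffic actually goes.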
Here are my settings:
CLUSTER 1 (private)
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
name: app
spec:
namespaceSelector:
matchLabels:
mesh: app
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
name: core
namespace: app
spec:
podSelector:
matchLabels:
app: core
version: v1
listeners:
- portMapping:
port: 8080
protocol: http
serviceDiscovery:
awsCloudMap:
namespaceName: app.pvt.aws.local
serviceName: core
backends:
- virtualService:
virtualServiceARN: arn:aws:appmesh:eu-west-2:238523995933:mesh/app/virtualService/front.app.svc.cluster.local
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
name: core
namespace: app
spec:
awsName: core.app.svc.cluster.local
provider:
virtualRouter:
virtualRouterRef:
name: core-router
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
namespace: app
name: core-router
spec:
listeners:
- portMapping:
port: 8080
protocol: http
routes:
- name: core-route
httpRoute:
match:
prefix: /
action:
weightedTargets:
- virtualNodeRef:
name: core
weight: 1
CLUSTER 2 (public/private)
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: app
spec:
  namespaceSelector:
    matchLabels:
      mesh: app
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: front
  namespace: app
spec:
  awsName: front.app.svc.cluster.local
  provider:
    virtualRouter:
      virtualRouterRef:
        name: front-router
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  namespace: app
  name: front-router
spec:
  listeners:
  - portMapping:
      port: 8080
      protocol: http
  routes:
  - name: front-route
    httpRoute:
      match:
        prefix: /
      action:
        weightedTargets:
        - virtualNodeRef:
            name: front
          weight: 1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: front
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: front
  listeners:
  - portMapping:
      port: 8080
      protocol: http
  serviceDiscovery:
    awsCloudMap:
      namespaceName: app.pvt.aws.local
      serviceName: front
  backends:
  - virtualService:
      virtualServiceARN: arn:aws:appmesh:eu-west-2:238523995933:mesh/app/virtualService/core.app.svc.cluster.local
Could you help me understand why it works for one side and not for the other?
Thanks in advance.
I wanted to set up and use the Istio egress gateway.
I followed this guide, https://preliminary.istio.io/latest/blog/2018/egress-tcp/, and wrote this manifest:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-oracle
spec:
  hosts:
  - my.oracle.instance.com
  addresses:
  - 192.168.100.50/32
  ports:
  - name: tcp
    number: 1521
    protocol: tcp
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - hosts:
    - my.oracle.instance.com
    port:
      name: tcp
      number: 1521
      protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egressgateway-destination-rule-for-oracle
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: external-oracle
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-external-oracle-through-egress-gateway
spec:
  gateways:
  - mesh
  - istio-egressgateway
  hosts:
  - my.oracle.instance.com
  tcp:
  - match:
    - destinationSubnets:
      - 192.168.100.50/32
      gateways:
      - mesh
      port: 1521
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 1521
        subset: external-oracle
  - match:
    - gateways:
      - istio-egressgateway
      port: 1521
    route:
    - destination:
        host: my.oracle.instance.com
        port:
          number: 1521
      weight: 100
After that, my application was unable to start because of a JDBC error.
I started watching the egress gateway pod's logs but saw no sign of traffic.
So I googled and found this link, https://istio.io/latest/blog/2018/egress-monitoring-access-control/, to boost my egress gateway pod's logging, but it looks a bit deprecated to me:
cat <<EOF | kubectl apply -f -
# Log entry for egress access
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
  name: egress-access
  namespace: istio-system
spec:
  severity: '"info"'
  timestamp: request.time
  variables:
    destination: request.host | "unknown"
    path: request.path | "unknown"
    responseCode: response.code | 0
    responseSize: response.size | 0
    reporterUID: context.reporter.uid | "unknown"
    sourcePrincipal: source.principal | "unknown"
  monitored_resource_type: '"UNSPECIFIED"'
---
# Handler for error egress access entries
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
  name: egress-error-logger
  namespace: istio-system
spec:
  severity_levels:
    info: 2  # output log level as error
  outputAsJson: true
---
# Rule to handle access to *.cnn.com/politics
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: handle-politics
  namespace: istio-system
spec:
  match: request.host.endsWith("cnn.com") && request.path.startsWith("/politics") && context.reporter.uid.startsWith("kubernetes://istio-egressgateway")
  actions:
  - handler: egress-error-logger.stdio
    instances:
    - egress-access.logentry
---
# Handler for info egress access entries
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
  name: egress-access-logger
  namespace: istio-system
spec:
  severity_levels:
    info: 0  # output log level as info
  outputAsJson: true
---
# Rule to handle access to *.com
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: handle-cnn-access
  namespace: istio-system
spec:
  match: request.host.endsWith(".com") && context.reporter.uid.startsWith("kubernetes://istio-egressgateway")
  actions:
  - handler: egress-access-logger.stdio
    instances:
    - egress-access.logentry
EOF
But when I try to apply this, I get these errors:
no matches for kind "logentry" in version "config.istio.io/v1alpha2"
no matches for kind "stdio" in version "config.istio.io/v1alpha2"
no matches for kind "rule" in version "config.istio.io/v1alpha2"
no matches for kind "stdio" in version "config.istio.io/v1alpha2"
no matches for kind "rule" in version "config.istio.io/v1alpha2"
Is there a new API version for these kinds?
istioctl version
client version: 1.12.0
control plane version: 1.12.0
data plane version: 1.12.0 (28 proxies)
Is there a way to get a working Istio egress gateway with logging (the way the Istio ingress gateway logging works)?
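For context: Mixer and its config.istio.io/v1alpha2 kinds (logentry, stdio, rule, and the rest) were deprecated in Istio 1.5 and removed in 1.8, so there is no newer API version of those kinds that a 1.12 control plane will accept. Access logging for sidecars and gateways is configured through the mesh config instead. A minimal sketch, assuming a standard istio-system installation:

# Enable Envoy access logs on all proxies, including the egress gateway
istioctl install --set meshConfig.accessLogFile=/dev/stdout

# Then tail the egress gateway's access log
kubectl logs -n istio-system -l istio=egressgateway -f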
I want to limit which pods can access an external service.
There are two apps, A and B; A can access example.com, but B must not be able to access example.com.
First, create the ServiceEntry for the external service:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: example
spec:
  hosts:
  - example.com
  addresses:
  - 192.168.0.13
  ports:
  - number: 8888
    name: tcp-8888
    protocol: TCP
  - number: 443
    name: tcp-443
    protocol: TCP
  location: MESH_EXTERNAL
  exportTo:
  - .
Then create a policy so that only pods whose app label is app1 can access this ServiceEntry:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelist
spec:
  compiledAdapter: listchecker
  params:
    overrides:
    - app1
    blacklist: false
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: appname
spec:
  compiledTemplate: listentry
  params:
    value: source.labels["app"]
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkapp
spec:
  match: destination.service.host == "example.com"
  actions:
  - handler: whitelist
    instances: [ appname ]
But this setting does not work.
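A couple of hedged notes. The exportTo: ["."] on the ServiceEntry makes it visible only inside its own namespace, so both apps have to run there for the host to be routable at all. And listchecker is a Mixer adapter, so like other Mixer policies it only takes effect while policy checks are enabled (disablePolicyChecks=false). Assuming both of those hold, a quick way to verify the intent, using hypothetical deployments app1 and app2 that carry the corresponding app labels:

# Expected to succeed: app=app1 is on the list (deployment names are assumptions)
kubectl exec deploy/app1 -- curl -sv --connect-timeout 5 https://example.com
# Expected to be rejected: app=app2 is not
kubectl exec deploy/app2 -- curl -sv --connect-timeout 5 https://example.com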
I am trying to set up the OPA adapter in Istio with the simplest rule to deny everything by default:
---
apiVersion: "config.istio.io/v1alpha2"
kind: authorization
metadata:
  name: authz-instance
  namespace: istio-demo
spec:
  subject:
    user: source.uid | ""
  action:
    namespace: destination.namespace | "default"
    service: destination.service | ""
    method: request.method | ""
    path: request.path | ""
---
apiVersion: "config.istio.io/v1alpha2"
kind: opa
metadata:
  name: opa-handler
  namespace: istio-demo
spec:
  policy:
  - |+
    package mixerauthz
    default allow = false
  checkMethod: "data.mixerauthz.allow"
  failClose: true
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: authz-rule
  namespace: istio-demo
spec:
  match: "true"
  actions:
  - handler: opa-handler.opa.istio-demo
    instances:
    - authz-instance.authorization.istio-demo
When I apply it, Istio's policy complains about not finding the handler:
istio-system/istio-policy-7f86484668-fc8lv[mixer]: 2019-08-12T15:58:21.798783Z info Built new config.Snapshot: id='9'
istio-system/istio-policy-7f86484668-fc8lv[mixer]: 2019-08-12T15:58:21.798819Z error 2 errors occurred:
istio-system/istio-policy-7f86484668-fc8lv[mixer]: * action='authz-rule.rule.istio-demo[0]': Handler not found: handler='opa-handler.opa.istio-demo'
istio-system/istio-policy-7f86484668-fc8lv[mixer]: * rule=authz-rule.rule.istio-demo: No valid actions found in rule
I've also tried applying it in the istio-system namespace, but I get the same issue.
Can anyone help out here?
Thanks in advance.
I got this to work with Istio 1.4 installed with the demo profile.
It was also necessary to enable policy checks by running:
istioctl manifest apply --set values.global.disablePolicyChecks=false --set values.pilot.policy.enabled=true
Find the handler, authorization template, and rule configuration below:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: opa-handler
  namespace: istio-system
spec:
  compiledAdapter: opa
  params:
    policy:
    - |+
      package mixerauthz
      default allow = false
    checkMethod: "data.mixerauthz.allow"
    failClose: true
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: authz-instance
  namespace: istio-system
spec:
  compiledTemplate: authorization
  params:
    subject:
      user: source.uid | ""
    action:
      namespace: destination.namespace | "default"
      service: destination.service.host | ""
      path: request.path | ""
      method: request.method | ""
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: auth
  namespace: istio-system
spec:
  actions:
  - handler: opa-handler.handler.istio-system
    instances:
    - authz-instance.instance.istio-system
Then I got a 403 with this message from my web service (httpbin):
PERMISSION_DENIED:opa-handler.istio-system:opa: request was rejected, opa-handler.istio-system:opa: request was rejected
Alternatively, you can try out the OPA/Istio/Envoy integration, which enforces the same type of policies at the proxy layer.
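A quick check of the deny-all behaviour, assuming a client pod such as the Istio sleep sample running next to httpbin (deployment and port are the usual sample values, not taken from the question):

# Expect 403 while "default allow = false" is in force
kubectl exec deploy/sleep -c sleep -- curl -s -o /dev/null -w "%{http_code}\n" http://httpbin:8000/get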
We want Istio to allow incoming traffic to a service only from a particular namespace. How can we do this with Istio? We are running Istio 1.1.3.
Update:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-app-ingress
  namespace: test-ns
spec:
  podSelector:
    matchLabels:
      app: testapp
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    from:
    - podSelector:
        matchLabels:
          istio: ingress
This did not work; I am still able to access the service from other namespaces as well. Next I tried:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
name: external-api-caller
namespace: test-ns
spec:
rules:
- services: ["testapp"]
methods: ["*"]
constraints:
- key: "destination.labels[version]"
values: ["v1", "v2"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
name: external-api-caller
namespace: test-ns
spec:
subjects:
- properties:
source.namespace: "default"
roleRef:
kind: ServiceRole
name: "external-api-caller"
I am still able to access the service from all namespaces, whereas I expected it to be allowed only from the "default" namespace.
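One hedged observation: in Istio 1.1, RBAC (ServiceRole/ServiceRoleBinding) is not enforced anywhere until it is switched on with a ClusterRbacConfig, so without one the binding above is a no-op. A sketch, assuming enforcement should cover test-ns only:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
  name: default  # the name must be "default"
spec:
  mode: 'ON_WITH_INCLUSION'
  inclusion:
    namespaces: ["test-ns"]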
I'm not sure whether this is possible for a particular namespace, but it will work with labels.
You can create a network policy; this is nicely explained in Traffic Routing in Kubernetes via Istio and Envoy Proxy.
...
ingress:
- from:
  - podSelector:
      matchLabels:
        zone: trusted
...
In the example, only pods with the label zone: trusted will be allowed to make incoming connections to the pod.
You can read about Using Network Policy with Istio.
I would also recommend reading Security Concepts in Istio, as well as Denials and White/Black Listing.
Hope this helps you.
Using a k8s NetworkPolicy: yes, it is possible. The example posted in the question does not allow traffic from a different namespace; in the ingress rule you have to use a namespace selector to specify the namespace from which you want to allow traffic. In the example below, namespaces with the label ns-group: prod-ns are allowed to access the pod labeled app: testapp on port 80 over TCP.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-app-ingress
  namespace: test-ns
spec:
  podSelector:
    matchLabels:
      app: testapp
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    from:
    - namespaceSelector:
        matchLabels:
          ns-group: prod-ns
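Note that namespaceSelector matches labels on the namespace object itself, so this policy only admits traffic once prod-ns actually carries that label:

# Label the source namespace so the namespaceSelector above can match it
kubectl label namespace prod-ns ns-group=prod-ns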
Using an Istio whitelisting policy: you can go through the whitelisting policy examples and the attribute vocabulary.
Below is an example:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelist-namespace
spec:
  compiledAdapter: listchecker
  params:
    overrides: ["prod-ns"]
    blacklist: false
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: source-ns-instance
spec:
  compiledTemplate: listentry
  params:
    value: source.namespace
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: rule-policy-1
spec:
  match: destination.labels["app"] == "testapp" && destination.namespace == "test-ns"
  actions:
  - handler: whitelist-namespace
    instances: [ source-ns-instance ]
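A quick way to exercise the whitelist, assuming policy checks are enabled, sidecar injection is on in both namespaces, and a Service named testapp exists in test-ns (the curl image and pod names are only examples):

# From the whitelisted namespace: expected to succeed
kubectl -n prod-ns run curl-allowed --rm -i --image=curlimages/curl --restart=Never -- curl -s http://testapp.test-ns/
# From any other namespace: expected to be denied
kubectl -n default run curl-denied --rm -i --image=curlimages/curl --restart=Never -- curl -s http://testapp.test-ns/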