I want to limit which pods can access an external service.
There are two apps, A and B: A should be able to access example.com, but B should not.
First, I create the ServiceEntry for the external service:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: example
spec:
  hosts:
  - example.com
  addresses:
  - 192.168.0.13
  ports:
  - number: 8888
    name: tcp-8888
    protocol: TCP
  - number: 443
    name: tcp-443
    protocol: TCP
  location: MESH_EXTERNAL
  exportTo:
  - "."
Then I create a policy so that only pods whose app label is app1 can access this ServiceEntry:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelist
spec:
  compiledAdapter: listchecker
  params:
    overrides:
    - app1
    blacklist: false
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: appname
spec:
  compiledTemplate: listentry
  params:
    value: source.labels["app"]
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkapp
spec:
  match: destination.service.host == "example.com"
  actions:
  - handler: whitelist
    instances: [ appname ]
But this setup does not work.
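One thing I would check first (my own assumption, not something stated in the question): on Mixer-era Istio releases, policy enforcement is disabled by default, and listchecker handlers/rules are never evaluated unless it is turned on. A quick way to see the current setting:

# If disablePolicyChecks is true (the default in many releases), the
# whitelist handler/rule above is never consulted at request time.
kubectl -n istio-system get configmap istio -o jsonpath='{.data.mesh}' | grep -i disablePolicyChecks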
Related
I am trying to set up an Istio gateway (with certificates from cert-manager) for public access to a deployed application. Here are the configurations:
Cert-manager installed in the cluster via Helm:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
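(Not part of the original steps, just a sanity check I would run after the chart install to confirm cert-manager is actually up before creating issuers:)

# cert-manager, cainjector and webhook pods should all be Running
kubectl get pods --namespace cert-manager
# the cert-manager CRDs should be present
kubectl get crd | grep cert-manager.io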
Certificate issuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: kube-system
spec:
  acme:
    email: xxx@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging
    # Add a single challenge solver, HTTP01 using istio
    solvers:
    - http01:
        ingress:
          class: istio
Certificate file:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: url-certs
  namespace: istio-system
  annotations:
    cert-manager.io/issue-temporary-certificate: "true"
spec:
  secretName: url-certs
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  commonName: bot.demo.live
  dnsNames:
  - bot.demo.live
  - "*.demo.live"
Gateway file:
# gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https-url-1
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: "url-certs" # This should match the Certificate secretName
Application Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: microbot
  name: microbot
  namespace: bot-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microbot
  template:
    metadata:
      labels:
        app: microbot
    spec:
      containers:
      - name: microbot
        image: dontrebootme/microbot:v1
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
Virtual service and application service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: microbot-virtual-svc
  namespace: bot-demo
spec:
  hosts:
  - bot.demo.live
  gateways:
  - istio-system/public-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: microbot-service
        port:
          number: 9100
---
apiVersion: v1
kind: Service
metadata:
  name: microbot-service
  namespace: bot-demo
spec:
  selector:
    app: microbot
  ports:
  - port: 9100
    targetPort: 80
Whenever I try to curl https://bot.demo.live, I get a certificate error. The certificate issuer is working. I just can't figure out how to expose the deployed application via the Istio gateway for external access. bot.demo.live is already in my /etc/hosts file and I can ping it just fine.
What am I doing wrong?
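A debugging sketch (my own suggestion, not from the post): first confirm that cert-manager really issued the certificate into istio-system, then check whether the ingress gateway has loaded that credential; the pod name below is a placeholder.

# READY should be True, and the secret should exist in istio-system
kubectl get certificate url-certs -n istio-system
kubectl get secret url-certs -n istio-system
# Check whether the gateway actually picked up the "url-certs" credential
istioctl proxy-config secret <istio-ingressgateway-pod> -n istio-system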
I'm new to using Kubernetes and AWS, so there are a lot of concepts I may not understand. I hope you can help me with this problem I am having.
I have 3 services, frontend, backend and auth, each with its corresponding NodePort, and an Ingress that maps one host to each service. Everything is running on EKS, and for the Ingress deployment I am using the AWS ALB ingress controller. Once everything is deployed and the node group registers in the target groups, the frontend and auth services work correctly, but backend stays in an unhealthy state. I thought it could be a port problem, but if you look at auth and backend they have almost the same Deployment definition, and both are APIs built with .NET Core. One thing to note is that I can do kubectl port-forward <backend-pod> 80:80 and it runs without problems. And when I run the kubectl describe ingresses command I get this:
Name:             ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        default
Address:          xxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxx.elb.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host             Path  Backends
  ----             ----  --------
  domain.com
                   /     front-service:default-port (10.0.1.183:80,10.0.2.98:80)
  back.domain.com
                   /     backend-service:default-port (<none>)
  auth.domain.com
                   /     auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
Annotations:       alb.ingress.kubernetes.io/listen-ports: [{"HTTPS":443}, {"HTTP":80}]
                   alb.ingress.kubernetes.io/scheme: internet-facing
                   alb.ingress.kubernetes.io/ssl-redirect: 443
                   kubernetes.io/ingress.class: alb
Events:
  Type    Reason                  Age                   From     Message
  ----    ------                  ----                  ----     -------
  Normal  SuccessfullyReconciled  8m20s (x15 over 41h)  ingress  Successfully reconciled
Frontend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front
  labels:
    name: front
spec:
  replicas: 2
  selector:
    matchLabels:
      name: front
  template:
    metadata:
      labels:
        name: front
    spec:
      containers:
      - name: frontend
        image: {{ .Values.image }}
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wrfront-{{ .Values.namespace }}-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    name: default-port
    protocol: TCP
  selector:
    name: front
---
Auth
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-wrauth-keys
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  name: "auth-config-ocpm"
  labels:
    app: "auth"
data:
  ASPNETCORE_URL: "http://+:80"
  ASPNETCORE_ENVIRONMENT: "Development"
  ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS: "true"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "auth"
  labels:
    app: "auth"
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: "auth"
  template:
    metadata:
      labels:
        app: "auth"
    spec:
      volumes:
      - name: auth-keys-storage
        persistentVolumeClaim:
          claimName: pvc-wrauth-keys
      containers:
      - name: "api-auth"
        image: {{ .Values.image }}
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        volumeMounts:
        - name: auth-keys-storage
          mountPath: "/app/auth-keys"
        env:
        - name: "ASPNETCORE_URL"
          valueFrom:
            configMapKeyRef:
              key: "ASPNETCORE_URL"
              name: "auth-config-ocpm"
        - name: "ASPNETCORE_ENVIRONMENT"
          valueFrom:
            configMapKeyRef:
              key: "ASPNETCORE_ENVIRONMENT"
              name: "auth-config-ocpm"
        - name: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
          valueFrom:
            configMapKeyRef:
              key: "ASPNETCORE_LOGGINGCONSOLEDISABLECOLORS"
              name: "auth-config-ocpm"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-service
spec:
  type: NodePort
  selector:
    app: auth
  ports:
  - name: default-port
    protocol: TCP
    port: 80
    targetPort: 80
Backend (Service with problem)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: {{ .Values.image }}
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: NodePort
  selector:
    name: backend
  ports:
  - name: default-port
    protocol: TCP
    port: 80
    targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    # SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certificate }}
spec:
  rules:
  - host: {{ .Values.host }}
    http:
      paths:
      - path: /
        backend:
          service:
            name: front-service
            port:
              name: default-port
        pathType: Prefix
  - host: back.{{ .Values.host }}
    http:
      paths:
      - path: /
        backend:
          service:
            name: backend-service
            port:
              name: default-port
        pathType: Prefix
  - host: auth.{{ .Values.host }}
    http:
      paths:
      - path: /
        backend:
          service:
            name: auth-service
            port:
              name: default-port
        pathType: Prefix
I've tried deploying other services and they work correctly. I've also tried running only the backend, or only another service, but the same thing always happens, and always with the backend.
What could be happening? Is it a configuration problem? Some error in the Ingress or the Deployment? Or is it just the backend service?
I would be very grateful for any help.
domain.com
                 /     front-service:default-port (10.0.1.183:80,10.0.2.98:80)
back.domain.com
                 /     backend-service:default-port (<none>)
auth.domain.com
                 /     auth-service:default-port (10.0.1.30:80,10.0.1.33:80)
This output is saying that your backend service is not registered with the Ingress.
One thing to remember is that the Ingress registers Services by their pods' IPs (like the "10.0.1.30:80" entries in your Ingress output), not by NodePort. And, going by the docs, I don't know why you can have multiple NodePort Services on the same port. But when you do port-forward, you actually open that port on your instances (I assume you have 2 instances), and then the ALB health-checks that port and returns healthy.
But I think your issue is that your Ingress cannot locate your backend service.
My suggestions are:
Try with only backend-service, with its port changed, and maybe without the auth and frontend services. The default NodePort range is 30000-32767.
Go inside that pod (or create a new pod) and make a request to the service using its URL to check what it returns. By default, the ALB only accepts status 200 from the page it health-checks (see the commands sketched below).
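One way to confirm this (my own sketch, using the names from the question): check whether backend-service has any endpoints at all, and compare its selector with the pod labels, since a Service with no matching pods shows up exactly as the <none> backend above.

# No addresses here means the Service selector matches no pods
kubectl get endpoints backend-service
# Compare the Service selector with the labels actually on the backend pods
kubectl get svc backend-service -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels | grep backend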
I need help connecting a private EKS cluster (from now on, Cluster 1) through AWS App Mesh and Cloud Map to another public/private cluster (from now on, Cluster 2).
I have managed to get Cluster 2 to connect to Cluster 1 through the virtual mesh, running curl core.app.svc.cluster.local:8080; but I can't do it the other way around.
To clarify: if I run curl core.app.svc.cluster.local:9000 I get a connection error, because there is nothing listening on that port.
I have created endpoints for App Mesh on the private networks of Cluster 1, and the security group of Cluster 1 has access to Cluster 2 through port 8080.
I have also created a virtual router and virtual service for Cluster 2.
In short, I've created the same thing for both clusters.
The thing is, if I run curl front.app.svc.cluster.local:8080 from inside the pod in Cluster 1, it does not make any connection. I have checked /etc/resolv.conf and it has the DNS servers in it, but the result is:
curl: (6) Could not resolve host: front.app.svc.cluster.local:8080
If I run traceroute front.app.svc.cluster.local:8080 it responds with:
traceroute: bad address 'front.app.svc.cluster.local:8080'
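Not from the original post, but a small check I would run from inside the Cluster 1 pod to separate DNS from connectivity: resolve the plain host name (no port), and also try the Cloud Map name that the VirtualNode registers (front.app.pvt.aws.local, per the serviceDiscovery block in the settings below); the pod name is a placeholder.

# Does the Kubernetes-style name resolve at all from Cluster 1?
kubectl exec -it <cluster1-core-pod> -n app -- nslookup front.app.svc.cluster.local
# Does the Cloud Map name used by App Mesh resolve?
kubectl exec -it <cluster1-core-pod> -n app -- nslookup front.app.pvt.aws.local
# If a name resolves, test the port separately
kubectl exec -it <cluster1-core-pod> -n app -- curl -v http://front.app.svc.cluster.local:8080/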
Here are my settings:
CLUSTER 1 (private)
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: app
spec:
  namespaceSelector:
    matchLabels:
      mesh: app
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: core
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: core
      version: v1
  listeners:
  - portMapping:
      port: 8080
      protocol: http
  serviceDiscovery:
    awsCloudMap:
      namespaceName: app.pvt.aws.local
      serviceName: core
  backends:
  - virtualService:
      virtualServiceARN: arn:aws:appmesh:eu-west-2:238523995933:mesh/app/virtualService/front.app.svc.cluster.local
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: core
  namespace: app
spec:
  awsName: core.app.svc.cluster.local
  provider:
    virtualRouter:
      virtualRouterRef:
        name: core-router
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  namespace: app
  name: core-router
spec:
  listeners:
  - portMapping:
      port: 8080
      protocol: http
  routes:
  - name: core-route
    httpRoute:
      match:
        prefix: /
      action:
        weightedTargets:
        - virtualNodeRef:
            name: core
          weight: 1
CLUSTER 2 (public/private)
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: app
spec:
  namespaceSelector:
    matchLabels:
      mesh: app
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualService
metadata:
  name: front
  namespace: app
spec:
  awsName: front.app.svc.cluster.local
  provider:
    virtualRouter:
      virtualRouterRef:
        name: front-router
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualRouter
metadata:
  namespace: app
  name: front-router
spec:
  listeners:
  - portMapping:
      port: 8080
      protocol: http
  routes:
  - name: front-route
    httpRoute:
      match:
        prefix: /
      action:
        weightedTargets:
        - virtualNodeRef:
            name: front
          weight: 1
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: front
  namespace: app
spec:
  podSelector:
    matchLabels:
      app: front
  listeners:
  - portMapping:
      port: 8080
      protocol: http
  serviceDiscovery:
    awsCloudMap:
      namespaceName: app.pvt.aws.local
      serviceName: front
  backends:
  - virtualService:
      virtualServiceARN: arn:aws:appmesh:eu-west-2:238523995933:mesh/app/virtualService/core.app.svc.cluster.local
Could you help me understand why it works for one side and not for the other?
Thanks in advance.
I wanted to set up and use an Istio egress gateway.
I followed this link, https://preliminary.istio.io/latest/blog/2018/egress-tcp/, and wrote this manifest:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-oracle
spec:
  hosts:
  - my.oracle.instance.com
  addresses:
  - 192.168.100.50/32
  ports:
  - name: tcp
    number: 1521
    protocol: tcp
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - hosts:
    - my.oracle.instance.com
    port:
      name: tcp
      number: 1521
      protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egressgateway-destination-rule-for-oracle
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: external-oracle
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-external-oracle-through-egress-gateway
spec:
  gateways:
  - mesh
  - istio-egressgateway
  hosts:
  - my.oracle.instance.com
  tcp:
  - match:
    - destinationSubnets:
      - 192.168.100.50/32
      gateways:
      - mesh
      port: 1521
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 1521
        subset: external-oracle
  - match:
    - gateways:
      - istio-egressgateway
      port: 1521
    route:
    - destination:
        host: my.oracle.instance.com
        port:
          number: 1521
      weight: 100
With this in place, my application is not able to start because of a JDBC error.
I started watching the egress gateway pod's logs, but I don't see any sign of traffic.
So I googled and found this link, https://istio.io/latest/blog/2018/egress-monitoring-access-control/, to increase the egress gateway's logging, but it looks a bit deprecated to me:
cat <<EOF | kubectl apply -f -
# Log entry for egress access
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
  name: egress-access
  namespace: istio-system
spec:
  severity: '"info"'
  timestamp: request.time
  variables:
    destination: request.host | "unknown"
    path: request.path | "unknown"
    responseCode: response.code | 0
    responseSize: response.size | 0
    reporterUID: context.reporter.uid | "unknown"
    sourcePrincipal: source.principal | "unknown"
  monitored_resource_type: '"UNSPECIFIED"'
---
# Handler for error egress access entries
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
  name: egress-error-logger
  namespace: istio-system
spec:
  severity_levels:
    info: 2 # output log level as error
  outputAsJson: true
---
# Rule to handle access to *.cnn.com/politics
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: handle-politics
  namespace: istio-system
spec:
  match: request.host.endsWith("cnn.com") && request.path.startsWith("/politics") && context.reporter.uid.startsWith("kubernetes://istio-egressgateway")
  actions:
  - handler: egress-error-logger.stdio
    instances:
    - egress-access.logentry
---
# Handler for info egress access entries
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
  name: egress-access-logger
  namespace: istio-system
spec:
  severity_levels:
    info: 0 # output log level as info
  outputAsJson: true
---
# Rule to handle access to *.com
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: handle-cnn-access
  namespace: istio-system
spec:
  match: request.host.endsWith(".com") && context.reporter.uid.startsWith("kubernetes://istio-egressgateway")
  actions:
  - handler: egress-access-logger.stdio
    instances:
    - egress-access.logentry
EOF
But when I try to apply this, I get these errors:
no matches for kind "logentry" in version "config.istio.io/v1alpha2"
no matches for kind "stdio" in version "config.istio.io/v1alpha2"
no matches for kind "rule" in version "config.istio.io/v1alpha2"
no matches for kind "stdio" in version "config.istio.io/v1alpha2"
no matches for kind "rule" in version "config.istio.io/v1alpha2"
Is there a newer API version for these kinds?
istioctl version
client version: 1.12.0
control plane version: 1.12.0
data plane version: 1.12.0 (28 proxies)
Is there a way to get a working Istio egress gateway with logging (the way the Istio ingress gateway logging works)?
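As far as I know, logentry, stdio and rule are Mixer resources, and Mixer was removed in Istio 1.8, so that blog post no longer applies on 1.12. A minimal sketch of the current way to see traffic on the gateways (assuming the cluster was installed with istioctl; this re-applies the mesh configuration) is to enable Envoy access logging and then watch the egress gateway:

# Turn on Envoy access logs for all proxies, including the egress gateway
istioctl install --set meshConfig.accessLogFile=/dev/stdout
# Watch whether the TCP traffic to the Oracle host actually reaches the egress gateway
kubectl logs -n istio-system -l istio=egressgateway -f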
ISTIO version: 1.9.4
EKS Cluster version: 1.14
We have deployed an Istio service mesh in our project. We have set up external authorization using Istio's documentation, i.e. https://istio.io/latest/docs/tasks/security/authorization/authz-custom/.
External authorizer used (as mentioned in the above documentation): https://raw.githubusercontent.com/istio/istio/release-1.9/samples/extauthz/ext-authz.yaml
When we access any API by going into the pod of another API (i.e. over http) using curl, everything works fine. The external auth service gets called, and all the headers are passed to the external authorizer's v3 Check method. The following information is passed:
source, principal, destination, headers: authority, method, path, accept, content-length, user-agent, x-b3-sampled, x-b3-spanid, x-b3-traceid, x-envoy-attempt-count, x-ext-authz, x-forwarded-client-cert, x-forwarded-proto, x-request-id.
But when we try to access the same service over https (using Postman, a browser, or curl against the https endpoint from inside another API's pod), we get a denied response from the external authorizer's v3 Check method. Also, when we check the external authorizer's logs, no headers are passed to it in this case.
Below is the setup.
Namespace with Istio injection enabled: foo
1. ISTIO Config map changes
data:
  mesh: |-
    # Add the following content to define the external authorizers.
    extensionProviders:
    - name: "sample-ext-authz-grpc"
      envoyExtAuthzGrpc:
        service: "ext-authz.foo.svc.cluster.local"
        port: "9000"
    - name: "sample-ext-authz-http"
      envoyExtAuthzHttp:
        service: "ext-authz.foo.svc.cluster.local"
        port: "8000"
        includeHeadersInCheck: ["x-ext-authz"]
2. External Authorizer
apiVersion: v1
kind: Service
metadata:
  name: ext-authz
  namespace: foo
  labels:
    app: ext-authz
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 8000
  - name: grpc
    port: 9000
    targetPort: 9000
  selector:
    app: ext-authz
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ext-authz
  namespace: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ext-authz
  template:
    metadata:
      labels:
        app: ext-authz
    spec:
      containers:
      - image: docker.io/istio/ext-authz:0.6
        imagePullPolicy: IfNotPresent
        name: ext-authz
        ports:
        - containerPort: 8000
        - containerPort: 9000
3. Enable the external authorization Config
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ext-authz
  namespace: foo
spec:
  selector:
    matchLabels:
      app: user-api
  action: CUSTOM
  provider:
    name: sample-ext-authz-grpc
  rules:
  - to:
    - operation:
        paths: ["/user/api/*"]
4. PeerAuthentication changes
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: mtlsauth
  namespace: foo
spec:
  mtls:
    mode: STRICT
5. Destination Rule
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: foo
spec:
  host: "*.samplehost.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
6. Virtual Service File
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sample-gateway
  namespace: foo
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "sample.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user-api
  namespace: foo
spec:
  hosts:
  - "sample.com"
  gateways:
  - sample-gateway
  http:
  - match:
    - uri:
        prefix: /user/api/
    route:
    - destination:
        host: user-api
        port:
          number: 9500
Logs from ingress gateway:
2021-07-08T11:13:33.554104Z warning envoy config StreamAggregatedResources gRPC config stream closed: 14, connection error: desc = "transport: Error while dialing dial tcp 172.20.0.51:15012: connect: connection refused"
2021-07-08T11:13:35.420052Z info xdsproxy connected to upstream XDS server: istiod.istio-system.svc:15012
2021-07-08T11:43:24.012961Z warning envoy config StreamAggregatedResources gRPC config stream closed: 0
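Not part of the original post, just a debugging sketch I would try: confirm that the CUSTOM ext-authz filter is actually attached to the user-api sidecar, and watch the sidecar's own logs in addition to the external authorizer; the pod name is a placeholder.

# Is an ext_authz filter present on the user-api sidecar's listeners?
istioctl proxy-config listeners <user-api-pod> -n foo -o json | grep -i ext_authz
# The sidecar's logs show whether the request was denied before ever reaching the authorizer
kubectl logs <user-api-pod> -n foo -c istio-proxy --tail=100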
I am not sure if this is the issue you are facing, but it seems like you have enforced mTLS. That's why, in the following gateway config, you might need to open HTTPS as well (note: an HTTPS server on a Gateway also needs a tls block; the credentialName below is a placeholder I added for whatever secret holds the TLS certificate for sample.com):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sample-gateway
  namespace: foo
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "sample.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "sample.com"
    tls:
      mode: SIMPLE
      credentialName: sample-credential # placeholder: secret in istio-system with the cert/key for sample.com