I wanted to set up and use an Istio egress gateway.
I followed this guide https://preliminary.istio.io/latest/blog/2018/egress-tcp/ and created this manifest:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: external-oracle
spec:
hosts:
- my.oracle.instance.com
addresses:
- 192.168.100.50/32
ports:
- name: tcp
number: 1521
protocol: tcp
location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: istio-egressgateway
spec:
selector:
istio: egressgateway
servers:
- hosts:
- my.oracle.instance.com
port:
name: tcp
number: 1521
protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: egressgateway-destination-rule-for-oracle
spec:
host: istio-egressgateway.istio-system.svc.cluster.local
subsets:
- name: external-oracle
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: direct-external-oracle-through-egress-gateway
spec:
gateways:
- mesh
- istio-egressgateway
hosts:
- my.oracle.instance.com
tcp:
- match:
- destinationSubnets:
- 192.168.100.50/32
gateways:
- mesh
port: 1521
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
port:
number: 1521
subset: external-oracle
- match:
- gateways:
- istio-egressgateway
port: 1521
route:
- destination:
host: my.oracle.instance.com
port:
number: 1521
weight: 100
After applying this, my application was not able to start because of a JDBC error.
I started watching the egress gateway pod's logs, but I did not see any sign of traffic.
So I searched and found this link: https://istio.io/latest/blog/2018/egress-monitoring-access-control/ to increase the egress gateway pod's logging, but it looks a bit deprecated to me:
cat <<EOF | kubectl apply -f -
# Log entry for egress access
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
name: egress-access
namespace: istio-system
spec:
severity: '"info"'
timestamp: request.time
variables:
destination: request.host | "unknown"
path: request.path | "unknown"
responseCode: response.code | 0
responseSize: response.size | 0
reporterUID: context.reporter.uid | "unknown"
sourcePrincipal: source.principal | "unknown"
monitored_resource_type: '"UNSPECIFIED"'
---
# Handler for error egress access entries
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
name: egress-error-logger
namespace: istio-system
spec:
severity_levels:
info: 2 # output log level as error
outputAsJson: true
---
# Rule to handle access to *.cnn.com/politics
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: handle-politics
namespace: istio-system
spec:
match: request.host.endsWith("cnn.com") && request.path.startsWith("/politics") && context.reporter.uid.startsWith("kubernetes://istio-egressgateway")
actions:
- handler: egress-error-logger.stdio
instances:
- egress-access.logentry
---
# Handler for info egress access entries
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
name: egress-access-logger
namespace: istio-system
spec:
severity_levels:
info: 0 # output log level as info
outputAsJson: true
---
# Rule to handle access to *.com
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: handle-cnn-access
namespace: istio-system
spec:
match: request.host.endsWith(".com") && context.reporter.uid.startsWith("kubernetes://istio-egressgateway")
actions:
- handler: egress-access-logger.stdio
instances:
- egress-access.logentry
EOF
But when I try to apply this, I get these errors:
no matches for kind "logentry" in version "config.istio.io/v1alpha2"
no matches for kind "stdio" in version "config.istio.io/v1alpha2"
no matches for kind "rule" in version "config.istio.io/v1alpha2"
no matches for kind "stdio" in version "config.istio.io/v1alpha2"
no matches for kind "rule" in version "config.istio.io/v1alpha2"
Is there a new API version for these kinds?
istioctl version
client version: 1.12.0
control plane version: 1.12.0
data plane version: 1.12.0 (28 proxies)
Is there a way to make a working Istio egress gateway with logging (the way the Istio ingress gateway logging works)?
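From what I can tell, the config.istio.io kinds in that blog post belong to the old Mixer telemetry, which was removed before Istio 1.12, so they no longer exist in my version. It looks like access logging on 1.12 is instead enabled via meshConfig.accessLogFile (e.g. /dev/stdout) or the alpha Telemetry API; a minimal sketch of the Telemetry variant that I believe applies here (assuming the built-in envoy access-log provider and that Istio runs in istio-system):
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system       # root namespace, so this applies mesh-wide, gateways included
spec:
  accessLogging:
  - providers:
    - name: envoy               # built-in provider that writes access logs to the proxy's stdout
With something like that in place, the egress gateway pod should print an access-log line for each TCP connection it proxies, which would make it easier to see whether the Oracle traffic reaches it at all.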
Related
I am trying to make an Istio gateway (with certificates from cert-manager) for public access to a deployed application. Here are the configurations:
Cert manager installed in cluster via helm:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
Certificate issuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: kube-system
spec:
acme:
email: xxx@gmail.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Secret resource that will be used to store the account's private key.
name: letsencrypt-staging
# Add a single challenge solver, HTTP01 using istio
solvers:
- http01:
ingress:
class: istio
Certificate file:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: url-certs
namespace: istio-system
annotations:
cert-manager.io/issue-temporary-certificate: "true"
spec:
secretName: url-certs
issuerRef:
name: letsencrypt-staging
kind: ClusterIssuer
commonName: bot.demo.live
dnsNames:
- bot.demo.live
- "*.demo.live"
Gateway file:
# gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https-url-1
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
credentialName: "url-certs" # This should match the Certificate secretName
Application Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: microbot
name: microbot
namespace: bot-demo
spec:
replicas: 1
selector:
matchLabels:
app: microbot
template:
metadata:
labels:
app: microbot
spec:
containers:
- name: microbot
image: dontrebootme/microbot:v1
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
Virtual service and application service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: microbot-virtual-svc
namespace: bot-demo
spec:
hosts:
- bot.demo.live
gateways:
- istio-system/public-gateway
http:
- match:
- uri:
prefix: "/"
route:
- destination:
host: microbot-service
port:
number: 9100
---
apiVersion: v1
kind: Service
metadata:
name: microbot-service
namespace: bot-demo
spec:
selector:
app: microbot
ports:
- port: 9100
targetPort: 80
Whenever I try to curl https://bot.demo.live, I get a certificate error. The certificate issuer is working. I just can't figure out how to expose the deployed application via the Istio gateway for external access. bot.demo.live is already in my /etc/hosts file and I can ping it just fine.
What am I doing wrong?
We have an API service which should be accessible only through a particular Istio gateway/virtual service.
Can this be achieved with Istio's AuthorizationPolicy?
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: deny-all
namespace: selfserviceportal
spec:
{}
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: api-server-svc
namespace: selfserviceportal
spec:
rules:
- from:
- source:
# How do I reference the istio gateway/virtual service here?
to:
- operation:
methods:
- GET
selector:
matchLabels:
app: api-server-svc
This is the gateway which should be allowed:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: api-server-gateway
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "ssp-api-server.internalroot.net"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: api-server-vservice
spec:
hosts:
- "ssp-api-server.internalroot.net"
gateways:
- api-server-gateway
http:
- match:
- uri:
prefix: /api
route:
- destination:
port:
number: 8000
host: api-server-svc
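A Gateway or VirtualService cannot be referenced by name in an AuthorizationPolicy; the usual approach is to match the peer identity (service account) of the ingress gateway pod in the from clause. A sketch, assuming the default istio-ingressgateway-service-account in istio-system (adjust to whatever service account your gateway deployment actually uses):
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: api-server-svc
  namespace: selfserviceportal
spec:
  selector:
    matchLabels:
      app: api-server-svc
  action: ALLOW
  rules:
  - from:
    - source:
        # assumption: the default ingress gateway's service account
        principals:
        - cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account
    to:
    - operation:
        methods: ["GET"]
This only works when mTLS is enabled between the gateway and the workload, since the principal is taken from the peer certificate.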
I want to limit which pods can access an external service.
There are two apps, A and B; A should be able to access example.com, but B should not.
First, create the ServiceEntry for the external service:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: example
spec:
hosts:
- example.com
addresses:
- 192.168.0.13
ports:
- number: 8888
name: tcp-8888
protocol: TCP
- number: 443
name: tcp-443
protocol: TCP
location: MESH_EXTERNAL
exportTo:
- .
Then create a policy so that only pods whose app label is app1 can access this ServiceEntry:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
name: whitelist
spec:
compiledAdapter: listchecker
params:
overrides:
- app1
blacklist: false
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
name: appname
spec:
compiledTemplate: listentry
params:
value: source.labels["app"]
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
name: checkapp
spec:
match: destination.service.host == "example.com"
actions:
- handler: whitelist
instances: [ appname ]
But this setting does not work.
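For what it's worth, the config.istio.io listchecker/listentry handlers are Mixer features and are gone from recent Istio releases. A different way to express the same restriction is a Sidecar resource that scopes what app B may reach, combined with a REGISTRY_ONLY outbound traffic policy; a rough sketch, assuming app B is labelled app: app-b and runs in a namespace called demo (both names are made up here):
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: restrict-app-b-egress
  namespace: demo                  # assumed namespace of app B
spec:
  workloadSelector:
    labels:
      app: app-b                   # assumed label of app B
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY            # block anything not imported below
  egress:
  - hosts:
    - "./*"                        # services in the same namespace
    - "istio-system/*"             # keep mesh infrastructure reachable
App A keeps the default Sidecar (or one that also imports the example ServiceEntry), so only B loses access to example.com.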
I am trying to expose Kiali on my default gateway. I have other services working for apps in the default namespace, but I have not been able to route traffic to anything in the istio-system namespace.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- '*'
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali
namespace: default
spec:
hosts:
- kiali.dev.example.com
gateways:
- gateway
http:
- route:
- destination:
host: kiali.istio-system.svc.cluster.local
port:
number: 20001
The problem was that I had mTLS enabled, and Kiali does not have a sidecar, so it cannot participate in mTLS. The solution was to add a DestinationRule disabling mTLS for it.
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
name: kiali
namespace: istio-system
spec:
host: kiali.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
You should define an ingress gateway and make sure that the hosts in the gateway match the hosts in the virtual service. Also specify the port of the destination. See the Control Ingress Traffic task.
For me this worked!
I ran
istioctl proxy-config routes istio-ingressgateway-866d7949c6-68tt4 -n istio-system -o json > ./routes.json
to get a dump of all the routes. The Kiali route had become corrupted for some reason. I deleted the virtual service and created it again, and that fixed it.
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kiali
namespace: istio-system
spec:
gateways:
- istio-system/my-gateway
hosts:
- 'mydomain.com'
http:
- match:
- uri:
prefix: /kiali/
route:
- destination:
host: kiali.istio-system.svc.cluster.local
port:
number: 20001
weight: 100
---
apiVersion: 'networking.istio.io/v1alpha3'
kind: DestinationRule
metadata:
name: kiali
namespace: istio-system
spec:
host: kiali.istio-system.svc.cluster.local
trafficPolicy:
tls:
mode: SIMPLE
---
Note: hosts needed to be set; '*' didn't work for some reason.
In the Helm values file there is a setting global.k8sIngressSelector with this description:
Gateway used for legacy k8s Ingress resources. By default it is
using 'istio:ingress', to match 0.8 config. It requires that
ingress.enabled is set to true. You can also set it
to ingressgateway, or any other gateway you define in the 'gateway'
section.
My interpretation of this is that the Istio ingress should pick up normal Kubernetes Ingress resources instead of my having to create a VirtualService. Is this correct? I tried it and it is not working for me.
kind: Deployment
apiVersion: apps/v1
metadata:
name: echo
spec:
replicas: 1
selector:
matchLabels:
app: echo
template:
metadata:
labels:
app: echo
spec:
containers:
- name: echo
image: mendhak/http-https-echo
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
name: echo
spec:
type: ClusterIP
selector:
app: echo
ports:
- port: 80
name: http
This works:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*.dev.example.com'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: echo
spec:
hosts:
- echo.dev.example.com
gateways:
- gateway
http:
- route:
- destination:
host: echo
This doesn't:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: echo
spec:
rules:
- host: echo.dev.example.com
http:
paths:
- backend:
serviceName: echo
servicePort: 80
Your Ingress needs to have the annotation kubernetes.io/ingress.class: istio.
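For example, the Ingress from the question with that annotation added (otherwise unchanged):
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: echo
  annotations:
    kubernetes.io/ingress.class: istio   # tells Istio to act as the controller for this Ingress
spec:
  rules:
  - host: echo.dev.example.com
    http:
      paths:
      - backend:
          serviceName: echo
          servicePort: 80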
Depending on what version of Istio you are using, it may not work anyway. There is currently an open issue about Ingress not working in recent releases, and it sounds like it may have been broken for a while:
https://github.com/istio/istio/issues/10500