How to use my own root certificate for mTLS in Istio? - istio

I am new to Istio and I want to use my own root certificate for mTLS in Istio.
I am following this doc: https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/
I followed the instructions and they work fine.
The task creates a secret named cacerts in the istio-system namespace to store the certificates, and Istio uses those certificates as the root and intermediate certificates for mTLS.
Now, I want to know 2 things:
When I change the secret name from cacerts to cacerts1, Istio no longer uses the certificates present in cacerts1. What should I do to make Istio use the certificates stored in a secret that is named something other than cacerts?
If my secret (which contains the certificates) is present in a different namespace, how do I use that secret?

If you are bringing your own certificates instead of Istio's self-signed cert, you have to name the secret cacerts. For your second question: the cacerts secret is namespace scoped and has to be in the istio-system namespace only.
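For reference, the plug-in CA task linked above creates that secret roughly like this (the .pem file names follow that doc and are placeholders for your own CA files):
kubectl create secret generic cacerts -n istio-system \
  --from-file=ca-cert.pem \
  --from-file=ca-key.pem \
  --from-file=root-cert.pem \
  --from-file=cert-chain.pem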

I was able to partially resolve this problem. Let's say you have a secret named customsecret in the istio-system namespace; then you can achieve this with the following IstioOperator file:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  components:
    pilot:
      k8s:
        env:
        overlays:
        - apiVersion: apps/v1
          kind: Deployment
          name: istiod
          patches:
          - path: spec.template.spec.volumes.[name:cacerts].secret.secretName
            value: "customsecret"
Just make sure the secret is present in the istio-system namespace. I could not find a way to use a secret that is not present in the istio-system namespace.
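If it helps, I applied that overlay with istioctl (assuming istioctl 1.6+; the file name istio-operator.yaml is just an example):
istioctl install -f istio-operator.yaml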

Related

mTLS between services running inside and outside a mesh using Istio's trust chain

I understand that I can configure Istio for its Citadel component to use a root x509 certificate + private key that I provide. Can I extend this system in a way that I also use the same root to issue certificates to legacy workloads running in the same k8s cluster, and then configure a destination rule to access these workloads from inside the mesh? Something like:
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls
spec:
  host: mymtls-app.legacy.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: ISTIO_MUTUAL
        sni: mymtls-app.legacy.svc.cluster.local
Can the above work? Do I need any additional configuration besides the above? I may not be in a position to run SPIFFE/SPIRE to manage the certificates for workloads outside the mesh, which puts a SPIFFE-federation solution like this somewhat out of reach for me. But this also doesn't seem like a fully supported mechanism in any case.
I have been able to configure mTLS using a separate certificate hierarchy which I have to inject via secrets and mount into the pods / sidecars in question (illustrated here).
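For the "issue certs to legacy workloads from the same root" part, a minimal sketch with plain OpenSSL, assuming you still hold the ca-cert.pem / ca-key.pem that were plugged into Istio (all file names here are placeholders, and this only covers minting the certificate, not whether ISTIO_MUTUAL will then be accepted by that workload):
# Hypothetical: mint a leaf certificate for the legacy workload from the same CA
openssl req -newkey rsa:2048 -nodes \
  -keyout mymtls-app.key \
  -subj "/CN=mymtls-app.legacy.svc.cluster.local" \
  -out mymtls-app.csr
openssl x509 -req -in mymtls-app.csr \
  -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial \
  -days 365 -out mymtls-app.crt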

Istio: DestinationRule for a legacy service outside the mesh

I have a k8s cluster with Istio deployed in the istio-system namespace, and sidecar injection enabled by default in another namespace called mesh-apps. I also have a second namespace, legacy, which contains certain applications that do their own TLS termination. I am trying to set up mTLS access between services running inside the mesh-apps namespace and those running inside legacy.
For this purpose, I have done the following:
Created a secret in the mesh-apps namespace containing the client cert, key, and CA cert to be used to connect to an application in legacy via mTLS.
Mounted these at a well-defined location inside a pod (the sleep pod in Istio samples actually) running in mesh-apps.
Deployed an app inside legacy and exposed it using a ClusterIP service called mymtls-app on port 8443.
Created the following destination rule in the mesh-apps namespace, hoping that this enables mTLS access from mesh-apps to legacy.
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls
spec:
  host: mymtls-app.legacy.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: MUTUAL
        clientCertificate: /etc/sleep/tls/server.cert
        privateKey: /etc/sleep/tls/server.key
        caCertificates: /etc/sleep/tls/ca.pem
        sni: mymtls-app.legacy.svc.cluster.local
Now when I run the following command from inside the sleep pod, I would have expected the above DestinationRule to take effect:
kubectl exec sleep-37893-foobar -c sleep -- curl http://mymtls-app.legacy.svc.cluster.local:8443/hello
But instead I just get the error:
Client sent an HTTP request to an HTTPS server.
If I add https in the URL, then this is the error:
curl: (56) OpenSSL SSL_read: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate, errno 0
command terminated with exit code 56
I figured out my own mistake. I needed to mount the certificate, private key, and CA chain in the sidecar, not in the app container. In order to mount them in the sidecar, I performed the following actions:
Created a secret with the cert, private key and CA chain.
kubectl create secret generic sleep-secret -n mesh-apps \
--from-file=server.key=/home/johndoe/certs_mtls/client.key \
--from-file=server.cert=/home/johndoe/certs_mtls/client.crt \
--from-file=ca.pem=/home/johndoe/certs_mtls/server_ca.pem
Modified the deployment manifest for the sleep container thus:
template:
  metadata:
    annotations:
      sidecar.istio.io/userVolumeMount: '[{"name": "secret-volume", "mountPath": "/etc/sleep/tls", "readonly": true}]'
      sidecar.istio.io/userVolume: '[{"name": "secret-volume", "secret": {"secretName": "sleep-secret"}}]'
Actually I had already created the secret earlier, but it was mounted in the app container (sleep) instead of the sidecar, in this way:
spec:
  volumes:
  - name: <secret_volume_name>
    secret:
      secretName: <secret_name>
      optional: true
  containers:
  - name: ...
    volumeMounts:
    - mountPath: ...
      name: <secret_volume_name>
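To double-check that the files ended up in the sidecar rather than the app container, you can list the mount path inside the istio-proxy container (pod name as in the example above):
kubectl exec sleep-37893-foobar -c istio-proxy -n mesh-apps -- ls /etc/sleep/tls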

How can we generate Self Managed Certificates in GCP instance?

I am testing with OpenSSL on a GCP instance. How can I generate self-managed certificates on a GCP instance?
Once the certificate and domain status are active, it can take up to 30 minutes for your load balancer to begin using your self-managed SSL certificate.
To test this you can run the following OpenSSL command, replacing DOMAIN with your DNS name and IP_ADDRESS with the IP address of your load balancer.
echo | openssl s_client -showcerts -servername DOMAIN -connect IP_ADDRESS:443 -verify 99 -verify_return_error
This command outputs the certificates that the load balancer presents to the client. Along with other detailed information, the output should include the certificate chain and end with Verify return code: 0 (ok).
For more information you can refer to this link.
There are many ways to issue certificates; let's focus on a K8s cluster running on Google (GKE), using a custom resource called ManagedCertificate together with Ingress rules.
You must own the domain name, and the name must be no longer than 63 characters.
Create a reserved (static) external IP address using the following command, or use the Google console.
gcloud compute addresses create gke-static-ip --global
Create the ManagedCertificate and the Ingress that references it:
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: gke-certificate
spec:
  domains:
  - DOMAIN
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gke-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: gke-static-ip
    networking.gke.io/managed-certificates: gke-certificate
spec:
  backend:
    serviceName: hello-world-service
    servicePort: 80
In my case I used Cloud Endpoints as the domain name.
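To check whether the certificate has finished provisioning, describing the resource should show its status (resource name as in the manifest above):
kubectl describe managedcertificate gke-certificate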

GKE Ingress is not working with cert-manager ssl secrets

I am trying to get Let's Encrypt to work with a GKE LB. I know there are GCP managed certs, but they will not work with an internal LB, as the challenge will not get passed. Let's Encrypt DNS certification using cert-manager is set up and ready to be used.
❯ k get secrets letsencrypt-prod -o yaml
apiVersion: v1
data:
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdlVZTVhXdGNZZUJpMkdadzljRFRLNzY==
kind: Secret
metadata:
  creationTimestamp: "2021-01-24T15:03:39Z"
  name: letsencrypt-prod
  namespace: elastic-system
  resourceVersion: "3636289"
  selfLink: /api/v1/namespaces/elastic-system/secrets/letsencrypt-prod
  uid: f4bec5a9-d3b5-4f4a-9ec6-01a4ce3ba47c
type: Opaque
spec:
  tls:
  - hosts:
    - staging.example.com
    - staging2.example.com
    secretName: letsencrypt-prod
GCP is reporting this error: Error syncing to GCP: error running load balancer syncing routine: error getting secrets for Ingress: secret "letsencrypt-prod" does not specify cert as string data
Can anybody help me with what is missing?
As per this, you must provide the certificate in a format GCP accepts, like this, using the Let's Encrypt certs you already have:
kubectl create secret generic letsencrypt-prod --from-file=tls.crt="cert.pem" --from-file=tls.key="privkey.pem" --dry-run -o yaml > output
kubectl apply -f output
Also (it seems you are already doing this, but better safe than sorry), you must reference the secret in the tls section of your Ingress, as per this.
Actually, this is easy to miss in the docs (or I missed it), because the example uses the same name everywhere, including the metadata.
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: cert-example
  namespace: example
spec:
  secretName: REAL_NAME_OF_SECRET  # << this is the name to reference in the Ingress
  issuerRef:
    name: letsencrypt-prod
  dnsNames:
  - 'staging.domain.com'
  - '*.staging.domain.com'
So REAL_NAME_OF_SECRET is what you should put in the Ingress, or anywhere else you want to use tls.crt or tls.key.
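In other words, the Ingress tls section should point at the secretName from the Certificate rather than at the issuer name; roughly like this (the host name is just an example):
spec:
  tls:
  - hosts:
    - staging.domain.com
    secretName: REAL_NAME_OF_SECRET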

Istio allow only specific IP CIDR and deny rest

I have a requirement wherein I would like to allow certain CIDR ranges to access my service; everything else should be denied.
I have tried the Istio IP whitelisting/blacklisting as mentioned in the official Istio documentation.
For example, 10.0.0.2/16 should be allowed and the rest should be denied. This doesn't seem to work.
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.0.0.2/16"]  # overrides provide a static list
    blacklist: true
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
So basically Istio 1.5.0 was released a few days ago, and if we check the Istio docs, white/black listing is now deprecated.
Denials and White/Black Listing (Deprecated)
But there is actually good news, because there is a new example for authorization on the ingress gateway which should answer your question.
I am not able to get the real client IP, hence I am not able to block/allow using an authorization policy or IP-based whitelisting.
Based on this new example, which I tested myself, if you want to see your source IP you have to change the istio-ingressgateway externalTrafficPolicy from Cluster to Local.
Update the ingress gateway to set externalTrafficPolicy: Local to preserve the original client source IP on the ingress gateway, using the following command:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
And the allow example:
The following example creates the authorization policy, ingress-policy, for the Istio ingress gateway. The policy sets the action field to ALLOW so that the IP addresses specified in ipBlocks can access the ingress gateway. IP addresses not in the list will be denied. The ipBlocks field supports both single IP addresses and CIDR notation.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
I hope this answers your question. Let me know if you have any more questions.
Another solution in Istio 1.5:
Configure the external traffic policy:
kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Or in a Helm installation:
--set gateways.istio-ingressgateway.externalTrafficPolicy
And then you can use it in any namespace like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["istio-system"]
    when:
    - key: request.headers[X-Envoy-External-Address]
      values: ["1.2.3.4/24"]  # IP to allow
  selector:
    matchLabels:
      app: httpbin
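If you are unsure which client address actually reaches the workload (and therefore whether to match on ipBlocks or on the X-Envoy-External-Address header), one way to inspect it is to call the httpbin /headers endpoint from a test pod; this assumes the standard Istio sleep and httpbin samples deployed in namespace foo, with httpbin exposed on port 8000:
kubectl exec "$(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name})" \
  -c sleep -n foo -- curl -s http://httpbin.foo:8000/headers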