How to set TLS to a service in EKS with PCA on AWS? - amazon-web-services

I created a TLS-enabled service with AWS Private CA (PCA) and cert-manager by following this post:
https://aws.amazon.com/blogs/security/tls-enabled-kubernetes-clusters-with-acm-private-ca-and-amazon-eks-2/
After I deployed a demo application with an ingress, I tested access from the control node:
$ curl https://demo.my-org.com --cacert cacert.pem
I got this message:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
The cacert.pem was downloaded from the AWS PCA Certificate body. Things look fine in Kubernetes for the AWSPCAClusterIssuer and the Certificate. The certificate description shows these events:
$ kubectl describe certificate rsa-cert-2048 -n acm-pca-lab-demo
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 47m cert-manager Existing issued Secret is not up to date for spec: [spec.commonName spec.dnsNames]
Normal Reused 47m cert-manager Reusing private key stored in existing Secret resource "rsa-example-cert-2048"
Normal Requested 47m cert-manager Created new CertificateRequest resource "rsa-cert-2048-pp4c4"
Normal Issuing 47m cert-manager The certificate has been successfully issued
If I access it from a browser, I get a 502 error. The certificate page shows a fake certificate and an alternative DNS name.
I'm sure the private CA in AWS was activated successfully. Its ARN and region were set in the EKS node policy and the AWSPCAClusterIssuer. What's wrong with the settings? How can I diagnose the issue?
deployed resources
I checked the deployed resources in the acm-pca-lab-demo namespace.
$ kubectl get secret -n acm-pca-lab-demo
NAME TYPE DATA AGE
default-token-jmxt7 kubernetes.io/service-account-token 3 10h
rsa-example-cert-2048 kubernetes.io/tls 3 10h
$ kubectl get all -n acm-pca-lab-demo
NAME READY STATUS RESTARTS AGE
pod/hello-world-57df4c69f9-nnjrl 1/1 Running 0 10h
pod/hello-world-57df4c69f9-r8f4p 1/1 Running 0 10h
pod/hello-world-57df4c69f9-xgm6w 1/1 Running 0 10h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-world ClusterIP 102.30.45.163 <none> 80/TCP 10h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hello-world 3/3 3 3 10h
NAME DESIRED CURRENT READY AGE
replicaset.apps/hello-world-57df4c69f9 3 3 3 10h
$ kubectl get ingress -n acm-pca-lab-demo
NAME CLASS HOSTS ADDRESS PORTS AGE
acm-pca-demo-ingress <none> demo.my-org.com 11111111111111111111111111111111-2222222222222222.elb.us-east-1.amazonaws.com 80, 443 10h
In the browser, I also got these messages:
The certificate is not trusted because it is self-signed.
HTTP Strict Transport Security: false
HTTP Public Key Pinning: false
certificate file
I downloaded the PCA .pem file from the AWS console here. Is it correct?
It's a file that starts with -----BEGIN CERTIFICATE-----.
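A quick way to sanity-check that file (assuming it is the CA certificate saved as cacert.pem, as in the curl command above) is to inspect it with openssl:
$ openssl x509 -in cacert.pem -noout -subject -issuer -dates
For the private CA's root certificate, the subject and issuer should be identical and should match the CA created in ACM PCA.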

Check your ingress configuration and, if possible, share the YAML config you used with the application deployment.
There is a chance that no secret is attached to the ingress; in that case the Kubernetes NGINX ingress controller attaches its default FAKE certificate instead of your generated one.
For example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: acm-pca-demo-ingress
  namespace: acm-pca-lab-demo
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - www.rsa-2048.example.com
    secretName: rsa-example-cert-2048
  rules:
  - host: www.rsa-2048.example.com
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: hello-world
            port:
              number: 80
As shown above with rsa-example-cert-2048, make sure the secret exists in the same namespace as the ingress.
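To double-check which certificate the controller is actually serving for that host, one option (names and hostname below are the ones from the question) is to confirm that the secret exists, that the ingress references it, and then look at the certificate presented on port 443:
$ kubectl get secret rsa-example-cert-2048 -n acm-pca-lab-demo
$ kubectl describe ingress acm-pca-demo-ingress -n acm-pca-lab-demo   # the TLS section should list the secret and host
$ echo | openssl s_client -connect demo.my-org.com:443 -servername demo.my-org.com 2>/dev/null | openssl x509 -noout -subject -issuer
If the last command prints something like "Kubernetes Ingress Controller Fake Certificate", the controller is not picking up the TLS secret for that host.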

Related

accessing kubernetes service from local host

I created a single-node cluster. There is a NodePort service:
kubectl get all --namespace default
service/backend-org-1-substra-backend-server NodePort 10.43.81.5 <none> 8000:30068/TCP 4d23h
The node IP is:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-k3s-default-server-0 Ready control-plane,master 5d v1.24.4+k3s1 172.18.0.2 <none> K3s dev 5.15.0-1028-aws containerd://1.6.6-k3s1
From the same host, but not inside the cluster, I can ping the 172.18.0.2 IP. Since backend-org-1-substra-backend-server is a NodePort service, shouldn't I be able to access it with
curl 172.18.0.2:30068? I get:
curl: (7) Failed to connect to 172.18.0.2 port 30068 after 0 ms: Connection refused
additional information:
$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl get nodes -o yaml
...
addresses:
- address: 172.24.0.2
type: InternalIP
- address: k3d-k3s-default-server-0
type: Hostname
allocatable:
$ kubectl describe svc backend-org-1-substra-backend-server
Name: backend-org-1-substra-backend-server
Namespace: org-1
Labels: app.kubernetes.io/instance=backend-org-1
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=substra-backend-server
app.kubernetes.io/part-of=substra-backend
app.kubernetes.io/version=0.34.1
helm.sh/chart=substra-backend-22.3.1
skaffold.dev/run-id=394a8d19-bbc8-4a3b-b04e-08e0fff40681
Annotations: meta.helm.sh/release-name: backend-org-1
meta.helm.sh/release-namespace: org-1
Selector: app.kubernetes.io/instance=backend-org-1,app.kubernetes.io/name=substra-backend-server
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.68.217
IPs: 10.43.68.217
Port: http 8000/TCP
TargetPort: http/TCP
NodePort: http 31960/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Here, I noticed that Endpoints shows <none>, which worries me.
I followed the doc at https://docs.substra.org/en/stable/contributing/getting-started.html
It's a lot to ask someone to replicate the whole thing.
My point is that, AFAIK, a NodePort service allows callers from outside the cluster to reach pods inside the cluster. But neither the cluster IP nor the node IP lets me curl that service.
I found that it was due to a faulty installation. Now wget to the load balancer IP and port does get a connection.
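More generally, an empty Endpoints field like the one above is worth checking whenever a service does not answer, since it means the service selector matches no ready pod. A quick check (names taken from the output above):
$ kubectl get endpoints backend-org-1-substra-backend-server -n org-1
$ kubectl get pods -n org-1 --show-labels
The pod labels have to include the selector shown in the service description (app.kubernetes.io/instance=backend-org-1,app.kubernetes.io/name=substra-backend-server) and the pods have to be Ready, otherwise kube-proxy has nothing to forward the NodePort to.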

Istio: DestinationRule for a legacy service outside the mesh

I have a k8s cluster with Istio deployed in the istio-system namespace, and sidecar injection enabled by default in another namespace called mesh-apps. I also have a second legacy namespace which contains certain applications that do their own TLS termination. I am trying to setup mTLS access between services running inside the mesh-apps namespace and those running inside legacy.
For this purpose, I have done the following:
Created a secret in the mesh-apps namespace containing the client cert, key and CAcert to be used to connect with an application in legacy via mTLS.
Mounted these at a well-defined location inside a pod (the sleep pod in Istio samples actually) running in mesh-apps.
Deployed an app inside legacy and exposed it using a ClusterIP service called mymtls-app on port 8443.
Created the following destination rule in the mesh-apps namespace, hoping that this enables mTLS access from mesh-apps to legacy.
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls
spec:
  host: mymtls-app.legacy.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: MUTUAL
        clientCertificate: /etc/sleep/tls/server.cert
        privateKey: /etc/sleep/tls/server.key
        caCertificates: /etc/sleep/tls/ca.pem
        sni: mymtls-app.legacy.svc.cluster.local
Now when I run the following command from inside the sleep pod, I would have expected the above DestinationRule to take effect:
kubectl exec sleep-37893-foobar -c sleep -- curl http://mymtls-app.legacy.svc.cluster.local:8443/hello
But instead I just get the error:
Client sent an HTTP request to an HTTPS server.
If I add https in the URL, then this is the error:
curl: (56) OpenSSL SSL_read: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate, errno 0
command terminated with exit code 56
I figured out my own mistake. I needed to mount the certificate, private key, and CA chain in the sidecar, not in the app container. In order to mount them in the sidecar, I performed the following actions:
Created a secret with the cert, private key and CA chain.
kubectl create secret generic sleep-secret -n mesh-apps \
--from-file=server.key=/home/johndoe/certs_mtls/client.key \
--from-file=server.cert=/home/johndoe/certs_mtls/client.crt \
--from-file=ca.pem=/home/johndoe/certs_mtls/server_ca.pem
Modified the deployment manifest for the sleep container thus:
template:
  metadata:
    annotations:
      sidecar.istio.io/userVolumeMount: '[{"name": "secret-volume", "mountPath": "/etc/sleep/tls", "readonly": true}]'
      sidecar.istio.io/userVolume: '[{"name": "secret-volume", "secret": {"secretName": "sleep-secret"}}]'
Actually I had already created the secret earlier, but it was mounted in the app container (sleep) instead of the sidecar, in this way:
spec:
  volumes:
  - name: <secret_volume_name>
    secret:
      secretName: <secret_name>
      optional: true
  containers:
  - name: ...
    volumeMounts:
    - mountPath: ...
      name: <secret_volume_name>
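To confirm that the sidecar, and not only the app container, can see the files, you can list the mount path inside the proxy container (a quick check assuming the default sidecar container name istio-proxy and the paths used in the DestinationRule above):
$ kubectl exec <sleep-pod> -n mesh-apps -c istio-proxy -- ls /etc/sleep/tls
# should list ca.pem, server.cert and server.key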

How can we generate Self Managed Certificates in GCP instance?

I am testing with OpenSSL in a GCP instance. How can I generate self-managed certificates in a GCP instance?
You can make the certificate and domain status active; it can take up to 30 minutes for your load balancer to begin using your self-managed SSL certificate.
To test this, you can run the following OpenSSL command, replacing DOMAIN with your DNS name and IP_ADDRESS with the IP address of your load balancer:
echo | openssl s_client -showcerts -servername DOMAIN -connect IP_ADDRESS:443 -verify 99 -verify_return_error
This command outputs the certificates that the load balancer presents to the client. Along with other detailed information, the output should include the certificate chain and
Verify return code: 0 (ok)
For more information, you can refer to this link.
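For the generation step itself, a self-managed certificate is simply one you create and upload yourself; for example, on the instance you can create a self-signed certificate and key with OpenSSL (a minimal sketch; the 2048-bit key, one-year validity and file names are just example values):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout private-key.pem -out certificate.pem \
  -subj "/CN=DOMAIN"
You can then upload private-key.pem and certificate.pem as a self-managed SSL certificate resource (e.g. with gcloud compute ssl-certificates create), attach it to the load balancer, and test it with the s_client command above.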
There are many ways to issue certificates; let's focus on a K8s cluster running on Google (GKE), using a custom resource called ManagedCertificate and ingress rules.
You must own the domain name, and the name must be no longer than 63 characters.
Create a reserved (static) external IP address using the following command, or use the Google console:
gcloud compute addresses create gke-static-ip --global
Create the ManagedCertificate and the Ingress:
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: gke-certificate
spec:
  domains:
  - DOMAIN
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gke-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: gke-static-ip
    networking.gke.io/managed-certificates: gke-certificate
spec:
  backend:
    serviceName: hello-world-service
    servicePort: 80
In my case I used Cloud Endpoints as the domain name.
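Provisioning is not instant; one way to watch the certificate until it becomes active (resource name from the manifest above) is:
$ kubectl get managedcertificate gke-certificate
$ kubectl describe managedcertificate gke-certificate   # wait for the certificate status to become Active
The domain must already resolve to the reserved static IP for provisioning to succeed.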

kiali showing unknown traffic when sending through ambassador

I have installed a service mesh (Istio) and I am working with Ambassador to route traffic to our application. Whenever I send traffic through the Istio ingress it works fine, and the application also works with Ambassador, but when sending through Ambassador, Kiali shows the source as unknown, as you can see in the attached image. It could be related to the fact that Ambassador does not use an Istio sidecar.
This is the code used to deploy the Ambassador service:
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: ambassador-http
    port: 80
    targetPort: 8080
  selector:
    service: ambassador
---
Is there anything I can add here to make it possible?
Thanks
Yes, it is possible, and here is a detailed guide for this from the Ambassador documentation:
Getting Ambassador Working With Istio
Getting Ambassador working with Istio is straightforward. In this example, we'll use the bookinfo sample application from Istio.
Install Istio on Kubernetes, following the default instructions (without using mutual TLS auth between sidecars)
Next, install the Bookinfo sample application, following the instructions.
Verify that the sample application is working as expected.
By default, the Bookinfo application uses the Istio ingress. To use Ambassador, we need to:
Install Ambassador.
First you will need to deploy the Ambassador ambassador-admin service to your cluster:
It's simplest to use the YAML files we have online for this (though of course you can download them and use them locally if you prefer!).
First, you need to check if Kubernetes has RBAC enabled:
kubectl cluster-info dump --namespace kube-system | grep authorization-mode
If you see something like --authorization-mode=Node,RBAC in the output, then RBAC is enabled.
If RBAC is enabled, you'll need to use:
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml
Without RBAC, you can use:
kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml
(Note that if you are planning to use mutual TLS for communication between Ambassador and Istio/services in the future, then the order in which you deploy the ambassador-admin service and the ambassador LoadBalancer service below may need to be swapped)
Next you will deploy an ambassador service that acts as a point of ingress into the cluster via the LoadBalancer type. Create the following YAML and put it in a file called ambassador-service.yaml.
---
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: httpbin
spec:
  prefix: /httpbin/
  service: httpbin.org
  host_rewrite: httpbin.org
Then, apply it to Kubernetes with kubectl:
kubectl apply -f ambassador-service.yaml
The YAML above does several things:
It creates a Kubernetes service for Ambassador, of type LoadBalancer. Note that if you're not deploying in an environment where LoadBalancer is a supported type (e.g. Minikube), you'll need to change this to a different type of service, e.g. NodePort.
It creates a test route that will route traffic from /httpbin/ to the public httpbin.org HTTP Request and Response service (which provides useful endpoints for diagnostic purposes). In Ambassador, Kubernetes annotations (as shown above) are used for configuration. More commonly, you'll want to configure routes as part of your service deployment process, as shown in this more advanced example.
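Note that the snippet above only shows the Mapping; the ambassador LoadBalancer Service itself is a separate manifest, roughly along these lines (a minimal sketch; the port numbers and selector here follow the service shown in the question, so adjust them to whatever your Ambassador deployment actually listens on):
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  ports:
  - name: ambassador-http
    port: 80
    targetPort: 8080
  selector:
    service: ambassador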
You can see if the two Ambassador services are running correctly (and also obtain the LoadBalancer IP address when this is assigned after a few minutes) by executing the following commands:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ambassador LoadBalancer 10.63.247.1 35.224.41.XX 8080:32171/TCP 11m
ambassador-admin NodePort 10.63.250.17 <none> 8877:32107/TCP 12m
details ClusterIP 10.63.241.224 <none> 9080/TCP 16m
kubernetes ClusterIP 10.63.240.1 <none> 443/TCP 24m
productpage ClusterIP 10.63.248.184 <none> 9080/TCP 16m
ratings ClusterIP 10.63.255.72 <none> 9080/TCP 16m
reviews ClusterIP 10.63.252.192 <none> 9080/TCP 16m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ambassador-2680035017-092rk 2/2 Running 0 13m
ambassador-2680035017-9mr97 2/2 Running 0 13m
ambassador-2680035017-thcpr 2/2 Running 0 13m
details-v1-3842766915-3bjwx 2/2 Running 0 17m
productpage-v1-449428215-dwf44 2/2 Running 0 16m
ratings-v1-555398331-80zts 2/2 Running 0 17m
reviews-v1-217127373-s3d91 2/2 Running 0 17m
reviews-v2-2104781143-2nxqf 2/2 Running 0 16m
reviews-v3-3240307257-xl1l6 2/2 Running 0 16m
Above we see that external IP assigned to our LoadBalancer is 35.224.41.XX (XX is used to mask the actual value), and that all ambassador pods are running (Ambassador relies on Kubernetes to provide high availability, and so there should be two small pods running on each node within the cluster).
You can test if Ambassador has been installed correctly by using the test route to httpbin.org to get the external cluster Origin IP from which the request was made:
$ curl 35.224.41.XX/httpbin/ip
{
"origin": "35.192.109.XX"
}
If you're seeing a similar response, then everything is working great!
(Bonus: if you want to use a little bit of awk magic to export the LoadBalancer IP to a variable AMBASSADOR_IP, you can type export AMBASSADOR_IP=$(kubectl get services ambassador | tail -1 | awk '{ print $4 }') and use curl $AMBASSADOR_IP/httpbin/ip.)
Now you are going to modify the bookinfo demo bookinfo.yaml manifest to include the necessary Ambassador annotations. See below.
---
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: productpage
spec:
  prefix: /productpage/
  rewrite: /productpage
  service: productpage:9080
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
The annotation above implements an Ambassador mapping from the '/productpage/' URI to the Kubernetes productpage service running on port 9080 ('productpage:9080'). The 'prefix' mapping URI is taken from the context of the root of your Ambassador service that is acting as the ingress point (exposed externally via port 80 because it is a LoadBalancer) e.g. '35.224.41.XX/productpage/'.
You can now apply this manifest from the root of the Istio GitHub repo on your local file system (taking care to wrap the apply with istioctl kube-inject):
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)
Optionally, delete the Ingress controller from the bookinfo.yaml manifest by typing kubectl delete ingress gateway.
Test Ambassador by going to the IP of the Ambassador LoadBalancer you configured above e.g. 35.192.109.XX/productpage/. You can see the actual IP address again for Ambassador by typing kubectl get services ambassador.
Also according to documentation there is no need for Ambassador pods to be injected.
Yes, I have already configured all these things. That's why I mentioned it with the attached image, which I took from the Kiali dashboard. The output I shared is from the bookinfo application. I have deployed my own application and it is also working fine.
But I want to sort out this unknown thing.
I am using an AWS EKS cluster.
A note about Ambassador:
Ambassador should not have the Istio sidecar for two reasons. First, it cannot since running the two separate Envoy instances will result in a conflict over their shared memory segment. The second is Ambassador should not be in your mesh anyway. The mesh is great for handling traffic routing from service to service, but since Ambassador is your ingress point, it should be solely in charge of deciding which service to route to and how to do it. Having both Ambassador and Istio try to set routing rules would be a headache and wouldn't make much sense.
All the traffic coming from a source that is not part of the service mesh is going to be shown as unknown.
See what kiali says about the unknowns:
https://kiali.io/faq/graph/#many-unknown

Kubernetes Cluster-IP service not working as expected

Ok, so currently I've got a Kubernetes master up and running on an AWS EC2 instance, and a single worker running on my laptop:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 34d v1.9.2
worker Ready <none> 20d v1.9.2
I have created a Deployment using the following configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
  labels:
    app: hostnames-deployment
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 1
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP
The deployment is running:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hostnames 1 1 1 1 1m
A single pod has been created on the worker node:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hostnames-86b6bcdfbc-v8s8l 1/1 Running 0 2m
From the worker node, I can curl the pod and get the information:
$ curl 10.244.8.5:9376
hostnames-86b6bcdfbc-v8s8l
I have created a service using the following configuration:
kind: Service
apiVersion: v1
metadata:
  name: hostnames-service
spec:
  selector:
    app: hostnames
  ports:
  - port: 80
    targetPort: 9376
The service is up and running:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames-service ClusterIP 10.97.21.18 <none> 80/TCP 1m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 34d
As I understand it, the service should expose the pod cluster-wide, and I should be able to use the service IP to get the information the pod is serving from any node in the cluster.
If I curl the service from the worker node it works just as expected:
$ curl 10.97.21.18:80
hostnames-86b6bcdfbc-v8s8l
But if I try to curl the service from the master node located on the AWS EC2 instance, the request hangs and gets timed out eventually:
$ curl -v 10.97.21.18:80
* Rebuilt URL to: 10.97.21.18:80/
* Trying 10.97.21.18...
* connect to 10.97.21.18 port 80 failed: Connection timed out
* Failed to connect to 10.97.21.18 port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.97.21.18 port 80: Connection timed out
Why can't the request from the master node reach the pod on the worker node by using the Cluster-IP service?
I have read quite a few articles on Kubernetes networking, as well as the official Kubernetes Services documentation, and couldn't find a solution.
Depending on which mode you are using, it works differently in the details, but conceptually it is the same.
You are trying to connect to two different types of addresses: the pod IP address, which is accessible from the node, and the virtual IP address, which is accessible from pods inside the Kubernetes cluster.
The IP address of the service is not an IP address on some pod or any other object; it is a virtual address which is mapped to pod IP addresses based on the rules you define in the service, and it is managed by the kube-proxy daemon, which is part of Kubernetes.
That address is specifically intended for communication inside the cluster, so that you can reach the pods behind a service without caring about how many replicas of the pod you have or where they are actually running, because the service IP is static, unlike a pod's IP.
So, a service IP address is meant to be reachable from other pods, not from nodes.
You can read about how Service virtual IPs work in the official documentation.
kube-proxy is responsible for setting up the IPTables rules (by default) that route cluster IPs. The Service's cluster IP should be routable from anywhere running kube-proxy. My first guess would be that kube-proxy is not running on the master.
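A quick way to check that guess (the cluster IP below is the one from the question, and this assumes kube-proxy runs as a DaemonSet in kube-system, which is the kubeadm default):
$ kubectl get pods -n kube-system -o wide | grep kube-proxy   # expect one pod per node, including the master
$ sudo iptables-save | grep 10.97.21.18                       # run on the master: the cluster IP should appear in KUBE-SERVICES rules
If the master has no kube-proxy pod or no matching iptables rules, there is nothing on that node translating the cluster IP to a pod IP.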