I am trying to set up mTLS for outgoing connections, but instead of originating the TLS traffic from the egress gateway, I want to originate it from the sidecar proxy itself.
I took care of mounting the client certs in my sidecar proxy container and verified that they are available at the expected path. My API resources look something like this:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-host-mtls
spec:
  hosts:
  - external-host-example.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: external-mtls
spec:
  hosts:
  - external-host-example.com
  tls:
  - match:
    - port: 443
      sniHosts:
      - external-host-example.com
    route:
    - destination:
        host: external-host-example.com
        port:
          number: 443
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: external-mtls
spec:
  host: external-host-example.com
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/client-certs/client.pem
      privateKey: /etc/client-certs/client.key
      caCertificates: /etc/client-certs/ca.pem
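For comparison (an assumption on my part, following the pattern in Istio's egress TLS-origination examples rather than anything confirmed for this setup): in those examples the application sends plain HTTP on port 80 and the sidecar upgrades the connection to (m)TLS on port 443, so the traffic is not TLS-wrapped twice. A sketch of that variant, reusing the hosts above, would also need an extra port 80/HTTP entry on the ServiceEntry:

```yaml
# Sketch only: the application speaks plain HTTP to port 80; the sidecar
# originates mTLS on port 443 using the DestinationRule above.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: external-host-tls-origination
spec:
  hosts:
  - external-host-example.com
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: external-host-example.com
        port:
          number: 443
```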
When I curl external-host-example.com, I expect Istio to add the client certs to the connection. I'm not sure that is happening, because I'm running into errors.
curl -H "Host: external-host-example.com" --tlsv1.2 -v https://external-host-example.com
* About to connect() to external-host-example.com port 443 (#0)
* Trying x.x.x.x...
* Connected to external-host-example.com (x.x.x.x) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
curl: (35) Encountered end of file
Looking at the debug logs, I see this:
2020-11-17T15:53:58.226367Z debug envoy filter [external/envoy/source/extensions/filters/listener/tls_inspector/tls_inspector.cc:148] tls:onServerName(), requestedServerName: external-host-example.com
2020-11-17T15:53:58.226443Z debug envoy filter [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:251] [C161] new tcp proxy session
2020-11-17T15:53:58.226480Z debug envoy filter [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:395] [C161] Creating connection to cluster outbound|443||external-host-example.com
2020-11-17T15:53:58.226509Z debug envoy pool [external/envoy/source/common/tcp/conn_pool.cc:83] creating a new connection
2020-11-17T15:53:58.226550Z debug envoy pool [external/envoy/source/common/tcp/conn_pool.cc:364] [C162] connecting
2020-11-17T15:53:58.226557Z debug envoy connection [external/envoy/source/common/network/connection_impl.cc:727] [C162] connecting to x.x.x.x:443
2020-11-17T15:53:58.226641Z debug envoy connection [external/envoy/source/common/network/connection_impl.cc:736] [C162] connection in progress
2020-11-17T15:53:58.226656Z debug envoy pool [external/envoy/source/common/tcp/conn_pool.cc:109] queueing request due to no available connections
2020-11-17T15:53:58.226662Z debug envoy conn_handler [external/envoy/source/server/connection_handler_impl.cc:411] [C161] new connection
2020-11-17T15:53:58.252446Z debug envoy connection [external/envoy/source/common/network/connection_impl.cc:592] [C162] connected
2020-11-17T15:53:58.252555Z debug envoy connection [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:191] [C162] handshake expecting read
2020-11-17T15:53:58.277388Z debug envoy connection [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:191] [C162] handshake expecting read
2020-11-17T15:53:58.277417Z debug envoy connection [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:191] [C162] handshake expecting read
2020-11-17T15:53:58.277595Z debug envoy connection [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:176] [C162] handshake complete
2020-11-17T15:53:58.277633Z debug envoy pool [external/envoy/source/common/tcp/conn_pool.cc:285] [C162] assigning connection
2020-11-17T15:53:58.277661Z debug envoy filter [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:624] TCP:onUpstreamEvent(), requestedServerName:external-host-example.com
2020-11-17T15:53:58.303804Z debug envoy connection [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:226] [C162]
2020-11-17T15:53:58.303830Z debug envoy connection [external/envoy/source/common/network/connection_impl.cc:558] [C162] remote close
2020-11-17T15:53:58.303834Z debug envoy connection [external/envoy/source/common/network/connection_impl.cc:200] [C162] closing socket: 0
2020-11-17T15:53:58.303853Z debug envoy connection [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:298] [C162] SSL shutdown: rc=-1
2020-11-17T15:53:58.303855Z debug envoy connection [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:226] [C162]
2020-11-17T15:53:58.303880Z debug envoy pool [external/envoy/source/common/tcp/conn_pool.cc:124] [C162] client disconnected
2020-11-17T15:53:58.303894Z debug envoy connection [external/envoy/source/common/network/connection_impl.cc:109] [C161] closing data_to_write=0 type=0
2020-11-17T15:53:58.303900Z debug envoy connection [external/envoy/source/common/network/connection_impl.cc:200] [C161] closing socket: 1
2020-11-17T15:53:58.303985Z debug envoy conn_handler [external/envoy/source/server/connection_handler_impl.cc:111] [C161] adding to cleanup list
Any idea what I am doing wrong? How do I debug this further?
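Since the log shows the upstream completing the handshake and then closing immediately, one local sanity check is whether the mounted client cert and key actually pair up; a mismatched pair makes many servers drop the connection exactly like this. A self-contained sketch follows (the /tmp files are generated stand-ins; in the pod you would point the variables at the real /etc/client-certs files instead):

```shell
# Generate a throwaway pair so this check runs anywhere; substitute the
# real mounted files when debugging inside the sidecar container.
CERT=/tmp/client.pem
KEY=/tmp/client.key
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-client" -keyout "$KEY" -out "$CERT" 2>/dev/null

# The public key embedded in the certificate must equal the public key
# derived from the private key, otherwise the server aborts right after
# Envoy presents the client certificate.
cert_pub=$(openssl x509 -in "$CERT" -pubkey -noout)
key_pub=$(openssl pkey -in "$KEY" -pubout 2>/dev/null)
if [ "$cert_pub" = "$key_pub" ]; then
    echo "cert and key match"
else
    echo "cert/key MISMATCH"
fi
```

The same session can also verify the server's chain against the mounted CA bundle with `openssl s_client -CAfile /etc/client-certs/ca.pem`.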
Related
I am trying to expose an EKS deployment of Kafka outside the cluster, within the same VPC.
In Terraform, I added an ingress rule to the Kafka security group:
ingress {
  from_port = 9092
  protocol  = "tcp"
  to_port   = 9092
  cidr_blocks = [
    "10.0.0.0/16",
  ]
}
This is the service YAML:
apiVersion: v1
kind: Service
metadata:
  name: bootstrap-external
  namespace: kafka
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "10.0.0.0/16"
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-0....d,sg-0db....ae"
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9092
    targetPort: 9092
  selector:
    app: kafka
When I try to connect from another instance belonging to one of the security groups in the YAML,
I can establish a connection through the load balancer, but I am not referred on to Kafka:
[ec2-user@ip-10-0-4-47 kafkacat]$ nc -zvw10 internal-a08....628f-1654182718.us-east-2.elb.amazonaws.com 9092
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.0.3.151:9092.
Ncat: 0 bytes sent, 0 bytes received in 0.05 seconds.
[ec2-user@ip-10-0-4-47 kafkacat]$ nmap -Pn internal-a0837....a0e628f-1654182718.us-east-2.elb.amazonaws.com -p 9092
Starting Nmap 6.40 ( http://nmap.org ) at 2021-02-28 07:19 UTC
Nmap scan report for internal-a083747ab.....8f-1654182718.us-east-2.elb.amazonaws.com (10.0.2.41)
Host is up (0.00088s latency).
Other addresses for internal-a083747ab....36f0a0e628f-1654182718.us-east-2.elb.amazonaws.com (not scanned): 10.0.3.151 10.0.1.85
rDNS record for 10.0.2.41: ip-10-0-2-41.us-east-2.compute.internal
PORT STATE SERVICE
9092/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds
[ec2-user@ip-10-0-4-47 kafkacat]$ kafkacat -b internal-a083747abf4....-1654182718.us-east-2.elb.amazonaws.com:9092 -t models
% Auto-selecting Consumer mode (use -P or -C to override)
% ERROR: Local: Host resolution failure: kafka-2.broker.kafka.svc.cluster.local:9092/2: Failed to resolve 'kafka-2.broker.kafka.svc.cluster.local:9092': Name or service not known
% ERROR: Local: Host resolution failure: kafka-1.broker.kafka.svc.cluster.local:9092/1: Failed to resolve 'kafka-1.broker.kafka.svc.cluster.local:9092': Name or service not known
% ERROR: Local: Host resolution failure: kafka-0.broker.kafka.svc.cluster.local:9092/0: Failed to resolve 'kafka-0.broker.kafka.svc.cluster.local:9092': Name or service not known
^C[ec2-user@ip-10-0-4-47 kafkacat]$
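For context on the resolution errors above: after the bootstrap connection, Kafka clients reconnect to whatever addresses the brokers advertise, and these brokers advertise their cluster-internal DNS names (kafka-0.broker.kafka.svc.cluster.local), which do not resolve outside the cluster. A sketch of the broker setting involved (listener names and the external port are assumptions, chosen to line up with the 9094/32400 fix described below):

```properties
# server.properties -- each broker must advertise a name the external
# client can resolve and reach, e.g. a per-broker load balancer DNS name:
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://kafka-0.broker.kafka.svc.cluster.local:9092,EXTERNAL://kafka-0.example.internal:32400
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```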
We solved the Kafka connection by:
Adding an ingress rule to the Kafka worker security group (we use Terraform):
ingress {
  from_port = 9094
  protocol  = "tcp"
  to_port   = 9094
  cidr_blocks = [
    "10.0.0.0/16",
  ]
}
Provisioning a load balancer service for each broker in Kubernetes YAML (note that the last digit of the nodePort corresponds to the broker's StatefulSet ordinal):
apiVersion: v1
kind: Service
metadata:
  name: bootstrap-external-0
  namespace: kafka
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "10.0.0.0/16"
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-....d,sg-0db14....e,sg-001ce.....e,sg-0fe....15d13c
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    targetPort: 9094
    port: 32400
    nodePort: 32400
  selector:
    app: kafka
    kafka-broker-id: "0"
Retrieving the load balancer name by parsing kubectl -n kafka get svc bootstrap-external-0.
Adding a DNS name by convention using Route 53.
We plan to automate this by terraforming the Route 53 record after the load balancer is created.
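The planned Route 53 automation could look roughly like the following Terraform sketch (resource, variable, and DNS names here are hypothetical; the ELB hostname would come from a data source or the kubernetes provider rather than a hand-set variable):

```hcl
# Hypothetical sketch: one CNAME record per broker, pointing at that
# broker's internal load balancer hostname.
resource "aws_route53_record" "kafka_broker_0" {
  zone_id = var.private_zone_id
  name    = "kafka-0.example.internal"
  type    = "CNAME"
  ttl     = 300
  records = [var.broker_0_elb_hostname] # from the bootstrap-external-0 Service
}
```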
My application is a Play web application deployed on GKE. It was running fine (using a Deployment and a LoadBalancer Service) until I decided to use an Ingress. I made the following changes, which have made the application unreachable: I get a 502 error when I try to connect to the application using the Ingress IP.
The application is of kind Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
It has a service associated with it
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 9000       # this service is reachable at this port
    targetPort: 9000 # requests are forwarded to the corresponding pods at this port
  #type: LoadBalancer
  type: NodePort
Then I applied the following file to create the Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: webapp-https-loadbalancer-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: webapp-service
          servicePort: 9000
I can see that there is an IP address (which is also reachable from outside) when I run
kubectl describe ingress webapp-https-loadbalancer-ingress
Name: webapp-https-loadbalancer-ingress
Namespace: default
Address: 3x.yyy.zzz.pq
Default backend: default-http-backend:80 (10.88.0.5:8080)
Rules:
Host Path Backends
---- ---- --------
*
...
Type    Reason  Age    From                     Message
----    ------  ----   ----                     -------
Normal  ADD     10m    loadbalancer-controller  default/webapp-https-loadbalancer-ingress
Normal  CREATE  9m10s  loadbalancer-controller  ip: 3x.yyy.zzz.pq
But I am not able to reach the application using https://3x.yyy.zzz.pq. I haven't yet associated a domain with the IP. When I tried to connect using curl, I got a 502 Bad Gateway error:
curl -v 3x.xxx.xxx.xxx
* Expire in 0 ms for 6 (transfer 0x55d4c5258f90)
* Trying 3x.xxx.xxx.xxx...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55d4c5258f90)
* Connected to 3x.xxx.xxx.xxx (3x.xxx.xxx.xxx) port 80 (#0)
> GET / HTTP/1.1
> Host: 3x.xxx.xxx.xxx
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Content-Type: text/html; charset=UTF-8
< Referrer-Policy: no-referrer
< Content-Length: 332
< Date: Tue, 22 Dec 2020 22:27:23 GMT
<
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
* Connection #0 to host 3x.xxx.xxx.xxx left intact
The issue was that the load balancer's IP was not in the allowed-hosts list in the server. As a temporary fix, I used a wildcard in the server configuration to allow traffic from all hosts. I am still figuring out how to restrict it to the load balancer's internal IPs.
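For reference: Play's AllowedHostsFilter rejects requests whose Host header isn't whitelisted, and the GCE load balancer health-checks the backend with the LB address as the host, so a rejected health check surfaces as a 502. The temporary wildcard described above lives in application.conf and looks like this (the commented restrictive pattern is an assumption to adapt once a domain is attached):

```hocon
play.filters.hosts {
  # "." matches every host -- the temporary catch-all fix.
  allowed = ["."]
  # Once a domain points at the load balancer, restrict it again, e.g.:
  # allowed = [".my-domain.example"]
}
```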
Trying to curl the service deployed in the k8s cluster from the master node fails with:
curl: (7) Failed to connect to localhost port 31796: Connection refused
For the kubernetes cluster, when I check iptables on the master I get the following:
Chain KUBE-SERVICES (1 references)
target     prot opt source     destination
REJECT     tcp  --  anywhere   10.100.94.202   /* default/some-service: has no endpoints */ tcp dpt:9015 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere   10.103.64.79    /* default/some-service: has no endpoints */ tcp dpt:9000 reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere   10.107.111.252  /* default/some-service: has no endpoints */ tcp dpt:9015 reject-with icmp-port-unreachable
If I flush my iptables with
iptables -F
and then curl
curl -v localhost:31796
I get the following
* Rebuilt URL to: localhost:31796/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 31796 (#0)
> GET / HTTP/1.1
> Host: localhost:31796
> User-Agent: curl/7.58.0
> Accept: */*
but soon after, it results in:
* Rebuilt URL to: localhost:31796/
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 31796 failed: Connection refused
* Failed to connect to localhost port 31796: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 31796: Connection refused
I'm using the nodePort concept in my service
Details
kubectl get node
NAME STATUS ROLES AGE VERSION
ip-Master-IP Ready master 26h v1.12.7
ip-Node1-ip Ready <none> 26h v1.12.7
ip-Node2-ip Ready <none> 23h v1.12.7
kubectl get pods
NAME READY STATUS RESTARTS AGE
config-service-7dc8fc4ff-5kk88 1/1 Running 0 5h49m
kubectl get svc -o wide
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE   SELECTOR
cadmin-server   NodePort    10.109.55.255   <none>        9015:31796/TCP   22h   app=config-service
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP          26h   <none>
kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
endpoint.yml
apiVersion: v1
kind: Endpoints
metadata:
  name: xyz
subsets:
- addresses:
  - ip: node1_ip
  - ip: node2_ip
  ports:
  - port: 31796
    name: xyz
service.yml
apiVersion: v1
kind: Service
metadata:
  name: xyz
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: /xyz
  labels:
    app: xyz
spec:
  type: NodePort
  ports:
  - nodePort: 31796
    port: 8001
    targetPort: 8001
    protocol: TCP
  selector:
    app: xyz
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: xyz
  name: xyz
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xyz
  template:
    metadata:
      labels:
        app: xyz
    spec:
      containers:
      - name: xyz
        image: abc
        ports:
        - containerPort: 8001
        imagePullPolicy: Always
        resources:
          requests:
            cpu: 200m
        volumeMounts:
        - mountPath: /app/
          name: config-volume
      restartPolicy: Always
      imagePullSecrets:
      - name: awslogin
      volumes:
      - configMap:
          name: xyz
        name: config-volume
You can run the following command to check endpoints:
kubectl get endpoints
If no endpoints show up for the service, check the YAML files you used to create the load balancer and the deployment, and make sure the labels match.
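In other words, the Service's spec.selector must match the labels on the pod template, or the endpoints list stays empty. A minimal matching pair (all names here are hypothetical) looks like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  type: NodePort
  selector:
    app: some-app        # must match the pod template labels below
  ports:
  - port: 9000
    targetPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: some-app
  template:
    metadata:
      labels:
        app: some-app    # <- what the Service selector matches
    spec:
      containers:
      - name: some-app
        image: some-image:latest
        ports:
        - containerPort: 9000
```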
As many have pointed out in the comments, the "no endpoints" firewall rule is inserted by kube-proxy and indicates a broken Service or application definition.
# iptables-save
# Generated by iptables-save v1.4.21 on Wed Feb 24 10:10:23 2021
*filter
# [...]
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "default/web-service:http has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 30081 -j REJECT --reject-with icmp-port-unreachable
# [...]
As you have noticed as well, kube-proxy constantly watches the Kubernetes Service and Endpoints objects and inserts or deletes these firewall rules dynamically to match them.
# kubectl get service --namespace=default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 198d
web-service NodePort 10.111.188.199 <none> 8201:30081/TCP 194d
# kubectl get pods --namespace=default
No resources found in default namespace.
In this example, a Service is defined but the Pods backing it do not exist.
Still, the kube-proxy process listens on port 30081:
# netstat -lpn | grep -i kube
[...]
tcp 0 0 0.0.0.0:30081 0.0.0.0:* LISTEN 21542/kube-proxy
[...]
So kube-proxy inserts a firewall rule to reject traffic for the broken Service.
It will also delete this rule as soon as you delete the Service definition:
# kubectl delete service web-service --namespace=default
service "web-service" deleted
# iptables-save | grep -i "no endpoints" | wc -l
0
As a side note:
this blocking rule is also inserted for Kubernetes definitions that the cluster doesn't accept cleanly.
For example, in our setup a service could be named "log-service" but not "web-log";
in the latter case no warning was given, but the blocking rule was inserted.
I have a simple Flask app. It worked fine when I connected to it via port-forwarding and sent the HTTP POST request directly to the Service.
from flask import Flask, request
import redis
from rq import Queue
from worker import job_worker

UPLOAD_FOLDER = './uploads/'

app = Flask(__name__)
r = redis.Redis()
q = Queue(connection=r)

@app.route('/', methods=['POST'])
def upload():
    scale = int(request.form['scale'])
    q.enqueue(job_worker, scale)
    return ""

if __name__ == "__main__":
    app.run()
I also have a simple index.html in an nginx container, served on port 80. It makes an AJAX POST request to "/upload"; per the Ingress below, that should be routed to port 5000 with the "/upload" prefix rewritten away.
The Flask app is served on port 5000.
Here is the Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: emoji-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /upload
        backend:
          serviceName: emoji-backend
          servicePort: 5000
      - path: /
        backend:
          serviceName: emoji-frontend
          servicePort: 80
And for completeness, the emoji-backend service:
apiVersion: v1
kind: Service
metadata:
  name: emoji-backend
  labels:
    app: emoji-backend
    tier: backend
spec:
  type: LoadBalancer
  ports:
  - port: 5000
  selector:
    app: emoji-backend
    tier: backend
I get a 502 Bad Gateway without much other indication, except that the ingress log says this:
2019/09/29 21:41:04 [error] 2021#2021: *78651 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.64.1, server: _, request: "POST /upload HTTP/2.0", upstream: "http://172.17.0.4:5000/", host: "192.168.64.5", referrer: "https://192.168.64.5/"
"http://172.17.0.4:5000/" is the correct endpoint and port for the emoji-backend service.
Adding the following line fixed it:
app.run(debug=True, host='0.0.0.0', port=5000)
However, it took me a while to figure that out, because at first my Docker image was not updating when I re-deployed.
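The underlying issue: app.run() with no host argument binds to 127.0.0.1, which inside a container is unreachable from the pod's own IP, so the ingress controller's connect() to the pod is refused. A quick local illustration using the stdlib HTTP server (port 8123 is an arbitrary choice):

```shell
# Bind to loopback only -- the same default as Flask's app.run():
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# Connections via 127.0.0.1 succeed...
python3 -c 'import socket; socket.create_connection(("127.0.0.1", 8123)).close(); print("loopback: ok")'

# ...but kube-proxy targets the pod IP (eth0), where nothing listens --
# hence "connect() failed (111: Connection refused)" in the nginx log.
# Binding to 0.0.0.0 (as in the fix above) listens on all interfaces.
kill $srv
```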
I use AWS EKS as my k8s control plane and deployed a 3-node autoscaling group as my worker nodes (k8s nodes). This autoscaling group sits in my VPC, and I made sure the security groups are at least permissive enough for peer nodes and the ELB to communicate.
I am trying to use nginx-ingress to route traffic from outside the k8s cluster. I deploy nginx-ingress with Helm, using a values.yaml that looks like this:
serviceAccount:
  create: true
  name: nginx-ingress-sa
rbac:
  create: true
controller:
  kind: "Deployment"
  service:
    type: "LoadBalancer"
    # targetPorts:
    #   http: 80
    #   https: http
    loadBalancerSourceRanges:
    - 1.2.3.4/32
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "https"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789:certificate/my-cert
      service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-12345678
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      nginx.ingress.kubernetes.io/enable-access-log: "true"
  config:
    log-format-escape-json: "true"
    log-format-upstream: '{"real_ip" : "$the_real_ip", "remote_user": "$remote_user", "time_iso8601": "$time_iso8601", "request": "$request", "request_method" : "$request_method", "status": "$status", "upstream_addr": "$upstream_addr", "upstream_status": "$upstream_status"}'
  extraArgs:
    v: 3  # NGINX log level
My Ingress YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-1
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-access-log: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "s1.testk8.dev.mydomain.net"
    http:
      paths:
      - path: /
        backend:
          serviceName: s1-service
          servicePort: 443
  - host: "s2.testk8.dev.mydomain.net"
    http:
      paths:
      - path: /
        backend:
          serviceName: s2-service
          servicePort: 443
  tls:
  - hosts:
    - "s1.testk8.dev.mydomain.net"
    - "s2.testk8.dev.mydomain.net"
    secretName: "testk8.dev.mydomain.net"
Note that this secret is a self-signed TLS cert for *.mydomain.net.
With this setup, if I enter https://s1.testk8.dev.mydomain.net in Chrome, it just hangs, showing "waiting for s1.testk8.dev.mydomain.net" in the lower-left corner.
If I use:
curl -vk https://s1.testk8.dev.mydomain.net
It returns:
* Trying x.x.x.x...
* TCP_NODELAY set
* Connected to s1.testk8.dev.mydomain.net (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=*.mydomain.net
* start date: Apr 25 00:00:00 2018 GMT
* expire date: May 25 12:00:00 2019 GMT
* issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: s1.testk8.dev.mydomain.net
> User-Agent: curl/7.54.0
> Accept: */*
>
And it also appears to wait for a server response.
I also tried tweaking the values.yaml: when I change
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http" # instead of https as above
and then hit the https://s1.testk8.dev.mydomain.net URL, I at least see the HTTP 400 message ("The plain HTTP request was sent to HTTPS port") from the ingress controller pod.
If I uncomment these lines in the values.yaml:
# targetPorts:
#   http: 80
#   https: http
I am able to reach my backend pod (controlled by a StatefulSet, not listed here); I can see new entries in the backend pod's access log.
Not sure whether my use case is unusual: I see that many folks using nginx-ingress on AWS terminate TLS at the ELB, but I need my backend pods to terminate TLS.
I also tried the ssl-passthrough flag; it didn't help. When the backend protocol is https, my request doesn't even seem to reach the ingress controller, so discussing ssl-passthrough may be moot.
Thank you in advance if you read all the way through!
As far as I can tell, even with the current master of nginx-ingress it is not possible to use self-signed certificates on the backend. The template https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/template/nginx.tmpl is missing the directives that would be needed, such as:
location / {
    proxy_pass https://backend.server.ip/;
    proxy_ssl_trusted_certificate /etc/nginx/sslcerts/backend.server.pem;
    proxy_ssl_verify off;
    # ... other proxy settings
}
So try using, e.g., a Let's Encrypt certificate.
My guess is that your backend services are using HTTPS but the in-between traffic is being sent over HTTP. This line in your values.yaml seems odd:
targetPorts:
  http: 80
  https: http
Can you try something like this?
targetPorts:
  http: 80
  https: 443
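Since the backend pods terminate TLS themselves, a fuller sketch of the service section would also stop the ELB from decrypting or re-encrypting anything (this is an assumption layered on the values.yaml above, with the ACM annotations dropped):

```yaml
controller:
  service:
    type: "LoadBalancer"
    targetPorts:
      http: 80
      https: 443   # forward the still-encrypted stream to the controller
    annotations:
      # With TLS terminated in the pod, the ELB should speak plain TCP:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
```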