I would like to set up a public Kubernetes service in AWS that listens on HTTPS.
I know that Kubernetes services currently only support TCP and UDP, but is there a way to make this work with the current version of Kubernetes and AWS ELBs?
I found this: http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html
Is that the best way at the moment?
HTTPS runs over TCP, so you can simply run your service with Type=NodePort/LoadBalancer and terminate TLS in the serving pods. This example might help [1]: nginx listens on :443 through a NodePort for ingress traffic. See [2] for a fuller explanation of the example.
[1] https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/nginx-app.yaml#L8
[2] http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html
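For illustration, a minimal sketch of such a Service (all names are placeholder assumptions; the pod terminates TLS itself and Kubernetes just forwards TCP):
apiVersion: v1
kind: Service
metadata:
  name: nginx-https          # placeholder name
spec:
  type: NodePort
  selector:
    app: nginx-https         # placeholder pod label
  ports:
  - name: https
    port: 443                # TLS is terminated by nginx inside the pod
    targetPort: 443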
Since Kubernetes 1.3, you can use annotations along with a type=LoadBalancer service:
https://github.com/kubernetes/kubernetes/issues/24978
service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
service.beta.kubernetes.io/aws-load-balancer-ssl-ports=* (or e.g. https)
The first annotation is the only one you need if all you want is HTTPS on every exposed port. If you also want to serve plain HTTP on one or more additional ports, use the second annotation to specify explicitly which ports will use TLS (the others will serve plain HTTP).
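In Service YAML form, the whole thing looks roughly like this (the certificate ARN is the placeholder from above; the service and app names are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: my-https-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
    # Only the port named "https" gets TLS; port 80 stays plain HTTP.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: https
    port: 443
    targetPort: 8080
  - name: http
    port: 80
    targetPort: 8080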
In my case I set up an ELB in AWS and attached the SSL cert to it, choosing HTTPS and HTTP for the listener types on the ELB, and that worked great. I created the ELB with kubectl expose.
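For reference, the expose command looks roughly like this (the deployment name and ports are placeholder assumptions); the HTTPS/HTTP listener types and the certificate were then configured on the ELB in the AWS console:
kubectl expose deployment my-app --type=LoadBalancer --port=443 --target-port=8080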
Related
I can communicate with another service in the same namespace via:
curl http://myservice1:8080/actuator/info
inside the pod.
The application is not configured with TLS, so I am curious whether I can reach that pod via a VirtualService and utilize this Istio feature:
curl https://myservice1:8080/actuator/info
We have an Istio VirtualService and Gateway in place. External access to the pod is managed by them and is working properly. We just want to reach another pod via HTTPS, if possible, without having to reconfigure the application.
How to communicate securely with a k8s service via Istio?
Answering the question in the title: there are many possibilities, but you should start with Understanding TLS Configuration:
One of Istio’s most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to TLS configuration mistakes for a summary of some of the most common TLS configuration problems.
There are many different ways to secure your connection. It all depends on what exactly you need and what you set up.
We have an Istio VirtualService and Gateway in place; external access to the pod is managed by them and working properly. We just wanted to reach another pod via HTTPS if possible without having to reconfigure the application
As for the VirtualService and Gateway, you will find an example configuration in this article. You can find guides for a single host and for multiple hosts.
We just wanted to reach another pod via HTTPS if possible without having to reconfigure the application.
Here you will most likely be able to apply the outbound configuration:
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a DestinationRule, just like external outbound traffic from sidecars, or auto mTLS by default.
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works, but is often not the desired behavior.
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition.
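One possibility, as a hedged sketch: assuming both pods have sidecars injected and you rely on Istio's mTLS rather than application-level TLS, a DestinationRule with ISTIO_MUTUAL makes the sidecars encrypt the pod-to-pod hop while the application itself keeps speaking plain HTTP (the host and namespace are placeholder assumptions):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myservice1
spec:
  host: myservice1.default.svc.cluster.local   # placeholder namespace
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # sidecar originates Istio mTLS; the app stays plain HTTP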
I need to set up an Akka cluster (using Akka Classic) in Kubernetes using the DNS resolver. I've created a headless Service which is able to resolve addresses for the various pods of my Akka application.
After DNS resolution, I'm able to get the addresses of the various pods. Now my Akka Management runs over HTTPS,
so when one pod connects to the management endpoints of the other pods, it needs to use "HTTPS" instead of "HTTP", but Akka uses "http" by default. Is there a way to modify this behavior in Java?
Yes, there is: to enable HTTPS, you have to instantiate your server by providing an HttpsConnectionContext object to it.
You should probably do something like:
Http.get(system)
    .newServerAt("localhost", 8080)           // host and port to bind
    .enableHttps(createHttpsContext(system))  // switch the binding to HTTPS
    .bind(app.createRoute());
The previous example is taken from the official documentation, which also shows how the createHttpsContext(system) method works.
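The documentation builds the HttpsConnectionContext from a Java keystore. As a rough sketch of what createHttpsContext(system) can look like (the keystore path, type, and password below are placeholder assumptions, and the system parameter is only kept to match the call site above):
// Imports assumed: java.security.KeyStore, java.security.SecureRandom,
// javax.net.ssl.KeyManagerFactory, javax.net.ssl.SSLContext,
// java.nio.file.Files, java.nio.file.Paths,
// akka.http.javadsl.ConnectionContext, akka.http.javadsl.HttpsConnectionContext
private static HttpsConnectionContext createHttpsContext(ActorSystem system) {
    try {
        char[] password = "changeit".toCharArray();          // placeholder password
        KeyStore keyStore = KeyStore.getInstance("PKCS12");  // assuming a PKCS#12 keystore
        keyStore.load(Files.newInputStream(Paths.get("keystore.p12")), password);

        KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("SunX509");
        keyManagerFactory.init(keyStore, password);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(keyManagerFactory.getKeyManagers(), null, new SecureRandom());

        return ConnectionContext.httpsServer(sslContext);    // Akka HTTP 10.2+ API
    } catch (Exception e) {
        throw new RuntimeException("Failed to create HTTPS context", e);
    }
}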
I'm trying to deploy a UDP-based application on Kubernetes, but I'm having trouble finding a cloud provider that offers a UDP load balancer with IP-based sticky sessions.
I have tried DigitalOcean Kubernetes Service (DOKS), but they don't support UDP load balancers.
EKS (AWS's managed Kubernetes service) supports UDP with an NLB, for example, but NLBs don't seem to offer sticky sessions; only the Classic Load Balancer does.
Is there another cloud provider (I'm thinking of GCE or Azure) that provides the functionality I need out of the box?
I'm asking here in case anyone else has had the same problem, has already tried various solutions, and has found the right fit.
I know the NGINX Ingress Controller (which, as you stated, works with AWS and NLB with UDP support) can expose UDP services and supports sticky sessions; see the sketch below. I have not done this on AWS or any other cloud provider, but I have for similar use cases on bare metal.
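For reference, ingress-nginx forwards UDP through its udp-services ConfigMap; a sketch with placeholder namespace, service name, and ports:
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port receiving the UDP traffic
  "5005": "my-namespace/my-udp-service:5005"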
As @jordanm posted, the answer was to apply the stickiness parameter through the EC2 console.
I am trying to make sense of the GCP load balancer for the use case of mixed HTTPS and gRPC backends. The load balancer documentation seems to indicate that you can/should use the HTTP(S) load balancer, as that "includes HTTP/2". For backend services I appear to be able to specify a named "grpc" port and set it to number 7000, but if I use the gcloud command to view my backend services:
gcloud compute backend-services list --format=json
My service is shown to use portName "grpc" (correct) with port "80" (incorrect). This is despite the fact that I was prompted that the instance group had named ports, and I could (and did) choose "grpc:7000".
On the frontend side, I can only select ports 80 and 8080 for HTTP, or 443 for HTTPS. No mention of HTTP/2, but I guess "HTTPS includes HTTP/2".
Am I right that I cannot use the layer 7 load balancer at all for my scenario? The documentation is not very explicit about ports, and if I search the web for gRPC I get loads of articles on load balancing Kubernetes-hosted apps.
In order to use gRPC you need HTTP/2:
To use gRPC with your Google Cloud Platform applications, you must proxy requests end-to-end over HTTP/2. To do this with an HTTP(S) load balancer:
Configure an HTTPS load balancer.
Enable HTTP/2 as the protocol from the load balancer to the backends.
HTTP/2 and HTTPS are not one and the same; however, h2 (HTTP/2 over TLS) only works over HTTPS. And h2 is not enabled by default; you need to enable it.
See: https://cloud.google.com/load-balancing/docs/https/ for further information.
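As a sketch (the backend service name is a placeholder assumption), switching an existing backend service to HTTP/2 looks roughly like:
gcloud compute backend-services update my-backend-service --protocol=HTTP2 --global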
We have a number of services behind an API gateway which is itself behind ingress-nginx. We're trying to use HTTP/2 to speed up data transfer to the front-end but all of our connections are still being done with HTTP/1.1.
The connection from client to nginx is over HTTPS, but nginx communicates with our API gateway using HTTP, and the gateway also uses HTTP to communicate with the backend services.
Do we need to use HTTPS from end-to-end to get HTTP/2 to work? If so, what's the best way to set this up re: using certificates? If not, what could be causing the connection to drop to HTTP/1.1?
We are using ingress-nginx version 0.21.0, which ships nginx 1.15.6 and OpenSSL 1.1.1; that should be sufficient to support TLS 1.3/ALPN/HTTP2. Our nginx-configuration ConfigMap has use-http2 set to true and I can see that the pod's /etc/nginx.conf has a listen ... http2; line.
Edit 10/05/2019:
Further to the comments of @Barry Pollard and @Rico, I've found out that AWS Elastic Load Balancer, which sits in front of our ingress-nginx controller, doesn't support HTTP/2. I've cut nginx out of the stack and our API Gateway is being provisioned its own Network Load Balancer. However, we're still on HTTP/1.1. It looks like ASP.NET Core 2.2's HTTP server Kestrel uses HTTP/2 by default, so I'm not sure why the connection is still dropping to 1.1.
Like @BarryPollard said, you shouldn't need HTTP/2 end-to-end to establish HTTP/2 connections in your browser.
It sounds like whatever you are using as a client is dropping to HTTP/1.1; make sure you test with one of the following browsers or newer:
Chrome 51
Firefox 53
Edge 12
Internet Explorer 11
Opera 38
You didn't specify what architecture is fronting your nginx. Is it connected directly to the internet, or does traffic go through a cloud load balancer? A CDN? You can also test with Wireshark as described here, or with curl as sketched below.
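As a quick client-side check (the URL is a placeholder), curl can report which protocol was actually negotiated:
# prints "2" if HTTP/2 was negotiated, "1.1" otherwise
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com/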