Enable HTTPS for Akka-Discovery endpoints while forming an Akka cluster in a Kubernetes environment

I need to set up an Akka cluster (using Akka Classic) in Kubernetes using the DNS resolver. I've created a headless service which is able to resolve the addresses of the various pods of my Akka application.
After DNS resolution, I'm able to get the addresses of the various pods. Now, my Akka Management endpoints run over HTTPS,
so when one pod tries to connect to the management endpoints of the other pods, it needs to use HTTPS instead of HTTP, but Akka uses HTTP by default. Is there a way to modify this behavior in Java?

Yes, there is: to enable HTTPS, you have to instantiate your server by providing an HttpsConnectionContext object to it.
You should probably do something like:
Http.get(system).newServerAt("localhost", 8080)
.enableHttps(createHttpsContext(system))
.bind(app.createRoute());
The previous example is taken from the official documentation, which also shows how the createHttpsContext(system) method works.
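For reference, here is a minimal sketch of such a method, following the keystore-based pattern from the Akka HTTP documentation; the class name, keystore file and password below are placeholders you would replace with your own:
import akka.actor.ActorSystem;
import akka.http.javadsl.ConnectionContext;
import akka.http.javadsl.HttpsConnectionContext;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.SecureRandom;

public class HttpsSupport {

    // Builds an HttpsConnectionContext from a PKCS#12 keystore on the classpath.
    public static HttpsConnectionContext createHttpsContext(ActorSystem system) {
        try {
            final char[] password = "change-me".toCharArray(); // placeholder; load from config or a secret

            // Load the server certificate and private key
            final KeyStore keyStore = KeyStore.getInstance("PKCS12");
            final InputStream keyStoreStream =
                HttpsSupport.class.getClassLoader().getResourceAsStream("keystore.p12"); // placeholder resource
            if (keyStoreStream == null) {
                throw new RuntimeException("keystore.p12 not found on the classpath");
            }
            keyStore.load(keyStoreStream, password);

            final KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance("SunX509");
            keyManagerFactory.init(keyStore, password);

            final TrustManagerFactory trustManagerFactory = TrustManagerFactory.getInstance("SunX509");
            trustManagerFactory.init(keyStore);

            final SSLContext sslContext = SSLContext.getInstance("TLS");
            sslContext.init(keyManagerFactory.getKeyManagers(),
                            trustManagerFactory.getTrustManagers(),
                            new SecureRandom());

            return ConnectionContext.httpsServer(sslContext);
        } catch (Exception e) {
            throw new RuntimeException("Failed to create HTTPS context", e);
        }
    }
}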

Related

How to communicate securely to a k8s service via istio?

I can communicate to another service in the same namespace via:
curl http://myservice1:8080/actuator/info
inside the pod.
The application is not configured with TLS. I am curious whether I can reach that pod via a virtual service, so that I can utilize this Istio feature:
curl https://myservice1:8080/actuator/info
We have Istio virtualservice and gateway in place. External access to pod is managed by it and is working properly. We just wanted to reach another pod via https if possible without having to reconfigure the application.
Answering the question in the title: there are many possibilities, but you should start by Understanding TLS Configuration:
One of Istio’s most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to TLS configuration mistakes for a summary of some of the most common TLS configuration problems.
There are many different ways to secure your connection. It all depends on what exactly you need and what you set up.
We have istio virtualservice and gateway in place, external access to pod is managed by it and working properly. We just wanted to reach another pod via https if possible without having to reconfigure the application
As for virtualservice and gateway, you will find an example configuration in this article. You can find guides for single host and for multiple hosts.
We just wanted to reach another pod via https if possible without having to reconfigure the application.
Here you will most likely be able to apply the outbound configuration:
While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a DestinationRule, just like external outbound traffic from sidecars, or auto mTLS by default.
The only difference is that you should be careful to consider the Gateway settings when configuring this. For example, if the Gateway is configured with TLS PASSTHROUGH while the DestinationRule configures TLS origination, you will end up with double encryption. This works, but is often not the desired behavior.
A VirtualService bound to the gateway needs care as well to ensure it is consistent with the Gateway definition.
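For illustration only, a minimal DestinationRule along these lines could look like the sketch below, assuming the service name from the question, the default namespace, and sidecars injected on both workloads:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myservice1-mtls
spec:
  host: myservice1.default.svc.cluster.local   # assumed namespace "default"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # the sidecars encrypt traffic between the pods with mutual TLS,
                           # while the application itself keeps serving plain HTTP
With this in place the client still calls http://myservice1:8080; the encryption is added on the wire by the Envoy proxies, so the application does not need to be reconfigured.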

How does CodeDeploy work with dynamic port mapping?

I have been trying for weeks to make CodeDeploy / CodePipeline work for our solution, to get some sort of CI/CD and make deployments faster, safer, etc.
As I keep diving into it, I feel like either I am not doing it the right way at all, or it is simply not suitable in our case.
What our AWS infrastructure looks like:
We have an ECS cluster that for now contains one service (on EC2), associated with one or more tasks: a reverse proxy and an API. The reverse proxy listens internally on port 80 and, when reached, proxies the request internally to the API on port 5000.
We have an application load balancer associated with this service, which is publicly reachable. It currently has 2 listeners, HTTP and HTTPS. Both listeners forward to the same target group, which only contains the instance(s) where our reverse proxy runs. Note that the instance port to forward to is random (check this link).
We have an auto scaling group that scales the number of instances depending on the number of calls to the application load balancer.
What we may have in the future:
Other tasks will run on the same instance as our API. For example, we may create another API in the same cluster, on another port, with another reverse proxy and yet another load balancer. We may have some batch jobs running, and other things.
What the problem is:
For now, deploying "manually" (that is, telling the service to make a new deployment on ECS) doesn't work. CodeDeploy is stuck at creating replacement tasks, and when I look at the service's log, there is the following error:
service xxxx-xxxx was unable to place a task because no container
instance met all of its requirements. The closest matching
container-instance yyyy is already using a port required by your task.
Which I don't really understand, since port assignment is random; but maybe CodeDeploy operates before that and just sees that the assigned port is 0, the same as in the previous task definition?
I don't really know how I can resolve this, and I even doubt that CodeDeploy is usable in our case...
-- Edit 02/18/2021 --
So, I know why it is not working now. Like I said, the host port that the reverse proxy is reachable on is random. But the port that my API is listening on is not random.
But now, even if I make the API port random like the reverse proxy one, how would my reverse proxy know on which port the API will be reachable? I tried to link the containers, but it seems that this doesn't work in the configuration file (I use nginx as the reverse proxy).
--
Not specifying hostPort seems to assign a "random" port on the host
But still, since NGINX and the API are two different containers, I would need my first NGINX container to call my first API container, which is at API:32798. I think I'm missing something.
You're probably getting this port conflict because you have two tasks on the same host that both want to map port 80 of the host into their containers.
I've tried to visualize the conflict:
The violet boxes share a port namespace, and so do the green and orange boxes. This means that within each box every port from 1 to ~65k can be used only once. When you explicitly require a host port, ECS will try to map the violet port 80 to two container ports, which doesn't work.
You don't want to explicitly map these container ports to the host port, let ECS worry about that.
Just specify the container port in Load Balancer integration in the service definition and it will do the mapping for you. If you set the container port to 80, this refers to the green port 80, and the orange port 80. It will expose these as random ports and automatically register these random ports with the Load Balancer.
Service Definition docs (search for containerPort)
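As a rough CloudFormation-style sketch of what that looks like (the names, image, counts and target group below are placeholders, not taken from the question):
ReverseProxyTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: reverse-proxy
    NetworkMode: bridge
    ContainerDefinitions:
      - Name: reverse-proxy
        Image: my-registry/reverse-proxy:latest   # placeholder image
        Memory: 256
        PortMappings:
          - ContainerPort: 80                     # no HostPort: ECS maps it to a random ephemeral host port

ReverseProxyService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref MyCluster                       # placeholder
    TaskDefinition: !Ref ReverseProxyTaskDefinition
    DesiredCount: 2
    LoadBalancers:
      - ContainerName: reverse-proxy
        ContainerPort: 80                         # ECS registers whichever host port it picked with this target group
        TargetGroupArn: !Ref MyTargetGroup        # placeholder
The same idea applies if you register the task definition as JSON via the CLI or console: leave hostPort out (or set it to 0) and reference only containerPort in the service's load balancer configuration.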

How to access client IP of an HTTP request from Google Container Engine?

I'm running a gunicorn+flask service in a docker container with Google Container Engine. I set up the cluster following the tutorial at http://kubernetes.io/docs/hellonode/
The REMOTE_ADDR environment variable always contains an internal address of the Kubernetes cluster. What I was looking for is HTTP_X_FORWARDED_FOR, but it's missing from the request headers. Is it possible to configure the service to retain the external client IP in the requests?
If anyone gets stuck on this, there is a better approach.
Depending on your Kubernetes version, you can use the following field or annotation:
service.spec.externalTrafficPolicy: Local
on 1.7
or
service.beta.kubernetes.io/external-traffic: OnlyLocal
on 1.5-1.6
Before 1.5 this is not supported.
source: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
note that there are caveats:
https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#caveats-and-limitations-when-preserving-source-ips
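As a sketch of the 1.7+ variant, with placeholder names and ports:
apiVersion: v1
kind: Service
metadata:
  name: my-flask-service          # placeholder
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # preserve the client source IP; traffic only goes to nodes that run the pod
  selector:
    app: my-flask-app             # placeholder
  ports:
    - port: 80
      targetPort: 8000            # assumed gunicorn port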
I assume you set up your service by setting the service's type to LoadBalancer? It's an unfortunate limitation of the way incoming network-load-balanced packets are routed through Kubernetes right now that the client IP gets lost.
Instead of using the service's LoadBalancer type, you could set up an Ingress object to integrate your service with a Google Cloud HTTP(s) Load Balancer, which will add the X-Forwarded-For header to incoming requests.

How to setup an external kubernetes service in AWS using https

I would like to setup a public kubernetes service in AWS that listens on https.
I know that kubernetes services currently only support TCP and UDP, but is there a way to make this work with the current version of kubernetes and AWS ELBs?
I found this. http://blog.kubernetes.io/2015/07/strong-simple-ssl-for-kubernetes.html
Is that the best way at the moment?
HTTPS usually runs over TCP, so you can simply run your service with Type=NodePort/LoadBalancer and manage the certs in the service. This example might help [1]: nginx is listening on :443 through a NodePort for ingress traffic. See [2] for a better explanation of the example.
[1] https://github.com/kubernetes/kubernetes/blob/release-1.0/examples/https-nginx/nginx-app.yaml#L8
[2] http://kubernetes.io/v1.0/docs/user-guide/connecting-applications.html
Since 1.3, you can use annotations along with a type=LoadBalancer service:
https://github.com/kubernetes/kubernetes/issues/24978
service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
service.beta.kubernetes.io/aws-load-balancer-ssl-ports=* (or e.g. https)
The first annotation is the only one you need if all you want is to support HTTPS, on any number of ports. If you also want to support HTTP on one or more additional ports, you need to use the second annotation to specify explicitly which ports will use encryption (the others will use plain HTTP).
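Put together, a Service using those annotations could look roughly like this; the certificate ARN, names and ports are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: my-https-service          # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"   # only this port is TLS-terminated; port 80 stays plain HTTP
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # placeholder
  ports:
    - name: https
      port: 443
      targetPort: 8080            # the pod keeps serving plain HTTP; the ELB terminates TLS
    - name: http
      port: 80
      targetPort: 8080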
In my case I set up an ELB in AWS and put the SSL cert on that, choosing HTTPS and HTTP for the connection types in the ELB, and that worked great. I set up the ELB with kubectl expose.

Set up a server with the same instance and different addresses

I want to develop my application as separate parts (API, JOBS, WEB), organized like this:
API: api.myaddress.com
JOBS: jobs.myaddress.com
WEB: myaddress.com
I know how to do that with distinct instances on Amazon and Google Compute Engine; however, I was wondering if I could set up a single instance to do all of that, with each DNS name going to a different port on that machine, like:
api.myaddress.com resides in xxx.xxx.xxx.xxx:8090
jobs.myaddress.com resides in xxx.xxx.xxx.xxx:8080
myaddress.com resides in xxx.xxx.xxx.xxx:80
Also, if that is possible, I don't know where I should configure it (is it in the DNS, or a specific setup on my instance in Amazon/Google?).
Why do you want them to go to different ports? It's certainly not necessary. You can use DNS to point all of those domains/subdomains to a single server/IP address, and then, through your web server configuration, bind the various subdomain names to each particular website on that server.
In IIS you configure the bindings in the IIS Manager tool, and Apache has a similar ability:
http://httpd.apache.org/docs/2.2/vhosts/examples.html
It sounds like what you are looking for is an HTTP reverse proxy. This would be a web server on your machine that binds to port 80 and, based on the incoming Host: header (or other criteria) it forwards the request to the appropriate Node.js instance, each of which is bound to a (different) port of its own.
There are several alternatives. A couple that immediately come to mind are HAProxy and Nginx.
DNS cannot be used to control which port a request arrives at.
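As a rough nginx sketch of that idea, using the hostnames and ports from the question (note that the web app itself then has to move off port 80, since nginx now owns it; 3000 below is just a placeholder):
# /etc/nginx/conf.d/myaddress.conf (sketch)
server {
    listen 80;
    server_name api.myaddress.com;
    location / {
        proxy_pass http://127.0.0.1:8090;   # API instance
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name jobs.myaddress.com;
    location / {
        proxy_pass http://127.0.0.1:8080;   # JOBS instance
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name myaddress.com;
    location / {
        proxy_pass http://127.0.0.1:3000;   # WEB instance (placeholder port)
        proxy_set_header Host $host;
    }
}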
Another approach, which is (arguably) unconventional but nonetheless would work would be to set up 3 CloudFront distributions, one for each hostname. Each distribution forwards requests to an "origin server" (your server) and the destination port can be specified for each one. Since CloudFront is primarily intended as a caching service, you would need to return Cache-Control: headers from Node to disable that caching where appropriate... but you could also see some performance improvements on responses that CloudFront can be allowed to cache for you.
What you are looking for is a load balancer (an ELB in the case of Amazon).
Set up a load balancer per service to send traffic to the different ports, and at the DNS level set up CNAMEs for your services that point to the three load balancers you have.