ALB ingress: mixed private and internet-facing paths

I have a set of containerized microservices behind an ALB serving as endpoints for my API. The ALB Ingress is internet-facing and I have set up my path routing accordingly. A need has now arisen for some additional (new) containerized microservices to be private (i.e. not accessible from the internet) but still be reachable from, and able to communicate internally with, the public ones.
Is there a way to configure path-based routing, or to modify the Ingress with some annotation, to keep certain paths private?
If not, would a second ingress (an internal one this time) under the same ALB do the trick for what I want?
Thanks,
George

Turns out that (at least in my case) the solution is to leave the internet-facing Ingress alone and let it do its thing. Internal-facing REST API paths that should not be otherwise accessible can instead be reached through their pods' Service specification.
Implementing a Service per microservice allows internal access at its <service-name>:<port>, without the need to modify anything in the initial Ingress, which continues to handle the internet-facing API(s).
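A minimal sketch of such an internal-only Service, assuming a plain ClusterIP Service is enough (all names and ports below are placeholders):

```yaml
# Hypothetical internal-only microservice Service (ClusterIP, the default type).
# It is never referenced by the internet-facing Ingress, so it is reachable only
# from inside the cluster, e.g. at private-svc.default.svc.cluster.local:8080.
apiVersion: v1
kind: Service
metadata:
  name: private-svc               # placeholder name
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: private-microservice    # placeholder pod label
  ports:
    - port: 8080
      targetPort: 8080
```

The public microservices can then call it directly at that cluster-internal DNS name, while the ALB never routes to it.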

Related

AWS Load Balancer Path Based Routing

I am running a microservice application off of AWS ECS. Each microservice currently has its own load balancer.
There is one main public-facing service which the rest of the services communicate with via gateways. Having each service behind its own ELB is currently too expensive. Is there some way to have only one ELB for the public-facing service that routes to the other services based on path? Is this possible without the other service names actually appearing in the URL? Could a reverse proxy work?
I know this is a broad question but any help would be appreciated
In the EC2 console, go to the Load Balancers section, choose a load balancer, and then in the Listeners tab click the "View/edit rules" button. There you set the conditions that let a single load balancer route to different clusters/instances of your app. Note that for each container you need a target group defined.
You can configure the load balancer to route based on:
HTTP headers
Path, e.g. www.example.com/a or www.example.com/b
Host header (hostname)
Query strings
Source IP
That's it! cheers.
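For reference, such a rule can also be expressed outside the console, for example in CloudFormation. This is only a sketch with placeholder references and paths, not the exact setup from the question:

```yaml
# Hypothetical path-based routing rule attached to an existing ALB listener.
Resources:
  ServiceBPathRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      ListenerArn: !Ref PublicListener               # placeholder reference to the ALB listener
      Priority: 10
      Conditions:
        - Field: path-pattern
          PathPatternConfig:
            Values:
              - /service-b/*                         # placeholder path
      Actions:
        - Type: forward
          TargetGroupArn: !Ref ServiceBTargetGroup   # placeholder target group per container
```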

How to expose a TCP service in Kubernetes only for certain IP addresses?

Nginx ingress provides a way to expose a TCP or UDP service: all you need is a public NLB.
However, this way the TCP service will be exposed publicly: NLB does not support security groups or ACLs, and nginx-ingress has no way to filter traffic while proxying TCP or UDP.
The only solution that comes to my mind is an internal load balancer plus a separate non-k8s instance running haproxy or iptables, where I can actually restrict by source IP, and then forward/proxy the requests to the internal NLB.
Are there other ways to solve this?
Do not use nginx-ingress for this. To get the real client IP inside nginx-ingress you have to set controller.service.externalTrafficPolicy: Local, which in turn changes the way the nginx-ingress Service is exposed, making it local to the nodes. See https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip. This in turn causes your nginx-ingress LoadBalancer to report unhealthy hosts, which creates noise in your monitoring (as opposed to NodePort, where every node exposes the same port and stays healthy). Unless you run nginx-ingress as a DaemonSet or use other hacks, e.g. limiting which nodes are added as backends (mind scheduling and scaling) or moving nginx-ingress to a separate set of nodes/subnet, IMO each of these is a lot of headache for such a simple problem. More on this problem: https://elsesiy.com/blog/kubernetes-client-source-ip-dilemma
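For reference, the setting being discussed is usually passed as a Helm value for the ingress-nginx chart; this is only a sketch of what the answer is warning about, not a recommendation:

```yaml
# ingress-nginx Helm values sketch: preserve the client source IP.
# With this set, only nodes that actually run an ingress-nginx pod pass the
# load balancer health checks, which is the trade-off described above.
controller:
  service:
    externalTrafficPolicy: Local
```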
Instead, use a plain Service of type: LoadBalancer (a classic ELB), which supports:
Source ranges: https://aws.amazon.com/premiumsupport/knowledge-center/eks-cidr-ip-address-loadbalancer/
the service.beta.kubernetes.io/aws-load-balancer-extra-security-groups annotation, in case you want to manage the source ranges from outside the cluster.
In this case your traffic flows like World -> ELB -> NodePort -> Service -> Pod, without an Ingress.
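A minimal sketch of such a Service, assuming a hypothetical TCP service on port 5432 and placeholder CIDR/security-group values:

```yaml
# Hypothetical TCP service exposed via a classic ELB with source-IP restrictions.
apiVersion: v1
kind: Service
metadata:
  name: restricted-tcp            # placeholder name
  annotations:
    # Optional: attach an extra, externally managed security group
    # instead of (or in addition to) loadBalancerSourceRanges.
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-0123456789abcdef0   # placeholder
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24              # placeholder allowed CIDR
  selector:
    app: my-tcp-app               # placeholder pod label
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
```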
You can use the whitelist-source-range annotation for that. We've been using it successfully for a few use cases and it does the job well.
EDIT: I spoke too soon. Rereading your question and understanding your exact use case brought me to this issue, which clearly states these services cannot be whitelisted, and suggests solving this at the firewall level.

Is there a way to set up multiple Istio Gateways, each with the same host, but in different namespaces?

I've been following the post here to set up a configuration that directs traffic from a Kubernetes cluster + Istio to an external service via an Istio egress gateway.
The setup I have is working fine - in isolation. However, I have 2 separate components (in separate Kubernetes namespaces) which I would like to be able to route traffic to the same external service. This means that I end up with a configuration where relevant Istio Service Entries, Destination Rules, Virtual Services and Gateways are duplicated in both namespaces. When I set this up, however, the Gateway resources appear to clash, and the egress gateway is not configured correctly, presumably because of the duplicate egress setup with the same host name and port combination. This stops either namespace from communicating with the external service.
I can fix this by removing one of the 'duplicate' Gateway resources, or by having only a single Gateway resource in a common namespace to control the egress. Neither of these options seems ideal to me though, as it essentially means that the configuration from one namespace is affecting the configuration of the other.
Am I missing something? E.g. is there a way to set up the 'duplicate' configuration entirely within each namespace without causing these problems, or is this just a situation where the Istio egress gateway is considered a 'shared' resource and requires its configuration to be shared even across otherwise isolated namespaces?
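For context, the clash described above comes from a setup roughly like the following, with an essentially identical Gateway declared once per namespace against the shared egress gateway (a simplified sketch with placeholder names, not the exact manifests from the question):

```yaml
# Simplified sketch: the same external host/port is declared on the shared
# istio-egressgateway from namespace team-a, and an identical copy exists in team-b.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: external-svc-egress       # placeholder name
  namespace: team-a               # duplicated verbatim in namespace team-b
spec:
  selector:
    istio: egressgateway          # the single, shared egress gateway deployment
  servers:
    - port:
        number: 443
        name: tls
        protocol: TLS
      hosts:
        - external.example.com    # placeholder external host
      tls:
        mode: PASSTHROUGH
```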

Set static response from Istio Ingress Gateway

How do you set a static 200 response in Istio's Ingress Gateway?
We have a situation where we need an endpoint to return a small bit of static content (a bootstrap URL). We could even put it in a header. Can Istio host something like that or do we need to run a pod for no other reason than to return a single word?
Specifically I am looking for a solution that returns 200 via Istio configuration, not a pod that Istio routes to (which is quite a common example and available elsewhere).
You have to do it manually by creating a VirtualService pointing to a specific Service connected to a pod.
Of course, first you have to create the pod and then attach a Service to it, even if your application only returns a single word.
Istio Gateways are responsible for opening ports on the relevant Istio gateway pods and receiving traffic for hosts. That's it.
The VirtualService: Istio VirtualServices are what get "attached" to Gateways and are responsible for defining the routes the gateway should implement.
You can have multiple VirtualServices attached to a Gateway, but not for the same domain.
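As a rough sketch of that Gateway/VirtualService pairing (placeholder host and service names; the VirtualService routes to a pod-backed Service rather than returning a static response from Istio itself):

```yaml
# Sketch: a Gateway receiving traffic for one host, and a VirtualService
# attached to it that routes requests to a backing Service.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bootstrap-gateway          # placeholder
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - bootstrap.example.com    # placeholder host
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bootstrap-vs               # placeholder
spec:
  hosts:
    - bootstrap.example.com
  gateways:
    - bootstrap-gateway
  http:
    - route:
        - destination:
            host: bootstrap-svc    # placeholder Service backed by the pod
            port:
              number: 8080
```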

kubernetes on aws: Exposing multiple domain names (ingress vs ELB)

I am experimenting with a kubernetes cluster on aws.
At the end of the day, I want to expose 2 URLs:
production.somesite.com
staging.somesite.com
When exposing 1 URL, things (at least in the cloud landscape) seem to be easy.
You make the Service of type LoadBalancer --> AWS provisions an ELB --> you point an A-type alias record (e.g. whatever.somesite.com) at the ELB's DNS name and boom, there is your service, publicly available via the hostname you like.
I assume one easy (and I guess not best-practice-wise) way of going about this is to expose 2 ELBs.
Is Ingress the (good) alternative?
If so, what is the Route53 record I should create?
For what it's worth (and in case this may be a dealbreaker for Ingress):
production.somesite.com will be publicly available
staging.somesite.com will have restricted access
Ingress is for sure one possible solution.
You need to deploy an Ingress controller in your cluster (for instance https://github.com/kubernetes/ingress-nginx), then expose it with a Service of type LoadBalancer as you did previously.
In Route53, you need to point any domain names you want served by your ingress controller at the ELB's DNS name, exactly as you did previously.
The last thing you need to do is create an Ingress resource for every domain you want your ingress controller to be aware of (more on this here: https://kubernetes.io/docs/concepts/services-networking/ingress/).
That being said, if you plan to have only 2 public URLs in your cluster, I'd use 2 ELBs. An Ingress controller is another component to be maintained/monitored in your cluster, so take this into account when evaluating the tradeoffs.
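A minimal sketch of the Ingress resource described above (placeholder Service names; in practice you might split this into two Ingress resources if staging needs extra restrictions such as a source-IP allowlist):

```yaml
# Sketch: one Ingress covering both hostnames, served by the same ingress
# controller behind a single ELB. Route53 then aliases both
# production.somesite.com and staging.somesite.com to that ELB's DNS name.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: somesite                       # placeholder
spec:
  ingressClassName: nginx              # assuming ingress-nginx
  rules:
    - host: production.somesite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: production-svc   # placeholder Service
                port:
                  number: 80
    - host: staging.somesite.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: staging-svc      # placeholder Service
                port:
                  number: 80
```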