How to expose a Kubernetes service on AWS using `service.spec.externalIPs` and not `--type=LoadBalancer`? - amazon-web-services

I've deployed a Kubernetes cluster on AWS using kops and I'm able to expose my pods using a service with --type=LoadBalancer:
kubectl run sample-nginx --image=nginx --replicas=2 --port=80
kubectl expose deployment sample-nginx --port=80 --type=LoadBalancer
However, I cannot get it to work by specifying service.spec.externalIPs with the public IP of my master node.
I've allowed ingress traffic on the specified port and used https://kubernetes.io/docs/concepts/services-networking/service/#external-ips as documentation.
Can anyone clarify how to expose a service on AWS without using the cloud provider's native load balancer?

If you want to avoid using a LoadBalancer, then you can use the NodePort type of service.
NodePort exposes the service on each node's IP at a static port (the NodePort).
A ClusterIP service, to which the NodePort service routes, is created automatically. You will be able to reach the NodePort service from outside the cluster by requesting:
<NodeIP>:<NodePort>
That means that if you access any node on that port, you will be able to reach your service. It is worth remembering that NodePorts are high-numbered ports (30000-32767).
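For example, reusing the deployment from the question, you could expose it as a NodePort service instead (a sketch; the node port is auto-assigned from that range unless you set one explicitly):
kubectl delete service sample-nginx   # remove the LoadBalancer service created earlier
kubectl expose deployment sample-nginx --port=80 --type=NodePort
kubectl get service sample-nginx      # note the assigned node port in the PORT(S) column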
Coming back specifically to AWS, here is their official document on how to expose a service, with NodePort explained.
Do note the very important information there about enabling the ports:
Note: Before you access NodeIP:NodePort from an outside cluster, you must enable the security group of the nodes to allow
incoming traffic through your service port.
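For example, with the AWS CLI you could open the NodePort range on the nodes' security group (the group ID and CIDR below are placeholders; scope the CIDR to the clients that actually need access):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30000-32767 \
  --cidr 203.0.113.0/24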
Let me know if this helps.

Target health check fails - AWS Network Load Balancer

NOTE: I tried to include screenshots but stackoverflow does not allow me to add images with preview so I included them as links.
I deployed a web app on AWS using kOps.
I have two nodes and set up a Network Load Balancer.
The target group of the NLB has two nodes (each node is an instance made from the same template).
Load balancer actually seems to be working after checking ingress-nginx-controller logs.
The requests are being distributed over pods correctly. And I can access the service via ingress external address.
But when I go to AWS Console / Target Group, one of the two nodes is marked as unhealthy, and I am concerned about that.
Nodes are running correctly.
I tried to execute sh into nginx-controller and tried curl to both nodes with their internal IP address.
For the healthy node, I get nginx response and for the unhealthy node, it times out.
I do not know why nginx responds on one of the nodes but not on the other.
Could anybody let me know the possible reasons?
I had exactly the same problem before, and this should be documented somewhere by AWS or Kubernetes. The answer below is copied from AWS Premium Support.
Short description
The NGINX Ingress Controller sets the spec.externalTrafficPolicy option to Local to preserve the client IP. Also, requests aren't routed to unhealthy worker nodes. The following troubleshooting implies that you don't need to maintain the cluster IP address or preserve the client IP address.
Resolution
If you check the ingress controller service you will see the External Traffic Policy field set to Local.
$ kubectl -n ingress-nginx describe svc ingress-nginx-controller
Output:
Name: ingress-nginx-controller
Namespace: ingress-nginx
...
External Traffic Policy: Local
...
This Local setting drops packets that are sent to Kubernetes nodes that aren't running instances of the NGINX Ingress Controller. Assign NGINX pods (from the Kubernetes website) to the nodes that you want to schedule the NGINX Ingress Controller on.
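For example, one way to do that (a sketch: the ingress=nginx label is an arbitrary example you would add yourself, and ingress-nginx-controller is the default deployment name, so adjust both to your install) is to label the target nodes and add a matching nodeSelector to the controller deployment:
kubectl label node <node-name> ingress=nginx
kubectl -n ingress-nginx patch deployment ingress-nginx-controller \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress":"nginx"}}}}}'
Alternatively, running the controller as a DaemonSet puts an instance on every node, so no node fails the NLB health check while keeping externalTrafficPolicy set to Local.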
Update the spec.externalTrafficPolicy option to Cluster:
$ kubectl -n ingress-nginx patch service ingress-nginx-controller -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
Output:
service/ingress-nginx-controller patched
By default, NodePort services perform source address translation (from the Kubernetes website). For NGINX, this means that the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request. If you set the value of the externalTrafficPolicy field in the ingress-nginx service specification to Cluster, then you can't maintain the source IP address.

Istio traffic management with nginx-ingress working but only for port 80

I've seen something strange: I've been able to have an nginx-ingress with an injected sidecar (i.e. part of the mesh) successfully route traffic that it receives into the cluster based on a k8s Ingress definition, and then apply Istio traffic routing to route that traffic as desired internally. However, this only works when the traffic is sent to the k8s services via port 80, and only when that is a port that is NOT served by the associated k8s service. This tells me my success is likely some kind of hack.
I'm asking if anyone can point out where I'm going wrong and/or why this is working. (I need to use the nginx ingress here, I can't switch to using istio-ingressgateway for this.)
My configuration / ability to reproduce this is documented in full on this github project: https://github.com/bob-walters/nginx-istio which I've created to provide a way to repeat this setup.
My setup:
a standard Istio installation in a k8s cluster (docker desktop) with the namespaces configured to do automatic sidecar injection.
an nginx-ingress deployment (file) with injected istio sidecar.
configured the nginx-ingress with these values in order to ensure that the sidecar would not try to handle inbound traffic but should permit the outbound traffic to go to the sidecar:
podAnnotations:
  traffic.sidecar.istio.io/includeInboundPorts: ""
  traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
A set of (demo) services based on podinfo representing the different services that I want to route between via Istio virtual services. Each serving on port 9898 with type: ClusterIP (i.e. only accessible via ingress)
A k8s Ingress definition (file) for the nginx-ingress which carries out the routing for some fictitious hostnames to the different podinfo deployments. The ingress includes the following specific annotations (a sketch of the Ingress follows this list):
The annotation nginx.ingress.kubernetes.io/service-upstream: "true" is set in order to ensure that the nginx-ingress uses the cluster IP address, rather than individual pod IP addresses, when forwarding traffic.
The annotation nginx.ingress.kubernetes.io/upstream-vhost: nginx-cache-v2.whitelabel-dev.svc.cluster.local is NOT set. Many articles will indicate that you should typically set this in combination with the above, but setting this has the effect of altering the Host header to the value specified, and Istio routes based on the Host header, so setting this would require that all Istio routing rules be specified in terms of those hostnames and not the original hostnames. More details on this can be found at: https://github.com/kubernetes/ingress-nginx/issues/3171
Finally: a Virtual Service (file) for one of the hosts (same hostname given in the ingress definition) which is meant to apply once the traffic reaches the Istio cluster, and carries out routing based on a cookie header. (It's doing weighted service shifting with a cookie to pin user sessions.)
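For reference, here is a sketch of what such an Ingress might look like (the hostname and service name are illustrative placeholders, not taken from the repo):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo-ingress
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: podinfo-v1.example.com        # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: podinfo-v1            # illustrative service name
            port:
              number: 9898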
Here's the oddity:
The Istio traffic management seems to apply correctly if the target port of the ingress is 80. If it's 9898 (as you would expect because that is the service's available port), the Istio traffic management doesn't seem to apply at all.
This is what I'm seeing as I try varying the port numbers:
Target Port of Ingress Rule | K8s Service Port | Virtual Service Port | Result
80   | 9898 | not set | virtual service works as desired
9898 | 9898 | not set | routes to K8s Service; Virtual Service has no effect
8080 | 9898 | not set | fails: timeout/502 while attempting to invoke service
9898 | 9898 | 9898    | routes to K8s Service; Virtual Service has no effect
443  | 9898 | not set | fails: timeout/502 while attempting to invoke service
I'm really confused as to why this is not working with port 9898, but is working for port 80, especially given that K8s reports my ingress definition as invalid. My understanding of the routing is that the inbound traffic would go to the 'controller' container in the nginx-ingress service, bypassing the istio proxy as long as it comes in on ports 80 or 443. The outbound traffic should all be going through the proxy destined for the ClusterIP addresses of the k8s services, but with the 'Host' header still containing the original requested host. Thus Istio should be able to handle its routing responsibilities based on Host + Port, and does so, but only if I am routing into the mesh with port 80.
Any help greatly appreciated!
I struggled with this some more and eventually got it working.
There are some specific (non-intuitive) things that have to be correctly lined up for virtual services to work with traffic handled by an nginx-ingress. The details are at the README.md at https://github.com/bob-walters/nginx-istio

Istio ingress gateway : domain name and port forwarding

I have set up an Istio service mesh. It works fine so far, but from outside I can only access it with the port number, like http://www.mytest.com:41333. What do I have to do to forward 80 to 41333 so that I can access it at http://www.mytest.com?
Here is my Gateway :
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mytest-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "www.mytest.com"
Not sure what to do...
I assume your istio ingress gateway service type is NodePort; if your istio ingress gateway is NodePort, then you have to use http://www.mytest.com:41333.
If you want to use http://www.mytest.com then you would have to change it to LoadBalancer.
You can check whether your istio ingress gateway service is NodePort with:
kubectl get svc -n istio-system
and look at the istio ingress gateway's type.
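If it is NodePort and you want to switch it, patching the service type should work, for example (istio-ingressgateway is the default service name; adjust it if your installation differs):
kubectl -n istio-system patch svc istio-ingressgateway -p '{"spec": {"type": "LoadBalancer"}}'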
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
As mentioned in the Istio documentation:
If the EXTERNAL-IP value is <none> (or perpetually <pending>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service's node port.
If you use a cloud like AWS, you can configure Istio with an AWS Load Balancer using the appropriate annotations.
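For example, here is a sketch of the relevant Service fields for fronting the gateway with an AWS NLB (the names are the Istio defaults, and the annotation typically needs to be present when the Service is first created; adjust to your installation):
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer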
On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's .status.loadBalancer
If it's on premise, like minikube, then you could take a look at MetalLB.
MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.
Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.
Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.
MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.
You can read more about it in the link below:
https://medium.com/@emirmujic/istio-and-metallb-on-minikube-242281b1134b

How to configure an AWS Elastic IP to point to an OpenShift Origin running pod?

We have set up OpenShift Origin on AWS using this handy guide. Our eventual
hope is to have some pods running REST or similar services that we can access
for development purposes. Thus, we don't need DNS or anything like that at this
point, just a public IP with open ports that points to one of our running pods.
Our first proof of concept is trying to get a jenkins (or even just httpd!) pod
that's running inside OpenShift to be exposed via an allocated Elastic IP.
I'm not a network engineer by any stretch, but I was able to successfully get
an Elastic IP connected to one of my OpenShift "worker" instances, which I
tested by sshing to the public IP allocated to the Elastic IP. At this point
we're struggling to figure out how to make a pod visible at that allocated Elastic IP,
however. We've tried a kubernetes LoadBalancer service, a kubernetes Ingress,
and configuring an AWS Network Load Balancer, all without being able to
successfully connect to 18.2XX.YYY.ZZZ:8080 (my public IP).
The most promising attempt was using oc port-forward, which seemed to get at least part way
through, but frustratingly hangs without returning:
$ oc port-forward --loglevel=7 jenkins-2-c1hq2 8080 -n my-project
I0222 19:20:47.708145 73184 loader.go:354] Config loaded from file /home/username/.kube/config
I0222 19:20:47.708979 73184 round_trippers.go:383] GET https://ec2-18-2AA-BBB-CCC.us-east-2.compute.amazonaws.com:8443/api/v1/namespaces/my-project/pods/jenkins-2-c1hq2
....
I0222 19:20:47.758306 73184 round_trippers.go:390] Request Headers:
I0222 19:20:47.758311 73184 round_trippers.go:393] X-Stream-Protocol-Version: portforward.k8s.io
I0222 19:20:47.758316 73184 round_trippers.go:393] User-Agent: oc/v1.6.1+5115d708d7 (linux/amd64) kubernetes/fff65cf
I0222 19:20:47.758321 73184 round_trippers.go:393] Authorization: Bearer Pqg7xP_sawaeqB2ub17MyuWyFnwdFZC5Ny1f122iKh8
I0222 19:20:47.800941 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
I0222 19:20:47.800963 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
( oc port-forward hangs at this point and never returns)
We've found a lot of information about how to get this working under GKE, but
nothing that's really helpful for getting this working for OpenShift Origin on
AWS. Any ideas?
Update:
So we realized that sysdig.com's blog post on deploying OpenShift Origin on AWS was missing some key AWS setup information, so based on OpenShift Origin's Configuring AWS page, we set the following env variables and re-ran the ansible playbook:
$ export AWS_ACCESS_KEY_ID='AKIASTUFF'
$ export AWS_SECRET_ACCESS_KEY='STUFF'
$ export ec2_vpc_subnet='my_vpc_subnet'
$ ansible-playbook -c paramiko -i hosts openshift-ansible/playbooks/byo/config.yml --key-file ~/.ssh/my-aws-stack
I think this gets us closer, but creating a load-balancer service now gives us an always-pending IP:
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-lb 172.30.XX.YYY <pending> 8080:31338/TCP 12h
The section on AWS Applying Configuration Changes seems to imply I need to use AWS Instance IDs rather than hostnames to identify my nodes, but I tried this and OpenShift Origin fails to start if I use that method. Still at a loss.
It may not satisfy the "Elastic IP" part, but how about using the AWS cloud provider's ELB to expose the IP/port of the pod via a Service of type LoadBalancer?
Make sure to configure the AWS cloud provider for the cluster (see References below).
Create a svc to the pod(s) with type LoadBalancer.
For instance, to expose a Dashboard via AWS ELB:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer   # <-----
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Then the svc will be exposed as an ELB and the pod can be accessed via the ELB public DNS name a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com.
$ kubectl (oc) get svc kubernetes-dashboard -n kube-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard LoadBalancer 10.100.96.203 a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com 443:31636/TCP 16m k8s-app=kubernetes-dashboard
References
K8S AWS Cloud Provider Notes
Reference Architecture OpenShift Container Platform on Amazon Web Services
DEPLOYING OPENSHIFT CONTAINER PLATFORM 3.5 ON AMAZON WEB SERVICES
Configuring for AWS
Check this guide out: https://github.com/dwmkerr/terraform-aws-openshift
It's got some significant advantages vs. the one you're referring to in your post. Additionally, it has a clear terraform spec that you can modify to use an Elastic IP (haven't tried that myself, but it should work).
Another way to "lock" your access to the installation is to re-code the assignment of the public URL to the master instance in the terraform script, e.g., to a domain that you own. (The default script sets it to an external IP-based value with "xip.io" added, which works great for testing.) Then set up a basic ALB that forwards HTTPS 443 and 8443 to the master instance that the install creates (you can do it manually after the install is completed; you also need a second dummy subnet, and dummy-up the health check as well) and link the ALB to your domain via Route53. You can even use free Route53 wildcard certs with this approach.

I have set up a Kubernetes cluster on two EC2 instances & dashboard but I'm not able to access the UI for the Kubernetes dashboard in a browser

I have set up a Kubernetes (1.9) cluster on two EC2 servers (Ubuntu 16.04) and have installed the dashboard. The cluster is working fine and I get output when I do curl localhost:8001 on the master machine, but I'm not able to access the UI for the Kubernetes dashboard in my laptop's browser with masternode_public_ip:8001.
My security group contains my machine's IP.
Both the master and slave node are in ready state.
I know there are a lot of other ways to deploy an application on a Kubernetes cluster; however, I want to explore this particular option for POC purposes.
I need to access the Kubernetes dashboard UI and the nginx application which is deployed on this cluster.
So, my question: is there something else I need to add to my security group,
or is it because I need to do something more on my master machine?
Also, it would be great if someone could shed some light on private and public IPs, which one should be used to access the application, and how these are related.
Here is a screenshot of the deployment details (describe deployment).
This is an extensive topic ranging from Kubernetes Services (NodePort or LoadBalancer for this case) to Ingress Controllers and such. But there is a simple, quick and clean way to access your dashboard without all that.
Use either kubectl proxy or kubectl port-forward to access the dashboard via the embedded kube-apiserver proxy, or forward directly from localhost to the pod itself.
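For example (a sketch, assuming the dashboard is deployed as kubernetes-dashboard in the kube-system namespace, which was the default for 1.9-era installs; adjust names to your setup). Run the proxy on your laptop with your kubeconfig pointing at the cluster, since it listens only on localhost:
kubectl proxy
# then browse http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Or forward a local port directly to the dashboard pod:
kubectl -n kube-system port-forward <dashboard-pod-name> 8443:8443
# then browse https://localhost:8443/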
Found out the answer. Sorry for the delayed reply.
I was trying to access the web application through its container's port, but in Kubernetes there is the concept of NodePort: if your container is running on port 8080, the service exposes it on a node port in the 30000-32767 range.
All you need to do is add the NodePort details to your service manifest
and expose the service:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001