I am trying out the Ingress feature in a GKE cluster. These are the steps I followed:
1. Create a deployment with the command below:
kubectl create deployment hello --image=gcr.io/google-samples/hello-app:2.0
2. Expose the deployment as a NodePort service:
kubectl expose deployment hello --port=8080 --type=NodePort
3. My Ingress manifest is as follows:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.class: gce
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello
          servicePort: 8080
$ kubectl get services
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
hello   NodePort   10.0.41.132   <none>        8080:30820/TCP   113m
$ kubectl get ingress
NAME            HOSTS   ADDRESS    PORTS   AGE
basic-ingress   *       35.X.X.X   80      26m
But when I access the external IP using curl, it returns 404 Not Found.
An error can also be seen in the GKE console.
I think I am missing something in the Ingress definition. Please guide me on how to fix this.
The image definition has been taken from this guide:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
I have tried to create the same Ingress from scratch (no pre-existing cluster, Ingress, or Service), and I was able to create it and curl it successfully. These were the steps:
1.- Create a cluster (the details do not matter; create it however you want)
2.- Connect to the cluster and install kubectl: sudo apt-get install kubectl
3.- kubectl create deployment hello --image=gcr.io/google-samples/hello-app:2.0
4.- kubectl expose deployment hello --port=8080 --type=NodePort
5.- Create the Ingress as follows (without annotations), as per Creating an Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: hello
    servicePort: 8080
6.- Review your Ingress: kubectl get ingress basic-ingress
#cloudshell:$ kubectl get ingress basic-ingress
NAME            HOSTS   ADDRESS          PORTS   AGE
basic-ingress   *       130.211.xx.xxx   80      5m46s
7.- And now it works when I perform the curl:
#cloudshell:$ curl http://130.211.xx.xxx
Hello, world!
Version: 2.0.0
Hostname: hello-86dbf5b7c6-f7qgl
You were using Ingress annotations, which is another way to create Ingress resources, but a little more advanced. My suggestion is to create it as simply as possible first.
Please try it this way and let me know how it goes.
The same YAML definitions were failing for me in a Shared VPC. This was resolved after adding the firewall rule below:
gcloud compute firewall-rules create k8s-fw-l7--60cada75751e6d79 \
  --network <SharedVPC> \
  --description "GCE L7 firewall rule" \
  --allow tcp:30000-32767 \
  --source-ranges 130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
  --target-tags gke-privatetestgkecluster-cf899a18-node \
  --project <Project>
https://cloud.google.com/load-balancing/docs/health-checks
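To verify the rule afterwards, a listing like this should show it (a sketch; the project placeholder matches the command above):

gcloud compute firewall-rules list \
  --filter="name~k8s-fw" \
  --project <Project>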
I am following this AWS guide: https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-fargate/ to set up my Kubernetes cluster behind an ALB.
I installed the AWS ALB controller on my EKS cluster with the steps below:
helm repo add eks https://aws.github.io/eks-charts
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --set clusterName=YOUR_CLUSTER_NAME \
  --set serviceAccount.create=false \
  --set region=YOUR_REGION_CODE \
  --set vpcId=<VPC_ID> \
  --set serviceAccount.name=aws-load-balancer-controller \
  -n kube-system
I then want to deploy my Ingress configuration:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/success-codes: 200,302
    alb.ingress.kubernetes.io/target-type: instance
    kubernetes.io/ingress.class: alb
  name: staging-ingress
  namespace: staging
  finalizers:
  - ingress.k8s.aws/resources
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
        path: /api/v1/price
Everything looks fine. However, when I run the command below to deploy my Ingress:
kubectl apply -f ingress.staging.yaml -n staging
I get the following error:
Error from server (InternalError): error when creating "ingress.staging.yaml": Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": the server could not find the requested resource
There are very few similar issues on Google, and none of them helped me. Any idea what the problem is?
K8s version: 1.18
This security group rule solved it for me:
node_security_group_additional_rules = {
  ingress_allow_access_from_control_plane = {
    type                          = "ingress"
    protocol                      = "tcp"
    from_port                     = 9443
    to_port                       = 9443
    source_cluster_security_group = true
    description                   = "Allow access from control plane to webhook port of AWS load balancer controller"
  }
}
I would suggest taking a look at the ALB controller logs. The CRDs you are using are for the v1beta1 API group, while the latest chart (aws-load-balancer-controller v2.4.0) registers a webhook for the v1 API group.
If you look at the ALB controller startup logs, you should see a line similar to one of the messages below.
v1beta1
{"level":"info","ts":164178.5920634,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/validate-networking-v1beta1-ingress"}
v1
{"level":"info","ts":164683.0114837,"logger":"controller-runtime.webhook","msg":"registering webhook","path":"/validate-networking-v1-ingress"}
If that is the case, you can fix the problem by using an earlier version of the controller or by installing the newer version of the CRDs.
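For example, a quick way to see which API version the webhook was registered under is to grep the controller startup logs (a sketch, assuming the Helm release name aws-load-balancer-controller in kube-system, as in the question):

kubectl logs -n kube-system deployment/aws-load-balancer-controller | grep "registering webhook"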
I deployed an EKS cluster in AWS. I'd like to create an ALB in front of my cluster. I used the command below to create a service account:
eksctl create iamserviceaccount \
  --namespace default \
  --name alb-ingress-controller \
  --cluster $componentName \
  --attach-policy-arn $servicePolicyArn \
  --approve \
  --override-existing-serviceaccounts
Below is the Ingress I created in Kubernetes:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: es-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: es-entrypoint
          servicePort: 80
After applying the config, I get an empty address when I run:
$ kubectl get ingress/es-ingress
NAME         CLASS    HOSTS   ADDRESS   PORTS   AGE
es-ingress   <none>   *                 80      2d5h
I am able to see the service account:
$ kubectlaws get serviceaccount alb-ingress-controller
NAME                     SECRETS   AGE
alb-ingress-controller   1         31h
What did I do wrong?
First of all, it would be good to know how you set up the ALB.
A service account is not required; instead, you need an Ingress controller. Without one, your Ingress resource is useless. There are a lot of different Ingress controllers; one of the easiest is ingress-nginx. Check this comparison: https://docs.google.com/spreadsheets/d/191WWNpjJ2za6-nbG4ZoUMXMpUK8KlCIosvQB0f-oq3k/edit#gid=907731238
So, the easiest way is:
Set up an Ingress controller
Configure the Ingress controller as a NodePort service on a port like 30080 (see the sketch below)
Set up your ALB and configure the AWS target group to use NodePort 30080
Set up the Ingress resource like above (and you don't need the wildcard in the path)
Now all the traffic from the ALB will be redirected to your NodePort (the Ingress controller). The Ingress resource is responsible for configuring the Nginx configuration inside the Ingress controller. That's it!
The whole traffic flow could look like this: client → ALB → target group (NodePort 30080) → Ingress controller → Service → Pods.
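As a sketch of the NodePort step, assuming a standard ingress-nginx install in the ingress-nginx namespace (the Service name and selector labels are assumptions; adjust them to your installation):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller        # assumed name from a default install
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed label; check your deployment
  ports:
  - name: http
    port: 80
    targetPort: http      # the controller's named container port
    nodePort: 30080       # the port the ALB target group points at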
If this doesn't work:
Check the Ingress controller logs: kubectl logs --follow --namespace YOURNAMESPACE NAMEOFTHEINGRESSCONTROLLER
If you don't get any logs there, enable the AWS ALB access logs and check them (see the sketch below).
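A sketch of enabling ALB access logs with the AWS CLI (the load balancer ARN and S3 bucket are placeholders):

aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn <ALB_ARN> \
  --attributes Key=access_logs.s3.enabled,Value=true Key=access_logs.s3.bucket,Value=<LOG_BUCKET>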
I hope that was helpful; if not, please provide some more detailed information about your infrastructure.
I've created a Kubernetes cluster on AWS EC2 instances using kubeadm, but when I try to create a service of type LoadBalancer, I get a pending EXTERNAL-IP status:
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP          123m
nginx        LoadBalancer   10.107.199.170   <pending>     8080:31579/TCP   45m52s
My create command is:
kubectl expose deployment nginx --port 8080 --target-port 80 --type=LoadBalancer
I'm not sure what I'm doing wrong.
What I expect to see is an EXTERNAL-IP address given for the load balancer.
Has anyone had this issue and successfully solved it?
Thanks.
You need to set up the interface between Kubernetes and AWS, which is the AWS cloud provider controller.
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
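Worker nodes typically need the same kubelet flag when they join the cluster; a minimal sketch using the same kubeadm API version as above:

apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws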
More details can be found:
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/
https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/
https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2
Once you finish this setup, you will not only have an AWS load balancer created for each Kubernetes service of type LoadBalancer, but you will also be able to control many settings using annotations. For example:
apiVersion: v1
kind: Service
metadata:
  name: example
  namespace: kube-system
  labels:
    run: example
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx # replace this value
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 5556
    protocol: TCP
  selector:
    app: example
Different settings can be applied to a load balancer service in AWS using annotations.
To create a Kubernetes cluster on AWS using EC2, you need to take care of some configuration to make it work as expected; that is why your service is not exposed with an external IP.
You need to get the public IP of the EC2 instance on which your cluster deployed the Nginx pod, and then edit the Nginx service to add that IP as an external IP:
kubectl edit service nginx
This will open the service definition in a terminal editor, where you can add the external IP:
  type: LoadBalancer
  externalIPs:
  - 1.2.3.4
where 1.2.3.4 is the public IP of the EC2 instance.
Then make sure your security group allows inbound traffic on your port (31579); a sketch of doing this with the AWS CLI follows.
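A sketch of opening that port with the AWS CLI (the security group ID is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id <NODE_SECURITY_GROUP_ID> \
  --protocol tcp \
  --port 31579 \
  --cidr 0.0.0.0/0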
Now you are ready to use the service from any browser: open 1.2.3.4:31579.
I have a Kubernetes application using AWS EKS, with the details below:
Cluster:
+ Kubernetes version: 1.15
+ Platform version: eks.1
Node Groups:
+ Instance Type: t3.medium
+ 2 (minimum) - 2 (maximum) - 2 (desired) configuration
[Pods]
+ 2 active pods
[Service]
+ Configured Type: ClusterIP
+ metadata.name: k8s-eks-api-service
[rbac-role.yaml]
https://pastebin.com/Ksapy7vK
[alb-ingress-controller.yaml]
https://pastebin.com/95CwMtg0
[ingress.yaml]
https://pastebin.com/S3gbEzez
When I pull the Ingress details, the values below come back with no address:
Host: *
ADDRESS:
My goal is to find out why the address has no value. I expect a private or public address that other services in my application can use.
The solution that fitted my case was adding ingressClassName in ingress.yaml, or configuring a default IngressClass.
Add ingressClassName in ingress.yaml:
# ingress.yaml
metadata:
  name: ingress-nginx
  ...
spec:
  ingressClassName: nginx   # <-- add this
  rules:
  ...
or
Or edit the IngressClass YAML:
$ kubectl edit ingressclass <ingressClass Name> -n <ingressClass namespace>
# ingressClass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"   # <-- add this
  ...
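For reference, a complete IngressClass marked as the default could look like this (a sketch; the controller value assumes ingress-nginx):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"  # make this the default class
spec:
  controller: k8s.io/ingress-nginx  # controller identifier used by ingress-nginx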
In order for your Kubernetes cluster to be able to get an address, you need to be able to manage Route 53 from within the cluster. For this task I would recommend ExternalDNS.
In a broader sense, ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.
source: ExternalDNS
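A minimal sketch of the ExternalDNS container args for Route 53 (the flags are standard ExternalDNS flags; the domain filter and owner ID are placeholders for your setup):

args:
- --source=ingress            # watch Ingress resources for hostnames
- --provider=aws              # manage records in Route 53
- --domain-filter=example.com # placeholder: limit to your hosted zone
- --txt-owner-id=my-cluster   # placeholder: marks records owned by this cluster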
This happened to me too: after all the setup, I was not able to see the Ingress address. The best way to debug this issue is to check the logs of the Ingress controller. You can do this by:
Getting the Ingress controller pod name using: kubectl get po -n kube-system
Checking the logs of the pod using: kubectl logs <po_name> -n kube-system
This will point you to the exact issue as to why you are not seeing the address.
I am trying to connect my Ingress to a static IP. I seem to be following all the tutorials, but I still cannot attach my static IP to the Ingress. My Ingress file is as follows (referring to the static IP "test-ip"):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-web
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "test-ip"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - http:
      paths:
      - path: /api/
        backend:
          serviceName: api-cluster-ip-service
          servicePort: 5005
      - path: /
        backend:
          serviceName: web-cluster-ip-service
          servicePort: 80
However, when I run
kubectl get ingress ingress-web
it returns
NAME          HOSTS   ADDRESS   PORTS   AGE
ingress-web   *                 80      4m
without giving the address. In the VPC network console under External IP addresses, the static IP is there and it is Global, but it keeps saying "In use by: None".
gcloud compute addresses describe test-ip --global
gives
address: 34.240.xx.xxx
creationTimestamp: '2019-03-26T00:34:26.086-07:00'
description: ''
id: '536303927960423409'
kind: compute#address
name: test-ip
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/my-project-adbc8/global/addresses/test-ip
status: RESERVED
What am I missing here?
I ran into this issue. I believe it has been fixed by this pull request.
Changing
kubernetes.io/ingress.global-static-ip-name
to
kubernetes.io/ingress.regional-static-ip-name
worked for me.
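For completeness, a regional static IP would be reserved like this (a sketch; the name and region are placeholders):

gcloud compute addresses create test-ip \
  --region us-central1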
I've spent hours trying to figure the issue out.
It simply seems like a bug in GKE.
What solved it was:
Starting the Ingress with no static IP
Going to the cloud console on the web under VPC Network > External IP addresses
Waiting for the Ingress IP to show up
Setting it as static and giving it a name (this can also be done from the CLI; see the sketch below)
Adding kubernetes.io/ingress.global-static-ip-name: <ip name> to the Ingress YAML and applying it.
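A sketch of the promotion step with gcloud, where the address passed in is the ephemeral IP the Ingress received (the name and IP are placeholders):

gcloud compute addresses create <ip name> \
  --addresses 35.x.x.x \
  --global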
You have to make sure the IP you created in GCP is Global and not Regional in order to use the following annotation in your ingress:
kubernetes.io/ingress.global-static-ip-name
I had the same problem, but after some research and testing I managed to solve this issue. These are the steps I took:
First you need to create a Global static IP address on GCP.
I happened to use Terraform to do this; see the example below,
resource "google_compute_global_address" "static" {
name = "global-test-ip"
project = var.gcp_project_id
address_type = "EXTERNAL"
}
based on this documentation: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_global_address
You could however use the GCP console to do this.
Note: I created this Global Static IP in the same GCP project as my GKE cluster.
Once I had completed the creation of the global static IP, I added the following annotation to the Kubernetes Ingress YAML file and applied it (i.e. kubectl apply -f ingress.yaml):
annotations:
  kubernetes.io/ingress.global-static-ip-name: "global-test-ip"
Note: it took a few minutes for the Ingress and Google Load balancer to update after I applied this ingress change.
The first thing you should check is the status of the IP, e.g.
gcloud compute addresses describe traefik --global
You should see something along the lines of:
address: 34.111.200.XXX
addressType: EXTERNAL
creationTimestamp: '2022-07-25T14:06:48.827-07:00'
description: ''
id: '5625073968713218XXX'
ipVersion: IPV4
kind: compute#address
name: traefik
networkTier: PREMIUM
selfLink: https://www.googleapis.com/compute/v1/projects/contrawork/global/addresses/traefik
status: RESERVED
Your Ingress should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: 'gce'
kubernetes.io/ingress.global-static-ip-name: 'traefik'
name: secondary-ingress
spec:
defaultBackend:
service:
name: 'traefik'
port:
number: 80
After this is deployed, within 5 minutes you should see the status change to IN_USE.
If not, I would attempt to delete and re-create the Ingress resource.
If it still does not happen, I would check against the documentation whether you have properly configured the cluster, e.g. ensure that the GKE cluster has "HTTP Load Balancing" enabled.
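For that last check, a quick gcloud sketch (the cluster name and zone are placeholders):

gcloud container clusters describe <CLUSTER_NAME> \
  --zone <ZONE> \
  --format="value(addonsConfig.httpLoadBalancing.disabled)"

An empty result means the add-on is enabled; True means it is disabled.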