Kubernetes forbidden: User "system:anonymous" cannot get path "/"

I'm struggling to expose my app over the Internet when deployed to AWS EKS.
I have created a deployment and a service, and I can see both of these running when using kubectl. I can also see that the app has successfully connected to an external database, as it runs a script on startup that initialises said database.
My issue arises when trying to access the app over the internet. I have tried accessing the cluster endpoint, and I am getting this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
However, if I access the "/readyz" path I get "ok" returned.
"/version" returns the following:
{
  "major": "1",
  "minor": "16+",
  "gitVersion": "v1.16.8-eks-e16311",
  "gitCommit": "e163110a04dcb2f39c3325af96d019b4925419eb",
  "gitTreeState": "clean",
  "buildDate": "2020-03-27T22:37:12Z",
  "goVersion": "go1.13.8",
  "compiler": "gc",
  "platform": "linux/amd64"
}
My deployment.yml file contains the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  labels:
    app: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
      - name: client
        image: image/repo
        ports:
        - containerPort: 80
        imagePullPolicy: Always
My service.yml:
apiVersion: v1
kind: Service
metadata:
  name: client
  labels:
    run: client
spec:
  type: LoadBalancer
  ports:
  - name: "80"
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    run: client
I can see the Load Balancer has been created in the AWS console, and I have tried updating the security group of the LB so it can talk to the cluster endpoint. The LB dashboard shows the one attached instance as 'OutOfService', and under the monitoring tab I can see one Unhealthy Host.
I've tried accessing the Load Balancer endpoint as provided in the EC2 area of the console (this matches what is returned from kubectl get services as the EXTERNAL-IP of the LB service) and I'm getting an empty response from there:
curl XXXXXXX.eu-west-2.elb.amazonaws.com:80
curl: (52) Empty reply from server
This is the same when accessing in a web browser.
I seem to be going round in circles with this one; any help at all would be greatly appreciated.

I've tried accessing the Load Balancer endpoint
You are accessing the EKS URL, which is the kubernetes apiserver endpoint, and not the LoadBalancer that was (hopefully) created for your client Service.
You will want to run kubectl get -o wide svc client, and if it was successful in provisioning a LoadBalancer for you, then its URL will appear in the output. You can get more details about that situation with kubectl describe svc client, which will include any events that affected it during provisioning.
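As a quick sketch (assuming the Service is named client, as in the manifests above), those checks would look something like:
# Show the Service, including the EXTERNAL-IP column (the ELB hostname once provisioned)
kubectl get svc client -o wide
# Show provisioning details and any events (e.g. failures creating the ELB)
kubectl describe svc client
The hostname under EXTERNAL-IP is the one to curl, not the EKS cluster endpoint.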

Because your EKS instance is OutOfService in the Load Balancer section, you should check which port the Load Balancer is performing its health check on.
You can do that by executing kubectl get svc client -o yaml and looking at the nodePort section.
After that, check that your Load Balancer is doing the health check against that exact port; if not, change it to the correct one.
If the port is correct but the instance is still OutOfService, then I suggest you go to the security group of your EKS instance and allow access to that specific port from the ELB.
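For example (a minimal sketch, again assuming the Service is named client), the nodePort can be pulled out directly:
# Print the nodePort the ELB should be health-checking (assumes a single port entry)
kubectl get svc client -o jsonpath='{.spec.ports[0].nodePort}'
Compare that number with the health check port configured on the ELB in the EC2 console.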

I never got to the bottom of the issue here. I started again and used a pre-made Helm chart for the software I was trying to deploy, and it worked.

Related

AWS Load Balancer Controller on EKS - Sticky Sessions Not Working

I have deployed the AWS Load Balancer Controller on AWS EKS and created a k8s Ingress resource.
I am deploying a Java web application with a k8s Deployment. I want to make sure sticky sessions hold so that my application works.
I have read that if I set the annotation below then sticky sessions will work:
alb.ingress.kubernetes.io/target-type: ip
But I am seeing the ingress route requests to a different replica each time, causing logins to fail as the session cookies are not persisting.
What am I missing here?
alb.ingress.kubernetes.io/target-type: ip is required,
but the annotation that enables stickiness is:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
You can also set the cookie duration:
alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
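For context, a minimal sketch of where these annotations sit on the Ingress (the Ingress name and backend Service name here are hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                 # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=300
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app               # hypothetical backend Service
            port:
              number: 80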
If you want to manage the sticky session at the K8s level, you can use sessionAffinity: ClientIP:
kind: Service
apiVersion: v1
metadata:
  name: service
spec:
  selector:
    app: app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000

AWS all annotations not applied

I am using a yaml config to create a network load balancer in AWS using kubectl.
The load balancer is created successfully and the target groups are attached correctly.
As part of the settings, I have passed the annotations required for AWS, but not all annotations are applied when looking at the load balancer in the AWS console.
The name is not getting set and the load balancer logs are not enabled; I get a load balancer with a random alphanumeric name.
apiVersion: v1
kind: Service
metadata:
  name: test-nlb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: test-nlb # not set
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-2016-08
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:***********:certificate/*********************
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp,http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443,8883
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=dev,app=test, name=test-nlb-dev"
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true" # not set
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "15" # not set
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "random-bucket-name" # not set
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "random-bucket-name/dev/test-nlb-dev" # not set
  labels:
    app: test
spec:
  ports:
  - name: mqtt
    protocol: TCP
    port: 443
    targetPort: 8080
  - name: websocket
    protocol: TCP
    port: 8883
    targetPort: 1883
  type: LoadBalancer
  selector:
    app: test
Can anyone point out what the issue could be here? I am using kubectl v1.19 and Kubernetes v1.19.
I think this is a version problem.
I assume you are running the in-tree cloud controller and not an external one (see here).
The annotation service.beta.kubernetes.io/aws-load-balancer-name is not present even in the master branch of kubernetes.
That does not explain why the other annotations do not work, though. In fact,
here you can see which annotations are supported by Kubernetes 1.19.12, and the other annotations you mention as not working are listed in those sources.
You might find more information in the controller-manager logs.
My suggestion is to disable the in-tree cloud controller in the controller-manager and run the standalone version.
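If you go that route, installing the standalone AWS Load Balancer Controller is roughly this (a sketch only; the chart and release names are the upstream defaults, the cluster name is a placeholder, and the controller additionally needs an IAM role and service account as per its install docs):
# Add the EKS charts repo and install the standalone controller
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=<your-cluster-name>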

Google Cloud Run custom domains do not work with web sockets

I successfully deployed a simple Voila dashboard using Google Cloud Run for Anthos. However, since I created the deployment using a GitLab CI pipeline, by default the service was assigned a long and obscure domain name (e.g. http://sudoku.dashboards-19751688-sudoku.k8s.proteinsolver.org/).
I followed the instructions in mapping custom domains to map a shorter custom domain to the service described above (e.g. http://sudoku.k8s.proteinsolver.org). However, while the static assets load fine from this new custom domain, the interactive dashboard does not load, and the JavaScript console is populated with errors:
default.js:64 WebSocket connection to 'wss://sudoku.k8s.proteinsolver.org/api/kernels/5bcab8b9-11d5-4de0-8a64-399e35258aa1/channels?session_id=7a0eed38-77bb-40e8-ad77-d05632b5fa1b' failed: Error during WebSocket handshake: Unexpected response code: 503
_createSocket # scheduler.production.min.js:10
[...]
Is there a way to get web sockets to work with custom domains? Am I doing something wrong?
TL;DR: the following YAML needs to be applied to make websockets work:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: allowconnect-cluster-local-gateway
  namespace: gke-system
spec:
  workloadSelector:
    labels:
      app: cluster-local-gateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        portNumber: 80
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
          http2_protocol_options:
            allow_connect: true
Here is the explanation.
For the custom domain feature, the request path is
client ---> istio-ingress envoy pods ---> cluster-local-gateway envoy pods ---> user's application.
Specifically, websocket requests need the cluster-local-gateway envoy pods to support the extended CONNECT feature.
The EnvoyFilter yaml enables the extended CONNECT feature by setting allow_connect to true within the cluster-local-gateway pods.
I tried it myself, and it works for me.
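As a usage note (assuming the manifest above is saved as allowconnect-filter.yaml, and that the Istio CRDs in the cluster expose the EnvoyFilter resource to kubectl):
# Apply the filter into gke-system and confirm it was created
kubectl apply -f allowconnect-filter.yaml
kubectl get envoyfilter -n gke-system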
I don't know anything about your GitLab CI pipeline. By default, Knative (Cloud Run for Anthos) assigns external domain names like {name}.{namespace}.example.com where example.com can be customized based on your domain.
You can find this domain in the Cloud Console or with kubectl get ksvc.
First, check whether this domain works correctly with websockets. If it does, then it is indeed a "custom domain" issue. (If you are not sure, please edit your title/question not to mention "custom domains".)
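A sketch of that check (the namespace and domain here are placeholders, and wscat is just one example of a websocket client):
# List Knative services and note the URL column
kubectl get ksvc -n <namespace>
# Then open a websocket against the default domain, e.g. with wscat
wscat -c wss://<default-knative-domain>/api/kernels/<kernel-id>/channels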
Also, you need to explicitly mark your container port as h2c on Knative for websockets to work. See the ports section below, specifically name: h2c:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: h2c
          containerPort: 8080
I also see that the response code to your requests is HTTP 503, likely indicating a server error. Please check your application’s logs.
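For example (a sketch assuming the Knative service is named hello as above; user-container is the default name Knative gives the application container):
# Tail the application container logs for the Knative service's pods
kubectl logs -l serving.knative.dev/service=hello -c user-container --tail=100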

AWS EKS WITH FARGATE PROFILE USING KONG INGRESS- Unable to expose port 80 to public

I deployed the Kong ingress controller on an AWS EKS cluster with the Fargate option.
I am unable to access our application over the internet on the HTTP port.
I keep getting ERR_CONNECTION_TIMED_OUT in the browser.
I followed the Kong deployment as per the steps given at:
https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/eks.md
The kong-proxy service is created without issue, yet its EXTERNAL-IP is still showing pending.
We are able to access our local application on the internal network (by logging on to a running pod) via the kong-proxy CLUSTER-IP without any problem using curl.
An NLB load balancer was also created automatically in the AWS console when we created the kong-proxy service. We are using its DNS name to try to connect from the internet.
Kindly help me understand what could be the problem.
My kong-proxy YAML is:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: kong-proxy
  namespace: kong
spec:
  externalTrafficPolicy: Local
  ports:
  - name: proxy
    port: 80
    protocol: TCP
    targetPort: 80
  - name: proxy-ssl
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: ingress-kong
  type: LoadBalancer
I don't think it's supported at the moment; see https://github.com/aws/containers-roadmap/issues/617

Kubernetes not creating ELB

I'm trying to set up my Kubernetes services to be external by using type: LoadBalancer on AWS. After I create my service using kubectl I can see the change, but no ELB is created, not even asynchronously. Any hints on what could cause this? The pod I'm trying to expose is running a Docker image which exposes a web server on port 8001.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    name: my-service
spec:
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 8001
  selector:
    name: my-service
This was answered by Jan Garaj in "Google Container Engine: Kubernetes is not exposing external IP after creating container" regarding a GCE deployment, and the answer for AWS is the same: you need to wait a few minutes for the reconciler to kick in, notice that an ELB should be created, talk to the AWS APIs, and create it for you.
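A quick way to watch for that (a sketch using the Service name from the question):
# Watch until EXTERNAL-IP changes from <pending> to an ELB hostname
kubectl get svc my-service -w
# If it stays pending, the events here usually explain why the ELB was not created
kubectl describe svc my-service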