Kubernetes/AWS - ELB is not deleted when the service is deleted

I'm using the --cloud-provider=aws flag to integrate Kubernetes with AWS. I'm able to create and expose a service via ELB using the following commands:
kubectl run sample-nginx --image=docker.io/nginx --port=80
kubectl expose deployment/sample-nginx --port=80 --target-port=80 \
--name=sample-nginx-service --type=LoadBalancer && \
kubectl annotate svc sample-nginx-service \
service.beta.kubernetes.io/aws-load-balancer-internal=0.0.0.0/0
This exposes the nginx service via an internal ELB on a private subnet. I'm able to access the service through the ELB as well.
Now, when I delete the service, the service is deleted, but the ELB is not. Here's how I deleted the service:
kubectl delete services sample-nginx-service
Any pointers to what could be going wrong? I did not see any errors in the kube-controller-manager log when I ran the deletion command. What other logs should I be checking?

Upgrading etcd from v3.0.10 to v3.0.17 fixed the issue. I found another log message in the controller logs that pointed to the issue here:
https://github.com/kubernetes/kubernetes/issues/41760
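For anyone else hunting for that message: assuming the controller manager runs as a pod in kube-system (the pod and node names below are placeholders), a minimal way to pull its logs and filter for load-balancer activity is:
# Find the exact controller-manager pod name
kubectl -n kube-system get pods
# Dump its logs and filter for ELB / load-balancer events
kubectl -n kube-system logs kube-controller-manager-<master-node-name> | grep -i loadbalancer
# On kube-up-style masters the log may also be written to a plain file
tail -n 200 /var/log/kube-controller-manager.log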

You seem to have a typo on your service delete command.
It should be kubectl delete services sample-nginx-service.
I just tried this on my AWS cluster and confirmed that the ELB was deleted successfully after running this.
I'm running Kubernetes 1.6.2 with kubectl 1.6.3.

Can you do the same process but with a config file, to see whether it's command-related or config-related?
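If it helps, here is a minimal manifest that should be equivalent to the expose/annotate commands above, assuming kubectl run labelled the pods with run=sample-nginx (the default for that kubectl version). Apply it with kubectl apply -f service.yaml and remove it with kubectl delete -f service.yaml to compare the ELB cleanup behaviour:
apiVersion: v1
kind: Service
metadata:
  name: sample-nginx-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    run: sample-nginx
  ports:
  - port: 80
    targetPort: 80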

Kubernetes cluster hosted in AWS using Kops doesn't provide detailed logs for Pod failure

I created a single-master, two-worker cluster using Kops on AWS. For my experiment, I need to kill a pod on a worker and check the kubelet logs to find out:
When was the pod removed from the service endpoint list?
When was a new pod container recreated?
When was the new pod container assigned its new IP address?
When I created an on-prem cluster using kubeadm, I could see all of this information in the kubelet logs of the worker node whose pod was killed.
In the Kops-created cluster I do not see such detailed kubelet logs, especially logs related to IP address assignment.
How can I get the information mentioned above in a cluster created using Kops?
On machines with systemd, both the kubelet and the container runtime write to journald. If systemd is not present, they write to .log files under /var/log.
You can access the systemd logs with the journalctl command:
journalctl -u kubelet
This information, of course, has to be collected after logging into the desired node.
In Kops on AWS, the kubelet logs are not as descriptive as they are in a Kubernetes cluster created using kubeadm.
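A rough sequence for collecting this on a kops worker (the node address, SSH user, pod name, and time window are placeholders; the login user depends on the AMI kops used, commonly admin or ubuntu):
# Find which node hosted the killed pod
kubectl get pods -o wide
# SSH to that node
ssh admin@<node-external-ip>
# On systemd-based nodes, filter the kubelet journal around the time of the kill
journalctl -u kubelet --since "30 min ago" | grep <pod-name>
# On nodes without systemd, the logs are under /var/log instead
tail -n 200 /var/log/kubelet.log
Raising the kubelet log verbosity (its --v flag) may also be needed before details such as IP assignment show up.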

kOps 1.19 reports error "Unauthorized" when interfacing with AWS cluster

I'm following the kOps tutorial to set up a cluster on AWS. I am able to create a cluster with
kops create cluster
kops update cluster --yes
However, when validating whether my cluster is set up correctly with
kops validate cluster
I get stuck with error:
unexpected error during validation: error listing nodes: Unauthorized
The same error happens in many other kOps operations.
I checked my kOps/K8s version and it is 1.19:
> kops version
Version 1.19.1 (git-8589b4d157a9cb05c54e320c77b0724c4dd094b2)
> kubectl version
Client Version: version.Info{Major:"1", Minor:"20" ...
Server Version: version.Info{Major:"1", Minor:"19" ...
How can I fix this?
As of kOps 1.19 there are two reasons you will suddenly get this error:
If you delete a cluster and reprovision it, your old admin is not removed from the kubeconfig and kOps/kubectl tries to reuse it.
New certificates have a TTL of 18h by default, so you need to reprovision them about once a day.
Both issues above are fixed by running kops export kubecfg --admin.
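In practice that looks something like this; the cluster name and state-store bucket are placeholders, and the explicit duration on --admin is optional (it trades the short default lifetime for a long-lived credential, if your kOps version accepts a duration there):
kops export kubecfg --name mycluster.example.com --state s3://my-kops-state --admin
# or, with a longer validity than the ~18h default:
kops export kubecfg --name mycluster.example.com --state s3://my-kops-state --admin=87600h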
Note that using the default TLS credentials is discouraged. Consider things like using an OIDC provider instead.
Kubernetes v1.19 removed basic auth support, incidentally making the default kOps credentials unable to authorize. To work around this, we will update our cluster to use a Network Load Balancer (NLB) instead of the default Classic Load Balancer (CLB). The NLB can be accessed with non-deprecated AuthZ mechanisms.
After creating your cluster, but before updating the cloud resources (i.e. before running kops update cluster --yes), edit its configuration to use an NLB:
kops edit cluster
Then update your load balancer class to Network:
spec:
  api:
    loadBalancer:
      class: Network
Now update cloud resources with
kops update cluster --yes
And you'll be able to pass AuthZ with kOps on your cluster.
Note that there are several other advantages to using an NLB as well; check the AWS docs for a comparison.
If you have a pre-existing cluster that you want to move to an NLB, there are more steps to follow to ensure clients don't start failing AuthZ, to delete old resources, and so on. You'll find a better guide for that in the kOps v1.19 release notes.
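Putting it together for a fresh cluster, the flow looks roughly like this (the name, state store, and zone are placeholders; the edit step is where you set spec.api.loadBalancer.class to Network as shown above):
kops create cluster --name mycluster.example.com --state s3://my-kops-state --zones us-east-1a
kops edit cluster --name mycluster.example.com --state s3://my-kops-state
kops update cluster --name mycluster.example.com --state s3://my-kops-state --yes
kops validate cluster --name mycluster.example.com --state s3://my-kops-state --wait 10m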

kubectl error while querying URL of service in minikube

I have been trying to run the echo server program (hello world) as part of learning minikube with kubectl.
I was able to run and expose the service using the commands below:
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
However, while trying to get the URL of the above service with minikube service hello-minikube --url, I got the error below:
Error: unknown command "service" for "kubectl"
Did anyone face a similar issue?
I have been following this document: https://github.com/kubernetes/minikube
I also searched for a solution and found a workaround for the time being using the following commands:
minikube ip
gives me the VM IP
kubectl get services
gives the NodePort of the hello-minikube service. So now I'm able to hit http://192.168.99.100:32657/ in the browser.
Note: Your IP address may be different.
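If you prefer to script that lookup rather than read the port off the kubectl output, something like this should work, assuming the service is named hello-minikube and exposes a single port:
# Look up the NodePort and combine it with the VM IP
NODE_PORT=$(kubectl get svc hello-minikube -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$(minikube ip):$NODE_PORT/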
Approach 2:
Found an easy option that opens the service in the browser itself:
minikube service hello-minikube
Hope this helps if someone faces a similar issue.
What command are you running? You should be running minikube service hello-minikube --url directly, without the kubectl prefix.
Use the command below (on Windows):
.\minikube.exe service hello-minikube --url

Istio limit access to Google cloud resources

I have a service running on Google Container Engine (Kubernetes). It accesses Google Cloud Storage and works fine.
On the same Kubernetes cluster, I installed Istio 0.1 following https://istio.io/v-0.1/docs/tasks/installing-istio.html
I deploy my service via kube-inject:
kubectl create -f <(istioctl kube-inject -f myservice.yaml)
But now my service cannot access Google Cloud Storage any more. I get the following error message:
java.lang.IllegalArgumentException: A project ID is required for this service but could not be determined from the builder or the environment. Please set a project ID using the builder.
To me it looks like kube-inject and the sidecar do something that prevents my service from accessing information about the Google Cloud project it is running in. As far as I can see, the sidecar is the only difference.
The service still works when deployed without kube-inject.
What could cause this effect?
You may want to configure access to your external services as explained in Enabling Egress Traffic: either register them as Kubernetes external services, or use istioctl --includeIPRanges to exclude external traffic from being controlled by Istio.
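For the second option, the redeploy could look like the sketch below; the 10.0.0.0/8 range is a placeholder for your cluster's internal IP range, so that only in-cluster traffic is intercepted by the sidecar and calls to Google Cloud Storage bypass Envoy (the flag placement matches the early Enabling Egress Traffic task, so check the docs for your exact Istio version):
kubectl create -f <(istioctl kube-inject -f myservice.yaml --includeIPRanges=10.0.0.0/8)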

How do I bind my kubernetes cluster master to an elastic ip with AWS?

I ran the install script:
export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash
And it set up my cluster just fine. The problem is that the master was not on an Elastic IP, so I went to the VPC settings in the AWS management console and bound it to one. This obviously changed the IP of the master, which I correspondingly changed in .kube/config.
Now whenever I try to do anything with kubectl (e.g. kubectl get pods) I get the error: error: couldn't read version from server: Get https://NEW_IP/api: x509: certificate is valid for OLD_IP, 10.0.0.1, not NEW_IP.
Is there a correct way to bind the master to an elastic IP? How do I fix this?
Solved by running kube-down, then export MASTER_RESERVED_IP=[NEW_IP], and then kube-up.
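Roughly, with the same install scripts (the script paths assume the Kubernetes release that get.k8s.io downloads, and [NEW_IP] is a placeholder for the Elastic IP you allocated):
# Tear down the cluster created by the install script
export KUBERNETES_PROVIDER=aws
cluster/kube-down.sh
# Bring it back up with the master bound to the Elastic IP, so the API server certificate is issued for that address
export MASTER_RESERVED_IP=[NEW_IP]
cluster/kube-up.sh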