Starting Kyma after successful installation - kubectl

How do you start Kyma after a successful installation?
I've just used the command minikube start.
However, it gives me the error below and the Kubernetes dashboard can't run:
E0912 09:36:59.179484 5806 start.go:305] Error restarting cluster: restarting kube-proxy: waiting for kube-proxy to be up for configmap update: timed out waiting for the condition

Successful installation actually means Minikube is already started. Look into https://kyma-project.io/docs/latest/root/kyma#prerequisites and note the difference between the usual minikube start and our run.sh script:
To work with Kyma, use only the provided installation and deinstallation scripts. Kyma does not work on a basic Minikube cluster that you can start using the minikube start command or stop with the minikube stop command. If you don't need Kyma on Minikube anymore, remove the cluster with the minikube delete command.
After a successful installation, Minikube is started and kubectl is configured to use the Minikube cluster. You can run kubectl get pods --all-namespaces and check whether all pods are Ready.
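As a rough sketch of that check (the kyma-system namespace and the 300s timeout are my assumptions, not part of the quoted Kyma docs):
# List everything and look for pods that are not Ready
kubectl get pods --all-namespaces
# Optionally block until all pods in the assumed Kyma namespace report Ready
kubectl wait --for=condition=Ready pods --all -n kyma-system --timeout=300s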

Related

Docker containers on AWS do not start until I run "docker ps" in the terminal

I have a number of containers on Docker in an AWS EC2 instance. All the containers are set to restart=always using this command:
sudo docker update --restart=always 0576df221c0b
But after the AWS Linux host starts, the Docker containers do not start until I run the docker ps command in the terminal. Here is a screenshot of docker ps, taken one hour after the AWS Linux reboot.
Any ideas about what might be causing the problem? Thanks a lot.
Check the status of the Docker service and the Docker systemd socket. Probably the Docker service is down while the systemd socket is enabled for Docker:
systemctl status docker.service
systemctl status docker.socket
When the socket is enabled, the systemd daemon opens listening sockets on behalf of the Docker application and only starts the Docker daemon when a connection comes in. In your case, when you execute docker ps, a connection reaches the listening socket, which in turn starts Docker.
To change this behaviour, enable the Docker service; then all your containers should start on system boot. Run the following command:
systemctl enable docker.service
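As a quick sketch of how to confirm the socket-activation theory and apply the fix (the container ID is the one from the question; enable --now is an optional variant that also starts the service immediately):
# If this prints "disabled" for the service but "enabled" for the socket, socket activation explains the behaviour
systemctl is-enabled docker.service docker.socket
# Enable (and immediately start) the service so containers with restart=always come up at boot
sudo systemctl enable --now docker.service
# Double-check that the restart policy stuck on the container
sudo docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' 0576df221c0b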

kubectl not pulling images from public registry but docker pull works

In a fresh VMware PKS Kubernetes cluster, a secret is created for the private Docker registry and it works as expected. But kubectl is not pulling images from the public registry "https://registry-1.docker.io/v2/".
I am connected to a corporate network, and http_proxy and https_proxy are set to reach the internet.
docker login and docker pull work, but images are not pulled when kubectl deployments are created. The public image "dduportal/bats:0.4.0" fails to pull. The kubectl describe output is copied to a path in GitHub.
I tried adding a secret for the public Docker registry separately, like the private one. Someone pointed out that secrets should be kept separate when pulling images from multiple private registries. In my case it's public, but I still kept it separate:
kubectl create secret docker-registry regcred-public --docker-server=registry-1.docker.io --docker-username=<public-user> --docker-password=<token> --docker-email=<myemail>
kubectl create secret docker-registry regcred-private --docker-server=private-registry --docker-username=<private-user> --docker-password=password --docker-email=<myemail>
What could be the issue? How can I make my cluster pull images from the public repository when docker pull from the command line works without any issues? Are there any rules or configuration required on the cluster side? There is no clue except the failure message:
Failed to pull image "dduportal/bats:0.4.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
The problem may lie in an incorrect HTTP proxy setup.
First, create a systemd drop-in directory for the Docker service:
mkdir /etc/systemd/system/docker.service.d
Now create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable:
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com"
Flush changes:
$ sudo systemctl daemon-reload
Verify that the configuration has been loaded:
$ sudo systemctl show --property Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/
Restart Docker:
$ sudo systemctl restart docker
Link to the official Docker documentation for the HTTP proxy: docker-http.
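Once Docker has restarted, it is worth verifying from the worker node itself that the public pull now works; this is only a sketch that assumes Docker is the container runtime on the PKS worker (as the error message suggests) and uses a placeholder for the failing pod name. If the HTTPS endpoint still times out, the same Docker guide also documents an HTTPS_PROXY entry for the drop-in.
$ sudo docker pull dduportal/bats:0.4.0
$ kubectl delete pod <failing-pod>   # the Deployment recreates the pod and retries the image pull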

Kubernetes on AWS using Kops - kube-apiserver authentication for kubectl

I have set up a basic 2-node k8s cluster on AWS using kops. I had issues connecting and interacting with the cluster using kubectl, and I keep getting the error:
The connection to the server api.euwest2.dev.avi.k8s.com was refused - did you specify the right host or port? whenever I try to run any kubectl command.
I have done a basic kops export kubecfg --name xyz.hhh.kjh.k8s.com --config=~$KUBECONFIG to export the kubeconfig for the cluster I created. Not sure what else I'm missing to make a successful connection to the kube-apiserver so that kubectl works.
Sounds like one of the following:
Your kube-apiserver is not running.
Check with docker ps -a | grep apiserver on your Kubernetes master.
api.euwest2.dev.avi.k8s.com is resolving to an IP address where nothing is listening.
208.73.210.217?
You have the wrong port configured for your kube-apiserver in your ~/.kube/config.
server: https://api.euwest2.dev.avi.k8s.com:6443?
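A few commands that might narrow down which of the three it is (a sketch; the last one assumes kops can still reach its state store):
# 1. Does the API DNS name resolve to the master's address, or to something else entirely?
dig +short api.euwest2.dev.avi.k8s.com
# 2. Which server and port is kubectl actually configured to use?
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# 3. Let kops check the cluster itself
kops validate cluster --name xyz.hhh.kjh.k8s.com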

How to restart my Minikube Kubernetes cluster in an AWS instance after stopping and starting that instance?

I had a t2.micro server running where I had deployed Minikube; however, due to memory issues I scaled up the server size, which required stopping and starting the instance.
But now, after restarting, when I try kubectl commands I get the error below.
root@ip-172-31-23-231:~# kubectl get nodes
The connection to the server 172.31.23.231:6443 was refused - did you specify the right host or port?
So, how can I bring my earlier kube cluster up once I restart my AWS instance?
I had the same error. In my case minikube was not running. I started it with
minikube start
There are a few methods to fix this issue.
Solution 1
If all components are Stopped, running almost any minikube command will tell you how to start the cluster again, which usually solves this kind of issue. Starting the Minikube cluster again won't delete anything from your cluster.
Example
$ minikube logs
🤷 The control plane node must be running for this command
👉 To start a cluster, run: "minikube start"
---
$ minikube status
minikube
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
timeToStop: Nonexistent
---
$ minikube start
---
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
timeToStop: Nonexistent
Solution 2: use the following commands
$ sudo -i
$ swapoff -a
$ exit
$ strace -eopenat kubectl version
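(The strace call simply shows which kubeconfig files kubectl tries to open.) Note that swapoff -a only lasts until the next reboot, which matters here because the instance was stopped and started; to keep swap off permanently, comment out any swap entry in /etc/fstab, for example:
$ sudo sed -i '/ swap / s/^/#/' /etc/fstab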
Solution 3: restart the Docker service
$ sudo service docker restart
And wait ~20-40 seconds
Solution 4
As a last option, if everything else fails, you can delete the Minikube cluster and create a new one.
$ minikube delete
🔥 Deleting "minikube" in docker ...
🔥 Deleting container "minikube" ...
---
$ minikube start
Unfortunately, this option deletes all your resources from the cluster and creates it from scratch.
Useful information and commands:
Minikube Troubleshooting guide
$ minikube logs
$ minikube dashboard
$ minikube status
Check this: run minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: ...
If it isn't running, run minikube start, then check the dashboard with minikube dashboard.

Kubectl is not working on AWS EC2 instance

I am unable to get kubectl working on an AWS EC2 instance (Amazon AMI and Ubuntu).
After installing kops and kubectl, I tried to check the kubectl version, but it throws the error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I have already opened the ports, but I'm still getting the same error.
I have also installed Minikube, but I'm still facing the same issue.
This is because your ~/.kube/config file is not correct. Configure it correctly so that you can connect to your cluster using kubectl.
kubectl is the tool to control your cluster. Its configuration can be generated by kops, for example.
If you already have a cluster and want to manage it from a host you did not use for the initialization, export your kubeconfig with the kops export kubecfg command on the node where you have a configured installation of kops.
If not, initialize the cluster first, and kops will set up the kubectl configuration for you automatically.
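As a sketch of that export step (the state-store bucket and cluster name below are placeholders, not values from the question):
export KOPS_STATE_STORE=s3://<your-kops-state-store>
kops export kubecfg --name <cluster-name>
kubectl cluster-info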
If you want to run your own cluster, try again after initializing it with kubeadm init, which advises you to run:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
~/.kube/config is your missing file.
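Either way, a quick sanity check that kubectl has stopped falling back to localhost:8080 (a sketch, nothing cluster-specific):
# The server field shown here should point at your cluster, not localhost:8080
kubectl config view --minify
kubectl get nodes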