Kubernetes cluster appears to have been deleted after upgrading from trial version - google-cloud-platform

After my trial credits were depleted on GCP, I upgraded my account. However, my Kubernetes cluster and all its deployments appear to have been deleted. It now looks almost as if I am activating Kubernetes Engine for the very first time. What can I do to restore my Kubernetes cluster?
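Before assuming the cluster is gone, it is worth confirming that the console is not simply pointed at a different project or region after the account upgrade. A few hedged gcloud checks (the project ID below is a placeholder):

```shell
# List the projects your account can see; the upgrade may have changed the default.
gcloud projects list

# List clusters across all locations in the project you expect the cluster in.
gcloud container clusters list --project my-project

# Inspect recent GKE operations; a DELETE_CLUSTER entry would confirm deletion.
gcloud container operations list --project my-project
```

If the operations log does show the cluster was deleted, GKE has no undelete: the cluster and its workloads would have to be recreated from your manifests or infrastructure-as-code.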

Related

Migrating AWS Neptune Snapshots

I have taken a snapshot of a Neptune cluster which is on Neptune Engine V1.0.x and when I try to restore it I am getting an option to create a cluster with engine versions 1.0.x or 1.1.x.
The option to restore it on a cluster with engine version 1.2.x is not present.
If engine version 1.0.x and 1.1.x reach their end of life, then how would a snapshot created from engine version 1.0.x get restored?
Is it possible to migrate AWS Neptune snapshot from one engine version to another?
I reached out to AWS support and got a response to this query.
According to them, there are significant architectural changes starting from engine version 1.1.1.0, which is why DB engines on 1.0.x.x must be upgraded to 1.1.1.0 before upgrading to 1.2.x.x.
Also, there is no way to restore snapshots taken on 1.0.x.x to 1.2.x.x once those engine versions reach their end of life. The only way to keep such snapshots usable is to restore the snapshot before end of life, upgrade the restored cluster, and then take a new snapshot of the upgraded cluster.
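The restore-upgrade-snapshot workaround described above can be sketched with the AWS CLI. Cluster, snapshot names, and the exact intermediate version are placeholders here; the supported upgrade path should be confirmed against the Neptune engine release notes:

```shell
# 1. Restore the old snapshot while its engine version is still supported.
aws neptune restore-db-cluster-from-snapshot \
  --db-cluster-identifier restored-cluster \
  --snapshot-identifier my-1-0-snapshot \
  --engine neptune

# 2. Upgrade the restored cluster to the required intermediate version first.
aws neptune modify-db-cluster \
  --db-cluster-identifier restored-cluster \
  --engine-version 1.1.1.0 \
  --allow-major-version-upgrade \
  --apply-immediately

# 3. Take a fresh snapshot of the upgraded cluster; this one can later be
#    restored onto 1.2.x engine versions.
aws neptune create-db-cluster-snapshot \
  --db-cluster-identifier restored-cluster \
  --db-cluster-snapshot-identifier my-upgraded-snapshot
```

Note that a restored cluster also needs instances added (`aws neptune create-db-instance`) before it is queryable, but the snapshot steps above are the part relevant to preserving the data past end of life.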

Cost of running small kubernetes cronjob?

I am currently learning Kubernetes and would like to run a cronjob every 6 hours (the job runs in under a minute). Minikube is not suitable, as I cannot ensure my laptop stays alive 24/7. I wonder what the cost would be on the main Kubernetes providers (GCP, AWS, Azure) for this type of workload. Would it be better to rent a VM and install a small Kubernetes instance instead?
Thanks
Feedback from users who have run similar workloads would be helpful.
You can have a look at Cloud Run and Cloud Run jobs, which allow you to run containers in serverless mode.
In addition, you can also have a look at GKE Autopilot, where you pay only when you consume resources on the cluster (and the first cluster is free).
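For a job that runs under a minute every 6 hours, the Cloud Run jobs route might look like the sketch below; the image, region, and job names are placeholders:

```shell
# Create a Cloud Run job from a container image -- no cluster to keep running,
# billing only for the seconds the container actually executes.
gcloud run jobs create my-cron-task \
  --image gcr.io/my-project/my-task:latest \
  --region europe-west1

# Execute it once manually to verify it works.
gcloud run jobs execute my-cron-task --region europe-west1
```

A Cloud Scheduler job can then invoke the job's run endpoint on a `0 */6 * * *` cron schedule, so no VM or cluster has to stay up between runs.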

EKS upgrade of first of November

I've an EKS cluster in AWS.
[cloudshell-user@ip-10-0-87-109 ~]$ kubectl version --short
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
Client Version: v1.23.7-eks-4721010
Server Version: v1.20.11-eks-f17b81
[cloudshell-user@ip-10-0-87-109 ~]$ kubectl get nodes
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
NAME                    STATUS   ROLES    AGE   VERSION
fargate-172.101.0.134   Ready    <none>   13d   v1.20.15-eks-14c7a48
fargate-172.101.0.161   Ready    <none>   68d   v1.20.15-eks-14c7a48
On Nov 1st, AWS is going to update the server version to 1.21 because AWS is ending support for 1.20.
What problems will come up? I read
no EKS features were removed in 1.21
What should I do in order to be safe?
EKS version upgrades follow the official upstream Kubernetes releases, so you should check the changes and deprecations between 1.20 and 1.21 and whether they affect your current workloads, particularly API changes and removals.
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#changelog-since-v1200
If so, you have to prepare to update your manifests: https://aws.amazon.com/blogs/containers/preparing-for-kubernetes-api-deprecations-when-going-from-1-15-to-1-16/
The official calendar says End of Support is November 1st, but there are some FAQ points you need to understand first: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#version-deprecation
On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version.
Amazon EKS will automatically upgrade existing control planes (not nodes) to the oldest supported version through a gradual deployment process after the end of support date. Amazon EKS does not allow control planes to stay on a version that has reached end of support.
After the automatic control plane update, you must manually update cluster add-ons and Amazon EC2 nodes.
To be safe, you have to:
prepare your manifests for the Kubernetes version that you are going to upgrade to.
proactively upgrade your EKS cluster before AWS forces the upgrade.
remember to upgrade EKS add-ons and node groups, and test your current controllers on the new version: https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html
I suggest you just provision a test cluster with spot instances and try out your application first.
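The proactive upgrade itself can be sketched with the AWS CLI; the cluster and node group names below are placeholders:

```shell
# Upgrade the control plane one minor version, to 1.21.
aws eks update-cluster-version \
  --name my-cluster \
  --kubernetes-version 1.21

# Once the control plane is on 1.21, roll the managed node group so the
# kubelet version catches up with the control plane.
aws eks update-nodegroup-version \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup

# Refresh the kubeconfig, which also clears the deprecated
# client.authentication.k8s.io/v1alpha1 warning shown above.
aws eks update-kubeconfig --name my-cluster
```

Each `update-cluster-version` call moves one minor version at a time, which is why upgrading on your own schedule is easier than waiting for the forced update.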
You do not really need to perform any safety checks on your end; AWS will take care of the upgrade process (at least from 1.20 to 1.21). My recommendation is to upgrade before AWS tries to upgrade the cluster, as once it has reached end of life the upgrade can happen at any time.
The only things you need to update manually are:
Self-managed node groups
Any add-ons
The breaking change is the service account token expiry.
For any service that depends on a service account token, keep in mind that the token now expires after one hour, and the service/pod needs to refresh it.
Service account tokens now have an expiration of one hour. In previous Kubernetes versions, they didn't have an expiration.
On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version. Existing control planes are automatically updated by Amazon EKS to the earliest supported version through a gradual deployment process after the end of support date. After the automatic control plane update, make sure to manually update cluster add-ons and Amazon EC2 nodes.
kubernetes-versions-1.21
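One way to see the new token behavior is to decode the `exp` claim of a pod's mounted service account token. This is a rough sketch; it assumes a pod named `my-pod` with the default token mount, and depending on your base64 implementation you may need to pad the JWT payload with `=` characters before decoding:

```shell
# Read the projected token and decode its JWT payload to inspect the expiry.
kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token \
  | cut -d '.' -f 2 | tr '_-' '/+' | base64 -d 2>/dev/null \
  | grep -o '"exp":[0-9]*'
```

The kubelet refreshes the projected token file automatically, and the official client libraries re-read it, so in practice only code that reads the token once at startup and caches it in memory needs changing.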

Worker nodes not joining the EKS cluster after upgrade

We have an EKS cluster on version 1.16. I was trying to upgrade it to version 1.17. Since our entire setup is deployed using Terraform, I used the same for the upgrade by setting cluster_version = "1.17". The upgrade of the EKS control plane worked fine. I also updated kube-proxy, CoreDNS and the Amazon VPC CNI. But I am facing an issue with the worker nodes. I tried to create a new worker group; the new worker nodes were created successfully in AWS, and I am able to see them in the EC2 console. But the nodes didn't join the cluster. I cannot see the newly created worker nodes when I run kubectl get nodes. Can anyone please guide me on this issue? Is there any extra setup I need to perform to join the worker nodes to the cluster?
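Common causes after an upgrade like this are the aws-auth ConfigMap missing the new worker group's instance role, a security group blocking the node-to-API traffic, or an AMI that does not match the new cluster version. A rough checklist, with placeholder names:

```shell
# 1. Check that the new worker group's instance role is mapped in aws-auth;
#    without this entry the kubelet's join request is rejected.
kubectl describe configmap aws-auth -n kube-system

# 2. On an affected node (e.g. via SSH or SSM), check kubelet logs for
#    authentication or connectivity errors.
journalctl -u kubelet --no-pager | tail -n 50

# 3. Confirm the instances carry the cluster ownership tag and use an AMI
#    built for the new Kubernetes version (1.17 here).
aws ec2 describe-instances \
  --filters "Name=tag:kubernetes.io/cluster/my-cluster,Values=owned"
```

If the aws-auth mapping is the culprit, adding the new worker group's IAM role ARN under mapRoles (via Terraform or kubectl edit) usually lets the nodes register within a minute or two.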

Is VPC-native GKE cluster production ready?

This happens while trying to create a VPC-native GKE cluster. Per the documentation here the command to do this is
gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias
However this command, gives below error.
ERROR: (gcloud.container.clusters.create) Only alpha clusters (--enable_kubernetes_alpha) can use --enable-ip-alias
The command does work when option --enable_kubernetes_alpha is added. But gives another message.
This will create a cluster with all Kubernetes Alpha features enabled.
- This cluster will not be covered by the Container Engine SLA and
should not be used for production workloads.
- You will not be able to upgrade the master or nodes.
- The cluster will be deleted after 30 days.
Edit: The test was done in zone asia-south1-c
My questions are:
Is VPC-Native cluster production ready?
If yes, what is the correct way to create a production ready cluster?
If VPC-Native cluster is not production ready, what is the way to connect privately from a GKE cluster to another GCP service (like Cloud SQL)?
Your command seems correct. It looks like something is going wrong during the creation of the cluster in your project. Are you using any flags other than those in the command you posted?
When I set my Google Cloud Shell to region europe-west1, the cluster deploys error-free, using 1.11.6-gke.2 (the default).
You could try to manually create the cluster using the GUI instead of the gcloud command. While creating the cluster, check the "Enable VPC-native (using alias IP)" feature. Try using the newest non-alpha version of GKE if one shows up for you.
The public documentation you posted on GKE IP aliasing, and the GKE projects.locations.clusters API, show this to be GA. All signs point to it being production ready. For whatever it's worth, the feature was announced last May on the Google Cloud blog.
What you can try is to update your version of Google Cloud SDK. This will bring everything up to the latest release and remove alpha messages for features that are in GA right now.
$ gcloud components update