EKS upgrade on the first of November

I've an EKS cluster in AWS.
[cloudshell-user@ip-10-0-87-109 ~]$ kubectl version --short
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
Client Version: v1.23.7-eks-4721010
Server Version: v1.20.11-eks-f17b81
[cloudshell-user@ip-10-0-87-109 ~]$ kubectl get nodes
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
NAME                    STATUS   ROLES    AGE   VERSION
fargate-172.101.0.134   Ready    <none>   13d   v1.20.15-eks-14c7a48
fargate-172.101.0.161   Ready    <none>   68d   v1.20.15-eks-14c7a48
On November 1st, AWS is going to update the server version to 1.21 because AWS is ending support for 1.20.
What problems will come up? I read
no EKS features were removed in 1.21
What should I do in order to be safe?

The EKS version upgrade is based on the official upstream Kubernetes release, so you had better check the changes and deprecations between 1.20 and 1.21 and whether they affect your current workloads because of API changes:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#changelog-since-v1200
If they do, you have to prepare to update your manifests: https://aws.amazon.com/blogs/containers/preparing-for-kubernetes-api-deprecations-when-going-from-1-15-to-1-16/
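A hedged way to check whether anything on the cluster still calls deprecated APIs is the apiserver_requested_deprecated_apis metric, which recent Kubernetes API servers expose (available since roughly 1.19; it only reports APIs that were actually requested while the current API server has been running):
# List deprecated API groups/versions that clients have actually requested
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis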
The official calendar says end of support is November 1st, but there are some FAQ points that you need to understand first: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#version-deprecation
On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version.
Amazon EKS will automatically upgrade existing control planes (not nodes) to the oldest supported version through a gradual deployment process after the end of support date. Amazon EKS does not allow control planes to stay on a version that has reached end of support.
After the automatic control plane update, you must manually update cluster add-ons and Amazon EC2 nodes.
To be safe, you have to:
prepare your manifests for the Kubernetes version that you are going to upgrade to;
proactively upgrade your EKS cluster before AWS forces the upgrade (see the CLI sketch below);
remember to upgrade EKS add-ons and node groups, and test your current controllers on the new version: https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html
I suggest you just provision a test cluster with spot instances and try out your application first.
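If you do upgrade proactively, a rough sketch of the steps with the AWS CLI looks like the following (the cluster name, node group name, and region are placeholders; managed node groups and add-ons are upgraded separately from the control plane):
# Refresh kubeconfig; this also replaces the deprecated client.authentication.k8s.io/v1alpha1 entry shown in the question
aws eks update-kubeconfig --name my-cluster --region eu-west-1
# Upgrade the control plane to 1.21
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.21
# Upgrade each managed node group to match the control plane
aws eks update-nodegroup-version --cluster-name my-cluster --nodegroup-name my-nodegroup
# Check which add-on versions are available for 1.21
aws eks describe-addon-versions --kubernetes-version 1.21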

You do not really need to perform any safety checks on your end; AWS will take care of the upgrade process (at least from 1.20 to 1.21). My recommendation is to upgrade before AWS tries to upgrade the cluster, because once the version has reached end of life the upgrade can happen at any time.
The only things you need to update manually are:
self-managed node groups
any add-ons
The breaking change is the service account token expiry.
For any service that depends on a service account token, keep in mind that the token now expires in one hour, and the service/pod needs to refresh it.
Service account tokens now have an expiration of one hour. In previous Kubernetes versions, they didn't have an expiration.
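A hedged way to see whether any workload is still relying on the old non-expiring token behaviour is the stale-token counter that the 1.21 API server exposes (metric name taken from the upstream 1.21 release notes; verify it is present on your cluster):
# Non-zero values suggest some client keeps using a token past its one-hour expiry
kubectl get --raw /metrics | grep serviceaccount_stale_tokens_total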
On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version. Existing control planes are automatically updated by Amazon EKS to the earliest supported version through a gradual deployment process after the end of support date. After the automatic control plane update, make sure to manually update cluster add-ons and Amazon EC2 nodes.
https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html

Related

Migrating AWS Neptune Snapshots

I have taken a snapshot of a Neptune cluster which is on Neptune Engine V1.0.x and when I try to restore it I am getting an option to create a cluster with engine versions 1.0.x or 1.1.x.
The option to restore it on a cluster with engine version 1.2.x is not present.
If engine versions 1.0.x and 1.1.x reach their end of life, how would a snapshot created from engine version 1.0.x get restored?
Is it possible to migrate AWS Neptune snapshot from one engine version to another?
I reached out to AWS support and got a response to this query.
As per them, there are significant changes in the architecture starting from engine version 1.1.1.0, and that's why DB engines on 1.0.x.x must be upgraded to 1.1.1.0 before upgrading to 1.2.x.x.
Also, there is no way to restore snapshots that are on 1.0.x.x to 1.2.x.x once they reach their end of life. The only way to keep those snapshots usable is to restore the snapshot before end of life, upgrade the restored cluster, and then take a new snapshot of the upgraded cluster.
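A rough AWS CLI sketch of that restore-upgrade-snapshot workflow, with placeholder identifiers; the exact engine-version strings, and whether the major-version step really needs --allow-major-version-upgrade, should be confirmed against the Neptune upgrade docs:
# Restore the old 1.0.x.x snapshot into a temporary cluster
aws neptune restore-db-cluster-from-snapshot \
  --db-cluster-identifier temp-restore-cluster \
  --snapshot-identifier my-1-0-snapshot \
  --engine neptune
# (a DB instance may also need to be added to the restored cluster before the upgrade)
# Step the restored cluster up to 1.1.1.0 first, then repeat for a 1.2.x.x version
aws neptune modify-db-cluster \
  --db-cluster-identifier temp-restore-cluster \
  --engine-version 1.1.1.0 \
  --allow-major-version-upgrade \
  --apply-immediately
# Once the cluster is on the target version, take a fresh snapshot of it
aws neptune create-db-cluster-snapshot \
  --db-cluster-identifier temp-restore-cluster \
  --db-cluster-snapshot-identifier my-upgraded-snapshot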

Kubernetes cluster appears to have been deleted after upgrading from trial version

After my trial version credits got depleted on GCP, I upgraded my account. However my Kubernetes cluster and all its deployments appear to have been deleted. It now appears almost like I am activating Kubernetes Engine for the very first time. What can I do to restore my Kubernetes cluster?

AWS Elasticsearch Update version 6.3 -> 7.1

I am a newbie and have started using some AWS services recently. I am a little confused about how to update AWS ES from version 6.x to 7.x. What should I be concerned about? Will there be any effect on the existing data in ES?
Thank you
During an upgrade, Amazon ES goes through the following steps:
Pre-upgrade checks – Amazon ES performs a series of checks for issues that can block an upgrade and doesn't proceed to the next step unless these checks succeed.
Snapshot – Amazon ES takes a snapshot of the Elasticsearch cluster and doesn't proceed to the next step unless the snapshot succeeds. If the upgrade fails, Amazon ES uses this snapshot to restore the cluster to its original state. For more information about this snapshot, see Can't Downgrade After Upgrade.
Upgrade – Amazon ES starts the upgrade, which can take from 15 minutes to several hours to complete. Kibana might be unavailable during some or all of the upgrade.
If you're using a single node, you may experience downtime during this upgrade of between 15 minutes and several hours; however, Amazon does perform checks beforehand to validate that your data will be compatible with the newer version. If you have a multi-node setup, upgraded nodes are rotated so your service is not affected.
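If you want to drive the upgrade from the CLI rather than the console, a hedged sketch with the legacy "es" namespace of the AWS CLI looks like this (the domain name is a placeholder; check the compatible target versions first):
# See which target versions the domain can be upgraded to
aws es get-compatible-elasticsearch-versions --domain-name my-domain
# Run only the pre-upgrade checks, without upgrading
aws es upgrade-elasticsearch-domain --domain-name my-domain --target-version 7.1 --perform-check-only
# Start the actual upgrade
aws es upgrade-elasticsearch-domain --domain-name my-domain --target-version 7.1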

Upgrading AWS elasticsearch & logstash with terraform

I am currently trying to upgrade our AWS Elasticsearch (ES) with terraform and want to create two new clusters from the one that we currently have.
It would be preferable to do it through Terraform, as we have a huge cluster that is run through Terraform, so if we update through the console it would revert back when we apply the Terraform. Does anyone have any experience doing this from ES version 2.3 to version 5? I have been told to snapshot and restore, but I can't find any documentation on how to do this through Terraform. Thanks
So I think we have figured it out. You cannot upgrade from Elasticsearch version 2.3 through Terraform, as it is just too old. You need to manually create two new clusters, point your logs at them, and then take a snapshot of the old cluster and restore it to the new one. After version 5 of Elasticsearch, I believe you can upgrade through Terraform as long as your provider (e.g. AWS) is at least version 1.55. This should not need a snapshot, as the cluster will be updated in place, I have been informed.
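For the snapshot-and-restore part, the snapshot itself happens through the Elasticsearch _snapshot API rather than Terraform. A very rough sketch, assuming an S3 snapshot repository named migration-repo has already been registered on both domains and that request signing/authentication is handled separately:
# Take a manual snapshot on the old domain
curl -XPUT "https://OLD_DOMAIN_ENDPOINT/_snapshot/migration-repo/snapshot-1"
# Watch its progress
curl -XGET "https://OLD_DOMAIN_ENDPOINT/_snapshot/migration-repo/snapshot-1/_status"
# Restore it on the new domain once the repository is visible there
curl -XPOST "https://NEW_DOMAIN_ENDPOINT/_snapshot/migration-repo/snapshot-1/_restore"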

Is VPC-native GKE cluster production ready?

This happens while trying to create a VPC-native GKE cluster. Per the documentation here the command to do this is
gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias
However this command, gives below error.
ERROR: (gcloud.container.clusters.create) Only alpha clusters (--enable_kubernetes_alpha) can use --enable-ip-alias
The command does work when option --enable_kubernetes_alpha is added. But gives another message.
This will create a cluster with all Kubernetes Alpha features enabled.
- This cluster will not be covered by the Container Engine SLA and
should not be used for production workloads.
- You will not be able to upgrade the master or nodes.
- The cluster will be deleted after 30 days.
Edit: The test was done in zone asia-south1-c
My questions are:
Is VPC-Native cluster production ready?
If yes, what is the correct way to create a production ready cluster?
If VPC-Native cluster is not production ready, what is the way to connect privately from a GKE cluster to another GCP service (like Cloud SQL)?
Your command seems correct. It seems like something is going wrong during the creation of the cluster in your project. Are you using any flags other than the ones in the command you posted?
When I set my Google Cloud Shell to region europe-west1, the cluster deploys error-free, and 1.11.6-gke.2 (the default) is what it uses.
You could try to manually create the cluster using the GUI instead of the gcloud command. While creating the cluster, check the “Enable VPC-native (using alias IP)” option. Try using the newest non-alpha version of GKE if one is showing up for you.
The public documentation you posted on GKE IP aliasing, and the GKE projects.locations.clusters API, show this to be in GA. All signs point to this being production ready. For whatever it’s worth, the feature was announced last May on the Google Cloud blog.
What you can try is to update your version of Google Cloud SDK. This will bring everything up to the latest release and remove alpha messages for features that are in GA right now.
$ gcloud components update
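After updating, a sketch of creating a non-alpha VPC-native cluster might look like the following; the cluster name and zone are placeholders, and the version is the one mentioned above, so pick whatever non-alpha version is actually available to your project:
# Create a VPC-native (alias-IP) cluster without the alpha flag
gcloud container clusters create my-vpc-native-cluster \
  --zone europe-west1-b \
  --enable-ip-alias \
  --cluster-version 1.11.6-gke.2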