I have taken a snapshot of a Neptune cluster which is on Neptune engine version 1.0.x, and when I try to restore it I only get the option to create a cluster with engine version 1.0.x or 1.1.x.
The option to restore it to a cluster with engine version 1.2.x is not present.
If engine versions 1.0.x and 1.1.x reach their end of life, how would a snapshot created from engine version 1.0.x be restored?
Is it possible to migrate AWS Neptune snapshot from one engine version to another?
I reached out to AWS support and got a response to this query.
As per them, there are significant changes in the architecture starting from engine version 1.1.1.0, which is why DB engines on 1.0.x.x must be upgraded to 1.1.1.0 before upgrading to 1.2.x.x.
Also, there is no way to restore snapshots that are on 1.0.x.x to 1.2.x.x once they reach their end of life. The only way to restore those snapshots will be to restore the snapshot before end of life, upgrade the restored cluster, and then take a new snapshot of the upgraded cluster.
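In AWS CLI terms, that restore-upgrade-snapshot flow might look roughly like the sketch below; the cluster and snapshot identifiers are placeholders, and the exact intermediate and target version strings should be verified against the Neptune engine releases:

# Restore the 1.0.x snapshot into a new cluster (identifiers are placeholders).
aws neptune restore-db-cluster-from-snapshot \
    --db-cluster-identifier neptune-restored \
    --snapshot-identifier my-1-0-snapshot \
    --engine neptune

# Step the restored cluster up to 1.1.1.0 (a major version upgrade).
aws neptune modify-db-cluster \
    --db-cluster-identifier neptune-restored \
    --engine-version 1.1.1.0 \
    --allow-major-version-upgrade \
    --apply-immediately

# Once the upgrade finishes, take a fresh snapshot that outlives the 1.0.x EOL.
aws neptune create-db-cluster-snapshot \
    --db-cluster-identifier neptune-restored \
    --db-cluster-snapshot-identifier my-1-1-snapshot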
I have an EKS cluster in AWS.
[cloudshell-user@ip-10-0-87-109 ~]$ kubectl version --short
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
Client Version: v1.23.7-eks-4721010
Server Version: v1.20.11-eks-f17b81
[cloudshell-user@ip-10-0-87-109 ~]$ kubectl get nodes
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
NAME STATUS ROLES AGE VERSION
fargate-172.101.0.134 Ready <none> 13d v1.20.15-eks-14c7a48
fargate-172.101.0.161 Ready <none> 68d v1.20.15-eks-14c7a48
On November 1st, AWS is going to update the server version to 1.21 because AWS is ending support for 1.20.
What problems will come up? I read that
no EKS features were removed in 1.21
What should I do in order to be safe?
EKS version upgrades track the official upstream Kubernetes releases, so you had better check the changes and deprecations between 1.20 and 1.21 and whether they affect your current workload through API changes.
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#changelog-since-v1200
If so, you have to prepare to update your manifests: https://aws.amazon.com/blogs/containers/preparing-for-kubernetes-api-deprecations-when-going-from-1-15-to-1-16/
The official calendar says End of Support is November 1st, but there are some FAQs that you need to understand first: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#version-deprecation
On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version.
Amazon EKS will automatically upgrade existing control planes (not nodes) to the oldest supported version through a gradual deployment process after the end of support date. Amazon EKS does not allow control planes to stay on a version that has reached end of support.
After the automatic control plane update, you must manually update cluster add-ons and Amazon EC2 nodes.
To be safe, you have to:
prepare your manifests for the Kubernetes version that you are going to upgrade to.
proactively upgrade your EKS cluster before AWS forces the upgrade.
remember to upgrade EKS add-ons and node groups, and test your current controllers on the new version: https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html
I suggest you just provision a test cluster with spot instances and try out your application first.
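As a hedged sketch of that proactive path (the cluster and node group names are placeholders, and the metric check assumes you can query the API server's /metrics endpoint):

# Spot clients that still call APIs deprecated in the target version.
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

# Upgrade the control plane yourself instead of waiting for the forced update.
aws eks update-cluster-version \
    --name my-cluster \
    --kubernetes-version 1.21

# If you use managed node groups, roll them to the matching release afterwards;
# Fargate pods pick up the new version as they are recycled.
aws eks update-nodegroup-version \
    --cluster-name my-cluster \
    --nodegroup-name my-nodegroup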
You do not really need to perform any safety checks on your end; AWS will take care of the upgrade process (at least from 1.20 to 1.21). My recommendation is to upgrade before AWS tries to upgrade the cluster, as once it has reached end of support, the upgrade can happen at any time.
The only things you need to update manually are:
Self-managed node groups
Any add-ons
The breaking change is the service account token expiry.
For any service that depends on a service account token, keep in mind that the token now expires in one hour and the service/pod needs to refresh it.
Service account tokens now have an expiration of one hour. In previous Kubernetes versions, they didn't have an expiration.
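One way to spot workloads that are not refreshing their tokens is the API server's stale-token counter; a minimal check, assuming you can query the /metrics endpoint:

# Non-zero counts mean some client is still using a stale (legacy) token.
kubectl get --raw /metrics | grep serviceaccount_stale_tokens_total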
On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version. Existing control planes are automatically updated by Amazon EKS to the earliest supported version through a gradual deployment process after the end of support date. After the automatic control plane update, make sure to manually update cluster add-ons and Amazon EC2 nodes.
kubernetes-versions-1.21
I am a newbie and started using some AWS services recently. I am a little confused about how to upgrade AWS ES from version 6.x to 7.x. What should I be concerned about? Is there any effect on the existing data in ES?
Thank you
Before performing an upgrade Amazon recommends performing the following steps:
Pre-upgrade checks – Amazon ES performs a series of checks for issues that can block an upgrade and doesn't proceed to the next step unless these checks succeed.
Snapshot – Amazon ES takes a snapshot of the Elasticsearch cluster and doesn't proceed to the next step unless the snapshot succeeds. If the upgrade fails, Amazon ES uses this snapshot to restore the cluster to its original state. For more information about this snapshot, see Can't Downgrade After Upgrade.
Upgrade – Amazon ES starts the upgrade, which can take from 15 minutes to several hours to complete. Kibana might be unavailable during some or all of the upgrade.
If you're using a single node you may experience downtime during this upgrade of between 15 minutes and several hours; however, Amazon does perform checks beforehand to validate that your data will be compatible with the newer version. If you have a multi-node setup, upgraded nodes will be rotated so as not to affect your service.
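If you prefer the CLI over the console, the upgrade can be driven with the commands below; the domain name and target version are placeholders, so check the compatible versions first:

# List the versions this domain can be upgraded to.
aws es get-compatible-elasticsearch-versions --domain-name my-domain

# Kick off the in-service upgrade; the pre-upgrade checks and the automatic
# snapshot described above happen before the cluster is touched.
aws es upgrade-elasticsearch-domain \
    --domain-name my-domain \
    --target-version 7.1

# Poll progress while the upgrade runs.
aws es get-upgrade-status --domain-name my-domain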
I am currently trying to upgrade our AWS Elasticsearch (ES) cluster with Terraform and want to create two new clusters from the one that we currently have.
It would be preferable to do it through Terraform, as we have a huge cluster that is managed through Terraform, so if we updated through the console the change would be reverted the next time we apply the Terraform. Has anyone any experience doing this from ES version 2.3 to version 5? I have been told to snapshot and restore but can't find any documentation on how to do this through Terraform. Thanks
So I think we have figured it out. You cannot upgrade from Elasticsearch version 2.3 through Terraform, as it is just too old. You need to manually create two new clusters, point your logs at them, and then take a snapshot of the old cluster and restore it to the new one. From Elasticsearch version 5 onwards, I believe you can upgrade through Terraform as long as your provider (e.g. AWS) is at least version 1.55. This should not need a snapshot, as the cluster will be updated in place, I have been informed.
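For the snapshot-and-restore part, a rough sketch against the Elasticsearch snapshot API is below. It assumes an S3 snapshot repository named my-repo has already been registered on both domains (on Amazon ES, repository registration must be a signed request) and that the domain access policy lets you reach the endpoints; all endpoints and names are placeholders:

# Take a manual snapshot of the old 2.3 domain.
curl -XPUT "https://old-domain-endpoint/_snapshot/my-repo/migration-snap"

# Check the snapshot is complete.
curl -XGET "https://old-domain-endpoint/_snapshot/my-repo/migration-snap"

# On the new domain (registered against the same S3 bucket), restore it.
curl -XPOST "https://new-domain-endpoint/_snapshot/my-repo/migration-snap/_restore"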
We are using AWS RDS Aurora MySQL 5.6 for our production database.
AWS launched MySQL 5.7 compatible Aurora engine on 6th Feb, 2018.
I don't see any option in "modify instance" to change the engine to MySQL 5.7.
I don't see any option to restore a snapshot to a database with MySQL 5.7 either.
We want to do this upgrade with the least downtime. Please suggest what could be done here.
According to this link, you cannot upgrade the database in place; you will need to restore a snapshot of the existing database and change the engine version during that process. These restrictions appear to be only temporary and may be lifted at a later point to allow for in-place upgrades.
The comments above are true; there is still no in-place upgrade for 5.6 to 5.7. The process is still pretty easy though:
1) Go to the RDS dashboard. In the left-hand menu there is a menu item called 'Snapshots'; you can click on this if you are OK using a recent snapshot. Otherwise, select your database and choose 'Take Snapshot' from the actions drop-down.
2) In Snapshots, simply select your snapshot and choose 'Restore Snapshot' from the actions drop-down; it will automatically duplicate a bunch of your previous settings. It's at this juncture that you can select the new database engine of 5.7.
All in all, you should allow for at least half an hour of downtime for the whole process. Probably a couple of hours to be on the safe side.
You can now perform an in-place upgrade of Aurora MySQL from 5.6 to 5.7.
It is only a matter of invoking modify-db-cluster or modify-global-cluster (if you are using global clusters, of course).
More in the docs (including how to do this using the AWS console).
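For the CLI route, a minimal sketch; the cluster identifier and the exact Aurora 2.x version string are placeholders, and valid targets can be listed with describe-db-engine-versions:

# In-place major version upgrade of the cluster.
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine-version 5.7.mysql_aurora.2.07.2 \
    --allow-major-version-upgrade \
    --apply-immediately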
The easiest way is:
Take a manual snapshot of the Aurora MySQL 5.6 cluster first.
Then, create a new Aurora MySQL 5.7 cluster using the manual snapshot taken in step 1.
Your credentials will be the same as those of the older 5.6 cluster.
Verify if data is correct.
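In AWS CLI terms, steps 1 and 2 might look roughly like this; all identifiers and the instance class are placeholders:

# Step 1: manual snapshot of the existing 5.6 cluster.
aws rds create-db-cluster-snapshot \
    --db-cluster-identifier my-aurora-56 \
    --db-cluster-snapshot-identifier my-56-manual-snap

# Step 2: restore it into a new 5.7-compatible (aurora-mysql) cluster.
aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier my-aurora-57 \
    --snapshot-identifier my-56-manual-snap \
    --engine aurora-mysql \
    --engine-version 5.7.mysql_aurora.2.07.2

# The restore creates only the cluster; add an instance so it is connectable.
aws rds create-db-instance \
    --db-instance-identifier my-aurora-57-instance-1 \
    --db-cluster-identifier my-aurora-57 \
    --db-instance-class db.r5.large \
    --engine aurora-mysql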
The need was to upgrade AWS RDS Aurora MySQL from 5.6 to 5.7 without causing any downtime to our production. Being a SaaS solution, we could not afford any downtime.
Background
We have a distributed architecture based on microservices running in AWS Fargate and AWS Lambda. For data persistence, AWS RDS Aurora MySQL is used. While other services are in use, they are not of interest in this use case.
Approach
After good deliberation on an in-place upgrade with a declared downtime and maintenance window, we realized that a zero-downtime upgrade was what we needed, as otherwise we would have created a processing backlog for ourselves.
High level approach was:
Create an AWS RDS Cluster with the required version and copy the data from the existing RDS Cluster to this new Cluster
Setup AWS DMS(Data Migration Service) between these two clusters
Once the replication is done and ongoing, switch the application to point to the new DB. In our case, the microservices running in AWS Fargate had to be upgraded with the new endpoint, and that took care of draining the old one and using the new.
For the complete post, please check out
https://bharatnainani1997.medium.com/aws-rds-major-version-upgrade-with-zero-downtime-5-6-to-5-7-b0aff1ea1f4
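A rough AWS CLI sketch of the DMS setup described above; every identifier, endpoint, ARN, and credential here is a placeholder:

# Replication instance that will run the migration task.
aws dms create-replication-instance \
    --replication-instance-identifier aurora-upgrade-dms \
    --replication-instance-class dms.r5.large

# Source (old 5.6 cluster) and target (new 5.7 cluster) endpoints.
aws dms create-endpoint --endpoint-identifier src-aurora-56 \
    --endpoint-type source --engine-name aurora \
    --server-name old-cluster-endpoint --port 3306 \
    --username admin --password 'xxxx'
aws dms create-endpoint --endpoint-identifier tgt-aurora-57 \
    --endpoint-type target --engine-name aurora \
    --server-name new-cluster-endpoint --port 3306 \
    --username admin --password 'xxxx'

# Full load plus ongoing replication (CDC) until the application switches over.
aws dms create-replication-task \
    --replication-task-identifier aurora-56-to-57 \
    --source-endpoint-arn <source-endpoint-arn> \
    --target-endpoint-arn <target-endpoint-arn> \
    --replication-instance-arn <replication-instance-arn> \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json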
To manage an update for a DB instance or DB cluster
Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
In the navigation pane, choose Instances to manage updates for a DB instance, or Clusters to manage updates for an Aurora DB cluster.
Select the checkbox for the DB instance or DB cluster that has a required update.
Choose Instance actions for a DB instance, or Actions for a DB cluster, and then choose one of the following:
Upgrade now
Upgrade at next window
Note: If you choose Upgrade at next window and later want to delay the update, you can select Defer upgrade.
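The same can be done from the CLI; a small sketch, with a placeholder resource ARN:

# See which instances/clusters have a pending maintenance action.
aws rds describe-pending-maintenance-actions

# Apply it immediately, or use --opt-in-type next-maintenance to defer it
# to the maintenance window.
aws rds apply-pending-maintenance-action \
    --resource-identifier arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster \
    --apply-action system-update \
    --opt-in-type immediate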
I have launched a Hadoop EMR cluster (5.5.0; components: Hive, Hue) but not Sqoop. Now I need Sqoop as well, to query and dump data from a MySQL database. Since the cluster is already launched with a good amount of data, I wanted to know if I can also add Sqoop. I don't see this option on the AWS console.
Thanks
I installed it manually and did the required configuration. The limitation now, I guess, is that if I have to clone the cluster, Sqoop won't be available on the clone.
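For anyone hitting the same gap, a rough sketch of the manual install on the master node is below; the Sqoop build, download URLs, and connection details are assumptions to adapt to your Hadoop version:

# Download and unpack a Sqoop build that matches your Hadoop version.
wget https://archive.apache.org/dist/sqoop/1.4.6/sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz
tar -xzf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz
sudo mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha /usr/lib/sqoop
export SQOOP_HOME=/usr/lib/sqoop
export PATH=$PATH:$SQOOP_HOME/bin

# Sqoop needs the MySQL JDBC driver on its classpath to talk to MySQL.
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.46/mysql-connector-java-5.1.46.jar
cp mysql-connector-java-5.1.46.jar $SQOOP_HOME/lib/

# Smoke test against the MySQL database (connection details are placeholders).
sqoop list-tables --connect jdbc:mysql://mysql-host:3306/mydb --username myuser -P

Putting these steps into a bootstrap action or a step script would also make them survive a cluster clone.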