AWS Elasticsearch Update version 6.3 -> 7.1

I am a newbie and have started to use some AWS services recently. I am a little confused about how to update AWS ES from version 6.x to 7.x, and what I should be concerned about. Will it have any effect on the existing data in ES?
Thank you

When you trigger an upgrade, Amazon ES runs through the following steps:
Pre-upgrade checks – Amazon ES performs a series of checks for issues that can block an upgrade and doesn't proceed to the next step unless these checks succeed.
Snapshot – Amazon ES takes a snapshot of the Elasticsearch cluster and doesn't proceed to the next step unless the snapshot succeeds. If the upgrade fails, Amazon ES uses this snapshot to restore the cluster to its original state. For more information about this snapshot, see Can't Downgrade After Upgrade.
Upgrade – Amazon ES starts the upgrade, which can take from 15 minutes to several hours to complete. Kibana might be unavailable during some or all of the upgrade.
If you're using a single node you may experience downtime during this upgrade of between 15 minutes and several hours; however, Amazon does perform checks beforehand to validate that your data will be compatible with the newer version. If you have a multi-node setup, upgraded nodes are rotated so as not to affect your service.
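For reference, the same checks and the upgrade itself can be driven from the AWS CLI. This is a minimal sketch; the domain name is a hypothetical placeholder, and note that get-compatible-elasticsearch-versions lists the allowed targets (from 6.3 you may have to step through 6.8 before reaching 7.1):

# List the versions this domain can be upgraded to
aws es get-compatible-elasticsearch-versions --domain-name my-domain

# Dry run: perform only the pre-upgrade eligibility checks
aws es upgrade-elasticsearch-domain --domain-name my-domain \
    --target-version 7.1 --perform-check-only

# Start the real upgrade and poll its progress
aws es upgrade-elasticsearch-domain --domain-name my-domain --target-version 7.1
aws es get-upgrade-status --domain-name my-domain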

Migrating AWS Neptune Snapshots

I have taken a snapshot of a Neptune cluster which is on Neptune engine v1.0.x, and when I try to restore it I am given the option to create a cluster with engine version 1.0.x or 1.1.x.
The option to restore it on a cluster with engine version 1.2.x is not present.
If engine versions 1.0.x and 1.1.x reach their end of life, how would a snapshot created from engine version 1.0.x be restored?
Is it possible to migrate an AWS Neptune snapshot from one engine version to another?
I reached out to AWS support and got a response to this query.
As per them, there are significant changes in the architecture starting from engine version 1.1.1.0, which is why DB engines on 1.0.x.x must be upgraded to 1.1.1.0 before upgrading to 1.2.x.x.
Also, there is no way to restore snapshots that are on 1.0.x.x to 1.2.x.x once they reach their end of life. The only way to restore those snapshots will be to restore the snapshot before end of life, upgrade the restored cluster, and then take a new snapshot of the upgraded cluster, as sketched below.
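A rough sketch of that restore-upgrade-snapshot cycle with the AWS CLI (all identifiers, the instance class, and the exact engine versions are hypothetical placeholders; check the versions available in your region):

# 1) Restore the old snapshot while 1.0.x/1.1.x can still be restored
aws neptune restore-db-cluster-from-snapshot \
    --db-cluster-identifier restored-cluster \
    --snapshot-identifier my-1-0-snapshot \
    --engine neptune --engine-version 1.1.1.0

# A restored cluster has no instances; add one so the cluster is usable
aws neptune create-db-instance \
    --db-instance-identifier restored-instance \
    --db-cluster-identifier restored-cluster \
    --db-instance-class db.r5.large --engine neptune

# 2) Major version upgrade of the restored cluster (1.1.x -> 1.2.x)
aws neptune modify-db-cluster --db-cluster-identifier restored-cluster \
    --engine-version 1.2.0.0 --allow-major-version-upgrade --apply-immediately

# 3) Take a new snapshot that will outlive the 1.0.x end of life
aws neptune create-db-cluster-snapshot \
    --db-cluster-identifier restored-cluster \
    --db-cluster-snapshot-identifier my-1-2-snapshot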

EKS upgrade on the first of November

I have an EKS cluster in AWS.
[cloudshell-user@ip-10-0-87-109 ~]$ kubectl version --short
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
Client Version: v1.23.7-eks-4721010
Server Version: v1.20.11-eks-f17b81
[cloudshell-user@ip-10-0-87-109 ~]$ kubectl get nodes
Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
NAME                    STATUS   ROLES    AGE   VERSION
fargate-172.101.0.134   Ready    <none>   13d   v1.20.15-eks-14c7a48
fargate-172.101.0.161   Ready    <none>   68d   v1.20.15-eks-14c7a48
On Nov 1st, AWS is going to update the server version to 1.21 because AWS is ending support for 1.20.
What problems will come up? I read
no EKS features were removed in 1.21
What should I do in order to be safe?
EKS version upgrades follow the official upstream Kubernetes releases, so you should check the changes and deprecations between 1.20 and 1.21 and whether any API changes affect your current workloads.
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#changelog-since-v1200
If so, you have to prepare to update your manifests: https://aws.amazon.com/blogs/containers/preparing-for-kubernetes-api-deprecations-when-going-from-1-15-to-1-16/
The official calendar says End Of Support is November 1st, but there are some FAQs that you need to understand first: https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#version-deprecation
On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version.
Amazon EKS will automatically upgrade existing control planes (not nodes) to the oldest supported version through a gradual deployment process after the end of support date. Amazon EKS does not allow control planes to stay on a version that has reached end of support.
After the automatic control plane update, you must manually update cluster add-ons and Amazon EC2 nodes.
To be safe, you have to:
prepare your manifests for the Kubernetes version you are going to upgrade to.
proactively upgrade your EKS cluster before AWS forces the upgrade (a CLI sketch follows below).
remember to upgrade EKS add-ons and node groups, and test any of your current controllers on the new version: https://docs.aws.amazon.com/eks/latest/userguide/update-managed-node-group.html
I suggest you just provision a test cluster with spot instances and try out your application first.
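For reference, a minimal sketch of that proactive upgrade with the AWS CLI and kubectl (cluster, region, node group, and namespace names are hypothetical placeholders):

CLUSTER=my-cluster
REGION=eu-west-1

# Refresh kubeconfig so it stops using the deprecated v1alpha1 exec API
aws eks update-kubeconfig --name "$CLUSTER" --region "$REGION"

# Upgrade the control plane one minor version (1.20 -> 1.21) and poll the update
UPDATE_ID=$(aws eks update-cluster-version --name "$CLUSTER" --region "$REGION" \
    --kubernetes-version 1.21 --query 'update.id' --output text)
aws eks describe-update --name "$CLUSTER" --region "$REGION" --update-id "$UPDATE_ID"

# Managed node groups are upgraded separately; Fargate pods pick up the new
# version when they are redeployed
aws eks update-nodegroup-version --cluster-name "$CLUSTER" --region "$REGION" \
    --nodegroup-name my-nodegroup
kubectl rollout restart deployment -n my-namespace   # for Fargate-backed workloads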
You do not really need to perform any safety checks on your end; AWS will take care of the upgrade process (at least from 1.20 to 1.21). My recommendation is to upgrade before AWS tries to upgrade the cluster, as once it has reached end of life the upgrade can happen at any time.
The only things you need to update manually:
Self-managed node groups
Any add-ons
The breaking change is the service account token expiry.
For any service that depends on a service account token, keep in mind that the token now expires after one hour and the service/pod needs to refresh it.
Service account tokens now have an expiration of one hour. In previous Kubernetes versions, they didn't have an expiration.
On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version. Existing control planes are automatically updated by Amazon EKS to the earliest supported version through a gradual deployment process after the end of support date. After the automatic control plane update, make sure to manually update cluster add-ons and Amazon EC2 nodes.
kubernetes-versions-1.21

Upgrading AWS elasticsearch & logstash with terraform

I am currently trying to upgrade our AWS Elasticsearch (ES) with Terraform and want to create two new clusters from the one that we currently have.
It would be preferable to do it through Terraform, as we have a huge cluster that is managed through Terraform, so if we updated through the console it would revert when we next applied the Terraform. Does anyone have any experience doing this from ES version 2.3 to version 5? I have been told to snapshot and restore but can't find any documentation on how to do this through Terraform. Thanks
So I think we have figured it out. You cannot upgrade from Elasticsearch version 2.3 through Terraform as it is just too old. You need to manually create the two new clusters, point your logs at them, then take a snapshot of the old cluster and restore it to the new one (a sketch of the snapshot/restore calls follows below). From version 5 of Elasticsearch onwards, I believe you can upgrade through Terraform as long as your provider (e.g. AWS) is at least version 1.55; I have been informed this should not need a snapshot, as the cluster will be updated in place.
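For the 2.3 case, the snapshot/restore itself happens against the Elasticsearch APIs rather than Terraform. A rough sketch with curl (the endpoints, bucket, and role ARN are hypothetical; requests to Amazon ES must be SigV4-signed, which curl 7.75+ can do with --aws-sigv4, and the IAM role must be passable by you and allowed to write to the bucket):

OLD=https://old-domain.eu-west-1.es.amazonaws.com
NEW=https://new-domain.eu-west-1.es.amazonaws.com
SIGN=(--user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" --aws-sigv4 "aws:amz:eu-west-1:es")

# Register the same S3 snapshot repository on both domains
REPO='{"type":"s3","settings":{"bucket":"my-snap-bucket","region":"eu-west-1","role_arn":"arn:aws:iam::123456789012:role/es-snapshot-role"}}'
curl "${SIGN[@]}" -XPUT "$OLD/_snapshot/migrate" -H 'Content-Type: application/json' -d "$REPO"
curl "${SIGN[@]}" -XPUT "$NEW/_snapshot/migrate" -H 'Content-Type: application/json' -d "$REPO"

# Snapshot the old cluster, then restore on the new one
curl "${SIGN[@]}" -XPUT "$OLD/_snapshot/migrate/snap-1"
curl "${SIGN[@]}" -XPOST "$NEW/_snapshot/migrate/snap-1/_restore"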

How to upgrade AWS RDS Aurora MySQL 5.6 to 5.7

We are using AWS RDS Aurora MySQL 5.6 for our production database.
AWS launched MySQL 5.7 compatible Aurora engine on 6th Feb, 2018.
I don't see any option in "modify instance" to change the engine to MySQL 5.7.
I don't see any option to restore a snapshot to a database with MySQL 5.7 either.
We want to do this upgrade with the least downtime. Please suggest what could be done here.
According to this link, you cannot upgrade the database in place; you will need to restore a snapshot of the existing database and change the engine version during that process. These restrictions appear to be only temporary and may be lifted at a later point to allow for in-place upgrades.
The comments above are true; there is still no in-place upgrade for 5.6 to 5.7, but the process is still pretty easy:
1) Go to the RDS dashboard. In the left-hand menu there is a menu item called 'Snapshots'; you can use this if you are OK with using a recent snapshot. Otherwise, select your database and choose 'Take Snapshot' from the actions drop-down.
2) In Snapshots, simply select your snapshot and choose 'Restore Snapshot' from the actions drop-down; it will automatically duplicate a bunch of your previous settings. It's at this juncture that you can select the new database engine of 5.7.
All in all, you should allow at least half an hour of downtime for the whole process, and probably a couple of hours to be on the safe side.
You can now perform an in-place upgrade of Aurora MySQL from 5.6 to 5.7.
It is only a matter of invoking modify-db-cluster or modify-global-cluster (if you are using global clusters, of course).
More in the docs (including how to do this using the AWS console).
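A minimal sketch of the in-place path with the AWS CLI (the cluster identifier and target version are hypothetical; check the valid upgrade targets first):

# List valid upgrade targets for the current 5.6 engine version
aws rds describe-db-engine-versions --engine aurora --engine-version 5.6.10a \
    --query 'DBEngineVersions[].ValidUpgradeTarget[].EngineVersion'

# Trigger the in-place major version upgrade
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine-version 5.7.mysql_aurora.2.07.2 \
    --allow-major-version-upgrade --apply-immediately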
The easiest way is:
Take a manual snapshot of the Aurora MySQL 5.6 cluster first.
Then create a new Aurora MySQL 5.7 cluster from the manual snapshot taken in step 1 (see the sketch after these steps).
Your credentials will be the same as those of the older 5.6 cluster.
Verify that the data is correct.
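The same snapshot-restore flow, sketched with the AWS CLI (identifiers and versions are hypothetical placeholders):

# Restore the 5.6 snapshot into a new 5.7-compatible (aurora-mysql) cluster
aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier aurora57-cluster \
    --snapshot-identifier my-aurora56-snapshot \
    --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.07.2

# A restored cluster has no instances; add one before connecting
aws rds create-db-instance --db-instance-identifier aurora57-instance \
    --db-cluster-identifier aurora57-cluster \
    --db-instance-class db.r5.large --engine aurora-mysql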
The need was to upgrade AWS RDS Aurora MySQL from 5.6 to 5.7 without causing any downtime to our production. Being a SaaS solution, we could not afford any downtime.
Background
We have a distributed architecture based on microservices running in AWS Fargate and AWS Lambda. AWS RDS Aurora MySQL is used for data persistence. While other services are in use, they are not of interest in this use case.
Approach
After due deliberation on an in-place upgrade with a declared downtime and maintenance window, we realized that a zero-downtime upgrade was needed, as otherwise we would have created a processing backlog for ourselves.
High level approach was:
Create an AWS RDS cluster with the required version and copy the data from the existing RDS cluster to this new cluster.
Set up AWS DMS (Data Migration Service) replication between these two clusters (a CLI sketch follows below).
Once the replication has caught up and is ongoing, switch the application to point to the new DB. In our case, the microservices running in AWS Fargate had to be updated with the new endpoint, which took care of draining the old cluster and using the new one.
For the complete post, please check out
https://bharatnainani1997.medium.com/aws-rds-major-version-upgrade-with-zero-downtime-5-6-to-5-7-b0aff1ea1f4
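As a rough sketch, the DMS pieces look like this with the AWS CLI (every identifier, endpoint, credential, and ARN below is a hypothetical placeholder):

# Replication instance that does the copying
aws dms create-replication-instance \
    --replication-instance-identifier aurora-56-to-57 \
    --replication-instance-class dms.r5.large

# Source (old 5.6 cluster) and target (new 5.7 cluster) endpoints
aws dms create-endpoint --endpoint-identifier src-aurora-56 --endpoint-type source \
    --engine-name aurora --server-name old-cluster.cluster-xxxx.eu-west-1.rds.amazonaws.com \
    --port 3306 --username admin --password '...'
aws dms create-endpoint --endpoint-identifier dst-aurora-57 --endpoint-type target \
    --engine-name aurora --server-name new-cluster.cluster-yyyy.eu-west-1.rds.amazonaws.com \
    --port 3306 --username admin --password '...'

# Full load plus ongoing replication (CDC); switch the apps once it has caught up
aws dms create-replication-task --replication-task-identifier aurora-56-to-57-task \
    --source-endpoint-arn <src-arn> --target-endpoint-arn <dst-arn> \
    --replication-instance-arn <ri-arn> --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json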
To manage an update for a DB instance or DB cluster
Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
In the navigation pane, choose Instances to manage updates for a DB instance, or Clusters to manage updates for an Aurora DB cluster.
Select the checkbox for the DB instance or DB cluster that has a required update.
Choose Instance actions for a DB instance, or Actions for a DB cluster, and then choose one of the following:
Upgrade now
Upgrade at next window
Note: If you choose Upgrade at next window and later want to delay the update, you can select Defer upgrade.
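The CLI equivalent of those console actions (the cluster ARN is a hypothetical placeholder):

# List resources with pending maintenance, then apply now or at the next window
aws rds describe-pending-maintenance-actions
aws rds apply-pending-maintenance-action \
    --resource-identifier arn:aws:rds:eu-west-1:123456789012:cluster:my-cluster \
    --apply-action system-update \
    --opt-in-type immediate    # or next-maintenance / undo-opt-in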

Installing Impala 2.3 on Amazon EMR

I see that Impala 2.3 is only supported on Cloudera CDH 5.5 and above. Impala 2.2 can be installed on Amazon EMR, as there is a bootstrap script available on GitHub and you don't require a Cloudera installation.
However, I don't see any way to install Cloudera CDH 5.5 or 5.6 on Amazon EMR. I want to install Impala 2.3, so is there any way to install it on Amazon EMR?
Well, my previous answer was deleted because it "does not provide an answer to the question". I'm not going to argue whether it's better to have a partially incorrect answer to this question or whether making categorical claims without foundation is a good answer :/.
In any case, I'm not giving up :)
Yes, it's possible to install "anything", on paper.
Once you launch the EMR cluster, all instances will appear in your EC2 console. The only thing is that you have to be careful to assign the right permissions to access your instances via SSH. My suggestion is to create a specific security group with that access and assign this extra security group to the instances using the advanced configuration of the cluster.
With the proper configuration, you can SSH into any instance and install anything (you should be able to scp any file, or download from the internet if your VPC is configured appropriately). Note that the user will be "hadoop" instead of "ec2-user", but this is documented in the EMR user guide.
Keep in mind that EMR instances are volatile: if the cluster is terminated, the installation will not survive.
On the other hand, using the latest EMR AMI versions and the latest AWS capabilities (I think this was always the case, but it doesn't matter now), you should be able to define bootstrap actions and install anything you want.
Using the "Advanced configuration" of your cluster, you can access the "Bootstrap" actions to be executed on your cluster. You can even have different actions depending on the node type (master, core, task). You should store your scripts (and/or jar files) in an S3 bucket and make that bucket available to your cluster. On paper you could install Impala on the EC2 instances comprising the EMR cluster, but I'm not sure if it will work (a sketch of wiring up a bootstrap action follows below).
For more information, you can read http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.html
And for a previous version of EMR AMI and not so recent version of Impala you can read https://github.com/awslabs/emr-bootstrap-actions/tree/master/impala
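For completeness, a minimal sketch of attaching such a bootstrap action at cluster creation (bucket, script name, and instance settings are hypothetical; the awslabs script above targets older AMI/Impala versions):

aws emr create-cluster --name impala-test \
    --release-label emr-4.7.0 \
    --applications Name=Hadoop \
    --instance-type m4.large --instance-count 3 \
    --use-default-roles \
    --bootstrap-actions Path=s3://my-bucket/install-impala.sh,Name=InstallImpala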
Thanks Mark, you forced me to elaborate on my comment.
No, it is not possible to "install" anything on EMR because it's a PaaS provided by AWS. But if your goal is to run a newer version of Impala on AWS, there is an AWS Quick Start path for installing CDH 5.x (including Impala) that makes the process relatively easy.
http://aws.amazon.com/quickstart/