istio stopped working after AKS cluster update from 1.15 to 1.23 - istio

We have an AKS cluster that I managed to upgrade from 1.15 to 1.23. After that, our Istio installation stopped working, so we will need to upgrade it as well. It's currently on version 1.5.0, and we need to upgrade it to the latest supported version.
I had a brief look, and it doesn't seem to be straightforward: we can't skip a minor version, i.e. we can't upgrade from 1.5.0 to 1.7.0 without upgrading to 1.6.0 first.
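One way to handle the no-skipping constraint is to script the hops, downloading each release's own `istioctl` and running its in-place upgrade before moving to the next minor. This is a hedged sketch, not a tested runbook: the exact patch versions, the namespace, and the assumption that `istioctl upgrade` is available from your starting version onward are all placeholders you'd need to verify against the Istio upgrade notes for each hop.

```shell
# Sketch: walk Istio up one minor version at a time (versions are examples).
# Each hop uses that release's own istioctl binary.
for version in 1.6.14 1.7.8 1.8.6; do
  curl -L https://istio.io/downloadIstio | ISTIO_VERSION="$version" sh -
  "istio-$version/bin/istioctl" upgrade -y

  # Restart injected workloads so sidecars pick up the new proxy image
  kubectl rollout restart deployment -n default
done
```

Between hops it's worth pausing to confirm the data plane is healthy (e.g. with `istioctl proxy-status`) before starting the next minor upgrade.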

Related

Will there be any compatibility issue if i upgrade my Databricks run time version

Will there be any issue with my current notebooks and jobs if I upgrade my Databricks runtime version from 9.1 LTS to 10.4 LTS?
I haven't tried upgrading the version. If I upgrade it, will I be able to change it back to the previous version?
It's really a very broad question - the exact answer depends on the features and libraries/connectors that you're using in your code. You can refer to the Databricks Runtime 10.x migration guide and the Spark 3.2.1 migration guide for more information about the upgrade.
Usually, the correct approach is to try running your job with the new runtime, but in a test environment where your production data won't be affected.
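The "test it on the new runtime first" advice can be done without touching the production job definition by submitting a one-off run against a throwaway cluster pinned to the new runtime. A hedged sketch using the Databricks Jobs `runs/submit` API follows; the workspace URL, token, node type, and notebook path are placeholders, and the exact `spark_version` string should be taken from your workspace's runtime list.

```shell
# Sketch: submit a one-off smoke-test run of an existing notebook on the
# 10.4 LTS runtime. $DATABRICKS_HOST and $DATABRICKS_TOKEN are placeholders.
curl -s -X POST "$DATABRICKS_HOST/api/2.1/jobs/runs/submit" \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "run_name": "runtime-upgrade-smoke-test",
    "new_cluster": {
      "spark_version": "10.4.x-scala2.12",
      "node_type_id": "Standard_DS3_v2",
      "num_workers": 1
    },
    "notebook_task": {"notebook_path": "/Test/my-notebook"}
  }'
```

If the run succeeds, the production job's cluster spec can then be switched to the new runtime; if it fails, nothing in production has changed.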

Should I upgrade from LTS to another LTS ember version?

I've been trying to upgrade my ember app from 2.18 to 3.4.4 and I just want to know if I chose the correct ember version which is 3.4.4? Any response is much appreciated. Also what are the disadvantages or issues I may face if I jump from 2.18 to 3.8.1?
My personal recommendation is to upgrade from one LTS to the next LTS version. There's a great video from Ember Map that discusses a great strategy for upgrading your ember apps which I will summarize here in case the link ever goes stale.
Upgrade all forwards-compatible packages
Upgrade 3rd-party dependencies and addons, one at a time
Upgrade Ember CLI and friends using ember-cli-update
And in my opinion, use ember-cli-update --to next-lts-version-here. Once you upgrade to the LTS, fix deprecations and run the tests until everything is green, and then continue. I used this process to go from 2.16 -> 3.8 over a weekend.
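The LTS-hopping flow above can be sketched as a few commands. This is illustrative only: the version numbers are examples (pick each next LTS from the Ember release page), and `ember-cli-update` will surface codemod conflicts you have to resolve by hand along the way.

```shell
# Sketch of one LTS hop using ember-cli-update (versions are examples)
npm install -g ember-cli-update

# Upgrade the app blueprint to the next LTS, resolving conflicts as prompted
ember-cli-update --to 3.4

# Fix deprecation warnings, then verify before starting the next hop
ember test
```

Repeating this loop per LTS keeps each diff small enough to review, which is the main advantage over jumping straight to the latest release.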
The latest long-term support version is 3.8; here is the release cycle. You can jump to 3.8 if it suits you.
There is an addon named ember-cli-update that applies the changes automatically. You can also check out the Ember blog to learn about the changes.

WSO2 Governance Registry upgrade?

WSO2 has some great documentation about upgrading between versions, but we have version 4.5.3, and I don't see in the documents if I can upgrade directly from 4.5.3 to 5.1.0?
Their docs go between levels, like 4.5.3 to 4.6, 4.6 to 5.0, 5.0 to 5.1.
Is there a process to go directly from 4.5.3 to 5.1 (without having to do the interim levels)?
No, we provide steps for migrating from the current version to the next version, so you have to migrate step by step. Alternatively, you can combine the steps and migrate in one pass.
Providing migration scripts between non-consecutive versions isn't scalable - imagine how many combinations of scripts we would have to maintain.
You can't migrate directly from 4.5.3 to 5.1.0; you have to do an incremental migration. Other than that, we recently released the new G-Reg 5.2.0, which is a much-improved version of 5.1.0, so we'd prefer you use the latest version possible.
We have tested these migration scripts in our local environments, and the migration won't take that much time. Let's say you have 500 artifacts in G-Reg 4.5.3: it should take less than 2 days to do the migrations up to G-Reg 5.2.0. Looking at the benefits you get after the migration, I think the time and resources consumed will be well worth it.
Please see the official documentation for Upgrading from a Previous Release, and download the all-new G-Reg 5.2.0 from here.

Couchbase Community Upgrade - couchbase-server (3.x) conflicts with couchbase-server-community (4.x)

I am trying to upgrade a Couchbase Community server that is currently running 3.0 to 4.0. I am using Amazon Linux on AWS, and I used the CentOS 6 build to upgrade from 2.5 to 3.0 - that upgrade was super smooth. According to the documentation, I should be able to go from 3.x to 4.x just fine as well.
http://developer.couchbase.com/documentation/server/4.0/install/upgrade-matrix.html:
Upgrade from the latest version 3.x directly to version 4.x using any supported upgrade strategy.
But I get the message
couchbase-server conflicts with couchbase-server-community-4.0.0-4051.x86_64
I have found that the couchbase-server name is now reserved for the enterprise edition, and couchbase-server-community is now used in 4.0 for the community edition, which would explain the conflict. https://issues.couchbase.com/browse/MB-15716
Is this really an upgrade-breaking change? I cannot find any documentation on how to resolve this change short of uninstalling and reinstalling.
If it were me and since you are on AWS, just spin up new instances, install Couchbase on them and do rebalances where you add one in and remove an old one (1 in, 1 out or 2 in, 2 out, etc.). With the same amount going in and out of the cluster, the cluster will do a swap rebalance which is the most efficient. All of this can be done while up and serving traffic. This is a very standard upgrade path and the recommended approach when in the cloud.
Once upgraded, discard the old instances. Yes you are running more instances at the same time during the upgrade, but for the cost of a few lattes you are upgraded smoothly.
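The swap-rebalance path described above roughly comes down to two `couchbase-cli` calls per swap. This is a hedged sketch: hostnames and credentials are placeholders, and the exact flag names should be checked against the `couchbase-cli` version shipped with your release.

```shell
# Sketch: swap one new 4.0 node in for one old 3.0 node while serving traffic.
# 1. Add the freshly installed 4.0 node to the existing cluster
couchbase-cli server-add -c existing-node:8091 -u Administrator -p password \
  --server-add=new-node:8091 \
  --server-add-username=Administrator --server-add-password=password

# 2. Rebalance the old node out; equal nodes in and out triggers the
#    efficient swap rebalance mentioned above
couchbase-cli rebalance -c existing-node:8091 -u Administrator -p password \
  --server-remove=old-node:8091
```

Repeat per pair (or do 2-in/2-out, etc.) until every node in the cluster is running 4.0, then terminate the old instances.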
I have experienced the same conflict when trying to upgrade from Community version 3.0.1 to Community 4.0.0.
It is worth mentioning that if you uninstall the 3.0.1 version and then install 4.0.0, all your buckets and their data are kept. Maybe there are some cases where this would fail, always good to take a backup first, but in my case the transformation was smooth.
This was on my developer machine, for a cloud installation I really like the swap in/out which means you can do the upgrade without interruption of the service.
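For the single-machine uninstall/reinstall path, a backup first is cheap insurance even though the data directory normally survives the package swap. A sketch, assuming an RPM-based system with Couchbase in the default `/opt/couchbase` location (paths, credentials, and the exact RPM filename are placeholders):

```shell
# Sketch: back up, then replace the conflicting 3.x package with 4.x community.
/opt/couchbase/bin/cbbackup http://localhost:8091 /tmp/cb-backup \
  -u Administrator -p password

sudo rpm -e couchbase-server                                   # remove 3.x
sudo rpm -i couchbase-server-community-4.0.0-4051.x86_64.rpm   # install 4.x
```

If anything goes wrong, `cbrestore` can load the backup back into the new server.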

Upgrading GKE Kubernetes node version to v1.1.1, master still at v1.0.7 despite conflicting release notes

We're trying to take advantage of some of the new features in the v1.1.1 Kubernetes release by upgrading our cluster running on Google Container Engine.
On the release notes Google states that cluster masters are currently running v1.1.1. However, when trying to upgrade our existing cluster nodes (following the cluster upgrade docs), I get the following the trace:
Failed to start node upgrade: Desired node version (1.1.1) cannot be greater than current master version (1.0.7)
This is confirmed by running kubectl version:
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"}
All the while, the gcloud console reports a cluster api version of 1.0.6.
Are the master upgrades still in process for existing clusters? Does a timeline exist on that? Earlier release notes mention a 1 week runway for existing cluster version upgrades; we've just surpassed that window from the release date of v1.1.1.
The release notes state that "Kubernetes v1.1.1 is the default version for new clusters" (emphasis added). Existing clusters will be upgraded from 1.0 to 1.1 in the coming weeks. If you want to take advantage of the 1.1 features immediately you can create a new cluster at 1.1 or contact us on the #google-containers channel on Slack to ask for your cluster to be upgraded sooner.
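For the "create a new cluster at 1.1" option, the gcloud side might look like the sketch below. This is hedged: flag availability varied across gcloud releases of that era, so cluster names and the `--cluster-version` flag are assumptions to verify with `gcloud container clusters create --help`.

```shell
# Sketch: create a fresh cluster pinned to 1.1.1 (names are placeholders)
gcloud container clusters create new-cluster --cluster-version=1.1.1

# Later, once an existing cluster's master has been upgraded by Google,
# its nodes can be upgraded in place to match
gcloud container clusters upgrade existing-cluster --cluster-version=1.1.1
```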