Couchbase Community Upgrade - couchbase-server (3.x) conflicts with couchbase-server-community (4.x) - amazon-web-services

I am trying to upgrade a Couchbase Community server that is currently running 3.0 to 4.0. I am using Amazon Linux on AWS, and I used the CentOS 6 build to upgrade from 2.5 to 3.0; that upgrade was super smooth. According to the documentation, I should be able to go from 3.x to 4.x just fine as well.
http://developer.couchbase.com/documentation/server/4.0/install/upgrade-matrix.html:
Upgrade from the latest version 3.x directly to version 4.x using any supported upgrade strategy.
But I get the message
couchbase-server conflicts with couchbase-server-community-4.0.0-4051.x86_64
I have found that the couchbase-server name is now reserved for the enterprise edition, and couchbase-server-community is now used in 4.0 for the community edition, which would explain the conflict. https://issues.couchbase.com/browse/MB-15716
Is this really an upgrade-breaking change? I cannot find any documentation on how to resolve this change short of uninstalling and reinstalling.
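For reference, the failing step looks roughly like this (the exact rpm file name is an assumption from my download; yours may differ):

    # What is currently installed (3.x ships under the couchbase-server package name)
    rpm -qa | grep couchbase

    # In-place upgrade attempt that produces the conflict above
    sudo rpm -Uvh couchbase-server-community-4.0.0-centos6.x86_64.rpm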

If it were me, since you are on AWS, I would just spin up new instances, install Couchbase on them, and do rebalances where you add a new node and remove an old one (1 in, 1 out; 2 in, 2 out; etc.). With the same number of nodes going in and out of the cluster, the cluster will do a swap rebalance, which is the most efficient. All of this can be done while up and serving traffic. This is a very standard upgrade path and the recommended approach when in the cloud.
Once upgraded, discard the old instances. Yes you are running more instances at the same time during the upgrade, but for the cost of a few lattes you are upgraded smoothly.
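A rough sketch of that swap rebalance from the command line, assuming default ports, an Administrator/password login, and placeholder hostnames (adjust all three to your cluster):

    # Add the new 4.0 node and remove an old 3.0 node in the same rebalance;
    # equal numbers in and out make Couchbase perform a swap rebalance.
    /opt/couchbase/bin/couchbase-cli rebalance -c existing-node.example.com:8091 \
      -u Administrator -p password \
      --server-add=new-node.example.com:8091 \
      --server-add-username=Administrator \
      --server-add-password=password \
      --server-remove=old-node.example.com:8091

The same can be done from the web console's Server Nodes tab if you prefer clicking over typing.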

I have experienced the same conflict when trying to upgrade from Community version 3.0.1 to Community 4.0.0.
It is worth mentioning that if you uninstall the 3.0.1 version and then install 4.0.0, all your buckets and their data are kept. There may be cases where this would fail, so it is always good to take a backup first, but in my case the transition was smooth.
This was on my developer machine; for a cloud installation I really like the swap in/out approach, which means you can do the upgrade without interrupting the service.
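A minimal sketch of that backup-first, uninstall/reinstall path on an RPM-based machine (the backup directory and the rpm file name are assumptions):

    # Back up all buckets before touching the packages
    /opt/couchbase/bin/cbbackup http://localhost:8091 /backups/couchbase -u Administrator -p password

    # Remove the 3.0.1 package; in my case the bucket data (under /opt/couchbase/var by default) survived
    sudo yum remove -y couchbase-server

    # Install the 4.0.0 community package
    sudo rpm -ivh couchbase-server-community-4.0.0-centos6.x86_64.rpm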

Related

istio stopped working after AKS cluster update from 1.15 to 1.23

We have an AKS cluster that I managed to upgrade from 1.15 to 1.23. After that, our Istio stopped working, so we will need to upgrade it as well. It is currently on version 1.5.0, and we need to upgrade it to the latest supported version.
I had a brief look, and it does not seem to be straightforward, as we can't skip a minor version; i.e. we can't upgrade from 1.5.0 to 1.7.0 without upgrading to 1.6.0 first.
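A sketch of what the one-minor-at-a-time path could look like with istioctl, assuming the original install was done with istioctl (the version numbers below are examples only; keep stepping until you reach your target release and read each version's upgrade notes first):

    # Walk the control plane up one minor version at a time
    for v in 1.6.14 1.7.8 1.8.6; do
      curl -L https://istio.io/downloadIstio | ISTIO_VERSION=$v sh -
      ./istio-$v/bin/istioctl upgrade
    done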

Will there be any compatibility issue if i upgrade my Databricks run time version

Will there be any issue with my current notebooks and jobs if I upgrade my Databricks runtime version from 9.1 LTS to 10.4 LTS?
I haven't tried upgrading the version yet. If I upgrade it, will I be able to change it back to the previous version?
It's really a very broad question - the exact answer depends on the features and libraries/connectors that you're using in your code. You can refer to the Databricks Runtime 10.x migration guide and the Spark 3.2.1 migration guide for more information about the upgrade.
Usually, the correct way is to try running your job with the new runtime, but in a test environment where your production data won't be affected.
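One hedged way to do that is to point a copy of the job at a throwaway cluster pinned to the new runtime, e.g. with the legacy Databricks CLI (the spark_version string is the 10.4 LTS one; the node type and worker count are placeholders):

    # Create a test cluster on Databricks Runtime 10.4 LTS and run a copy of the
    # job against it, using non-production data
    databricks clusters create --json '{
      "cluster_name": "dbr-10.4-upgrade-test",
      "spark_version": "10.4.x-scala2.12",
      "node_type_id": "Standard_DS3_v2",
      "num_workers": 2
    }'

Since the runtime is a property of the cluster, switching back is just a matter of editing spark_version again.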

Should I upgrade from LTS to another LTS ember version?

I've been trying to upgrade my Ember app from 2.18 to 3.4.4 and I just want to know if I chose the correct Ember version (3.4.4). Any response is much appreciated. Also, what are the disadvantages or issues I may face if I jump from 2.18 to 3.8.1?
My personal recommendation is to upgrade from one LTS to the next LTS version. There's a great video from Ember Map that discusses a solid strategy for upgrading your Ember apps, which I will summarize here in case the link ever goes stale.
Upgrade all forwards-compatible packages
Upgrade 3rd-party dependencies and addons, one at a time
Upgrade Ember CLI and friends using ember-cli-update
And in my opinion, use ember-cli-update --to next-lts-version-here. Once you upgrade to the LTS, fix deprecations and tests until everything is green, and then continue. I used this process to go from 2.16 -> 3.8 over a weekend.
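For reference, one LTS hop with that tool looks roughly like this (version numbers are just the ones discussed in this thread):

    # Move the app blueprint to the next LTS and walk through the codemod prompts
    npx ember-cli-update --to 3.4

    # Fix deprecations and keep running the suite until it is green, then repeat
    ember test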
The latest long-term support release is 3.8; here is the release cycle. You can jump to 3.8 if it suits you.
There is an addon named ember-cli-update that applies the changes automatically. You can also check out the Ember blog to learn about the changes.

Ambari-agent "CERTIFICATE_VERIFY_FAILED", Is it safe to disable the certificate verification in Python?

Ambari version: 2.2.2.18
HDP stack: 2.4.3
OS: centos 7.3
Issue description:
Ambari-server can't communicate with Ambari agent. I can see below error in the ambari-agent logs:
ERROR 2017-09-18 06:35:34,684 NetUtil.py:84 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
ERROR 2017-09-18 06:35:34,684 NetUtil.py:85 - SSLError: Failed to connect. Please check openssl library versions.
I started facing this issue recently, and it can be replicated consistently after the instances are restarted (I am using EC2 instances).
I am able to register agent nodes successfully, install the HDP cluster, run YARN jobs, etc. with no problem at all. Once I restart my instances, I see this problem.
There are some solutions already posted for this problem like:
Downgrade Python from 2.7 to a lower version. This is a known problem with Ambari and Python 2.7.
Control certificate verification by disabling it:
Set "verify = disable" under /etc/python/cert-verification.cfg
I don't want to play with Python, as it can disrupt many other things such as Cassandra, the yum package manager, etc.
The second workaround is easy and it works well!
Now comes my question: is it safe to disable certificate verification in Python, i.e. by setting the property verify = disable?
Generally, it's a bad idea. If somebody has access to the port on the server that is used for agent-server communication (8443, if I'm not mistaken), they can register as an agent and get all your cluster configs and passwords. Alternatively, a classic man-in-the-middle attack would allow the same thing by reading your unencrypted traffic. A somewhat more difficult attack would allow sending commands to the agents (probably with root permissions).
Your issue sounds like you reprovisioned your ambari-server host and left old ambari-agent instances running, or maybe your certificates became outdated? On the first connection to ambari-server, agents generate certificates and send them to the server. The server signs these certificates with its own key, so the server-agent connection is then encrypted. Did you try removing the old certificates and restarting the server and agents as suggested here?
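A sketch of that clean-up, assuming the default key locations (paths can differ between Ambari versions):

    # On each agent: remove the old signed certificates so they are re-issued
    # on the next registration, then restart
    sudo ambari-agent stop
    sudo rm -f /var/lib/ambari-agent/keys/*.crt /var/lib/ambari-agent/keys/*.key /var/lib/ambari-agent/keys/*.csr
    sudo ambari-agent start

    # On the server, restart so it re-reads its own certificate store
    sudo ambari-server restart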
How we investigated this issue and what solution we adopted:
Investigation details:
Downgrading to Python 2.6 is not feasible as there are OS dependencies, and as per the suggestion from 'Dmitriusan' in the previous answer, it's not a good idea to disable certificate verification in Python.
We use AWS EC2.
With Python 2.7, JDK 1.8 and CentOS 7.2 there is no issue. Everything is smooth.
With Python 2.7, JDK 1.8 and CentOS 7.3 or CentOS 7.4 we are seeing this issue.
The issue I have reported here is with respect to CentOS 7.3; with CentOS 7.4 the issue is slightly different: certificate verification fails while adding nodes to the cluster itself.
Downgrading from CentOS 7.3 to 7.2 is not straightforward. The AWS EC2 marketplace provides a CentOS 7.0 image, and when we create an instance from that image, it applies security and patch updates, resulting in CentOS 7.3.
We could create our own CentOS 7.2 image from existing servers, but it's always good to stay on the latest OS updates for security reasons.
To describe it shortly, we had workarounds but not a solution.
Solution we adopted:
After a series of tests, we decided to upgrade to CentOS 7.4, HDP 2.6.3.0, and Ambari 2.6.0.0.
With CentOS 7.4 and Ambari 2.6.0.0, we don't see this issue even though Python 2.7.5 is installed.
So this looks to be an issue with Ambari.
Older versions of Ambari (2.4.2) do not recognize the force-TLS configuration. We upgraded Ambari to 2.6.2 and the heartbeat started working.
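For anyone stuck on an older Ambari, the force-TLS setting mentioned here is the agent-side one; a hedged sketch of applying it (the property name should be verified against your Ambari version's documentation):

    # Pin the agent to TLSv1.2 under the [security] section of ambari-agent.ini,
    # then restart the agent
    sudo sed -i '/^\[security\]/a force_https_protocol=PROTOCOL_TLSv1_2' /etc/ambari-agent/conf/ambari-agent.ini
    sudo ambari-agent restart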

Rethinkdb chef solo cookbook

Is there any RethinkDB chef-solo cookbook that allows one to install the latest RethinkDB on Ubuntu 14.04 / AWS?
I tried a couple of options, but they didn't help:
https://github.com/vFense/rethinkdb-chef - how to install latest version?
https://github.com/sprij/rethinkdb-cookbook.git - source compilation takes hours
I would appreciate any help regarding this.
Thanks
Try the cookbook that is available from the community repository first:
https://supermarket.chef.io/cookbooks/rethinkdb
It claims to be integration-tested on Ubuntu. If it doesn't work under chef-solo, then I'd advise you to switch to local-mode chef-client instead.
https://www.chef.io/blog/2013/10/31/chef-client-z-from-zero-to-chef-in-8-5-seconds/
PS: Also check out Berkshelf for managing cookbook dependencies. It's a standard tool in the Chef DK.
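A minimal sketch of that local-mode run, with Berkshelf pulling the Supermarket cookbook (the run list assumes the cookbook's default recipe):

    # Fetch the cookbook and its dependencies from the Supermarket
    printf 'source "https://supermarket.chef.io"\ncookbook "rethinkdb"\n' > Berksfile
    berks vendor cookbooks

    # Converge the node with chef-client in local (zero) mode instead of chef-solo
    sudo chef-client -z -o 'recipe[rethinkdb]'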
I updated rethinkdb-chef to work with the latest version of RethinkDB as well as removed the network portion of the .kitchen.yml file. I validated that this does work on CentOS 6 and Ubuntu 14.04.
I still need to write tests as well as documentation. As per Mark's answer, try to use the community-supported version first. I created this cookbook so that I can customize it to my needs with vFense.
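If you want to reproduce that validation, Test Kitchen drives it from the cookbook checkout (the instance name below follows the usual suite-platform pattern and is an assumption about this cookbook's .kitchen.yml):

    # Build, converge, verify and destroy every configured platform
    kitchen test

    # Or iterate on a single platform, e.g. the Ubuntu 14.04 instance
    kitchen converge default-ubuntu-1404
    kitchen verify default-ubuntu-1404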