Istio version upgrade from 1.7.3 to 1.13.1 with AKS cluster version 1.22.6, step by step - istio

I am looking for a solution to upgrade Istio from version 1.7.3 to 1.13.1, along with AKS cluster version 1.22.6. I tried to follow both the canary and in-place approaches, but the istiod pod goes down during the upgrade.
Canary upgrade docs: https://istio.io/latest/docs/setup/upgrade/canary/
Please help me with the right approach and a step-by-step procedure.
Your help and support would be appreciated!

Related

Automatic Listener Rule Priority in AWS ALB

This is in continuation to -
Automatically set ListenerRule Priority in CloudFormation template
I tried the solution mentioned in the link above and have been using it successfully. However, there's a challenge now: the solution worked great on Python 3.6, but that runtime is no longer supported. I tried running it on higher versions (3.7, 3.8, and 3.9) with no luck. Can someone please help with compatible code that runs on Python 3.8 or 3.9?
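Without seeing the original snippet it's hard to say exactly what breaks on 3.7+, but one way to make this version-independent is to keep the AWS call and the priority-picking logic separate. Below is a minimal sketch, not the linked answer's code: the function name `next_free_priority` and the rule shapes are my assumptions. The input would come from the `Rules` list that boto3's `elbv2` client returns from `describe_rules(ListenerArn=...)`, where each rule carries a `Priority` field that is either a numeric string or the literal `default`.

```python
# Sketch (assumption): compute the lowest unused ALB listener-rule priority
# from a describe_rules-shaped list of rule dicts. Pure Python 3.8/3.9,
# no version-sensitive dependencies.

def next_free_priority(rules):
    """Return the lowest positive integer priority not already taken.

    `rules` mirrors the Rules entries from elbv2.describe_rules, e.g.
    [{'Priority': '1'}, {'Priority': 'default'}]; the default rule has
    no numeric priority and is skipped.
    """
    taken = {
        int(rule['Priority'])
        for rule in rules
        if rule.get('Priority', 'default') != 'default'
    }
    priority = 1
    while priority in taken:
        priority += 1
    return priority


if __name__ == '__main__':
    sample = [{'Priority': '1'}, {'Priority': '2'},
              {'Priority': '4'}, {'Priority': 'default'}]
    print(next_free_priority(sample))  # prints 3 (lowest gap)
```

Keeping the pure logic out of the AWS call makes it easy to unit-test locally without touching AWS at all.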

Migrate from Helm to Istioctl

I'm running Istio 1.3.5 on my Kubernetes cluster, installed using Helm. However, this installation method will be deprecated in the future, so I'd like to migrate to istioctl.
Is there a way to "silently" migrate my current Istio deployment from Helm to istioctl?
I read something about istioctl manifest migrate, but it's not very clear.
I also read that I need to upgrade to 1.4.3 before upgrading to 1.5.x, so I'd like to take this opportunity to switch to the istioctl installation mode.
Thank you for your help.
Unfortunately, there is no migration path from Helm to istioctl yet.
There is an issue on GitHub about exactly that:
"There is not yet a migration path from helm to istioctl, but it will certainly exist in 1.6, which is what this issue is tracking. You can go directly from 1.4 to 1.6 if desired once that is in place. Sorry about some of the confusion; we didn't do a great job around this."
So waiting a little longer might be the easiest solution, as the official migration path will most likely come with better support and documentation.
As you mentioned, it is possible to manually migrate Istio from Helm to istioctl after upgrading with Helm first. However, this is not a simple task.
Hope it helps.

Ambari-agent "CERTIFICATE_VERIFY_FAILED": is it safe to disable certificate verification in Python?

Ambari version: 2.2.2.18
HDP stack: 2.4.3
OS: centos 7.3
Issue description:
Ambari-server can't communicate with the Ambari agent. I can see the error below in the ambari-agent logs:
ERROR 2017-09-18 06:35:34,684 NetUtil.py:84 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
ERROR 2017-09-18 06:35:34,684 NetUtil.py:85 - SSLError: Failed to connect. Please check openssl library versions.
I am facing this issue recently, and it can be replicated consistently after the instances are restarted (I am using EC2 instances).
I am able to register agent nodes successfully, install the HDP cluster, run YARN jobs, etc. with no problem at all. Once I restart my instances, I see this problem.
There are some solutions already posted for this problem, such as:
1. Downgrade Python from 2.7 to a lower version. This is a known problem with Ambari and Python 2.7.
2. Disable certificate verification by setting "verify = disable" under /etc/python/cert-verification.cfg.
I don't want to play with the Python version, as it could disrupt many other things, such as Cassandra, the yum package manager, etc.
The second workaround is much easier, and it works well!
Now comes my question: is it safe to disable certificate verification in Python, i.e. by setting the property verify = disable?
Generally, it's a bad idea. If somebody has access to the port on the server that is used for agent-server communication (8443, if I'm not mistaken), they can register as an agent and get all your cluster configs and passwords. A classic man-in-the-middle attack would likewise allow the same by reading your unencrypted traffic. A somewhat more difficult attack would even allow sending commands to the agents (probably with root permissions).
Your issue sounds like you reprovisioned your ambari-server host and left old ambari-agent instances running, or maybe your certificates became outdated. On first connection to ambari-server, agents generate certificates and send them to the server. The server signs these certificates with its own key, so that subsequent server-agent communication is encrypted. Did you try removing the old certificates and restarting the server and agents, as suggested here?
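To make concrete what "verify = disable" gives up, here is a small stdlib-only Python sketch contrasting the default client TLS configuration with a verification-disabled one. The contexts are illustrative (Ambari's agent does the equivalent internally); the point is which checks each configuration performs.

```python
import ssl

# Default, safe client configuration: the server's certificate chain is
# validated against the trust store, and the hostname must match the cert.
safe_ctx = ssl.create_default_context()
print(safe_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(safe_ctx.check_hostname)                    # True

# What "verify = disable" effectively does: any certificate is accepted,
# so the agent cannot tell a real ambari-server from an impostor.
unsafe_ctx = ssl.create_default_context()
unsafe_ctx.check_hostname = False       # must be disabled before verify_mode
unsafe_ctx.verify_mode = ssl.CERT_NONE
print(unsafe_ctx.verify_mode == ssl.CERT_NONE)    # True
```

With CERT_NONE the handshake still encrypts the channel, but nothing proves who is on the other end, which is exactly what enables the impersonation and man-in-the-middle attacks described above.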
How we investigated this issue and the solution we adopted:
Investigation details:
Downgrading to Python 2.6 is not feasible, as there are OS dependencies, and per the suggestion from 'Dmitriusan' in the previous comment, it's not a good idea to disable certificate verification in Python.
We use AWS EC2.
With Python 2.7, JDK 1.8, and CentOS 7.2, there is no issue. Everything is smooth.
With Python 2.7, JDK 1.8, and CentOS 7.3 or 7.4, we see this issue.
The issue I have reported here is with CentOS 7.3; with CentOS 7.4 the issue is slightly different: certificate verification fails while adding nodes to the cluster itself.
Downgrading from CentOS 7.3 to 7.2 is not straightforward, and the AWS EC2 marketplace provides only a CentOS 7.0 image which, when we create an instance from it, applies security and patch updates, resulting in CentOS 7.3.
We could create our own CentOS 7.2 image from existing servers, but it's always good to stay on the latest OS update for security reasons.
In short, we had workarounds but not a solution.
The solution we adopted:
After a series of tests, we decided to upgrade to CentOS 7.4, HDP 2.6.3.0, and Ambari 2.6.0.0.
With CentOS 7.4 and Ambari 2.6.0.0, we don't see this issue, even though Python 2.7.5 is installed.
So this looks to be an issue with Ambari.
The older version of Ambari (2.4.2) does not recognize the force-TLS configuration. We upgraded Ambari to 2.6.2, and the heartbeat started working.
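For reference, the force-TLS setting lives in the agent's configuration file, /etc/ambari-agent/conf/ambari-agent.ini. The fragment below is a hedged example; verify the exact key name against the docs for your Ambari version, since older agents (e.g. 2.4.2, as noted above) ignore it:

```ini
[security]
; Assumption: key name as documented for newer Ambari releases;
; pins the agent to a TLS protocol version the server accepts.
force_https_protocol=PROTOCOL_TLSv1_2
```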

Python/Django Elastic Beanstalk now failing on deploy

I'm working on a project that I haven't touched in about 4 months. Everything on deploy was working fine before, but now I'm getting an error when trying to deploy an update.
Failed to pull Docker image amazon/aws-eb-python:3.4.2-onbuild-3.5.1: Pulling repository amazon/aws-eb-python time="2016-01-17T01:40:45Z" level="fatal" msg="Could not reach any registry endpoint" . Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
In the eb-activity log, it further states [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/03build.sh] : Activity execution failed, because: Pulling repository amazon/aws-eb-python before repeating what was shown in the UI.
The original environment was the preconfigured Docker 64bit Debian jessie v1.3.1 platform running Python 3.4. I've tried upgrading to the latest platform version, 2.0.6, but the deploy never completes (no need to get into the specifics of that error; it's a separate issue, and I'd like to stay on 1.3.1 if possible). I've also tried upgrading to the latest 1.x, but it gives the same result as upgrading to 2.0.6.
Any ideas, or anything else I should be looking for clues?
Docker Hub has deprecated pulls from Docker clients on version 1.5 and earlier. Make sure that your Docker client version is newer than 1.5. See https://blog.docker.com/2015/10/docker-hub-deprecation-1-5/ for more information.

Upgrading GKE Kubernetes node version to v1.1.1, master still at v1.0.7 despite conflicting release notes

We're trying to take advantage of some of the new features in the v1.1.1 Kubernetes release by upgrading our cluster running on Google Container Engine.
In the release notes, Google states that cluster masters are currently running v1.1.1. However, when trying to upgrade our existing cluster nodes (following the cluster upgrade docs), I get the following trace:
Failed to start node upgrade: Desired node version (1.1.1) cannot be greater than current master version (1.0.7)
This is confirmed by running kubectl version:
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.1", GitCommit:"92635e23dfafb2ddc828c8ac6c03c7a7205a84d8", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.7", GitCommit:"6234d6a0abd3323cd08c52602e4a91e47fc9491c", GitTreeState:"clean"}
All the while, the gcloud console reports a cluster API version of 1.0.6.
Are the master upgrades still in progress for existing clusters? Does a timeline exist for that? Earlier release notes mention a one-week runway for existing cluster version upgrades; we've just surpassed that window from the release date of v1.1.1.
The release notes state that "Kubernetes v1.1.1 is the default version for new clusters" (emphasis added). Existing clusters will be upgraded from 1.0 to 1.1 in the coming weeks. If you want to take advantage of the 1.1 features immediately you can create a new cluster at 1.1 or contact us on the #google-containers channel on Slack to ask for your cluster to be upgraded sooner.