I'm running Istio 1.3.5 on my Kubernetes cluster. I installed it using Helm, but this installation method will be deprecated in the future, so I'd like to migrate to istioctl.
Is there a way to "silently" migrate my current Istio deployment from Helm to istioctl?
I read something about istioctl manifest migrate but it's not very clear.
I also read that I need to upgrade to 1.4.3 before upgrading to 1.5.x, so I'd like to take this opportunity to switch to the istioctl installation method.
Thank you for your help.
Unfortunately, there is not yet a migration path from Helm to istioctl.
There is an issue on GitHub about exactly that:
There is not yet a migration path for helm to istioctl, but it will certainly exist in 1.6, which is what this issue is tracking. You can go directly from 1.4 - 1.6 if desired once that is in place. Sorry about some of the confusion, as [we] didn't do a great job around this.
So waiting a little longer might be the easiest solution, as an official migration path will most likely come with better support and documentation.
As you mentioned, it is possible to manually migrate Istio from Helm to istioctl after upgrading with Helm first. However, this is not a simple task.
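If you do decide to attempt the manual route, here is a minimal sketch of the idea, assuming Istio 1.4+ (where these subcommands exist) and that your current Helm values are saved in a hypothetical values.yaml:

# Translate the existing Helm values into the new IstioOperator format
istioctl manifest migrate values.yaml > operator-config.yaml

# Review the generated config carefully, then apply it over the existing installation
istioctl manifest apply -f operator-config.yaml

Treat this only as a starting point; diff the generated manifests against what Helm actually deployed before applying anything to a live cluster.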
Hope it helps.
Related
I am looking for a solution to upgrade Istio from version 1.7.3 to 1.13.1, along with an AKS cluster on version 1.22.6. I tried to follow the canary and in-place approaches, but the Istiod pod goes down during the upgrade.
URL: https://istio.io/latest/docs/setup/upgrade/canary/
Please help by providing the right approach, with a step-by-step procedure. Your help and support would be appreciated!
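For reference, the canary flow from the linked doc boils down to roughly the following sketch (the revision name and namespace are illustrative, and intermediate minor-version upgrades may be needed for a jump as large as 1.7 to 1.13):

# Install the new control plane as a separate revision alongside the old one
istioctl install --set revision=1-13-1 -y

# Point a namespace at the new revision, then restart its workloads
kubectl label namespace test-ns istio-injection- istio.io/rev=1-13-1 --overwrite
kubectl rollout restart deployment -n test-ns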
I am currently running Google Cloud Composer with Composer version 2.0.9 and Airflow version 2.1.4. I am trying to install the most recent version of dbt (1.0.4 for core and 1.0.0 for the BigQuery plugin). Because Cloud Composer images have specific packages preinstalled, I am getting conflicting PyPI dependency issues. When I try to fix one dependency, another issue occurs. Does anyone know the specific set of package versions that would resolve this issue? I have read the following posts by the community, but I wanted to know if anyone has a solution for using just Composer:
How to run DBT in airflow without copying our repo
How to set up dbt with Google Cloud Composer?
I was able to reproduce the behaviour you are seeing. Below are the dependency conflicts I saw in the Cloud Build logs. These conflicts are occurring between the dbt-core requirements and the pre-installed package requirements in Composer.
Pre-installed package requirements:
hologram 0.0.14 has requirement jsonschema<3.2,>=3.0, but you have jsonschema 3.2.0. ##=> can be installed manually
flask 1.1.4 has requirement click<8.0,>=5.1, but you have click 8.1.2.
apache-airflow 2.1.4+composer has requirement markupsafe<2.0,>=1.1.1, but you have markupsafe 2.0.1.
looker-sdk 22.4.0 has requirement typing-extensions>=4.1.1, but you have typing-extensions 3.10.0.2.
dbt-core requirements:
hologram 0.0.14 has requirement jsonschema<3.2,>=3.0, but you have jsonschema 3.2.0. ##=> can be installed manually
dbt-core 1.0.4 has requirement click<9,>=8, but you have click 7.1.2.
dbt-core 1.0.4 has requirement MarkupSafe==2.0.1, but you have markupsafe 1.1.1.
dbt-core 1.0.4 has requirement typing-extensions<3.11,>=3.7.4, but you have typing-extensions 4.1.1.
I tried downgrading the pre-installed packages, but subsequent package installations fail, and downgrading is not recommended anyway.
Therefore, I would suggest using an external solution, as stated in the thread you linked. Quoting the workarounds given in @Ryan Yuan's answer here:
Using external services to run dbt jobs, e.g. Cloud Run.
Using Composer's KubernetesPodOperator (updated Composer 2 link). My colleague has put up a nice article on dbt Discourse here going through the setup process.
Ignoring Composer's dependency conflicts by setting Composer's environment variable IGNORE_PYPI_DEPENDENCY_CONFLICTS to True (a sketch follows this list).
However, I don't recommend this as it may cause potential issues.
Creating a Python virtual environment in Composer and installing the dbt packages.
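For the environment-variable workaround mentioned above, a hedged sketch of how it might be set; the environment name and location are placeholders:

gcloud composer environments update my-composer-env \
    --location us-central1 \
    --update-env-variables IGNORE_PYPI_DEPENDENCY_CONFLICTS=True

Note that this only silences the conflict check; it does not resolve the underlying incompatibilities.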
As mentioned by @Kabilan Mohanraj, the current version of dbt (1.0.4) and a more recent version of Composer have dependency issues (Composer version 2.0.9 and Airflow version 2.1.4). Therefore an alternative solution is needed. In my case, I played around and searched for a solution from other people in the community, and found one person using a certain version of Composer and dbt that had only minimal dependency issues. However, as mentioned by @Kabilan Mohanraj, Google does not recommend downgrading preinstalled packages, so this would not be a viable solution for something in production.
Create the Composer environment through gcloud to use an older version that is not available via the Composer UI:

gcloud composer environments create my_airflow_dbt_example \
    --location us-central1 \
    --image-version composer-1.17.9-airflow-2.1.4

requirements:

dbt-bigquery==0.21.0
jsonschema==3.1.1
packaging==20.9
For this specific Composer version, you are downgrading jsonschema from 3.2.0 to 3.1.1 and packaging from 21.3 to 20.9.
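One hedged way to install those pinned requirements on the new environment, assuming they are saved in a local requirements.txt (the environment name and location are placeholders):

gcloud composer environments update my_airflow_dbt_example \
    --location us-central1 \
    --update-pypi-packages-from-file requirements.txt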
I am a scientist who is exploring the use of Dask on Amazon Web Services. I have some experience with Dask, but none with AWS. I have a few large custom task graphs to execute, and a few colleagues who may want to do the same if I can show them how. I believe that I should be using Kubernetes with Helm because I fall into the "Try out Dask for the first time on a cloud-based system like Amazon, Google, or Microsoft Azure" category.
I also fall into the "Dynamically create a personal and ephemeral deployment for interactive use" category. Should I be trying native Dask-Kubernetes instead of Helm? It seems simpler, but it's hard to judge the trade-offs.
In either case, how do you provide Dask workers a uniform environment that includes your own Python packages (not on any package index)? The solution I've found suggests that packages need to be on a pip or conda index.
Thanks for any help!
Use Helm or Dask-Kubernetes?
You can use either. Generally starting with Helm is simpler.
How to include custom packages
You can install custom software using pip or conda. The packages don't need to be on PyPI or the default Anaconda channel; you can point pip or conda at other sources. Here is an example installing software with pip from GitHub:
pip install git+https://github.com/username/repository#branch
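If you go the Helm route, a hedged sketch of one way to run that same install on every worker is the EXTRA_PIP_PACKAGES environment variable supported by the Dask helm chart's images; the release name and repository URL here are illustrative:

# values.yaml (hypothetical)
worker:
  env:
    - name: EXTRA_PIP_PACKAGES
      value: git+https://github.com/username/repository#branch

helm repo add dask https://helm.dask.org/
helm install my-dask dask/dask -f values.yaml

The package is then pip-installed into each worker container at startup, which keeps the environment uniform across workers.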
For small custom files you can also use the Client.upload_file method.
I thought this would be an easy topic to find on the web, but I couldn't find a solution.
I deployed the parse-server-example on AWS Elastic Beanstalk according to the original documentation, and it works perfectly. Can anyone give me a hint on how to update this server to the newest version? When I try to use parse-dashboard, I get the error "server version too low".
I have already cloned the parse server with the EB CLI, but I do not know how, or which files, to update.
Thanks for any hint!
In package.json, you update the version next to 'parse-server'. I think by default this is '~2.0'?
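For example, a hedged sketch of the relevant fragment of package.json (the exact version pin is up to you; note the dashboard requirement mentioned next):

"dependencies": {
  "parse-server": "~2.1.4"
}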
Parse Dashboard requires Parse-Server to be '>=2.1.4'. However, I'm currently having issues when changing the parse-server version: it breaks my AWS server instance. I have an issue open on GitHub (https://github.com/ParsePlatform/parse-server-example/issues/109#issuecomment-198001722), so keep an eye on that.
But yeah, that's where you update your Parse-Server version, I believe!
Once you've done this locally on your machine, you obviously need to deploy the updates to AWS via the Beanstalk Dashboard, as this will install/update any node modules from package.json.
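Since you already cloned the app with the EB CLI, deploying the updated package.json from the command line should also work; a hedged one-liner, assuming your EB CLI is already configured for the environment:

eb deploy

Elastic Beanstalk reinstalls the node modules from package.json as part of the deployment.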
I'm working on a project that I haven't touched in about 4 months. Before, everything on deploy was working fine, but now I'm getting an error when trying to deploy an update.
Failed to pull Docker image amazon/aws-eb-python:3.4.2-onbuild-3.5.1: Pulling repository amazon/aws-eb-python time="2016-01-17T01:40:45Z" level="fatal" msg="Could not reach any registry endpoint" . Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
In the eb-activity log, it further states [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/03build.sh] : Activity execution failed, because: Pulling repository amazon/aws-eb-python before repeating what was shown in the UI.
The original was using a Preconfigured Docker 64bit Debian jessie v1.3.1 running Python 3.4. I've tried upgrading to the latest, which is version 2.0.6, but it never completes (no need to get into the specifics of that error; it's a separate issue, and I'd like to stay on 1.3.1 if possible). I've also tried upgrading to the latest 1.x, but it has the same result as upgrading to 2.0.6.
Any ideas, or anything else I should be looking at for clues?
Docker Hub has deprecated pulls from Docker clients on version 1.5 and earlier. Make sure that your Docker client version is above 1.5. See https://blog.docker.com/2015/10/docker-hub-deprecation-1-5/ for more information.
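To confirm which client version the Beanstalk instance is actually running, a quick hedged check (assuming you can reach the instance with the EB CLI):

# SSH into the instance, then print the Docker client/server versions
eb ssh
docker version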