ArgoCD doesn't sync correctly to OCI Helm chart? - argocd

Situation:
My ArgoCD app is created and synced to a Helm chart in an Azure OCI registry (e.g. chart version 1.0.0).
The first synchronization was successful.
But after I delete Helm chart version 1.0.0 from the Azure OCI registry (ACR), ArgoCD still shows the sync as successful when I click the Sync button. (I expected a sync error to happen.)
Question:
Does ArgoCD cache Helm chart v1.0.0 and not pull it again when I click the Sync button? Is that right?
If I push changes that overwrite Helm chart version 1.0.0 in the OCI registry, how can I force ArgoCD to pull the chart again?

In ArgoCD, do a hard refresh (the dropdown on the Refresh button). This should detect the change.
In the meantime, this feature request has been opened: https://github.com/argoproj/argo-cd/issues/9499
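A hard refresh can also be triggered without the UI, either from the argocd CLI or by annotating the Application resource. A sketch, assuming the app is named my-argocd-app and lives in the argocd namespace (both placeholders):

```shell
# Option 1: hard refresh from the CLI (bypasses the cached manifests)
argocd app get my-argocd-app --hard-refresh

# Option 2: annotate the Application resource; ArgoCD performs a hard
# refresh and then removes the annotation
kubectl -n argocd annotate application my-argocd-app \
  argocd.argoproj.io/refresh=hard --overwrite
```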

Related

Wso2 mi dashboard create docker image

I am able to run my MI dashboard (I'm using v4.0.1.17, as it allows me to replace the default H2 database) successfully on Windows. I now want to create a Docker image for the same version, but I cannot find its release in the official GitHub repositories. I tried in vain to build an image for the provided version 4.0.1.2 and then replaced the zipped directory with my own 4.0.1.17 one, but it didn't work. Any suggestions would be welcome. Thanks.
You can get the updated Docker image for the MI Dashboard from the WSO2 Private Docker Registry. The wso2mi-dashboard page contains a list of all the Docker images released with the update tag.
In your case, to pull v4.0.1.17, you can use the following command:
docker pull docker.wso2.com/wso2mi-dashboard:4.0.1.17
Note that you need to log in to docker.wso2.com with your WSO2 subscription credentials before pulling the image:
docker login docker.wso2.com

Update the Web-Front-End of a Google Kubernetes Engine Webapplication

I used this tutorial, https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook?hl=de, to build a guestbook with Google Kubernetes Engine.
I applied it and everything works.
Now I want to change the index.html for a better look.
How can I upload or update the changed file?
I tried to apply the frontend service again with
kubectl apply -f frontend-service.yaml
but it did not work.
You will have to rebuild the containers if you make changes to the source code. I suggest you:
Install Docker and run docker build to rebuild the containers locally.
Push the containers to your own Artifact Registry (AR) following this guide: https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling.
Update the YAML files to point to your own AR.
Redeploy the application to GKE.
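Concretely, the steps above might look like the following. Everything here is a placeholder (REGION, PROJECT_ID, the repository and image names, and especially the container name inside the Deployment, which must match what the guestbook manifests actually use):

```shell
# 1. Rebuild the frontend image with your modified index.html
docker build -t REGION-docker.pkg.dev/PROJECT_ID/my-repo/guestbook-frontend:v2 .

# 2. Authenticate Docker against Artifact Registry and push the image
gcloud auth configure-docker REGION-docker.pkg.dev
docker push REGION-docker.pkg.dev/PROJECT_ID/my-repo/guestbook-frontend:v2

# 3. Point the frontend Deployment at the new image and wait for the rollout
#    ("frontend" and the container name are assumptions -- check your YAML)
kubectl set image deployment/frontend \
  frontend=REGION-docker.pkg.dev/PROJECT_ID/my-repo/guestbook-frontend:v2
kubectl rollout status deployment/frontend
```

Note that re-applying frontend-service.yaml alone cannot pick up source changes: the Service only routes traffic, while the HTML is baked into the container image.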

Set up customized Jwt Grant Handler when using Helm installation of WSO2 Api Manager 3.x

I have developed a custom JWTBearerGrantHandler, packaged as a JAR. In a bare-metal WSO2 deployment I would add that JAR to repository/components/lib and set the relevant class in repository/resources/conf/default.json:
"oauth.grant_type.jwt_bearer.grant_handler": "xxx.MyJWTBearerGrantHandler",
However I want to deploy WSO2 API Manager in Kubernetes using the provided Helm chart (https://github.com/wso2/kubernetes-apim/tree/master/advanced/am-pattern-1). In this case, how can I add my custom handler?
Both changes, adding the JAR and the configuration (repository/resources/conf/default.json), can be done by building a custom Docker image. In this case, the base image would be the WSO2-provided Docker image for WSO2 API Manager, and you can use the Docker COPY instruction in a Dockerfile to add the JAR and the configuration file to the image. Once the image is built and pushed to a private registry, refer to it via values.yaml.
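A minimal Dockerfile sketch for this approach; the base-image tag, the JAR filename, and the in-container paths are assumptions and should be verified against the WSO2 API Manager image you actually use:

```dockerfile
# Sketch only -- verify the tag and paths against your wso2/wso2am image.
FROM wso2/wso2am:3.2.0

# Custom JWT grant handler JAR into the component libs
COPY my-jwt-grant-handler.jar \
     /home/wso2carbon/wso2am-3.2.0/repository/components/lib/

# Modified configuration defaults
COPY default.json \
     /home/wso2carbon/wso2am-3.2.0/repository/resources/conf/default.json
```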
Another way of adding the JAR file, assuming it is remotely accessible, is to download it using an init container. Have a look at how the MySQL connector is downloaded using an init container.
If the given configuration changes often, it is best to apply it via a Kubernetes ConfigMap. To add repository/resources/conf/default.json as a ConfigMap, have a look at an existing configuration mount.
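The ConfigMap approach might look roughly like this. The ConfigMap name, the mount path, and the APIM version embedded in that path are assumptions, not values taken from the chart; align them with the chart's existing configuration mounts:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: apim-conf-defaults
data:
  default.json: |
    {
      "oauth.grant_type.jwt_bearer.grant_handler": "xxx.MyJWTBearerGrantHandler"
    }
---
# Fragment of the API Manager Deployment pod spec (illustrative):
# volumeMounts:
# - name: conf-defaults
#   mountPath: /home/wso2carbon/wso2am-3.2.0/repository/resources/conf/default.json
#   subPath: default.json
# volumes:
# - name: conf-defaults
#   configMap:
#     name: apim-conf-defaults
```

Using subPath lets you overlay a single file without hiding the rest of the conf directory.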

Migrate from Helm to Istioctl

I'm running Istio 1.3.5 on my Kubernetes cluster. I installed it using Helm, but this method will be deprecated in the future, so I'd like to migrate to istioctl.
Is there a way to silently migrate my current Istio deployment from Helm to istioctl?
I read something about istioctl manifest migrate, but it's not very clear.
I also read that I need to upgrade to 1.4.3 before upgrading to 1.5.x, so I'd like to take this opportunity to switch to the istioctl installation mode.
Thank you for your help.
Unfortunately, there is not yet a migration path from Helm to istioctl.
There is an issue on GitHub about exactly that:
"There is not yet a migration path from helm to istioctl, but it will certainly exist in 1.6, which is what this issue is tracking. You can go directly from 1.4 to 1.6 if desired once that is in place. Sorry about some of the confusion, as we didn't do a great job around this."
So waiting a little longer might be the easiest solution, as the official migration path will most likely come with better support and documentation.
As you mentioned, it is possible to manually migrate Istio from Helm to istioctl after upgrading with Helm first. However, this is not a simple task.
Hope it helps.

How to upgrade the Kiali (v0.14) bundled with Istio v1.1.1 to v0.17?

I've installed Istio v1.1.1, and Kiali v0.14 is up and running. Per the Kiali documentation, Spring Boot monitoring is available since v0.16. I'm trying to upgrade Kiali to the latest version (v0.17) by editing Kiali's Deployment (simply changing image: docker.io/kiali/kiali:v0.14 to v0.17). When I try to log in again, the web UI complains "The Kiali secret is missing". See the screenshot below. Actually, the secret is already there.
Have you tried the 0.17 deployment descriptor too?
https://github.com/kiali/kiali/blob/v0.17.0/deploy/kubernetes/deployment.yaml
Note that the configuration may change between versions, so just upgrading the image might not be enough.
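Applying that descriptor from the command line might look like this. This is only a sketch: the raw URL is derived from the link above, the descriptor at that tag is a template with ${...} placeholders, and the variable names below are assumptions; check the file for the exact placeholders it expects before applying.

```shell
# Fetch the v0.17 deployment descriptor (linked above)
curl -sL -o kiali-deployment.yaml \
  https://raw.githubusercontent.com/kiali/kiali/v0.17.0/deploy/kubernetes/deployment.yaml

# Fill in the template variables with envsubst and apply to the Istio namespace.
# Variable names are assumptions -- verify them against the downloaded file.
IMAGE_NAME=kiali/kiali IMAGE_VERSION=v0.17.0 NAMESPACE=istio-system \
VERSION_LABEL=v0.17.0 \
  envsubst < kiali-deployment.yaml | kubectl apply -n istio-system -f -
```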