WUM update of apim-kubernetes image - wso2

We have a requirement to use application grouping in the API Store (see my previous post, with WSO2 APIM 2.1.0). The client uses apim-kubernetes and they claim they have an active WSO2 subscription (WUM).
Question: do I need to rebuild the Docker images with the newest product updates, or is there a way to get already "wummed" images as well?

WUM-updated Docker images are pushed to the WSO2 Docker registry every week. If you have a valid subscription, you should be able to pull the Docker images with the WUM-updated product distribution.
http://docker.wso2.com/
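For example (a rough sketch only; the repository path and tag below are illustrative, so check the registry for the actual image names):
# Log in with your WSO2 subscription credentials and pull a WUM-updated image
docker login docker.wso2.com
docker pull docker.wso2.com/wso2am:2.1.0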
If there is an urgent fix (one that cannot wait for a week), you can run the WUM update and build the Docker image manually.
https://github.com/wso2/kubernetes-apim/blob/master/base/apim/Dockerfile
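A minimal sketch of that manual route, assuming the wum CLI is installed and the product name/version below match your setup:
# Update the local product pack with the latest fixes (requires a valid subscription)
wum update wso2am-2.1.0
# Build the image from the Dockerfile linked above, using the updated pack;
# check that Dockerfile for the exact build context and arguments it expects
docker build -t wso2am:2.1.0-wum .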

Related

In AWS App Runner, how does one roll back to an old revision

What are the ways to roll back an App Runner service to a previous revision?
We use a simple GitHub Actions workflow to build and upload a new Docker image to the ECR registry.
How can I select an old image in ECR and deploy that instead (rolling back the revision)?
I'm also interested in this issue. It seems that when you create an App Runner app with a specific image tag, you cannot choose which version to deploy. You have to amend the service and change the container image URI to point to the previous image.
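As a rough sketch of that amendment with the AWS CLI (the service ARN, repository, and tag are placeholders, and the JSON is trimmed to the image part):
# Point the existing App Runner service back at a previous image tag in ECR
aws apprunner update-service \
  --service-arn arn:aws:apprunner:us-east-1:123456789012:service/my-service/abc123 \
  --source-configuration '{"ImageRepository":{"ImageIdentifier":"123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:previous-tag","ImageRepositoryType":"ECR"}}'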

Cloud Run deployment using image from last revision

We need to apply labels to multiple Cloud Run services using the API method below:
https://cloud.google.com/run/docs/reference/rest/v1/namespaces.services/replaceService
We are looking for a way to apply labels through the API without deploying a new image from Container Registry. We understand that there will be a deployment and a revision change while applying labels, but during that deployment it should not pull a new image from Container Registry; it should use the image from the last revision. Is there any configuration parameter in Cloud Run to prevent new images from being pulled while applying labels through the API or gcloud run services update SERVICE --update-labels KEY=VALUE?
The principle of Cloud Run (and Knative, because the behavior is the same) is that a revision is immutable. Thus, if you change something in it, a new revision is created. You can't fake it!
So the solution is not to use the latest tag of your image, but its SHA.
# the latest
gcr.io/PROJECT_ID/myImage
gcr.io/PROJECT_ID/myImage:latest
# A specific version
gcr.io/PROJECT_ID/myImage:SHA123465465dfsqfsdf
Of course, you have to update your YAML for this.
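A minimal sketch of that idea with gcloud (service name, region, and digest are placeholders): read the image currently configured on the service, then apply the labels while explicitly pinning that image.
# Read the image the service is currently configured with
gcloud run services describe my-service --region us-central1 \
  --format="value(spec.template.spec.containers[0].image)"
# Apply the labels while pinning the image by digest instead of :latest,
# so the new revision keeps using exactly the same image
gcloud run services update my-service --region us-central1 \
  --image gcr.io/PROJECT_ID/myImage@sha256:DIGEST \
  --update-labels KEY=VALUE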

Google Cloud: Auto-update image used by a container after an auto build by trigger?

Is it possible to auto-update the image used by an already deployed container after an automatic build is triggered?
One solution I know of is to add a command in cloudbuild.yaml to restart the server.
Is there any better approach?
You will need to restart the container with an updated image, as you cannot modify the image while the container is already deployed. If you need the latest image version to be consistently deployed, then just omit a specific version number for the image; the client defaults to latest. More details here.
If this is not the issue, please specify and provide some examples.
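For example, if the container runs on GKE (an assumption, since the question does not say where it is deployed), the final Cloud Build step could run something like this to roll the deployment onto the freshly built image (names are placeholders):
# Re-point the deployment at the image that was just built
kubectl set image deployment/my-app my-container=gcr.io/PROJECT_ID/my-app:$COMMIT_SHA
# Or, if the deployment uses the :latest tag with imagePullPolicy: Always,
# a plain restart is enough to pull the new image
kubectl rollout restart deployment/my-app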

WSO2 AS (5.3.0) - "Deployment Synchronizer" page Not found

I followed the official guide to set up a cluster (Clustering AS 5.3.0, https://docs.wso2.com/display/CLUSTER44x/Clustering+AS+5.3.0) and also configured the SVN-based Deployment Synchronizer.
However, I cannot find the "Deployment Synchronizer" page in the management console (https://localhost:9443/carbon/deployment-sync/index.jsp).
<DeploymentSynchronizer>
  <Enabled>true</Enabled>
  <AutoCommit>true</AutoCommit>
  <AutoCheckout>true</AutoCheckout>
  <RepositoryType>svn</RepositoryType>
  <SvnUrl>https://10.13.46.34:8443/svn/repos/as</SvnUrl>
  <SvnUser>svnuser</SvnUser>
  <SvnPassword>svnuser-password</SvnPassword>
  <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
So, can anyone tell me how to find the deployment-sync page?
Similar page (WSO2 4.1.2):
Thanks!
That page no longer exists in any of the WSO2 products.
When you have a cluster of WSO2 products, the master node notifies the worker nodes through a cluster message when the artifacts change, so that the worker nodes sync up the changes through the SVN-based deployment synchronizer.
You can also achieve this with a cron job; you don't really need the SVN-based Deployment Synchronizer.
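A minimal sketch of that cron-based approach, assuming the worker nodes can reach the manager over SSH (host, paths, and schedule are placeholders):
# crontab entry on each worker node: pull the deployed artifacts from the manager every 5 minutes
*/5 * * * * rsync -az --delete manager-node:/opt/wso2as-5.3.0/repository/deployment/server/ /opt/wso2as-5.3.0/repository/deployment/server/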

WSO2 Minimized Deployment for GW Worker node

I would like to run WSO2 on two hosts: one serves as the manager and the other as the gateway worker.
I consulted the clustering guide and the product profiles documentation, and I understand that after configuring the two hosts correctly, I can run the product with the selected profile:
-Dprofile=gateway-manager on the manager node
-Dprofile=gateway-worker on the gateway worker node
In addition to performing a selective run, I would also like the gateway worker to have the smallest possible deployment, i.e. to install only the artifacts it really needs.
Three options I can think of, from best to worst:
Download a minimized deployment package, in case there is one? On the site I saw only the complete package, which contains the artifacts of all the components. Are there other download options that contain selective artifacts per profile?
Download the complete package and then remove the artifacts that are not necessary for the gateway worker (how do I know which files/directories to remove?)
Download the source from GitHub and run a selective build? (Which components should I build, and how do I package them for deployment?)
There are no separate product packs for each profile to download, so option 1 is out. But you can do option 2 to some extent: you can remove the Publisher, Store, and admin dashboard applications from the product by removing the 'jaggeryapps' folder in the 'wso2am-1.10.0/repository/deployment/server/' location. Other than that, we do not recommend removing any components from the pack.
You can check the profile generation code for API Manager 1.10 here. It only has module import definitions; these components need to be present for each profile.
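Put together, a trimmed gateway worker could be prepared roughly like this (paths match the 1.10.0 pack mentioned above; adjust them to your installation):
# Remove the Publisher/Store/admin dashboard web apps the gateway worker does not need
rm -rf wso2am-1.10.0/repository/deployment/server/jaggeryapps
# Start each node with its profile, as described in the question
./wso2am-1.10.0/bin/wso2server.sh -Dprofile=gateway-manager   # on the manager node
./wso2am-1.10.0/bin/wso2server.sh -Dprofile=gateway-worker    # on the worker node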