WSO2 APIM 2.5 Deployment Patterns - Pattern 2

As per the documentation https://docs.wso2.com/display/AM250/Deployment+Patterns#DeploymentPatterns-Pattern2 I am trying to set up WSO2 similar to Pattern 2. But from the documentation it is not quite clear what steps need to be followed.
Question:
1. How can I start the Store, Publisher and Traffic Manager on the same server?
Do I have to start them with a single startup script, or start them independently? When I try starting them independently I see there are conflicts. How do I resolve those conflicts?
Regards,
Deepak

Question: 1. How can I start the Store, Publisher and Traffic Manager on the same server? Do I have to start them with a single startup script or start them independently?
If you start the default WSO2 API Manager (without any profile parameter), it starts in "all-in-one" mode. This mode contains all features (Store, Publisher, Key Manager, Gateway, Traffic Manager, etc.). You don't have to do anything extra for it.
For the Key Manager and Gateway, you should specify the profile parameter, which disables the features that are non-essential for the specified profile.
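As a minimal sketch of the Pattern 2 startup (the profile names below are the ones used by APIM 2.x; verify them against the 2.5.0 product profiles documentation):
On the Store/Publisher/Traffic Manager node, start the server without a profile:
sh bin/wso2server.sh
On the Key Manager node:
sh bin/wso2server.sh -Dprofile=api-key-manager
On the Gateway worker node:
sh bin/wso2server.sh -Dprofile=gateway-worker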

Related

On AWS X-Ray, stop tracing a particular path on a Node API running in a docker container on ECS

I have a Node API running inside a docker container on ECS. I am using X-Ray to trace incoming requests.
I would like to instruct x-ray to not trace a particular API end-point on my Node API. Is this possible?
The API endpoint is:
/api/upload/directUploadConfirmation
I do not want that end-point traced via X-Ray.
What I have tried
I tried to create a sampling rule via the X-Ray console. I wanted to cheat by telling it to capture 0 requests per second for this particular URL. But that plan failed because it doesn't accept 0; the number has to be greater than or equal to 1.
EDIT: The CloudWatch console team has since fixed this in production. Thanks for finding this issue!
=============================================================
Just to confirm, are you using the X-Ray SDK for Node.js? If you are, feel free to open an issue there so people familiar with the SDK can help answer your question too!
Otherwise, I was able to create a sampling rule with 0 Reservoir Size and 0 Fixed Rate. Below is a picture showing how to get to the X-Ray console to create this.
The AWS documentation lists several sampling options that you can use, including URL Path. This option is not available in API Gateway, but should be available in ECS, which you mentioned you are using. In my image I am using that to filter out requests to the URL Path /foo/bar.
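As a rough sketch, a rule like that can also be created from the command line with the AWS CLI (the rule name is just a placeholder; the field values mirror the console settings described above):
aws xray create-sampling-rule --sampling-rule '{"RuleName": "SkipDirectUploadConfirmation", "ResourceARN": "*", "Priority": 1, "FixedRate": 0, "ReservoirSize": 0, "ServiceName": "*", "ServiceType": "*", "Host": "*", "HTTPMethod": "*", "URLPath": "/api/upload/directUploadConfirmation", "Version": 1}'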
But that plan failed because it doesn't accept 0, the number has to be greater than or equal to 1.
Can you please confirm where you saw it fail to accept 0? Based on that I can try to replicate your setup and see if I get the same issue.
Thanks!

WSO2 apim 3.0.0 M6, unable to start the gateway

For 3.0.0-M6, installing as per
https://docs.wso2.com/display/AM300/Installation+Guide
and then publishing the pet store api described at:
https://docs.wso2.com/display/AM300/Create+and+Publish+an+API
Then, when trying to start the gateway, this message is received:
ballerina: no bal files in the package: org/wso2/carbon/apimgt/gateway
I've seen an older post at
unable to start ballerina as gateway
where some developer suggested adding an environment variable to have the publisher copy data directly into the file structure of the gateway, but it doesn't describe how to set the environment variable.
Is this still a viable solution? Is there any point in installing and running the five processes locally and expecting deployment of APIs to work locally? It seems to me the milestone is still a few milestones away from proper testing on localhost.
The docs on this are still a bit sparse...
The documentation has been updated now on how to start the gateway [1].
[1] https://docs.wso2.com/display/AM300/Installation+Guide

Set up Auto Scaling Apps

Is it possible to set up auto-scaling capabilities for an app depending on the workload?
I haven't found anything useful either in the Developer Console or in the docs. Is there maybe a hidden possibility via the CLI?
Just wondering if this is possible as I'm doing a basic evaluation of Swisscom Application Cloud.
There are several open-source autoscaling projects of varying readiness for production use, such as:
https://github.com/cloudfoundry-incubator/app-autoscaler
https://github.com/cloudfoundry-samples/cf-autoscaler
Pivotal Cloud Foundry supports auto-scaling of the applications out of the box (http://docs.pivotal.io/pivotalcf/1-8/appsman-services/autoscaler/autoscale-configuration.html)
This capability is not present at the moment, and it is not part of the (open source) Cloud Foundry platform either. Some platforms provide it, but this has not been released to the community yet!
There are various ways you can do that.
As described by Anatoly, you can obviously use the "Auto Scaler" service, if it is deployed by your respective provider.
(You can figure that out by calling this feature-flags API check: https://apidocs.cloudfoundry.org/253/feature_flags/get_the_app_scaling_feature_flag.html)
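For example, with the cf CLI (assuming you are logged in; app_scaling is the flag name used by that API doc):
cf curl /v2/config/feature_flags/app_scaling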
Another option is actually writing your own small auto-scaler based on the custom scaling behaviour you need for your application (DIY ;)).
Get load:
First you need to get information about the current "load" of the app (i.e. memory usage, CPU usage, etc.). You can easily do that by pulling data from the /v2/apps/:guid/stats API. See details here:
https://apidocs.cloudfoundry.org/253/apps/get_detailed_stats_for_a_started_app.html
Write some magic:
Now you need to write some logic around that to check whether the app is under heavy load. It could be CPU, memory or other bottlenecks you try to get out of the stats API.
Scale up/down:
With the PUT /v2/apps/:guid API you can now easily change the number of instances of your app by setting the parameter "instances" accordingly (a small cf curl sketch of both calls follows below).
https://apidocs.cloudfoundry.org/253/apps/updating_an_app.html
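A minimal sketch of that get-load / scale cycle with cf curl (the app GUID placeholder can be looked up with cf app myApp --guid; the instance count 4 is just an example):
cf curl /v2/apps/<app-guid>/stats
cf curl /v2/apps/<app-guid> -X PUT -d '{"instances": 4}'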
For PCF you can take a look at this https://github.com/Pivotal-Field-Engineering/autoscaling-cli-plugin. It should give you what you are looking for.
You will need to install it via
cf install-plugin https://github.com/pivotal-cf-experimental/autoscaling-cli-plugin
and configure it using steps similar to those below:
Get the details of the autoscaler from your marketplace
cf m | grep app-autoscaler
Install the auto scaler plugin using service & plan from above
cf create-service <service> <plan> myAutoScaler
Bind the service to your app (or you can do this via your deployment manifest)
cf bind-service myApp myAutoScaler
Configure your scaling parameters
cf configure-autoscaling --min-threshold ## --max-threshold ## --max-instances # --min-instances # myApp myAutoScaler
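For example (the threshold percentages and instance counts here are purely illustrative; pick values that suit your app):
cf configure-autoscaling --min-threshold 30 --max-threshold 80 --max-instances 5 --min-instances 1 myApp myAutoScaler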

WSO2 Minimized Deployment for GW Worker node

I would like to run WSO2 on two hosts, one serves as manager and the other as gateway worker.
I consulted the clustering guide and product profiles documentation, and I understand that after configuring the two hosts correctly, I can run the product with the selected profile:
-Dprofile=gateway-manager on the manager node
-Dprofile=gateway-worker on the gateway worker node
In addition to performing a selective run, I would also like the gateway-worker to have the minimal possible deployment, i.e. to be installed with only the artifacts it really needs.
Three options I can think of, from best to worst:
Download a minimized deployment package - in case there is one? On the site I saw only the complete package, which contains artifacts of all the components. Are there other download options which contain selective artifacts per profile?
Download the complete package and then remove the artifacts which are not necessary for gateway-worker (how do I know which files/directories to remove?)
Download the source from github and run a selective build? (which components should I build and how do I package them for deployment)?
There are no separate product packs for each profile to download, so option 1 is out. But you can do option 2 to some extent. You can remove the Publisher, Store and Admin Dashboard applications from the product by removing the 'jaggeryapps' folder in the 'wso2am-1.10.0/repository/deployment/server/' location. Other than that, we do not recommend removing any components from the pack.
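A minimal sketch of option 2 on the worker node (assuming the default wso2am-1.10.0 directory layout referred to above):
cd wso2am-1.10.0
rm -rf repository/deployment/server/jaggeryapps
sh bin/wso2server.sh -Dprofile=gateway-worker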
You can check the profile generation code for API Manager 1.10 here. It only has module import definitions. These components need to be there for each profile.

Hot Deployment with WSO2 ESB

Can I do a hot deployment in WSO2 ESB? As an example, I want to add a new service / new route without restarting the ESB, to minimize service interruption.
If possible, can you give an example?
If not possible, can I know whether it will be in future releases?
Hot deployment/hot update may take the system to inconsistent states if the updates are not properly coordinated. Therefore it is recommended to turn hot deployment and hot update off for production deployments.
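As a rough sketch, hot deployment and hot update are controlled in repository/conf/carbon.xml (element names as used by Carbon-based products; please verify against your ESB version). To turn both off for a production deployment:
sed -i 's|<HotDeployment>true</HotDeployment>|<HotDeployment>false</HotDeployment>|' repository/conf/carbon.xml
sed -i 's|<HotUpdate>true</HotUpdate>|<HotUpdate>false</HotUpdate>|' repository/conf/carbon.xml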
More Details here