Infinispan 12.1.1.Final missing cache configuration in the admin console (Server Management Console) - admin

We installed Infinispan 12.1.1.Final following the official documentation:
https://infinispan.org/docs/dev/titles/server/server.html
After downloading and installing infinispan-server-${version}.zip,
the cluster (3 server nodes) is available, but the admin console is missing a lot of functionality:
Cache container configuration: it is not possible to configure all cache parameters
Missing cluster management (e.g. starting a new server)
Only available:
Data Container: parameter configuration is missing
Global Statistics
Cluster Membership => only visualization of the 3 nodes of the cluster
About
Does anyone know whether we need to enable some configuration flags, or install some plugins, to get complete management with all the features?
Thank you

Related

WSO2 API Manager NFS for Active-Active Deployment

I am trying to set up an Active-Active deployment for the WSO2 API Manager by following this URL: Configuring an Active-Active Deployment
Everything is working fine except step 5, where I am trying to set up NFS. I moved the /repository/deployment/server folder to another drive, e.g. to:
D:/WSO2AM/Deployment/server
so that both nodes can share the deployment folder.
Not knowing which config files to change to point the deployment folder to a location other than the default, I changed the "RepositoryLocation" element in carbon.xml and set it to D:/WSO2AM/Deployment/server, but it looks like that is not enough. When I start the server, I get the following error message:
FATAL - SynapseControllerFactory The synapse.xml location .\.\repository/deployment/server/synapse-configs\default doesn't exist
[2019-03-12 15:54:49,332] FATAL - ServiceBusInitializer Couldn't initialize the ESB...
org.apache.synapse.SynapseException: The synapse.xml location .\.\repository/deployment/server/synapse-configs\default doesn't exist
I would appreciate it if someone could help me set up NFS so that both nodes share the same deployment folder and I don't have to worry about syncing them through some other mechanism.
Thanks
After struggling for almost a day, I found the solution in a completely separate WSO2 thread:
Enable Artifact Synchronization
In that thread, they ask you to create an SMB share (for Windows) for the Deployment and Tenants directories; for APIM purposes, we need to create an SMB share for the /repository/deployment/server directory.
Creating the symbolic link is then a single command:
mklink /D <APIM_HOME>/repository/deployment/server D:\WSO2\Shared\deployment\server
We need to create the symlink on both nodes, pointing to the same shared location.
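On POSIX nodes the same idea applies with a filesystem symlink. Here is a minimal, hypothetical sketch using Python's standard library; the temp-directory paths are placeholders standing in for <APIM_HOME>/repository/deployment/server and the shared mount point:

```python
import os
import tempfile

# Placeholder paths for illustration only; in a real deployment these would
# be <APIM_HOME>/repository/deployment and the shared (NFS/SMB) mount point.
apim_home = tempfile.mkdtemp(prefix="apim-home-")
shared = tempfile.mkdtemp(prefix="wso2-shared-")

os.makedirs(os.path.join(apim_home, "repository", "deployment"))
link = os.path.join(apim_home, "repository", "deployment", "server")

# Point the local server directory at the shared one (same idea as mklink /D).
os.symlink(shared, link)

print(os.path.islink(link))          # True
print(os.readlink(link) == shared)   # True
```

The equivalent one-liner on Linux would be `ln -s <shared-path> <APIM_HOME>/repository/deployment/server`, run on each node.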
Once done, no configuration changes are needed on the APIM side. It works by default, and you have the following scenario configured.

Set up Auto Scaling Apps

Is it possible to setup auto-scaling capabilities for an app depending on the workload?
I haven't found anything useful in the Developer Console or in the docs. Is there perhaps a hidden option via the CLI?
Just wondering if this is possible as I'm doing a basic evaluation on Swisscom Application Cloud.
There are several open-source autoscaling projects, of varying production readiness, such as:
https://github.com/cloudfoundry-incubator/app-autoscaler
https://github.com/cloudfoundry-samples/cf-autoscaler
Pivotal Cloud Foundry supports auto-scaling of the applications out of the box (http://docs.pivotal.io/pivotalcf/1-8/appsman-services/autoscaler/autoscale-configuration.html)
This capability is not present at the moment, and it is not part of the (open source) Cloud Foundry platform either. Some platforms provide it, but it has not been released to the community yet!
There are various ways you can do that.
As described by Anatoly, you can obviously use the "Auto Scaler" service, if it is deployed by your respective provider.
(You can figure that out by calling the feature-flags API check: https://apidocs.cloudfoundry.org/253/feature_flags/get_the_app_scaling_feature_flag.html)
Another option is writing your own small auto-scaler based on the custom scaling behaviours you define for your application (DIY ;)).
Get load: First you need information about the current "load" of your app (i.e. memory usage, CPU usage, etc.). You can easily pull that from the v2/apps//stats API. See details here:
https://apidocs.cloudfoundry.org/253/apps/get_detailed_stats_for_a_started_app.html
Write some magic: Now you need some logic around that to decide whether the app is under heavy load. It could be CPU, memory, or other bottlenecks you extract from the stats API.
Scale up/down: With the PUT v2/apps// API you can then change the number of instances of your app by setting the "instances" parameter accordingly.
https://apidocs.cloudfoundry.org/253/apps/updating_an_app.html
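The three steps above can be sketched roughly in Python. This is a DIY outline, not an official client: the endpoint paths follow the v2 API docs linked above, but the base URL, token handling, thresholds, and helper names are illustrative assumptions.

```python
import json
import urllib.request

API = "https://api.example.com"  # hypothetical CF API endpoint
TOKEN = "bearer <token>"         # e.g. taken from `cf oauth-token`
APP_GUID = "<app-guid>"

def get_json(path):
    """Step 1: GET a v2 API path and decode the JSON body."""
    req = urllib.request.Request(API + path, headers={"Authorization": TOKEN})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def average_cpu(stats):
    """Average CPU usage (0.0-1.0) across all reported instances."""
    usages = [s["stats"]["usage"]["cpu"] for s in stats.values()]
    return sum(usages) / len(usages)

def target_instances(cpu, current, low=0.2, high=0.8, min_n=1, max_n=5):
    """Step 2 ("magic"): step up when CPU is above `high`, down below `low`."""
    if cpu > high and current < max_n:
        return current + 1
    if cpu < low and current > min_n:
        return current - 1
    return current

def scale(app_guid, instances):
    """Step 3: PUT the new "instances" value to the app."""
    body = json.dumps({"instances": instances}).encode()
    req = urllib.request.Request(
        API + "/v2/apps/" + app_guid, data=body,
        headers={"Authorization": TOKEN, "Content-Type": "application/json"},
        method="PUT")
    urllib.request.urlopen(req)

# One iteration of the loop (commented out: it needs a live API and token):
# stats = get_json("/v2/apps/" + APP_GUID + "/stats")
# scale(APP_GUID, target_instances(average_cpu(stats), current=len(stats)))
```

In practice you would run such a loop on a schedule and add smoothing (e.g. averaging over several samples) so a single CPU spike does not trigger a scale-up.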
For PCF you can take a look at this https://github.com/Pivotal-Field-Engineering/autoscaling-cli-plugin. It should give you what you are looking for.
You will need to install it via
cf install-plugin https://github.com/pivotal-cf-experimental/autoscaling-cli-plugin
and configure it using steps similar to those below.
Get the details of the autoscaler from your marketplace
cf m | grep app-autoscaler
Create the autoscaler service instance using the service & plan from above
cf create-service <service> <plan> myAutoScaler
Bind the service to your app (or you can do this via your deployment manifest)
cf bind-service myApp myAutoScaler
Configure your scaling parameters
cf configure-autoscaling --min-threshold ## --max-threshold ## --max-instances # --min-instances # myApp myAutoScaler

WSO2 Minimized Deployment for GW Worker node

I would like to run WSO2 on two hosts, one serves as manager and the other as gateway worker.
I consulted the clustering guide and product profiles documentation, and I understand that after configuring the two hosts correctly, I can run the product with selected profile:
-Dprofile=gateway-manager on the manager node
-Dprofile=gateway-worker on the gateway worker node
In addition to this selective run, I would also like the gateway-worker to have the minimal possible deployment, i.e. to be installed with only the artifacts it really needs.
Three options I can think of, from best to worst:
Download a minimized deployment package - in case there is one? On the site I saw only the complete package, which contains artifacts for all the components. Are there other download options that contain selective artifacts per profile?
Download the complete package and then remove the artifacts which are not necessary for gateway-worker (how do I know which files/directories to remove?)
Download the source from github and run a selective build? (which components should I build and how do I package them for deployment)?
There are no separate product packs per profile to download, so option 1 is not available. But you can do option 2 to some extent. You can remove the Publisher, Store, and admin dashboard applications from the product by removing the 'jaggeryapps' folder in the 'wso2am-1.10.0/repository/deployment/server/' location. Other than that, we do not recommend removing any components from the pack.
You can check the profile generation code for API Manager 1.10 here. It only has module import definitions; these components need to be present for each profile.

How do I have to configure pacemaker for OpenShift Origin V3

I want to fake an enterprise environment with OpenShift Origin V3 to test some stuff.
I'm going to try the advanced installation with multiple masters, etcds and multiple nodes.
https://docs.openshift.org/latest/install_config/install/advanced_install.html
I already did the quick installation once (running OpenShift in a container) and I did the advanced installation a few times (one host which contains a master + a node, and some nodes).
First of all, I'm installing the whole environment on AWS EC2 instances with CentOS7 as OS. I have 2 masters (master1.example.com and master2.example.com) and 3 nodes (node1.example.com, node2.example.com, ...)
I want to separate my masters and nodes, so containers and images will only be on the nodes (i.e. no host containing both a master and a node).
My masters need to be HA, so they will use a virtual IP and Pacemaker. But how do I have to configure this? There are some tutorials on using Pacemaker with Apache, but nothing that describes configuring Pacemaker and a VIP for use with OpenShift.
Great news: I had to deal with Pacemaker as well, but Pacemaker is no longer the native HA method for OpenShift (as of v3.1), so we can get rid of the tedious pcs configuration and fencing tuning.
Now the Ansible installation playbooks take care of multi-master installation, with what is called the OpenShift native HA method. No additional setup is required for a standard configuration.
The OpenShift native HA method takes advantage of etcd to select the active leader every 30s (by default).
There is a method to migrate from Pacemaker to native HA
https://docs.openshift.com/enterprise/3.1/install_config/upgrading/pacemaker_to_native_ha.html#install-config-upgrading-pacemaker-to-native-ha

Activity Monitoring Toolbox missing from WSO2 BAM Server 2.4.0

The documentation of the WSO2 Business Activity Monitor version 2.4.0 refers to an Activity Monitoring Toolbox which is not present in my installation (default configuration on 64bit Linux with 64bit JVM v1.6.0_39).
Can I download and install the Activity Monitoring Toolbox from an external location?
Thanks,
Kai
From BAM 2.4.0 release onwards, the previous BAM activity monitoring components have been deprecated. They were replaced by a new implementation of activity search and monitoring with many more added features.
The following artifacts will no longer be shipped:
With BAM distribution: the activity monitoring sample and the activity monitoring toolboxes.
With BAM data agents: activity monitoring data agent which has so far been available under 'Service Data Publishing'
The newer activity search component has its own Jaggery app, which can be used to query data directly from Cassandra using indices, rather than using Hive scripts to summarise data. It will also be shipped with the BAM distribution by default, negating the need to install a dedicated toolbox.
The message tracer will replace the activity data publisher for dumping SOAP payloads to BAM. It will also serve in correlating messages based on ID.
Additional information can be found at: http://docs.wso2.org/display/BAM240/Activity+Monitoring+Dashboard
The Activity Monitoring toolbox is packaged and installed by default in BAM 2.4.0, so you do not have to install it again. Installing it into BAM 2.3.0 or an older version will not be easy, as the new Activity Monitoring toolbox in BAM 2.4.0 does not depend only on the deployable artifacts (stream definition, analytics script, and dashboard); it also needs some other jar files that live in the plugins directory.