Is a manager node in an ESB cluster no longer supported? - wso2

As far as I can see, the current version of ESB clustering doesn't include a configuration with separate worker and manager nodes - only workers (https://docs.wso2.com/display/EI640/Clustering+the+ESB+Profile).
Is the manager node no longer supported?

IMHO it is not needed. Both nodes act as both manager and worker (so it is the "worker-only node" that is no longer used).
The trick is to deploy artifacts to both nodes, see https://docs.wso2.com/display/EI640/Clustering+the+ESB+Profile#ClusteringtheESBProfile-Deployingartifactsacrossthenodes
I personally prefer using NFS; for stable environments you can deploy the artifacts using a CI/CD tool (a rough sketch of the NFS option follows).
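A minimal sketch of the NFS option, assuming a Carbon-style deployment directory; the NFS server name and export path are hypothetical and should be adjusted to your installation. Each node mounts the shared directory over its own deployment folder, so an artifact copied once is picked up by both nodes:

# /etc/fstab entry on each ESB node (server name and paths are hypothetical)
nfs-server:/exports/esb-artifacts  /opt/wso2ei/repository/deployment/server  nfs  defaults,_netdev  0  0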

Related

Am I fully utilizing my EMR cluster?

Total instances: I have created an EMR cluster with 11 nodes total (1 master instance, 10 core instances).
Job submission: spark-submit myApplication.py
Graph of containers: Next, I've got these graphs, which refer to "containers", and I'm not entirely sure what containers are in the context of EMR, so it isn't obvious what they're telling me:
Actual running executors: Then I've got this in my Spark history UI, which shows that only 4 executors were ever created.
Dynamic Allocation: Then I've got spark.dynamicAllocation.enabled=True and I can see that in my environment details.
Executor memory: Also, the default executor memory is at 5120M.
Executors: Next, I've got my executors tab, showing that I've got what looks like 3 active and 1 dead executor:
So, at face value, it appears to me that I'm not using all my nodes or available memory.
How do I know if I'm using all the resources I have available?
If I'm not, how do I change what I'm doing so that the available resources are used to their full potential?
Another way to see how many resources are being used by each node of the cluster is the Ganglia web tool.
It is published on the master node and shows a graph of each node's resource usage. The catch is that Ganglia must have been enabled as one of the available tools at the time of cluster creation.
Once enabled, however, you can go to its web page and see how much each node is being utilized.
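On the tuning side of the question, here is a hedged sketch of requesting resources explicitly at submission time instead of relying on dynamic allocation alone; the numbers are purely illustrative and depend on the instance types in the cluster:

spark-submit \
  --conf spark.dynamicAllocation.enabled=false \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 5g \
  myApplication.py

With dynamic allocation left enabled instead, Spark decides the executor count itself, which can explain seeing only a handful of executors on a lightly loaded job.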

How to analyze and fix heap dump problems for hazelcast in WSO2 API Manager 1.10?

We have a problem with our setup of WSO2 API Manager 1.10.0. We're using a clustered setup with 3 gateway worker nodes and a manager node, plus separate store, publisher & key manager nodes (we recently upgraded from v1.8.0 to 1.10.0).
After the upgrade, every ~2 weeks, all our worker nodes (and sometimes other nodes) heap-dump and crash (pretty much at the same time).
Analyzing the heap dumps reveals:
28,509 instances of "com.hazelcast.nio.tcp.WriteHandler", loaded by "hazelcast" occupy 945,814,400 (44.42%) bytes
28,509 instances of "com.hazelcast.nio.tcp.ReadHandler", loaded by "hazelcast" occupy 940,796,960 (44.18%) bytes
with thread:
com.hazelcast.nio.tcp.iobalancer.IOBalancerThread # 0x7877eb628 hz.wso2.pubstore.domain.instance.IOBalancerThread Thread
We've not been able to find a remedy. The logs tell us nothing other than the nodes getting an OOM exception. This happens on nodes with very little traffic and on nodes with very high traffic (different environments show the same behavior).
Has anyone come across similar behavior? Any ideas for going forward?
This did indeed turn out to be a memory-leak issue with Hazelcast. After upgrading to a later version, this problem stopped.
In order to upgrade Hazelcast, there's a bit of "trickery" to be done.
1) Download the WSO2 GitHub repo (or simply the pom-file) for your specific Hazelcast version here: https://github.com/wso2/orbit/tree/master/hazelcast
2) Change the Hazelcast version in this section of the POM (to your preferred version):
<properties>
<version.hazelcast>3.7.2</version.hazelcast>
</properties>
3) Build the package.
4) Deploy the built package as a patch to your server (a command sketch follows below).
This is a "work-around", as it's only possible to patch components with the same name as the ones already shipped with the product.

DC/OS service development with Akka

First of all, I'm new to DC/OS ...
I installed DC/OS locally with Vagrant and everything worked fine. Then I installed Cassandra and Spark, and I think I understand the container concept with Docker, so far so good.
Now it's time to develop an Akka service, and I'm a little bit confused about how I should start. The Akka service should simply offer an HTTP REST endpoint and store some data to Cassandra.
So I have my DC/OS ready, and Eclipse in front of me. Now I would like to develop the Akka service and connect to Cassandra from outside DC/OS - how can I do that? Is this the wrong approach? Should I install Cassandra separately and only deploy to DC/OS once I'm ready?
Because it was so simple to install Cassandra, Spark and all the rest, I would like to use it for development as well.
While slightly outdated (since it's using DC/OS 1.7 and you should really be using 1.8 these days), there's a very nice tutorial from codecentric that should contain everything you need to get started:
It walks you through setting up DC/OS, Cassandra, Kafka, and Spark
It shows how to use Akka reactive streams and the reactive kafka extension to ingest data from Twitter into Kafka
It shows how to use Spark to ingest data into Cassandra
Another great walkthrough resource is available via Cake Solutions:
It walks you through setting up DC/OS, Cassandra, Kafka, and Marathon-LB (a load balancer)
It explains service discovery for Akka
It shows how to expose a service via Marathon-LB
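To make the starting point a bit more concrete, here is a minimal, hedged sketch of an Akka HTTP service that writes to Cassandra, assuming akka-http and the DataStax Java driver are on the classpath; the contact point, keyspace, and table are hypothetical (a local Cassandra while developing, or the address exposed by the DC/OS Cassandra package once deployed):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer
import com.datastax.driver.core.Cluster

object DataService extends App {
  implicit val system = ActorSystem("data-service")
  implicit val materializer = ActorMaterializer()

  // Hypothetical contact point and keyspace; point this at your Cassandra node.
  val session = Cluster.builder()
    .addContactPoint("127.0.0.1")
    .build()
    .connect("demo_keyspace")

  // POST /data/<id> stores the request body under the given id.
  val route =
    path("data" / Segment) { id =>
      post {
        entity(as[String]) { payload =>
          // Kept synchronous for brevity; executeAsync would be preferable in real code.
          session.execute("INSERT INTO events (id, payload) VALUES (?, ?)", id, payload)
          complete("stored")
        }
      }
    }

  Http().bindAndHandle(route, "0.0.0.0", 8080)
}

Developing against a local or standalone Cassandra first and deploying to DC/OS via Marathon once the service works is a perfectly reasonable workflow; only the contact point changes.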

WSO2 AS (5.3.0) - "Deployment Synchronizer" page not found

I followed the official guide to set up a cluster (Clustering AS 5.3.0) (https://docs.wso2.com/display/CLUSTER44x/Clustering+AS+5.3.0)
and also configured the SVN-based Deployment Synchronizer,
but I cannot find the "Deployment Synchronizer" page in the management console (https://localhost:9443/carbon/deployment-sync/index.jsp):
<DeploymentSynchronizer>
<Enabled>true</Enabled>
<AutoCommit>true</AutoCommit>
<AutoCheckout>true</AutoCheckout>
<RepositoryType>svn</RepositoryType>
<SvnUrl>https://10.13.46.34:8443/svn/repos/as</SvnUrl>
<SvnUser>svnuser</SvnUser>
<SvnPassword>svnuser-password</SvnPassword>
<SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
So, can anyone tell me how to find the deployment-sync page?
Similar page (WSO2 4.1.2):
Thanks!
That page no longer exists in any of the WSO2 products.
When you have a cluster of WSO2 products, the master node notifies the worker nodes through a cluster message when there is a change in artifacts, so that the worker nodes sync up the changes through the SVN-based deployment synchronizer.
You can also achieve this with a cron job; you don't really need the SVN-based Deployment Synchronizer (a sketch follows below).
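A hedged example of the cron-job approach on a worker node: check out the deployment directory from SVN once, then let cron pull changes periodically. The path, schedule, and log location are illustrative:

# crontab entry on each worker node: pull artifact changes every 5 minutes
*/5 * * * * svn update /opt/wso2as/repository/deployment/server >> /var/log/depsync.log 2>&1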

How to handle DB migration using AWS deployment tools

Amazon Web Services offers a number of continuous deployment and management tools such as Elastic Beanstalk, OpsWorks, CloudFormation and CodeDeploy, depending on your needs. The basic idea is to facilitate code deployment and upgrades with zero downtime. They also help manage best architectural practice using AWS resources.
For simplicity, let's assume a basic architecture where you have a two-tier structure: a collection of application servers behind a load balancer, and a persistence layer using a multi-AZ RDS DB.
The actual code upgrade across a fleet of instances (app servers) is easy to understand. As a very simplistic overview, the AWS service upgrades each node in turn, handing connections off so the instance in question is not being used.
However, I can't understand how DB upgrades are managed. Assume that we are going from version 1.0.0 to 2.0.0 of an application and that there is a requirement to change the DB structure. Normally you would use a script or a library like Flyway to perform the upgrade. However, if there is a fleet of servers to upgrade, there is a point where both 1.0.0 and 2.0.0 applications exist across the fleet, each requiring a different DB structure.
I need to understand how this is actually achieved (at a high level) to know what the best way/time of performing the DB migration is. I guess there are a couple of ways they could be achieving this, but I am struggling to see how they can do it and allow both 1.0.0 and 2.0.0 to persist data without loss.
Perhaps they migrate the DB structure with the first app-node upgrade and at the same time create a cached version of the 1.0.0 schema: users connected to the 1.0.0 app persist to the cached version of the DB, users connected to the 2.0.0 app persist to the newly migrated DB, and once all the app nodes are migrated, the cached data is merged into the DB.
It seems unlikely they can do this, as the merge would be pretty complex, but I can't see another way. Any pointers/help would be appreciated.
This is a common problem to encounter once your application infrastructure gets into multiple application nodes. In the olden days, you could take your application offline for "maintenance windows" during which you could:
Replace application with a "System Maintenance, back soon" page.
Perform database migrations (schema and/or data)
Deploy new application code
Put application back online
In 2015, and really for several years now, this approach has not been acceptable. Your users expect 24/7 operation, so there must be a better way. Of course there is: the answer is a series of patterns for Database Refactorings.
The basic concept to always keep in mind is to assume you have to maintain two concurrent versions of your application, and there can be no breaking changes between these two versions. This means that you have a current application (v1.0.0) in production and a new version (v2.0.0) that is scheduled to be deployed. Both of these versions must work against the same schema. Once v2.0.0 is fully deployed across all application servers, you can then develop v3.0.0, which completes any final database changes.
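As a hedged illustration of such a non-breaking change (table and column names are made up), a Flyway-style migration shipped with v2.0.0 only adds to the schema, and the destructive cleanup is deferred to a later release once no v1.0.0 node remains:

-- V2__add_customer_email.sql, shipped with v2.0.0 (v1.0.0 simply ignores the new column)
ALTER TABLE customer ADD COLUMN email VARCHAR(255) NULL;

-- V3__drop_customer_contact.sql, shipped with v3.0.0 once v2.0.0 is fully rolled out
ALTER TABLE customer DROP COLUMN contact;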