WSO2 APIM 2.0 deployment

I'm trying to understand the WSO2 APIM components and deployment scenarios, but the terminology around clustering vs. distributed deployments, profiles, and port offsets is confusing to me.
Basically I'd like to deploy a minimal 5-node setup where:
Node 1 (DMZ): Gateway (worker=true, right?) and Key Manager
Node 2 (DMZ): second Gateway node (as above) for Gateway & Key Manager
Node 3 (non-DMZ): Management Console, MySQL master
Node 4 (non-DMZ): Publisher UI, Traffic Manager, MySQL slave
Node 5 (DMZ): Store
Questions:
Should I use -DportOffset=0 on all nodes?
What -Dprofile=?? do I need to use on each of the 5 nodes?
The 2 gateway nodes will be load-balanced by an F5 load balancer for incoming API traffic. What port is used there, 9443 or 9763?
What ports need to be accessible on the DMZ hosts for this to work? I assume 3306, 9443, 9763, 8280, 8243, 7711, and 9999/11111 if JMX is required.
Please don't point me to the documentation, that's what is confusing me.

Running the Key Manager nodes and the Store node in the DMZ is not recommended, as they need database access. If you are using multi-tenancy, you cannot host the gateway worker nodes in the DMZ either, because they too need database access. What you can do is host those nodes in the LAN and place a reverse proxy in the DMZ to expose the endpoints of the Gateway and the Store. If you do not use multi-tenancy, then you can run the gateway worker nodes in the DMZ, as the databases are not used there.
As you are running multiple WSO2 servers on a single machine, you need to use port offsets to avoid conflicts. The default port offset is 0, so one WSO2 server can run with the default offset. For each additional server on the same machine you need port offset 1 or any other value that is not already in use. You can start the server by passing -DportOffset=1 at startup, but the better way is to set the offset value to 1 in repository/conf/carbon.xml so that you do not need to provide -DportOffset at startup.
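As a minimal sketch, assuming the standard startup script location and taking offset 1 as an example for a second server on the same host:
# Start the second server with every port shifted by 1
sh bin/wso2server.sh -DportOffset=1
# Equivalently, set <Offset>1</Offset> under the <Ports> element in
# repository/conf/carbon.xml so the flag is not needed at startup.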
-Dprofile denotes the profile with which API Manager should start. If you start with -Dprofile=api-publisher, it starts only the front-end/back-end features relevant to the API Publisher. Running product profiles is generally recommended, as each node then loads only the features relevant to its role. You can use profiles in your deployment, as you are running 6 different API Manager profiles across your nodes.
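As a rough sketch, assuming the standard APIM 2.0 product profile names (check them against your distribution before relying on this), the startup command per role could look like:
# Gateway worker role (the two DMZ gateway nodes)
sh bin/wso2server.sh -Dprofile=gateway-worker
# Key Manager role
sh bin/wso2server.sh -Dprofile=api-key-manager
# Publisher node
sh bin/wso2server.sh -Dprofile=api-publisher
# Store node
sh bin/wso2server.sh -Dprofile=api-store
# Traffic Manager role
sh bin/wso2server.sh -Dprofile=traffic-manager
# Management/console node: start without -Dprofile to load the full default profile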
I think you are referring to the gateway worker nodes which serve API traffic. If so, they use the pass-through transport ports, 8280 (HTTP) and 8243 (HTTPS); requests can be served over both. 9443 and 9763 are the servlet ports; those are not used on gateway worker nodes, only on the gateway manager node for service calls.
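For example, an API call arriving through the F5 VIP would be balanced onto the gateways' pass-through HTTPS port (the hostname, API context, and token below are placeholders; the F5 virtual server itself would typically listen on 443 and forward to 8243 on the gateway nodes):
curl -k -H "Authorization: Bearer <access-token>" https://gw-node1.example.com:8243/myapi/1.0/resource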
My recommendation is that you revise this setup, as you are currently placing nodes that need database access in the DMZ.

Should I use -DportOffset=0 on all nodes?
It depends on how you set up those nodes. If several of these servers run on the same node (machine), you must use different port offsets, because all API Manager servers use the same default ports and there would otherwise be port conflicts.
What -Dprofile=?? do I need to use on each of the 5 nodes?
The -Dprofile flag selects which subset of API Manager features each node loads, so each of your five nodes starts only the components relevant to its role (see the profile discussion in the answer above).
The 2 gateway nodes will be load-balanced by an F5 load balancer for incoming api-traffic. What port is used there, 9443 or 9763?
For API request/response handling the gateway worker nodes listen on the pass-through ports, 8280 (HTTP) and 8243 (HTTPS), so the F5 should forward to those; 9443 and 9763 are the servlet (management) ports rather than the API traffic ports.
What ports need to be accessible on the DMZ hosts for this to work? I assume 3306, 9443, 9763, 8280, 8243, 7711, and 9999/11111 if JMX is required.
Yes, it's correct.
Further, you can contact WSO2 support for any issues you encounter.

Related

Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all the HTTP requests including user authentication, storing relevant user data and giving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in golang) which are both used by the "main" server to perform certain tasks but also have some public API's that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine with an apache2 in front which directs the requests. So the servers can be accessed via the apache2 by a user by their respective subdomains but they also perform some communication between each other. When doing so, currently we simply send the request to localhost:{PORT} since they all run on the same machine. They furthermore all utilize the same mysql-server running on that same machine.
We are now looking to get it more production-ready and are looking to deploy it to AWS. They are currently not containerized so a solution that requires containerization (ECS? K8s?) would most likely require more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Set up the routing of the requests. Currently we run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you create a VPC (a default VPC is created for your account automatically).
You can deploy a number of EC2 instances (virtual servers) with just private IP addresses and without any public access and put them behind an ELB (Elastic Load Balancer). The ELB will take all the traffic and distribute the load onto the servers based on endpoint.
The EC2 instances then won't have public IPs. A VPC (Virtual Private Cloud) allows your services to communicate with each other via private IPs (something like 172.31.xx.xx). You can also assign domain/sub-domain names to these private IP addresses using the Route53 service of AWS.
For example, you launch 2 servers:
Your Java application - on 172.31.1.1 (you name it xyz.myjavaapp.something.com on Route53)
Your Angular application - on 172.31.1.2
The Angular application can reach your Java application on 172.31.1.1:8080 or xyz.myjavaapp.something.com:8080
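As a hedged sketch (the hosted zone ID, record name, and IP are placeholders, and this assumes a private hosted zone associated with the VPC), the Route53 record for the Java application could be created like this:
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "xyz.myjavaapp.something.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{"Value": "172.31.1.1"}]
    }
  }]
}'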
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Yes, you can deploy an SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups that allow only your servers to access it, and do not leave it open to the public internet.
An example of a VPC-only security group entry is 172.31.0.0/16. This will allow only the servers in your VPC to connect to the RDS DB, given that your VPC subnet has the range 172.31.x.x.
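A hedged example of such a rule with the AWS CLI (the security group ID is a placeholder): it allows MySQL traffic into the RDS security group only from addresses inside the VPC range.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 172.31.0.0/16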
Q. Set up the routing of the requests. Currently we run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint routing.
For example, if you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, this can easily be managed by the load balancer's path-based routing, as in the sketch below.
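For illustration (the ARNs are placeholders), a path-based rule on an Application Load Balancer listener could look like this; requests matching /getData* are forwarded to one target group while everything else falls through to other rules or the default action:
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/0123456789abcdef/0123456789abcdef \
    --priority 10 \
    --conditions Field=path-pattern,Values='/getData*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/getData-servers/0123456789abcdef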
I would suggest you use at least two servers for critical services and load balance them behind an ELB for a production environment.
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. Yes, it has a learning curve, but the benefits outweigh it.
Feel free to ask questions.

How can 2 services talk to each other on AWS Fargate?

I setup a Fargate cluster on AWS. My cluster has the following services:
server-A (port 3000)
server-B (port 4000)
Each service is in the same VPC and have the same security group (any ports, any source, any destination). The VPC is isolated from internet.
Now, I want server-A to send a http query to server-B. I would assume that, as in Docker swarm, there is a private DNS that maps the service name to its private IP, and it would be as simple as sending the query to: http://server-B:4000. However, server-A gets a timeout, which means it can't reach server-B.
I've read in the documentation that I can put the 2 containers in the same service, each container listening on a different port, so that, thanks to the loopback interface, from server-A, I could query http://127.0.0.1:4000 and server-B will respond, and vice-versa.
However, I want to be able to scale server-A and server-B independently, so I think it makes sense to keep the servers independent of each other by having 2 services.
I've read that, for 2 tasks to talk to each other, I need to set up a load balancer. In Docker Swarm it was so easy to query services by their service name, with the request forwarded behind the scenes to one of the containers in that service. But it doesn't seem to work like that on AWS Fargate.
Questions:
how can server-A talk to server-B?
As services sometimes redeploy, their private IPs change, so it makes no sense to query by IP; querying by hostname seems the most natural way.
Do I need to setup any kind of internal DNS?
Thanks for your help, I am really lost on doing this simple setup.
After searching, I found out it was because I had not enabled "Service Discovery" during service creation, so no private DNS was created. Here is the documentation that explains the exact steps:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-discovery.html
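For reference, a hedged sketch of the same setup with the AWS CLI (all IDs, ARNs, and names are placeholders): create a private DNS namespace, create a Cloud Map service for server-B in it, and attach that registry when creating the ECS service; server-A can then resolve server-B by name.
aws servicediscovery create-private-dns-namespace --name local --vpc vpc-0123456789abcdef0
aws servicediscovery create-service --name server-B \
    --namespace-id ns-0123456789abcdef0 \
    --dns-config "DnsRecords=[{Type=A,TTL=60}]"
aws ecs create-service --cluster my-fargate-cluster --service-name server-B \
    --task-definition server-B:1 --desired-count 1 --launch-type FARGATE \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0]}" \
    --service-registries "registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-0123456789abcdef0"
# server-A can now reach server-B at http://server-B.local:4000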

How to create a scalable WebSocket application using AWS ELB?

I am developing a WebSocket application and I have doubts about how to make it scalable.
1- Should I use Nginx? And if so, where does nginx stand? It would be like this:
ELB -> Nginx -> Ec2 instances
or
Nginx -> ELB -> Ec2 instances
2- Is it necessary to use a service like Redis to make the communication between servers? Example: I am connected to server1 and my friend is connected to server2, but we are in the same room chat. If I send a message, it needs to reach my friend.
3 - Is it possible to have my ELB receive only HTTPS calls while the conversation with the backend happens over HTTP? I ask this because I use OpsWorks and it was very hard to normalize the cookbooks to create my environment.
Thank you.
Generally the architecture looks like:
ALB --> nginx1, nginx2 --> ALB --> EC2 WebSocket server1, server2
This allows your web servers and app servers to be load balanced independently of each other
Not necessarily. Redis is primarily an in-memory data store used for caching, although its pub/sub mechanism is a common way to relay messages between WebSocket servers in the multi-server chat scenario you describe.
Yes - you can terminate SSL on the ALB, and it is in fact recommended to do it this way in order to offload SSL processing onto the load balancer instead of doing it on the instances themselves. See https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html . An additional benefit is that you can use ACM to issue certificates for free and deploy them on the ALB; ACM can handle renewals for you automatically as well.
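For example (the ARNs are placeholders), an HTTPS listener that terminates TLS on the ALB with an ACM certificate and forwards the traffic to a target group (which can itself use plain HTTP towards the backend) could be created like this:
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/0123456789abcdef \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/websocket-servers/0123456789abcdef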

How to configure activemq replicatedLevelDB so instances connect to a specific port on the master/slave

I'm new to activemq replicatedLevelDB, so I might have assumed things wrong based on my limited understanding.
I'm setting up 3 ActiveMQ instances with ZooKeeper in AWS; ZooKeeper then determines which of the ActiveMQ instances is the master. ZooKeeper is deployed within a private subnet and ActiveMQ within a public subnet; there's no problem with ZooKeeper and ActiveMQ communication.
Question/Issue: I can't find where to configure which port these ActiveMQ instances should use to communicate with each other.
Why this is an issue: for security purposes I need to restrict the ports that are open on these ActiveMQ instances; I cannot simply allow all access coming from the public subnet.
Example of the port restrictions:
port 22 should be open for SSH access
the ZooKeeper client port (2181) should be open only for access coming from these ActiveMQ instances
port 8161 should be accessible from specific sources
I am using security groups to restrict these accesses in AWS. I tried allowing all ports within the public subnet, which lets the ActiveMQ instances see that the other instances are alive, and they were able to elect master/slaves. The port 45818 is not the same after every setup from scratch, so I assume it is random.
sample logs below
Promoted to master
Using the pure java LevelDB implementation.
Master started: tcp://**.*.*.**:45818
Once I removed that setup (allow all access), I got the log output below:
Not enough cluster members have reported their update positions yet.
org.apache.activemq.leveldb.replicated.MasterElector
If my understanding of the message above is right, it means the current ActiveMQ instance does not know about the existence of the other ActiveMQ instances. So I need to know how to configure the port these ActiveMQ instances use to check on each other, so that I can restrict/allow access to it.
Here is my ActiveMQ configuration, which points to the ZooKeeper addresses. Other configuration values are left at their defaults.
activemq version: 5.13.4
<persistenceAdapter>
<replicatedLevelDB directory="activemq-data"
replicas="3"
bind="tcp://0.0.0.0:0"
zkAddress="testzookeeperip1:2181,testzookeeperip2:2181,testzookeeperip3:2181"
hostname="testhostnameofactivemqinstance"
/>
</persistenceAdapter>
Should any information be lacking, I'll update this question ASAP. Thanks.
This is more a hint than a qualified answer, but it is too large for a comment.
You configured a dynamic port with bind="tcp://0.0.0.0:0". I haven't used a fixed port in this configuration setting, but the configuration doc says you can set one.
The bind port will be used for the replication protocol with the master, so obviously you cannot cut it off, but it should be OK to allow only the ZK machines to communicate there.
I have not analyzed the traffic between the brokers, but as I understand replicated LevelDB, ZK decides which broker is the active master, not the brokers themselves. So there should be no communication between the brokers on that port.
The external broker address is configured on the transportConnectors element in the <broker> section of the config file, but I guess you already have that covered.
I suggest you configure the bind attribute to a fixed port (for example bind="tcp://0.0.0.0:61619") and allow communication to that port from the ZK machines and, if required, from the cluster partners. Clients only need access to the transport connector ports. Allow communication to the ZKs and that should be it.
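If you do pin the bind attribute to a fixed port, the AWS security group can then be narrowed to exactly that port. A hedged example (the group ID and port are placeholders; the port must match whatever you put in bind=), allowing only members of the brokers' own security group to reach it, in case the cluster partners turn out to need access:
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaaaaaaaaaaaaa1 \
    --protocol tcp --port 61619 \
    --source-group sg-0aaaaaaaaaaaaaaa1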

Dynamic load balancing with ESB and DSS Clustering, WSO2

I want to make a cluster of Data Services Servers (DSS) and use an Enterprise Service Bus (ESB) as a load balancer. In this deployment, what is the purpose of having a manager DSS in the cluster, and if there is a manager, is it a single point of failure?
These are the references which I used for load balancing and DSS clustering:
Dynamic load balancing between 3 nodes
How to install WSO2 Carbon cluster management feature?
The dynamic load balancing mechanism in WSO2 ESB discovers the DSS members in an application group using a group communication framework and shares the load at runtime.
The load balancer is not bound or coupled to any cluster manager; it simply distributes the load among the nodes in the applicationDomain.
So, at runtime, the cluster manager does not create a single point of failure.
If you want, you can set up a DSS cluster even without a cluster manager and distribute the load among the nodes via the ESB.
The cluster manager is just a component installed to manage your cluster.
This is an extension to Prabath's answer.
DSS can be configured to work in a cluster, so that all DSS nodes act as members of a single cluster. This facilitates sharing sessions among the nodes.
Alternatively, you can have all DSS nodes running in isolation (using the same configuration), fronted by a load balancer (LB). Unlike the previous approach, this method does not support sharing sessions between DSS nodes and thus only supports stateless services.
WSO2 ESB can act as an LB, but having a single LB instance makes it a single point of failure (SPoF). The LB can be configured to run in a cluster as well.
I don't know what's behind the decision of using an ESB instead of an ELB for LB, but it's up to you which one to use.
The manager is not a single point of failure, it's just a way to manage the entire cluster from a single management console (with limitations), and can be configured to be a worker at the same time.
Regarding the LB layer, you can use keepalived to avoid having a SPoF in the ESB acting as the LB, the same way it's done for the WSO2 ELB.
Take a look at: Failover for ELB with keepalived