Passthrough ports in an ESB cluster - wso2

From the carbon docs
Non-blocking HTTP/S transport ports: Used to accept message mediation requests. If you want to send a request to an API or a proxy service, for example, you must use these ports. They are configured in the {ESB_HOME}/repository/conf/axis2/axis2.xml file.
8243 - Passthrough or NIO HTTPS transport
8280 - Passthrough or NIO HTTP transport
But in a cluster scenario with 1 MGR and 2 WRK nodes, where am I supposed to send a request?
To the MGR?
To one of the WRK nodes?
According to the documentation those ports are not load balanced.
Thanks to anyone who can clarify.

Proxy and API requests are served by the worker nodes. Manager nodes are there only to access the UI and deploy artifacts.
If you have 2 worker nodes, you can/should have a load balancer in front of them.
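As an illustration, here is a minimal sketch of fronting the two workers with Nginx (any load balancer works; the worker hostnames are placeholders and 8280 is the passthrough HTTP port):

# Hypothetical Nginx front end for the two ESB worker nodes.
upstream esb_workers {
    server esb-worker1.example.com:8280;
    server esb-worker2.example.com:8280;
}

server {
    listen 8280;

    location / {
        # Proxy service and API requests are distributed across the workers.
        proxy_pass http://esb_workers;
    }
}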

Related

Internal communication among services with app mesh in ECS

I have an application stack consisting of three services in AWS ECS. I have been planning to implement a service mesh using AWS App Mesh. I followed these instructions to set up mTLS for my services:
https://awscloudfeed.com/whats-new/security/how-to-use-acm-private-ca-for-enabling-mtls-in-aws-app-mesh
Using the technique described in the blog I was able to set up mTLS, and communication works fine from the virtual gateway to the services.
But when one of the services tries to access another service, it fails to make a connection. The services are built with NodeJS, and one service (let's say A) uses the request library to call service B. From my understanding of the service mesh, the TLS session initiation should start from the Envoy proxy of service A and terminate in the Envoy proxy of service B. In that case I should use the service discovery URL of service B (e.g. http://serviceb.example.com) when calling it from service A. When I do so, I get an ECONNRESET error with the message "socket hang up", and when using the https protocol (e.g. https://serviceb.example.com) I get an ECONNRESET error with a TLS error message.
But if I disable the client certificate requirement for service B, I am able to access it from service A with the https protocol. Does this mean that if I need to set up mTLS in App Mesh, I have to load the client certificate through the application itself? I think the request should have gone through without issue, since the client certificate is provided through the backend client configuration.
Can you help me understand how App Mesh mTLS works and whether I am missing something while configuring App Mesh?
Thank You

Is it possible to use AWS Application Loadbalancer with RSocket?

Is it possible to use AWS Application Loadbalancer for RSocket?
An AWS Application Load Balancer can also be used for WebSocket connections, and my project uses RSocket with WebSocket as its transport. This made me wonder whether it is possible to use this load balancer for RSocket as well.
On one hand I would think it is possible, as the load balancer only receives a connection and passes it to the target RSocket server.
On the other hand, if all RSocket frames go through the load balancer, it might not know how to handle these frames, which would make it unusable.
I couldn't find much about RSocket and load balancing online besides this post. But that covers client-side load balancing, and I was looking for server-side load balancing.
And this post. But that uses LoadBalanceSocketClient, while I want to find out whether an AWS Application Load Balancer can be used.
Here follows a simple diagram of what I would like to have (if possible):
The RSocket client connects to the load balancer, which passes the connection to an RSocket server (for example server A). Then the client and RSocket server A can communicate.
AWS will see this as a typical WebSocket service. So as long as it lets HTTP/1.1 connections through and lets them upgrade to WebSocket, there shouldn't be a problem. This is very standard, so it shouldn't be an issue. Ideally it won't see individual frames of the traffic, and your app will handle all frames on a single WebSocket connection. But it looks like the API Gateway support does deal with individual messages: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-set-up-websocket-deployment.html. You should ignore the RSocket client load balancing and focus on AWS WebSocket routing.
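For illustration, here is a minimal client-side sketch using rsocket-java (the load balancer hostname and path are placeholders). From the client's point of view the ALB is just a single WebSocket endpoint, and all frames travel over that one connection:

import java.net.URI;

import io.rsocket.RSocket;
import io.rsocket.core.RSocketConnector;
import io.rsocket.transport.netty.client.WebsocketClientTransport;
import io.rsocket.util.DefaultPayload;

public class RSocketViaAlbExample {
    public static void main(String[] args) {
        // Hypothetical ALB DNS name and path.
        URI endpoint = URI.create("wss://my-alb.example.com/rsocket");

        // The WebSocket upgrade goes through the load balancer, which then
        // forwards the connection to one of the RSocket servers behind it.
        RSocket rSocket = RSocketConnector.create()
                .connect(WebsocketClientTransport.create(endpoint))
                .block();

        // Every RSocket frame for this client travels over the single
        // WebSocket connection established above, so the ALB never has to
        // understand or route individual frames.
        String response = rSocket.requestResponse(DefaultPayload.create("ping"))
                .map(payload -> payload.getDataUtf8())
                .block();

        System.out.println(response);
        rSocket.dispose();
    }
}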
As an example, with GCP (instead of AWS) the complexity is that this bumps you up from App Engine Standard to Flexible. The demo site https://demo.rsocket.io/ is deployed to GCP and exposes WebSockets.
The additional kink is that you may want stateful routing if you want client resumption.

Does Amazon MQ provide a TCP endpoint?

I have created a broker on Amazon MQ and got an SSL endpoint on port 61617. I was looking for a non-SSL endpoint as well (like the tcp endpoint on 61616 in ActiveMQ). Does Amazon MQ provide only SSL? Is there any way we can get a TCP endpoint as well?
Amazon MQ only provides an SSL endpoint; it does not expose a TCP endpoint like ActiveMQ. But it works and connects equally well when you switch from the ActiveMQ tcp endpoint to the Amazon MQ SSL one.
For example:
activemq.broker.url =
failover:(tcp://abc1.gogole.com:61616,tcp://abc1.gogole.com:61616)?randomize=false&maxReconnectAttempts=10
amazonmq.broker.url =
failover:(ssl://efg-1.mq.us-west-2.amazonaws.com:61617,ssl://efg-2.mq.us-west-2.amazonaws.com:61617)?randomize=false&maxReconnectAttempts=5
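For completeness, here is a minimal JMS sketch against the SSL endpoint using the standard ActiveMQ client (broker hosts, user, and password are placeholders); nothing in the client code changes other than the broker URL:

import javax.jms.Connection;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class AmazonMqSslExample {
    public static void main(String[] args) throws Exception {
        // Placeholder failover URL pointing at the two Amazon MQ SSL endpoints.
        String brokerUrl = "failover:(ssl://efg-1.mq.us-west-2.amazonaws.com:61617,"
                + "ssl://efg-2.mq.us-west-2.amazonaws.com:61617)"
                + "?randomize=false&maxReconnectAttempts=5";

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);

        // Placeholder credentials configured for the Amazon MQ broker.
        Connection connection = factory.createConnection("mqUser", "mqPassword");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create producers/consumers exactly as with a tcp:// endpoint ...

        session.close();
        connection.close();
    }
}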
Is there a specific use case for why you are looking for a TCP endpoint?

WSO2 ESB REST API Port Change

Is there a feature in the WSO2 ESB REST API where we can deploy REST API services on the Carbon server with different ports?
That is, REST-API-1 on port 1000,
REST-API-2 on port 2000, and so on.
I don't want to use the server port, which is 8280 by default, for all REST services.
I need a unique port for each REST API service, as mentioned above.
Thanks,
Abhishek
This is not supported. What you can do is change the port, but you can't use a different port for each service.
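For example, the default passthrough HTTP port can be changed in {ESB_HOME}/repository/conf/axis2/axis2.xml (a sketch; the listener class name can differ between ESB versions), and it then applies to every deployed API:

<transportReceiver name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpListener">
    <parameter name="port" locked="false">8290</parameter>
</transportReceiver>

Alternatively, the port offset in <PRODUCT_HOME>/repository/conf/carbon.xml shifts all the default ports at once.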

Modify the ports for the exposed API in WSO2 API manager

I have to expose an API on a port other than 8280. I modified it in axis2.xml to port 8286 for the HTTP transport receiver. Even after restarting the API Gateway service, it is refusing the connection on that particular port.
What is the process for modifying the ports?
You just have to change the port offset of the API Manager server. For that, change the Offset configuration in <PRODUCT_HOME>/repository/conf/carbon.xml:
<Offset>1</Offset>
This will change the NIO port, which is 8280 by default, to 8281 (8280 + 1).
After that, make sure to edit all the hardcoded endpoints of the default APIs by following [1].
[1]http://docs.wso2.org/wiki/display/AM140/Configuring+Port+Offset