WSO2 APIM 2.0 Clustering deployment with traffic manager cluster

From the documentation, the Publisher uses HTTPS to communicate with the Traffic Manager, while the other components use Thrift and JMS. So the Thrift- and JMS-related URLs look like this:
connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<Traffic-Manager-host>:5676'
topic.throttleData = throttleData
<ThrottlingConfigurations>
    <EnableAdvanceThrottling>true</EnableAdvanceThrottling>
    <DataPublisher>
        <Enabled>false</Enabled>
        <Type>Binary</Type>
        <ReceiverUrlGroup>tcp://<Traffic-Manager-host>:9611</ReceiverUrlGroup>
        <AuthUrlGroup>ssl://<Traffic-Manager-host>:9711</AuthUrlGroup>
        ...
    </DataPublisher>
    <PolicyDeployer>
        <ServiceURL>https://<Traffic-Manager-host>:9443/services/</ServiceURL>
        ...
    </PolicyDeployer>
    ...
    <JMSConnectionDetails>
        <Enabled>false</Enabled>
        <ServiceURL>tcp://<Traffic-Manager-host>:5672</ServiceURL>
    </JMSConnectionDetails>
    ...
</ThrottlingConfigurations>
Can we configure more than one Traffic Manager host from the Traffic Manager cluster in the Gateway/Publisher/Store/Key Manager?

You can do it like this.
<ReceiverUrlGroup>{tcp://127.0.0.1:9612},{tcp://127.0.0.1:9613}</ReceiverUrlGroup>
<AuthUrlGroup>{ssl://127.0.0.1:9712},{ssl://127.0.0.1:9713}</AuthUrlGroup>
You can find several patterns of traffic manager deployments in this blog.
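A hedged aside that is not in the original answer: with the Binary data publisher, URLs separated by commas inside a single {...} group are typically load-balanced, while separate {...},{...} groups each receive every event, which is what gives failover across two Traffic Managers. The JMS connection in jndi.properties can be given a failover broker list in the same spirit; a minimal sketch, assuming two placeholder Traffic Manager hosts TM1 and TM2 (verify the exact broker-URL syntax against your broker version):

# jndi.properties sketch (TM1/TM2 are placeholders; adjust ports for your offset)
connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?failover='roundrobin'&brokerlist='tcp://TM1:5672;tcp://TM2:5672'
topic.throttleData = throttleData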

Can I deploy two WSO2 API Manager nodes in the same cluster?

I would like to install WSO2 API Manager in a cluster. Is it possible to create two API Manager servers in the same cluster?
You can configure two API Manager nodes in an Active-Active deployment; both nodes accept traffic routing, API creation, and so on.
Please refer to:
https://apim.docs.wso2.com/en/latest/install-and-setup/setup/single-node/configuring-an-active-active-deployment/#configuring-an-active-active-deployment

WSO2 Micro Gateway Installation and architecture

Do the Micro Gateway and API Manager always have to be installed on the same servers?
Does the Micro Gateway setup require WSO2 Identity Server and WSO2 Enterprise Integrator for a hybrid API?
We have an architecture that would work with three servers: (1) a cloud server for the API Manager and Developer Portal, providing authentication, analytics, and the store; (2) a Production environment; (3) a Sandbox environment.
Does the API Manager need to be installed on all the servers to set up the API Gateway and API Micro Gateway?
Here are the tasks I have tried in order to set up the WSO2 API Manager and the Micro Gateway services on my local system; here is a sample configuration:
OS: Ubuntu server 18 LTS
WSO2 API Manager - Local Server IP: 192.168.1.50
WSO2 MicroGateway service (Sandbox) - Local Server IP: 192.168.1.51
WSO2 MicroGateway service (Production) - Local Server IP: 192.168.1.52
API Manager:
Installed all prerequisites
Installed directly on the server
Changed the hostname to the IP address in deployment.toml, since changes in carbon.xml and api-manager.xml kept getting overridden (see the sketch after this list)
All the services were successfully active
Carbon Admin - http://192.168.1.50:9443/carbon, Admin Module - http://192.168.1.50:9443/admin, Developer Portal - http://192.168.1.50:9444/devportal, Publisher - http://192.168.1.50:9443/publisher
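For reference, a minimal sketch of the hostname change mentioned above, as it would look in deployment.toml (APIM 3.x style; the IP is the one from this setup):

# deployment.toml (sketch)
[server]
hostname = "192.168.1.50"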
MicroGateway:
Created a mock hello-world API using PHP to access the backend and datastore services.
Created a sample OpenAPI 3.0 YAML file to forward requests to the backend PHP services.
Installed all prerequisites
I had documentation for installing the Microgateway services in Docker, but I decided to try installing the MGW services directly on the server, without Docker
Created an API using micro-gw init and placed the YAML file within api-definitions
Created the build successfully
Ran the build using the gateway command, and the API was accessible using Postman (the CLI flow is sketched below)
Tried to change the hostname of the microgateway service to listen on the IP instead of localhost, but it gets overridden to localhost:9090; however, the service is still accessible via the IP, so no further changes were made
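For context, the CLI flow described above looks roughly like this (project and file names are placeholders, and the definitions directory name can vary between Microgateway versions):

# Microgateway CLI sketch (hypothetical names)
micro-gw init hello-project                        # create the project
cp hello-api.yaml hello-project/api_definitions/   # drop in the OpenAPI definition
micro-gw build hello-project                       # build the gateway artifact
gateway hello-project/target/hello-project.jar     # run it (listens on 9090 by default)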
Configuration of API Manager and MGW
Uploaded the same YAML to the API Manager via the Publisher to configure the API Manager and Microgateway services
Used the API Manager's key certificates to set up the Microgateway (which failed; a possible fix is sketched after this list)
Created the endpoints pointing to the Production and Sandbox Microgateways using the IP address and port number, http://192.168.1.51:9090 and http://192.168.1.52:9090
Accessed the Developer Portal and created a token key
However, the token did not grant access to the Microgateway service. I even tried the URL provided by the API Manager, http://192.168.1.50/sample/context/1/test, but it was still inaccessible.
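Not part of the original post, but the failed certificate step is usually handled by exporting the API Manager's public certificate and importing it into the Microgateway truststore. A hedged sketch, assuming default keystore names, aliases, and passwords (these vary by product version):

# Export the APIM cert and import it into the MGW truststore (sketch)
keytool -export -alias wso2carbon -file wso2carbon.crt \
    -keystore <APIM_HOME>/repository/resources/security/wso2carbon.jks -storepass wso2carbon
keytool -import -alias wso2apim -file wso2carbon.crt \
    -keystore <MGW_RUNTIME_HOME>/runtime/bre/security/ballerinaTruststore.p12 -storepass ballerina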

How to make frontend application talk to backend applications without creating ingress for the backend

I have deployed a Kubernetes cluster using kops. The current cluster uses an nginx ingress controller, which creates a classic load balancer in AWS. I have some backend applications that talk to the frontend application and some backend services that just talk to each other. The problem is that the only way currently to make the frontend app talk to the backend apps is by creating an ingress for the backend apps, since the frontend sends requests via the domain name because it doesn't know the internal service names. For the backends it is fine, since they can talk internally just by using the service name and their respective port.
How can I achieve this without having to create an ingress for the backends? Is it possible to do that using an Application Load Balancer, or do I need an API gateway for that? How do I achieve this architecture? I am adding an architecture diagram to show what I want to achieve. Any help is appreciated.
From your "architecture diagramm" it looks like all your applications are within the cluster. So no need for ingress. You can just use kubernetes services.
Your frontend app should be able to call the endpoints of the backend services otherwise you made something wrong in the configuration of the frontend service.
If you have no chance to change the URL which the frontend app calls for backend services, you can use for example a kubernetes service with CNAME and redirect to your internal services.
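A minimal sketch of such a Service, assuming the frontend already calls a name like backend-alias and the real Service is backend-service in the default namespace (both names are placeholders):

# ExternalName Service (sketch): aliases one DNS name to an in-cluster Service
apiVersion: v1
kind: Service
metadata:
  name: backend-alias
  namespace: default
spec:
  type: ExternalName
  externalName: backend-service.default.svc.cluster.local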
You don't need an ingress to connect to the backend from the frontend.
Assuming both backend and frontend pods are running in the same Kubernetes cluster, the frontend service can connect to the backend service using the service DNS name:
backend-service.<namespace>.svc.cluster.local
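For example, from any pod in the cluster (service name, namespace, port, and path are placeholders):

# In-cluster call via the Service DNS name (sketch)
curl http://backend-service.default.svc.cluster.local:8080/health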

AWS Elastic Beanstalk with Eureka and Zuul: how to restrict access to services?

I have created a full microservices solution on AWS Elastic Beanstalk (each service in its own container) on port 5000 (the default port for Elastic Beanstalk); this creates each microservice in its own security group.
I am using Zuul and Eureka, and everything is working great.
But my problem is that I had to create inbound and outbound rules for all of my containers (with all IPs whitelisted, 0.0.0.0/0).
I would like to block public access to each of the microservices except Zuul and the Spring config server (and I am a really bad devops guy).
Can anyone help me with the correct configuration?
Many thanks and kind regards,
Roie Beck
I am attaching an image of the structure (there is also a config server in there, but I didn't find an image of one):
You would want to create a private subnet to run all of your microservices and have your Zuul gateway proxy all requests in your public subnet. Zuul and the microservices can communicate through the NAT gateway. More information can be found here: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
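If moving to a private subnet is not immediately possible, tightening the security groups gets you most of the way: allow inbound traffic to each microservice only from the Zuul gateway's security group instead of from 0.0.0.0/0. A hedged sketch using the AWS CLI (the group IDs are placeholders):

# Replace the open rule with one scoped to the Zuul gateway's security group (sketch)
aws ec2 revoke-security-group-ingress --group-id sg-backend123 \
    --protocol tcp --port 5000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-backend123 \
    --protocol tcp --port 5000 --source-group sg-zuul456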

Testing WSO2 ELB

How to check if WSO2 ELB is working properly?
I have an ELB and two ESB nodes (one manager and one worker) running, and I want to check whether the ELB is doing its work.
I want to check it using a SOAP request; should the SOAP endpoint point to the ELB or to the ESB?
I have configured the ELB according to WSO2's documentation.
Thanks.
The WSO2 Elastic Load Balancer has been discontinued. You can download NGINX Plus [1], the load balancer by NGINX, for which we provide support.
If you are currently using WSO2 ELB and need guidance, please visit our documentation, specifically the section on auto-scaling in the load balancer.
To set up the WSO2 Elastic Load Balancer with one manager and one worker, please refer to document [2].
To check whether the WSO2 ELB is working properly, you can use the autoscaling facilities in WSO2 ELB.
Please refer to document [2] for more information on autoscaling.
If you need to send a request to the ESB, you first need to point the request at the ELB; a hedged smoke test is sketched below.
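A minimal smoke test, assuming a sample proxy service named echo is deployed on the ESB and the ELB exposes the usual HTTP port 8280 (host, port, and payload are placeholders):

# SOAP request through the ELB (sketch); a response proves the ELB routes to a worker
curl -v -X POST http://elb.wso2.com:8280/services/echo \
    -H "Content-Type: text/xml; charset=UTF-8" \
    -H "SOAPAction: urn:echoString" \
    -d @echo_request.xml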
[1] https://www.nginx.com/resources/admin-guide/
[2] http://blog.afkham.org/2011/09/how-to-setup-wso2-elastic-load-balancer.html