WSO2 API Manager 2.1.0

Need to compute IOPS required for SSD when using API Manager 2.1.0
When working with the API and Key Manager databases, what would be a typical IOPS figure for the database SSD? We expect around 1,000 HTTP POST requests per second, each of which needs to be authenticated and routed to one of the back-end service endpoints. Also, can anyone provide details on the database queries that get executed when API requests are processed by the worker gateways? Are stored procedures used when accessing the API and Key Manager databases, and what is the schema for those databases? Please provide any links that would help in understanding the low-level details of the database interactions.

Related

Design a high available API service with database backend on AWS

I am working on an API layer which serves requests from a backend database.
So the requirements are:
Repopulate the whole table without downtime for the API service: a main requirement is that we should be able to re-populate the tables (2 to 3 tables of structured, CSV-like data) in the backend database periodically (bi-weekly or monthly), without the API service going down.
Low latency globally, on the order of hundreds of milliseconds.
Scalability with requests per second.
Rate-limit clients.
Ability to switch back to previous versions of the backend tables in case of issues.
My questions are about which AWS database and which other AWS components I can use to achieve the above goals.
If you want a secure, low latency global API, then I would go with an edge optimized API Gateway API.
Documentation is here: API GW limits, regarding the maximum requests per second.
You can rate-limit clients using API GW. Also, you can have different stages in API GW that correspond to different aliases in Lambda. Lambda would be your serverless compute layer that handles your API GW requests and in turn queries your database. Using versioning and aliasing in Lambda would allow you to switch between different database tables. Given that you are planning to use CSV-like data, you could go with RDS and use the Aurora engine, which is compatible with MySQL and PostgreSQL and is an extremely cost-effective option.
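As a sketch of the aliasing idea, a handler can pick the table version from the alias it was invoked through. This is only an illustration: the function and table names are assumptions, and a real handler would go on to query Aurora.

```python
# Hypothetical mapping from Lambda alias to backing table version.
TABLE_FOR_ALIAS = {
    "live": "catalog_v2",
    "previous": "catalog_v1",
}

def handler(event, context):
    # When invoked via an alias, the alias is the last ARN segment, e.g.
    # arn:aws:lambda:us-east-1:123456789012:function:my-api:previous
    alias = context.invoked_function_arn.split(":")[-1]
    table = TABLE_FOR_ALIAS.get(alias, TABLE_FOR_ALIAS["live"])
    # A real handler would query `table` in Aurora; here we just report it.
    return {"statusCode": 200, "table": table}
```

Switching an API GW stage to point at a different alias then flips which table version serves that stage, without redeploying code.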
As some additional information, you should use Lambda proxy integration between your API GW APIs and your Lambda functions. This allows you to enable Identity and Access Management (IAM) for your APIs.
Documentation on Lambda proxy integration: Lambda proxy integration
Here is some documentation on Lambda: AWS Lambda versioning and aliases
Here is some documentation on RDS Aurora: AWS RDS Aurora
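On the "repopulate without downtime" requirement specifically, the usual pattern is a staging-table swap rather than in-place updates. A minimal sketch of the pattern, using SQLite purely for illustration (table names are assumptions; in MySQL/Aurora the swap is a single atomic RENAME TABLE statement):

```python
import sqlite3

def repopulate(conn, rows):
    """Load new data into a staging table, then swap it in.
    The previous version is kept as catalog_old so we can switch back."""
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS catalog_staging")
    cur.execute("DROP TABLE IF EXISTS catalog_old")
    cur.execute("CREATE TABLE catalog_staging (id INTEGER PRIMARY KEY, name TEXT)")
    cur.executemany("INSERT INTO catalog_staging VALUES (?, ?)", rows)
    # The two renames perform the switch; in MySQL/Aurora this would be the
    # atomic "RENAME TABLE catalog TO catalog_old, catalog_staging TO catalog".
    cur.execute("ALTER TABLE catalog RENAME TO catalog_old")
    cur.execute("ALTER TABLE catalog_staging RENAME TO catalog")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalog (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO catalog VALUES (1, 'old')")
conn.commit()
repopulate(conn, [(1, "new"), (2, "also new")])
```

Readers see either the old table or the fully loaded new one, never a half-loaded state, and keeping `catalog_old` around covers the "switch back to the previous version" requirement.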

WSO2 API Manager Analytics Accuracy

We've deployed WSO2 API Manager in an Active-Active scheme with a single API Manager Analytics server. We're also using Graylog + Elastic + Grafana to collect logs from the back-end services behind the API.
The issue is that the number of back-end service calls is vastly greater (millions vs. thousands) than the corresponding API hit count. Why is that? What are we missing?

AWS: Where can I learn AWS cloud computing, from beginner to advanced level, for REST API and authorisation deployment, for free?

I have a requirement to develop a REST API with a DB on AWS, using our custom JAR that will process the data coming in the request; once processed, we will return a response with the result from our JAR.
We have:
Our Java application that will process the data.
A need to develop an authorisation platform for various clients using the REST API.
A need to log all requested transactions, including how many were rejected and how many were processed successfully.
We are thinking of deploying the complete application on AWS, so I am looking for the best study material on developing for and deploying on AWS that is free (budget issue).
Please suggest where I should start, as I am a newbie on the cloud platform.
Thanks in advance for the help.
To save on cost with AWS, try a serverless architecture.
Use:
S3: to host your front-end code by making your bucket a website.
Lambda: to host your backend code to insert into and retrieve from the database. You get 1 million requests free per month.
API Gateway: provides an interface to access your Lambda functions, and detailed logging can be sent to CloudWatch. It also provides authorization with API keys and Cognito user pools.
DynamoDB: an AWS-managed database that gives you 15 units of free provisioned read/write throughput.
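Wiring these together, a minimal Lambda proxy handler backed by DynamoDB might look like the sketch below. The table and field names are assumptions; the handler is written as a factory so the DynamoDB table can be injected (in Lambda you would pass `boto3.resource("dynamodb").Table("transactions")`).

```python
import json

def make_handler(table):
    """Build a Lambda proxy handler that inserts (POST) or retrieves (GET)
    items from the given DynamoDB-style table."""
    def handler(event, context):
        if event["httpMethod"] == "POST":
            item = json.loads(event["body"])
            table.put_item(Item=item)                 # insert the transaction
            return {"statusCode": 201, "body": json.dumps(item)}
        # Otherwise treat it as a GET /items/{id} lookup.
        resp = table.get_item(Key={"id": event["pathParameters"]["id"]})
        return {"statusCode": 200, "body": json.dumps(resp.get("Item"))}
    return handler
```

API Gateway's Lambda proxy integration supplies the `httpMethod`, `body`, and `pathParameters` fields in the event, and anything the handler prints goes to CloudWatch Logs, which covers the requested/rejected/processed logging requirement.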
You can start with this
https://medium.com/byteagenten/serverless-architecture-with-aws-adcaa3415acd?source=linkShare-22ecbac0bdc-1526628767

Microservice management

We are developing a merchant application with various modules such as Schedule, Booking, Invoice, etc. Each of these modules runs on a different server and is exposed as a granular RESTful service. The UI layer communicates with these granular services accordingly. To identify a request and redirect it to the specific microservice running in the service layer of the various servers, we have created a service gateway. Some services require data manipulation on the fly, which is presently accomplished through Mule ESB, and some routing activities are also managed through it.
The actual purpose of the service gateway is to match the request against the available service dictionary and redirect it to the respective microservice. At present it is developed in a J2EE framework and runs on a WildFly server. To achieve the same in a more lightweight manner, we have come across microservice managers like "getKong" and customising an nginx server to manage microservices, alongside Mule ESB.
Along with service-bus management, is it advisable to use Mule ESB as a microservice manager like getKong, or does anyone have another valuable suggestion?
In my personal opinion, you have three options:
If you don't need to perform authentication/authorization and/or throttling, and your routing is quite complex/complicated, then it is completely fine to do it in Mule ESB.
If you just do URL rewriting, nginx is probably the best choice for minimum overhead and maximum performance.
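As an illustration of the nginx option, the gateway's match-and-route step can be expressed as plain location blocks; a hypothetical fragment (upstream names and ports are assumptions):

```nginx
# Route /booking/* and /invoice/* to their respective microservices.
upstream booking_svc { server booking.internal:8080; }
upstream invoice_svc { server invoice.internal:8080; }

server {
    listen 80;
    location /booking/ { proxy_pass http://booking_svc/; }
    location /invoice/ { proxy_pass http://invoice_svc/; }
}
```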
If you really need an API manager with all the rich features, then getKong is fine or, if you want to stay in the MuleSoft world and are willing to pay, you can have a look at API Gateway.
Hope this helps.

WSO2 API Manager v1.8.0 - Clustering

I have a question on WSO2 API Manager clustering. I have gone through the deployment documentation in detail and understand the distributed deployment concept, wherein one can segregate the publisher, store, key manager, and gateway. But as per my assessment, that makes the deployment architecture pretty complex to maintain, so I would like to have a simpler deployment.
What I have tested is simply running two different instances of WSO2 API Manager on two different boxes, pointing to the same underlying data sources in MySQL. What I have seen is that the API calls work perfectly, and tokens obtained from one WSO2 instance work for API invocation on the other API Manager instance. The only issue with this model is that we need to deploy the APIs from the individual publisher components of as many WSO2 API Manager instances as are running. I am fine with that, since the publishing will be done by a single small team. We will have a hardware load balancer in front, holding the API endpoint URLs and token endpoint URLs for both API Managers, and the hardware LB will do the load balancing.
So my question is: are there any problems in following this simple approach from the RUNTIME perspective? Does clustering add any benefit from the RUNTIME perspective for WSO2 API Manager?
Thank you.
Your approach has the following drawbacks (there may be more that I am not aware of):
It is not scalable, meaning you can't independently scale (add more instances of) the store, publisher, gateway, or key manager.
Distributed throttling won't work. It will lead to throttling inconsistencies, since throttling state is not replicated unless you enable clustering. Let's say you define a 'Gold' tier for an API: no matter how many gateway instances you are using, a user should be restricted to no more than 20 req/min for this API. This is implemented with a distributed counter (I'm not sure of the exact implementation details). If you don't enable clustering, one gateway node doesn't know the number of requests served by the other gateway nodes, so each gateway node keeps its own throttle counter, meaning a user might be able to access your API at more than 20 req/min. That is one throttling inconsistency. Further, suppose one gateway node has throttled out a user but the other has not. If your LB routes the request to the first gateway node, the user cannot access the API; if it routes to the second, the user can. That is another throttling inconsistency. To overcome all of these issues, you just need to replicate the throttling state across all the gateway nodes by enabling clustering.
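The inconsistency described above can be sketched with a toy counter. This is only an illustration of why the counter must be shared, not of WSO2's actual replication mechanism:

```python
from collections import defaultdict

class Gateway:
    """Toy gateway enforcing a 20 req/min per-user limit against a counter
    store. With clustering the store is shared; without it, each node has
    its own copy and counts only the requests it sees itself."""
    def __init__(self, counters, limit=20):
        self.counters = counters
        self.limit = limit

    def allow(self, user):
        if self.counters[user] >= self.limit:
            return False
        self.counters[user] += 1
        return True

# Without clustering: separate counters, so the Gold-tier limit doubles.
gw1, gw2 = Gateway(defaultdict(int)), Gateway(defaultdict(int))
served_unclustered = sum(gw.allow("alice") for _ in range(20) for gw in (gw1, gw2))

# With clustering: one replicated counter, so the limit holds cluster-wide.
shared = defaultdict(int)
gw1, gw2 = Gateway(shared), Gateway(shared)
served_clustered = sum(gw.allow("alice") for _ in range(20) for gw in (gw1, gw2))
```

With two independent counters the user gets 40 requests through; with the shared counter the cluster-wide total stays at 20.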
Distributed caching won't work. For example, API key validation information is cached. If you revoke a token on one API Manager node, the cache is cleared only on that node. A user can't use the revoked token via that API Manager node, BUT they can still use it via the other API Manager node until the cache entry expires (15 minutes by default, I believe). This is just one instance where things can go wrong if you don't cluster your API Manager instances. To solve these issues, you just need to enable clustering, so that the cache stays in sync across the cluster. Read the WSO2 documentation for more details on the various caches available in WSO2 API Manager.
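The stale-token window can also be shown with a toy per-node cache. The 15-minute TTL matches the default mentioned above; everything else is an assumption for illustration:

```python
class NodeCache:
    """Toy per-node key-validation cache with a TTL (seconds)."""
    def __init__(self, ttl=900):                     # 15 min default
        self.ttl, self.entries = ttl, {}

    def lookup(self, token, now):
        hit = self.entries.get(token)
        return hit[0] if hit and now - hit[1] < self.ttl else None

    def store(self, token, valid, now):
        self.entries[token] = (valid, now)

    def evict(self, token):
        self.entries.pop(token, None)

# Both nodes cache the token as valid at t=0.
node_a, node_b = NodeCache(), NodeCache()
node_a.store("tok", True, now=0)
node_b.store("tok", True, now=0)

# Token revoked via node A: without clustering, only A's cache is cleared.
node_a.evict("tok")
stale_on_b = node_b.lookup("tok", now=60)      # revoked token still accepted
expired_on_b = node_b.lookup("tok", now=901)   # after TTL, re-validation forced
```

Node B keeps honouring the revoked token for up to the full TTL; clustered cache invalidation closes that window by evicting the entry on every node.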
You will run into several issues without the above features. WSO2 highly recommends a distributed deployment for production.