We are evaluating WSO2 EI for a system where the expected end-to-end latency is 1.3 seconds at 200 TPS (concurrent users).
In our first tests we have an API proxy that invokes 3 web services; there is a JSON transformation using PayloadFactory and class mediators.
We have applied the tuning recommended by WSO2 EI for production environments.
The results show an average latency per transaction between 2 and 3 seconds at 16 TPS, which is far from the target. The response times of the backend services themselves are excellent.
What is the capacity of WSO2 EI in terms of response time (latency per transaction) vs. TPS?
Any recommendations to improve response times vs. TPS?
Thanks for your support.
My Moodle site is hosted on an AWS server with 8 GB RAM. I carried out various tests on the server using JMeter (NFT); I have tested from 15 up to almost 1,000 users, yet I am still not getting any errors (less than 0.3%). I am using the scripts provided by Moodle itself. What could be the issue? Is there any issue with the script? I have attached a screenshot showing the report of the 1,000-user test for reference.
If you're happy with the number of errors and the response times (the maximum response time is more than 1 hour, which seems like too much to me) you can stop here and report the results.
However, I doubt a real user will be happy to wait 1 hour to see the login page, so I would rather define realistic pass/fail criteria; for example, I would expect the response time to be no more than 5 seconds. With that criterion you would see more than 60% failures.
You can consider using the following test elements:
Set reasonable response timeouts using HTTP Request Defaults, so that any request lasting longer than 5 seconds is terminated and marked as failed.
Or use a Duration Assertion; in this case JMeter will wait for the response and mark it as failed if the response time exceeds the defined duration.
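Outside of JMeter, the effect of a hard response timeout can be sketched in plain Java (the class and endpoint names below are illustrative, not JMeter internals): a slow endpoint passes a generous budget and fails a strict one, which is exactly how a response timeout turns slow responses into countable errors.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.util.concurrent.Executors;

public class TimeoutDemo {

    // Returns "PASS" if the server answers within timeoutMs, "FAIL" otherwise,
    // mimicking how a response timeout turns a slow sampler into an error.
    static String probe(String url, int timeoutMs) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(timeoutMs);
            conn.setReadTimeout(timeoutMs);
            try (InputStream in = conn.getInputStream()) {
                in.readAllBytes();
            }
            return "PASS";
        } catch (SocketTimeoutException e) {
            return "FAIL"; // exceeded the budget: counted as a failure
        } catch (IOException e) {
            return "FAIL";
        }
    }

    public static void main(String[] args) throws Exception {
        // Toy endpoint that takes ~2 seconds to respond, like a slow login page.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/slow", exchange -> {
            try { Thread.sleep(2000); } catch (InterruptedException ignored) {}
            byte[] body = "login page".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.setExecutor(Executors.newCachedThreadPool());
        server.start();
        String url = "http://localhost:" + server.getAddress().getPort() + "/slow";

        System.out.println("500 ms budget: " + probe(url, 500));  // FAIL: slower than budget
        System.out.println("5 s budget: " + probe(url, 5000));    // PASS: within budget
        server.stop(0);
    }
}
```

With a 5-second Duration Assertion in place, the 1-hour maximum you measured would surface as failed samplers instead of a quietly green report.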
We are doing an evaluation for metering purposes using WSO2 API Manager and DAS (latest versions).
Environment:
WSO2 API Manager runs as a 2-node active-active deployment clustered with Hazelcast (4 cores, 8 GB RAM each), and
DAS runs as a single node.
Both connect to a backend MySQL RDBMS.
DAS and MySQL share the same server (12 cores, 24 GB RAM); we dedicated 12 GB to MySQL.
We started the test at a rate of 750 reads/sec and everything went well for 27 hours, until the metering count reached 72 million, after which we got the errors below.
At API Manager: [PassThroughMessageProcessor-130] WARN DataPublisher Event queue is full, unable to process the event for endpoint Group.
At DAS (after 10 minutes): INFO {com.leansoft.bigqueue.page.MappedPageFactoryImpl} - Page file /$DAS_HOME/repository/data/index_staging_queues/4P/index/page-12.dat was just deleted. {com.leansoft.bigqueue.page.MappedPageFactoryImpl}
Have we reached the limit of this infrastructure setup, or is this a performance issue in DAS? Can you please help us?
You need to tune the server performance of both DAS and API Manager.
I was wondering about the performance of WSO2 throttling. While checking it out, I did the following:
I created a new subscription tier (say Tier1)
I set a limit of 10 requests per hour on it
I created another subscription tier (say Tier2)
I set a limit of 20 requests per hour on it
I allowed my API to use those tiers and created a test application to use the API with Tier1
Using the API console, I started sending requests to my API
After the 10 calls (though it let more than 10 through), I unsubscribed the test application and subscribed the same application to the same API using Tier2.
It took around 15 minutes for the throttler to unlock, which brings me to my first concern:
How long does it take for a change in tier subscription to take effect for the same application?
In relation to the excess calls going through, I also set up tiers with limits on a per minute basis. Once again, I noticed excess calls going through.
What I also noticed was that this excess was in the initial number of calls and the throttling worked normally after those initial excess calls.
Which leads to my next question:
How long does it take for a newly set up tier to throttle properly? In the per-minute basis, I assume that some calls carry over to the next minute. Am I right in assuming that?
Any help in understanding this would be much appreciated. Thanks
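I can't speak to WSO2's internal algorithm here, but the initial excess and the carry-over described above are exactly what a fixed-window counter produces: the count resets at each window boundary, so bursts on either side of the boundary can let through up to double the limit. A minimal sketch of that general technique (purely illustrative, not WSO2 code):

```java
public class FixedWindowThrottle {
    private final int limit;
    private final long windowMillis;
    private long windowStart = 0;
    private int count = 0;

    FixedWindowThrottle(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    // Decide whether a call at logical time 'now' (millis) is allowed. The counter
    // resets when a new window starts, which is why calls appear to "carry over":
    // 10 calls at the end of one minute plus 10 at the start of the next are all
    // allowed, even though all 20 land within a few seconds of each other.
    synchronized boolean allow(long now) {
        if (now - windowStart >= windowMillis) {
            windowStart = now - (now % windowMillis); // align to the new window
            count = 0;
        }
        return ++count <= limit;
    }

    public static void main(String[] args) {
        FixedWindowThrottle t = new FixedWindowThrottle(10, 60_000); // 10 calls/minute
        int allowed = 0;
        for (int i = 0; i < 10; i++) if (t.allow(59_000 + i)) allowed++; // end of minute 0
        for (int i = 0; i < 10; i++) if (t.allow(60_000 + i)) allowed++; // start of minute 1
        System.out.println("allowed in a ~2 second span: " + allowed);   // 20: double the limit
    }
}
```

If the deployed throttler behaves like this, the excess you saw in the first minute is expected behavior of the windowing, separate from the 15-minute delay in picking up the subscription change.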
I am testing the WSO2 ESB DBReport mediator.
I send a "big message" (100, 1,000, 10,000 ... 500,000 rows) from database A to WSO2 ESB.
WSO2 ESB splits the message into rows with the Iterate mediator,
then uses the DBReport mediator to write the rows one by one into database B (via a data source pool).
Writing 100 rows took 5 seconds,
writing 1,000 rows took 188 seconds,
and writing 10,000 rows took 19,163 seconds.
How can I use the DBReport mediator efficiently?
Thanks.
The DBReport mediator executes synchronously in WSO2 ESB, which means the thread executing it is blocked until the database operation completes. As a result, performance is lower than when the execution happens asynchronously.
Therefore, to get the best performance, use WSO2 Data Services Server (DSS) to execute the insert operations.
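To make the synchronous-blocking point concrete, here is a small sketch (hypothetical timings, not WSO2 code) that simulates a 5 ms per-row insert: done sequentially on one thread, 200 rows cost about a second of wall time; handed to a worker pool, the caller finishes far sooner. Batching inserts (JDBC addBatch/executeBatch) attacks the same per-row overhead from another angle.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SyncVsAsyncWrites {

    // Stand-in for a single-row insert with ~5 ms of network/DB round-trip latency.
    static void insertRow(int row) {
        try { Thread.sleep(5); } catch (InterruptedException ignored) {}
    }

    // DBReport-style: the mediation thread blocks on every single insert.
    static long runSync(int rows) {
        long t0 = System.nanoTime();
        for (int i = 0; i < rows; i++) insertRow(i);
        return (System.nanoTime() - t0) / 1_000_000;
    }

    // Inserts handed to a worker pool, so the caller is not blocked per row.
    static long runAsync(int rows, int workers) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        long t0 = System.nanoTime();
        for (int i = 0; i < rows; i++) {
            int r = i;
            pool.submit(() -> insertRow(r));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        int rows = 200;
        System.out.println("sync:  " + runSync(rows) + " ms");      // roughly rows * 5 ms
        System.out.println("async: " + runAsync(rows, 20) + " ms"); // far less wall time
    }
}
```

The superlinear growth in the reported numbers (5 s to 188 s to 19,163 s) suggests something beyond per-row blocking is also at play, such as queue or memory pressure from Iterate cloning the full message per row, which is worth investigating separately.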
I am building a high-load HTTP service that will consume thousands of messages per second and pass them to a messaging system like ActiveMQ.
I currently have a REST service (non-Camel, non-Jetty) that accepts POSTs from HTTP clients and returns a plain success response, and I can load test it using Apache ab.
We are also looking at camel-jetty as the input endpoint, since Camel has integration components for ActiveMQ and can be part of an ESB if required. Before I start building a camel-jetty-to-ActiveMQ route, I want to test the load that camel-jetty can support. What should my Jetty-only route look like?
I am thinking of the route
from("jetty:http://0.0.0.0:8085/test").transform(constant("a"));
and use apache ab to test.
I am concerned whether this route measures real camel-jetty capacity, since the transform could add overhead, or would it not?
Based on these tests I plan to build the HTTP-to-MQ bridge with or without Camel.
The transform API will not add significant overhead. I just ran a test against your basic route:
ab -n 2000 -c 50 http://localhost:8085/test
and got the following...
Concurrency Level: 50
Time taken for tests: 0.459 seconds
Complete requests: 2000
Failed requests: 0
Write errors: 0
Non-2xx responses: 2010
Total transferred: 2916510 bytes
HTML transferred: 2566770 bytes
Requests per second: 4353.85 [#/sec] (mean)
Time per request: 11.484 [ms] (mean)
Time per request: 0.230 [ms] (mean, across all concurrent requests)
Transfer rate: 6200.21 [Kbytes/sec] received
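For a floor to compare those numbers against, a bare JDK server returning the same constant body, with Camel and Jetty out of the request path, can be hit with the identical ab command. This is a sketch; the port mirrors your route and the thread-pool size is an arbitrary choice matching ab's concurrency, not anything from the Camel setup:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

public class BaselineServer {

    // Serves GET /test with the constant body "a", mirroring
    // from("jetty:http://0.0.0.0:8085/test").transform(constant("a"))
    // but without Camel or Jetty in the request path.
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/test", exchange -> {
            byte[] body = "a".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.setExecutor(Executors.newFixedThreadPool(50)); // match ab -c 50
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8085);
        System.out.println("listening, try: ab -n 2000 -c 50 http://localhost:8085/test");
    }
}
```

The gap between this baseline's requests-per-second and the camel-jetty route's then isolates framework overhead from raw socket and JVM cost.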