Apache server response timeout - web-services

So I am trying to use a web service on my Apache server, and when I send a request to the service I should receive about 9,000 items packed in XML format, with multiple properties for each.
The problem, I believe, is that when I make this request it takes so long to process the response that the server times out the request and I never receive anything. A request for about 1,000 items takes about 7 seconds. I believe there is a 60-second limit somewhere in the server, since 9,000 items, if the time scales linearly, would take about 63 seconds, which is just past this one-minute limit.
Anyone got an idea on this problem?

You can try bumping up the connectionTimeout parameter to a higher value. It's set to 60 seconds by default.
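If the server in question is actually Apache Tomcat (connectionTimeout is a Tomcat <Connector> attribute rather than an Apache httpd directive), the setting lives in conf/server.xml and is given in milliseconds. A minimal sketch with hypothetical values:

<!-- conf/server.xml: connectionTimeout is in milliseconds,
     so 120000 raises the limit to two minutes -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="120000"
           redirectPort="8443" />

If it is plain Apache httpd instead, the analogous knob is the Timeout directive in httpd.conf.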

Related

Is there a way to lower the time for a lifecycle event to be triggered in AWS IoT?

So my issue is rather simple, in all honesty. I'm trying to see if there is a way to get Lifecycle Events within AWS IoT triggered much more quickly. So far my code on connect is as follows:
mqttc.connect(aws_iot_endpoint, port=443, keepalive=1)
The value for keepalive cannot be set any lower than 1, as anything less does not give the thing enough time to connect to AWS. When the connection to the device is lost, it takes approximately 7 to 8 seconds for AWS IoT to send out this message:
MQTT_KEEP_ALIVE_TIMEOUT
I was wondering if there is any way to decrease that time even further? Is using AWS IoT Events the way forward?
If your keep-alive is set to 1 second, then MQTT_KEEP_ALIVE_TIMEOUT should fire at 1.5x the keep-alive, which is 1.5 seconds, not 7-8 seconds.
Make sure that you're also setting your ping timeout (in ms) to a value shorter than 1000 ms, as otherwise AWS may just default to a 3-second ping timeout.
Keep Alive cannot be set to 1 sec per AWS docs. Values less than 30 are set to 30.
The default keep-alive interval is 1200 seconds. It is used when a client requests a keep-alive interval of zero. If a client requests an interval > 1200 seconds, the default interval is used. If a client requests a keep-alive interval < 30 seconds but > zero, the server treats the client as though it requested a keep-alive interval of 30 seconds.
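Putting the answer and the comments together, a minimal paho-mqtt sketch (the client style used in the question) would request the 30-second floor explicitly. The endpoint, client ID and certificate paths are placeholders, and the standard mutual-TLS port 8883 is used instead of 443 (which additionally needs ALPN):

import paho.mqtt.client as mqtt

AWS_IOT_ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com"  # placeholder

mqttc = mqtt.Client(client_id="my-thing")  # paho-mqtt 1.x constructor; 2.x also expects a CallbackAPIVersion
mqttc.tls_set(ca_certs="AmazonRootCA1.pem",            # placeholder certificate paths
              certfile="device-certificate.pem.crt",
              keyfile="device-private.pem.key")

# AWS IoT treats any requested keep-alive below 30 seconds as 30 seconds,
# so 30 is the effective floor; the broker then waits roughly 1.5x the
# keep-alive before reporting MQTT_KEEP_ALIVE_TIMEOUT.
mqttc.connect(AWS_IOT_ENDPOINT, port=8883, keepalive=30)
mqttc.loop_forever()

mqttc.loop_forever() keeps the network loop running so the client actually sends PINGREQs at the keep-alive interval.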

I have tested my AWS server (8 GB RAM), on which my Moodle site is hosted, with 1,000 users using JMeter; I am getting 0% errors. What could be the issue?

My Moodle site is hosted on an AWS server with 8 GB RAM. I carried out various tests on the server using JMeter (NFT); I have tested from 15 to almost 1,000 users, but I am still not getting any errors (less than 0.3%). I am using the scripts provided by Moodle itself. What could be the issue? Is there any issue with the script? I have attached a screenshot showing the report of the 1,000-user test for reference.
If you're happy with the number of errors and the response times (the maximum response time is more than 1 hour, which is kind of too much for me), you can stop here and report the results.
However, I doubt that a real user will be happy to wait 1 hour to see the login page, so I would rather define some realistic pass/fail criteria, for example expecting the response time to be not more than 5 seconds. In that case you will have > 60% failures, if that is what you're trying to achieve.
You can consider using the following test elements:
Set reasonable response timeouts using HTTP Request Defaults, so that if any request lasts longer than 5 seconds it is terminated and marked as failed.
Or use a Duration Assertion; in this case JMeter will wait for the response and mark the sample as failed if the response time exceeds the defined duration.
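Both suggestions end up as ordinary entries in the .jmx test plan. A rough sketch of how they are typically serialized (property names are from memory of JMeter's format, and 5000 ms is just the example threshold above; in the GUI these are the Response Timeout field of HTTP Request Defaults and the Duration field of the Duration Assertion):

<ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
  <stringProp name="HTTPSampler.connect_timeout">5000</stringProp>
  <stringProp name="HTTPSampler.response_timeout">5000</stringProp>
</ConfigTestElement>

<DurationAssertion guiclass="DurationAssertionGui" testclass="DurationAssertion" testname="Duration Assertion" enabled="true">
  <stringProp name="DurationAssertion.duration">5000</stringProp>
</DurationAssertion>

The response timeout kills a request that exceeds the limit, while the Duration Assertion lets the request finish and only then marks the sample as failed.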

Why is "Application request limit reached" (#4) received?

171 calls were initiated during the last day, which is fewer than 600 calls per 600 seconds. How is it that I get the "Application request limit reached" message? If this is an app restriction, then why do other users not receive the same message? I checked other SO answers; they do not help. Both the Application Rate Limit and the User Rate Limit seem fine. The Facebook Help Community sucks.

Batch of notifications takes ~10 seconds

I'm sending batches of 50 notification requests to the Facebook Graph API using Postman and cURL, following the docs: batch-api-docs
Completing the entire batch takes 7 to 10 seconds, with 46 successes and 4 errors, which is very slow.
A single successful request (without a batch, I mean) takes about 1 second, so I expected the batch to take a similar amount of time, but it doesn't. Does anybody know the reason?
A batch request only reduces the number of HTTP requests you are making. It does not affect anything else that is going on in the background.
Sending a notification to a user and getting a success/error response will simply take a certain amount of time. And multiplying that amount by 1 or by 50 is just going to result in different numbers.
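To make that concrete: the batch parameter just bundles up to 50 individual Graph API operations into one HTTP request, and the response still contains one status and body per entry. A hedged Python sketch; the access token and user IDs are placeholders, and the /<user-id>/notifications edge with its template parameter is my recollection of the App Notifications endpoint being batched here:

import json
import requests

ACCESS_TOKEN = "APP_ACCESS_TOKEN"          # placeholder app access token
USER_IDS = ["USER_ID_1", "USER_ID_2"]      # placeholders; up to 50 entries per batch

# Each entry is still an individual Graph API operation on Facebook's side;
# the batch only saves the per-request HTTP round-trips.
batch = [
    {"method": "POST",
     "relative_url": f"{uid}/notifications?template=Hello+from+the+batch"}
    for uid in USER_IDS
]

resp = requests.post(
    "https://graph.facebook.com",
    data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)},
)
print(resp.json())  # one result object (code, headers, body) per batch entry

So the 7-10 seconds is roughly the cost of the 50 notification operations themselves, not of the HTTP transport.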

Restricting a web service request from being resubmitted after 5 minutes

The web service request for one of our Java REST services gets submitted again from the client/browser every 5 minutes when the service takes longer to execute.
Can we prevent this resubmission once the request has already been running for a sufficiently long time?
Regards,
Vaibhav
Exactly the same thing happened to me; the only difference is that mine was a simple HTTP POST request and not a web-service request.
Which application server are you using? It's up to the application server to resubmit the request in case of a request timeout.
You observed it right: the interval of the double trigger is exactly 5 minutes, which means your HTTP server is configured to time out after 5 minutes. You need to set the timeout to -1 (infinite) or to some sensibly longer value so that the request won't get timed out and resubmitted again.