I'm sending batches of 50 notification requests to the Facebook Graph API using Postman and cURL, following the docs: batch-api-docs
Completing the entire batch takes 7 to 10 seconds, with 46 successes and 4 errors, which is very slow.
A single successful request (outside a batch, I mean) takes about 1 second, so I expected the batch to take a similar time, but it doesn't. Does anybody know the reason?
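Here's roughly what the batch call looks like, sketched in Python for illustration (the token, user IDs, and notification template are placeholders; I'm assuming the /<user-id>/notifications endpoint from the docs):

import json
import requests

# Rough sketch of the batch call (placeholder token and user IDs)
ACCESS_TOKEN = "APP_ACCESS_TOKEN"
user_ids = ["USER_ID_1", "USER_ID_2"]  # ... up to 50 operations per batch

batch = [
    {"method": "POST",
     "relative_url": f"{uid}/notifications",
     "body": "template=You+have+a+new+alert"}
    for uid in user_ids
]

resp = requests.post(
    "https://graph.facebook.com",
    data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)},
)
for result in resp.json():
    print(result["code"])  # per-operation status: 200 on success, else an error code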
A batch request only reduces the number of HTTP requests you are making. It does not affect anything else that is going on in the background.
Sending a notification to a user and getting a success/error response simply takes a certain amount of time, and multiplying that time by 1 versus by 50 will naturally produce very different totals.
My Moodle site is hosted on an AWS server with 8 GB of RAM. I carried out various non-functional tests (NFT) on the server using JMeter, testing from 15 up to almost 1000 users, yet I am still not getting any errors (less than 0.3%). I am using the test scripts provided by Moodle itself. What could be the issue? Is there a problem with the script? I have attached a screenshot showing the report of the 1000-user test for reference.
If you're happy with the number of errors and the response times (the maximum response time is more than 1 hour, which is kind of too much for me), you can stop here and report the results.
However, I doubt a real user would be happy to wait an hour to see the login page, so I would rather define realistic pass/fail criteria, for example requiring a response time of no more than 5 seconds. Against that criterion you would see > 60% failures, if that's what you're trying to measure.
You can consider using the following test elements:
Set reasonable response timeouts using HTTP Request Defaults, so any request that takes longer than 5 seconds is terminated and marked as failed.
Or use a Duration Assertion: in this case JMeter will wait for the response and mark it as failed if the response time exceeds the defined duration.
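For intuition, the same pass/fail rule expressed outside JMeter; a minimal Python sketch (the Moodle login URL is a placeholder):

import requests

# Same criterion as above: any response slower than 5 seconds is a failure
URL = "https://your-moodle-site/login/index.php"  # placeholder

def login_page_ok(timeout_s=5.0):
    try:
        return requests.get(URL, timeout=timeout_s).ok
    except requests.exceptions.RequestException:
        return False  # timed out or errored: count it as a failure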
When setting throttling limits for our API, it appears that the Rate Limit works successfully but the Quota does not.
We created a subscription that limits calls to 10 requests/second, and in testing we get a 429 response on sending an 11th request within one second, which is exactly what we want and expect.
However, the filter also has a Quota of 100 requests/minute, yet we can run well over 100 requests in the span of a minute (we have tested up to 300, all returning 200 response codes) without being throttled.
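The kind of test loop described might look like this; a sketch where the endpoint and subscription-key header are assumptions based on Azure API Management conventions:

import time
import requests

# Fire 300 requests inside roughly one minute, pacing under the 10/s rate
# limit, and tally status codes; if the 100/minute quota applied, 429s
# should appear after ~100 requests (endpoint and header are assumptions)
URL = "https://example-apim.azure-api.net/api/ping"
HEADERS = {"Ocp-Apim-Subscription-Key": "SUBSCRIPTION_KEY"}

counts = {}
start = time.time()
for _ in range(300):
    code = requests.get(URL, headers=HEADERS).status_code
    counts[code] = counts.get(code, 0) + 1
    time.sleep(0.12)  # ~8 requests/second
print("elapsed: %.1fs, codes: %s" % (time.time() - start, counts))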
We have a Slack slash command that executes a Lambda (written in Node) in AWS. The Lambda calls an internal service of ours and returns JSON. It often takes multiple attempts to get the slash command to work; the caller gets the message below:
Darn - that slash command didn't work. If you see this message more than once we suggest you contact "name".
We ran a bash script that calls the Lambda once a minute for 12 hours. The average duration of the calls was about 1.5 seconds, well below the slash command's expectation that a response be returned within 3 seconds. Has anyone else experienced this issue?
Increase the timeout beyond 3 seconds, even though your estimated run time is around 1.5 seconds.
Also note that AWS Lambda limits the total concurrent executions across all functions within a given region to 100 (a default limit which can be increased on request).
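The timeout bump can also be scripted; a minimal boto3 sketch (the function name is a placeholder):

import boto3

# Raise the function timeout above Slack's 3-second reply window
# ("my-slash-command" is a placeholder function name)
client = boto3.client("lambda")
client.update_function_configuration(
    FunctionName="my-slash-command",
    Timeout=10,  # seconds; Lambda's default is 3, exactly Slack's deadline
)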
I have a Python program that queries YouTube for video details using the version 3 API. I run m Python processes, each with a multiprocessing pool of 10 worker processes:
from multiprocessing import Pool

# Pool of 10 worker processes; getVideo fetches the details for one song
songs_pool = Pool(processes=10)
return_pool = songs_pool.map(getVideo, songs_list)
I get client errors when the value of m is increased beyond 2 and the pool size beyond 5: the requests come back with Forbidden errors. When I check the number of requests in the Google Developers Console analytics, it shows about 250 requests per second, but according to the documentation the limit is 3,000 requests per second, so I don't understand why I am getting the client errors. Is there a way to avoid these errors and make the program run faster?
With m = 2 and 10 processes per pool I get no errors, but the run takes a very long time to complete.
If I increase either value, I get client errors on roughly 5% of the total requests.
The per-user limit is 3,000 requests per second from a single IP address, and as soon as you go above that in a given second you'll start getting the Forbidden errors. The analytics you see in the Developers Console only report your average number of requests over a 5-minute period; if you had zero requests for 4 minutes and then started your routine, the console might show only 250 requests per second as an average while your app was actually overrunning the limit during a given second or two.
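To smooth out those per-second bursts on the client side, something like this could gate each worker; a hypothetical helper (with m pools of 10 workers, each worker would get a slice of the 3,000/s budget):

import time

# Hypothetical per-worker throttle: at most max_per_sec calls per second
class Throttle:
    def __init__(self, max_per_sec):
        self.max_per_sec = max_per_sec
        self.window_start = time.time()
        self.count = 0

    def wait(self):
        now = time.time()
        if now - self.window_start >= 1.0:
            self.window_start, self.count = now, 0
        if self.count >= self.max_per_sec:
            # budget for this second is spent: sleep into the next window
            time.sleep(self.window_start + 1.0 - now)
            self.window_start, self.count = time.time(), 0
        self.count += 1

Each worker would call wait() immediately before issuing its API request.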
It seems you're handling it in the best way possible if speed is your concern: you want to run fast enough to get a very small number of errors, so you know you're staying right at the limit. Another option, though, is to look into using ETags: if you find yourself requesting info on the same videos a lot, ETags let the API tell you whether anything has changed, and if the API responds that nothing has changed, the request doesn't count against your quota or your requests/sec.
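A minimal sketch of the ETag approach against the Data API v3 (the key and video IDs are placeholders); a 304 response means nothing changed since the stored ETag:

import requests

URL = "https://www.googleapis.com/youtube/v3/videos"
API_KEY = "API_KEY"  # placeholder
etags = {}  # video id -> last ETag seen

def fetch_video(video_id):
    headers = {}
    if video_id in etags:
        headers["If-None-Match"] = etags[video_id]  # conditional request
    r = requests.get(
        URL,
        params={"part": "snippet", "id": video_id, "key": API_KEY},
        headers=headers,
    )
    if r.status_code == 304:
        return None  # unchanged; reuse whatever you cached earlier
    data = r.json()
    etags[video_id] = data.get("etag")
    return data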
So I am trying to use a web service on my Apache server, and when I send a request to the service I should receive about 9,000 items in XML format, each with multiple properties.
The problem, I believe, is that when I make this request, processing the response takes so long that the server times the request out and I never receive anything. A request for about 1,000 items takes roughly 7 seconds, so if the time scales linearly, 9,000 items would take about 63 seconds, just past what appears to be a 60-second limit somewhere on the server.
Does anyone have an idea about this problem?
You can try bumping up the connectionTimeout parameter to a higher value; it's set to 60 seconds by default.
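Independently of the server setting, fetching the 9,000 items in smaller pages would keep each request well under the limit. A minimal sketch, assuming (hypothetically) the service accepts offset/limit query parameters:

import requests

# Hypothetical paged fetch: pull 9,000 items in chunks of 1,000 so each
# request stays well under the 60-second window ("offset"/"limit" are
# assumed parameters; substitute whatever paging the service supports)
URL = "http://localhost/service/items"  # placeholder endpoint

chunks = []
for offset in range(0, 9000, 1000):
    r = requests.get(URL, params={"offset": offset, "limit": 1000}, timeout=60)
    r.raise_for_status()
    chunks.append(r.text)  # each chunk is an XML document of ~1,000 items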