Postman only runs half of the requests in my collection - postman

There are 107 requests in my collection folder in Postman. This collection was running perfectly fine before, with all 107 requests executed. Now I only see half of the requests running in the collection folder, and no error is prompted anywhere.
Is there a limit on how many requests you can run now, or what could be the issue?

Related

I have tested my AWS server (8 GB RAM), on which my Moodle site is hosted, with 1000 users using JMeter and I am getting 0% errors; what could be the issue?

My Moodle site is hosted on an AWS server with 8 GB RAM. I carried out various non-functional tests (NFT) on the server using JMeter, from 15 up to almost 1000 users, yet I am still not getting any errors to speak of (less than 0.3%). I am using the test scripts provided by Moodle itself. What could be the issue? Is there an issue with the script? I have attached a screenshot showing the report from the 1000-user test for reference.
If you're happy with the number of errors and the response times (the maximum response time is more than an hour, which seems far too long to me), you can stop here and report the results.
However, I doubt a real user will be happy to wait an hour to see the login page, so I would rather define realistic pass/fail criteria, for example a response time of no more than 5 seconds. With that threshold you would have more than 60% failures, if that is what you're trying to measure.
You can consider using the following test elements (a configuration sketch follows below):
Set reasonable response timeouts using HTTP Request Defaults, so any request that takes longer than 5 seconds is terminated and marked as failed.
Or use a Duration Assertion; in this case JMeter will wait for the response and mark the sample as failed if the response time exceeds the defined duration.
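For reference, a Duration Assertion is stored in the saved .jmx test plan roughly as shown below; the 5000 ms threshold is just an illustrative value matching the 5-second criterion above:

```xml
<!-- Duration Assertion fragment as it appears in a saved .jmx test plan;
     5000 ms is an illustrative threshold, adjust it to your own pass/fail criteria -->
<DurationAssertion guiclass="DurationAssertionGui" testclass="DurationAssertion"
                   testname="Duration Assertion" enabled="true">
  <stringProp name="DurationAssertion.duration">5000</stringProp>
</DurationAssertion>
```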

Scheduled Task Never Runs unless I browse the URL manually

I have a particular scheduled task that CF claims runs every 2 minutes. However, it either doesn't run or doesn't complete, since the database changes the task is supposed to make do not occur after each run. Yet if I copy the exact same URL into a browser and run the script, it works 100% of the time.
I have no clue where to start debugging. There is no IP restriction on the page.
I can see in the CF Admin that it was last run at, for example, 2:06 and that the next run will be at 2:08. I can also see the runs in the scheduler.log file.
We had updated our certs in IIS but didn't update our cacerts file. Once we did, everything was great.
It became clear the process wasn't running when I added a line or two to email myself at the start of the task. The emails never came when the server ran the task, but they did when I hit the page myself. I then changed the task to save its output to a log file, and when I opened that file it just said "Connection Failure". That led me to some googling and some discussion about certificates, which made me remember that we had updated ours recently. Looking back at my emails with IT, the certificate update did indeed happen on the same day that the last emails in mailsent.log were sent from these scheduled tasks.
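As a rough illustration of that debugging step, the lines added at the top of the scheduled task page were along these lines (the addresses, subject and log file name here are hypothetical, not the actual ones):

```cfml
<!--- hypothetical debug lines at the top of the scheduled task page --->
<cfmail to="me@example.com" from="scheduler@example.com" subject="Scheduled task started">
    Task started at #now()#
</cfmail>
<!--- also write a marker to a log so runs can be compared against scheduler.log --->
<cflog file="taskDebug" text="Task started at #now()#">
```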

AWS Elastic Beanstalk: Looooooooong HEAD requests

I've just deployed a simple Java/Tomcat based application into Elastic Beanstalk (using the java8/tomcat8 config). Mostly the application works fine.
However, all HEAD requests seem to take 60 seconds. Feels like a timeout of some kind. I can't seem to find any settings regarding filtering or delaying particular types of requests. These requests work fine when I run locally. GET requests to the same URL work fine.
I've confirmed that both Tomcat and the Apache instance on the server log the HEAD request instantly (which indicates they are done with it, right?).
I've confirmed (using telnet) that the client is not receiving any response header bytes until very late. This isn't a problem of the client waiting for a payload or something like that.
Furthermore, the delay is clearly tied to the load balancer's "Idle Timeout" setting. If I push that down to 5 seconds, then the HEAD requests take about 5 seconds, if I set the idle-timeout to 20 seconds then the HEAD requests take just about 20 seconds (always a few ms over). The default is 60s.
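For reference, on the classic load balancer used by this environment type, that idle timeout can be adjusted through an .ebextensions config file, roughly like this (the file name and value are illustrative):

```yaml
# .ebextensions/elb-idle-timeout.config -- sketch, assumes the classic ELB
option_settings:
  aws:elb:policies:
    ConnectionSettingIdleTimeout: 60   # seconds; 60 is the default mentioned above
```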
What could be causing all HEAD requests (even those returning a 401 unauthorized error, no processing) to clog up the works like that?
Turns out the problem was a firewall issue at the local site. AWS Elastic Beanstalk was returning the responses in a timely manner, but they were getting clogged up in a local firewall. Grr..

Scheduled Tasks not running - Coldfusion Server Administration

I have a series of scheduled tasks that all run at various times of the day. Since the migration from ColdFusion 7 to 10, these tasks have stopped running.
When I check the box that outputs the results to a file, I get a text file that says nothing more than "Connection Failure". I have tried everything imaginable regarding the username and password for the task; it makes no difference. When I run the CFM page in my browser, the page works correctly and generates an email just like it should. I just can't make it run as a scheduled event.
Does the scheduled task's folder have any session check or similar? In other words, is that folder accessible without logging in? Please try removing all the redirect rules for the application; that might work.
For me the requests were timing out. I increased the timeout in the ColdFusion Administrator and that solved it. Doing a cfhttp call in a test file and dumping the result helped me troubleshoot it.
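A minimal version of that test file might look like the following; the URL and timeout here are placeholders, not the real ones:

```cfml
<!--- test page: request the scheduled task URL the same way the scheduler would --->
<cfhttp url="https://www.example.com/tasks/nightlyJob.cfm" method="get"
        timeout="120" result="taskResult">
<!--- inspect the status code, headers and any error text that comes back --->
<cfdump var="#taskResult#">
```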

Rails completes in 15 seconds, response received after 2 more minutes

I'm running into an unusual situation that I can't get to the bottom of.
We have a Rails 4.1 application running on JRuby 1.7.12 inside TorqueBox 3.1.0. One of the API endpoints retrieves a list of objects.
Currently the database has just under 7000 records in it. When an API request is made, these are queried and rendered as JSON using the ActiveModel::Serializers gem. This all works as we'd expect, and run from a Rails console it works perfectly.
The problem arises when making an actual API request. It seems to work as expected, and the Rails log shows the output
Completed 200 OK in 6354ms (Views: 5479.0ms | ActiveRecord: 867.0ms)
At this point I expect to see the data returned from the server; however, it takes a good 2.7 minutes to actually get a response. I've tried making the request from Chrome, Safari and even curl just to make sure it's not a weird browser issue, but I'm not having any luck.
I've implemented some caching within the serialisers as described here. I'm pretty sure this isn't the issue, however, as it works as expected in a console, so I'm really confused.
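For context, serializer-level caching with the ActiveModel::Serializers gem generally looks something like the sketch below; the class name, cache key and attributes are illustrative, not taken from the actual app, and the exact API depends on the AMS version:

```ruby
# Illustrative serializer with fragment caching enabled (AMS 0.10-style API);
# the model name, cache key and attributes are placeholders.
class ItemSerializer < ActiveModel::Serializer
  cache key: 'item', expires_in: 1.hour

  attributes :id, :name, :created_at
end
```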
What else could be going on that is causing the 2+ minute delay? During this time I'm seeing around 100% CPU usage for Java, so something is definitely going on.
Well, it turns out it was the Bullet gem taking up all the time. It seems that with that many records, detecting whether there is an N+1 query is quite time-consuming. Simply disabling Bullet reduces the time to exactly what the Rails console reports.
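For anyone hitting the same thing, turning Bullet off in the affected environment is a one-line change, roughly as below (shown for an environment file; adjust to wherever Bullet is configured in your app):

```ruby
# config/environments/development.rb (or wherever Bullet is configured) -- sketch
config.after_initialize do
  # Disable Bullet entirely; alternatively leave it on and trim its notifiers.
  Bullet.enable = false
end
```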