How to send a back-end request in Postman in a loop

I am very new to using Postman, so please forgive me if this question is too naive to ask here.
I have a back-end request that I want to repeat 1000 times. I copied the request as cURL from the Firefox developer tools / network tab and imported it into Postman.
Now, instead of pressing Send 1000 times, I want to write a script in Postman to do it. Do I put that script in the Pre-request Script tab or the Tests tab? And after I write my script, how do I run it?
Something like the following, where "get" is the name of the request that I have saved:
for (i = 0; i < 10; i++) {
    postman.setNextRequest("get");
}
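For reference, a for-loop around postman.setNextRequest won't repeat the request: only the last call counts, and it only takes effect inside the Collection Runner. The usual pattern is a counter in the Tests tab; a runnable sketch follows (the pm object is stubbed here purely so the logic can execute outside Postman — in a real collection, only the body of runIteration() goes in the Tests tab of the saved "get" request):

```javascript
// Minimal stub of the Postman sandbox so the pattern can run outside
// Postman. In a real collection, paste only the body of runIteration()
// into the Tests tab of the "get" request and start the run from the
// Collection Runner (setNextRequest has no effect on a single Send).
const vars = new Map();
const pm = {
  collectionVariables: {
    get: (key) => vars.get(key),
    set: (key, value) => vars.set(key, value),
  },
  setNextRequest: (name) => { pm._next = name; },
  _next: undefined,
};

const TOTAL = 1000; // number of times to send the request

// Tests-tab logic: count completed runs and re-queue "get" until done.
function runIteration() {
  const count = (pm.collectionVariables.get("count") || 0) + 1;
  pm.collectionVariables.set("count", count);
  // Only the last setNextRequest call before the request finishes counts,
  // so a plain for-loop around it would not send anything extra.
  pm.setNextRequest(count < TOTAL ? "get" : null);
}
```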

Related

Disable collection pre-request script at request level

I have a test collection in postman. I have a pre-request script to be run at the Collection level, but there are specific requests in the collection where I'd like the pre-request script not to run.
I can't find a way to do this; does anybody know if this is possible, and how?
Thanks in advance!
You cannot disable it per request, but as a workaround you can wrap the collection-level script in a condition like:
if (pm.info.requestName !== "the request you want to skip") {
    // collection-level pre-request logic here
}
If you want to do it dynamically, you have to set a flag from the request that runs before the one you want to skip, e.g.:
request 1:
pm.variables.set("skip", true) // to skip request 2
in request 2:
pm.variables.set("skip", false) // to ensure the remaining requests don't skip the collection script
and in the collection-level script add:
if (!pm.variables.get("skip")) {
    // then execute
}
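For what it's worth, the skip-flag pattern can be exercised outside Postman with a minimal stub of the pm object (the stub and the return values below are illustrative only); note the API is pm.variables, with .set to write and .get to read:

```javascript
// Just enough of the Postman sandbox for the skip pattern to run here.
const vars = new Map();
const pm = {
  variables: {
    get: (key) => vars.get(key),
    set: (key, value) => vars.set(key, value),
  },
};

// Collection-level pre-request script, guarded by the "skip" flag.
function collectionPreRequest() {
  if (!pm.variables.get("skip")) {
    return "executed"; // the real collection-level logic would run here
  }
  return "skipped";
}

// Request 1 would run pm.variables.set("skip", true) so that request 2
// skips the collection script; request 2 clears it again with
// pm.variables.set("skip", false) for the requests that follow.
```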

Making An HTTP PUT through BrightScript to AWS S3 Bucket with pre-signed url

I've set up an AWS API which obtains a pre-signed URL for uploading to an AWS S3 bucket.
The pre-signed url has a format like
https://s3.amazonaws.com/mahbukkit/background4.png?AWSAccessKeyId=someaccesskeyQ&Expires=1513287500&x-amz-security-token=somereallylongtokenvalue
where background4.png would be the file I'm uploading.
I can successfully use this URL through Postman by:
configuring it as a PUT call,
setting the body to Binary so I can select the file,
setting the header to Content-Type: image/png
HOWEVER, I'm trying to make this call using BrightScript running on a BrightSign player. I'm pretty sure I'm supposed to be using the roUrlTransfer object and PutFromFile function described in this documentation:
http://docs.brightsign.biz/display/DOC/roUrlTransfer
Unfortunately, I can't find any good working examples showing how to do this.
Could anyone who has experience with BrightScript help me out? I'd really appreciate it.
You are on the right track.
I would do:
sub main()
    tr = CreateObject("roUrlTransfer")
    tr.SetUrl(presignedUrl) ' the pre-signed S3 URL, query string and all
    headers = {}
    headers.AddReplace("Content-Type", "image/png")
    tr.AddHeaders(headers)
    info = {}
    info.method = "PUT"
    info.request_body_file = <fileName>
    if tr.AsyncMethod(info)
        print "File put started"
    else
        print "File put did not start"
    end if
    delay(100000)
end sub
Note that I have used two different methods to populate the two associative arrays. You need to use the AddReplace method (rather than the dot shortcut) when the key contains special characters like '-'.
This script should work, though I don't have a unit on hand to do a syntax check.
You should also set up a message port and listen for the event that is generated, to confirm whether the PUT was successful and/or what the response code was.
Note that when you read responses from URL events, if the response code from the server is anything other than 200, the BrightSign will discard the response body and you cannot read it. This is not helpful, as services like Dropbox like to send a 400 response with more information on what was wrong (bad API key, etc.) in the body, so in that case you are left in the dark, doing trial and error to figure out what went wrong.
Good luck; sorry I didn't see this question sooner.

Is it possible to make a schedule on which Postman executes requests?

I am using Postman to run the Collection Runner on some specific requests. Is it possible to create a schedule to execute them (meaning every day at a specific hour)?
You can set up a Postman Monitor on your collection and schedule it to execute the requests on a minute/hour/week basis.
This article can get you started on creating your monitor. Postman allows 1000 monitoring requests for free per month.
PS: Postman gives you details about the responses, such as the number of successful requests, response codes, response size, etc. I wanted the actual response for my test, so I just printed the response body. Hope it helps someone out there :)
Well, if there is no other possibility, you can actually try doing this:
- launch postman runner
- configure the highest possible number of iterations
- configure the delay (in milliseconds) to fit your scheduling requirement
It is absolutely awful, but if the delay variable can be set high enough, it might work.
It implies that Postman is continuously running.
You may do this using a scheduling tool that can launch command lines and use Newman ...
I don't think Postman can do it on its own
Alexandre
EDIT:
Check this Postman feature: https://www.getpostman.com/docs/postman/monitors/intro_monitors
From Postman v10.2.1 onwards, you can schedule your collections to run directly (without using monitors) at the specified times.
Check it out here - https://learning.postman.com/docs/running-collections/scheduling-collection-runs/

GAE: long calculation in mapreduce causes 500 error (Python)

I got some code to work on GAE but am struggling with a 500 error, which appears to be due to the long run time.
I am doing the following:
Read the user given info
Run some mapreduce method to calculate some stats and send this as email
(Re)direct the user to a thank you page, since the results will be emailed
The code works fine on the App Engine SDK since there is no time limit. However, I keep getting the 500 error when I run the code on GAE. If I do not perform the calculations in step 2, the code works again (redirects to a new page and sends the email). I tried doing step 2 after step 3, but I keep getting the same error.
Is there any easy way to fix this? I am thinking of something like this: get the user info and let them know the results will be emailed to them (or redirect them to the main page), and in the meantime (or after the above) run the mapreduce in the background and email the completed results, so the time limit does not abort my code.
class Guestbook(webapp2.RequestHandler):
    def post(self):
        # get info provided in form by user (code not shown here)
        # send them to new page or main page
        self.response.write('<html><body>You wrote:<pre>')
        self.response.write("thanks")
        self.response.write('</pre></body></html>')
        #self.redirect('/')
        dump_content = 'Error'
        try:
            dump_content = long_time_taking_mapreduce_method(user_given_info)
        except DeadlineExceededError:
            logging.warning("Deadline error")
        send_results_as_email(OUTPFILE, dump_content)

app = webapp2.WSGIApplication([
    ('/', MainPage),
    ('/sign', Guestbook),
], debug=True)
The whole point of mapreduce is that it runs offline, taking as many tasks and as much time as necessary. Trying to run it within your handler function defeats that purpose.
Instead, your mapreduce task itself should call the send_results_as_email method once it has a result.

REST - Get updated resource

I'm working on a service which scrapes specific links from blogs. The service makes calls to different sites, which pull in and store the data.
I'm having trouble specifying the URL for updating the data on the server, where I now use the verb update to pull in the latest links.
I currently use the following endpoints:
GET /user/{ID}/links - gets all previously scraped links (few milliseconds)
GET /user/{ID}/links/update - starts scraping and returns the scraped data (few seconds)
What would be a good option for the second URL? Some examples I came up with myself:
GET /user/{ID}/links?collection=(all|cached|latest)
GET /user/{ID}/links?update=1
GET /user/{ID}/links/latest
GET /user/{ID}/links/new
Using GET to start a process isn't very RESTful. You aren't really GETting information; you're asking the server to process information. You probably want to POST against /user/{ID}/links (a quick Google for PUT vs POST will give you endless reading if you're curious about the finer points there). You'd then have two options:
POST with background process: If using a background process (or queue) you can return a 202 Accepted, indicating that the service has accepted the request and is about to do something. 202 generally indicates that the client shouldn't wait around, which makes sense when performing time dependent actions like scraping. The client can then issue GET requests on the first link to retrieve updates.
Creative use of Last-Modified headers can tell the client when new updates are available. If you want to be super fancy, you can implement HEAD /user/{ID}/links that will return a Last-Modified header without a response body (saving both bandwidth and processing).
POST with direct processing: If you're doing the processing during the request (not a great plan in the grand scheme of things), you can return a 200 OK with a response body containing the updated links.
Subsequent GETs would perform as normal.
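The 202-Accepted flow described above can be sketched with an in-memory job store (the function and field names here are illustrative, not a framework API):

```javascript
// userId -> { status, links, lastModified }
const jobs = new Map();

// POST /user/{ID}/links — accept the scrape request and answer 202,
// pointing the client at the resource to poll.
function postLinks(userId) {
  jobs.set(userId, { status: "pending", links: [], lastModified: null });
  return { status: 202, headers: { Location: "/user/" + userId + "/links" } };
}

// Called by the background scraper (or queue worker) when it finishes.
function completeScrape(userId, links) {
  jobs.set(userId, {
    status: "done",
    links: links,
    lastModified: new Date().toUTCString(),
  });
}

// GET /user/{ID}/links — cached links with Last-Modified once available;
// a HEAD variant would return only the headers.
function getLinks(userId) {
  const job = jobs.get(userId);
  if (!job || job.status !== "done") {
    return { status: 202, body: [] }; // still scraping, nothing new yet
  }
  return {
    status: 200,
    headers: { "Last-Modified": job.lastModified },
    body: job.links,
  };
}
```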