Is it possible to make a schedule on which Postman executes requests?

I am using Postman's Collection Runner on some specific requests. Is it possible to create a schedule to execute them (meaning every day at a specific hour)?

You can set up a Postman Monitor on your collection and schedule it to run the requests on a per-minute, hourly, or weekly basis.
This article can get you started on creating your monitor. Postman allows 1000 monitoring requests for free per month.
PS: Postman gives you details about the responses, such as the number of successful requests, response codes, response size, etc. I wanted the actual response for my test, so I just printed the response body as shown below. Hope it helps someone out there :)
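The original snippet isn't reproduced above, but a minimal version in the request's Tests tab might look like this (console output shows up in the monitor's console log):

    // Log the raw response body so it is visible in the monitor's console log
    console.log(responseBody);

    // Or, with the newer pm API, log the parsed JSON body
    console.log(pm.response.json());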

Well, if there is no other possibility, you can actually try doing this:
- launch postman runner
- configure the highest possible number of iterations
- configure the delay (in milliseconds) to fit your scheduling requirement
It is absolutely awful, but if the delay can be set high enough, it might work (for example, a delay of 86,400,000 ms corresponds to one run every 24 hours). It implies that Postman is continuously running.

You may do this using a scheduling tool that can launch command lines and use Newman ...
I don't think Postman can do it on its own
Alexandre
EDIT:
Check out this Postman feature: https://www.getpostman.com/docs/postman/monitors/intro_monitors
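If you do go the Newman route mentioned above, a minimal sketch could use the newman and node-cron npm packages (package choice, schedule, and file path are assumptions, not from the original answer):

    // schedule-run.js: run the exported collection every day at 09:00 (server local time)
    const cron = require('node-cron');
    const newman = require('newman');

    cron.schedule('0 9 * * *', () => {
      newman.run({
        collection: './my-collection.json', // collection exported from Postman
        reporters: 'cli'
      }, (err) => {
        if (err) { console.error('Collection run failed:', err); }
      });
    });

The process has to stay running (e.g. under a process manager); alternatively, the OS scheduler (cron, Windows Task Scheduler) can invoke the newman CLI directly.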

From Postman v10.2.1 onwards you can schedule your collections to run directly (without using monitors) at the specified times.
Check it out here: https://learning.postman.com/docs/running-collections/scheduling-collection-runs/

Related

Deploy function with long running time on Google Cloud Run

If I understood it well, Google Cloud Run will make an API publicly available. Once a request is received, an instance is started and the job is processed. Once the job is done, the instance will be terminated. Is this right?
If so, I presume that Google determines when the instance should be shut down, namely when the HTTP response is sent back to the client. Is that also right?
In my case the process will run for 10 to 20 minutes. Can I still send the HTTP response after so much time? Any advice on how to implement that?
Frankly, all of this is well documented in the cloud run docs:
Somewhat, but this depends on how you configured your scaling https://cloud.google.com/run/docs/about-instance-autoscaling
Also see above, but a request is considered "done" when the HTTP connection is closed (either by you or the client), yes
60 mins is the limit, see:
https://cloud.google.com/run/docs/configuring/request-timeout
Any Advice on how to implement that?
You just keep the connection open for 20 minutes, but do note the remark on long-lived connections in the link above.
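For illustration, the request timeout can be raised above the default (up to the 60-minute limit mentioned above) with the gcloud CLI; the service name here is just a placeholder:

    gcloud run services update my-service --timeout=1200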

Set Next Request Not Working In Postman Collection Runner

I'm running on Windows 10 with the latest version, 7.21.1. I imported the example collection featured in their documentation - https://learning.postman.com/docs/postman/collection-runs/building-workflows/. I run Request 1 in the collection through the Runner and Request 4 does not trigger as it seems it should. It is set up in Tests and seems correct based on their documentation. I have my own collection I was trying this with, but when it was not working I tried this sample and realized there was something else amiss.
Any assistance would be helpful! I can provide more information if needed.
This is what the example collection has in Request 1 under Tests: postman.setNextRequest('Request 4');
Thanks!
I figured this out. I didn't understand how the operation worked or the flow of the Runner. All requests that could be called next need to be selected in the Runner. And setNextRequest works like a goto: execution jumps to the named request and then continues with every request after it in collection order. Closing this out.
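To make the flow concrete, here is a minimal sketch of the Tests scripts involved, using the request names from the sample collection:

    // Tests tab of Request 1: jump straight to Request 4.
    // Requests 2 and 3 are skipped, but every request after Request 4
    // in collection order still runs, hence the "goto" behaviour.
    // All requests involved must be selected in the Runner.
    postman.setNextRequest('Request 4');

    // Tests tab of the last request you want executed: end the run explicitly.
    postman.setNextRequest(null);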

How to handle file processing request in Django?

I am making a Django REST framework based server, and in one of the requests I get an audio file from the front end, on which I need to run an ML-based algorithm (I have a script for it) and respond back to the user with the result. The problem is that this request might take 5-10 seconds to execute. I am trying to understand the following things:
Will Celery help me reduce the workload on the server, given that in any case I need to wait for the result of the ML algorithm and respond back to the user?
Should I create a different server to handle this type of request? Will that be a better approach?
Also, is my flow of doing things correct? First, upload the file to some cloud platform for storage and serialize the instance to get the URL of the file. Second, run the script using Celery and wait for the result. Third, respond back with the result.
Thanks for helping.

Is there an AWS / Pagerduty service that will alert me if it's NOT notified

We've got a little Java scheduler running on AWS ECS. It's doing what cron used to do on our old monolith: it fires up (Fargate) tasks in Docker containers. We've got a task that runs every hour and it's quite important to us. I want to know if it crashes or fails to run for any reason (e.g. the Java scheduler fails, or someone turns the task off).
I'm looking for a service that will alert me if it's not notified. I want to call the notification system every time the script runs successfully. Then if the alert system doesn't get the "OK" notification as expected, it shoots off an alert.
I figure this kind of service must exist, and I don't want to re-invent the wheel trying to build it myself. I guess my question is, what's it called? And where can I go to get that kind of thing? (We're using AWS obviously and we've got a PagerDuty account.)
We use this approach for these types of problems. First, the task has to write a timestamp to a file in S3 or EFS. This file is the external evidence that the task ran to completion. Then you need an HTTP-based service that will read that file and determine whether the timestamp is valid, i.e. has been updated in the last hour. This could be a simple PHP or Node.js script. This process is exposed to the public web, e.g. https://example.com/heartbeat.php. The script returns an HTTP response code of 200 if the timestamp file is present and valid, or a 500 if not. Then we use StatusCake to monitor the URL and notify us via its PagerDuty integration if there is an incident. We usually include a message in the response so a human can see the nature of the error.
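A minimal sketch of such a check in Node.js, assuming the task touches a timestamp file on EFS (the file path and port are illustrative):

    // heartbeat.js: returns 200 if the task has run within the last hour, 500 otherwise
    const http = require('http');
    const fs = require('fs');

    const HEARTBEAT_FILE = '/mnt/efs/last-run.txt'; // written by the scheduled task
    const MAX_AGE_MS = 60 * 60 * 1000;              // the task should run at least hourly

    http.createServer((req, res) => {
      fs.stat(HEARTBEAT_FILE, (err, stats) => {
        if (err) {
          res.writeHead(500);
          return res.end('heartbeat file missing');
        }
        const ageMs = Date.now() - stats.mtimeMs;
        if (ageMs > MAX_AGE_MS) {
          res.writeHead(500);
          return res.end('stale heartbeat: last run ' + Math.round(ageMs / 60000) + ' minutes ago');
        }
        res.writeHead(200);
        res.end('ok');
      });
    }).listen(8080);

StatusCake (or any uptime monitor) then polls this URL and raises a PagerDuty incident when it sees a non-200 response.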
This may seem tedious, but it is foolproof. Any failure anywhere along the line will be notified immediately. StatusCake has a great free service level. This approach can be used to monitor any critical task in the same way. We've learned the hard way that critical cron-type tasks and processes can fail for any number of reasons, and you want to know before it becomes customer critical. 24x7x365 monitoring of these types of tasks is necessary, and helps us sleep better at night.
Note: We always have a daily system test event that triggers a PagerDuty notification at 9am each day. For the truly paranoid, this assures that PagerDuty itself has not failed in some way, e.g. misconfiguration etc. Our support team knows that if they don't get a test alert each day, there is a problem in the notification system itself. The tech on duty has to acknowledge the incident as per SOP. If they do not acknowledge, then it escalates to the next tier, and we know we have to have a talk about response times. It keeps people on their toes. This is the final piece to ensure you have robust monitoring infrastructure.
OpsGenie has a heartbeat service which is basically a watchdog timer. You can configure it to call you if you don't ping them within x minutes.
Unfortunately I would not recommend them. I have been using them for 4 years and they have changed their account system twice and left my paid account orphaned silently. I have to find a new vendor as soon as I have some free time.

What is the best way to get response time in OSB

I am doing it like this:
Inside the OSB pipeline's message flow, at the beginning of the request, assign the current time to a variable. Then in the response, subtract the variable from the current time of the response to calculate the response time. Then I have a reporting action to report this number.
I know OSB has a built-in monitoring tool that can display the response time for the proxy service, pipeline and business service. As you can see, my solution only includes the time from the beginning of the pipeline plus the business service, but not the time the request and response messages spend going through the proxy service. Besides that, calculating it this way also feels like a non-standard approach.
OSB provides a JMX API which can retrieve this built-in monitoring data, but that would make our project more complicated.
If we want to use the OSB reporting action to report the response time, is there a better way to do it?
Just switch WebLogic to use the extended log format, and tell it to add time-taken to the list of tokens it logs on each response.
http://middlewaretechnologies.blogspot.com.au/2012/03/configure-extended-logging-in-http.html
or if you want to read the official docs:
http://docs.oracle.com/cd/E14571_01/web.1111/e13701/web_server.htm#CNFGD207
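For illustration, once extended logging is enabled, time-taken just needs to be added to whatever fields you already log; a typical W3C extended log format field list might look like this (the other fields are examples):

    date time cs-method cs-uri sc-status time-taken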