Determine which API to execute based on a field
I have a collection with four requests (say req1, req2, req3, req4) whose inputs are parameterized. The input to the collection is a CSV file. The CSV file contains a field "type", based on which I would like to determine which request I execute first.
For example:
if type = 1: order of execution: req1, req3, req4
if type = 2: order of execution: req2, req3, req4
I am aware of how to modify the flow using postman.setNextRequest(), but I'm not sure how this would work when the condition decides the first request.
I'm not very comfortable with this approach, because setNextRequest() workflow modifications are quite error prone. I would suggest redesigning it; perhaps a pre-request script or separate collections could help?
However, you can try the following solution as a workaround, even though it is quite hacky. I assume that the run should stop after each of the four requests and continue with the next iteration.
Add a dummy request to the collection and place it before your four requests.
Add your if statements with the setNextRequest() calls to the test script of this first "dummy" request.
Don't forget to add postman.setNextRequest(null) to each of the four requests to prevent their successors from executing.
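A minimal sketch of that test script, assuming the CSV column is literally named "type" and the requests are named req1 and req2 in the collection:

// Test script of the "dummy" request:
// read the "type" column of the current CSV row...
const type = pm.iterationData.get("type");

// ...and route to the first real request accordingly.
// Loose equality on purpose: CSV values usually arrive as strings.
if (type == 1) {
    postman.setNextRequest("req1");
} else if (type == 2) {
    postman.setNextRequest("req2");
}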
My model represents users with unique names. In order to achieve that, I store the user and its name as two separate items using TransactWriteItems. The approximate structure looks like this:
PK | data
--------------------------------
userId#<userId> | {user data}
userName#<userName> | {userId: <userId>}
Data arrives at a Lambda from a Kinesis stream. If one Lambda invocation processes an "insert" event and another invocation comes in at about the same time (the difference can be 5 milliseconds), the "update" event fails with a TransactionConflictException: Transaction is ongoing for the item error.
Should I just retry the update a second or so later? I couldn't really find a resolution strategy.
That implies you’re getting data about the same user in quick succession and both writes are hitting the same items. One succeeds while the other exceptions out.
Is it always duplicate data? If you’re sure it is, then you can ignore the second write. It would be a no-op.
Is it different data? Then you've got to decide how to handle that conflict. You'll have one dataset in the database and a different dataset live in your code. That's a business logic question, not a database question.
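To the original retry question: if you do decide to retry (because the conflict is transient and the writes don't actually disagree), a rough sketch with the AWS SDK for JavaScript v3 could look like the following. The item construction is omitted, and the error-name checks are assumptions you should verify against your own logs:

const { DynamoDBClient, TransactWriteItemsCommand } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({});

// Send the user + userName transaction, retrying a few times with a small
// backoff when another transaction is still holding the same items.
async function writeUserWithRetry(transactItems, attempts = 3) {
    for (let i = 0; i < attempts; i++) {
        try {
            return await client.send(new TransactWriteItemsCommand({ TransactItems: transactItems }));
        } catch (err) {
            const isConflict = err.name === "TransactionConflictException" ||
                (err.CancellationReasons || []).some(r => r.Code === "TransactionConflict");
            if (!isConflict || i === attempts - 1) throw err;
            await new Promise(resolve => setTimeout(resolve, 100 * (i + 1)));
        }
    }
}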
I'm very new to postman so please bear with me. Basically, I am trying to get data from the clinicaltrials.gov API, which can only give me 1000 studies at a time. Since the data I need is about 25000 studies, I'm querying it based on dates. So, is there any way in Postman that I can GET multiple requests at one time wherein I am only changing one parameter?
Here is my URL: ClinicalTrials.gov/api/query/study_fields??expr=AREA[LocationCountry]United States AND AREA[StudyFirstPostDate]RANGE[MIN,01/01/2017] AND AREA[OverallStatus]Recruiting
I will only be changing the RANGE field in each request, but I do not want to change it manually every time. So, is there any other way in which I can maybe add a list of dates and have Postman go through them all?
There are several ways to do this.
So, is there any way in Postman that I can GET multiple requests at one time wherein I am only changing one parameter?
I'm going to assume you don't mind whether the requests run sequentially or in parallel; the latter is less trivial and doesn't seem to add more value for you. So I'll focus on the following problem statement.
We want to retrieve multiple pages of a resource, where the cursor is StudyFirstPostDate. On each page retrieved, the cursor should advance to the latest date from the previous page. The following is just one way to code this, but the building blocks are:
1. You have a collection with a single request, the GET described above.
2. A pre-request script reads a collection variable holding the next StudyFirstPostDate.
3. A test script (post-request) resets StudyFirstPostDate to the next value of the pagination.
4. In the test script you save the data the same way you're doing now.
5. You set the next request (postman.setNextRequest("NAMEOFREQUEST")) to the same GET request we're dealing with, to effectively create a loop. When you've retrieved all pages you kill the loop with postman.setNextRequest(null) - although not calling the function at all should also stop it. The run then goes back to step (2) and loops.
This flow will only work in a collection run. Even if you code all of this, triggering the request on its own will not start a loop; setNextRequest only works within a collection run.
Setting an initial value for the variable in the pre-request script
// Read the start date that will be used in the current request.
// Give it its initial value on the collection variables (you could use global or environment variables instead, up to you).
const startDate = pm.collectionVariables.get("startDate");
Resetting the value in the Tests script
// Loop through the results, save the data, and work out the next start date for the request.
// Once you have it:
pm.collectionVariables.set("startDate", variableWithDate);

// If you've reached the end you stop; if not, you call the same request again to loop.
// nextPage is an example of a boolean that you've set above.
if (nextPage) {
    postman.setNextRequest("NAMEOFREQUEST");
} else {
    postman.setNextRequest(null);
}
I am using Cloud Endpoints with Objectify and Firestore in Datastore mode. Although the documentation says that all queries are strongly consistent, I have found that they are not, as shown in the following examples:
Example 1
I made an endpoint that queries for an entity by a property, adds +1 to a count property on it, and saves it back to the datastore. I then have 50 different clients all execute that method at the same time. I would expect the count property to be 50; however, it usually ends up being somewhere between 25 and 30.
Example 2
I have an endpoint that queries for an entity by a property. If the entity does not exist, I create the entity and save it to the datastore. If it exists, I just return it. Again, I hit this endpoint with 50 different clients at the same time. I would expect there to only be one entity in the Datastore. However, I will have maybe 5-10 of the same entity.
It seems to me this is not strongly consistent. If I take my code in the above endpoints and put it in a transaction with retries, everything works as intended. I looked around in Objectify to see if a ReadOptions is set somewhere, but from what I can see there is not, so it should be using the default of read_consistency=STRONG.
For example 1, you need to use transactions to ensure that writes do not stomp on each other.
For example 2, again you need to use a transaction to get consistency across clients.
Strong consistency means that if a client writes a value, it can read or query it back after the write succeeds. It does not mean that if one client reads a value, another client reads the same value, each does a transformation, and both try to write, the blind writes from each client will merge together.
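For example 1 concretely, the read, increment, and write have to happen inside one transaction so concurrent increments serialize instead of overwriting each other. You're on Objectify, but the shape is the same in any client; here is a sketch using the Node.js Datastore client purely for illustration, with a made-up count property and retry limit:

const { Datastore } = require("@google-cloud/datastore");
const datastore = new Datastore();

// Read-modify-write inside a transaction, retrying when the commit loses a
// race against a concurrent transaction touching the same entity.
async function incrementCount(key, attempts = 5) {
    for (let i = 0; i < attempts; i++) {
        const transaction = datastore.transaction();
        try {
            await transaction.run();
            const [entity] = await transaction.get(key);
            entity.count = (entity.count || 0) + 1;
            transaction.save({ key, data: entity });
            await transaction.commit();
            return;
        } catch (err) {
            await transaction.rollback().catch(() => {});
            if (i === attempts - 1) throw err;
        }
    }
}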
I am just looking for ideas on how to solve one specific thing I'd like to build.
Say I have two sets of items. Each item is just a couple of lines of JSON. Any time an item is added to one set I immediately (well, almost) want to process this against the full other set. So item is added to set A: Process against each item in set B. And vice versa.
Items come in through API Gateway + Lambda. Match processing in Lambda from a queue/stream.
What AWS technology would be a good fit? I have no idea and no clear pattern on when or how often the sets change. Also, I want it to be as strongly consistent as possible. And of course, I want it to be as serverless and cost-effective as possible. :)
Options could be:
sets stored in Aurora, match processing for a new item in A would need to query the full set B from the database each time
sets stored in DynamoDB, maybe with DynamoDB stream in the background; match processing for a new item in A would need to query the full set B from Dynamo; but spiky load, not a good fit because of unclear read/write provisioning
have each set in its own "static" Kinesis stream where match processing reads through items but doesn't trim. Streams to be replaced with fresh sets regularly
My pain point is: While processing items from A there might be thousands of items in B to be matched. And I want to avoid having to load the full set B from some database every time I process an item from A. I was thinking about some caching of sets but then would need a good option to invalidate that cache whenever something changes.
I'm writing a JMeter script and I have a huge CSV file with a bunch of data which I use in my requests. Is it possible to start not from the first entry but from the 5th or nth entry?
Looking at the CSVDataSet, it doesn't seem to directly support skipping to a given row. However, you can emulate the same effect by first executing N loops where you just read from the data set and do nothing with the data. This is then followed by a loop containing your actual tests. It's been a while since I've used JMeter - for this approach to work, you must share the same CSVDataSet between both loops.
If that's not possible, then there is an alternative. In your main test loop, use a Counter and an If Controller. The Counter counts up from 1. The If Controller contains your tests, with the condition ${Counter}>N, where N is the number of rows to skip. ("Counter" in the expression is whatever you set the "Reference Name" property to in the Counter.)
mdma's 2nd idea is a clean way to do it, but here are two other options that are simple, but annoying to do:
Easiest:
Create separate CSV files for where you want to start the file, deleting the rows you don't need. I would create a separate CSV Data Set Config element for each CSV file, and then just disable the ones you don't want to run.
Less Easy:
Create a new column in your CSV file called "ignore". In the rows you want to skip, enter the value "True". In your test plan, create an If Controller that is the parent of your requests. Make the If condition: "${ignore}"!="True" (include the quotes and note that "True" is case sensitive). This will skip the requests if the "ignore" column has a value of "True".
Both methods require modifying the CSV file, but method two has other applications (like excluding a header row) and can be fast if you're using Open Office, Excel, etc.
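For illustration, a CSV laid out for the second method might look like the following (every column name except "ignore" is made up). With the If Controller condition "${ignore}"!="True" around the samplers, only the rows with a blank "ignore" value are executed:

username,password,ignore
skip_user1,secret1,True
skip_user2,secret2,True
run_user1,secret3,
run_user2,secret4,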