Postman - How to send large JSON body?

I am trying to do a POST request using Postman with a very large body. Only one JSON field is very large; could I load that field from a file in Postman?
{
  "field1": {
    "field1.1": ...
    ...
  },
  "field2": {
    "LARGE FIELD": "<<Too large string to paste into this Raw JSON>>"
  }
}
Is there any best-practice to deal with a very large body in Postman?

Use a CSV data file with the large value in it, as a header row followed by a value row:

LARGEFIELD
<<Too large string>>

Then, in the 'Pre-request Script', read the value from the iteration data and store it in an environment variable:

var LARGE_FIELD = pm.iterationData.get("LARGEFIELD");
pm.environment.set("envLARGE_FIELD", LARGE_FIELD);

Now use that environment variable directly in the body, as below:
{
  "field1": {
    "field1.1": ...
    ...
  },
  "field2": {
    "LARGE FIELD": "{{envLARGE_FIELD}}"
  }
}
When running the collection, load this CSV file in the Collection Runner; the Postman runtime will pick up the data on each iteration and substitute it into the request body.
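If you run the collection from the command line instead of the Collection Runner, Newman accepts the same data file via its -d/--iteration-data flag (the file names here are placeholders):

newman run my-collection.json -d large-field.csv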


How to create a multi root flatbuffer json file?
table Login {
  name:string;
  password:string;
}

table Attack {
  damage:short;
}
I created the following JSON file
{
  "Login": {
    "name": "a",
    "password": "a",
  }
}
but I get the error: no root type set to parse json with
Add root_type Login; to the bottom of your schema file. If you also want to parse JSON from the command line with Attack, then stick that into its own schema, or use --root-type manually.
Also see the documentation, e.g. https://google.github.io/flatbuffers/flatbuffers_guide_using_schema_compiler.html
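For example, a minimal sketch of the schema with the root type declared:

table Login {
  name:string;
  password:string;
}

root_type Login;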

Postman adding values to request json

Hi, I am hoping this is a simple question.
In my pre-request script I am getting a JSON object back from a GET.
This JSON object has 10 fields. I would like to add 2 more.
I tried myJson.add and myJson.push, but those don't work. How would I accomplish this task? I am then taking that myJson and adding it to a push request in the test.
Thanks in advance.
Given the lack of detail in the description, I'm providing a very general answer.
Assuming myJson contains your JSON string, first parse it to convert the JSON data into an object as follows:
let jsonObj = JSON.parse(myJson);
Once done, you can add/remove/update the data, depending on the structure of your JSON.
For example, assuming your data is an array:
[
  {
    "data": "value"
  },
  {
    "data": "value2"
  }
]
You can add another element by using:
jsonObj.push({"data": "value3"});
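Since the question describes an object with 10 fields rather than an array, note that you can also add properties to a plain object directly (the field names below are placeholders):

jsonObj.newField1 = "value1"; // first extra field
jsonObj.newField2 = "value2"; // second extra field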
Once you are done updating the data, convert it back to string as follows:
myJson = JSON.stringify(jsonObj);
You can now store this in an environment variable etc for use in the Postman request.
Reference: https://learning.postman.com/docs/sending-requests/variables/

Mapping geo_point data when importing data to AWS Elasticsearch

I have a set of data inside DynamoDB that I am importing to AWS Elasticsearch using this tutorial: https://medium.com/@vladyslavhoncharenko/how-to-index-new-and-existing-amazon-dynamodb-content-with-amazon-elasticsearch-service-30c1bbc91365
I need to change the mapping of a part of that data to geo_point.
I have tried creating the mapping before importing the data with:
PUT user
{
  "mappings": {
    "_doc": {
      "properties": {
        "grower_location": {
          "type": "geo_point"
        }
      }
    }
  }
}
When I do this, the data doesn't import, although I don't receive an error.
If I import the data first, I am able to search it, although the grower_location: { lat: #, lon: # } object is mapped as an integer and I am unable to run geo_distance queries.
Please help.
I was able to fix this by importing the data once with the Python script in the tutorial, then running:

GET user/_mappings

I copied the auto-generated mappings to the clipboard, then deleted the index:

DELETE user/

Then I pasted the copied mapping into a new mapping request and changed the type of the location field to geo_point:
PUT user/
{
  "mappings": {
    "user_type": {
      "properties": {
        ...
        "grower_location": {
          "type": "geo_point"
        }
        ...
      }
    }
  }
}
Then I re-imported the data using the Python script in the tutorial.
Everything is imported and ready to be searched using geo_point!
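With the geo_point mapping in place, a geo_distance query along these lines should work (the distance and coordinates are placeholder values):

GET user/_search
{
  "query": {
    "geo_distance": {
      "distance": "10km",
      "grower_location": {
        "lat": 40.7,
        "lon": -74.0
      }
    }
  }
}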

AWS API gateway response body template mapping (foreach)

I am trying to save data in S3 through Firehose, proxied by API Gateway. I have created an API Gateway endpoint that uses the AWS service integration type and the PutRecord action for Firehose. I have the following mapping template:
{
  "DeliveryStreamName": "test-stream",
  "Records": [
    #foreach($elem in $input.path('$.data'))
    {
      "Data": "$elem"
    }
    #if($foreach.hasNext),#end
    #end
  ]
}
Now when I test the endpoint with the JSON below:
{
  "data": [
    {"ticker_symbol":"DemoAPIGTWY","sector":"FINANCIAL","change":-0.42,"price":50.43},
    {"ticker_symbol":"DemoAPIGTWY","sector":"FINANCIAL","change":-0.42,"price":50.43}
  ]
}
The JSON gets modified and shows up as below after the transformation:

{ticker_symbol=DemoAPIGTWY, sector=FINANCIAL, change=-0.42, price=50.43}

The : is being converted to =, which is not valid JSON. Not sure if something is wrong in the above mapping template.
The problem is that $input.path() returns a JSON object and not a stringified version of the JSON. You can take a look at the documentation here.
The Data property expects the value to be a string and not a JSON object. So, long story short: currently there is no built-in function which converts a JSON object back into its stringified version. This means you need to re-read the current element in the loop via $input.json(). This returns a JSON string representation of the element, which you can then add as Data.
Take a look at the answer here which illustrates this concept.
In your case, applying the concept described in the link above would result in a mapping like this:
{
  "DeliveryStreamName": "test-stream",
  "Records": [
    #foreach($elem in $input.path('$.data'))
    {
      #set($json = $input.json("$.data[$foreach.index]"))
      "Data": "$util.base64Encode($json)"
    }
    #if($foreach.hasNext),#end
    #end
  ]
}
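With the sample payload above, the transformed request sent to Firehose should then look roughly like this (the Data values are base64-encoded record strings, truncated here for readability):

{
  "DeliveryStreamName": "test-stream",
  "Records": [
    { "Data": "eyJ0aWNrZXJfc3ltYm9s..." },
    { "Data": "eyJ0aWNrZXJfc3ltYm9s..." }
  ]
}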
API Gateway treats the payload data as text and not as JSON unless explicitly specified.
Kinesis also expects the data to be base64-encoded when proxying through API Gateway.
Try the following code and this should work; I am wondering why the for loop has been commented out in the mapping template.
Assuming you are not looping through the record set, the following solution should work for you:
{
  "DeliveryStreamName": "test-stream",
  "Record": {
    "Data": "$util.base64Encode($input.json('$.Data'))Cg=="
  }
}
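Note that the Cg== appended after the encoded payload is the base64 encoding of a newline character; it keeps the records newline-delimited once Firehose concatenates them in S3.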
Thanks & Regards,
Srivignesh KN

Postman -- Unable to use environment variable when extracting from JSON response

I have a suite of Postman collections. The server supports two types of user roles, partner and customer. The RESTful server URLs include the role name (e.g. http://server.com/customer/account and http://server.com/partner/account).
The JSON data also differs by role.
"account" : {
"customer" : {
"name" : "Fred"
}
}
"account" : {
"partner" : {
"name" : "Fred"
}
}
I am trying to extract the name, but Postman doesn't like the embedded environment variable that specifies which role to use when extracting from the response. The environment variable works fine for creating the request.
var jsonData = JSON.parse(responseBody);
postman.setEnvironmentVariable("user_name", jsonData.account.{{role}}.name);
Postman reports a syntax error (Unexpected '{'), so I'm not hopeful, but is there a way to do this?
--- UPDATE ---
I've found a workaround that is OK, but not as elegant.
var jsonData = JSON.parse(responseBody);
if (environment["role"] == 'partner') {
  postman.setEnvironmentVariable("user_name", jsonData.account.partner.name);
} else {
  postman.setEnvironmentVariable("user_name", jsonData.account.customer.name);
}
Pre-request scripts and tests need to be valid JavaScript, so you cannot access global and environment variables using the {{var}} construct there.
To make your tests look a bit better, you could use bracket notation to access the property dynamically, like this:
var jsonData = JSON.parse(responseBody),
role = environment["role"];
postman.setEnvironmentVariable("user_name", jsonData.account[role].name);
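In current Postman versions, the same thing can be written with the newer pm.* API (a minimal sketch; the variable names match the examples above):

var jsonData = pm.response.json();
var role = pm.environment.get("role");
pm.environment.set("user_name", jsonData.account[role].name);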