I need to upload files with users' document information, like so:
{
"documents": [
{
"documentType": "string",
"file": "byte[]"
}
],
"requestId": "string"
}
How do I do this in Postman?
I've tried many ways, such as changing the key to just "file" + file, "documents[0]" + file, and "documents[0][file]".
Use documents.file instead of documents[file] to access the content. For example:
Postman Key-Values Screenshot
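For illustration, here is a small Python sketch of the dot/bracket key-naming convention the answer describes. How the nested keys are parsed depends entirely on the server-side framework, so treat the convention (and the placeholder values) as assumptions to verify against your API:

```python
# Flatten a nested payload into Postman-style form-data keys.
# The "documents[0].documentType" convention is an assumption;
# check how your server parses multipart field names.
def to_form_keys(payload, prefix=""):
    keys = {}
    if isinstance(payload, dict):
        for k, v in payload.items():
            path = f"{prefix}.{k}" if prefix else k
            keys.update(to_form_keys(v, path))
    elif isinstance(payload, list):
        for i, v in enumerate(payload):
            keys.update(to_form_keys(v, f"{prefix}[{i}]"))
    else:
        keys[prefix] = payload
    return keys

body = {"documents": [{"documentType": "passport", "file": "<file contents>"}],
        "requestId": "req-1"}
for key, value in to_form_keys(body).items():
    print(key, "=", value)
# documents[0].documentType = passport
# documents[0].file = <file contents>
# requestId = req-1
```

Each printed key becomes one row in Postman's form-data table, with the file row set to type "File".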
I am currently working on Recommendations AI. Since I am new to GCP Recommendations AI, I have been struggling with the data format for the catalog. The documentation says each product item's JSON should be on a single line.
I understand this, but it would be really helpful to see what the JSON format looks like in practice, because the example in the documentation is ambiguous to me, and I am trying to use the console to import data.
I tried to import data that looked like the sample below, but I got an "invalid JSON format" error 100 times, with many different reasons such as "unexpected token", "something should be there", and so on.
[
{
"id": "1",
"title": "Toy Story (1995)",
"categories": [
"Animation",
"Children's",
"Comedy"
]
},
{
"id": "2",
"title": "Jumanji (1995)",
"categories": [
"Adventure",
"Children's",
"Fantasy"
]
},
...
]
Maybe it was because each item was not on a single line, but I am also wondering whether the above is enough for importing. I am not sure whether the data should be wrapped in another property, like:
{
"inputConfig": {
"productInlineSource": {
"products": [
{
"id": "1",
"title": "Toy Story (1995)",
"categories": [
"Animation",
"Children's",
"Comedy"
]
},
{
"id": "2",
"title": "Jumanji (1995)",
"categories": [
"Adventure",
"Children's",
"Fantasy"
]
}
]
}
}
}
I can see the above in the documentation, but it says it is for importing inline, which uses a POST request. It does not mention anything about importing with the console. I would guess the same format is also used for the console, but I am not 100% sure; that is why I am asking.
Is there anyone who can show me the entire data format for importing data using the console?
Problem Solved
For those who might have the same question: the exact data format you should import using the GCP console looks like
{"id":"1","title":"Toy Story (1995)","categories":["Animation","Children's","Comedy"]}
{"id":"2","title":"Jumanji (1995)","categories":["Adventure","Children's","Fantasy"]}
No square brackets wrapping all the items.
No commas between items.
Each item on its own single line.
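In other words, the console expects newline-delimited JSON. A quick Python sketch that turns a regular JSON array into that format:

```python
import json

# Each catalog item becomes one compact JSON object per line:
# no wrapping array, no commas between items.
items = [
    {"id": "1", "title": "Toy Story (1995)", "categories": ["Animation", "Children's", "Comedy"]},
    {"id": "2", "title": "Jumanji (1995)", "categories": ["Adventure", "Children's", "Fantasy"]},
]
ndjson = "\n".join(json.dumps(item, separators=(",", ":")) for item in items)
print(ndjson)
```

Writing `ndjson` to a file produces exactly the two lines shown above.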
Posting this Community Wiki for better visibility.
The OP edited the question and added a solution:
The exact data format you should import using the GCP console looks like
{"id":"1","title":"Toy Story (1995)","categories":["Animation","Children's","Comedy"]}
{"id":"2","title":"Jumanji (1995)","categories":["Adventure","Children's","Fantasy"]}
No square brackets wrapping all the items.
No commas between items.
Each item on its own single line.
However I'd like to elaborate a bit.
There are a few ways of importing catalog information:
Importing catalog data from Merchant Center
Importing catalog data from BigQuery
Importing catalog data from Cloud Storage
I guess this is what was used by the OP, as I was able to import a catalog using the UI and GCS with the JSON file below.
{
"inputConfig": {
"catalogInlineSource": {
"catalogItems": [
{"id":"111","title":"Toy Story (1995)","categories":["Animation","Children's","Comedy"]},
{"id":"222","title":"Jumanji (1995)","categories":["Adventure","Children's","Fantasy"]},
{"id":"333","title":"Test Movie (2020)","categories":["Adventure","Children's","Fantasy"]}
]
}
}
}
Importing catalog data inline
At the bottom of the Importing catalog information documentation you can find information:
The line breaks are for readability; you should provide an entire catalog item on a single line. Each catalog item should be on its own line.
It means you should use something similar to NDJSON, a convenient format for storing or streaming structured data that can be processed one record at a time.
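As a quick illustration of the "one record at a time" property, a few lines of Python that consume NDJSON line by line:

```python
import json

# Each line of an NDJSON stream is a complete JSON document,
# so records can be parsed independently, one at a time.
ndjson = (
    '{"id":"1","title":"Toy Story (1995)"}\n'
    '{"id":"2","title":"Jumanji (1995)"}\n'
)
titles = []
for line in ndjson.splitlines():
    record = json.loads(line)  # parse a single record
    titles.append(record["title"])
print(titles)
```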
If you would like to try the inline method, you should use the format below; each item should technically be on a single line, but line breaks are used here for readability.
data.json file
{
"inputConfig": {
"catalogInlineSource": {
"catalogItems": [
{
"id": "1212",
"category_hierarchies": [ { "categories": [ "Animation", "Children's" ] } ],
"title": "Toy Story (1995)"
},
{
"id": "5858",
"category_hierarchies": [ { "categories": [ "Adventure", "Fantasy" ] } ],
"title": "Jumanji (1995)"
},
{
"id": "321123",
"category_hierarchies": [ { "categories": [ "Comedy", "Adventure" ] } ],
"title": "The Lord of the Rings: The Fellowship of the Ring (2001)"
}
]
}
}
}
Command
curl -X POST \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
--data @./data.json \
"https://recommendationengine.googleapis.com/v1beta1/projects/[your-project]/locations/global/catalogs/default_catalog/catalogItems:import"
{
"name": "import-catalog-default_catalog-1179023525XX37366024",
"done": true
}
Please keep in mind that the above method requires Service Account authentication; otherwise you will get a PERMISSION_DENIED error:
"message" : "Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the translate.googleapis.com. We recommend that most server applications use service accounts instead. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/.",
"status" : "PERMISSION_DENIED"
I'm going to hard-code some data using x-mediation-script, whereas I want to use $ref, which will be called in setPayloadJSON. Is this possible? I need suggestions, with a sample if possible.
"x-mediation-script": "mc.setProperty('CONTENT_TYPE', 'application/json');mc.setPayloadJSON('$ref', '#/definitions/out');"
"definitions":{
"out":{
"type" : "object",
"required": ["NAME"],
"properties": {
"NAME2": {"type": "string"},
"NAME3": {"type": "string"},
"NAME4": {"type": "string"}
}
}
}
It is not possible to access the Swagger content from the mediation script using $ref, because:
x-mediation-script is written in JavaScript and cannot use Swagger syntax in its code.
API Manager does not process the script; when publishing the API, only the x-mediation-script content is copied to the Synapse file.
As a workaround, create a JS variable in the x-mediation-script and use it:
mc.setProperty('CONTENT_TYPE', 'application/json'); // Set the content type of the payload to the message context
var town = mc.getProperty('uri.var.town'); // Get the path parameter 'town' and store in a variable
mc.setPayloadJSON('{ "Town" : "'+town+'"}'); // Set the new payload to the message context.
I am trying to run the newman CLI, specifying a collection where the body is read from a file, as below:
"body": {
"mode": "file",
"file": {
"src": "body.json"
}
}
and passing a CSV data file, but for some reason newman is not replacing the parameterised body with the data from the CSV.
I have a JSON file containing the body of my request:
{
"name": "{{name}}"
}
and a CSV file containing the following
name
Pete
Joe
So my parameterised body is stored in a file, and I want to use a data file to populate the requests in the run, which should execute two iterations with the values from my CSV file.
Can this be done somehow?
Assuming that the other parts of the collection file are correct, the body should look like below:
"body": {
"mode": "formdata",
"formdata": [
{
"key": "file",
"type": "file",
"src": "body.json"
}
]
}
The body.json file should live in the same directory as the collection; otherwise the full path is required.
This is basically the JSON representation of what you see in the Postman UI. You can provide any key you'd like, but the type should be file.
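To illustrate what the data file does, here is a rough Python approximation of how the runner resolves {{name}} placeholders in a body template from CSV rows. This only mimics the substitution; it is not newman's actual implementation:

```python
import csv
import io
import re

# One request body is produced per CSV row; each {{column}} placeholder
# is replaced with that row's value.
body_template = '{ "name": "{{name}}" }'
csv_data = "name\nPete\nJoe\n"

bodies = []
for row in csv.DictReader(io.StringIO(csv_data)):
    resolved = re.sub(r"\{\{(\w+)\}\}",
                      lambda m, row=row: row.get(m.group(1), m.group(0)),
                      body_template)
    bodies.append(resolved)
print(bodies)
# ['{ "name": "Pete" }', '{ "name": "Joe" }']
```

With two rows in the CSV, the runner executes two iterations, one body per row.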
I have a POST request which validates a text/csv file in the request body. The request runs successfully in Postman and returns HTTP code 200. The Request Body in the Postman Console is populated with the file path and name, i.e. src:"/Users/username/Downloads/demo_file.csv"; however, when the collection is exported, the file value in the request is empty. See below.
Question: Why is it empty? Is this a bug / known issue?
"key": "Content-Type",
"name": "Content-Type",
"value": "text/csv",
"type": "text"
}
],
"body": {
"mode": "file",
"file": {}
As a quick test, I added the file to the same location as the Postman collection and updated the value, i.e. "file": {demo_file.csv}, but the file was not found when the collection was run using newman.
Question: Should the relative path be used?
First of all, due to security reasons, the Postman runner doesn't support file uploading directly. Further detail can be found here.
Question: Why is it empty? Is this a bug / known issue?
No, that is not a bug nor an issue; it is by design.
Question: Should the relative path be used?
If your file is in the same location as the collection, you just need to give the file name without braces, as follows:
"mode": "file",
"file": "demo_file.csv"
I am trying to use Google Cloud Vision API.
I am using the REST API in this link.
POST https://vision.googleapis.com/v1/files:asyncBatchAnnotate
My request is
{
"requests": [
{
"inputConfig": {
"gcsSource": {
"uri": "gs://redaction-vision/pdf_page1_employment_request.pdf"
},
"mimeType": "application/pdf"
},
"features": [
{
"type": "DOCUMENT_TEXT_DETECTION"
}
],
"outputConfig": {
"gcsDestination": {
"uri": "gs://redaction-vision"
}
}
}
]
}
But the response always contains only "name", like below:
{
"name": "operations/a7e4e40d1e1ac4c5"
}
My "gs" location is valid.
When I write a wrong path in "gcsSource", I get a 404 Not Found error.
Does anyone know why my response looks like this?
This is expected; it will not send you the output as an HTTP response. To see what the API did, you need to go to your destination bucket and check for a file named "xxxxxxxxoutput-1-to-1.json". Also, you should specify the name of the object in your gcsDestination section, for example: gs://redaction-vision/test.
Since asyncBatchAnnotate is an asynchronous operation, it won't return the result directly; instead it returns the name of the operation. You can use that unique name to call GetOperation to check the status of the operation.
Note that there could be more than one output file for your PDF if the PDF has more pages than batchSize, and the output JSON file names change depending on the number of pages, so it isn't safe to always append "output-1-to-1.json".
Make sure that the URI prefix you put in the output config is unique, because you have to do a wildcard search in GCS on the prefix you provide to get all of the JSON files that were created.
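A minimal sketch of the polling step, using only the Python standard library. The operation name is the one from the question, and the access token is a placeholder (e.g. from `gcloud auth application-default print-access-token`):

```python
import json
import urllib.request

# GetOperation for Vision v1 is a simple GET on the operation name.
def operation_url(operation_name):
    return f"https://vision.googleapis.com/v1/{operation_name}"

def poll_operation(operation_name, access_token):
    """Fetch the operation status; once it contains "done": true,
    the output JSON files have been written to the gcsDestination bucket."""
    req = urllib.request.Request(
        operation_url(operation_name),
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(operation_url("operations/a7e4e40d1e1ac4c5"))
# https://vision.googleapis.com/v1/operations/a7e4e40d1e1ac4c5
```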