I have a POST request that validates a text/csv file in the request body. The request runs successfully in Postman and returns HTTP 200. The Request Body in the Postman Console is populated with the file path and name, i.e. src:"/Users/username/Downloads/demo_file.csv", but when the collection is exported, the file value in the request is empty. See below.
Question. Why is it empty, is this a bug / known issue?
"key": "Content-Type",
"name": "Content-Type",
"value": "text/csv",
"type": "text"
}
],
"body": {
"mode": "file",
"file": {}
As a quick test, I added the file to the same location as the Postman collection and updated the value, i.e. "file": {demo_file.csv}, but the file was not found when the collection was run using newman.
Question: Should the relative path be used?
First of all, for security reasons, the Postman runner doesn't support file uploads directly. You can find further details here.
Question. Why is it empty, is this a bug / known issue?
No, that is not a bug or a known issue; it is by design.
Question: Should the relative path be used?
If your file is in the same location as the collection, you just need to give the file name without braces, as follows:
"mode": "file",
"file": "demo_file.csv"
Related
I need to upload files with users' document information, just like so:
{
  "documents": [
    {
      "documentType": "string",
      "file": "byte[]"
    }
  ],
  "requestId": "string"
}
How do I do this in Postman?
I've tried many ways, such as:
also changing the key to just "file" + file, and "documents[0]" + file, and "documents[0][file]".
Use documents.file instead of documents[file] to access the content. For example:
Postman Key-Values Screenshot
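As a rough sketch of how those form-data rows might be laid out, mirroring the dotted-key style from the answer (all key names and values here are illustrative placeholders, not a confirmed API contract):

```python
# Hypothetical Postman form-data rows, using dotted keys (documents.file)
# rather than bracketed ones (documents[file]) to address the nested fields.
form_fields = {
    "requestId": "12345",                      # plain text field (example value)
    "documents.documentType": "passport",      # text field for the document type
    "documents.file": "/path/to/passport.pdf", # file-type field; Postman sends the bytes
}
for key, value in form_fields.items():
    print(key, "->", value)
```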
I am attempting to find the true file size, as suggested by the response to a previous inquiry.
The response says to use GET projects/:project_id/folders/:folder_id/contents to find the hub_id and file_id of files on the BIM 360 Docs hub.
Whenever I use the bucket id retrieved from GET projects/:project_id/folders/:folder_id/contents, the message "Bucket not found" is returned.
How to find the bucket_id of files on BIM 360 docs?
For example, this is how my file information reads at GET https://developer.api.autodesk.com/data/v1/projects/:project_id/folders/:folder_id/contents
"type": "items",
"id": "urn:adsk.wipprod:dm.lineage:9tyhLmL0Q0--a3otlFxQZw",
"attributes": {
"displayName": "EQIX-BIM360 TEST-GH_R20.rvt",
"createTime": "2021-02-24T02:33:25.0000000Z",
"createUserId": "9EW2J6HXXX7M",
"createUserName": "Mario Lopez",
"lastModifiedTime": "2021-02-24T02:33:48.0000000Z",
"lastModifiedUserId": "9EW2J6HP9R7M",
"lastModifiedUserName": "Mario Lopez",
"hidden": false,
"reserved": false,
"extension": {
"type": "items:autodesk.bim360:C4RModel",
"version": "1.0.0",
"schema": {
"href": "https://developer.api.autodesk.com/schema/v1/versions/items:autodesk.bim360:C4RModel-1.0.0"
},
"data": {}
Which part of the above data is to be used as the bucketKey and objectKey for GET https://developer.api.autodesk.com/oss/v2/buckets/:bucketKey/objects/:objectKey/details, in order to get the true file size?
You've omitted the required part of the GET folder contents response from your question.
Keep in mind that you'll need to get the id from included/relationships/storage/data/id of a specific file version, as you can find here.
With this id obtained, please refer here to understand better how you can retrieve the bucketKey.
You can also find it under included/relationships/storage/meta/href, but in this case, you'll find the value as "https://developer.api.autodesk.com/oss/v2/buckets/<bucket key>/objects/...
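Once you have that storage id, the bucketKey and objectKey can be split out of it directly. A small Python sketch, assuming the storage id follows the usual urn:adsk.objects:os.object:<bucketKey>/<objectKey> shape (the sample values below are made up):

```python
def parse_storage_urn(storage_id: str):
    """Split a BIM 360 storage id of the form
    urn:adsk.objects:os.object:<bucketKey>/<objectKey>
    into the bucketKey and objectKey needed by the OSS details endpoint."""
    prefix = "urn:adsk.objects:os.object:"
    if not storage_id.startswith(prefix):
        raise ValueError("unexpected storage id format: " + storage_id)
    bucket_key, _, object_key = storage_id[len(prefix):].partition("/")
    return bucket_key, object_key

# Example storage id (hypothetical values)
bucket, obj = parse_storage_urn(
    "urn:adsk.objects:os.object:wip.dm.prod/9tyhLmL0Q0-sample.rvt")
print(bucket, obj)
```

The two values then slot into GET .../oss/v2/buckets/:bucketKey/objects/:objectKey/details.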
I am trying to run newman cli and specifying a collection where the body is read from file as below
"body": {
"mode": "file",
"file": {
"src": "body.json"
}
}
and passing a CSV data file but for some reason newman is not replacing the parameterised body with the data from the CSV.
I have a json file containing the body of my request
{
"name": "{{name}}
}
and a CSV file containing the following
name
Pete
Joe
So my parameterised body is stored in a file, and I want to use a data file to populate the requests in the run, which will execute two iterations with the values in my CSV file.
Can this be done somehow?
Assuming that other parts of the collection file are correct the body should be like below:
"body": {
"mode": "formdata",
"formdata": [
{
"key": "file",
"type": "file",
"src": "body.json"
}
]
}
The body.json file should live in the same directory as the collection; otherwise, the full path is required.
This is basically the JSON representation of what you see in the Postman UI. You can provide any key you'd like, but the type should be file.
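If you specifically need the mode: "file" body with per-iteration values from the CSV, one workaround is to pre-render one body per CSV row before invoking newman. A hedged Python sketch (the render helper and its {{name}} placeholder handling are my own assumption, not newman behaviour):

```python
import csv
import io
import re

def render(template: str, row: dict) -> str:
    """Replace {{name}}-style placeholders with values from one CSV row,
    leaving unknown placeholders untouched."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: row.get(m.group(1), m.group(0)), template)

template = '{"name": "{{name}}"}'   # contents of body.json
data = "name\nPete\nJoe\n"          # contents of the CSV data file
bodies = [render(template, row) for row in csv.DictReader(io.StringIO(data))]
print(bodies)
```

Each rendered body could then be written to its own file, or sent in its own run.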
I am trying to use Google Cloud Vision API.
I am using the REST API in this link.
POST https://vision.googleapis.com/v1/files:asyncBatchAnnotate
My request is
{
"requests": [
{
"inputConfig": {
"gcsSource": {
"uri": "gs://redaction-vision/pdf_page1_employment_request.pdf"
},
"mimeType": "application/pdf"
},
"features": [
{
"type": "DOCUMENT_TEXT_DETECTION"
}
],
"outputConfig": {
"gcsDestination": {
"uri": "gs://redaction-vision"
}
}
}
]
}
But the response always contains only a "name", like below:
{
"name": "operations/a7e4e40d1e1ac4c5"
}
My "gs" location is valid.
When I write a wrong path in "gcsSource", a 404 Not Found error is returned.
Does anyone know why my response is weird?
This is expected; it will not send you the output as an HTTP response. To see what the API did, you need to go to your destination bucket and check for a file named "xxxxxxxxoutput-1-to-1.json". Also, you need to specify the name of the object in your gcsDestination section, for example: gs://redaction-vision/test.
Since asyncBatchAnnotate is an asynchronous operation, it won't return the result, it instead returns the name of the operation. You can use that unique name to call GetOperation to check the status of the operation.
Note that there could be more than 1 output file for your pdf if the pdf has more pages than batchSize and the output json file names change depending on the number of pages. It isn't safe to always append "output-1-to-1.json".
Make sure that the uri prefix you put in the output config is unique because you have to do a wildcard search in gcs on the prefix you provide to get all of the json files that were created.
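That wildcard search can be sketched as a simple prefix-plus-pattern match over the object names returned by a GCS listing. The vision_output_shards helper and the sample object names below are illustrative:

```python
from fnmatch import fnmatch

def vision_output_shards(object_names, prefix):
    """Pick out the asyncBatchAnnotate result shards written under a given
    gcsDestination prefix. The output-N-to-M.json naming follows the
    batching described above; exact names vary with page count and batchSize."""
    pattern = prefix + "output-*-to-*.json"
    return [name for name in object_names if fnmatch(name, pattern)]

# Hypothetical listing of the destination bucket's objects
objects = [
    "test/output-1-to-2.json",
    "test/output-3-to-4.json",
    "other/readme.txt",
]
print(vision_output_shards(objects, "test/"))
```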
I know it's possible to put a limit on the file size with the content-length-range header. But is it possible to validate the file type?
https://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTForms.html#PolicyConditions
I see there is a Content-Type header; if I set this to, say, audio/mp3, would that only allow MP3 files and return an error if the file is not an mp3?
I found this previous question but the answers only mention validating the file size: s3 direct upload restricting file size and type
You can specify the Content-Type in your POST request.
You can also specify it in your signature, so the POST must be done with that Content-Type:
{ "expiration": "2007-12-01T12:00:00.000Z",
"conditions": [
{"acl": "public-read" },
{"bucket": "johnsmith" },
{"Content-Type: "audio/mp3"}
]
}
Creating an HTML Form (Using AWS Signature Version 4)
Edit: in the previous question that you found, they are actually checking the Content-Type.
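To make the condition concrete, here is a sketch of building and base64-encoding that policy document, which is the form S3 expects before the policy is signed (the expiration and bucket values are the placeholders from the example above; the signing step itself is omitted):

```python
import base64
import json

# POST policy that pins the Content-Type, matching the example conditions
policy = {
    "expiration": "2007-12-01T12:00:00.000Z",
    "conditions": [
        {"acl": "public-read"},
        {"bucket": "johnsmith"},
        {"Content-Type": "audio/mp3"},
    ],
}
# S3 expects the JSON policy base64-encoded; the encoded string is what gets signed
encoded = base64.b64encode(json.dumps(policy).encode()).decode()
print(encoded)
```

Uploads whose Content-Type field doesn't match the pinned value are rejected by S3 with a policy-condition error.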