Passing test data to requests for a Postman monitor

I run my collection using test data from a CSV file. However, there is no option to upload the test data file when adding a monitor for the collection. Searching the internet, I could see that the test data file has to be provided as a URL (saved in the cloud, e.g. Google Drive), but I couldn't find a source explaining how to provide this URL to the collection. Can anyone please help?

https://www.postman.com/praveendvd-public/workspace/postman-tricks-and-tips/request/8296678-d06b3fc0-6b8b-4370-9847-aee0f526e7db
You cannot use a CSV file in a monitor, but you can store the contents of the CSV as a variable and use that to drive the monitor. An example can be seen in the public workspace linked above.
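As a rough sketch of that approach (the variable name csvData, the column handling, and the simple comma splitting are assumptions for illustration, not details taken from the linked workspace): paste the CSV contents into a collection variable, then parse it in a pre-request script and expose the current row's values to the request.

// Pre-request script: read the CSV text stored in a collection variable (assumed name: csvData)
const csvData = pm.collectionVariables.get("csvData");
const [headerLine, ...rows] = csvData.trim().split("\n");
const headers = headerLine.split(",");

// Pick the row matching the current iteration, wrapping around if there are more iterations than rows
const row = rows[pm.info.iteration % rows.length].split(",");

// Expose each column as a local variable so the request can use {{columnName}} placeholders
headers.forEach((name, i) => pm.variables.set(name.trim(), row[i].trim()));

Since a monitor runs with the collection's variables, the CSV text only has to be pasted into that variable once; the script then drives each iteration much as the Collection Runner would with a data file.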

Related

Correct way to fetch data from an AWS server into a Flutter app?

I have a general understanding question. I am building a Flutter app that relies on a content library containing text files, LaTeX equations, images, PDFs, videos, etc.
The content lies on an AWS Amplify backend. Depending on the user's navigation in the app, the corresponding data is fetched and displayed.
I am not sure about the correct way of fetching the data. The current method (which works) is that the data is stored in an S3 bucket. When data is requested, it is downloaded to a temporary directory and then opened and processed in the app. This is actually not slow, but I feel that it is not the way it should be done.
When data is downloaded, a file transfer notification pops up, which bothers me because it is shown all the time. Also, I would like to read the data directly with something like a GET request, without downloading the file first (especially for text files, which I would like to read directly into a String). But here I don't know how it works, because I don't see that you can save data in a file system with the other Amplify services like DataStore or the REST API. Also, the S3 bucket is an intuitive way of storing data that is easy to use for the content creators at my company, so it seems to me that the S3 bucket is the way to go. However, with S3 I have only figured out the download method to fetch data.
Could someone give me a hint on what is the correct approach for this use case? Thank you very much!

Azure Data Factory HDFS dataset preview error

I'm trying to connect to HDFS from ADF. I created a folder and a sample file (ORC format) and put it in the newly created folder.
Then in ADF I successfully created a linked service for HDFS using my Windows credentials (the same user that was used for creating the sample file):
But when trying to browse the data through the dataset:
I'm getting an error: "The response content from the data store is not expected, and cannot be parsed."
Is there something I'm doing wrong, or is it some kind of permissions issue?
Please advise.
This appears to be a generic issue: you need to point to a file with an appropriate extension rather than to the folder itself. Also make sure you are using a supported data store and activity.
You can follow the official MS doc on using an HDFS server with Azure Data Factory.

Why did the Transfer in GCP fail on a CSV file, and where is the error log?

I am testing out the transfer function in GCP:
This is the open data in csv, https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2018-financial-year-provisional/Download-data/annual-enterprise-survey-2018-financial-year-provisional-csv.csv
My configuration in GCP:
The transfer failed as below:
Question 1: why the transfer failed?
Question 2: where is the error log?
Thank you very much.
[UPDATE]:
I checked the log history; nothing was captured:
[Update 2]:
Error details:
Details: First line in URL list must be TsvHttpData-1.0 but it is: Year,Industry_aggregation_NZSIOC,Industry_code_NZSIOC,Industry_name_NZSIOC,Units,Variable_code,Variable_name,Variable_category,Value,Industry_code_ANZSIC06
I noticed that in the transfer service, if you choose the third option for the source, it reads the URL of a TSV file. Essentially, TSV and PSV are just variants of CSV, and I have no problem retrieving the source CSV file. The error details seem to imply that something unexpected is there.
The problem is that in your example, you are pointing to a data file as the source of the transfer. If we read the documentation on GCS Transfer, we find that we must specify a file which contains the identity of the target URL that we want to copy.
The format of this file is Tab-Separated Values (TSV), and it contains a number of parameters, including:
The URL of the source of the file.
The size in bytes of the source file.
An MD5 hash of the content of the source file.
What you specified (just the URL of the source file) ... is not what is required.
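As a hedged illustration of that URL list format (the byte size and MD5 below are placeholders, not the real values for that file, and the fields on the data line must be separated by actual tab characters), the file you point the transfer at would look something like this:

TsvHttpData-1.0
https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2018-financial-year-provisional/Download-data/annual-enterprise-survey-2018-financial-year-provisional-csv.csv    123456    1B2M2Y8AsgTpgAmY7PhCfg==

You would host this small TSV file somewhere reachable over HTTP and give the transfer service its URL, rather than the URL of the CSV itself.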
One possible solution would be to use gsutil. It has an option of taking a stream as input and writing that stream to a given object. For example:
curl http://[URL]/[PATH] | gsutil cp - gs://[BUCKET]/[OBJECT]
References:
Creating a URL list
Can I upload files to google cloud storage from url?

How to pass CSV data in a Postman collection so the monitor can pick it up

I have a situation where I have added my test data in a CSV sheet. I have also created automated tests in JS, and I pass this CSV data to the Collection Runner at run time. It works fine with Postman locally, but when I run these test cases on a Postman monitor (https://monitor.getpostman.com/) they fail, surely because the CSV file is unavailable there. Is there any way I can pass my CSV to monitor mode?
Unfortunately, there is no such functionality in place right now. You can use CSV files with collection runs and directly via the Newman CLI tool, as described here.
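For comparison, a data-driven run outside of monitors looks roughly like this with Newman (collection.json and data.csv are placeholder file names for your exported collection and your test data):

newman run collection.json --iteration-data data.csv

The --iteration-data (or -d) option is the part that currently has no equivalent when the collection runs on a monitor.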
You can file a feature request for this and look at the current product roadmap here.

.csv upload not working in Amazon Web Services Machine Learning (AWS)

I have uploaded a simple 10-row CSV file (via S3) into the AWS ML website. It keeps giving me the error:
"We cannot find any valid records for this datasource."
There are records there, and the Y variable is continuous (not binary). I am pretty much stuck at this point because there is only one button to move forward to build the machine learning model. Does anyone know what I should do to fix it? Thanks!
The only way I have been able to upload .csv files I created on my own to S3 is by downloading an existing .csv file from my S3 bucket, modifying the data, uploading it, and then changing the name in the S3 console.
Could you post the first few lines of the contents of the .csv file? I am able to upload my own .csv file along with a schema that I have created, and it is working. However, I did have issues in that Amazon ML was unable to create the schema for me.
Also, did you try to save the data in something like Sublime, Notepad++, etc. in order to get a different format? On my Mac with Microsoft Excel, the CSV did not work, but when I tried LibreOffice on Windows, the same file worked perfectly.
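As a hedged example of what a minimally valid input looks like (this is made-up sample data, not the asker's file), Amazon ML expects plain comma-separated values, one record per line, with ordinary Unix or Windows line endings rather than the old Mac-style carriage-return-only endings that Excel on a Mac can reportedly produce:

x1,x2,y
1.2,3.4,10.5
2.0,1.1,7.3
0.7,5.6,4.2

If a file saved from Excel on a Mac fails with "no valid records" but the same data saved from LibreOffice or Notepad++ works, the line endings are a likely culprit.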