While working with Postman, data.someVariable returns the value of that variable from a CSV data file; the same variable can also be used as {{someVariable}} in the URI or JSON body.
This gives us the data for that variable from the current row/iteration.
Is there a mechanism to write back to the data file, by doing something like postman.setData('responseCode') = responseCode?
This would be really helpful for storing the response code in the data file and for recording call-wise details in the same CSV format as the input.
The only solution I have figured out is:
1) to populate JSON objects in the environment with the data file name and the structure/values of the information to be added
2) to create a separate web service (maybe in node.js) that exposes an HTTP call to write to a file, takes as a parameter a JSON input like the one built in the environment above, and writes it to a file / the original data file (or a copy of it) in the desired format (a rough sketch follows after this list)
3) to call the above-mentioned web service at the end of each run, or after each desired REST call, to generate a step-wise information/debug report
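A rough sketch of what such a write-back service could look like (shown in Python/Flask rather than node.js, purely for illustration; the endpoint name, JSON fields and port below are made up):

import csv
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/append-row', methods=['POST'])
def append_row():
    # Expects a JSON body like {"file": "data.csv", "row": {"someVariable": "x", "responseCode": 200}}
    payload = request.get_json()
    with open(payload['file'], 'a', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=list(payload['row'].keys()))
        writer.writerow(payload['row'])
    return jsonify({'status': 'ok'})

if __name__ == '__main__':
    app.run(port=5000)

The run (or a final request in it) would then POST the JSON object assembled in the environment to this endpoint.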
There is no way to write back to the data file in Postman as of now.
However, you can capture that value in your environment at run time using
pm.environment.set("varname", value)
Name varname in such a way that you can tell it is the value you wanted to write back into the data file, for example pm.environment.set("responseCode_" + pm.info.iteration, responseCode).
I'm having a Django stack process issue and am wondering if there's a way to request user input.
To start with, the user is loading Sample data (Oxygen, CHL, Nutrients, etc.) which typically comes from an excel file. The user clicks on a button indicating what type of sample is being loaded on the webpage and gets a dialog to choose the file to load. The webpage passes the file to python via VueJS/Django where python passes the file down to the appropriate parser to read that specific sample type. The file is processed and sample data is written to a database.
Issues (e.g. a sample ID is outside the expected range of IDs because it was keyed in wrong when the sample was taken) get passed back to the webpage as well as written to the database to tell the user when something happened:
(e.g. "No bottle exists for sample with ID 495619, expected range is 495169 - 495176" or "356 is an unreasonable salinity, check data for sample 495169"). Maybe there are fifty samples without bottle IDs because the required bottle file wasn't loaded before the sample data. Generally you have one big 10L bottle with water in it; the ocean depth (pressure) where the bottle was closed and the bottle ID are in a bottle file, samples are placed into different vials with that bottle's unique ID, and the vials are run through different machines and tests to produce the sample files.
My issue occurs when the user picks a file that contains data that has already been loaded. If the filename matches the existing file the data was loaded from, I clear the data associated with that file and reload the database with the new data. Sometimes, though, the data is spread over several files that were already loaded, and the uploader will merge all the files, including some that weren't uploaded, together.
A protocol for uploading data is for the uploader to append/prepend their initials onto a copy of a file if corrections were made and not to modify the original file; a chain of custody. Sometimes a file can be modified by multiple people and each person will create a copy and append/prepend their initials so people will know who all touched the data. (I don't make the rules I just work with what I have)
So we get all the way back to the parser and it discovers that the data already exists (for a given sample ID), but the filename is different. At this point I want to ask the user: do you want to reload all the data loaded from the other file, update the existing data with the new file, or ignore the existing data and only append new data?
Is there a way for Django to make a request to the webpage to ask the user how it should handle this data without having to terminate the current request (the request the webpage is waiting on for a response saying whether the data was loaded and what errors might have been found)?
My current thoughts are to:
Ask the user before every file upload how a collision should be handled, if it happens
Or
Abort the data load and pass an error with a code back to the webpage; the error code indicates to the webpage that the user has to decide what to do. Upon the user answering, the load process is restarted with the same file, but with a flag to tell the parser what to do when the issue is encountered again.
Nothing is written to the database until a whole file has been read, so there is no problem aborting the process and restarting if the parser doesn't know what to do, but I feel like there might be a better way.
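A bare-bones sketch of the second option (the view, flag name, exception and parser entry point below are illustrative only, not my actual code):

from django.http import JsonResponse

class DataCollisionError(Exception):
    # Hypothetical exception the parser raises when sample IDs already exist
    # under a different filename.
    def __init__(self, existing_filename):
        self.existing_filename = existing_filename

def upload_sample_file(request):
    # 'collision_action' is a hypothetical flag the webpage sends on retry:
    # 'reload', 'update' or 'append'. It is absent on the first attempt.
    action = request.POST.get('collision_action')
    try:
        parse_and_load(request.FILES['file'], collision_action=action)  # hypothetical parser call
    except DataCollisionError as e:
        # Nothing has been written yet, so abort and tell the page to ask the user.
        return JsonResponse({'error_code': 'COLLISION',
                             'existing_file': e.existing_filename},
                            status=409)
    return JsonResponse({'status': 'loaded'})

The webpage would catch the COLLISION error code, ask the user which action to take, and re-submit the same file with collision_action set.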
I have a use case where I want to read the filename from a metadata table. I have written a pipeline function to read the metadata table, but I am not sure how I can pass this information to ReadFromText, as it only takes a string as input. Is it possible to assign this value to ReadFromText()? Please suggest some workarounds or ideas for how to achieve this. Thanks.
code: pipeline | 'Read from a File' >> ReadFromText(I want to pass the file path here?,
skip_header_lines=1)
Note: there will be various folders and files in storage, and the files are in CSV format, but in my use case I can't directly pass the storage location or filename to the file path in ReadFromText. I want to read it from the metadata table and pass that value. Hope I am clear. Thanks.
I don't understand why you need to read the metadata. If you want to read all the files inside a folder, you can just provide a glob (wildcard) pattern. This solution works in Python; I'm not sure about Java.
p | ReadFromText("./folder/*.csv")
"*" is the wildcard here, which lets the pipeline read every file matching *.csv. You can also add a prefix at the start of the pattern.
What you want is textio.ReadAllFromText, which reads file patterns from a PCollection instead of taking a string directly.
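A minimal sketch of that approach (the metadata read is faked with beam.Create and a hypothetical file_path field; replace it with your own metadata-table transform):

import apache_beam as beam
from apache_beam.io.textio import ReadAllFromText

def paths_from_metadata(row):
    # Placeholder: pull the file path out of one metadata record.
    return [row['file_path']]

with beam.Pipeline() as p:
    lines = (
        p
        # Stand-in for the metadata-table read.
        | 'Fake metadata' >> beam.Create([{'file_path': 'gs://my-bucket/folder/file1.csv'}])
        | 'To file paths' >> beam.FlatMap(paths_from_metadata)
        | 'Read files' >> ReadAllFromText(skip_header_lines=1)
    )

Each element emitted by the metadata step becomes a file pattern that ReadAllFromText expands and reads.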
I have prepared some reports based on the files I loaded. I am wondering: is it possible to save this report (measures and visualizations) and also the steps I made while transforming the data? I want to be able to load new files (which have the same structure as the ones I used when creating my report) and have the data transformation and report applied automatically to this updated data.
Is it possible?
You can save it as a template (file extension .pbit). It saves only the structure of the file, without the actual data. When you open the template it refreshes the data, and if there are parameters in the report (e.g. a folder/file path or server address) it will prompt for the input values and refresh accordingly.
You can read more here:
https://learn.microsoft.com/en-us/power-bi/create-reports/desktop-templates
You can simply save your file as a template.
I am trying to test a web service's performance and am having a few issues with using and passing variables. There are multiple sequential requests which depend on data coming from a previous response. Each request needs to be Base64-encoded and placed in a SOAP envelope namespace before being sent to the endpoint. The endpoint returns an encoded response, which needs to be decoded to see the XML values that are needed for the next request. What I have done so far is:
1) Beanshell PreProcessor added to the first sampler to encode the payload, which is read from a file.
2) Regex to pull the encoded response bit out of the whole response.
3) Beanshell PostProcessor to decode the response and write it to a file (just in case). I have stored the decoded response in a variable 'Output' and I know this works since it writes the response to the file correctly.
4) After this, I have added 4 regex extractors and tried various things, such as applying them to different parts, checking different fields, checking a JMeter variable, etc. However, it doesn't seem to work.
This is what my tree looks like:
[Screenshot: JMeter test plan tree]
I am storing the decoded response in the 'Output' variable like this, and it works since it writes to the file properly:
import org.apache.commons.io.FileUtils;
import org.apache.commons.codec.binary.Base64;

// Grab the Base64-encoded chunk extracted by the regex and decode it into
// a new JMeter variable called "response".
String Createresponse = vars.get("Createregex");
vars.put("response", new String(Base64.decodeBase64(Createresponse.getBytes("UTF-8"))));
Output = vars.get("response");

// Write the decoded response to a file by redirecting the interpreter's output.
f = new FileOutputStream("filepath/Createresponse.txt");
p = new PrintStream(f);
this.interpreter.setOut(p);
print(Output);
f.close();
And this is how I am using the regex after that; I have tried different options:
[Screenshot: Regular Expression Extractor settings]
Unfortunately though, the regex is not picking up these values from the 'Output' variable. I basically need them saved so I can use ${docID} in the payload file for the next request.
Any help on this is appreciated! Also happy to provide more detail if needed.
EDIT:
I had a follow-up question. I am trying to run this with multiple users. I have a field ${searchuser} in my payload XML file, which is called in the pre-processor here.
The CSV Data Set Config above it looks like this:
[Screenshot: CSV Data Set Config]
However, it is not picking up the values from the CSV and substituting them in the payload file. Any help is appreciated!
You have 2 problems with your Regular Expression Extractor configuration:
Apply to: needs to be the JMeter Variable response
Field to check: needs to be Body; Body as a Document is used for binary file formats like PDF or Word.
By the way, you can do Base64 decoding and encoding using the __base64Decode() and __base64Encode() functions available via JMeter Plugins. The plugins, in their turn, can be installed in one click using the Plugin Manager.
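For example, assuming the plugin's two-argument form (value to decode, name of the variable to store the result in), the Beanshell decode step above could be replaced with something like:
${__base64Decode(${Createregex},response)}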
I have a Talend job which is simple, like below:
tS3Connection -> tS3Get -> tFileInputDelimited -> tMap -> tAmazonMysqlOutput
Now the scenario here is that sometimes I get the file in .txt format and sometimes I get it in a zip file.
So I want to use tFileUnarchive to unzip the file if it is zipped, or to bypass the tFileUnarchive component if the file is already unzipped, i.e. only in .txt format.
Any help on this is greatly appreciated.
The trick here is to break the file retrieval and potential unzipping into one subjob, and then do the processing of the files in another subjob afterwards.
Here's a simple example job:
As normal, you connect to S3 and then you might list all the relevant objects in the bucket using the tS3List and then pass this to tS3Get. Alternatively you might have another way of passing the relevant object key that you want to download to tS3Get.
In the above job I set tS3Get up to fetch every object that is iterated on by the tS3List component by setting the key as:
((String)globalMap.get("tS3List_1_CURRENT_KEY"))
and then downloading it to:
"C:/Talend/5.6.1/studio/workspace/S3_downloads/" + ((String)globalMap.get("tS3List_1_CURRENT_KEY"))
The extra bit I've added starts with a Run If conditional link from the tS3Get, which links to the tFileUnarchive with the condition:
((String)globalMap.get("tS3List_1_CURRENT_KEY")).endsWith(".zip")
Which checks to see if the file being downloaded from S3 is a .zip file.
The tFileUnarchive component then just needs to be told what to unzip, which will be the file we've just downloaded:
"C:/Talend/5.6.1/studio/workspace/S3_downloads/" + ((String)globalMap.get("tS3List_1_CURRENT_KEY"))
and where to extract it to:
"C:/Talend/5.6.1/studio/workspace/S3_downloads"
This then puts any extracted files in the same place as the ones that didn't need extracting.
From here we can now iterate through the downloads folder with a tFileList component, looking for the file types we want, by setting the directory to "C:/Talend/5.6.1/studio/workspace/S3_downloads" and the global expression to "*.csv" (in my case, as I wanted to read in only the CSV files, including the previously zipped ones, that I had in S3).
Finally, we then read the delimited files by setting the file to be read by the tFileInputDelimited component as:
((String)globalMap.get("tFileList_1_CURRENT_FILEPATH"))
And in my case I simply then printed this to the console but obviously you would then want to perform some transformation before uploading to your AWS RDS instance.