I have a csv file that contains the following:
value
one
two
three
four
five
I also have a variable in my Postman test script that takes the value from each row and stores it in {{variableValue}}, so the variable is updated on each of the 5 iterations.
Postman has a nice UI that I can use to import the input.csv file I'm using -- but how do I tell newman to import this file?
You can use the -d <source> or --iteration-data <source> flag from the command line.
https://github.com/postmanlabs/newman/blob/develop/README.md#command-line-options
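For example, assuming the collection has been exported as collection.json and the CSV above is saved as input.csv (both file names are illustrative):

newman run collection.json -d input.csv

With a 5-row data file, newman should run 5 iterations, and each row's value column should be available to requests and scripts as {{value}}.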
I am fairly new to using newman and I am trying to figure out how exactly to create multiple iterations within one report.
I cannot find the htmlextra.js file anywhere locally on my laptop (Win 10), so I cannot just change the field described at https://hub.docker.com/r/dannydainton/htmlextra
Can anyone please help me out on how to add more than 1 iteration to a collection for the reporter?
Thank you very much and sorry to bother you all with this basic question, but I just cannot figure it out.
The iteration count is set through newman, not the htmlextra reporter; you can set it with the -n flag.
newman run collection.json -n 5 -r htmlextra
This will run the collection 5 times.
https://www.npmjs.com/package/newman lists all newman-specific flags, and
https://www.npmjs.com/package/newman-reporter-htmlextra lists all htmlextra-specific flags.
-n, --iteration-count: specifies the number of times the collection has to be run when used in conjunction with an iteration data file.
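Putting the two threads together (file names again illustrative), a data-driven run with the htmlextra report would be:

newman run collection.json -d input.csv -r htmlextra

With no -n given, the iteration count should default to the number of rows in the data file, so the report shows one iteration per row.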
I am building a command-line program using the argparse module and have been trying to design two separate, mutually exclusive groups of arguments that perform completely different tasks. I decided to separate the two sets of arguments by creating subparsers, and I have tried to follow the formatting specified in the following link (https://pymotw.com/2/argparse/) as well as numerous Stack Overflow threads, but whenever I try to run the script with one of the subparsers in the terminal, an AttributeError is always raised.
My code is set up in the following way (note: I have slightly condensed and simplified the following from my original code for the purpose of brevity):
import argparse

def check_input_file(filename):
    ### validates input file
    return validated_file

if __name__ == "__main__":
    parser = argparse.ArgumentParser(prog="MyProg")
    parser.description = "This gives an overview of the two different workflows A and B, what they can do, and what arguments they require or can use."
    A_or_B_subparsers = parser.add_subparsers(help="A processes a certain input to create a .csv spreadsheet, B takes an existing .csv spreadsheet and sends out a certain number to be printed")
    A_parser = A_or_B_subparsers.add_parser("A", help="A needs the name of an existing input file to make the .csv spreadsheet")
    A_parser.add_argument("-input", required=True, type=check_input_file, help="validates input file as being suitable for creating the .csv spreadsheet")
    B_parser = A_or_B_subparsers.add_parser("B", help="B will tell the computer to print a certain number of existing spreadsheets in a certain format")
    B_parser.add_argument("-num", type=int, default=4, help="number of times to print the existing .csv spreadsheet")
    B_parser.add_argument("-csv", help="specify the existing .csv spreadsheet that must be formatted and then printed")
    args = parser.parse_args()
    if args.A:
        input = open(args.input, 'r')
        ### create .csv spreadsheet of input file in working directory
    if args.B:
        x_number = args.num
        file = args.csv
        ### format existing .csv spreadsheet
        ### print .csv spreadsheet file x_number of times
Now, if I try to run this code in the terminal with, for example, the following command, I get the following error:
$python MyProg_user_interface.py A -input someinputfilename.txt
AttributeError: 'Namespace' object has no attribute 'B'
How can I run my command line program so that only one subparser (and its required arguments) can run at a time?
UPDATE
After having found this source (https://allthingstechilike.blogspot.co.uk/2013/07/python-argparse-example-using-subparser.html), I decided to set dest='mode' in the line where A_or_B_subparsers = parser.add_subparsers(help="A processes a certain input blah blah blah blah") so that, depending on whether subcommand A or B was called on the command line, only the arguments required by that subcommand would have to be typed in.
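For reference, a minimal sketch of that change (the rest of the script stays as in the condensed example above):

# Giving the subparsers a dest stores the name of the chosen
# subcommand ("A" or "B") on the Namespace as args.mode.
A_or_B_subparsers = parser.add_subparsers(dest="mode", help="A processes a certain input to create a .csv spreadsheet, B takes an existing .csv spreadsheet and sends out a certain number to be printed")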
I subsequently modified my conditional tree after the line args = parser.parse_args() to look like this:
if args.mode == "A":
    input = open(args.input, 'r')
    ### create .csv spreadsheet of input file in working directory
elif args.mode == "B":
    x_number = args.num
    file = args.csv
    ### format existing .csv spreadsheet
    ### print .csv spreadsheet file x_number of times
else:
    argparse.ArgumentError("too few arguments")
However, this modification does not seem to fix the problem. Although subcommand A runs fine, subcommand B refuses to run at all. Does anyone know if this is because of how my code is set up, or because of another internal problem?
I am running the sample.sh script in Google Cloud Shell to call the preprocessing below on a set of images, following the steps of the flowers example.
https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/flowers/trainer/preprocess.py
Preprocessing ran successfully on both the eval set and the train set, but the generated .tfrecord.gz files do not seem to match the number of images in eval/train_set.csv.
e.g. eval-00000-of-00157.tfrecord.gz suggests there are 158 tfrecords, while there are 35227 rows in eval_set.csv. Each record includes a valid image_url (all of them have been uploaded to Storage), and each record has a valid label tagged.
I would like to know if there is a way to monitor and control the number of images per tfrecord in the preprocess.py config.
Thanks
Update: got this worked out right:

import os
import tensorflow as tf
from tensorflow.python.lib.io import file_io

options = tf.python_io.TFRecordOptions(
    compression_type=tf.python_io.TFRecordCompressionType.GZIP)
# url_or_path is the Storage directory holding the .tfrecord.gz files
sum(1 for f in file_io.get_matching_files(os.path.join(url_or_path, '*.tfrecord.gz'))
    for example in tf.python_io.tf_record_iterator(f, options=options))
The filename eval-00000-of-00157.tfrecord.gz means that this is the first of 157 files; there should be 157 similarly named files, numbered eval-00000-of-00157 through eval-00156-of-00157. Within each file, there can be any number of records.
If you want to manually count each record, try something like:
import os
import tensorflow as tf
from tensorflow.python.lib.io import file_io

# the files are GZIP-compressed, so the iterator needs matching options
options = tf.python_io.TFRecordOptions(
    compression_type=tf.python_io.TFRecordCompressionType.GZIP)
files = os.path.join('gs://my_bucket/my_dir', 'eval-*.tfrecord.gz')
print(sum(1 for f in file_io.get_matching_files(files)
          for _ in tf.python_io.tf_record_iterator(f, options=options)))
Note that there is no guarantee from Dataflow as to the relationship between the number of files and ordering of records (inter- and intra-file) between input files and output files. However, the counts should be the same.
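If the goal is to monitor how many images end up in each shard (as asked above), a per-file variant of the same counting approach should work; the bucket path is illustrative:

import os
import tensorflow as tf
from tensorflow.python.lib.io import file_io

options = tf.python_io.TFRecordOptions(
    compression_type=tf.python_io.TFRecordCompressionType.GZIP)
files = os.path.join('gs://my_bucket/my_dir', 'eval-*.tfrecord.gz')
for f in file_io.get_matching_files(files):
    # count the records in each shard individually
    n = sum(1 for _ in tf.python_io.tf_record_iterator(f, options=options))
    print(f, n)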
I have a script that generates command-line invocations of rrdtool xport based on input provided in a domain-specific language. This works well until the number of DEFs in the command line exceeds a certain number; it seems to be around 50. At that point the command simply returns without any output or error information.
Is there a limit on the number of DEFs in rrdtool xport? If so, can it be raised or circumvented?
The issue turned out to be the character limit on the command line sent to the shell via Python's os.system method call. The issue can be worked around by creating a temporary executable script, writing the command line to the script and executing it.
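A minimal sketch of that workaround (run_long_command is an illustrative name; the command string itself comes from the DSL-driven generator):

import os
import stat
import tempfile

def run_long_command(command_line):
    # Write the overlong rrdtool xport invocation to a temporary
    # shell script, then execute the script instead of the raw command.
    with tempfile.NamedTemporaryFile('w', suffix='.sh', delete=False) as script:
        script.write('#!/bin/sh\n')
        script.write(command_line + '\n')
        path = script.name
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)  # make it executable
    try:
        return os.system(path)  # same call the generator already uses
    finally:
        os.remove(path)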
I've got an old RRD file that was only set up to track 1 year of history. I decided more history would be nice. I did rrdtool resize, and the RRD is now bigger. I've got old backups of this RRD file and I'd like to merge the old data in so that the up-to-date RRD also has the historical data.
I've tried the rrd contrib "merged-rrd.py" but it gives:
$ python merged-rrd.py ../temperature-2010-12-06.rrd ../temperature-2011-05-24.rrd merged1.rrd
merging old:../temperature-2010-12-06.rrd to new:../temperature-2011-05-24.rrd. creating merged rrd: merged1.rrd
Traceback (most recent call last):
  File "merged-rrd.py", line 149, in <module>
    mergeRRD(old_path, new_path, mer_path)
  File "merged-rrd.py", line 77, in mergeRRD
    odict = getXmlDict(oxml)
  File "merged-rrd.py", line 52, in getXmlDict
    cf = line.split()[1]
IndexError: list index out of range
Also tried "rrd_merger.pl":
$ perl rrd_merger.pl --oldrrd=../temperature-2010-12-06.rrd --newrrd=../temperature-2011-05-24.rrd --mergedrrd=merged1.rrd
Dumping ../temperature-2010-12-06.rrd to XML: /tmp/temperature-2010-12-06.rrd_old_8615.xml
Dumping ../temperature-2011-05-24.rrd to XML: /tmp/temperature-2011-05-24.rrd_new_8615.xml
Parsing ../temperature-2010-12-06.rrd XML......parsing completed
Parsing ../temperature-2011-05-24.rrd XML...
Last Update: 1306217100
Start processing Round Robin DB
Can't call method "text" on an undefined value at rrd_merger.pl line 61.
at rrd_merger.pl line 286
at rrd_merger.pl line 286
Is there a tool to combine or merge RRDs that works?
I ended up putting together a really simple script that works well enough for my case, by examining the existing Python script.
http://gist.github.com/2166343
This fixed rrd_merger.pl for me:
< my $xff = $new_rra->first_child( 'xff' )->text;
---
> my $xff = $new_rra->first_child_text( 'xff' );
From the XML::Twig documentation:
first_child_text ($optional_condition)
    Return the text of the first child of the element, or of the first child matching the $optional_condition. If there is no first_child, then returns ''. This avoids getting the child, checking for its existence, then getting the text, for trivial cases.
The rrdmerge.pl utility, included with Routers2 in the /extras directory, can do this. Get the latest version of Routers2 from http://www.steveshipway.org/software/rrd/
This is a utility I wrote for the purpose of merging multiple archived MRTG RRD files, which sounds exactly like the situation you are describing.
This is probably too late for the OP but will hopefully be useful to people who come here later. It can merge any RRD files, even ones with different DSs, RRAs, or intervals; it can generate XML or RRD output; and it will pick the best available data from the component RRD files to build the output.
Example:
rrdmerge.pl --rrd --libpath $RRDLIBDIR --output /tmp/merge.rrd --rows 12000 $FROMDIR/file.rrd $ARCHIVE/*.rrd
Looking at the XML file generated by rrdtool, there is a simple logic error in the Perl script. The <cf> and <pdp_per_row> elements are simple enough, but the <xff> tag is contained within a <params> tag, with the text inside:
<cf> AVERAGE </cf>
<pdp_per_row> 1 </pdp_per_row> <!-- 300 seconds -->
<params>
  <xff> 5.0000000000e-01 </xff>
</params>
The parsing just has to be tweaked a bit; once it is working, the fix should be fed back here (where it is easy to Google) and also to the script's author.