How to start youtube-dl autonumber from a number other than 1?

I was downloading a playlist from YouTube using youtube-dl. I used the autonumbering feature to number the videos, which can be done by formatting the output filename as follows: -o "%(autonumber)s-%(title)s.%(ext)s". The download failed partway through. Now I want the autonumbering to start from the video after the failed one, not from 1, but autonumber resets itself to 1 every time. How can I set it to a number greater than 1?

For playlists you should use playlist_index instead: -o '%(playlist_index)s-%(title)s.%(ext)s'

Instead of %(playlist_index)s, you can use --autonumber-start: youtube-dl --autonumber-start 5 $URL -o "%(autonumber)s_stuff"
This starts %(autonumber)s at 5 in this case.
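To resume the failed download and keep the numbering consistent, the two flags can be combined so that both the download and the counter start at the same playlist position. A sketch, assuming the download died at video 42 (the playlist URL is a placeholder):
youtube-dl --playlist-start 42 --autonumber-start 42 -o "%(autonumber)s-%(title)s.%(ext)s" PLAYLIST_URL
--playlist-start tells youtube-dl which playlist entry to begin downloading from, while --autonumber-start only affects the value used for %(autonumber)s.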

Related

exiftool shows incorrect Duration for MP3. How does that happen?

Downloaded seven MP3 files from a website; exiftool says the Duration of the first is two minutes.
Opened it in an audio editor and found that it is actually four minutes.
Opened a (non-downloaded) MP3 file in the same editor; its duration is neither two nor four minutes.
Copied all the audio from the downloaded file and pasted it over the other file's audio. The editor shows the other file changing to four minutes.
exiftool shows that the second file has a duration of four minutes.
Same behavior (with different numbers) for the other six downloaded files. The first one is the only one where the difference was approximately a factor of two (so it's not stereo vs. mono).
Is Duration an ID3 tag that can be falsified, as opposed to being measured from the actual audio?
There should be an (approx) in the exiftool output after the Duration value. Duration is not an embedded tag; it's a value that exiftool calculates on the fly. If you add the -G (-groupNames) option to your command, you'll see that it is part of the Composite tag group. If you check the listing there, you'll see the tags that exiftool uses to calculate Duration; it's most likely the combination that includes ID3Size and MPEG:AudioBitrate.
exiftool doesn't read and parse the stream data, which the audio editor does, so the editor gets a more accurate result. Odds are there is something incorrect about the header of your file.
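As a quick way to see this (a sketch; the filename is a placeholder), request the tags together with their group names:
exiftool -G -Duration -ID3Size -AudioBitrate downloaded.mp3
Duration should be listed under [Composite], unlike the embedded values it is derived from.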
Related post on the exiftool forums.

Partial results on Weka attribute selection

When I run PCA in the Weka GUI using "Select attributes", I don't get complete results, only partial results with dots at the end.
0.8205 1 -0.493Capacity at 10th Cycle-0.483Capacity at 5th Cycle-0.473Capacity at 50th Cycle-0.261S [M]in Electrolyte -0.256C wt %...
Is there any way to solve this particular issue?
By default, a maximum of 5 attribute names are included in the generated names.
If you want all of them, use -1 for the -A option (or maximumAttributeNames property in the GOE).
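The same setting can be used outside the GUI as well. A sketch of a command-line run (data.arff is a placeholder; the exact general options may vary with your Weka version):
java weka.attributeSelection.PrincipalComponents -i data.arff -s weka.attributeSelection.Ranker -A -1
With -A -1, the generated component names list every attribute instead of being cut off after the first 5.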

Multiple iterations in the htmlextra report with newman

I am fairly new to using newman and I am trying to figure out how exactly to create multiple iterations within one report.
I cannot find the htmlextra.js file anywhere locally on my laptop (Windows 10) to change the field described at: https://hub.docker.com/r/dannydainton/htmlextra
Can anyone please help me out on how to add more than 1 iteration to a collection for the reporter?
Thank you very much and sorry to bother you all with this basic question, but I just cannot figure it out.
Iterations are set through newman, not the htmlextra report; you can set the iteration count with the -n flag:
newman run collection.json -n 5 -r htmlextra
This will run the collection 5 times.
https://www.npmjs.com/package/newman lists all newman-specific flags, and
https://www.npmjs.com/package/newman-reporter-htmlextra lists all htmlextra-specific flags:
-n, --iteration-count   Specifies the number of times the collection has to be run when used in conjunction with iteration data file.
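If you are also feeding the run from an iteration data file, that file is supplied separately with -d (--iteration-data), while -n sets the number of iterations. A sketch with placeholder file names:
newman run collection.json -n 5 -d data.csv -r htmlextra
The htmlextra report generated from this run then covers all iterations of the collection.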

sd_journal_send to send binary data. How can I retrieve the data using journalctl?

I'm looking at systemd-journal as a method of collecting logs from external processors. I'm very interested in its ability to collect binary data when necessary.
I'm simply testing and investigating journal right now. I'm well aware there are other, probably better, solutions.
I'm logging binary data like so:
#include <systemd/sd-journal.h>
#include <string>

// strData is a std::string containing binary data
strData += '\0';
sd_journal_send(
    "MESSAGE=test_msg",
    "MESSAGE_ID=12345",
    "BINARY=%s", strData.c_str(),
    NULL);
The log line shows up when using the journalctl tool. I can find the log line like this from the terminal:
journalctl MESSAGE_ID=12345
I can get the binary data of all logs in journal like so from the terminal:
journalctl --field=BINARY
I need to get the binary data into a file so that I can access it from a program and decode it. How can I do this?
This does not work:
journalctl --field=BINARY MESSAGE_ID=12345
I get this error:
"Extraneous arguments starting with 'MESSAGE_ID=1234567890987654321"
Any suggestions? The documentation on systemd-journal seems slim. Thanks in advance.
You just got the wrong option. See the docs for:
-F, --field=
Print all possible data values the specified field can take in all entries of the journal.
vs
--output-fields=
A comma separated list of the fields which should be included in the output.
You also have to specify the plain output format (-o cat) to get the raw content:
journalctl --output-fields=BINARY MESSAGE_ID=12345 -o cat
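Since the goal is to access the payload from a program, it may also be simpler to skip journalctl entirely and read the field back through the sd-journal C API. A minimal sketch, reusing the MESSAGE_ID and BINARY field names from the question (error handling mostly omitted; link with -lsystemd):

#include <stdio.h>
#include <string.h>
#include <systemd/sd-journal.h>

int main(void)
{
    sd_journal *j;
    const void *data;
    size_t length;

    if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
        return 1;

    // Only look at the entries logged with our message id.
    sd_journal_add_match(j, "MESSAGE_ID=12345", 0);

    SD_JOURNAL_FOREACH(j) {
        // data points at "BINARY=<payload>"; length covers the whole field.
        if (sd_journal_get_data(j, "BINARY", &data, &length) >= 0) {
            const char *field = (const char *) data;
            size_t prefix = strlen("BINARY=");
            fwrite(field + prefix, 1, length - prefix, stdout);
        }
    }

    sd_journal_close(j);
    return 0;
}

Redirecting this program's stdout to a file gives you the raw bytes to decode.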

Weka NominalToBinary makes test and training sets incompatible

So I have training and testing sets that contain multi-valued nominal attributes. Since I need to train and test a NaiveBayesMultinomial classifier, which doesn't support multi-valued nominal attributes, I do the following:
java weka.filters.supervised.attribute.NominalToBinary -i train.arff -o train_bin.arff -c last
java weka.filters.supervised.attribute.NominalToBinary -i test.arff -o test_bin.arff -c last
Then I run this:
java weka.classifiers.bayes.NaiveBayesMultinomial -t train_bin.arff -T test_bin.arff
And the following error arises:
Weka exception: Train and test files not compatible!
As far as I can tell from examining both .arff files, they became incompatible after I ran NominalToBinary: since the train and test sets are different, different binary attributes are generated.
Is it possible to perform the NominalToBinary conversion in a way that keeps the sets compatible?
Concatenate the two sets into one, perform the NominalToBinary conversion, then split them again. This way, they should be normalized the same way.
But are you sure the files were compatible before? Or does your test and/or training set perhaps contain attribute values that the other doesn't have?
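As an alternative to concatenating and re-splitting by hand, Weka filters also support batch mode, where the filter is built on the first file and then applied unchanged to the second. A sketch reusing the file names from the question (-b, -r and -s are Weka's standard batch-filtering options):
java weka.filters.supervised.attribute.NominalToBinary -c last -b -i train.arff -o train_bin.arff -r test.arff -s test_bin.arff
Both output files then share the same set of binary attributes, so the classifier's compatibility check should pass.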