opencv_traincascade: Unspecified error (No element name has been given) - c++

I'm trying to train my own cascade, but I get the following error:
Unspecified error (No element name has been given) in cv::operator <<, file C:\builds\2_4_PackSlave-win64-vc11-shared\opencv\modules\core\include\opencv2/core/operations.hpp, line 2910
I took these steps:
1. Cropped 20 photos of the object so that only the desired object remained.
2. Resized them to 30x18.
3. Made an objectSamples.dat file like this:
object(1).jpg 1 0 0 30 18
object(10).jpg 1 0 0 30 18
object(11).jpg 1 0 0 30 18
and a negatives.dat like:
negatives\1.jpeg
negatives\10.jpg
negatives\11.jpg
(the size of these pictures is about 500x500)
4. Made the vec file:
opencv_createsamples -info objectSamples.dat -vec objectSamples.vec -w 30 -h 18 -num 20
5. Showed the samples (my pictures are displayed in full):
opencv_createsamples -vec objectSamples.vec -w 30 -h 18
6. Tried to train:
opencv_traincascade -data Cascade -vec objectSamples.vec -bg negatives.dat -numPos 10 -numNeg 10 -numStages 2 -featureType HAAR -w 30 -h 18
But then I get the error quoted above. What am I doing wrong?
I read these articles and the answers, but I still don't understand what the problem is:
trouble-when-use-opencv_traincascadeexe
haartraining tutorial
docs.opencv traincascade
I increased the number of images to 1000 positives and 2000 negatives:
opencv_traincascade -data Cascade -vec boobsSamples.vec -bg negativesBig/negatives.txt -numPos 400 -numNeg 1000 -numStages 2 -featureType HAAR -w 30 -h 18 -mode ALL
I still get the same error.

Problem solved!
I had copied opencv_traincascade.exe into the folder with my images. When I instead called opencv_traincascade.exe by its full path in the OpenCV build folder, the problem disappeared:
F:\OpenCV\opencv\build\x64\vc11\bin\opencv_traincascade -data Cascade -vec positives.vec -bg negativesBig/negatives.txt -numPos 400 -numNeg 1000 -numStages 2 -featureType HAAR -w 30 -h 18 -mode ALL

Related

Compress gifs with vips on the command line

I'm trying to use libvips to compress my GIFs, but I can't find the relevant documentation.
# vips --version
vips-8.12.2-Tue Jan 25 09:34:32 UTC 2022
This GIF is very big and I want to try to compress it with vips:
# ls -ahl *.gif
-rw-r--r--. 1 root root 34M Jul 4 10:25 a.gif
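No answer is attached to this question here; as a hedged sketch only, one common approach with a cgif-enabled libvips such as 8.12 is to re-save the animation through gifsave with a smaller palette. The option names below are assumptions to verify against your own build (running vips gifsave with no arguments lists what it supports):
# list the options this build's GIF saver supports (names vary between libvips versions)
vips gifsave
# re-save every frame ([n=-1] loads all pages of the animation); --bitdepth 7 halves the
# palette and is an assumption - drop it if your build does not support that option
vips gifsave 'a.gif[n=-1]' a-small.gif --bitdepth 7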

no .vec files after createsamples OpenCV - Haar Training

I need your help :) Today I trained positive and negative images for Haar detection. I used the command below:
opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt -numStages 20 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 1000 -numNeg 600 -w 80 -h 40 -mode ALL -precalcValBufSize 1024 -precalcIdxBufSize 1024
(from CodingRobin) in my terminal, and it seems to finish with "Done".
But there are no .vec files in the samples directory. Can anyone help me?
opencv_traincascade.exe only uses the .vec file; it is opencv_createsamples.exe that creates it, so you must call that first.
The command line should look like:
opencv_createsamples.exe -info positives.txt -vec samples.vec -w 24 -h 24 -num 4455
with your own number of samples and width/height.
positives.txt should look like (each line):
#path #numberOfObjects #xObj1 #yObj1 #widthObj1 #heightObj1 #xObj2 #...
for example:
image1.png 1 0 0 84 84
image2.jpg 1 100 130 128 128
image3.png 2 10 30 50 50 300 100 101 101
etc.
After that you can train by calling opencv_traincascade with your parameter list and you'll get a .xml file as result.
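As a rough sketch of that follow-up training call (the numbers below are illustrative, not taken from the question): the -w and -h passed to opencv_traincascade must match the ones used with opencv_createsamples, and -numPos should be somewhat lower than the number of samples in the .vec, since each stage consumes a few extra positives:
# illustrative values only; keep -w/-h identical to the createsamples call above
opencv_traincascade.exe -data classifier -vec samples.vec -bg negatives.txt -numPos 4000 -numNeg 1000 -numStages 20 -w 24 -h 24
Note that the folder given to -data must already exist before the run.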

Opencv Train Cascade error

I am trying to train a haar cascade using opencv_traincascade executable in opencv 3.1.0.
For the moment I want to do this using only one positive image, even though the result will be inconsistent, in order to be sure that I am passing the right parameters to opencv_createsamples and opencv_traincascade.
bg.txt content:
negatives/img_04436_c1.pgm
negatives/img_04437_c1.pgm
Negatives resolution: width: 1176 height: 640
Positives resolution: width: 40 height: 70
I am using the following command parameters:
For opencv_createsamples:
./opencv_createsamples -img img_04569_c1.pgm -vec samples -bg bg.txt -maxxangle 0.1 -maxyangle 0.1 -maxzangle 0.1 -w 40 -h 70 -num 30
Info file name: (NULL)
Img file name: img_04569_c1.pgm
Vec file name: samples.vec
BG file name: bg.txt
Num: 30
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 0.1
Max y angle: 0.1
Max z angle: 0.1
Show samples: FALSE
Width: 40
Height: 70
Create training samples from single image applying distortions...
Open background image: negatives/img_04436_c1.pgm
Done
For opencv_traincascade:
./opencv_traincascade -data cascade -vec samples -bg bg.txt -w 40 -h 70 -numPos 30 -numStages 1 -numNeg 2
PARAMETERS:
cascadeDirName: cascade
vecFileName: samples.vec
bgFileName: bg.txt
numPos: 30
numNeg: 2
numStages: 2
precalcValBufSize[Mb] : 1024
precalcIdxBufSize[Mb] : 1024
acceptanceRatioBreakValue : -1
stageType: BOOST
featureType: HAAR
sampleWidth: 40
sampleHeight: 70
boostType: GAB
minHitRate: 0.995
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100
mode: BASIC
===== TRAINING 0-stage =====
<BEGIN
POS count : consumed 28 : 28
*** Error in `./opencv_traincascade': double free or corruption (out): 0x00000000016749b0 ***
Aborted (core dumped)
My problem is the following:
I am able to create the sample.vec file.
When I run opencv_traincascade I get the following error:
*** Error in `./opencv_traincascade': double free or corruption (out): 0x0000000001e0e9b0 ***
Sometimes I also get a Segmentation Fault error.
I tried resizing the negatives to a lower resolution; then I am able to generate the xml file, but when I try to use it nothing happens (the classifier gets stuck and runs continuously without returning any rectangles).
I want to use my original negatives size.
Can anybody help me solve this problem?
If more details are required please leave a comment and I will update my question.

Error while creating haar Classifier using OpenCV in Ubuntu 14.04

I'm referring to http://abhishek4273.wordpress.com/2014/02/10/opencv-haar-training/ to create a Haar classifier in OpenCV, and I get the error at this step:
$ perl createtrainsamples.pl positives.txt negatives.txt samples 200 "opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 maxzangle 0.5 -maxidev 40 -w 20 -h 20"
opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 maxzangle 0.5 -maxidev 40 -w 20 -h 20 -img ./pos/5.jpg -bg tmp -vec samples/5.jpg.vec -num 40
Info file name: (NULL)
Img file name: ./pos/5.jpg
Vec file name: samples/5.jpg.vec
BG file name: tmp
Num: 40
BG color: 0
BG threshold: 0
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 20
Height: 20
Create training samples from single image applying distortions...
OpenCV Error: Bad argument (Quadrangle is nonconvex or degenerated.) in cvWarpPerspective, file /home/project/OpenCV/opencv-2.4.9/apps/haartraining/cvsamples.cpp, line 217
terminate called after throwing an instance of 'cv::Exception'
what(): /home/project/OpenCV/opencv-2.4.9/apps/haartraining/cvsamples.cpp:217: error: (-5) Quadrangle is nonconvex or degenerated. in function cvWarpPerspective
The above error is generated for all the samples in the positives folder.
Even though it generates the error, it has created 8 .jpg.vec files in the samples folder.
Actually I passed 200 as my number of samples, but it creates only 8 of them in the samples folder and throws the above error. If I try to merge them using
$./mergevec samples.txt samples.vec
it shows the following error:
OpenCV Error: Assertion failed (elements_read == 1) in icvGetHaarTraininDataFromVecCallback, file cvhaartraining.cpp, line 1859
terminate called after throwing an instance of 'cv::Exception'
what(): cvhaartraining.cpp:1859: error: (-215) elements_read == 1 in function icvGetHaarTraininDataFromVecCallback
Aborted (core dumped)
If anyone knows the answer, please do post it.

parsing ns2 trace file [closed]

I'm using NS 2.35 and am trying to determine the end-to-end delay of my routing algorithm.
I think anyone with some good scripting experience should be able to answer this question, sadly that person is not me.
I have a trace file, that looks something like this:
- -t 0.548 -s 2 -d 7 -p cbr -e 500 -c 0 -i 1052 -a 0 -x {2.0 17.0 6 ------- null}
h -t 0.548 -s 2 -d 7 -p cbr -e 500 -c 0 -i 1052 -a 0 -x {2.0 17.0 -1 ------- null}
+ -t 0.55 -s 2 -d 7 -p cbr -e 500 -c 0 -i 1056 -a 0 -x {2.0 17.0 10 ------- null}
+ -t 0.555 -s 2 -d 7 -p cbr -e 500 -c 0 -i 1057 -a 0 -x {2.0 17.0 11 ------- null}
r -t 0.556 -s 2 -d 7 -p cbr -e 500 -c 0 -i 1047 -a 0 -x {2.0 17.0 1 ------- null}
+ -t 0.556 -s 7 -d 12 -p cbr -e 500 -c 0 -i 1047 -a 0 -x {2.0 17.0 1 ------- null}
- -t 0.556 -s 7 -d 12 -p cbr -e 500 -c 0 -i 1047 -a 0 -x {2.0 17.0 1 ------- null}
But here is what I need to do.
A line that starts with + is when a new packet is added to the network.
A line starting with r is when a packet has been received by the destination. The floating-point number after the -t is the time at which that event happened. And finally, after the -i is the identity of the packet.
For me to calculate average end-to-end delay, I need to find every line that has a certain id after the -i; from there I need to calculate the timestamp of the r minus the timestamp of the +.
So I figure the line could be split on spaces, each of the segments put into its own variable, and then I would check the 15th one (the packet ID); a short sketch of that idea follows below. But I'm not sure where to go from there, or how to put it all together.
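(A minimal sketch of that splitting idea: awk already splits each line on whitespace, so in the trace lines above $1 is the event type, $3 the time after -t, and $15 the packet id after -i. The name trace.file is the placeholder used in the answer further down.)
# print event, time and packet id for every enqueue (+) and receive (r) line
awk '$1 == "+" || $1 == "r" { print $1, $3, $15 }' trace.file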
I know there are some AWK scripts on the web for doing this, but they are all outdated and don't fit the current format (and I'm not sure how to change them).
Any help would be greatly appreciated.
EDIT:
Here is an example of a full packet route that I'm looking to find.
I've taken out a lot of lines in between these ones, so that you can see a single packet's events.
# a packet is enqueued from node 2 going to node 7. Its ID is 1636. This was at roughly 1.75 sec
+ -t 1.74499999999998 -s 2 -d 7 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 249 ------- null}
# at 2.1s, it left node 2.
- -t 2.134 -s 2 -d 7 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 249 ------- null}
# at 2.134 it hopped from 2 to 7 (not important)
h -t 2.134 -s 2 -d 7 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 -1 ------- null}
# at 2.182 it was received by node 7
r -t 2.182 -s 2 -d 7 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 249 ------- null}
# it was then enqueued by node 7 to be sent to node 12
+ -t 2.182 -s 7 -d 12 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 249 ------- null}
# slightly later it left node 7 on its way to node 12
- -t 2.1832 -s 7 -d 12 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 249 ------- null}
# it hopped from 7 to 12 (not important)
h -t 2.1832 -s 7 -d 12 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 -1 ------- null}
# received by 12
r -t 2.2312 -s 7 -d 12 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 249 ------- null}
# added to queue, heading to node 17
+ -t 2.2312 -s 12 -d 17 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 249 ------- null}
# left for node 17
- -t 2.232 -s 12 -d 17 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 249 ------- null}
# hopped to 17 (not important)
h -t 2.232 -s 12 -d 17 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 -1 ------- null}
# received by 17; notice the time delay
r -t 2.28 -s 12 -d 17 -p cbr -e 500 -c 0 -i 1636 -a 0 -x {2.0 17.0 249 ------- null}
The ideal output of the script would recognize 2.134 as the start time, and 2.28 as the end, and then give me the delay of 0.146sec. It would do this for all packet IDs and only report the average.
It was requested that I expand a bit on how the file works, and what I am expecting.
The file is listing descriptions of about 10,000 packets. Each packet can be in a different state. The important states are + which means a packet has been enqueued at a router, and r which means the packet has been received by its destination.
It is possible that a packet that is enqueued (so a + entry) is not actually received and is instead dropped. This means we cannot assume that for every + entry there will be an r entry.
What I'm trying to measure is the average end to end delay. What this means, is that if you look at a single packet, it will have a time it was enqueued, and a time it was received. I need to make this calculation to find its end-to-end delay. But I also need to do it for 9,999 other packets to get an average.
I've thought about it more, and here's generally how I think the algorithm needs to work:
remove all lines that don't begin with a + or an r because they are unimportant.
go through all of the packet IDs (that is the numbers after -i, such as 1052 in the example), and put them into some sort of groups (multiple arrays perhaps).
each group should now contain all of the information about a particular packet.
inside the group, check if there is a +, ideally we want the very first +. Record its time.
look for any more + lines and look at their time. It's possible the log is slightly jumbled, so there may be a + line later on that is actually earlier in the simulation.
If this new + line has an earlier time, then update the time variable with that.
assuming there are no more + lines, look for an r line.
if there is no r line, the packet was dropped so don't worry about it.
for every r line you find, all we need to do is find the one with the latest timestamp.
The r line with the latest timestamp is where the packet was finally received.
subtract the + time from the r time, this gives us the time it took for the packet to travel.
Add this value to an array so that later it can be averaged.
repeat this process on every packet ID group, and then finally average the created array of delays.
That's a lot of typing, but I think it's as clear as I can be about what I want. I wish I were a regex master, but I just don't have time to learn it well enough to pull this off.
Thanks for all your help, and let me know if you have any questions.
There's not much to work with here, as Iain said in the comments to your question, but if I understand what you want to do correctly, something like this should work:
awk '/^[+r]/{$1~/r/?r[$15]=$2:r[$15]?d[$15]=r[$15]-$2:1} END {for(p in d){sum+=r[p];num++}print sum/num}' trace.file
It skips all lines not starting with '+' or 'r'. If the line starts with 'r' it adds time to the r array. Otherwise, it calculates the delay and adds it to the d array if the element is found in the r array. Finally it loops over the elements in the d array, adds up the total delay and number of elements and calculates the average from this. In your case the average is 0.
The :1 at the end of the main block is just in there so I can get away with a ternary expression instead of the significantly more verbose if statement.
EDIT: New expression to work with the added conditions:
awk '/^[+r]/{$1~/r/?$3>r[$15]?r[$15]=$3:1:!a[$15]||$3<a[$15]?a[$15]=$3:1} END {for(i in r){sum+=r[i]-a[i];num++}print "Average delay", sum/num}'
Or, as an awk file:
/^[+r]/ {                                    # only '+' (enqueued) and 'r' (received) events matter
    if ($1 ~ /r/) {
        if ($3 > received[$15])              # keep the latest receive time per packet id ($15)
            received[$15] = $3;
    } else {
        if (!added[$15] || $3 < added[$15])  # keep the earliest enqueue time per packet id
            added[$15] = $3;
    }
}
END {
    for (packet in received) {               # dropped packets never appear in received[], so they are skipped
        sum += received[packet] - added[packet];
        num++
    }
    print "Average delay", sum/num
}
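If the awk file above is saved as, say, end2end.awk (a placeholder name), it can be run against the trace with:
# prints a single line of the form: Average delay <value>
awk -f end2end.awk trace.file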
According to your algorithm it seems like 1.745 would be the start time, while you write that 2.134 is.