I'm trying to train an object detector based on the train_object_detector.cpp example from the dlib library. Here is the command and its output:
time ./train_object_detector -tv -u 0 --threads 4 --flip data/training.xml
Loading image dataset from metadata file data/training.xml
Number of images loaded: 78
objective: 45.1205
objective gap: 45.1167
risk: 45.1167
risk gap: 45.1167
num planes: 3
iter: 1
...
objective: 3.44535
objective gap: 0.00889926
risk: 2.99319
risk gap: 0.00889926
num planes: 60
iter: 157
Saving trained detector to object_detector.svm
Testing detector on training data...
Test detector (precision,recall,AP): 1 0 0
Parameters used:
threads: 4
C: 1
eps: 0.01
target-size: 6400
detection window width: 80
detection window height: 80
upsample this many times : 0
trained using left/right flips.
real 3m17.072s
user 9m54.928s
sys 0m4.328s
Does Test detector (precision,recall,AP): 1 0 0 mean that true positives = 0 and false positives = 0?
Also, when I apply the detector to any image from the training set, it can't detect any objects (Number of detections: 0).
How can I fix this?
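For reference, this is roughly how I apply the detector (a minimal sketch using dlib's Python bindings; the image path is a placeholder for one of my training images):

import dlib

# Load the detector trained above and run it on one training image
detector = dlib.simple_object_detector("object_detector.svm")
img = dlib.load_rgb_image("data/images/example.jpg")  # placeholder path
dets = detector(img)
print("Number of detections:", len(dets))  # prints 0 on every image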
I have a solution for an MIP problem on a graph which works fine and gives the following output when I run it on smaller graphs. I'm using the Gurobi solver through Pyomo.
Problem:
- Name: x73
Lower bound: 192.0
Upper bound: 192.0
Number of objectives: 1
Number of constraints: 10
Number of variables: 37
Number of binary variables: 36
Number of integer variables: 36
Number of continuous variables: 1
Number of nonzeros: 37
Sense: minimize
Solver:
- Status: ok
Return code: 0
Message: Model was solved to optimality (subject to tolerances), and an optimal solution is available.
Termination condition: optimal
Termination message: Model was solved to optimality (subject to tolerances), and an optimal solution is available.
Wall time: 0.03206682205200195
Error rc: 0
Time: 0.09361410140991211
Solution:
- number of solutions: 0
number of solutions displayed: 0
But I am getting the following error while running the code with larger graphs.
ERROR: Solver (gurobi) returned non-zero return code (137)
ERROR: Solver log:
Using license file /opt/shared/gurobi/gurobi.lic
Set parameter TokenServer to value gurobi.lm.udel.edu
Set parameter TSPort to value 40100
Read LP format model from file /tmp/tmpaud9ogrn.pyomo.lp
Reading time = 0.01 seconds
x1101: 56 rows, 551 columns, 551 nonzeros
Changed value of parameter TimeLimit to 600.0
   Prev: inf  Min: 0.0  Max: inf  Default: inf
Gurobi Optimizer version 9.0.1 build v9.0.1rc0 (linux64)
Optimize a model with 56 rows, 551 columns and 551 nonzeros
Model fingerprint: 0xafe0319a
Model has 15400 quadratic objective terms
Variable types: 1 continuous, 550 integer (550 binary)
Coefficient statistics:
  Matrix range     [1e+00, 1e+00]
  Objective range  [0e+00, 0e+00]
  QObjective range [4e+00, 8e+01]
  Bounds range     [1e+00, 1e+00]
  RHS range        [1e+00, 1e+00]
Found heuristic solution: objective 22880.000000
Presolve removed 1 rows and 1 columns
Presolve time: 0.01s
Presolved: 55 rows, 550 columns, 550 nonzeros
Presolved model has 15400 quadratic objective terms
Variable types: 0 continuous, 550 integer (550 binary)
Root simplex log...
Iteration Objective Primal Inf. Dual Inf. Time
  130920    9.8490000e+02   1.610955e+03   0.000000e+00      5s
  263917    1.0999000e+03   1.710649e+03   0.000000e+00     10s
  397157    1.0999000e+03   2.243077e+03   0.000000e+00     15s
  529512    1.0999000e+03   1.910603e+03   0.000000e+00     20s
  662404    1.0999000e+03   1.584650e+03   0.000000e+00     25s
  791296    1.0999000e+03   1.812443e+03   0.000000e+00     30s
  906473    1.3475000e+03   0.000000e+00   0.000000e+00     34s
Root relaxation: objective 1.347500e+03, 906473 iterations, 34.32 seconds
    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
H 0 0 1730.0000000 0.00000 100% - 52s
H 0 0 1654.0000000 0.00000 100% - 52s
H 0 0 1578.0000000 0.00000 100% - 52s
0 0 1347.50000 0 137 1578.00000 1347.50000 14.6% - 52s
0 0 1347.50000 0 137 1578.00000 1347.50000 14.6% - 53s
H 0 0 1540.0000000 1347.50000 12.5% - 53s
0 2 1347.50000 0 145 1540.00000 1347.50000 12.5% - 55s
101 118 1396.92351 10 140 1540.00000 1347.50000 12.5% 157 61s
490 591 1416.40484 18 128 1540.00000 1347.50000 12.5% 63.6 65s
2136 2347 1440.09938 42 100 1540.00000 1347.50000 12.5% 42.9 70s
3847 3402 1461.55736 81 80 1540.00000 1347.50000 12.5% 37.0 82s
/opt/shared/gurobi/9.0.1/bin/gurobi.sh: line 17: 23890 Killed                  $PYTHONHOME/bin/python3.7 "$@"
Traceback (most recent call last):
File "/home/2925/EdgeColoring/main.py", line 91, in <module>
qubo_coloring, qubo_time = qubo(G, colors, edge_list, solver)
File "/home/2925/EdgeColoring/qubo.py", line 59, in qubo
result = solver.solve(model)
File "/home/2925/.conda/envs/qubo/lib/python3.9/site-packages/pyomo/opt/base/solvers.py", line 596, in solve
raise ApplicationError(
pyomo.common.errors.ApplicationError: Solver (gurobi) did not exit normally
Setting TimeLimit up to 2 minutes stops the model early without any error, but it doesn't always give an optimal solution for larger graphs. Memory and processing power are not an issue here. I need the code to run without interruption for at least 10 minutes, if not for hours.
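For context, this is roughly how the solver is invoked (a minimal sketch of the Pyomo call; model is the Pyomo model built in qubo.py, not shown here):

from pyomo.environ import SolverFactory

# Sketch of the solve call that gets killed on larger graphs
solver = SolverFactory('gurobi')
solver.options['TimeLimit'] = 600  # the run above was killed before reaching this limit
result = solver.solve(model, tee=True)  # model: the Pyomo model built elsewhere
print(result.solver.termination_condition)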
I am trying to use the ZeroR algorithm in Weka to establish a baseline performance for my classification problem. However, Weka is displaying weird results for precision and F-measure: it shows a question mark '?' instead of a number. Does anyone know how I can fix this?
=== Classifier model (full training set) ===
ZeroR predicts class value: label 1
Time taken to build model: 0 seconds
=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances 431 53.607 %
Incorrectly Classified Instances 373 46.393 %
Kappa statistic 0
Mean absolute error 0.4974
Root mean squared error 0.4987
Relative absolute error 100 %
Root relative squared error 100 %
Total Number of Instances 804
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure MCC ROC Area PRC Area Class
0.000 0.000 ? 0.000 ? ? 0.488 0.457 label 0
1.000 1.000 0.536 1.000 0.698 ? 0.488 0.530 label 1
Weighted Avg. 0.536 0.536 ? 0.536 ? ? 0.488 0.496
=== Confusion Matrix ===
a b <-- classified as
0 373 | a = label 0
0 431 | b = label 1
It's not wrong. Note that you have no cases classified as "a", so precision (etc.) is indeterminate for "a". Evidently Weka propagates incalculable values (like Excel does), so the overall precision isn't calculated either.
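You can reproduce the '?' directly from the confusion matrix above: the column of cases predicted as "a" sums to zero, so precision for "a" is 0/0. A minimal sketch in Python:

# Confusion matrix from the output above: rows = actual class, columns = predicted
matrix = {"a": {"a": 0, "b": 373},
          "b": {"a": 0, "b": 431}}

for cls in ("a", "b"):
    tp = matrix[cls][cls]
    predicted = sum(row[cls] for row in matrix.values())  # column sum
    precision = tp / predicted if predicted else None     # None plays the role of Weka's '?'
    print(cls, precision)

This prints None for "a" and 431/804 ≈ 0.536 for "b", matching the table.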
Your real problem here is that you have a model that is classifying everything as "b", which is unlikely to be useful. But that's ZeroR, so that's just your starting point.
I want to measure the height and width of each individual pole in pixels.
The poles don't always stand straight, but I need the height of each pole from the horizontal ground. Can anyone guide me on how to handle this?
Note: I might also need the angle each pole is slanted at later on. I'm not sure I can ask so many questions here, but I would greatly appreciate any help.
The sample image I have is at the link below:
This should give you a good idea how to do it:
#!/usr/local/bin/python3
import cv2

# Open image in greyscale mode
img = cv2.imread('poles.png', cv2.IMREAD_GRAYSCALE)

# Threshold image to pure black and white AND INVERT, because findContours
# looks for WHITE objects on a black background
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)

# Find contours (OpenCV 3.x returns 3 values; in OpenCV 4.x, findContours
# returns just contours and hierarchy, so drop the leading underscore)
_, contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Print the bounding box of each contour
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(x, y, w, h)
The output is this, where each line corresponds to one vertical bar in your image:
841 334 134 154 <--- bar 6 is 154 pixels tall
190 148 93 340 <--- bar 2 is 340 pixels tall
502 79 93 409 <--- bar 4 is 409 pixels tall
633 55 169 433 <--- bar 5 is 433 pixels tall
1009 48 93 440 <--- bar 7 is 440 pixels tall
348 48 93 440 <--- bar 3 is 440 pixels tall
46 46 93 442 <--- bar 1 is 442 pixels tall (leftmost bar)
The first column is the distance from the left edge of the image and the last column is the height of the bar in pixels.
As you seem unsure about whether you want to do this in Python or C++, you may prefer not to write any code at all - in which case you can simply use ImageMagick, which is included in most Linux distros and is available for macOS and Windows.
Basically, you use "Connected Component" analysis by typing this into the Terminal:
convert poles.png -colorspace gray -threshold 50% \
-define connected-components:verbose=true \
-connected-components 8 null:
Output
Objects (id: bounding-box centroid area mean-color):
0: 1270x488+0+0 697.8,216.0 372566 srgb(255,255,255)
1: 93x442+46+46 92.0,266.5 41106 srgb(0,0,0)
2: 93x440+348+48 394.0,267.5 40920 srgb(0,0,0)
3: 93x440+1009+48 1055.0,267.5 40920 srgb(0,0,0)
4: 169x433+633+55 717.3,271.0 40269 srgb(0,0,0)
5: 93x409+502+79 548.0,283.0 38037 srgb(0,0,0)
6: 93x340+190+148 236.0,317.5 31620 srgb(0,0,0)
7: 134x154+841+334 907.4,410.5 14322 srgb(0,0,0)
That gives you a header line which tells you what all the fields are, then a line for each of the blobs it found in the image. Disregard the first one because that is the white background - you can see that from the last field, which is srgb(255,255,255).
So, if we look at the last line, it is a blob that is 134 pixels wide and 154 pixels tall, starting at x=841 and y=334 from the top-left corner, i.e. it corresponds to the first contour that OpenCV found.
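If you later need the slant angle, one option (a sketch building on the OpenCV contours found above; the ImageMagick route doesn't report angles) is cv2.minAreaRect, which fits a rotated rectangle to each contour and returns its rotation:

# For each contour, fit a rotated rectangle: ((cx, cy), (w, h), angle)
for c in contours:
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)
    print(f"centre=({cx:.0f},{cy:.0f}) size={w:.0f}x{h:.0f} angle={angle:.1f} deg")

The axis-aligned box from boundingRect gives the vertical extent above the ground, while the rotated rectangle's longer side gives the length along the pole itself.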
I am trying to train a Haar cascade using the opencv_traincascade executable in OpenCV 3.1.0.
For the moment I want to do this using only one positive image, even though the result will be inconsistent, in order to be sure that I am passing the right parameters to opencv_createsamples and opencv_traincascade.
bg.txt content:
negatives/img_04436_c1.pgm
negatives/img_04437_c1.pgm
Negatives resolution: width: 1176 height: 640
Positives resolution: width: 40 height: 70
I am using the following command parameters:
For opencv_createsamples:
./opencv_createsamples -img img_04569_c1.pgm -vec samples -bg bg.txt -maxxangle 0.1 -maxyangle 0.1 -maxzangle 0.1 -w 40 -h 70 -num 30
Info file name: (NULL)
Img file name: img_04569_c1.pgm
Vec file name: samples.vec
BG file name: bg.txt
Num: 30
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 0.1
Max y angle: 0.1
Max z angle: 0.1
Show samples: FALSE
Width: 40
Height: 70
Create training samples from single image applying distortions...
Open background image: negatives/img_04436_c1.pgm
Done
For opencv_traincascade:
./opencv_traincascade -data cascade -vec samples -bg bg.txt -w 40 -h 70 -numPos 30 -numStages 1 -numNeg 2
PARAMETERS:
cascadeDirName: cascade
vecFileName: samples.vec
bgFileName: bg.txt
numPos: 30
numNeg: 2
numStages: 2
precalcValBufSize[Mb] : 1024
precalcIdxBufSize[Mb] : 1024
acceptanceRatioBreakValue : -1
stageType: BOOST
featureType: HAAR
sampleWidth: 40
sampleHeight: 70
boostType: GAB
minHitRate: 0.995
maxFalseAlarmRate: 0.5
weightTrimRate: 0.95
maxDepth: 1
maxWeakCount: 100
mode: BASIC
===== TRAINING 0-stage =====
<BEGIN
POS count : consumed 28 : 28
*** Error in `./opencv_traincascade': double free or corruption (out): 0x00000000016749b0 ***
Aborted (core dumped)
My problem is the following:
I am able to create the samples.vec file.
When I run opencv_traincascade I get the following error:
*** Error in `./opencv_traincascade': double free or corruption (out): 0x0000000001e0e9b0 ***
Sometimes I also get a Segmentation Fault error.
I tried resizing the negatives to a lower resolution and was then able to generate the xml file, but when I try to use it nothing happens (the classifier gets stuck and runs continuously without returning any rectangles).
I want to use my negatives at their original size.
Can anybody help me solve this problem?
If more details are required please leave a comment and I will update my question.
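In case it helps diagnose this, here is a quick sanity check (a sketch that assumes the standard .vec layout written by opencv_createsamples: a 12-byte header beginning with a little-endian int32 sample count, then the per-sample vector size) to confirm how many samples actually ended up in the vec file, since the log shows only 28 of the requested 30 being consumed:

import struct

# Read the .vec header: sample count, then pixels per sample (w*h)
with open('samples.vec', 'rb') as f:
    count, vec_size = struct.unpack('<ii', f.read(8))

print('samples in vec :', count)     # expected 30 here
print('pixels/sample  :', vec_size)  # expected 40*70 = 2800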
I'm referring to http://abhishek4273.wordpress.com/2014/02/10/opencv-haar-training/ to create a Haar classifier in OpenCV, and I got the following error when running:
$ perl createtrainsamples.pl positives.txt negatives.txt samples 200 "opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 maxzangle 0.5 -maxidev 40 -w 20 -h 20"
opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 maxzangle 0.5 -maxidev 40 -w 20 -h 20 -img ./pos/5.jpg -bg tmp -vec samples/5.jpg.vec -num 40
Info file name: (NULL)
Img file name: ./pos/5.jpg
Vec file name: samples/5.jpg.vec
BG file name: tmp
Num: 40
BG color: 0
BG threshold: 0
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 20
Height: 20
Create training samples from single image applying distortions...
OpenCV Error: Bad argument (Quadrangle is nonconvex or degenerated.) in cvWarpPerspective, file /home/project/OpenCV/opencv-2.4.9/apps/haartraining/cvsamples.cpp, line 217
terminate called after throwing an instance of 'cv::Exception'
what(): /home/project/OpenCV/opencv-2.4.9/apps/haartraining/cvsamples.cpp:217: error: (-5) Quadrangle is nonconvex or degenerated. in function cvWarpPerspective
The above error is generated for all the samples in the positives folder.
Even though it generates the error, it has created 8 .jpg.vec files in the samples folder.
I actually passed 200 as my number of samples, but it is creating only 8 of them in the samples folder and throwing the above error. If I try to merge them using
$./mergevec samples.txt samples.vec
it shows the following error:
OpenCV Error: Assertion failed (elements_read == 1) in icvGetHaarTraininDataFromVecCallback, file cvhaartraining.cpp, line 1859
terminate called after throwing an instance of 'cv::Exception'
what(): cvhaartraining.cpp:1859: error: (-215) elements_read == 1 in function icvGetHaarTraininDataFromVecCallback
Aborted (core dumped)
If anyone knows the answer to this, please do post it.