How do I create my own classifier - C++

I am creating my own classifier for face detection. I have two folders, one for positive images and one for negative images, and I made .txt files for both. Now I want to create training samples from the positive images, so I run the command 'opencv_createsamples -info positives.txt -vec myvec.vec -w 24 -h 24'. But it prints the output below and doesn't create any samples. What is the reason? Could anyone help me? Thanks in advance.
Info file name: positives.txt
Img file name: (NULL)
Vec file name: myvec.vec
BG file name: (NULL)
Num: 1000
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 24
Height: 24
Create training samples from images collection...
positives.txt(1) : parse error
Done. Created 0 samples

The info file should contain not only file names but also ROI specifications.
Each line should look like this:
path/to/image.bmp num_rois x y width height x y width height ...
For example, if you have files that are exactly as big as the sample size, each line should be:
path/to/image.bmp 1 0 0 24 24
Note that the path to the image file should be relative to the location of the info file. Also, the default number of samples is 1000; if you want to include all the samples in your info file, you should specify their number on the command line (the -num option).
a good guide can be found on the opencv web site: http://docs.opencv.org/doc/user_guide/ug_traincascade.html#positive-samples
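If your positive images are already cropped to the sample size, a small script can generate the info file for you. Here is a minimal Python sketch; the directory name "positive" and the extensions are placeholders for your own layout:
import os

sample_w, sample_h = 24, 24
with open("positives.txt", "w") as info:
    for name in sorted(os.listdir("positive")):
        if name.lower().endswith((".bmp", ".jpg", ".png")):
            # one ROI covering the whole image; the path must be
            # relative to the location of the info file
            info.write(f"positive/{name} 1 0 0 {sample_w} {sample_h}\n")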

Related

How can I generate a square wave plot of a pulse train of multiple signals from the data in a csv file (in Linux)?

For instance, given the data in a text file:
10:37:18.459 1
10:37:18.659 0
10:37:19.559 1
How could this be displayed as an image that looked like a square wave that correctly represented the high time and low time? I am trying both gnuplot and scipy. The result should ultimately include more than one sensor, and all plots would have to be displayed above one another so as to show a time delta.
The code in the following link creates a square wave from the formulas listed: link to waveforms. How can the lower waveform (PWM) be driven by the numbers above if they were in a file (to show a high state for 200 ms, then a low state for 900 ms, and finally a high state)?
If I understood your question correctly, you want to plot a step function based on time data. To avoid further guessing, please specify your requirements in more detail.
In gnuplot there is the plotting style "with steps"; check "help steps".
Code:
### display waveform as steps
reset session
$Data <<EOD
10:37:18.459 1
10:37:18.659 0
10:37:19.559 1
10:37:19.789 0
10:37:20.123 1
10:37:20.456 0
10:37:20.789 1
EOD
set yrange [-0.05:1.2]
myTimeFmt = "%H:%M:%S" # input time format
set format x "%M:%.1S" time # output time format on x axis
plot $Data u (timecolumn(1,myTimeFmt)):2 w steps lc rgb "red" lw 2 ti "my square wave"
### end of code
Result:
The answer I ended up with was:
import os
import datetime
import numpy as np
import matplotlib.pyplot as plt

file_info = os.stat(self.__outfile)
if file_info.st_size:
    x, y, z, a = np.genfromtxt(self.__outfile, delimiter=',', unpack=True)
    fig = plt.figure(self.__outfile)
    ax = fig.add_subplot(111)
    fig.canvas.draw()
    # convert the float timestamps into readable tick labels
    test_array = [(datetime.datetime.utcfromtimestamp(e2).strftime('%d_%H:%M:%S.%f')).rstrip('0') for e2 in x]
    plt.xticks(x, test_array)
    # offset each trace vertically so the square waves stack above one another
    l1, = plt.plot(x, y, drawstyle='steps-post')
    l2, = plt.plot(x, a - 2, drawstyle='steps-post')
    l3, = plt.plot(x, z - 4, drawstyle='steps-post')
    ax.grid()
    ax.set_xlabel('Time (s)')
    ax.set_ylabel('HIGH/LOW')
    ax.set_ylim((-6.5, 1.5))
    ax.set_title('Sensor Sequence')
    fig.autofmt_xdate()
    ax.legend([l1, l2, l3], ['sprinkler', 'lights', 'alarm'], loc='lower left')
    plt.show()
I had an input file that contained convertDateToFloat values; it was passed in to this function. The name __outfile is perhaps misleading, but it was the output of the previous function.

How to detect water level in a transparent container?

I am using the opencv-python library to do liquid level detection. So far I have converted the image to grayscale, and by applying Canny edge detection the container has been identified.
import numpy as np
import cv2

# read the image and convert it to grayscale
img = cv2.imread('botone.jpg')
kernel = np.ones((5, 5), np.uint8)
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Canny edge detection to outline the container
edges = cv2.Canny(imgray, 120, 230)
I need to know how to find the water level from this stage.
Should I try machine learning, or is there some other option or algorithm available?
I took the approach of finding horizontal lines in the edge-detected image: if a horizontal line crosses a certain threshold, I consider it the level. But the result is not consistent.
I want to know if there are any other approaches I can take, or white papers for reference.
I don't know how you would do that with numpy and OpenCV, because I use ImageMagick (which is installed on most Linux distros and is available for OSX and Windows), but the concept should be applicable.
First, I would probably go for a Sobel filter that is rotated to find horizontal edges - i.e. a directional filter.
convert chemistry.jpg -morphology Convolve Sobel:90 sobel.jpg
Then I would probably look at adding in a Hough Transform to find the lines within the horizontal edge-detected image. So, my one-liner looks like this in the Terminal/shell:
convert chemistry.jpg -morphology Convolve Sobel:90 -hough-lines 5x5+30 level.jpg
If I add in some debug, you can see the coefficients of the Sobel filter:
convert chemistry.jpg -define showkernel=1 -morphology Convolve Sobel:90 -hough-lines 5x5+30 sobel.jpg
Kernel "Sobel#90" of size 3x3+1+1 with values from -2 to 2
Forming a output range from -4 to 4 (Zero-Summing)
0: 1 2 1
1: 0 0 0
2: -1 -2 -1
If I add in some more debug, you can see the coordinates of the lines detected:
convert chemistry.jpg -morphology Convolve Sobel:90 -hough-lines 5x5+30 -write lines.mvg level.jpg
lines.mvg
# Hough line transform: 5x5+30
viewbox 0 0 86 196
line 0,1.52265 86,18.2394 # 30 <-- this is the topmost, somewhat diagonal line
line 0,84.2484 86,82.7472 # 40 <-- this is your actual level
line 0,84.5 86,84.5 # 40 <-- this is also your actual level
line 0,94.5 86,94.5 # 30 <-- this is the line just below the surface
line 0,93.7489 86,95.25 # 30 <-- so is this
line 0,132.379 86,124.854 # 32 <-- this is the red&white valve(?)
line 0,131.021 86,128.018 # 34
line 0,130.255 86,128.754 # 34
line 0,130.5 86,130.5 # 34
line 0,129.754 86,131.256 # 34
line 0,192.265 86,190.764 # 86
line 0,191.5 86,191.5 # 86
line 0,190.764 86,192.265 # 86
line 0,192.5 86,192.5 # 86
As I said in my comments, please also think about lighting your experiment better - either with different coloured lights, more diffuse lights, or lights from a different direction. Also, if your experiment happens over time, you could consider looking at differences between images to see which line is moving...
Here are the lines on top of your original image:
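Since you are using opencv-python, here is a rough sketch of the same idea (a directional Sobel filter followed by Hough lines) in OpenCV; it reuses the botone.jpg file name from your question, and the thresholds are guesses you would need to tune:
import cv2
import numpy as np

img = cv2.imread('botone.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# first derivative in y only: a directional filter that responds to
# horizontal edges, roughly what "Sobel:90" does in ImageMagick
sobel_y = cv2.Sobel(gray, cv2.CV_64F, dx=0, dy=1, ksize=3)
edges = cv2.convertScaleAbs(sobel_y)
_, edges = cv2.threshold(edges, 50, 255, cv2.THRESH_BINARY)

# keep only long, nearly horizontal Hough segments as level candidates
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                        minLineLength=gray.shape[1] // 3, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        if abs(y2 - y1) < 3:  # nearly horizontal
            cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite('level.jpg', img)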

Adding and retrieving some metadata in a PNG image

I need to add some metadata to a lot of images. For example, I need to add the positions of the right eye and the left eye to the metadata.
That is, right eye (291,493) and left eye (453,491). I am working with PNG files now. An example with GIMP is given below.
Is there any good way to add this information to the image metadata? I have seen keywords and strings in PNG files; is that the solution to my problem? I also need a tool to edit the metadata: I have a lot of facial images, and I need a tool to add the metadata to each one. I also need to retrieve the data from each image programmatically. Please suggest a proper way to solve all these tasks.
It seems that you can set a comment in a PNG image using either exiv2 or ImageMagick both of which have command-line versions and C++ library bindings.
So, if you do:
# Set image description using "exiv2"
exiv2 -M"add Exif.Image.ImageDescription Ascii 'left eye (200,201) right eye(202,203)'" image.png
# Set image comment using "ImageMagick"
convert -comment "left eye(76,77) right eye(78,79)" image.png image.png
You can now look at the result with exiftool
exiftool image.png
ExifTool Version Number : 10.00
File Name : image.png
Directory : .
File Size : 496 bytes
File Modification Date/Time : 2015:09:20 20:45:29+01:00
File Access Date/Time : 2015:09:20 20:49:22+01:00
File Inode Change Date/Time : 2015:09:20 20:45:29+01:00
File Permissions : rw-r--r--
File Type : PNG
File Type Extension : png
MIME Type : image/png
Image Width : 1
Image Height : 1
Bit Depth : 1
Color Type : Grayscale
Compression : Deflate/Inflate
Filter : Adaptive
Interlace : Noninterlaced
White Point X : 0.3127
White Point Y : 0.329
Red X : 0.64
Red Y : 0.33
Green X : 0.3
Green Y : 0.6
Blue X : 0.15
Blue Y : 0.06
Background Color : 1
Modify Date : 2015:09:20 20:45:29
Exif Byte Order : Little-endian (Intel, II)
Image Description : left eye (200,201) right eye(202,203) <--- HERE
Comment : left eye(76,77) right eye(78,79) <--- HERE
Datecreate : 2015-09-20T20:45:29+01:00
Datemodify : 2015-09-20T20:45:29+01:00
Exif Image Description : left eye (200,201) right eye(202,203)
Image Size : 1x1
Megapixels : 0.000001
or look at the result with ImageMagick's identify command:
identify -verbose image.png
Format: PNG (Portable Network Graphics)
Mime type: image/png
Class: PseudoClass
Geometry: 1x1+0+0
Units: Undefined
Type: Bilevel
Base type: Bilevel
Endianess: Undefined
Colorspace: Gray
Depth: 8/1-bit
Channel depth:
gray: 1-bit
Channel statistics:
Pixels: 1
Gray:
min: 0 (0)
max: 0 (0)
mean: 0 (0)
standard deviation: 0 (0)
kurtosis: 0
skewness: 0
entropy: nan
Colors: 1
Histogram:
1: ( 0, 0, 0) #000000 gray(0)
Colormap entries: 2
Colormap:
0: ( 0, 0, 0) #000000 gray(0)
1: (255,255,255) #FFFFFF gray(255)
Rendering intent: Undefined
Gamma: 0.454545
Chromaticity:
red primary: (0.64,0.33)
green primary: (0.3,0.6)
blue primary: (0.15,0.06)
white point: (0.3127,0.329)
Background color: gray(255)
Border color: gray(223)
Matte color: gray(189)
Transparent color: gray(0)
Interlace: None
Intensity: Undefined
Compose: Over
Page geometry: 1x1+0+0
Dispose: Undefined
Iterations: 0
Compression: Zip
Orientation: Undefined
Properties:
comment: left eye(76,77) right eye(78,79) <--- HERE
date:create: 2015-09-20T20:45:29+01:00
date:modify: 2015-09-20T20:45:29+01:00
exif:ImageDescription: left eye (200,201) right eye(202,203) <--- HERE
png:bKGD: chunk was found (see Background color, above)
png:cHRM: chunk was found (see Chromaticity, above)
png:IHDR.bit-depth-orig: 1
png:IHDR.bit_depth: 1
png:IHDR.color-type-orig: 0
png:IHDR.color_type: 0 (Grayscale)
png:IHDR.interlace_method: 0 (Not interlaced)
png:IHDR.width,height: 1, 1
png:text: 5 tEXt/zTXt/iTXt chunks were found
png:text-encoded profiles: 1 were found
png:tIME: 2015-09-20T20:45:29Z
signature: 709e80c88487a2411e1ee4dfb9f22a861492d20c4765150c0c794abd70f8147c
Profiles:
Profile-exif: 70 bytes
Artifacts:
filename: image.png
verbose: true
Tainted: False
Filesize: 496B
Number pixels: 1
Pixels per second: 1PB
User time: 0.000u
Elapsed time: 0:01.000
Version: ImageMagick 6.9.1-10 Q16 x86_64 2015-09-08 http://www.imagemagick.org
Or using:
exiftool -s -Comment image.png
Comment : left eye(76,77) right eye(78,79)
Or set a comment with exiftool and read it back with ImageMagick in either of two ways:
exiftool -comment="Crazy Comment" image.png
identify -verbose image.png | grep Crazy
comment: Crazy Comment
identify -format "%[c]" image.png
Crazy Comment
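If you need to read and write these fields programmatically rather than from the shell, the Pillow library handles PNG text chunks. A minimal sketch; the file names are placeholders:
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# write the coordinates into a PNG tEXt chunk
meta = PngInfo()
meta.add_text("Comment", "left eye (291,493) right eye (453,491)")
with Image.open("image.png") as im:
    im.save("image_tagged.png", pnginfo=meta)

# read the chunk back
with Image.open("image_tagged.png") as im:
    print(im.text.get("Comment"))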

How to detect an inclination of 90 or 180 degrees?

In my project I deal with images which I don't know whether they are inclined or not.
I work with C++ and OpenCV. I tried the Hough transform to determine the angle of inclination (whether it is 90 or 180 degrees), but it doesn't give a result.
A link to an example image (full resolution TIFF) is here.
The following illustration is the full-res image scaled down and converted to PNG:
If I want to attack your image with the Hough lines method, I would do a Canny edge detection first, then find the Hough lines and then look at the generated lines. So it would look like this in ImageMagick - you can transform to OpenCV:
convert input.jpg \
\( +clone -canny x10+10%+30% \
-background none -fill red \
-stroke red -strokewidth 2 \
-hough-lines 9x9+150 \
-write lines.mvg \
\) \
-composite hough.png
And in the lines.mvg file, I can see the individual detected lines:
# Hough line transform: 9x9+150
viewbox 0 0 349 500
line 0,-3.74454 349,8.44281 # 160
line 0,55.2914 349,67.4788 # 206
line 1,0 1,500 # 193
line 0,71.3012 349,83.4885 # 169
line 0,125.334 349,137.521 # 202
line 0,142.344 349,154.532 # 156
line 0,152.351 349,164.538 # 155
line 0,205.383 349,217.57 # 162
line 0,239.453 349,245.545 # 172
line 0,252.455 349,258.547 # 152
line 0,293.461 349,299.553 # 163
line 0,314.464 349,320.556 # 169
line 0,335.468 349,341.559 # 189
line 0,351.47 349,357.562 # 196
line 0,404.478 349,410.57 # 209
line 349.39,0 340.662,500 # 187
line 0,441.484 349,447.576 # 198
line 0,446.484 349,452.576 # 165
line 0,455.486 349,461.578 # 174
line 0,475.489 349,481.581 # 193
line 0,498.5 349,498.5 # 161
I resized your image to 349 pixels wide (to make it fit on Stack Overflow and process faster), so you can see there are lots of lines that start at 0 on the left side of the image and end at 349 on the right side, which tells you they go across the image, not up and down it. Also, you can see that the right end of the lines is generally 16 pixels lower than the left end, so the image is rotated by arctan(16/349), roughly 2.6 degrees.
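Since you work with OpenCV, a sketch like the following would do the equivalent measurement there: Canny, then probabilistic Hough, then the median slope of the near-horizontal segments as the skew angle. The file name and thresholds are placeholders you would need to tune:
import cv2
import numpy as np

gray = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=150,
                        minLineLength=gray.shape[1] // 2, maxLineGap=10)

# take the median slope of the near-horizontal segments as the skew angle
angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < 45:
            angles.append(angle)
if angles:
    print("estimated skew: %.2f degrees" % np.median(angles))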
Here is a fairly simple approach that may help you get started, or give you ideas that you can adapt. I use ImageMagick, but the concepts and techniques should be readily applicable in OpenCV.
First, I note that the image is rotated a few degrees and that gives the black triangle at top right, so the first thing I would consider is cropping the middle out of the image - i.e. removing around 10-15% off each side.
The next thing I note is that the image is poorly scanned, with lots of noisy, muddy grey areas. I would tend to want to blur these together so that they become a bit more uniform and can be thresholded.
So, if I want to do those two things in ImageMagick, I would do this:
convert input.tif \
-gravity center -crop 75x75%+0+0 \
-blur x10 -threshold 50% \
-negate \
stage1.jpg
Now, I can count the number of horizontal black lines that run the full width of the image (without crossing anything white). I do this by squidging the image till it is just a single pixel wide (but still the full original height) and counting the number of black rows:
convert stage1.jpg -resize 1x! -threshold 1 txt: | grep -c black
1368
And I do the same for vertical black lines that run the full height of the image from top to bottom, uninterrupted by white. I do that by squidging the image till it is a single pixel tall and the full original width:
convert stage1.jpg -resize x1! -threshold 1 txt: | grep -c black
0
Therefore there are 1,368 lines across the image and none up and down it, so I can say the dark lines in the original image tend to run left-right across the image rather than top-bottom up and down the image.
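The counting idea translates directly to numpy if you prefer to stay in OpenCV: after cropping, blurring and thresholding as above, a row counts as a full-width dark line only if it contains no white pixel at all. A sketch, with the file name as a placeholder:
import cv2
import numpy as np

img = cv2.imread('input.tif', cv2.IMREAD_GRAYSCALE)
h, w = img.shape
img = img[h // 8 : 7 * h // 8, w // 8 : 7 * w // 8]  # crop the central 75%
img = cv2.GaussianBlur(img, (0, 0), sigmaX=10)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# a row (or column) is an uninterrupted dark line if its maximum value is 0
full_rows = np.count_nonzero(binary.max(axis=1) == 0)
full_cols = np.count_nonzero(binary.max(axis=0) == 0)
print("horizontal lines:", full_rows)
print("vertical lines:", full_cols)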

OpenCV - TrainCascade error (bad_alloc)

I've got a problem training the cascade classifier.
First, I create samples using opencv_createsamples:
./opencv_createsamples -vec test.vec -img ./positive/speed_50.jpeg -bg /home/boris/src/cascade/neg.txt -num 50 -w 150 -h 150
It works nicely, and the output is the following:
Info file name: (NULL)
Img file name: ./positive/speed_50.jpeg
Vec file name: test.vec
BG file name: /home/boris/src/cascade/neg.txt
Num: 50
BG color: 0
BG threshold: 80
Invert: FALSE
Max intensity deviation: 40
Max x angle: 1.1
Max y angle: 1.1
Max z angle: 0.5
Show samples: FALSE
Width: 150
Height: 150
Create training samples from single image applying distortions...
Done
Then the cascade trainer:
./opencv_traincascade -data /home/boris/src/cascade -vec ./test.vec -bg ./neg.txt -numPos 50 -numNeg 2 -w 150 -h 150
And I get the following error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Emergency stop (core dumped)
neg.txt file:
img/img1.jpeg
img/img2.jpeg
Both files exist in the img directory.
OpenCV version is 2.4.2, OS is Ubuntu 12.04.
Thanks for any help.
You need to set -w and -h to the same size as the generated samples, 150 in your case. I am having the same problem for any sample size that is too big: it works for 48x48 but not for 96x96, presumably because the number of features the trainer has to precalculate grows rapidly with the window size and exhausts memory. Let me know if you solved the problem with bigger samples.
Regards
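For what it's worth, a run with a smaller, matching window would look like this (paths copied from your question; 48x48 is simply the size that worked for me):
./opencv_createsamples -vec test.vec -img ./positive/speed_50.jpeg -bg /home/boris/src/cascade/neg.txt -num 50 -w 48 -h 48
./opencv_traincascade -data /home/boris/src/cascade -vec ./test.vec -bg ./neg.txt -numPos 50 -numNeg 2 -w 48 -h 48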