Train fire detection using opencv SVM [closed] - c++

I'm using an SVM since I need ML to train my classifier, and I saw in several papers on fire detection that they used SVMs and logistic regression; since there is no logistic regression in 2.4.9, I'm planning on using an SVM. I'm using OpenCV 2.4.9 since people said OpenCV 3 is buggy.
I'm new to this, so it would be helpful if we start from the basics.
I have prepared several fire and non-fire videos ready to be extracted into frames. I'm new to OpenCV and everything about classifiers. My question is: what are the basics of training a classifier, specifically an SVM? What format do my images need to be in, and how do I train on them? Are there any good links for a tutorial? I found one in the OpenCV documentation, but it doesn't cover training with images. What do I need for determining the parameters, and what are the parameters for? Thanks in advance.

This is a conceptual question that calls for lots of papers and tutorials to explain. However, as was suggested in the comments, I will try to elaborate on feature extraction. Feature descriptors should be robust against scaling, translation, and rotation; this robustness is what is meant by invariant features. For instance, moments and their derivatives are among the best-known features invariant to rotation, scaling, and translation. You may find the usage of Hu moments explained in this paper.
Flame or fire detection is somewhat different. The features corresponding to flame can be extracted from fire's dynamic texture. For instance, fire has a special color texture that segregates it from the background. Conventional flame detectors make use of infrared sensors to detect a flame. In image processing, or the RGB world, we can do the same by considering the nature of the flame itself. Flame emits a significant portion of its energy as heat and infrared radiation, so one can expect a major portion of the red channel to be devoted to the flame. See the following image for example.
In the processed image, the red channel is converted to a binary (BW) image by imposing a threshold. To be clearer, I have separated the three channels below.
[R, G, and B channel images]
It is evident that the red channel has the most to say about the flame. Therefore, it can be concluded that the flame is where the R channel holds most of its information, then G, and finally the B channel. See this.
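As a minimal sketch of that red-channel thresholding in C++ (the threshold value 200 and the file names are assumptions; tune them for your own footage):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat frame = cv::imread("fire_frame.jpg");   // hypothetical input frame
    std::vector<cv::Mat> channels;
    cv::split(frame, channels);                     // BGR order: channels[2] is R

    // Threshold the red channel to obtain a binary flame mask.
    cv::Mat flameMask;
    cv::threshold(channels[2], flameMask, 200, 255, cv::THRESH_BINARY);

    cv::imwrite("flame_mask.png", flameMask);
    return 0;
}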
Your feature vector, then, will be a three-dimensional vector describing, for example, the contour of the flame in the three RGB channels. SVM classifiers are now ready to be used. Sometimes the video may contain flame-like segments that should be rejected; otherwise they lead to false alarms. An SVM helps you accept or reject a flame candidate. To train your support vector machine, collect some true flames and some images that may be misjudged by your feature extractor, then label them as positive and negative examples. Finally, let OpenCV do the magic and train it. For more information about SVMs, please watch this video by Patrick Winston, MIT, on YouTube.
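And a minimal training sketch with the CvSVM class from OpenCV 2.4's ml module, assuming one 3-D feature vector per sample; the data and parameter choices below are purely illustrative:

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/ml/ml.hpp>

int main()
{
    // Each row is one 3-D feature vector (e.g. per-channel flame features).
    float features[4][3] = { {0.9f, 0.4f, 0.1f},    // positive (flame) samples
                             {0.8f, 0.5f, 0.2f},
                             {0.2f, 0.3f, 0.9f},    // negative (non-flame) samples
                             {0.1f, 0.2f, 0.8f} };
    float labels[4] = { 1, 1, -1, -1 };
    cv::Mat trainData(4, 3, CV_32FC1, features);
    cv::Mat trainLabels(4, 1, CV_32FC1, labels);

    CvSVMParams params;
    params.svm_type    = CvSVM::C_SVC;
    params.kernel_type = CvSVM::LINEAR;
    params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 1000, 1e-6);

    CvSVM svm;
    svm.train(trainData, trainLabels, cv::Mat(), cv::Mat(), params);

    // Classify a new flame candidate.
    cv::Mat sample = (cv::Mat_<float>(1, 3) << 0.85f, 0.45f, 0.15f);
    std::cout << "predicted label: " << svm.predict(sample) << std::endl;
    return 0;
}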
UPDATE ----
As you are curious about creating the feature vector, I have brought you the following example. Assume that the R, G, and B channels are finely segregated, so that one can call them statistically independent, as follows. (This is not true of real images, where the R, G, and B planes are not statistically independent.)
Therefore, a point in the RGB image has three representations, one in each of the R, G, and B channels. For instance, a flame makes three spots, one on each of the R, G, and B planes. The area of each spot, for example, is what is being traced here. Label the flame spot in the RGB image as "A".
The representations of the area A are depicted above in the R, G, and B images. A_r, A_g, and A_b denote the corresponding areas of A on the R, G, and B planes, respectively.
Therefore, the point A will be represented by the triplet (A_r, A_g, A_b) in xyz space. The SVM now accepts this vector as input and decides whether it signifies a real flame.
The areas, in normalized form, are one of the many geometric features that you can involve in the decision-making process. Other useful features of this kind are aspect ratios, moments, and so on.
In a nutshell, you have to do the following:
1 - Find the flame-like areas.
2 - Trace the candidate spot in all of the R, G, B planes.
3 - Extract the features (I suggest moments) in every plane.
4 - Form the feature vector.
5 - Feed the SVM with this vector (a sketch of steps 2-4 follows below).
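A sketch of steps 2-4, assuming step 1 has already produced a binary mask of one flame-like candidate; it traces the candidate in each plane and uses the zeroth moment (the area) as the feature, matching the (A_r, A_g, A_b) triplet above. The inner threshold of 128 is an assumption:

#include <opencv2/opencv.hpp>

// Build the (A_r, A_g, A_b) feature triplet for one candidate region.
// 'frame' is the BGR input; 'candidateMask' is a binary mask of the spot.
cv::Mat flameFeature(const cv::Mat& frame, const cv::Mat& candidateMask)
{
    std::vector<cv::Mat> channels;
    cv::split(frame, channels);                 // BGR order: 0=B, 1=G, 2=R

    cv::Mat feature(1, 3, CV_32FC1);
    const int planeOrder[3] = { 2, 1, 0 };      // fill the vector as (R, G, B)
    for (int i = 0; i < 3; ++i)
    {
        // Keep only the candidate's pixels in this plane, then binarize.
        cv::Mat spot;
        cv::bitwise_and(channels[planeOrder[i]], candidateMask, spot);
        cv::threshold(spot, spot, 128, 255, cv::THRESH_BINARY);

        // The zeroth moment m00 of a binary image is the spot area.
        cv::Moments m = cv::moments(spot, true);
        feature.at<float>(0, i) = (float)(m.m00 / (frame.rows * frame.cols));
    }
    return feature;                             // a row vector, ready for the SVM
}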
I hope you find this useful.

Yes, right. So your job now is to make a .txt file with data on each image that you are going to process.
The true fire images are denoted by +1, followed by the feature set, and ended with a -1,
and the false fire images start with -1, followed by the feature set, and end with -1 again, before the feature set of the next image starts.
It is going to be a tedious job, but I am sure you will manage that.
Finally, save that file with the extension .train, not .txt,
so your training file name is going to be filename.train.
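For illustration only, a few lines following the format described above might look like this (the feature values are invented):
+1 0.91 0.42 0.13 -1
+1 0.87 0.51 0.20 -1
-1 0.15 0.33 0.88 -1
-1 0.09 0.27 0.81 -1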

Related

Image segmentation: identify spots and scratches with irregular borders

A quick introduction: I do physics research which includes experimental measurements and numerical simulations.
Below is an image which is the result of our theoretical model.
Without going into details, I will just say that the intensity and color here represent a simulated physical quantity.
Experimental results are below
The measurement has more features and details, but it also has a lot of "invalid" data, represented by darker spots, scratches, and marks that have irregular borders and vary in size and shape. Nonetheless, by comparing these two pictures we can visually identify the "invalid" pixels in the second figure, and this is the problem I am trying to solve with a computer.
Simple thresholding by intensity won't work, because the valid data can also vary in intensity. I was thinking about using a CNN, but then I realized that it would be very tedious to prepare a training dataset, because there are a lot of small marks/spots that need to be labeled, and marking them manually would take a lot of time.
Is there any other solution to this problem? Or maybe there is a pretrained neural network (or maybe an SVM?) that handles a similar problem?
Let's check all the options one by one, taking into account the following:
you have a very specific physical process
you need accurate results (both process-wise and geometry-wise)
CNNs
It will be hard to find a ready-to-use model for your specific process. Moreover, you would need to take some specific actions to get accurate geometry out of it:
https://ai.facebook.com/blog/using-a-classical-rendering-technique-to-push-state-of-the-art-for-image-segmentation/
Background subtraction
Background subtraction will require a threshold, so for your examples and conditions it makes no sense. I produced two masks based on the subtracted background; spot the difference:
Color-based segmentation
With a properly defined threshold (let's assume we use delta_E), you can segment several areas of interest. For example, let's define three:
bright red
red
black/dark red
[Before/after comparison images for the segmented areas, and for an additional area.]
So color-based segmentation seems to be an option, but it is better to improve the input if possible. I hope this makes sense.
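A minimal sketch of such color-based segmentation, assuming a CIE76-style delta_E (plain Euclidean distance in Lab space); the reference color and the threshold are assumptions in OpenCV's 8-bit-scaled Lab units and need calibration:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bgr = cv::imread("measurement.png");    // hypothetical input
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);
    lab.convertTo(lab, CV_32FC3);

    // Rough reference for "bright red" in OpenCV's scaled Lab (a guess).
    cv::Scalar refLab(140.0, 180.0, 170.0);

    // CIE76 delta E: Euclidean distance to the reference in Lab space.
    cv::Mat diff;
    cv::absdiff(lab, refLab, diff);
    cv::multiply(diff, diff, diff);
    std::vector<cv::Mat> ch;
    cv::split(diff, ch);
    cv::Mat sum = ch[0] + ch[1] + ch[2];
    cv::Mat deltaE;
    cv::sqrt(sum, deltaE);

    // Pixels closer than the threshold belong to this area of interest.
    cv::Mat mask = deltaE < 30.0f;                  // threshold needs tuning
    cv::imwrite("bright_red_mask.png", mask);
    return 0;
}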

Image classification in a video stream with contours with OpenCV

Please, I need your help with this problem. I want to create a program to differentiate between two forms (2 images) with a camera in real time. I want the detection to remain feasible if the object is inclined by 90 or 180 degrees, for example. I have to use machine learning for this problem, but I am open to any proposition; also, I do not have many images in the database.
Here are the methods I found, but I'm not sure they will work:
1 - Apply a Canny filter to extract contours.
2 - Use feature extractors such as SIFT, Fourier descriptors, Haralick features, or the Hough transform to extract more details, which could be summarised in a short vector.
3 - Then train an SVM or ANN with this vector.
The goal is to detect two cases: Open or Closed.
Also, I don't know whether contours are the best way to solve this problem, because the background changes a lot.
The original images are valves with different shapes; here is an example:
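For what it is worth, a minimal sketch of steps 1-2 of that pipeline, using Hu moments as the invariant feature (my assumption, since they are insensitive to the 90/180-degree rotations mentioned; the Canny thresholds are illustrative):

#include <opencv2/opencv.hpp>

// Build a 7-D rotation-invariant feature vector from one grayscale image.
cv::Mat huFeature(const cv::Mat& gray)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);             // step 1: extract contours

    double hu[7];
    cv::HuMoments(cv::moments(edges, true), hu); // step 2: invariant moments

    cv::Mat feature(1, 7, CV_32FC1);
    for (int i = 0; i < 7; ++i)
        feature.at<float>(0, i) = (float)hu[i];
    return feature;
}

Step 3 would then stack one such row per training image into a CV_32FC1 matrix and train the SVM on it, as in the CvSVM sketch earlier on this page.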

What is the best method to train the faces with FaceRecognizer OpenCV to have the best result?

I will say that I have tried many tutorials to implement face recognition in OpenCV 3.2 using the FaceRecognizer class in the face module, but I did not get the result I was hoping for.
So I want to ask: what is the best way, and what are the conditions to take care of, during training and recognition?
What I have done to improve the accuracy:
Create (at least) 10 training faces per person, in the best quality, size, and angle.
Try to fit the face in the image.
Equalize the histogram of the images.
And then I tried all three face recognizers (EigenFaceRecognizer, FisherFaceRecognizer, LBPHFaceRecognizer); the results were all the same, but the recognition rate was really very low. I trained only three people, yet it still cannot recognize them well (the first person was recognized as the second, and similar problems).
Questions:
Must the training and the recognition images come from the same camera?
Should the training images be cropped manually (Photoshop: read the images, then train), or must this task be done programmatically (detect, crop, resize, then train)?
What are the best parameters for each face recognizer (int num_components, double threshold)?
And how do I set the training algorithm to return -1 when it is an unknown person?
Expanding on my comment, Chapter 8 in Mastering OpenCV provides really helpful tips for pre-processing faces to aid the recognition process, such as:
taking a sample only when both eyes are detected (via a Haar cascade)
Geometrical transformation and cropping: This process would include scaling, rotating, and translating the images so that the eyes are aligned, followed by the removal of the forehead, chin, ears, and background from the face image.
Separate histogram equalization for left and right sides: This process standardizes the brightness and contrast on both the left- and right-hand sides of the face independently.
Smoothing: This process reduces the image noise using a bilateral filter.
Elliptical mask: The elliptical mask removes some remaining hair and background from the face image.
I've added a hacky load/save to my fork of the example code, feel free to try it/tweak it as you need it. Currently it's very limited, but it's a start.
Additionally, you should also check OpenFace and its DNN face recognizer.
I haven't played with that yet so can't provide details, but it looks really cool.
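Regarding the unknown-person question above: FaceRecognizer's predict() is documented to return the label -1 when the best match's distance exceeds the recognizer's threshold. A minimal sketch follows; the factory call matches newer 3.x releases of the contrib face module (in 3.2 itself the equivalent is cv::face::createLBPHFaceRecognizer), and the threshold value 80.0 is an assumption to calibrate:

#include <opencv2/opencv.hpp>
#include <opencv2/face.hpp>

int main()
{
    // Same-size grayscale face crops and integer person labels (0, 1, 2, ...).
    std::vector<cv::Mat> faces;   // fill with your preprocessed training faces
    std::vector<int> labels;      // one label per face

    // The last argument is the distance threshold: probes farther than this
    // from every class are rejected, and predict() then reports label -1.
    cv::Ptr<cv::face::LBPHFaceRecognizer> model =
        cv::face::LBPHFaceRecognizer::create(1, 8, 8, 8, 80.0);
    model->train(faces, labels);

    cv::Mat probe = cv::imread("probe_face.png", cv::IMREAD_GRAYSCALE);
    int predicted = -1;
    double distance = 0.0;
    model->predict(probe, predicted, distance);
    // predicted == -1 here means "unknown person" under the threshold above.
    return 0;
}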

OpenCV detect image against an image set

I would like to know how I can use OpenCV to detect an image with my video camera. The image can be one of 500 images.
What I'm doing at the moment:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
self.videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
self.videoCamera.delegate = self;
self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetHigh;
self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
self.videoCamera.defaultFPS = 30;
self.videoCamera.grayscaleMode = NO;
}
-(void)viewDidAppear:(BOOL)animated{
[super viewDidAppear:animated];
[self.videoCamera start];
}
#pragma mark - Protocol CvVideoCameraDelegate
#ifdef __cplusplus
- (void)processImage:(cv::Mat&)image;
{
// Do some OpenCV stuff with the image
cv::Mat image_copy;
cvtColor(image, image_copy, CV_BGRA2BGR);
// invert image
//bitwise_not(image_copy, image_copy);
//cvtColor(image_copy, image, CV_BGR2BGRA);
}
#endif
The images that I would like to detect are small (2-5 KB). A few have text on them, but others are just signs. Here is an example:
Do you guys know how I can do that?
There are several things in here. I will break down your problem and point you towards some possible solutions.
Classification: your main task consists of determining whether a certain image belongs to a class. This problem can itself be decomposed into several problems:
Feature representation: you need to decide how you are going to model your features, i.e. how you are going to represent each image in a feature space, so you can train a classifier to separate the classes. The feature representation is itself a big design decision. One could (i) calculate a histogram of the images using n bins and train a classifier, or (ii) choose a sequence of random patch comparisons, as in a random forest. However, after training, you need to evaluate the performance of your algorithm to see how good your decision was.
There is a well-known problem called overfitting, which is when the classifier fits the training data so well that it cannot generalize. This can usually be avoided with cross-validation. If you are not familiar with the concepts of false positives and false negatives, take a look at this article.
Once you define your feature space, you need to choose an algorithm to train on that data, and this might be your biggest decision. There are several algorithms coming out every day. To name a few classical ones: Naive Bayes, SVMs, random forests, and, more recently, the community has obtained great results using deep learning. Each of these has its own specific usage (e.g. SVMs are great for binary classification), and you need to be familiar with the problem. You can start with simple assumptions, such as independence between random variables, and train a Naive Bayes classifier to try to separate your images.
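As a minimal sketch of option (i), here is a gray-level histogram feature; I am assuming OpenCV 2.4's CvNormalBayesClassifier as the "Naive Bayes" trainer, and the bin count is arbitrary:

#include <opencv2/opencv.hpp>

// Represent one grayscale image as an n-bin, L1-normalized histogram.
cv::Mat histFeature(const cv::Mat& gray, int bins = 32)
{
    int histSize[] = { bins };
    float range[] = { 0, 256 };
    const float* ranges[] = { range };
    int channels[] = { 0 };

    cv::Mat hist;
    cv::calcHist(&gray, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);
    return hist.reshape(1, 1);                   // 1 x bins row vector
}

Stacking one such row per training image into a CV_32FC1 matrix, with one integer class label per row, gives exactly what CvNormalBayesClassifier::train (or CvSVM::train) expects.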
Patches: now, you mentioned that you would like to recognize the images in your webcam feed. If you are going to print the images and display them in a video, you need to handle several things. It is necessary to define patches on your big image (the input from the webcam), build a feature representation for each patch, and classify it in the same way you did in the previous step. To do that, you could slide a window and classify all the patches to see whether they belong to the negative class or to one of the positive ones; there are other alternatives. A sketch of the sliding window follows.
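A sketch of that sliding window, assuming some classify() function such as the classifiers above; the window size and the stride are arbitrary choices:

#include <iostream>
#include <opencv2/opencv.hpp>

// Classify every window position of 'scene'. 'classify' is assumed to
// return a class id for a fixed-size patch, with -1 meaning background.
void slideWindow(const cv::Mat& scene, cv::Size win, int stride,
                 int (*classify)(const cv::Mat&))
{
    for (int y = 0; y + win.height <= scene.rows; y += stride)
        for (int x = 0; x + win.width <= scene.cols; x += stride)
        {
            cv::Mat patch = scene(cv::Rect(x, y, win.width, win.height));
            int label = classify(patch);
            if (label >= 0)                      // a positive detection
                std::cout << "class " << label << " at (" << x << "," << y << ")\n";
        }
}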
Scale: considering that you are able to detect the location of images in the big image and classify them, the next step is to relax the toy assumption of fixed scale. To handle a multiscale approach, you could use an image pyramid, which pretty much allows you to perform the detection in multiresolution (see the sketch below). Alternative approaches consider keypoint detectors, such as SIFT and SURF; inside SIFT there is an image pyramid, which provides the scale invariance.
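A sketch of the pyramid idea, running the same sliding window over progressively halved copies of the scene (three levels is an arbitrary choice):

#include <opencv2/opencv.hpp>

void detectMultiScale(const cv::Mat& scene)
{
    std::vector<cv::Mat> pyramid;
    cv::buildPyramid(scene, pyramid, 3);         // the scene plus 3 halved levels
    for (size_t level = 0; level < pyramid.size(); ++level)
    {
        // Run the slideWindow() sketch from above on pyramid[level]; a hit
        // found at this level maps back to the original image scaled by 2^level.
    }
}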
Projection: so far, we have assumed that you had images under orthographic projection, but most likely you will have slight perspective projections, which will make the whole previous assumption fail. One naive solution would be, for instance, to detect the corners of the white background of your image and rectify the image before building the feature vector for classification. If you use SIFT or SURF, you can design a way of avoiding handling that explicitly. Nevertheless, if your input is going to be just square patches, as in ARToolKit, I would go for manual rectification.
I hope I might have given you a better picture of your problem.
I would recommend using SURF for that, because the pictures can be at different distances from your camera, i.e. at changing scale. I had one similar experiment and SURF worked just as expected. But SURF needs very careful adjustment (and uses expensive operations); you should try different setups before you get the needed results.
Here is a link: http://docs.opencv.org/modules/nonfree/doc/feature_detection.html
youtube video (in C#, but can give an idea): http://www.youtube.com/watch?v=zjxWpKCQqJc
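A minimal SURF matching sketch with OpenCV 2.4's nonfree module; the Hessian threshold of 400 and the 0.7 ratio test are common starting points, not tuned values:

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/nonfree.hpp>

int main()
{
    cv::initModule_nonfree();
    cv::Mat object = cv::imread("sign.png", cv::IMREAD_GRAYSCALE);
    cv::Mat scene  = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);

    cv::SURF surf(400.0);                        // Hessian threshold
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    surf(object, cv::Mat(), kp1, desc1);
    surf(scene,  cv::Mat(), kp2, desc2);

    // Ratio test: keep matches clearly better than the second-best candidate.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch> > knn;
    matcher.knnMatch(desc1, desc2, knn, 2);
    int good = 0;
    for (size_t i = 0; i < knn.size(); ++i)
        if (knn[i].size() == 2 && knn[i][0].distance < 0.7f * knn[i][1].distance)
            ++good;
    std::cout << good << " good matches" << std::endl;
    return 0;
}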
I might not be qualified enough to answer this; the last time I seriously used OpenCV, it was still 1.1. But here are some thoughts on it that I hope will help (currently I am interested in DIP and ML).
I think it will probably be an easier task if you only need to classify an image, where the image is one of (or very similar to) your 500 images. For this you could use an SVM or some neural network (Felix already gave an excellent enumeration of those).
However, your problem seems to be that you first need to find the candidate image in your webcam feed, with little prior clue about its location. (Let us know whether this is so; I think it is important.)
If so, the harder problem is the detection/localization of your candidate image.
I don't have a general solution for that. The first thing I would do is to see whether there is some common feature in your 500 images (e.g., whether all of them are enclosed by a red circle, or half of them contain a circle and half a rectangle). If this can be done, the problem becomes simpler (it would be similar to the face detection problem, which has good solutions).
In other words, you would first classify the 500 images into a few groups with a common feature (by hand), detect the group first, and then scale and use the above-mentioned techniques to classify them into a fine-grained result. This way, it will be more computationally acceptable than trying to detect 500 images one by one.
BTW, this presentation may help to give a visual clue of what goes on in feature extraction and image matching: http://courses.cs.washington.edu/courses/cse455/09wi/Lects/lect6.pdf.
Detect vs. recognize: detecting the image is just finding it against the background, and from your comments I realized you may have your signs surrounded by the background. It may help your algorithm if you can somehow crop your signs out of the background (detect) before trying to recognize them. Recognition is the next stage, which presumes you can classify the cropped image correctly as one seen before.
If you need real-time speed and scale/rotation invariance, neither SIFT nor SURF will do this fast. Nowadays you can do much better if you shift the burden of image processing to a learning stage, as was done by Lepetit. In short, he subjected each pattern to a bunch of affine transformations and trained a binary classification tree to recognize each point correctly by performing many binary comparison tests. Trees are extremely fast and the way to go, not to mention that most of the processing is done offline. This method is also more robust to off-plane rotations than SIFT or SURF. You will also learn about tree classification, which may help with your last processing stage.
Finally, the recognition stage is based not only on the number of matches but also on their geometric consistency. Since your signs look flat, I suggest finding either an affine or a homography transformation that has the most inliers when calculated between matched points.
Looking at your code, though, I realized that you may not be following any of these recommendations. It may be a good starting point for you to read about decision trees and then play with some sample code (see mushroom.cpp in the above-mentioned link).
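A sketch of that geometric-consistency check, assuming you already have matched point pairs (for example from the SURF sketch above); RANSAC's 3-pixel reprojection threshold is a typical default:

#include <opencv2/opencv.hpp>

// Count the RANSAC inliers of a homography between matched points; a high
// inlier count indicates a geometrically consistent (real) detection.
int homographyInliers(const std::vector<cv::Point2f>& objPts,
                      const std::vector<cv::Point2f>& scenePts)
{
    std::vector<uchar> inlierFlags;
    cv::Mat H = cv::findHomography(objPts, scenePts, CV_RANSAC, 3.0, inlierFlags);
    if (H.empty())
        return 0;
    return cv::countNonZero(inlierFlags);
}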

Target Detection - Algorithm suggestions

I am trying to do image detection in C++. I have two images:
Image Scene: 1024x786
Person: 36x49
And I need to identify this particular person in the scene. I've tried using correlation, but the image is too noisy and therefore it doesn't give correct/accurate results.
I've been thinking about and researching methods that would best solve this task, and these seem the most logical:
Gaussian filters
Convolution
FFT
Basically, I would like to remove the noise from the images so that I can then use correlation to find the person more effectively.
I understand that an FFT will be hard to implement and/or may be slow, especially with the size of the image I'm using.
Could anyone offer any pointers to solving this? What would the best technique/algorithm be?
In Andrew Ng's Machine Learning class we did this exact problem using neural networks and a sliding window:
train a neural network to recognize the particular feature you're looking for, using data tagged with what the images are, with a 36x49 window (or whatever other size you want).
to recognize a new image, take the 36x49 rectangle and slide it across the image, testing at each location. When you move to a new location, move the window right by a certain number of pixels, call it the jump_size (say 5 pixels). When you reach the right-hand side of the image, go back to 0 and increment the y of your window by jump_size.
Neural networks are good for this because the noise isn't a huge issue: you don't need to remove it. They are also good because they can recognize images that are similar to ones seen before but slightly different (the face is at a different angle, the lighting is slightly different, etc.).
Of course, the downside is that you need the training data to do it. If you don't have a set of pre-tagged images then you might be out of luck - although if you have a Facebook account you can probably write a script to pull all of yours and your friends' tagged photos and use that.
An FFT only makes sense when you have already sorted the image with a kd-tree or a hierarchical tree. I would suggest mapping the image's 2D RGB values onto a 1D curve, reducing some complexity before the frequency analysis.
I do not have an exact algorithm to propose, because I have found that target detection methods depend greatly on the specific situation. Instead, I have some tips and advice. Here is what I would suggest: find a specific characteristic of your target and design your code around it.
For example, if you have access to the color image, use the fact that Wally doesn't have much green or blue in him. Subtract the average of the blue and green channels from the red channel and you'll have a much better starting point. (Apply the same operation to both the image and the target.) This will not work, though, if the noise is color-dependent (i.e. different in each channel).
You could then use correlation on the transformed images with better results. The weak point of correlation is that it only works with an exact cut-out of the first image: not very useful if you need to find the target in order to find the target! Instead, I suppose that an averaged version of your target (a combination of many Wally pictures) would work up to a point.
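A sketch of that channel transform followed by normalized cross-correlation; cv::matchTemplate is my assumption for the correlation step:

#include <iostream>
#include <opencv2/opencv.hpp>

// R - (G + B)/2, emphasizing red targets; apply to both scene and target.
cv::Mat redEmphasis(const cv::Mat& bgr)
{
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);                          // BGR order: 0=B, 1=G, 2=R
    cv::Mat avgGB, out;
    cv::addWeighted(ch[1], 0.5, ch[0], 0.5, 0.0, avgGB);
    cv::subtract(ch[2], avgGB, out);             // saturates at 0 for 8-bit data
    return out;
}

int main()
{
    cv::Mat scene  = redEmphasis(cv::imread("scene.png"));   // the 1024x786 scene
    cv::Mat person = redEmphasis(cv::imread("person.png"));  // the 36x49 target

    cv::Mat score;
    cv::matchTemplate(scene, person, score, CV_TM_CCORR_NORMED);

    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(score, 0, &maxVal, 0, &maxLoc);
    std::cout << "best match at " << maxLoc << " score " << maxVal << std::endl;
    return 0;
}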
My final advice: in my personal experience working with noisy images, spectral analysis is usually a good idea, because the noise tends to contaminate only one particular scale (which will hopefully be a different scale than Wally's!). In addition, correlation is mathematically equivalent to comparing the spectral characteristics of your image and the target.