Unable to calculate pixel difference with opencv - c++

I am forced to calculate the pixel difference, pixel by pixel, between frames of a video.
I initialize the pixeldifference variable to 0. (Assume all the frames are valid frames in the video.)
The problem is that lastFrame and frame are always identical, which means the code with the cout statement "Pixel count incremented" is never triggered. I know a couple of frames can be identical, but I never see that output statement even once, which leads me to believe the two frames are always identical. Should I do something else? I'd appreciate any guidance. I'm very new to OpenCV (excuse a little bad coding practice inside; it was for debugging purposes).
Mat lastFrame;
Mat frame;
capture.read(lastFrame);
capture.read(frame);
while (counter < tofind)
{
    for (int cur_row = 0; cur_row < frame.rows; cur_row++)
    {
        for (int cur_cols = 0; cur_cols < frame.cols; cur_cols++)
        {
            Vec3b pixels_currentFrame = frame.at<cv::Vec3b>(cur_row, cur_cols);
            Vec3b pixels_lastFrame = lastFrame.at<cv::Vec3b>(cur_row, cur_cols);
            bCur = int(pixels_currentFrame.val[0]);
            gCur = int(pixels_currentFrame.val[1]);
            rCur = int(pixels_currentFrame.val[2]);
            bPrev = int(pixels_lastFrame.val[0]);
            gPrev = int(pixels_lastFrame.val[1]);
            rPrev = int(pixels_lastFrame.val[2]);
            bDiff = abs(bCur - bPrev);
            gDiff = abs(gCur - gPrev);
            rDiff = abs(rCur - rPrev);
            if (bDiff > checkval ||
                rDiff > checkval ||
                gDiff > checkval)
            {
                pixeldifference++;
                cout << "Pixel count incremented" << endl;
            }
        }
    }
    lastFrame = frame;
    capture.read(frame);
    /*
       some other stuff happens here
    */
    counter++;
}
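A note not in the original thread, but a common cause of exactly this symptom: in OpenCV, Mat assignment (lastFrame = frame) copies only the header, not the pixels, and VideoCapture::read() may keep decoding into the same underlying buffer, so both variables can end up viewing identical data. A minimal sketch of the deep-copy variant, under that assumption:
Mat lastFrame;
Mat frame;
capture.read(frame);
frame.copyTo(lastFrame);     // deep copy: lastFrame owns its own pixels
while (counter < tofind)
{
    capture.read(frame);     // may reuse the decoder's internal buffer
    // ...compare frame against lastFrame as above...
    frame.copyTo(lastFrame); // snapshot the current frame before the next read
    counter++;
}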

Related

My code adds the same frame to a vector while it doesn't when the frame is being rotated

I have an extremely strange situation in which the code adds the same frame to a vector, while it doesn't when there is a rotation before the addition. Let me show you:
#include <chrono>
#include <opencv2/opencv.hpp>
#include <vector>
/* Write all the images in a certain directory. All the images with the same name present
   in the directory will be overwritten. */
void CameraThread::writeAllFrames(std::vector<cv::Mat> vectorFrame) {
    std::string path;
    for (size_t i = 0; i < vectorFrame.size(); ++i) {
        path = "./Images/image" + std::to_string(i) + ".png";
        imwrite(path, vectorFrame.at(i));
    }
    capturing = 0;
}
int main(){
    std::string window_name = "Left Cam";
    cv::VideoCapture* videoCapture = new cv::VideoCapture(0);
    cv::namedWindow(window_name, CV_WINDOW_NORMAL); // create a window for the camera
    cv::Mat frame; // frame buffer filled by read() below (missing in the original snippet)
    std::vector<cv::Mat> capturedFrame; // vector in which the frames are going to be saved
    int i = 0; // counts how many images are saved
    bool capturing = 0;
    int amountFramesCaptured = 10;
    int periodCapture = 250; // ms
    while (1) {
        bool bSuccess = videoCapture->read(frame); // it captures the frame
        /* The next 2 lines take around 25 ms. They turn the frame 90° to the left. */
        cv::transpose(frame, frame);
        cv::flip(frame, frame, 0);
        if (capturing == 0) {
            /* If there is no frame capture, we just display the frames in a window. */
            imshow(window_name, frame);
        } else if (capturing == 1) { // we capture the frames here
            capturedFrame.push_back(frame);
            Sleep(periodCapture);
            ++i;
            if (i == amountFramesCaptured) {
                writeAllFrames(capturedFrame); // write all frames to a directory
                puts("Frames copied in the directory.");
                capturedFrame.clear(); // clear the vector in case we capture another time
                i = 0;
                capturing = 0;
            }
        }
    }
    return 0;
}
Here, we capture a frame thanks to videoCapture->read(frame). I wanted to rotate the frame, so I used the next two lines. Then I tested the capture of the images and it worked well (I know it because I have a moving object in front of the camera). Lastly, I decided not to rotate the frames after some tests, because the rotation takes too many resources (around 25 ms) and I needed to synchronize the capture with some blinking LEDs. So I took out the two lines that performed the rotation, and that's when, suddenly, the code started adding the same frame to the vector.
In conclusion, the writing to the hard drive works well when there is a rotation and it doesn't when there isn't (because of the vector). It confuses me so much; tell me if you see something I don't.
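A hedged guess consistent with these symptoms (the thread contains no accepted answer): cv::Mat is a reference-counted header, so capturedFrame.push_back(frame) stores headers that can all share the camera's buffer, while transpose/flip write their result into freshly allocated memory each iteration, which would explain why the rotated version worked. Cloning forces a deep copy per captured frame:
capturedFrame.push_back(frame.clone()); // deep copy instead of a shared header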

Why can't I access the Object in my function?

I have a function that detects motion between two frames and stores a cropped image of only the moving object in the variable cv::Mat result_cropped. Now I want to add a function that checks result_cropped for black pixels. I wrote the code for that easily, but I'm completely stuck trying to implement it in my class.
For some reason my blackDetection(Mat & cropped) can't access the cropped image, which results in the program crashing.
Here's my simplified code:
void ActualRec::run(){
    while (isActive){
        // ...code to check for motion
        // if there was motion, a cropped image will be stored in result_cropped
        number_of_changes = detectMotion(motion, result, result_cropped, region, max_deviation, color);
        if (number_of_changes >= there_is_motion) {
            if (number_of_sequence > 0){
                // there was motion detected, store cropped image - this works
                saveImg(pathnameThresh, result_cropped);
                if (blackDetection(result_cropped) == true){
                    // the cropped image has black pixels
                }
                else {
                    // the cropped image has no black pixels
                }
                number_of_sequence++;
            }
            else
            {
                // no motion was detected
            }
        }
    }
}
bool ActualRec::blackDetection(Mat & result_cropped){
    // ...check for black pixels; the program crashes since result_cropped is empty
    // if I add imshow("test", result_cropped) I keep getting an empty window
    if (blackPixelCounter > 0){
        return true;
    }
    else return false;
}
Again, the problem is that I can't manage to access result_cropped in blackDetection(Mat & result_cropped).
Edit: my complete code for this class is at http://pastebin.com/3i0WdLG0. Please, someone help me; this problem doesn't make any sense to me.
You don't have a cv::waitKey() in blackDetection(), so you will crash before you get to the cvWaitKey() in run(). You are jumping to the conclusion that result_cropped is "empty".
You have not allocated croppedBlack anywhere, so you will crash on croppedBlack.at<Vec3b>(y,x)[c] = ....
Add this at the start of blackDetection() (e.g.):
croppedBlack.create(result_cropped.size(), result_cropped.type());
To make it faster, see How to scan images ... with OpenCV: The efficient way
bool ActualRec::blackDetection(Mat& result_cropped)
{
    croppedBlack.create(result_cropped.size(), result_cropped.type());
    int blackCounter = 0;
    for(int y = 0; y < result_cropped.rows; ++y)
    {
        // row pointers avoid the per-element checks of at<>()
        Vec3b* croppedBlack_row = croppedBlack.ptr<Vec3b>(y);
        Vec3b* result_cropped_row = result_cropped.ptr<Vec3b>(y);
        for(int x = 0; x < result_cropped.cols; ++x)
        {
            // count fully black source pixels before transforming them
            if (result_cropped_row[x] == Vec3b(0, 0, 0))
                ++blackCounter;
            for(int c = 0; c < 3; ++c)
            {
                croppedBlack_row[x][c] =
                    saturate_cast<uchar>(alpha * result_cropped_row[x][c] + beta);
            }
        }
    }
    return blackCounter > 0; // the original snippet was missing a return value
}
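As an aside, a shorter sketch for the counting itself that skips the croppedBlack copy entirely (the near-black threshold of 10 is an arbitrary assumption):
cv::Mat gray;
cv::cvtColor(result_cropped, gray, CV_BGR2GRAY);      // collapse the 3 channels
int blackPixelCounter = cv::countNonZero(gray <= 10); // count near-black pixels
return blackPixelCounter > 0;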

OpenCV floodFill() fills unconnected regions

I have implemented the connected component identification algorithm from here, but it seems that cv::floodFill(...) fills unconnected regions in some cases.
First of all, here is the code:
void ImageMatchingOpenCV::getConnectedComponents(const cv::Mat& binImg, vector<vector<cv::Point>>& components, vector<vector<cv::Point>>& contours, const int minSize)
{
    cv::Mat ccImg;
    binImg.convertTo(ccImg, CV_32FC1);
    int gap = startPointParams.gap;
    int label = 1;
    for(int y = gap; y < binImg.rows - gap; ++y)
    {
        for(int x = gap; x < binImg.cols - gap; ++x)
        {
            if((int)ccImg.at<float>(y, x) != 255) continue;
            cv::Rect bBox;
            cv::floodFill(ccImg, cv::Point(x, y), cv::Scalar(label), &bBox, cv::Scalar(0), cv::Scalar(0), 4 /*| cv::FLOODFILL_FIXED_RANGE*/);
            if(bBox.x < gap || bBox.y < gap || bBox.x + bBox.width >= binImg.cols - gap || bBox.y + bBox.height >= binImg.rows - gap) continue;
            components.push_back(vector<cv::Point>());
            contours.push_back(vector<cv::Point>());
            for(int i = bBox.y; i < bBox.y + bBox.height; ++i)
            {
                for(int j = bBox.x; j < bBox.x + bBox.width; ++j)
                {
                    if((int)ccImg.at<float>(i, j) != label) continue;
                    components.back().push_back(cv::Point(j, i));
                    if(   (int)ccImg.at<float>(i + 1, j) != label
                       || (int)ccImg.at<float>(i - 1, j) != label
                       || (int)ccImg.at<float>(i, j + 1) != label
                       || (int)ccImg.at<float>(i, j - 1) != label) contours.back().push_back(cv::Point(j, i));
                }
            }
            if(components.back().size() < minSize)
            {
                components.pop_back();
                contours.pop_back();
            }
            else
            {
                ++label;
                if(label == 255) ++label;
                break;
            }
        }
        if(label != 1) break;
    }
}
The input cv::Mat contains 2448x2050 pixels of type CV_8U. The pixel values are either 0 (background) or 255 (foreground). There are 17 connected components in the image. All components but the first are identified correctly. The erroneous component is by far the largest one (~1.5 million pixels) and contains some small disconnected pixel groups. It encompasses all of the other components. The small disconnected pixel groups that are wrongly assigned to the first component are all connected to the top of the component's bounding box.
EDIT: I added some images to visualize the problem. The first image shows all identified connected components. The second image shows only the erroneous component (notice the small disconnected pixel groups at the top). The third image zooms in on a part of the second image:
If someone has an idea where the error might be, I would be thankful.
I found the bug myself. At the end of the method small components are thrown away. In this case the component's number (label) is not increased:
if(components.back().size() < minSize)
{
    components.pop_back();
    contours.pop_back();
}
else
{
    ++label;
    if(label == 255) ++label;
}
This means the label number is used again to mark the next component in the image. Hence, several small components and a sufficiently large component might end up with the same label number. If the bounding box of the large component is then iterated, this bounding box might contain some small, previously identified but discarded components carrying the same label number.
The solution is to remove the else branch and instead always increase the label number.
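A minimal sketch of that fix, applied to the snippet above:
if(components.back().size() < minSize)
{
    components.pop_back();
    contours.pop_back();
}
++label;                  // always advance, even when the component was discarded
if(label == 255) ++label; // never hand out the foreground value as a label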

Using time in OpenCV for frame processes and other tasks

I want to count the vehicles in a video. After frame differencing I get a grayscale, almost binary image. I have defined a region of interest to work on a specific area of each frame; the pixel values of the vehicles passing through the region of interest are higher than 0, or even higher than 40 or 50, because they are white.
My idea is that when a certain number of pixels within a specific interval of time (say 1-2 seconds) are white, there must be a vehicle passing, so I will increment the counter.
What I want is to check whether white pixels are still arriving after 1-2 seconds. If no white pixels are coming, it means the vehicle has passed and the next vehicle is going to come, so the counter must be incremented.
One method that came to my mind is to count the frames of the video and store the count in a variable called No_of_frames. Using that variable, I think I can estimate the time passed: if the value of No_of_frames is greater than, let's say, 20, nearly 1 second has passed, since my video's frame rate is 25-30 fps.
I am using Qt Creator with windows 7 and OpenCV 2.3.1
My code is something like:
for (int i = 0; i < matFrame.rows; i++)
{
    for (int j = 0; j < matFrame.cols; j++)
        if (matFrame.at<uchar>(i, j) > 100) // values of pixels greater than 100
                                            // will be considered as white
        {
            whitePixels++;
        }
    if () // here I want to use time. The 'if' statement must be like:
          // if (total_no._of_whitepixels > 100 && no_white_pixel_came_after 2 secs)
          // which means that a vehicle has just passed, so increment the counter
    {
        counter++;
    }
}
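To make the time check concrete, here is one hedged sketch of what the empty if could become, assuming ~25 fps and counting frames instead of wall-clock time. It should run once per frame, after the pixel-scanning loops; framesSinceWhite and vehiclePresent are hypothetical variables, not from the question:
const int fps = 25;
const int gapFrames = 2 * fps;          // ~2 seconds with no white pixels
if (whitePixels > 100) {
    framesSinceWhite = 0;               // a vehicle is still in the ROI
    vehiclePresent = true;
} else if (vehiclePresent && ++framesSinceWhite > gapFrames) {
    counter++;                          // the vehicle has fully passed
    vehiclePresent = false;
}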
Any other idea for counting the vehicles, better than mine, will be most welcomed. Thanks in advance.
For background segmentation I am using the following algorithm, but it is very slow and I don't know why. The whole code is as follows:
// opencv2/video/background_segm.hpp OPENCV header file must be included.
IplImage* tmp_frame = NULL;
CvCapture* cap = NULL;
bool update_bg_model = true;
Mat element = getStructuringElement(0, Size(2, 2), Point());
Mat eroded_frame;
Mat before_erode;
if (argc > 2)
    cap = cvCaptureFromCAM(0);
else
    // cap = cvCreateFileCapture( "C:\\4.avi" );
    cap = cvCreateFileCapture("C:\\traffic2.mp4");
if (!cap)
{
    printf("can not open camera or video file\n");
    return -1;
}
tmp_frame = cvQueryFrame(cap);
if (!tmp_frame)
{
    printf("can not read data from the video source\n");
    return -1;
}
cvNamedWindow("BackGround", 1);
cvNamedWindow("ForeGround", 1);
CvBGStatModel* bg_model = 0;
for (int fr = 1; tmp_frame; tmp_frame = cvQueryFrame(cap), fr++)
{
    if (!bg_model)
    {
        // create BG model
        bg_model = cvCreateGaussianBGModel(tmp_frame);
        // bg_model = cvCreateFGDStatModel( temp );
        continue;
    }
    double t = (double)cvGetTickCount();
    cvUpdateBGStatModel(tmp_frame, bg_model, update_bg_model ? -1 : 0);
    t = (double)cvGetTickCount() - t;
    printf("%d. %.1f\n", fr, t / (cvGetTickFrequency() * 1000.));
    before_erode = bg_model->foreground;
    cv::erode((Mat)bg_model->background, (Mat)bg_model->foreground, element);
    //eroded_frame = bg_model->foreground;
    //frame = (IplImage *)erode_frame.data;
    cvShowImage("BackGround", bg_model->background);
    cvShowImage("ForeGround", bg_model->foreground);
    // cvShowImage("ForeGround", bg_model->foreground);
    char k = cvWaitKey(5);
    if (k == 27) break;
    if (k == ' ')
    {
        update_bg_model = !update_bg_model;
        if (update_bg_model)
            printf("Background update is on\n");
        else
            printf("Background update is off\n");
    }
}
cvReleaseBGStatModel(&bg_model);
cvReleaseCapture(&cap);
return 0;
A great deal of research has been done on vehicle tracking and counting. The approach you describe appears to be quite fragile, and is unlikely to be robust or accurate. The main issue is using a count of pixels above a certain threshold, without regard for their spatial connectivity or temporal relation.
Frame differencing can be useful for separating a moving object from its background, provided the object of interest is the only (or largest) moving object.
What you really need is to first identify the object of interest, segment it from the background, and track it over time using an adaptive filter (such as a Kalman filter). Have a look at the OpenCV video reference. OpenCV provides background subtraction and object segmentation to do all the required steps.
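For illustration, a sketch of that pipeline with the cv::BackgroundSubtractorMOG2 class from the OpenCV 2.x C++ interface (cap is assumed to be a cv::VideoCapture; treat this as an outline, not the thread's code):
cv::BackgroundSubtractorMOG2 bg;
cv::Mat frame, fgMask;
while (cap.read(frame)) {
    bg(frame, fgMask);                    // update the model, emit a foreground mask
    cv::erode(fgMask, fgMask, cv::Mat()); // remove speckle noise
    cv::dilate(fgMask, fgMask, cv::Mat());
    // next: findContours() on fgMask, then associate blobs across frames
}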
I suggest you read up on OpenCV - Learning OpenCV is a great read. And also on more general computer vision algorithms and theory - http://homepages.inf.ed.ac.uk/rbf/CVonline/books.htm has a good list.
Normally they just put a small pneumatic pipe across the road (a soft pipe semi-filled with air). It is attached to a simple counter. Each vehicle passing over the pipe generates two pulses (first the front wheels, then the rear wheels). The counter records the number of pulses in specified time intervals and divides by 2 to get the approximate vehicle count.

OpenCV: in search for less CPU intensive frame capture+resize and into buffer way: how to optimize my code?

So I created a function (C++)
void CaptureFrame(char* buffer, int w, int h, int bytespan)
{
    /* get a frame */
    if (!cvGrabFrame(capture)) { // capture a frame
        printf("Could not grab a frame\n\7");
        //exit(0);
    }
    CVframe = cvRetrieveFrame(capture); // retrieve the captured frame
    /* always check */
    if (!CVframe)
    {
        printf("No CV frame captured!\n");
        cin.get();
    }
    /* resize buffer for current frame */
    IplImage* destination = cvCreateImage(cvSize(w, h), CVframe->depth, CVframe->nChannels);
    // use cvResize to resize source to a destination image
    cvResize(CVframe, destination);
    IplImage* redchannel = cvCreateImage(cvGetSize(destination), 8, 1);
    IplImage* greenchannel = cvCreateImage(cvGetSize(destination), 8, 1);
    IplImage* bluechannel = cvCreateImage(cvGetSize(destination), 8, 1);
    cvSplit(destination, bluechannel, greenchannel, redchannel, NULL);
    for (int y = 0; y < destination->height; y++)
    {
        char* line = buffer + y * bytespan;
        for (int x = 0; x < destination->width; x++)
        {
            line[0] = cvGetReal2D(redchannel, y, x);
            line[1] = cvGetReal2D(greenchannel, y, x);
            line[2] = cvGetReal2D(bluechannel, y, x);
            line += 3;
        }
    }
    cvReleaseImage(&redchannel);
    cvReleaseImage(&greenchannel);
    cvReleaseImage(&bluechannel);
    cvReleaseImage(&destination);
}
So generally it captures a frame from the device, creates a frame to resize into, and copies it into a buffer (RGB or YUV420P is a requirement for me).
So I wonder what I'm doing wrong, because my function is way too CPU intensive, and what can be done to fix it?
Update:
My function is run in a thread:
void ThreadCaptureFrame()
{
    while (1) {
        t.restart();
        CaptureFrame((char*)frame->data[0], videoWidth, videoHeight, frame->linesize[0]);
        AVFrame* swap = frame;
        frame = readyFrame;
        readyFrame = swap;
        spendedTime = t.elapsed();
        if (spendedTime < desiredTime) {
            Sleep(desiredTime - spendedTime);
        }
    }
}
which is started at the beginning of int main (after some initialization):
boost::thread workerThread(ThreadCaptureFrame);
So when it can, it runs 24 times per second and eats about 28% of a core on a quad-core machine. The camera resolution I capture at is 320x240. So: how do I optimize it?
Things you can do:
Instead of taking images from the camera at the default resolution, choose what resolution you want.
I think you can simply set buffer = destination->imageData
These articles might be helpful:
http://aishack.in/tutorials/efficiently-accessing-matrices/
http://aishack.in/tutorials/memory-layout-of-matrices-of-multidimensional-objects/
First, don't allocate and then release the images on every frame!
That probably takes the most time. Have all your IplImages pre-allocated and release them only when your app is done.
You can use boost::shared_ptr with a custom deleter to avoid needing to remember to release the images.
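For example, a sketch with a hypothetical functor as the deleter:
struct IplImageDeleter {
    void operator()(IplImage* p) const { cvReleaseImage(&p); }
};
boost::shared_ptr<IplImage> image(
    cvCreateImage(cvSize(w, h), IPL_DEPTH_8U, 3), IplImageDeleter());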
I don't get why you're splitting and why you're copying like that.
If you must copy, then just copy the whole of destination->imageData into buffer.
If it is the padding that is bugging you, then do it in a loop like you did, but read directly from destination->imageData. You don't need to separate the color channels.
Use cvResize with CV_INTER_NN. That will reduce the image quality but is faster.
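Putting the advice above together, a sketch of what CaptureFrame could look like (pre-allocated destination, nearest-neighbour resize, no channel split; the per-pixel swap is only there because OpenCV stores pixels as BGR while the question's buffer is filled in RGB order):
void CaptureFrame(char* buffer, int w, int h, int bytespan)
{
    if (!cvGrabFrame(capture))
        return;
    CVframe = cvRetrieveFrame(capture);
    if (!CVframe)
        return;
    static IplImage* destination = NULL; // allocated once, reused on every call
    if (!destination)
        destination = cvCreateImage(cvSize(w, h), CVframe->depth, CVframe->nChannels);
    cvResize(CVframe, destination, CV_INTER_NN); // nearest neighbour: cheapest
    for (int y = 0; y < destination->height; y++)
    {
        // widthStep accounts for any row padding in the IplImage
        const uchar* src = (const uchar*)(destination->imageData + y * destination->widthStep);
        char* dst = buffer + y * bytespan;
        for (int x = 0; x < destination->width; x++)
        {
            dst[0] = src[2]; // R (OpenCV rows are stored BGR)
            dst[1] = src[1]; // G
            dst[2] = src[0]; // B
            src += 3;
            dst += 3;
        }
    }
}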
I'm not familiar with OpenCV, but if I'm reading your code correctly, you're:
reading from the camera's buffer into memory (1 copy)
resizing the image (1 copy)
splitting the image into RGB channels (3 copies)
re-merging the channels into the buffer (1 copy)
I think that's a lot of unnecessary copying; for each frame you make 6 copies of the image (i.e. if your image is 320x240 in 24-bit color at 24 fps, you'd be moving around at least 32 MB/sec; with a 1000x1000 frame you're talking about half a gigabyte per second; note that this is a very crude back-of-the-envelope underestimate, since depending on the resizing algorithm extra copying may be done, and reading/writing non-aligned memory locations may incur some overhead, etc.).
You can probably skip steps #3 and/or #4, though I'm not familiar enough with OpenCV to suggest how.