This is my first time posting here, and I'm hoping for a positive result since my research is near its conclusion.
I want to add a function to my code that will process only a defined region of interest in a video file.
(I can't post an image since I don't have enough reputation yet, but the same question is posted here --->
http://answers.opencv.org/question/18619/region-of-interest-in-video-file/)
Storyboard:
I'm making a program in C++/OpenCV that makes pedestrians and vehicles appear to vanish from the scene by taking the running average of the video's frames. I already have that working. My problem now is that I want only the portion of the video under the region of interest to be processed, because I want to preserve the lighting/illumination of the Christmas lights while they are blinking.
Why? I want to use this method to capture only the blinking lights this coming yuletide season, without the disturbance of vehicles and people in the scene.
How can I do that? I mean getting a region of interest in a video file.
Thanks in advance.
Fix your ROI position.
Take that region from each frame of the video.
Then process it.
Repeat for every frame.
Like this:
cv::Rect ROI(startX, startY, width, height);   // the ROI must lie inside the frame
cv::Mat frame, temp;

while (cap.read(frame))                        // cap is your cv::VideoCapture; stops at the end of the video
{
    temp = frame(ROI);                         // a view into the frame, no pixel copy
    process(temp);                             // your own processing function
}
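For the running-average use case in the question, here is a minimal sketch of how the ROI idea might be combined with cv::accumulateWeighted; the file name, ROI coordinates and the alpha value are only placeholders, not values from the question.

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("lights.avi");        // hypothetical input video
    cv::Rect roi(100, 50, 320, 240);           // adjust so it stays inside the frame

    cv::Mat frame, avg;
    while (cap.read(frame))
    {
        cv::Mat region = frame(roi);           // view into the frame, no copy

        if (avg.empty())
            region.convertTo(avg, CV_32F);     // initialise the float accumulator

        // Running average of the ROI only; pixels outside the ROI are untouched.
        cv::accumulateWeighted(region, avg, 0.05);

        cv::Mat averaged;
        avg.convertTo(averaged, region.type());
        averaged.copyTo(region);               // write the averaged ROI back into the frame

        cv::imshow("result", frame);
        if (cv::waitKey(30) == 27) break;      // Esc quits
    }
    return 0;
}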
Is there a way to produce a glare on an image? Given an image with an object, I want to produce a glare on a portion of the image. If I have an image that is 256x256, I want to produce glare on the first 64x64 patch. Is there a function in OpenCV I can use for that? If not, what is a good way to go about this problem?
I think this example does what you need. Each time it saves a face, it produces a flash in the part of the screen where the face was recognised, so the glare changes place and size every time.
You can find it here:
https://github.com/MasteringOpenCV/code/tree/master/Chapter8_FaceRecognition
Look for this part in main.cpp:
// Make a white flash on the face, so the user knows a photo has been taken.
Mat displayedFaceRegion = displayedFrame(faceRect);
displayedFaceRegion += CV_RGB(90,90,90);
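Adapted to the 64x64 patch asked about above, a minimal sketch (the file name is a placeholder, and the brightness offset is the same arbitrary value used in that example):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.png");       // assumed to be the 256x256 image

    // The ROI is a view into img, so the in-place addition brightens the
    // original image; values saturate at 255 rather than wrapping around.
    cv::Mat patch = img(cv::Rect(0, 0, 64, 64)); // first 64x64 patch
    patch += cv::Scalar(90, 90, 90);

    cv::imwrite("glare.png", img);
    return 0;
}

For a softer effect, cv::addWeighted() on the same ROI lets you blend the patch with a white image instead of adding a flat offset.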
I am trying to use OpenCV to detect the shift between consecutive video frames when the camera is unstable and moving in real time, as shown in the picture. To compensate for the effect of shaking or a changing angle, I want to match some objects in the image (for example the clock) and, from the centre of the same object in consecutive frames, detect the shift value and compensate for it. I don't know how to do this in real time, or how many accurate ways there are to do it.
Thank you in advance and I hope my question is clear.
This is a fairly standard operation, as it's actively used in MPEG-4 compression. It's called "motion estimation", and you don't do it on objects (too hard, requires image segmentation). In OpenCV, it's covered by the Video Stabilization module.
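If a global (translation-only) shift per frame pair is enough, a lightweight thing to experiment with before reaching for that module is cv::phaseCorrelate; here is a minimal sketch, assuming you already have two consecutive BGR frames:

#include <opencv2/opencv.hpp>

// Estimate the global translation between two consecutive frames.
cv::Point2d estimateShift(const cv::Mat& prev, const cv::Mat& curr)
{
    cv::Mat g0, g1, f0, f1;
    cv::cvtColor(prev, g0, cv::COLOR_BGR2GRAY);
    cv::cvtColor(curr, g1, cv::COLOR_BGR2GRAY);

    // phaseCorrelate expects single-channel floating-point images.
    g0.convertTo(f0, CV_32F);
    g1.convertTo(f1, CV_32F);

    // Returns the sub-pixel translational shift between the two frames.
    return cv::phaseCorrelate(f0, f1);
}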
If you want to try writing the code yourself, one method is to first crop each frame to produce a sub-image slightly smaller than the full frame along each dimension. This gives you some room to move.
Next you want to find and track shapes in OpenCV; example code is here: http://opencv-srf.blogspot.co.uk/2011/09/object-detection-tracking-using-contours.html. Play around until you get a few geometric primitives coming up in each frame.
Next, build vectors between the centres of the shapes; these are what will determine the movement of the camera. If, in the next frame, most of the vectors are displaced but parallel, that is a good indicator that the camera has moved.
The last step is to calculate the displacement, which should be a matter of measuring the distance between the detected parallel vectors. If this is smaller than your sub-image cropping margin, you can crop the original image so as to negate the displacement.
The pseudocode for each iteration would be something like this:
//Variables
image wholeFrame1, wholeFrame2, subImage, shapesFrame1, shapesFrame2
vectorArray parallelVectorList
vector cameraDisplacement = [0,0]
//Display image
subImage = cropImage(wholeFrame1, cameraDisplacement)
display(subImage);
//Find shapes to track
shapesFrame1 = findShapes(wholeFrame1)
shapesFrame2 = findShapes(wholeFrame2)
//Store a list of parallel vectors
parallelVectorList = detectParallelVectors(shapesFrame1, shapesFrame2)
//Find the mean displacement of each pair of parallel vectors
cameraDisplacement = meanDisplacement(parallelVectorList)
//Crop the next image accounting for the camera displacement
subImage = cropImage(wholeFrame2, cameraDisplacement)
There are better ways of doing it, but this would be easy enough for a first attempt at image stabilisation by someone with OpenCV experience.
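A rough C++ version of that loop, using tracked corner features instead of explicit shapes (the vectors between matched corners play the role of the parallel vectors above); the parameter values and the translation-only camera model are assumptions, so treat this as a sketch rather than a finished stabiliser:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);                    // or a video file path
    const int margin = 40;                      // cropping margin = room to move

    cv::Mat prev, prevGray, frame, gray;
    if (!cap.read(prev)) return 1;
    cv::cvtColor(prev, prevGray, cv::COLOR_BGR2GRAY);

    float offX = margin, offY = margin;         // current crop offset
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Detect corners in the previous frame and track them into the current one.
        std::vector<cv::Point2f> p0, p1;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::goodFeaturesToTrack(prevGray, p0, 200, 0.01, 10);

        float dx = 0.f, dy = 0.f;
        if (!p0.empty())
        {
            cv::calcOpticalFlowPyrLK(prevGray, gray, p0, p1, status, err);

            // Median per-point displacement: robust against a few moving objects.
            std::vector<float> dxs, dys;
            for (size_t i = 0; i < p0.size(); ++i)
                if (status[i])
                {
                    dxs.push_back(p1[i].x - p0[i].x);
                    dys.push_back(p1[i].y - p0[i].y);
                }
            if (!dxs.empty())
            {
                std::nth_element(dxs.begin(), dxs.begin() + dxs.size() / 2, dxs.end());
                std::nth_element(dys.begin(), dys.begin() + dys.size() / 2, dys.end());
                dx = dxs[dxs.size() / 2];
                dy = dys[dys.size() / 2];
            }
        }

        // Move the crop window with the scene content, clamped to the margin.
        offX = std::max(0.f, std::min(2.f * margin, offX + dx));
        offY = std::max(0.f, std::min(2.f * margin, offY + dy));
        cv::Rect roi((int)offX, (int)offY,
                     frame.cols - 2 * margin, frame.rows - 2 * margin);

        cv::imshow("stabilised", frame(roi));
        if (cv::waitKey(1) == 27) break;        // Esc quits

        prevGray = gray.clone();
    }
    return 0;
}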
I'm working on a project that consists of stabilising a video.
During the stabilisation process, I had to convert my frames to grayscale in order to use methods like goodFeaturesToTrack() or opticalFlow().
But at the end of the process, after applying my last transformation with warpAffine(), I would like to recover the colour information of the frame, and I'm not able to do this. I have tried a few things.
I tried cvtColor(outFrame, outFrame, CV_GRAY2BGR), but that doesn't work (obviously); the frame is still black and white.
At the beginning of the loop, I split my original picture into its three colour channels (B, G, R) like this:
Mat channel[3];
split(frameColor, channel);
And then at the end of the process, I do this:
merge(channel,3,outFrame);
So I get the colour of my frame, but it is not stabilised; it's as if merging the channels removed all the transformation.
I also tried using warpAffine() on the colour picture, but I get the same result as above.
Please help me.
Thanks.
I solved my problem.
When you apply a transformation like warpAffine(), you have to apply it to the previous frame, not the current one. I hadn't noticed that I was applying it to my current colour frame rather than the previous colour frame, and therefore nothing changed.
By applying it to my previous colour frame, the image comes out both in colour and stabilised.
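In code, the fix boils down to something like this (the names are illustrative: prevColourFrame is the untouched colour version of frame i-1, and T is the 2x3 affine estimated from the grayscale pair):

cv::Mat stabilised;
cv::warpAffine(prevColourFrame, stabilised, T, prevColourFrame.size());
// stabilised is now both in colour and transformed; no GRAY2BGR conversion
// or channel splitting/merging is needed, since warpAffine handles 3-channel input.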
I am using the CL Eye Multicam C++ API to obtain frames from a PS Eye camera, and I found something interesting that I hope someone can explain to me.
Following this example, if I use the regular code (around line 108):
while (_running)
{
    cvGetImageRawData(pCapImage, &pCapBuffer);
    CLEyeCameraGetFrame(_cam, pCapBuffer);
    cvShowImage(_windowName, pCapImage);
}
The pCapBuffer is updated, BUT if I just do:
while (_running)
{
    CLEyeCameraGetFrame(_cam, pCapBuffer);
}
pCapBuffer remains NULL! So from what I can see, CLEyeCameraGetFrame() only updates pCapBuffer when someone "consumes" it. What I don't get is how CLEyeCameraGetFrame() knows that the buffer was read. I was expecting pCapBuffer to be updated every time I called CLEyeCameraGetFrame(). Is this the regular behaviour of camera frame reads?
Also, if someone could point out how to make a QImage out of this pCapBuffer, it would be very helpful!
I finally understood what's going on. cvGetImageRawData() writes the address of pCapImage's raw data into pCapBuffer, making it point at the image's internal data representation. So every time CLEyeCameraGetFrame() is called, it changes the data that pCapBuffer points to, which is the same data held inside pCapImage. The designer of this code simply used the OpenCV functions to allocate a buffer of the right size and then used it to acquire the frame image.
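Regarding the QImage part of the question: assuming the camera was initialised for 4-byte-per-pixel colour output (as in that sample, where pCapImage is created with 4 channels), something along these lines should work; the pixel format is an assumption, so verify it against your camera mode:

// width and height must match the resolution the camera was initialised with.
// The constructor does not copy the buffer, so pCapBuffer must stay valid while
// the QImage is in use; call .copy() to get an independent, owned image.
QImage view(reinterpret_cast<const uchar*>(pCapBuffer),
            width, height, QImage::Format_RGB32);
QImage ownedCopy = view.copy();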
I'm trying to merge/stitch 2 images together but found that the default stitcher class in OpenCV could not handle my images.
So I started to write my own.
Unfortunately the images are too large to attach to this message (they are both 12600x9000 pixels in size), so I'll try to explain as well as possible.
The two images are not pictures taken by a camera but TIFF files extracted from a PDF file.
The images themselves are actually CAD drawings, so there aren't many gradients in them, which I think is why the default stitcher class could not handle them.
So far, I managed to extract the features and match them.
I also used the following well-known example to stitch them together:
Mat WarpedImage;
cv::warpPerspective(img_2, WarpedImage, homography, cv::Size(2 * img_2.cols, 2 * img_2.rows));
Mat half(WarpedImage, Rect(0, 0, img_1.cols, img_1.rows));
img_1.copyTo(half);
I sort of made it fit, but my problem is that in my case the two images could be aligned either vertically or horizontally.
By default, all stitching examples on the internet assume the first image is the left image and the second image is the right image.
So my first question would be:
How can I detect whether the second image is to the left, right, above or below the first image, and create a properly sized new image?
Secondly:
Currently I'm getting the proper image; however, because I don't have decent code to determine the ideal width and height of the new image, I end up with a lot of black/empty space in it.
What would be the best C++ code to remove those black areas?
(I'm seeing a lot of Python scripts on the net, but no C++ examples of this, and I have zero Python skills.)
Thank you very much in advance for your help.
Greetings,
Floris.
You can reproject the corners of the second image with perspectiveTransform. With the transformed points you can find the relative position of your images and calculate a new image size that will fit both. This will also let you deal with the black areas, since you have the boundaries of the two images.
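A sketch of that idea, assuming homography maps img_2 into img_1's coordinate frame as in the snippet above (the helper name is made up):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Reproject img_2's corners into img_1's frame and compute a canvas size that
// fits both images, plus the shift needed to keep everything at positive coordinates.
cv::Size stitchedCanvas(const cv::Mat& img_1, const cv::Mat& img_2,
                        const cv::Mat& homography, cv::Point2f& shift)
{
    std::vector<cv::Point2f> corners = {
        {0.f, 0.f},
        {(float)img_2.cols, 0.f},
        {(float)img_2.cols, (float)img_2.rows},
        {0.f, (float)img_2.rows}};
    std::vector<cv::Point2f> pts;
    cv::perspectiveTransform(corners, pts, homography);

    // Include img_1's own extent, then take the min/max over all points.
    pts.push_back({0.f, 0.f});
    pts.push_back({(float)img_1.cols, (float)img_1.rows});

    float minX = pts[0].x, minY = pts[0].y, maxX = pts[0].x, maxY = pts[0].y;
    for (const cv::Point2f& p : pts)
    {
        minX = std::min(minX, p.x);  maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y);  maxY = std::max(maxY, p.y);
    }

    // Negative minX/minY means img_2 lands to the left of / above img_1.
    // Pre-multiply the homography by a translation of (shift.x, shift.y)
    // before warping so nothing is clipped.
    shift = cv::Point2f(-minX, -minY);
    return cv::Size((int)std::ceil(maxX - minX), (int)std::ceil(maxY - minY));
}

The same min/max values also tell you where each image ends inside the canvas, which is what you need in order to crop away the remaining black border.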