I am trying to capture an image from a webcam using OpenCV. My code is as follows.
VideoCapture cap0(0);
cap0.set(CV_CAP_PROP_FRAME_WIDTH, 320);
cap0.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
Mat frame;
cap0 >> frame;
string fileName = "/0.jpg";
cout << fileName << endl;
imwrite(fileName, frame);
I am getting this image as output.
You can see some weird lines in the output. What is the possible reason, and how can I eliminate them? Please point me in the right direction.
Thanks
This looks like a problem in the acquisition driver, which doesn't transfer the right row data on every 41st row (the 43rd for the first pair!?), maybe using block transfers.
It seems that there is valid image data in these rows, but I can't identify where they could be coming from.
It may be electromagnetic interference in your case. Try checking it under normal conditions without a power line near the camera, or make a shield for the camera.
I'm looking to make a program that, once run, will continuously look for a template image (stored in the program's directory) to match in real time against the screen. Once found, it will click on the image (i.e. the center of the coordinates of the best match). The images will be exact copies (size/color), so finding the match should not be very hard.
This process then continues with many other images before resetting to start over with the first image, but once I have the first part working I can just copy the code.
I have downloaded the OpenCV library as it has image-matching tools, but I am lost. Any help with writing some stub code or pointing me to a helpful resource is much appreciated. I have checked a lot of the OpenCV docs with no luck.
Thank you.
If you think that the template image would not look very different in the current frame, then you should use matchTemplate() from OpenCV. It's very easy to use and will give you good results.
Have a look here for a complete explanation: http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
void start()
{
VideoCapture cap(0);
Mat templ = imread("template.png"); // load your template image here
namedWindow("matches", 1);
for(;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
Mat result; // holds one correlation score per template position
matchTemplate(frame, templ, result, CV_TM_CCOEFF_NORMED);
imshow("matches", result);
char c = waitKey(33);
if( c == 27 ) break;
}
}
This is my first time posting here, and I am hoping for a positive result since my research is near its conclusion.
I want to add to my code a function that will process only a defined region of interest of a video file.
(I can't post an image since I don't have enough reputation yet, but the same question is posted here --->
http://answers.opencv.org/question/18619/region-of-interest-in-video-file/)
Storyboard:
I'm making a program in C++/OpenCV that makes pedestrians and vehicles appear to vanish from the scene by computing a running average of the video's frames. I already made that. Now my problem is that I want only the portion of the video under the region of interest to be processed, because I want to preserve the lighting/illumination of the Christmas lights while they are blinking.
Why? I want to use this method to capture only the blinking lights this coming yuletide season, without the disturbance of vehicles and people in the scene.
How can I do that? I mean, getting a region of interest in a video file.
Thanks in advance.
Fix your ROI position.
Take the region from each frame of the video.
Then process it.
Apply this to all frames.
Like this:
cv::Rect ROI(startX, startY, width, height);
cv::Mat frame, temp;
while(cap.read(frame))
{
temp = frame(ROI); // header into frame, no copy; use temp.clone() if you need one
process(temp);
}
I'm currently working on a project, and at the moment I need to pull successive frames from a video, then find and match features on them. The problem is that when I call VideoCapture::read(Mat &image), it overwrites both images that I want to compare with the same image. I think it could be because the same buffer is being used, and therefore both values point to the same memory. I'm just not certain how to get around this.
Here's the problem code: (don't worry about the poor exception handling)
Mat m1, m2;
VideoCapture cap(argv[1]);
if(!cap.isOpened()){
throw std::runtime_error("Could not open the file");
}
int num = 0;
while(num < 20){
try{
cap.read(m1);
cap.read(m2);
num++;
match(m1,m2,num);
}catch(const std::exception&){
std::cout << "Oh no!";
}
}
match(m1, m2, num) does the feature-detection business and outputs an image ("Image_%d.jpg", num) showing both inputs side by side with the matches drawn. However, the output shows the same image twice in a row. match() does work, because I have tested it with still images, so I am confident the problem lies in the cap.read code.
Any help/suggestions would be greatly appreciated.
Well, it was as easy as making sure each image was a deep copy of the captured frame.
cap >> m1;
m1 = m1.clone();
did the trick, although less elegantly than I had hoped.
I am currently planning on splitting my image into 3 channels so I can get the RGB values of an image to plot a scatter graph, so I can model it using a normal distribution, calculating the covariance matrix, mean, etc.
Then I calculate the distance between the background points and the actual image to segment the image.
For the first task, I have written the following code.
VideoCapture cam(0);
//int id=0;
Mat image, Rch,Gch,Bch;
vector<Mat> rgb(3); //RGB is a vector of 3 matrices
namedWindow("window");
while(1)
{
cam>>image;
split(image,rgb);
Bch = rgb[0];
Gch = rgb[1];
Rch = rgb[2];
But as soon as it reaches the split function (I stepped through it), it causes an unhandled exception: access violation writing location 0xFEEEFEEE.
I am still new to OpenCV, so I am not used to dealing with unhandled exceptions.
thanks
It sounds as if split expects three usable Mat instances in the rgb vector, but the ones you prepared may not be in the state split expects.
Try explicitly adding three Mat items to the vector and run it again.
Although this is an old question, I would like to share the solution that worked for me. Instead of vector<Mat> rgb(3); I used Mat channels[3];. I realized something was wrong with using a vector when I was not able to use split even on an image loaded with imread. Unfortunately, I cannot explain why this change works, but if someone can, that would be great.
I am writing C++ code with OpenCV where I'm trying to detect a chessboard on an image (loaded from a .jpg file) to warp the perspective of the image. When the chessboard is found by findChessboardCorners(), the rest of my code is working perfectly. But sometimes the function does not detect the pattern, and this behavior seems to be random.
For example, there is one image that works at its original resolution of 2560x1920, but not if I scale it down to 800x600 with GIMP first. However, another image seems to do the opposite: it doesn't work at the original resolution, but does work scaled down.
Here's the bit of my code that does the detection:
Mat grayimg = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
if (grayimg.data == NULL) {
printf("Unable to read image");
return 0;
}
bool patternfound = findChessboardCorners(grayimg, patternsize, corners,
CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK);
if (!patternfound) {
printf("Chessboard not found");
return 0;
}
Is there some kind of bug in OpenCV causing this behavior? Does anyone have any tips on how to pre-process the image so the function will work more consistently?
I already tried playing around with the parameters CALIB_CB_ADAPTIVE_THRESH, CALIB_CB_NORMALIZE_IMAGE, CALIB_CB_FILTER_QUADS and CALIB_CB_FAST_CHECK. I'm also having the same results when I pass in a color image.
Thanks in advance
EDIT: I'm using OpenCV version 2.4.1
I had a very hard time getting findChessboardCorners to work until I added a white border around the chessboard.
I found that hint somewhere in the more recent documentation.
Before adding the border, it would sometimes be impossible to recognize the chessboard, but with the white border it works every time.
Welcome to the joys of real-world computer vision :-)
You don't post any images, and findChessboardCorners is a bit too high-level to debug. I suggest displaying (in Octave, or MATLAB, or with more OpenCV code) the locations of the detected corners on top of the image, to see whether enough are detected. If none are, try running cvCornerHarris by itself on the image.
Sometimes the cause of the problem is excessive graininess in the image: try blurring it just a little and see if that helps.
Also, try removing the CALIB_CB_FAST_CHECK option and see if that makes a difference.
CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_FAST_CHECK is not necessarily the same as CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_FAST_CHECK; for combining flags you should use | (bitwise OR).