How to read more than one image using highgui (OpenCV) - C++

I am developing a program that displays a batch of images and records the position I click in each image.
I would like to load a set of images (named in incremental order) and open each one after the previous one is closed.
Step by step, what I want the program to do:
A folder with a batch of images named in order (JPEG, TIFF and PNG formats), e.g. IMG_00000001.JPG to IMG_00000003.JPG...
When I run the program, it displays the first image (IMG_00000001.JPG).
I click the image and the console prints the position I clicked.
After the window is closed, the next image is displayed (IMG_00000002.JPG).
This continues until the last image in the folder.
Thanks a lot! I have been searching the internet for the past few weeks; there are examples, but I get errors every single time I run them, and I am frustrated and desperate for an answer!
Here is my code
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace std;
using namespace cv;
void CallBackFunc(int event, int x, int y, int flags, void* userdata)
{
if ( event == EVENT_LBUTTONDOWN )
{
cout << "Clicked position is: (" << x << ", " << y << ")" << endl;
}
}
int main(int argc, char** argv)
{
// Read image from file
Mat img = imread("cube_0.JPG");
Mat img1 = imread("cube_1.JPG");
//if fail to read the image
if ( img.empty() )
{
cout << "Error loading the image" << endl;
return -1;
}
//Create a window
namedWindow("My Window", 1);
//set the callback function for any mouse event
setMouseCallback("My Window", CallBackFunc, NULL);
//show the image
imshow("My Window", img);
// Wait until user press some key
waitKey(0);
return 0;
}
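Not a definitive solution, but a minimal sketch of the loop described in the question. It assumes the files are named IMG_00000001.JPG, IMG_00000002.JPG, ... in the working directory (the name pattern and the image count below are placeholders, and only JPG is handled), and it advances to the next image when any key is pressed rather than when the window is closed:

#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <iomanip>
#include <sstream>

using namespace std;
using namespace cv;

void CallBackFunc(int event, int x, int y, int flags, void* userdata)
{
    if (event == EVENT_LBUTTONDOWN)
        cout << "Clicked position is: (" << x << ", " << y << ")" << endl;
}

int main()
{
    const int numImages = 3;  // assumption: IMG_00000001.JPG .. IMG_00000003.JPG exist
    for (int i = 1; i <= numImages; i++)
    {
        // Build the file name, e.g. IMG_00000001.JPG
        ostringstream name;
        name << "IMG_" << setw(8) << setfill('0') << i << ".JPG";

        Mat img = imread(name.str());
        if (img.empty())
        {
            cout << "Error loading " << name.str() << endl;
            continue;
        }

        namedWindow("My Window", 1);
        setMouseCallback("My Window", CallBackFunc, NULL);
        imshow("My Window", img);

        // Wait for a key press, then move on to the next image
        waitKey(0);
        destroyWindow("My Window");
    }
    return 0;
}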

Related

Overlay using OpenCV in C++

My question is about trying to fix the marked line (highlighted in the code below) so that my overlay works properly and the black pixels become white pixels based on my conditional statement. I have tried several things, such as using different types:
out1.at<Vec3b>(i,j)[0]=image.at<Vec3b>(i,j)[0];
out1.at<Vec3b>(i,j)[1]=image.at<Vec3b>(i,j)[1];
out1.at<Vec3b>(i,j)[2]=image.at<Vec3b>(i,j)[2];
But I got a heap error. I believe I am really close, but I need some advice or guidance. Please excuse any errors I have made in posting; this is my first post.
Here is my code.
#include <iostream>
#include <stdint.h>
#include "opencv2/opencv.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/core/core.hpp"

using namespace std;
using namespace cv;

int main(int argv, char** argc)
{
    Mat image; // new blank image
    Mat image2;
    Mat out1;
    image2 = cv::imread("test2.bmp", CV_LOAD_IMAGE_GRAYSCALE); // read the file
    image = cv::imread("test1.bmp", CV_LOAD_IMAGE_GRAYSCALE);
    if (!image.data) // error handling if file does not load
    {
        cout << "Image 1 not loaded";
        return -1;
    }
    if (!image2.data)
    {
        cout << "Image 2 not loaded";
        return -1;
    }
    // resize images to make sure all images are the same size
    cv::resize(image2, image2, image.size());
    cv::resize(image2, out1, image.size());
    // copying content of overlay image to output file
    //image2.copyTo(out1);
    out1 = image2.clone();
    // for loop comparing pixels to original image
    for (int i = 0; i < out1.rows; i++)
    {
        for (int j = 0; j < out1.cols; j++)
        {
            //Vec3b color = image.at<Vec3b>(Point(i,j));
            if (out1.at<uchar>(i,j) == 0 && out1.at<uchar>(i,j) == 0 &&
                out1.at<uchar>(i,j) == 0)
            {
                out1.at<Vec3b>(i,j)[0] = 255; // blue channel
                out1.at<Vec3b>(i,j)[1] = 255; // green channel
                out1.at<Vec3b>(i,j)[2] = 255; // red channel
            }
            else
                out1.at<uchar>(i,j) = image.at<uchar>(i,j); // <-- the line I am trying to fix
        }
    }
    cv::imwrite("out1.bmp", out1); // save to output file
    namedWindow("Display window", CV_WINDOW_AUTOSIZE); // create a window to display w/label
    imshow("Display window", out1); // show image inside display window
    waitKey(0);
    return 0;
}
My image is close to being overlaid correctly. My issue is that the pixels show up black instead of white, due to the marked line in my program.
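Not an authoritative fix, but one likely culprit: both images are loaded with CV_LOAD_IMAGE_GRAYSCALE, so out1 is a single-channel (CV_8UC1) Mat, yet the if branch writes to it through at<Vec3b>, which steps three bytes per pixel into a one-byte-per-pixel buffer and can corrupt the heap. A sketch that stays single-channel throughout, assuming the goal is "black overlay pixels become white, everything else keeps the original image's value":

// Sketch: treat out1 as the single-channel image it is (CV_8UC1)
for (int i = 0; i < out1.rows; i++)
{
    for (int j = 0; j < out1.cols; j++)
    {
        if (out1.at<uchar>(i, j) == 0)
            out1.at<uchar>(i, j) = 255;                   // black overlay pixel -> white
        else
            out1.at<uchar>(i, j) = image.at<uchar>(i, j); // keep the original pixel
    }
}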

C++ OpenCV waitKey(0) not working?

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;
using namespace std;

int isSquare(String fileName);

int main() {
    String imgName = "C:/A.jpg";
    isSquare(imgName);
}

int isSquare(String fileName) {
    Mat img;
    img = cv::imread(fileName, IMREAD_COLOR);
    if (img.empty()) {
        cout << "Could not open or find the image" << endl;
        return -1;
    }
    //namedWindow("display", WINDOW_AUTOSIZE);
    imshow("display", img);
    waitKey(0);
    cout << "hi";
    destroyWindow("display");
    return 0;
}
Hi, I'm currently messing around with OpenCV 3.3.0 and C++. I'm trying to open an image, but the display window keeps disappearing when I execute the code above. I commented out namedWindow("display", WINDOW_AUTOSIZE); because the OpenCV documentation says cv::imshow() will create a window automatically, and if I un-comment that line I get one gray window and one image window.
I don't want that gray window, and the key input for waitKey(0) only works when I focus on the gray window, not on the image window.
So I commented that line out, but then the image window disappears instantly when I run the code, as if the waitKey(0) call weren't there. Clearly waitKey(0) is not waiting, because the cout << "hi"; after waitKey(0) is executed.
Am I missing something? Is the documentation wrong and is namedWindow necessary? All I wanted was to get rid of that gray window... any words of wisdom are appreciated, thanks.

Overlaying a video sequence onto another video in OpenCV

How can I add a small video sequence to another video using OpenCV?
To elaborate: suppose I have a video playing that is meant to be interactive, where the viewer gestures something and a short sequence plays at the bottom or in a corner of the existing video.
For each frame, you need to copy an image with the content you need inside the video frame. The steps are:
- Define the size of the overlay frame
- Define where to show the overlay frame
- For each frame:
  - Fill the overlay frame with some content
  - Copy the overlay frame into the defined position in the original frame
This small snippet shows a random-noise overlay window in the bottom right of the camera feed:
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    // Video capture frame
    Mat3b frame;

    // Overlay frame
    Mat3b overlayFrame(100, 200);

    // Init VideoCapture
    VideoCapture cap(0);

    // check if we succeeded
    if (!cap.isOpened()) {
        cerr << "ERROR! Unable to open camera\n";
        return -1;
    }

    // Get video size
    int w = cap.get(CAP_PROP_FRAME_WIDTH);
    int h = cap.get(CAP_PROP_FRAME_HEIGHT);

    // Define where to show the overlay frame (bottom-right corner)
    Rect roi(w - overlayFrame.cols, h - overlayFrame.rows, overlayFrame.cols, overlayFrame.rows);

    //--- GRAB AND WRITE LOOP
    cout << "Start grabbing" << endl
         << "Press any key to terminate" << endl;
    for (;;)
    {
        // wait for a new frame from camera and store it into 'frame'
        cap.read(frame);

        // check if we succeeded before touching the frame
        if (frame.empty()) {
            cerr << "ERROR! blank frame grabbed\n";
            break;
        }

        // Fill overlayFrame with something meaningful (here random noise)
        randu(overlayFrame, Scalar(0, 0, 0), Scalar(256, 256, 256));

        // Overlay: copy the small frame into the ROI of the full frame
        overlayFrame.copyTo(frame(roi));

        // show live and wait for a key with timeout long enough to show images
        imshow("Live", frame);
        if (waitKey(5) >= 0)
            break;
    }

    // the camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}

Why does the function cv::subtract() return the error "size of input arguments do not match"?

I want to subtract two successive images taken from the webcam.
As you can see, I am doing this inside a while loop. In the last line of the loop I set frame2 = frame so that I can subtract them in the next iteration. But cv::subtract prints the above error in the terminal.
What am I doing wrong?
#include <iostream>
#include "core.hpp"
#include "highgui.hpp"
#include "imgcodecs.hpp"
#include "cv.h"

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    VideoCapture cap(0); /// open video camera no. 0 (laptop's default camera)

    /// make a writer object:
    cv::VideoWriter writer;

    if (!cap.isOpened()) /// if not success, exit program
    {
        cout << "ERROR INITIALIZING VIDEO CAPTURE" << endl;
        return -1;
    }

    char* windowName = "Webcam Feed(diff image)";
    namedWindow(windowName, WINDOW_NORMAL); /// create a window to display our webcam feed

    /// we need to define 4 arguments for initializing the writer object:
    // filename string:
    string filename = "C:\\Users\\PEYMAN\\Desktop\\ForegroundExtraction\\openCV_tutorial\\2.writing from video to file\\Payman.avi";
    // fourcc integer:
    int fcc = CV_FOURCC('D','I','V','3');
    // frames per second integer:
    int fps = 10;
    // frame size:
    cv::Size framesize(cap.get(CV_CAP_PROP_FRAME_WIDTH), cap.get(CV_CAP_PROP_FRAME_HEIGHT));

    /// initialize the writer object:
    writer = VideoWriter(filename, fcc, fps, framesize);
    if (!writer.isOpened()) {
        cout << "Error opening the file" << endl;
        getchar();
        return -1;
    }

    int counter = 0;
    while (1) {
        Mat frame, frame2, diff_frame;

        /// read a new frame from the camera feed and save it to the variable frame:
        bool bSuccess = cap.read(frame);
        if (!bSuccess) /// test if frame was successfully read
        {
            cout << "ERROR READING FRAME FROM CAMERA FEED" << endl;
            break;
        }

        /// the last read frame is stored in the variable frame and here it is written to the file:
        writer.write(frame);

        if (counter > 0) {
            cv::subtract(frame2, frame, diff_frame);
            imshow(windowName, diff_frame); /// show the frame in the window
        }

        /// wait 1 ms for a key to be pressed
        switch (waitKey(1)) {
        /// writing from the webcam feed continues until the user presses "esc":
        case 27:
            /// 'esc' has been pressed (ASCII value for 'esc' is 27)
            /// exit program.
            return 0;
        }

        frame2 = frame;
        counter++;
    }
    return 0;
}
Every time the while loop executes, frame2 is created and default-initialized. When you call
cv::subtract(frame2, frame, diff_frame);
you are passing a default-constructed Mat (frame2) together with a Mat that holds an image (frame). The two Mats will not be the same size, so you get the error.
You need to move the declarations of frame and frame2 outside the while loop if you want them to retain their values between iterations. You also need to initialize frame2 to the same size, or capture a second image into it, so that the subtraction works the first time through.
You need to declare frame2 outside the scope of the while loop, like you did with counter. Right now, you get a fresh, empty frame2 with each iteration of the loop.
You might as well move all the Mats outside the while loop so that memory doesn't have to be de-allocated at the end of each iteration and re-allocated in the next, although this isn't an error and you likely won't see the performance penalty in this case.
Also, #rhcpfan is right in that you need to be careful about shallow vs. deep copies. Use cv::swap(frame, frame2).
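A minimal sketch of the corrected loop, applying the advice above: the Mats are hoisted out of the loop and frame is deep-copied into frame2 so the next iteration has real data to subtract. It assumes cap, writer and windowName are set up exactly as in the question; clone() is used here as one way to avoid the shallow-copy issue, and cv::swap(frame, frame2) after the subtraction would be another.

Mat frame, frame2, diff_frame; // declared once, outside the loop
int counter = 0;
while (true) {
    if (!cap.read(frame)) {    // grab the next frame
        cout << "ERROR READING FRAME FROM CAMERA FEED" << endl;
        break;
    }
    writer.write(frame);

    if (counter > 0) {
        // frame2 holds the previous frame here, so the sizes match
        cv::subtract(frame2, frame, diff_frame);
        imshow(windowName, diff_frame);
    }

    frame2 = frame.clone();    // deep copy: keep the previous frame's pixels
    counter++;

    if (waitKey(1) == 27)      // 'esc' exits
        return 0;
}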

OpenCV unhandled exception when using cvCreateImage()

I have the code below. It is a realtime edge-detection program, but I get an error on the line: pProcessedFrame = cvCreateImage(cvSize(pFrame->width, pFrame->height), IPL_DEPTH_8U, 1);
"Unhandled exception at 0x00007FF6CAF1284C in opencv2.exe: 0xC0000005: Access violation reading location 0x000000000000002C."
Can anybody resolve this issue?
My configuration is Visual Studio 2013 and OpenCV 2.4.10.
#include <iostream>
#include "opencv/cv.h"
#include "opencv/highgui.h"

using namespace std;

// Define the IplImage pointers we're going to use as globals
IplImage* pFrame;
IplImage* pProcessedFrame;
IplImage* tempFrame;

// Slider for the low threshold value of our edge detection
int maxLowThreshold = 1024;
int lowSliderPosition = 150;

// Slider for the high threshold value of our edge detection
int maxHighThreshold = 1024;
int highSliderPosition = 250;

// Function to find the edges of a given IplImage object
IplImage* findEdges(IplImage* sourceFrame, double theLowThreshold, double theHighThreshold, double theAperture)
{
    // Convert source frame to greyscale version (tempFrame has already been initialised to use greyscale colour settings)
    cvCvtColor(sourceFrame, tempFrame, CV_RGB2GRAY);

    // Perform Canny edge finding on tempFrame, and push the result back into itself!
    cvCanny(tempFrame, tempFrame, theLowThreshold, theHighThreshold, theAperture);

    // Pass back our now processed frame!
    return tempFrame;
}

// Callback function to adjust the low threshold on slider movement
void onLowThresholdSlide(int theSliderValue)
{
    lowSliderPosition = theSliderValue;
}

// Callback function to adjust the high threshold on slider movement
void onHighThresholdSlide(int theSliderValue)
{
    highSliderPosition = theSliderValue;
}

int main(int argc, char** argv)
{
    // Create two windows
    cvNamedWindow("WebCam", CV_WINDOW_AUTOSIZE);
    cvNamedWindow("Processed WebCam", CV_WINDOW_AUTOSIZE);

    // Create the low threshold slider
    // Format: Slider name, window name, reference to variable for slider, max value of slider, callback function
    cvCreateTrackbar("Low Threshold", "Processed WebCam", &lowSliderPosition, maxLowThreshold, onLowThresholdSlide);

    // Create the high threshold slider
    cvCreateTrackbar("High Threshold", "Processed WebCam", &highSliderPosition, maxHighThreshold, onHighThresholdSlide);

    // Create CvCapture object to grab data from the webcam
    CvCapture* pCapture;

    // Start capturing data from the webcam
    pCapture = cvCaptureFromCAM(CV_CAP_V4L2);

    // Display image properties
    cout << "Width of frame: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_FRAME_WIDTH) << endl;   // Width of the frames in the video stream
    cout << "Height of frame: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_FRAME_HEIGHT) << endl; // Height of the frames in the video stream
    cout << "Image brightness: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_BRIGHTNESS) << endl;  // Brightness of the image (only for cameras)
    cout << "Image contrast: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_CONTRAST) << endl;      // Contrast of the image (only for cameras)
    cout << "Image saturation: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_SATURATION) << endl;  // Saturation of the image (only for cameras)
    cout << "Image hue: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_HUE) << endl;                // Hue of the image (only for cameras)

    // Create an image from the frame capture
    pFrame = cvQueryFrame(pCapture);

    // Create a greyscale image which is the size of our captured image
    pProcessedFrame = cvCreateImage(cvSize(pFrame->width, pFrame->height), IPL_DEPTH_8U, 1);

    // Create a frame to use as our temporary copy of the current frame, but in grayscale mode
    tempFrame = cvCreateImage(cvSize(pFrame->width, pFrame->height), IPL_DEPTH_8U, 1);

    // Loop control vars
    char keypress;
    bool quit = false;

    while (quit == false)
    {
        // Make an image from the raw capture data
        // Note: cvQueryFrame is a combination of cvGrabFrame and cvRetrieveFrame
        pFrame = cvQueryFrame(pCapture);

        // Draw the original frame in our window
        cvShowImage("WebCam", pFrame);

        // Process the frame to find the edges
        pProcessedFrame = findEdges(pFrame, lowSliderPosition, highSliderPosition, 3);

        // Show the processed output in our other window
        cvShowImage("Processed WebCam", pProcessedFrame);

        // Wait 20 milliseconds
        keypress = cvWaitKey(20);

        // Set the flag to quit if escape was pressed
        if (keypress == 27)
        {
            quit = true;
        }
    } // End of while loop

    // Release our stream capture object to free up any resources it has been using and release any file/device handles
    cvReleaseCapture(&pCapture);

    // Release our images
    cvReleaseImage(&pFrame);
    cvReleaseImage(&pProcessedFrame);

    // This causes errors if you don't set it to NULL before releasing it. Maybe because we assign
    // it to pProcessedFrame as the end result of the findEdges function, and we've already released pProcessedFrame!!
    tempFrame = NULL;
    cvReleaseImage(&tempFrame);

    // Destroy all windows
    cvDestroyAllWindows();
}
Thank you all. I found the solution: my camera was not capturing an image. I changed to another camera and now the code runs fine.
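That resolution is consistent with the crash: when the camera delivers nothing, cvQueryFrame returns NULL, and dereferencing pFrame->width causes the access violation. As a sketch (not a verified fix for this exact setup), a guard right after the capture and the first cvQueryFrame call in main() would fail gracefully instead of crashing:

// Sketch: guard the capture and the first frame (slots into main() above)
pCapture = cvCaptureFromCAM(CV_CAP_V4L2);
if (pCapture == NULL)
{
    cout << "Unable to open the camera" << endl;
    return -1;
}

pFrame = cvQueryFrame(pCapture);
if (pFrame == NULL)
{
    cout << "Unable to grab a frame from the camera" << endl;
    cvReleaseCapture(&pCapture);
    return -1;
}

// pFrame is valid here, so reading pFrame->width and pFrame->height is safe
pProcessedFrame = cvCreateImage(cvSize(pFrame->width, pFrame->height), IPL_DEPTH_8U, 1);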