I am using OpenCV and C++. I am writing a very simple program. What it does is take 3 images in a vector of Mat, convert those images to HSV, and store the HSV versions of the originals in a second vector. I want to display all 3 HSV images obtained. But when my program loops, it displays only the last HSV image in the vector. Here is my code: http://pastebin.com/z7FBrtxs.
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(){
    vector<Mat> imgs;
    Mat left = imread("left.jpg");
    Mat front = imread("front.jpg");
    Mat right = imread("right.jpg");
    imgs.push_back(left);
    imgs.push_back(front);
    imgs.push_back(right);
    vector<Mat> hsvs;
    Mat left_hsv;
    Mat front_hsv;
    Mat right_hsv;
    cvtColor(left, left_hsv, CV_BGR2HSV);
    cvtColor(front, front_hsv, CV_BGR2HSV);
    cvtColor(right, right_hsv, CV_BGR2HSV);
    hsvs.push_back(left_hsv);
    hsvs.push_back(front_hsv);
    hsvs.push_back(right_hsv);
    for(int i=0;i<3;i++){
        imshow("hsv", hsvs[i]);
    }
    waitKey(0);
    return 0;
}
This
for(int i=0;i<3;i++){
    imshow("hsv", hsvs[i]);
}
waitKey(0);
means that you are displaying all the images in the same window, named "hsv", and only waiting for user input after displaying the last one. The images actually are all shown in the window, in sequence; they just switch so fast that you never see any but the last.
Change it to
for(int i=0;i<3;i++){
    imshow("hsv", hsvs[i]);
    waitKey(0);
}
and you should be good.
This change means each image will be shown in the "hsv" window, and the program then waits for you to press a key before showing the next image.
You could also show all the images at once by giving each window a different name: "hsv1", "hsv2", etc.
Either use:
for(int i=0;i<3;i++){
    imshow("hsv", hsvs[i]);
    waitKey(0); // note: wait for user input after every image
}
or show them in three different named windows (see documentation).
As no distinct window names or coordinates are provided, I'd guess all images are drawn in the same default place, and the last image is painted over all the previous ones.
Related
I'm trying to take and display the most current frame on any key event. It is important to me that the photo is taken only once (without using an infinite loop). I have a problem because the displayed photo is the previous one, not the most current. It looks like there is some buffer of captured frames in the VideoCapture object.
I'm using OpenCV 4.6.0 and laptop camera.
#include <iostream>
#include <opencv2/opencv.hpp>
int main() {
    cv::VideoCapture cap0(0);
    cv::Mat frame0;
    for (;;)
    {
        cap0.read(frame0);
        if (!frame0.empty())
        {
            imshow("cap0", frame0);
        }
        cv::waitKey(0);
    }
    return 0;
}
Is it possible to get and display the current photo whenever I press a key, without running an infinite loop?
I'm a newbie with OpenCV. I just managed to install it and set it up with Visual Studio 2013. I tested it with a live-stream sample from my laptop's camera and it works. Now I want to calculate the distance from the webcam to a red laser spot that will be in the middle of the screen (live stream). Where should I start? I know that I must find the R (red) pixel in the middle of the screen, but I don't know how to do that or which functions I can use. Some help, please?
The live stream from webcam that works is shown below:
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <stdio.h>
int main()
{
    // Data structure to store the camera capture
    CvCapture* cap = cvCreateCameraCapture(0);
    // Image variable to store a frame
    IplImage* frame;
    // Window to show the live feed
    cvNamedWindow("Imagine Live", CV_WINDOW_AUTOSIZE);
    while(1)
    {
        // Load the next frame
        frame = cvQueryFrame(cap);
        // If the frame is not loaded, report it
        if(!frame)
            printf("\nno");
        // Show the present frame
        cvShowImage("Imagine Live", frame);
        // Escape sequence
        char c = cvWaitKey(33);
        // If the key pressed by the user is Esc (ASCII 27), break out of the loop
        if(c == 27)
            break;
    }
    // Clean up
    cvReleaseCapture(&cap);
    cvDestroyAllWindows();
}
Your red dot is most likely going to show up as pure white in the camera stream, so I would suggest:
Convert to grayscale using cvtColor().
Threshold using threshold(), for parameters use something like thresh=253, maxval=255 and mode THRESH_BINARY. That should give you an image that is all black with a small white dot where your laser is.
Then you can use findContours() to locate your dot in the image. Get the boundingRect() of a contour and then you can calculate its center to get the precise coordinates of your dot.
Also, as has already been mentioned, do not use the deprecated C API; use the C++ API instead.
I'm new to OpenCV and am working on a video analysis project. Basically, I want to split my webcam feed into two sides (left and right), which I have already figured out. However, I also want to analyze each side for red and green colors and print out the number of pixels that are red/green. I must have gone through every possible blog to figure this out, but it still doesn't work. The following code runs; however, instead of detecting red as the code might suggest, it seems to pick up white (all light sources and white walls). I have spent hours combing through the code but still cannot find the solution. Please help! Also note that this is being run on OS X 10.8, via Xcode. Thanks!
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"
using namespace cv;
using namespace std;
int main( int argc, char** argv )
{
    VideoCapture cap(0); // capture the video from the webcam
    if ( !cap.isOpened() ) // if not successful, exit the program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    namedWindow("HSVLeftRed", CV_WINDOW_AUTOSIZE);
    namedWindow("HSVLeftGreen", CV_WINDOW_AUTOSIZE);
    while (true) {
        Mat image;
        cap.read(image);
        Mat HSV;
        Mat threshold;
        // Left cropping
        Mat leftimg = image(Rect(0, 0, 640, 720));
        // Left red detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(0,0,150), Scalar(0,0,255), threshold);
        imshow("HSVLeftRed", threshold);
        // Left green detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(still need to find proper min values), Scalar(still need to find proper max values), threshold);
        imshow("HSVLeftGreen", threshold);
    }
    return 0;
}
You're cropping a 640x720 area, which might not match your actual frame size. Tip: check your actual capture resolution with capture.get(CAP_PROP_FRAME_WIDTH) and capture.get(CAP_PROP_FRAME_HEIGHT). You might also want to rename Mat threshold to Mat thresholded, since it shadows the threshold() function. This is just some ranting :)
What I suspect is the actual issue is the threshold you use for HSV. According to the cvtColor documentation, in the section on RGB to HSV conversion,
On output 0 <= V <= 1.
This applies to floating-point images, so there you should use a float for your V threshold, i.e. 150 -> 150/255 ≈ 0.58, etc.
I'm working on a project using OpenCV 2.4.3. I need to get the foreground during a stream. My problem is that using cv::absdiff to get it doesn't really help. Here is my code and the result.
#include <iostream>
#include<opencv2\opencv.hpp>
#include<opencv2\calib3d\calib3d.hpp>
#include<opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
int main (){
    cv::VideoCapture cap(0);
    cv::Mat frame, frame1, frame2;
    cap >> frame;
    frame.copyTo(frame1);
    cv::imwrite("background.jpeg", frame1);
    int key = 0;
    while(key != 27){
        cap >> frame;
        cv::absdiff(frame, frame1, frame2); // frame2 = frame - frame1
        cv::imshow("foreground", frame2);
        if(key == 'c'){
            //frame.copyTo(frame2);
            cv::imwrite("foreground.jpeg", frame2);
            key = 0;
        }
        cv::imshow("frame", frame);
        key = cv::waitKey(10);
    }
    cap.release();
    return 0;
}
As you can see, the subtraction works, but what I want to get is only the values of the pixels that changed. For example, if a pixel in the background is [130,130,130] and the same pixel is [200,200,200] in the frame, I want to get exactly those latter values and not [70,70,70].
I've already seen this tutorial: http://mateuszstankiewicz.eu/?p=189
but I can't understand the code, and I have problems setting up cv::BackgroundSubtractorMOG2 with my OpenCV version.
Thanks in advance for your help.
BackgroundSubtractorMOG2 should work with #include "opencv2/video/background_segm.hpp"
The samples shipped with OpenCV have two nice C++ examples (in the samples\cpp directory):
bgfg_segm.cpp shows how to use the BackgroundSubtractorMOG2
bgfg_gmg.cpp uses BackgroundSubtractorGMG
To get the last values (and assuming you meant the foreground pixel values), you can copy the frame using the foreground mask. This is also done in the first example, in the following snippet:
bg_model(img, fgmask, update_bg_model ? -1 : 0);
fgimg = Scalar::all(0);
img.copyTo(fgimg, fgmask);
I am using OpenCV and trying to apply a Gaussian blur to an incoming video stream. I basically use cvQueryFrame to grab a frame, blur it, and display the frame on the screen. The thing is, though, my video gets stuck on the first frame after I apply the blur... anyone know why? It basically shows one frame instead of a video. The second I remove the blur, it starts outputting video again.
#include "cv.h"
#include "highgui.h"
#include "cvaux.h"
#include <iostream>
using namespace std;
int main()
{
    // declare initial data
    IplImage *grabCapture = 0; // used for initial video frame capture
    IplImage *process = 0;     // used for processing
    IplImage *output = 0;      // displays final output
    CvCapture* vidStream = cvCaptureFromCAM(0);
    cvNamedWindow("Output", CV_WINDOW_AUTOSIZE);
    int createimage = 1;
    while (1)
    {
        grabCapture = cvQueryFrame(vidStream);
        if (createimage == 1)
        {
            process = cvCreateImage(cvGetSize(grabCapture), IPL_DEPTH_16U, 3);
            createimage = 0;
        }
        *process = *grabCapture;
        cvSmooth(process, process, CV_GAUSSIAN, 7, 7); // line that makes it display a frame instead of video
        cvShowImage("Output", process);
    }
    // clean up data
    cvReleaseImage(&grabCapture);
    cvReleaseImage(&process);
    cvReleaseImage(&output);
    cvReleaseCapture(&vidStream);
    return 0;
}
You are missing a call to cvWaitKey. This is the only way to tell OpenCV to process events and thus prevent the GUI from freezing.
Try adding this line:
cvWaitKey(10);
after cvShowImage("Output",process);.
Edit: here is the documentation for cvWaitKey