C++: OpenCV2.4.11(!) access to webcam parameters

This is a direct follow-up to the last question I asked, which was aptly named "C++: OpenCV2.3.1(!) access to webcam parameters" and where I was told to install OpenCV2.4.11 instead (OpenCV3.0 did not work)... which I did. And yes, most of this text is an exact copy&paste of the last thread, since my problem hasn't actually vanished...
Again, I've searched here and on other forums (Google, OpenCV, etc.), looked at the code of the videoInput library, the different header files and especially OpenCV's highgui_c.h, and still seem to be unable to find an answer to this very simple question:
How do I change exposure and gain (or, to be general, any webcam property) in my Logitech C310 webcam with OpenCV2.4.11 the same way I was able to with OpenCV2.1.0? (using Win7 64-bit, Visual Studio 10)
EDIT: This has been solved. I do not know how but when I tested my code this morning it was able to report and set the exposure using VideoCapture and the set/get method.
There's the nice and easy VideoCapture get and set method, I know, similar to the videoInput's [Set/Get]VideoSetting[Camera/Filter] functions. Here's my short example in OpenCV2.4.11 that doesn't work:
EDIT: It does work now. What I don't understand is that the values of several properties are reported as -8.58993E+008 (namely hue, monochrome, gamma, temperature, zoom, focus, pan, tilt, roll and iris) and that property 6 (fourcc) is -4.66163E+008. I know I don't have these features on my webcam, but all other unimplemented features report -1.
#include <windows.h>
#include <sstream>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int __stdcall WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, char* CmdArgs, int CmdShow) {
    int device0 = 0;
    VideoCapture VC(device0);
    if (!VC.isOpened()) // check if we succeeded
        return -1;
    ostringstream oss;
    double CamProp;
    for (int i = -4; i < 27; i++) {
        CamProp = VC.get(i);
        Sleep(5);
        oss << "Item " << i << ": " << CamProp << "\n";
    }
    MessageBox(NULL, oss.str().c_str(), "Webcam Values", MB_OK);
    return 0;
}
It compiles, it runs, it accesses the webcam alright (and even shows a picture with imshow if I add it to the code) but it only opens a nice window saying this:
Item -4: 0
Item -3: 0
Item -2: 0
...
Item 2: 0
Item 3: 640
Item 4: 480
Item 5: 0
...
Item 25: 0
Item 26: 0
EDIT: See above, this works now. I get values for all supported parameters like exposure, gain, sharpness, brightness, contrast and so on. Perhaps I was still linking to the 2.3.1 libraries or whatever.
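For reference, this is roughly what the now-working set/get usage looks like. The concrete values here are only illustrations (under DirectShow, exposure is typically a log2 shutter value, so valid ranges depend entirely on the camera):

VideoCapture VC(0);
VC.set(CV_CAP_PROP_EXPOSURE, -5); // example value only, camera-dependent
VC.set(CV_CAP_PROP_GAIN, 30);     // example value only, camera-dependent
double exposure = VC.get(CV_CAP_PROP_EXPOSURE);
double gain = VC.get(CV_CAP_PROP_GAIN);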
The point is: this was all perfectly settable with this camera under OpenCV 2.1.0 using videoInput. I had a running application doing its own lighting control instead of relying on the Logitech functions (RightLight, auto exposure, auto white balance). Setting and getting parameters has been integrated into OpenCV's highgui for quite a while now, but with a strongly reduced feature list (no querying of parameter ranges - Min/Max/Step - no control of auto exposure, RightLight and similar features), and for some reason it was incompatible with my Logitech webcam: I could report the resolution but nothing else.
EDIT: I still miss the Min, Max, Step, Auto/Manual features of videoInput. I can set a value but I don't know whether it's allowed.
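One workaround, since 2.4 offers no range queries: set the value, then read it back to see whether it stuck. A minimal sketch (assuming <cmath> and the OpenCV headers are included; the helper name is my own):

bool TrySetProperty(VideoCapture &cap, int prop, double value) {
    if (!cap.set(prop, value))
        return false; // the backend rejected the call outright
    // read the value back; if the driver clamped or ignored it, this differs
    return fabs(cap.get(prop) - value) < 1e-6;
}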
The videoInput code is now merged into OpenCV in the file cap_dshow.cpp, but I can't find a header file that declares the videoInput class, and simply reusing my old code doesn't work. So there is a .cpp file that contains all the functions I need, and that I know did the job for me a while back, but that I can't access now. Any clues on how to do that? Has anyone accessed and changed camera parameters in OpenCV2.4.11 using the videoInput/DirectShow interface?
EDIT: Seems this integration actually works now, unlike in 2.3.1. No direct interaction with videoInput appears to be needed. However, it would still be nice to have it for the aforementioned reasons.
There's also the funny problem that using e.g.
VideoCapture cam(0)
addresses exactly the same camera as
VideoCapture cam(1)
or
VideoCapture cam(any integer value)
which seems odd to me and hints in the same direction - that CV's VideoCapture does not work properly for me. A similar problem is described here but I also tried the code with a Sleep(1000) after opening the capture - without success.
EDIT: This is also working correctly now. I get my webcam with (0) and an error with (1), which is absolutely OK.
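For completeness, a small probe loop to check which device indices actually open; with a single webcam, only index 0 should succeed:

// probe sketch: report which camera indices can be opened
for (int i = 0; i < 5; i++) {
    VideoCapture probe(i);
    std::cout << "Device " << i << ": "
              << (probe.isOpened() ? "opened" : "failed") << "\n";
}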

Related

OpenCV destroyWindow() not working with multiple windows

I am using the OpenCV library to open and display multiple images, with one window created per image. To display multiple windows at the same time, I call waitKey() only after the last image.
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
As can be seen from the code, my goal is to give the user 1 s to press any key; otherwise I want to destroy one of the windows (for the purpose of this question it can be either one). I want to achieve this using OpenCV's destroyWindow() function.
Below my entire code can be seen:
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window2");
The goal of this snippet is that only "Window1" remains displayed if 1 s goes by without the user pressing any key.
However, this does not happen: neither of the windows is destroyed.
I have tested the following code snippet, which results in both windows being closed:
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window1");
cv::destroyWindow("Window2");
The same happens when I use the destroyAllWindows() function (which makes sense).
My question now is, why can't I destroy only one of the windows?
Additional info:
Using Ubuntu 20.04.
OpenCV version is 4.2.
Working in C++
Changing the order of which window I want to destroy changes nothing.
Tried to replicate it; I'm facing this issue in Python as well on Ubuntu. If you are still stuck, you can try a stopgap solution: re-show only the window you want to keep, depending on whether the user pressed a key, by storing the result of waitKey() in a variable. If it is -1, no key was pressed.
I have provided a sample solution in Python, which you shouldn't have any difficulty converting to C++.
import cv2

img1 = cv2.imread('img1.png')
img2 = cv2.imread('img2.png')
cv2.namedWindow('img1')
cv2.imshow('img1', img1)
cv2.namedWindow('img2')
cv2.imshow('img2', img2)
key = cv2.waitKey(5000)
if key == -1:
    # no key pressed: destroy everything and re-show only img1
    cv2.destroyAllWindows()
    cv2.imshow('img1', img1)
    cv2.waitKey(0)
else:
    # a key was pressed: destroy both, or keep showing both with cv2.waitKey(0)
    cv2.destroyAllWindows()
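Converted to C++, the same stopgap would look roughly like this (an untested sketch; myImage1/myImage2 are the cv::Mat images from the question):

cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
int key = cv::waitKey(5000);
if (key == -1) {
    // no key pressed: tear everything down and re-show only Window1
    cv::destroyAllWindows();
    cv::imshow("Window1", myImage1);
    cv::waitKey(0);
} else {
    // a key was pressed: destroy both, or keep showing both
    cv::destroyAllWindows();
}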
I have reached a solution by adding startWindowThread() before creating each of the windows.
An important thing to note is that I built OpenCV with the GTK option, so my solution is tested only on GTK, not on other backends.
startWindowThread() is used only with GTK, as noted here: https://github.com/opencv/opencv/issues/7562 - for other backends the function is empty.
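Applied to the code from the question, the fix looks like this (again, tested only on a GTK build):

cv::startWindowThread();
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::startWindowThread();
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window2"); // now only Window1 remains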

Performance issues on different machines

I wrote a C++ program that uses OpenCV. I compiled it in Visual Studio 2010 in release mode as a Win32 application; OpenCV is dynamically linked, so I just copied the needed DLLs to the root folder of the program (so I can run it on other computers). The program tracks people in a video and works fine when I run it on my computer. However, when I run it on other machines it works, but roughly 65% slower. At first I thought the machine itself was slow, but then I wrote another small program (code below) whose only purpose is to read a video file and play it at approximately the original speed. Unfortunately I have the same issue with it as well: it runs fine on my computer, but on other computers it slows down by about 65%. I am new to C++/OpenCV and have no real idea why this is happening, so I hope someone can enlighten me. Was the dynamic linking a bad idea? Should I compile OpenCV as a static library (which I don't yet know how to do and would appreciate any help with)? Or is it something else?
#include "opencv\cv.h"
#include "opencv\highgui.h"
int main(){
cv::VideoCapture vidBuffer;
if(!vidBuffer.open("res/test.mp4")){
std::cerr << "Cant find \"res/test.mp4\"\n";
system("pause");
return -1;
}
int fps = vidBuffer.get(CV_CAP_PROP_FPS);
int frameTime = 1000/fps;
//video loop
cv::Mat frame;
for(char c=-1;;c=cv::waitKey(frameTime)){
if(!vidBuffer.read(frame)||c==27)
break;
cv::imshow("Vidoe test", frame);
}
//loop emd
vidBuffer.release();
return 0;
}
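Independent of the linking question, one thing worth measuring on the slow machines is the real per-frame time: waitKey(frameTime) does not account for decode and display time, so any per-frame overhead slows playback directly. A diagnostic sketch (needs <algorithm> for std::max):

// measure real per-frame time and subtract it from the delay
double t0 = (double)cv::getTickCount();
while (vidBuffer.read(frame)) {
    cv::imshow("Video test", frame);
    double elapsedMs = ((double)cv::getTickCount() - t0) / cv::getTickFrequency() * 1000.0;
    int delay = std::max(1, frameTime - (int)elapsedMs); // always wait at least 1 ms
    if (cv::waitKey(delay) == 27)
        break;
    t0 = (double)cv::getTickCount();
}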

SDL_RenderCopy() has strange behavior on Raspberry Pi

This is driving me up the wall..
I've got a very simple SDL2 program.
It has an array of 3 SDL_Texture pointers.
These textures are filled as follows:
SDL_Texture *myarray[15];
SDL_Surface *surface;
for (int i = 0; i < 3; i++)
{
    char filename[] = "X.bmp";
    filename[0] = i + '0';
    surface = SDL_LoadBMP(filename);
    myarray[i] = SDL_CreateTextureFromSurface(myrenderer, surface);
    SDL_FreeSurface(surface);
}
This works, no errors.
In the main loop (which is just a standard event loop waiting for SDL_QUIT, keystrokes and a user-event which a SDL_Timer puts in the event queue every second) I just do (for the timer triggered event):
idx = (idx + 1) % 3; // idx is a global var, initially 0
SDL_RenderClear(myrenderer);
SDL_RenderCopy(myrenderer, myarray[idx], NULL, NULL);
SDL_RenderPresent(myrenderer);
This works fine for 0.bmp and 1.bmp, but the 3rd image (2.bmp) simply shows as a black field.
This is structural.
If I alternate the first 2 images they are both fine.
If I alternate the 2nd and 3rd image the 3rd image doesn't show.
If I use more than 3 images then 3 and upwards show as black.
Loading order doesn't matter. It starts going wrong with the 3rd image loaded from disk.
All images are properly formatted BMPs.
I even saved 2.bmp back to disk under a different name by using SDL_SaveBMP() after it was loaded to make sure it got loaded in memory OK. The new file is bit for bit identical to the original.
This program, without modifications and the same bmp files, works fine on OSX (XCode5) and Windows (VC++ 2012 Express).
The problem only shows on the Raspberry Pi.
I have placed explicit error checks on every call that can leave a result/error-code (not shown in the samples above for brevity) but all of them show "no error".
I have used the latest stable source set of www.libsdl.org and compiled as instructed (configure, make, make install, etc.).
Anybody got any idea what could be going on ?
P.S.
Keyboard input doesn't seem to work either on my Pi, but I haven't delved into that yet.
Answering my own question, as I finally figured it out...
I finally went back to the README-raspberrypi.txt that came with the SDL2 sources.
I didn't read it carefully enough the first time around...
Problem 1: I'm running on a full-HD display. The Pi's default GPU memory is 64MB, which is not enough for large displays and double-buffering. As suggested in the README, I increased this to 128MB, and that solved the black image problem.
Problem 2: Text input wasn't working because my user account was not in the input group. I had added the default "pi" account to the input group initially, but when I later started using another account I forgot to add that user to the group.
In short: Caught by my own (too) quick skimming of the documentation.
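For reference, on a standard Raspbian setup the two fixes boil down to the following (the username is a placeholder):

# in /boot/config.txt: raise the GPU memory split
gpu_mem=128

# add the account running the SDL app to the input group, then log in again
sudo usermod -a -G input myusername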

Function call causes C++ program to freeze unless stepped-through in debugger

I have this short C++ program which takes snapshot images from a camera in a loop and displays them:
void GenericPGRTest::execute()
{
    // connect camera
    Camera *cam = Camera::Connect();
    // query resolution and create view window
    const Resolution res = cam->GetResolution();
    cv::namedWindow("Camera"); // name must match the one passed to imshow
    int c = 0;
    // keep taking snapshots until escape hit
    while (c != 27)
    {
        const uchar *buf = cam->SnapshotMono();
        // create image from buffer and display it
        cv::Mat image(res.height, res.width, CV_8UC1, (void*)buf);
        cv::imshow("Camera", image);
        c = cv::waitKey(1000);
    }
}
This uses a class (Camera) for camera control I created using the Point Grey SDK and functions from the OpenCV library to display the images. I'm not necessarily looking for answers relating to the usage of either of these libraries, but rather some insight on how to debug a bizarre problem in general. The problem is that the application freezes (not crashes) on the cam->SnapshotMono() line. Of course, I ran through the function with a debugger. Here are the contents:
const uchar* Camera::SnapshotMono()
{
    cam_.StartCapture();
    // get a frame
    Image image;
    cam_.RetrieveBuffer(&image);
    cam_.StopCapture();
    grey_buffer_.DeepCopy(&image);
    return grey_buffer_.GetData();
}
Now, every time I step through the function in the debugger, everything works OK. But the first time I do a "step over" instead of "step into" SnapshotMono(), bam, the program freezes. When I pause it at that time, I notice that it's stuck inside SnapshotMono() at the RetrieveBuffer() line. I know it's a blocking call so it theoretically can freeze (no idea why but it's possible), but why does it block when running normally and not when being debugged? This is one of the weirdest kinds of behaviour under debugging I've seen so far. Any idea why this could happen?
For those familiar with FlyCapture, the code above doesn't break as is, but rather only when I use StartCapture() in callback mode, then terminate it with StopCapture() before it.
Compiled with MSVC2010, OpenCV 2.4.5 and PGR FlyCapture 2.4R10.
Wild guess... but could it be that StartCapture() already starts the process that ends up with the buffer in image, and if you step through, you leave it some time until it gets to RetrieveBuffer()? That's not the case if you run it all at once...
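Along those lines, a common restructuring is to start the capture once instead of per snapshot - a sketch only, using the method names from the question (BeginStreaming/EndStreaming are hypothetical helpers; whether this fits depends on the rest of the Camera class):

// sketch: start capturing once, grab frames on demand, stop once at the end
void Camera::BeginStreaming()
{
    cam_.StartCapture();            // start the transfer once
}

const uchar* Camera::SnapshotMono()
{
    Image image;
    cam_.RetrieveBuffer(&image);    // just grab the next frame
    grey_buffer_.DeepCopy(&image);
    return grey_buffer_.GetData();
}

void Camera::EndStreaming()
{
    cam_.StopCapture();             // stop the transfer once
}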

Using cvQueryFrame and boost::thread together

I need to call cvQueryFrame (to capture a frame from a webcam with OpenCV) inside a thread created with Boost. Here is a little example code:
#include <iostream>
#include <boost/thread.hpp>
#include <opencv/cv.h>
#include <opencv/highgui.h>

void testCVfunc() {
    CvCapture *capture = cvCreateCameraCapture(CV_CAP_ANY);
    if (!capture) {
        exit(1);
    }
    cvNamedWindow("testCV", 1);
    IplImage *frame;
    while ((frame = cvQueryFrame(capture)) != NULL) {
        cvShowImage("testCV", frame);
        cvWaitKey(1);
    }
    // frames returned by cvQueryFrame are owned by the capture
    // and must not be released with cvReleaseImage
    cvReleaseCapture(&capture);
}

int main() {
    // Method 1: without boost::thread, works fine
    testCVfunc();
    // Method 2: with boost::thread, shows a black screen
    char entree;
    boost::thread threadTestCV = boost::thread(&testCVfunc);
    std::cin >> entree;
}
As the comments say, testCVfunc does its job if I don't call it from a boost::thread, but I get a black screen if I use boost::thread.
I don't get the problem, maybe someone does?
Thank you for your help.
I've seen some problems when OpenCV is being executed from a secondary thread and it's difficult to pinpoint the origin of the problem when the behavior is not consistent on all platforms.
For instance, your source code worked perfectly with OpenCV 2.3.0 on Mac OS X 10.7.2. I don't know what platform you are using, but the fact that it worked on my computer indicates that OpenCV has some implementation issues with the platform you are using.
Now, if you can't move OpenCV's code to the primary thread, then you might want to start thinking about creating a 2nd program to handle all OpenCV related tasks, and use some sort of IPC mechanism to allow this program to communicate with your main application.
I solved the problem by calling
cvCreateCameraCapture(CV_CAP_ANY);
in the main thread, even though it doesn't really answer the "why is this not working?" question.
Hope this can help someone else.
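In code, that solution amounts to creating the capture in the main thread and handing the pointer to the worker, roughly like this (a sketch derived from the code in the question; includes as above):

// sketch: create the capture in the main thread, display in the worker
void showLoop(CvCapture *capture) {
    cvNamedWindow("testCV", 1);
    IplImage *frame;
    while ((frame = cvQueryFrame(capture)) != NULL) {
        cvShowImage("testCV", frame);
        cvWaitKey(1);
    }
}

int main() {
    CvCapture *capture = cvCreateCameraCapture(CV_CAP_ANY); // main thread
    if (!capture)
        return 1;
    boost::thread worker(showLoop, capture); // only the display loop runs here
    char entree;
    std::cin >> entree; // keep main alive until the user types something
    // note: a clean shutdown would signal showLoop to exit and join the thread
    return 0;
}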
Try calling cv::startWindowThread(); in the main app and then creating a window within your thread. This worked for me.