OpenCV destroyWindow() not working with multiple windows - c++

I am using the OpenCV library to open and display multiple images, each in its own window. To display all the windows at the same time, I call waitKey() only after the last image.
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
As can be seen from the code, my goal is to give the user 1 s to press any key; if no key is pressed, I want to destroy one of the windows (for the purposes of this question it can be either one). I want to achieve this with OpenCV's destroyWindow() function.
My entire code can be seen below:
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window2");
The goal of this code snippet is that only "Window1" remains displayed if 1 s goes by without the user pressing any key.
However, this does not happen. The end result is that neither of the windows is destroyed.
I have tested the following code snippet, which results in both windows being closed:
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window1");
cv::destroyWindow("Window2");
The same happens when I use the destroyAllWindows() function (which makes sense).
My question now is, why can't I destroy only one of the windows?
Additional info:
Using Ubuntu 20.04.
OpenCV version is 4.2.
Working in C++
Changing which window I try to destroy makes no difference.

I tried to replicate this and I am facing the same issue in Python on Ubuntu as well. If you are still stuck, a stopgap solution is to store the result of waitKey() in a variable so you know whether the user pressed a key: if it is -1, no key was pressed, and you can destroy everything and re-show only the window you want to keep.
I have provided a sample solution in Python, which you shouldn't have any difficulty converting to C++ (a rough C++ sketch follows after the Python code).
import cv2
img1 = cv2.imread('img1.png')
img2 = cv2.imread('img2.png')
cv2.namedWindow('img1')
cv2.imshow('img1', img1)
cv2.namedWindow('img2')
cv2.imshow('img2', img2)
key = cv2.waitKey(5000)
if key == -1:
    # no key pressed: destroy everything, then re-show only the window to keep
    cv2.destroyAllWindows()
    cv2.imshow('img1', img1)
    cv2.waitKey(0)
else:
    # a key was pressed: destroy both, or keep showing both using cv2.waitKey(0)
    cv2.destroyAllWindows()
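For reference, a rough C++ translation of the same idea might look like this (just a sketch; the image file names are placeholders):
#include <opencv2/opencv.hpp>

int main() {
    // Placeholder file names; use your own images here.
    cv::Mat img1 = cv::imread("img1.png");
    cv::Mat img2 = cv::imread("img2.png");

    cv::namedWindow("Window1");
    cv::imshow("Window1", img1);
    cv::namedWindow("Window2");
    cv::imshow("Window2", img2);

    // waitKey() returns -1 if no key was pressed before the timeout.
    int key = cv::waitKey(5000);
    if (key == -1) {
        // No key pressed: destroy everything, then re-show only the window to keep.
        cv::destroyAllWindows();
        cv::imshow("Window1", img1);
        cv::waitKey(0);
    } else {
        // A key was pressed: destroy both, or keep showing both with cv::waitKey(0).
        cv::destroyAllWindows();
    }
    return 0;
}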

I have reached a solution by calling startWindowThread() before creating each of the windows.
An important thing to note is that I have built OpenCV with the GTK option, so my solution is tested only on GTK and not on other backends.
startWindowThread() is only effective with GTK, as noted here: https://github.com/opencv/opencv/issues/7562 - for the other backends the function is empty.
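Applied to the snippet from the question, the change looks roughly like this (a sketch; as noted above, only tested with the GTK backend):
cv::startWindowThread();
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);

cv::startWindowThread();
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);

cv::waitKey(1000);
// With the window thread running, this now closes only "Window2".
cv::destroyWindow("Window2");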

Related

Qt program using a MATLAB DLL: initializes successfully on a local disk, but fails on my USB drive

I am writing a Qt program that uses a MATLAB DLL. The MATLAB function is very simple, but I encountered an exception caused by an initialization problem, so I changed my part of the code as below.
void DicomViewer::on_pushButton_clicked()
{
    if (libMyAddInitialize()) {
        mwArray dicomArray2 = Mat2mwArray(dicomMat1);
        //c_matlab(1, dicomArray2, Mat2mwArray(dicomMat1));
        mwArry2Mat(dicomArray2).copyTo(dicomMat2);
        dicomImgShow(TEMPDICOM2);
        libMyAddTerminate();
    }
}
This part of the code provides a button to change my cv::Mat data. I am using two functions, Mat2mwArray and mwArray2Mat, to convert between Mat and mwArray. The function dicomImgShow shows the Mat in my program. I don't think the code itself is the key problem.
My key problem is that the button function only works from my local disks (C:/ and D:/). When I put the program on my USB drive it fails. I also tried running it in admin mode, but it failed again. So I wonder what is wrong with my program.
Thanks to anyone who comes to help.

C++: OpenCV2.4.11(!) access to webcam parameters

This is a direct follow-up to the last question I asked, aptly named "C++: OpenCV2.3.1(!) access to webcam parameters", where I was told to install OpenCV2.4.11 instead (OpenCV3.0 did not work)... which I did. And yes, most of this text is an exact copy & paste of that thread, since my problem hasn't actually vanished...
Again, I've searched here, on other forums (Google, OpenCV etc), looked at the code of the videoInput library, the different header files and especially OpenCV's highgui_c.h and still seem to be unable to find an answer to this very simple question:
How do I change exposure and gain (or, to be general, any webcam property) in my Logitech C310 webcam with OpenCV2.4.11 the same way I was able to with OpenCV2.1.0? (using Win7 64-bit, Visual Studio 10)
EDIT: This has been solved. I do not know how, but when I tested my code this morning it was able to report and set the exposure using VideoCapture and the set/get method.
There's the nice and easy VideoCapture get and set method, I know, similar to videoInput's [Set/Get]VideoSetting[Camera/Filter] functions. Here's my short example in OpenCV2.4.11 that doesn't work:
EDIT: It does work now. What I don't understand is that the values of several properties are reported as -8.58993E+008 (namely hue, monochrome, gamma, temperature, zoom, focus, pan, tilt, roll and iris) and that property 6 (fourcc) is -4.66163E+008. I know I don't have these features on my webcam, but all other unimplemented features report -1.
int __stdcall WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, char* CmdArgs, int CmdShow) {
    int device0 = 0;
    VideoCapture VC(device0);
    if (!VC.isOpened()) // check if we succeeded
        return -1;
    ostringstream oss;
    double CamProp;
    for (int i = -4; i < 27; i++) {
        CamProp = VC.get(i);
        Sleep(5);
        oss << "Item " << i << ": " << CamProp << "\n";
    }
    MessageBox(NULL, oss.str().c_str(), "Webcam Values", MB_OK);
    return 0;
}
It compiles, it runs, it accesses the webcam alright (and even shows a picture with imshow if I add it to the code) but it only opens a nice window saying this:
Item -4: 0
Item -3: 0
Item -2: 0
...
Item 2: 0
Item 3: 640
Item 4: 480
Item 5: 0
...
Item 25: 0
Item 26: 0
EDIT: See above, this works now. I get values for all supported parameters like exposure, gain, sharpness, brightness, contrast and so on. Perhaps I was still linking to the 2.3.1 libraries or whatever.
The point is: this was all perfectly settable with this camera under OpenCV 2.1.0 using videoInput. I had a running application doing its own lighting instead of using the Logitech functions (RightLight, Auto Exposure, Auto Whitebalance). Setting and getting the parameters has been integrated into OpenCV's highgui for quite a while now, but with a strongly reduced feature list (no querying of parameter ranges, Min/Max/Stepwidth..., no setting of auto exposure, RightLight and similar features), and for some reason it's incompatible with my Logitech webcam. I can read the resolution but nothing else.
EDIT: I still miss the Min, Max, Step, Auto/Manual features of videoInput. I can set a value but I don't know whether it's allowed.
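For reference, the get/set calls that now work are roughly of this form (a sketch only; the property constants come from highgui_c.h, which highgui.hpp pulls in on 2.4.x, and the values below are hypothetical since valid ranges are driver-dependent and cannot be queried through this interface):
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    // Read the current values.
    std::cout << "Exposure: " << cap.get(CV_CAP_PROP_EXPOSURE) << "\n";
    std::cout << "Gain:     " << cap.get(CV_CAP_PROP_GAIN) << "\n";

    // Set new values. The numbers are hypothetical; whether they are accepted
    // depends on the driver, and the allowed range cannot be queried here.
    cap.set(CV_CAP_PROP_EXPOSURE, -5);
    cap.set(CV_CAP_PROP_GAIN, 32);
    return 0;
}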
The videoInput code is now merged into OpenCV's code in the file cap_dshow.cpp, but I can't find a header file that declares the videoInput class, and simply using my old code doesn't work. So I have a cpp file which contains all the functions I need, and which I know did the job for me a while back, but which I can't access now. Any clues on how to do that? Has anyone accessed and changed camera parameters in OpenCV2.4.11 using the videoInput/DirectShow interface?
EDIT: Seems this has happened now in a working way, unlike 2.3.1. No direct interaction with videoInput seems to be needed. However it would be nice to have it for the aforementioned reasons.
There's also the funny problem that using e.g.
VideoCapture cam(0)
addresses exactly the same camera as
VideoCapture cam(1)
or
VideoCapture cam(any integer value)
which seems odd to me and hints in the same direction - that CV's VideoCapture does not work properly for me. A similar problem is described here but I also tried the code with a Sleep(1000) after opening the capture - without success.
EDIT: This is also working correctly now. I get my webcam with (0) and an error with (1), which is absolutely OK.

wxWidgets wxPen size changes unexpectedly

I've used the following code to draw on an image using a wxMemoryDC.
To do so I used a wxPen and changed its settings as in the following code. The code compiles and runs perfectly on Windows. On Ubuntu it draws the lines, but the pen size stays correct only for a very short time and then becomes very small (as shown in the image). It is not an error in the m_penSize variable, because it always prints the correct value. Why does this behave so strangely on Ubuntu when it works correctly on Windows?
(m_graphics is the wxMemoryDC here.)
if (x < m_backgroundImage.GetWidth() && y < m_backgroundImage.GetHeight()) {
    m_graphics.SelectObject(m_maskImage);
    wxPen* pen;
    if (m_isDrawing) {
        pen = wxThePenList->FindOrCreatePen(*wxRED, m_penSize);
        printf("Pen size is %d", m_penSize);
    }
    else {
        pen = wxThePenList->FindOrCreatePen(*wxBLACK, m_penSize);
    }
    if (m_pentype != Circle) {
        pen->SetCap(wxCAP_PROJECTING);
    }
    m_graphics.SetPen(*pen);
    m_graphics.DrawLine(m_lastX, m_lastY, x, y);
    m_graphics.SelectObject(wxNullBitmap);
}
On Windows it is shown correctly.
On Linux the pen size changes unexpectedly.
Your help is greatly appreciated.
If the same code behaves differently in wxMSW and wxGTK, then it's probably a bug in wxWidgets itself. However, to fix it, it needs to be reproduced in some simple-to-test way, ideally by making the smallest possible change to the wxWidgets drawing sample and opening a ticket with this change attached as a patch.
To simplify the code as much as possible, I'd recommend:
Getting rid of wxThePenList and just creating the pen directly (a simplified sketch follows below). It's unlikely that the bug is here, but who knows.
Checking whether it's due to the SetCap() call; this is the most likely candidate IMHO.
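For the first point, the drawing part could be reduced to something like this (a sketch only, reusing the member names from the question):
// Construct the pen directly instead of going through wxThePenList.
wxPen pen(m_isDrawing ? *wxRED : *wxBLACK, m_penSize);
if (m_pentype != Circle) {
    pen.SetCap(wxCAP_PROJECTING);   // try removing this to rule SetCap() in or out
}
m_graphics.SetPen(pen);
m_graphics.DrawLine(m_lastX, m_lastY, x, y);
m_graphics.SelectObject(wxNullBitmap);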

Why isn't QCamera::CaptureVideo supported?

I'm trying to create an application which uses the camera API, based on an example from Qt.
Problem:
The following call to check whether video capture is supported returns false:
camera->isCaptureModeSupported(QCamera::CaptureVideo) //returns false.
If I try to ignore it and start recording, the recording does not start and I get no error messages (also, QMediaRecorder::errorString() and QCamera::errorString() return empty strings).
The image from the camera is shown correctly in the QCameraViewfinder.
This is basically a known bug on Windows:
https://bugreports.qt.io/browse/QTBUG-30541
https://doc-snapshots.qt.io/qt5-5.5/qtmultimedia-windows.html
It should work on other platforms, though.
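On a platform where it is supported, the check and the recording setup would look roughly like this (a sketch based on the question's own API calls; the helper function name is hypothetical):
#include <QCamera>
#include <QMediaRecorder>
#include <QDebug>

// Sketch: only switch to video capture if the backend reports support for it.
void setupRecording(QCamera *camera)
{
    if (camera->isCaptureModeSupported(QCamera::CaptureVideo)) {
        camera->setCaptureMode(QCamera::CaptureVideo);
        QMediaRecorder *recorder = new QMediaRecorder(camera);
        camera->start();
        recorder->record();
    } else {
        // On Windows the backend reports no support (see QTBUG-30541 above).
        qDebug() << "Video capture is not supported by this camera backend";
    }
}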

SDL_RenderCopy() has strange behavior on Raspberry PI

This is driving me up the wall..
I've got a very simple SDL2 program.
It has an array of 3 SDL_Texture pointers.
These textures are filled as follows:
SDL_Texture *myarray[15];
SDL_Surface *surface;
for (int i = 0; i < 3; i++)
{
    char filename[] = "X.bmp";
    filename[0] = i + '0';
    surface = SDL_LoadBMP(filename);
    myarray[i] = SDL_CreateTextureFromSurface(myrenderer, surface);
    SDL_FreeSurface(surface);
}
This works, no errors.
In the main loop (which is just a standard event loop waiting for SDL_QUIT, keystrokes and a user event which an SDL_Timer puts in the event queue every second) I just do (for the timer-triggered event):
idx = (idx + 1) % 3;  // idx is a global var, initially 0
SDL_RenderClear(myrenderer);
SDL_RenderCopy(myrenderer, myarray[idx], NULL, NULL);
SDL_RenderPresent(myrenderer);
This works fine for 0.bmp and 1.bmp, but the 3rd image (2.bmp) simply shows as a black field.
This is structural.
If I alternate the first 2 images they are both fine.
If I alternate the 2nd and 3rd image the 3rd image doesn't show.
If I use more than 3 images then 3 and upwards show as black.
Loading order doesn't matter. It starts going wrong with the 3rd image loaded from disk.
All images are properly formatted BMPs.
I even saved 2.bmp back to disk under a different name by using SDL_SaveBMP() after it was loaded to make sure it got loaded in memory OK. The new file is bit for bit identical to the original.
This program, without modifications and the same bmp files, works fine on OSX (XCode5) and Windows (VC++ 2012 Express).
The problem only shows on the Raspberry PI.
I have placed explicit error checks on every call that can leave a result/error-code (not shown in the samples above for brevity) but all of them show "no error".
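For illustration, the checks are of this form (a simplified sketch, not the literal code from the program):
// SDL2 reports errors via return values and SDL_GetError(); fprintf needs <stdio.h>.
surface = SDL_LoadBMP(filename);
if (surface == NULL) {
    fprintf(stderr, "SDL_LoadBMP(%s) failed: %s\n", filename, SDL_GetError());
}
myarray[i] = SDL_CreateTextureFromSurface(myrenderer, surface);
if (myarray[i] == NULL) {
    fprintf(stderr, "SDL_CreateTextureFromSurface failed: %s\n", SDL_GetError());
}
if (SDL_RenderCopy(myrenderer, myarray[idx], NULL, NULL) != 0) {
    fprintf(stderr, "SDL_RenderCopy failed: %s\n", SDL_GetError());
}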
I have used the latest stable source set from www.libsdl.org and compiled as instructed (configure, make, make install, etc.).
Anybody got any idea what could be going on?
P.S.
Keyboard input doesn't seem to work either on my PI, but I haven't delved into that yet.
Answering my own question, as I finally figured it out...
I finally went back to the README-raspberrypi.txt that came with the SDL2 sources.
I didn't read it carefully enough the first time around...
Problem 1: I'm running on a full-HD display. The Pi's default GPU memory is 64 MB, which is not enough for large displays and double buffering. As suggested in the README, I increased this to 128 MB, and this solved the black-image problem.
Problem 2: Text input wasn't working because my user account was not in the input group. I had added the default "pi" account to the input group initially, but when I later started using another account I forgot to add that user to the group.
In short: Caught by my own (too) quick skimming of the documentation.