Matlab - Closing the vision.VideoPlayer handle - computer-vision

First of all, excuse my bad English; I am working on it.
I am working on a computer vision application that uses a webcam. The main loop is like this:
while true
get frame
process frame
show frame in figure
end while
And I want something like this:
while figure is open
get frame
process frame
show frame in figure
end while
I used to use figure and imshow to plot the frame, and I used handles and callbacks to know when the figure was closed by the user.
fig = figure;
set(fig,'KeyPressFcn','exit = true;');
set(fig,'CloseRequestFcn', 'exit = true; delete(gcf)');
But now I am using vision.VideoPlayer from the Computer Vision System Toolbox because it is faster, and I cannot find a way to do something similar. I don't want to build a GUI.
The code is this (from this other thread):
vid = videoinput('winvideo', 1, 'RGB24_320x240'); %select input device
hvpc = vision.VideoPlayer; %create video player object
src = getselectedsource(vid);
vid.FramesPerTrigger =1;
vid.TriggerRepeat = Inf;
vid.ReturnedColorspace = 'rgb';
src.FrameRate = '30';
start(vid)
%start main loop for image acquisition
for t = 1:500
    imgO = getdata(vid, 1, 'uint8'); % get image from camera
    hvpc.step(imgO);                 % see current image in player
end
Any ideas?

You can find the figure handle of the vision.VideoPlayer object by turning on "ShowHiddenHandles".
set(0, 'ShowHiddenHandles', 'on') % Revert this back to off after you get the handle
After this, gcf can give you the handle. But it is risky to change callbacks on hidden handles: the VideoPlayer object may already rely on many of its callbacks to function properly. Instead, you can check the handle's validity and visibility to detect whether the player is still open.
h = gcf;                             % grab the player's figure handle
set(0, 'ShowHiddenHandles', 'off')   % revert to the default once you have the handle
...
ishandle(h)                          % returns false once the figure has been deleted
get(h, 'Visible')                    % returns 'off' if the figure is not visible

Related

OpenCV destroyWindow() not working with multiple windows

I am using the OpenCV library to open and display multiple images, creating a separate window for each image. To display all windows at the same time, I call waitKey() only after the last image.
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
As can be seen from the code, my goal is to give the user 1 s to press any key; otherwise I want to destroy one of the windows (for the purposes of this question it can be either one). I want to achieve this with OpenCV's destroyWindow() function.
Below my entire code can be seen:
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window2");
The goal of this snippet is that only "Window1" remains displayed if 1 s goes by without the user pressing any key.
However, this does not happen: neither of the windows is destroyed.
I have tested the following code snippet, which results in both windows being closed:
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window1");
cv::destroyWindow("Window2");
The same happens when I use the destroyAllWindows() function (which makes sense).
My question now is, why can't I destroy only one of the windows?
Additional info:
Using Ubuntu 20.04.
OpenCV version is 4.2.
Working in C++
Changing the order of which window I want to destroy changes nothing.
I tried to replicate this and am facing the same issue in Python on Ubuntu. If you are still stuck, a stopgap solution is to store the result of waitKey() in a variable to tell whether the user pressed a key (it returns -1 if no key was pressed) and, if not, destroy everything and re-show only the window you want to keep.
I have provided a sample solution in Python, which you should have no difficulty converting to C++.
import cv2

img1 = cv2.imread('img1.png')
img2 = cv2.imread('img2.png')
cv2.namedWindow('img1')
cv2.imshow('img1', img1)
cv2.namedWindow('img2')
cv2.imshow('img2', img2)
key = cv2.waitKey(5000)
if key == -1:
    # no key pressed: destroy both windows, then re-show only the one to keep
    cv2.destroyAllWindows()
    cv2.imshow('img1', img1)
    cv2.waitKey(0)
else:
    # a key was pressed: destroy both, or keep showing both with cv2.waitKey(0)
    cv2.destroyAllWindows()
I reached a solution by adding startWindowThread() before creating each of the windows.
An important thing to note is that I built OpenCV with the GTK option, so my solution is tested only on GTK, not on other backends.
startWindowThread() only has an effect with GTK, as noted here: https://github.com/opencv/opencv/issues/7562 - for other backends the function is empty.
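A minimal C++ sketch of that change, applied to the snippet from the question (it assumes the same myImage1 / myImage2 variables and a GTK-backed OpenCV build):
#include <opencv2/highgui.hpp>
cv::startWindowThread();      // let GTK process window events for this window
cv::namedWindow("Window1");
cv::imshow("Window1", myImage1);
cv::startWindowThread();
cv::namedWindow("Window2");
cv::imshow("Window2", myImage2);
cv::waitKey(1000);
cv::destroyWindow("Window2"); // with GTK, only "Window2" should now be destroyed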

Jumping to a single frame in Node created in Cocos Studio

I have a node named "Fruit" which contains 4 single frames, one for each fruit. It also contains a shadow, which should be the same for all fruits. I'm creating this node like this:
auto newFruit = CSLoader::createNode("Fruit.csb");
auto fruitAction = CSLoader::createTimeline("Fruit.csb");
newFruit->runAction(fruitAction);
Now when I'm creating this fruit I want to set a random frame:
fruitAction->gotoFrameAndPause(r + 1);
r is from 0 to 3.
However, it doesn't work: the frame doesn't change at all. When I debug, I can see the correct frame number.
So I tried a different solution: I made four 1-frame animations named "a1", "a2", "a3" and "a4".
Then:
fruitAction->play("a" + to_str(r + 1), false);
Now I sometimes get the right frame and sometimes not; even a constant r gives me different results.
The only solution I've found is to make every animation 2 frames long (with an offset of 1), so "a1": 0->1, "a2": 2->3, "a3": 4->5, "a4": 6->7, but this is too complicated to be worth using. It also sometimes blinks the first frame for a frame or two (which looks very bad).
Is this a bug, or am I doing something wrong?
After digging into the cocos2d-x code, this seems more troublesome than it looks.
There are 3 problems in the current implementation:
1) If you never play your animation, its playing property stays false, so ActionTimeline::step won't even get past the first if (which checks whether it is playing) and it will never render another frame.
2) When you use gotoFrameAndPause, _endFrame is never set (it's 0 by default), so setCurrentFrame will always fail because of this if:
if (frameIndex >= _startFrame && frameIndex <= _endFrame)
3) When you use play, _startFrame and _endFrame are set to span just that one particular animation, so you won't be able to jump to a frame outside it.
I've made a little workaround and put it in 3 macros:
#define CC_INIT_ACTION(__ACTION__) __ACTION__->gotoFrameAndPlay(0, __ACTION__->getDuration() + 1, 0, false); __ACTION__->pause()
#define CC_JUMP_ACTION_TO_FRAME(__ACTION__, __FRAME__) __ACTION__->setCurrentFrame(__FRAME__); __ACTION__->resume(); __ACTION__->step(0.0001f); __ACTION__->pause()
#define CC_JUMP_ACTION_TO_FRAME_BY_NAME(__ACTION__, __NAME__) int __FRAME__ = __ACTION__->getAnimationInfo(__NAME__).startIndex; CC_JUMP_ACTION_TO_FRAME(__ACTION__, __FRAME__)
CC_INIT_ACTION makes it possible to jump to any frame.
CC_JUMP_ACTION_TO_FRAME jumps to a particular frame number.
CC_JUMP_ACTION_TO_FRAME_BY_NAME jumps to the first frame of a particular animation.
The step(0.0001f) in CC_JUMP_ACTION_TO_FRAME is also necessary, because step sometimes calculates the current frame incorrectly (maybe a rounding problem, I'm not sure).
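For reference, a minimal usage sketch that combines these macros with the loading code from the question (r is assumed to be the random index from 0 to 3 mentioned there):
auto newFruit = CSLoader::createNode("Fruit.csb");
auto fruitAction = CSLoader::createTimeline("Fruit.csb");
newFruit->runAction(fruitAction);
CC_INIT_ACTION(fruitAction);                  // make every frame of the timeline reachable
CC_JUMP_ACTION_TO_FRAME(fruitAction, r + 1);  // jump straight to the chosen fruit's frame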

How to fully/correctly exit a Qt program from the main form?

I'm writing a Qt program (using Qt 5.4) that reads frames from a webcam on a QTimer rather than in a separate thread (interval set to 20 ms; of course it takes much longer than 1/50 of a second to read a frame from the webcam and process it, so I'd estimate the actual frame rate is perhaps 20 fps). Anyhow, the function that runs when the timer fires is a slot and looks like this:
///////////////////////////////////////////////////////////////////////////////////////////////////
void frmMain::processFrameAndUpdateGUI() {
    bool blnFrameReadSuccessfully = capWebcam.read(matOriginal);  // get next frame from the webcam
    if (!blnFrameReadSuccessfully || matOriginal.empty()) {       // if we did not get a frame
        QMessageBox::information(this, "", "unable to read from webcam \n\n exiting program\n");
        QApplication::quit();
    }
    // process frame here . . .
The idea is that if the webcam can be read successfully at the beginning of the program but then cannot be (the webcam stops working, the user accidentally disconnects it, etc.), the program should show a message box to that effect and then close itself entirely.
With the above, if I unplug the webcam while the program is running (for testing purposes), the message box appears as intended, but after I choose OK a debug error screen appears. If I choose "Abort", the form is still there and will not respond. After I attempt to close the form multiple times, Windows asks "the program does not seem to be responding, would you like to close?", at which point I can close the form. Clearly this is not the intended effect.
After some Googling I found the suggestion to modify the code as follows:
///////////////////////////////////////////////////////////////////////////////////////////////////
void frmMain::closeEvent(QCloseEvent *) {
    QApplication::quit();
}
///////////////////////////////////////////////////////////////////////////////////////////////////
void frmMain::processFrameAndUpdateGUI() {
    bool blnFrameReadSuccessfully = capWebcam.read(matOriginal);  // get next frame from the webcam
    if (!blnFrameReadSuccessfully || matOriginal.empty()) {       // if we did not get a frame
        QMessageBox::information(this, "", "unable to read from webcam \n\n exiting program\n");
        closeEvent(new QCloseEvent());
    }
    // process frame here . . .
When I first saw this code I was optimistic; however, it gives me the same result as above (the program hangs with the form still open). I'm using OpenCV 2.4.11 for my image processing, and my program has 4 files:
frmmain.h (.h for the main form, which is a standard QMainWindow made with Qt Creator)
frmmain.cpp (.cpp for the main form, where the above code resides)
main.cpp (which I have not changed from how Qt Creator made it)
frmmain.ui (typical form with a small number of common widgets added via Qt Creator)
Yes, I realize that I could show an error message on one of the widgets that can show text, return from the function, and leave it to the user to close the program, but I'm looking for a more elegant solution. Can anybody offer further advice as to how to fully close a graphical Qt program? Please advise.
Two things that could possibly solve your problem:
Before displaying the message box, stop the timer with its stop() method.
After the QApplication::quit(), exit the function with return;. Your function might otherwise run to the end one last time and access invalid objects.
For anybody else's reference, Rafael Monteiro's answer was spot on. Here is the updated code (verified working):
///////////////////////////////////////////////////////////////////////////////////////////////////
void frmMain::closeEvent(QCloseEvent *) {
    if (qtimer->isActive()) qtimer->stop();  // had to stop the timer here !!!!!!!!
    QApplication::quit();
}
///////////////////////////////////////////////////////////////////////////////////////////////////
void frmMain::processFrameAndUpdateGUI() {
    bool blnFrameReadSuccessfully = capWebcam.read(matOriginal);  // get next frame from the webcam
    if (!blnFrameReadSuccessfully || matOriginal.empty()) {       // if we did not get a frame
        QMessageBox::information(this, "", "unable to read from webcam \n\n exiting program\n");
        closeEvent(new QCloseEvent());
        return;  // had to add return here !!!!!!!!!
    }
    // rest of function here . . .
I should mention that I had to both stop the timer and add the return. Thanks, Rafael!

SDL_RenderCopy() has strange behavior on Raspberry PI

This is driving me up the wall..
I've got a very simple SDL2 program.
It has an array of SDL_Texture pointers, of which the first 3 are filled as follows:
SDL_Texture *myarray[15];
SDL_Surface *surface;
for (int i = 0; i < 3; i++)
{
    char filename[] = "X.bmp";
    filename[0] = i + '0';
    surface = SDL_LoadBMP(filename);
    myarray[i] = SDL_CreateTextureFromSurface(myrenderer, surface);
    SDL_FreeSurface(surface);
}
This works, no errors.
In the main loop (which is just a standard event loop waiting for SDL_QUIT, keystrokes, and a user event that an SDL timer puts in the event queue every second), I just do the following for the timer-triggered event:
idx = (idx + 1) % 3;  // idx is a global var, initially 0
SDL_RenderClear(myrenderer);
SDL_RenderCopy(myrenderer, myarray[idx], NULL, NULL);
SDL_RenderPresent(myrenderer);
This works fine for 0.bmp and 1.bmp, but the 3rd image (2.bmp) simply shows as a black field.
This behaviour is consistent:
If I alternate the first 2 images they are both fine.
If I alternate the 2nd and 3rd image the 3rd image doesn't show.
If I use more than 3 images then 3 and upwards show as black.
Loading order doesn't matter. It starts going wrong with the 3rd image loaded from disk.
All images are properly formatted BMPs.
I even saved 2.bmp back to disk under a different name using SDL_SaveBMP() after loading it, to make sure it was loaded into memory correctly. The new file is bit-for-bit identical to the original.
This program, without modifications and the same bmp files, works fine on OSX (XCode5) and Windows (VC++ 2012 Express).
The problem only shows on the Raspberry PI.
I have placed explicit error checks on every call that returns a result/error code (not shown in the samples above for brevity), but all of them report "no error".
I used the latest stable source release from www.libsdl.org and compiled it as instructed (configure, make, make install, etc.).
Does anybody have any idea what could be going on?
P.S.
Keyboard input doesn't seem to work on my Pi either, but I haven't delved into that yet.
Answering myself, as I finally figured it out...
I finally went back to the README-raspberrypi.txt that came with the SDL2 sources.
I didn't read it carefully enough the first time around...
Problem 1: I'm running on a full-HD display. The Pi's default GPU memory is 64 MB, which is not enough for large displays with double buffering. As suggested in the README, I increased this to 128 MB, and this solved the black-image problem.
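(For reference, and as an assumption about the setup rather than something stated in the post: on a stock Raspberry Pi image the GPU memory split is typically raised by adding the line below to /boot/config.txt, or via the Memory Split setting in raspi-config, and rebooting.)
gpu_mem=128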
Problem 2: Text input wasn't working because my user account was not in the input group. I had added the default "pi" account to the input group initially, but when I later started using another account I forgot to add that user to the group.
In short: Caught by my own (too) quick skimming of the documentation.

Function call causes C++ program to freeze unless stepped-through in debugger

I have this short C++ program which takes snapshot images from a camera in a loop and displays them:
void GenericPGRTest::execute()
{
    // connect camera
    Camera *cam = Camera::Connect();
    // query resolution and create view window
    const Resolution res = cam->GetResolution();
    cv::namedWindow("View");
    c = 0;
    // keep taking snapshots until escape hit
    while (c != 27)
    {
        const uchar *buf = cam->SnapshotMono();
        // create image from buffer and display it
        cv::Mat image(res.height, res.width, CV_8UC1, (void*)buf);
        cv::imshow("Camera", image);
        c = cv::waitKey(1000);
    }
}
This uses a camera-control class (Camera) I created with the Point Grey SDK, plus functions from the OpenCV library to display the images. I'm not necessarily looking for answers about the usage of either library, but rather for some insight into how to debug a bizarre problem in general. The problem is that the application freezes (not crashes) on the cam->SnapshotMono() line. Of course, I stepped through the function with a debugger. Here are its contents:
const uchar* Camera::SnapshotMono()
{
    cam_.StartCapture();
    // get a frame
    Image image;
    cam_.RetrieveBuffer(&image);
    cam_.StopCapture();
    grey_buffer_.DeepCopy(&image);
    return grey_buffer_.GetData();
}
Now, every time I step through the function in the debugger, everything works OK. But the first time I do a "step over" instead of "step into" SnapshotMono(), bam, the program freezes. When I pause it at that time, I notice that it's stuck inside SnapshotMono() at the RetrieveBuffer() line. I know it's a blocking call so it theoretically can freeze (no idea why but it's possible), but why does it block when running normally and not when being debugged? This is one of the weirdest kinds of behaviour under debugging I've seen so far. Any idea why this could happen?
For those familiar with FlyCapture: the code above doesn't break as is, but only when I have previously used StartCapture() in callback mode and then terminated it with StopCapture() before this code runs.
Compiled with MSVC2010, OpenCV 2.4.5 and PGR FlyCapture 2.4R10.
Wild guess, but could it be that StartCapture already starts the process that ends up putting the buffer into image, and when you step through you give it some time before you reach RetrieveBuffer? That's not the case when you run it all at once...
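If that guess is right, one way to test it is to start the capture once, outside the per-frame path, so RetrieveBuffer() never races a capture that has only just started. This is a sketch only, reusing the cam_ and grey_buffer_ members and FlyCapture-style calls from the question; the StartStream()/StopStream() helpers are hypothetical names invented here.
void Camera::StartStream()            // hypothetical helper: call once before the snapshot loop
{
    cam_.StartCapture();              // begin streaming up front
}
const uchar* Camera::SnapshotMono()
{
    Image image;
    cam_.RetrieveBuffer(&image);      // now only blocks until the next frame arrives
    grey_buffer_.DeepCopy(&image);
    return grey_buffer_.GetData();
}
void Camera::StopStream()             // hypothetical helper: call once after the loop
{
    cam_.StopCapture();               // stop streaming when finished
}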