GtkImage isn't updating and opencv won't free images - c++

I'm trying to display video from a webcam. I capture the images from the webcam using opencv and then I try to display them on a GtkImage.
This is my code, which runs in a separate thread.
gpointer View::updateView(gpointer v)
{
    IplImage *image;
    CvCapture *camera;
    GMutex *mutex;
    View *view;

    view = (View*)v;
    camera = view->camera;
    mutex = view->cameraMutex;

    while (1)
    {
        // grab the next frame from the webcam under the camera mutex
        g_mutex_lock(view->cameraMutex);
        image = cvQueryFrame(camera);
        g_mutex_unlock(view->cameraMutex);
        if (image == NULL) continue;

        // OpenCV captures BGR; GdkPixbuf expects RGB
        cvCvtColor(image, image, CV_BGR2RGB);

        GDK_THREADS_ENTER();
        g_object_unref(view->pixbuf);
        view->pixbuf = gdk_pixbuf_new_from_data((guchar*)image->imageData,
                                                GDK_COLORSPACE_RGB, FALSE,
                                                image->depth,
                                                image->width, image->height,
                                                image->widthStep, NULL, NULL);
        gtk_image_set_from_pixbuf(GTK_IMAGE(view->image), view->pixbuf);
        gtk_widget_queue_draw(view->image);
        GDK_THREADS_LEAVE();

        usleep(10000);
    }
}
What happens is that one image is taken from the webcam and displayed and then the GtkImage stops updating.
In addition, when I try to use cvReleaseImage, I get a segfault saying that free() was passed an invalid pointer.

GTK is an event-driven toolkit, like many others. What you're doing is queuing new images to draw in an infinite loop, but you never give GTK a chance to actually draw them; that is not how a message pump works. You need to hand control back to GTK so it can draw the updated image. The way to do that is explained in the gtk_events_pending documentation.
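For illustration, the idiom from that documentation (a minimal sketch, run inside the capture loop so GTK can process its queued redraws):
    /* let GTK handle everything it has queued, then return to our loop */
    while (gtk_events_pending())
        gtk_main_iteration();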
Moreover, allocating/drawing/deallocating a GdkPixbuf for each image is sub-optimal. Just allocate the buffer once, outside your loop, draw into it inside the loop (overwriting the previous content), and display it. You only need to reallocate the buffer if your image size changes.
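A rough sketch of that allocate-once idea, assuming the frame dimensions (frame_width/frame_height below) are known up front and fixed; note that the pixbuf's rowstride may differ from OpenCV's widthStep, hence the per-row copy:
    /* once, before the loop: a single pixbuf for the whole stream */
    GdkPixbuf *pixbuf = gdk_pixbuf_new(GDK_COLORSPACE_RGB, FALSE, 8,
                                       frame_width, frame_height);
    guchar *dst    = gdk_pixbuf_get_pixels(pixbuf);
    int     stride = gdk_pixbuf_get_rowstride(pixbuf);

    /* inside the loop, after cvCvtColor: copy the frame into the pixbuf */
    for (int y = 0; y < image->height; y++)
        memcpy(dst + y * stride,
               image->imageData + y * image->widthStep,
               image->width * 3);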

I don't know how to work with GtkImage, but my guess is that you are not passing the newer images to the window. You need something like the native cvShowImage executed inside the loop. If it isn't that, I don't know.
Also, you shouldn't release the image used for capture; OpenCV allocates and deallocates it itself.
EDIT: Try using the OpenCV functions for viewing the image and see if the problem is still there.

Related

GTK image suddenly does not refresh without any errors or warnings

I am developing a tool that captures an image from a camera and shows it in a window. The GUI window is based on GTK 2.4. At first the tool runs correctly: the image is captured from the camera and shown in the window in real time. After a while, the image suddenly stops refreshing in the window, although it is still being captured from the camera. There are no errors or warnings. Has anyone encountered such a case? Thank you.
Ubuntu 18.04, GTK 2.4
// loop to call the following code to refresh the image on the window
pixbuf_ = Gdk::Pixbuf::create_from_data(
    final_img_buf_.data, Gdk::COLORSPACE_RGB, false, 8,
    final_img_buf_.cols, final_img_buf_.rows,
    static_cast<int>(final_img_buf_.step));
gtk_image_.set(pixbuf_);
Edit at 2019-02-27
Thanks for all your replies. I have upgraded to GTK+ 3, but the problem still appears.
// loop to call the following code to refresh the image on the window
pixbuf_ = Gdk::Pixbuf::create_from_data(
    final_img_buf_.data, Gdk::COLORSPACE_RGB, false, 8,
    final_img_buf_.cols, final_img_buf_.rows,
    static_cast<int>(final_img_buf_.step));
// ensure the image is normally updating
//std::this_thread::sleep_for(std::chrono::milliseconds(30));
gtk_image_.set(pixbuf_);
Glib::RefPtr<Gdk::Pixbuf> pixbuf = gtk_image_.get_pixbuf();
std::string filename = std::string("./debug/")
    + std::to_string(CurTimestampMicrosecond()) + std::string(".jpg");
pixbuf->save(filename, "jpeg");
After running for a while, the window no longer refreshes the image, but the image is still saved correctly.
Edit at 2019-02-28
// The initialization code
gtk_main_.reset(new Gtk::Main(argc, argv));
ctrl_window_.reset(new CtrlWindow(screen, ctrl_rect)); // inherited from Gtk::Window
thread_ = std::thread([this]() {
    Gtk::Main::run(*ctrl_window_);
});
It looks like both g_timeout_add and possibly g_idle_add can be delayed by the processing of other event sources, so they are not optimal in contexts requiring precise timing, like drawing an image to the screen before the frame updates. I've noticed that using g_timeout_add in a 2D graphics context completely freezes the application if it can't produce a frame at the specified fps.
gtk_widget_add_tick_callback() might be better suited to your needs, as it will draw the buffer as quickly as the application can, without hangs, basically doing the messy synchronization for you.
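For illustration, a minimal sketch of such a tick callback (GTK+ 3.8 or later; the widget name image_widget and the callback are made up for the example):
    /* invoked by GTK once per frame, in sync with the frame clock */
    static gboolean on_tick(GtkWidget *widget, GdkFrameClock *clock, gpointer data)
    {
        gtk_widget_queue_draw(widget);  /* ask for a redraw this frame */
        return G_SOURCE_CONTINUE;       /* keep the callback installed */
    }

    /* register it once the image widget exists: */
    gtk_widget_add_tick_callback(image_widget, on_tick, NULL, NULL);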

How to remove a specific background from a video (OpenCV 3)

I am relatively new to OpenCV. My program will have a fixed camera that tracks insects moving past it. I figured this would mean that I could remove the background from the video. I have attempted to use the method I found in a tutorial (http://docs.opencv.org/3.1.0/d1/dc5/tutorial_background_subtraction.html#gsc.tab=0):
pMOG2 = cv::createBackgroundSubtractorMOG2();
..
pMOG2->apply(frame, background);
However, how does this determine the background?
I have tried another approach which I thought might work: capture the background when the program first starts, then use absdiff() or subtract() on the background and the current frame. Unfortunately, this results in a strange image in which parts of the static background are displayed over the video, which messes up the tracking.
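For reference, a rough sketch of what I mean by that second approach (assuming a cv::VideoCapture named capture; grayscale for simplicity):
    // store the first frame as the background, then diff every new frame
    cv::Mat background, frame, gray, diff, mask;
    capture >> background;
    cv::cvtColor(background, background, cv::COLOR_BGR2GRAY);
    while (capture.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::absdiff(gray, background, diff);                    // per-pixel |frame - background|
        cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);  // keep pixels that changed
    }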
I am a bit confused as to what would be the best way to do things. Is it possible to remove a specific background from each frame?
Thanks!

OpenCV's highgui in C++ Qt application

I am trying to combine OpenCV's webcam access and image display functions with Qt's Network capabilities (Qt 4.8.1 C++ console application using OpenCV 2.4.2 highgui).
I am currently trying to avoid converting a cv::Mat to a Qt format and displaying it by custom Qt GUI for simplicity.
For this, I am creating a wrapper class that does all the OpenCV work (cap = new VideoCapture(), namedWindow(), cap->read(), imshow(), destroyAllWindows(), cap->release()), controlled by a QTimer, and moving it to a QThread.
This mostly works (there are still garbage characters in the window title), but sometimes OpenCV creates a new window when the thread's parent class receives signals from its QTcpServer.
This results in "dead" image windows that are not updated anymore.
The order of creation (OpenCV thread / QTcpServer) does not seem to matter; however, if no client connects, I can see that OpenCV first creates a small window, which subsequently gets enlarged to fit the video image size.
If a client connects, the smaller window usually remains (window title garbage like "ét"), and the newer window receives the image data properly (window title garbage like ",ét").
Not moving the OpenCV wrapper object to a thread works as well, but the same problem occurs (even worse, two dead windows are created, the second of which has already received an image frame).
What could I generally be doing wrong to cause such behavior?
I suspect that the problem may be caused by the named window being created and accessed in two different methods of my QObject wrapper class (constructor and slot).
Or is the QTcpServer blocking the Qt event loop? Or is the OpenCV window handle "garbage-collected" for some reason when the signal-slot mechanism is triggered by QTcpServer events, so that imshow then creates a new window?
There is no obvious way of accessing the window by pointer, so this must be the reason. Removing the initial namedWindow() does away with the small, empty window, but still two windows are created.
It seems I have to convert and display the images myself after all - or is there another way?
I think the problem here is encapsulation. As far as I understand, you are trying to display an image in the same window from two different threads, and this is wrong. You should display images from a single thread: create an image buffer owned by that thread, have your other threads put images into that buffer, and use the buffer to display images. If I have understood it wrong, can you explain your problem more clearly?
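A minimal sketch of such a buffer, assuming a mutex-guarded "latest frame" slot (all names here are hypothetical): capture threads call put(), and only the display thread calls get() and draws:
    #include <QMutex>
    #include <QMutexLocker>
    #include <opencv2/core/core.hpp>

    class FrameBuffer {
    public:
        // called from capture threads: store a copy of the newest frame
        void put(const cv::Mat &frame) {
            QMutexLocker lock(&mutex_);
            frame.copyTo(latest_);
        }
        // called from the single display thread: fetch a copy to show
        bool get(cv::Mat &out) {
            QMutexLocker lock(&mutex_);
            if (latest_.empty()) return false;
            latest_.copyTo(out);
            return true;
        }
    private:
        QMutex mutex_;
        cv::Mat latest_;
    };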

Garbage at top of screen when displaying text over image in devkitPro

I am currently using the 16-bit libnds example (with devkitPro) as a basis, and am trying to display text and a PNG background image on the same screen (in this example, the top screen). I am having a similar issue to this post.
I get garbage at the top of the screen (only if consoleInit(...) is called), similar to the first problem in that thread. The only problem is that I am displaying the background image with a different method, so the fixes made in that thread do not apply here.
All I am looking for is whether there is a way to fix the garbage at the top of the screen. If there is a more efficient/better way to display the image, I am willing to accept it; I just haven't found a detailed enough tutorial on how to load an image as a background without using this method. Any help would be appreciated. I will answer any further questions about what is not working.
You can find the project attached here.
Sorry for the long delay, but there are a few issues with your code. The first is that in Mode 4 the only background that can be set up as a 16-bit bitmap is layer 3. http://answers.drunkencoders.com/what-graphics-modes-does-the-ds-support/
Next, the layers all share a single chunk of background memory, and your garbage comes from overwriting part of the bitmap in video memory with the characters for the font and the map for the console background. A simple solution is to move the bitmap by setting its map base to 1. This offsets it in graphics memory by 16KB, which leaves 16KB of room for your text layer. (This only works because we can't display the entire 256x256 image on screen at once due to the resolution of the DS: at 256 x 256 pixels x 2 bytes per pixel, the bitmap is 128KB, which fills up all of memory bank A. To be more correct we should assign another memory bank to the main background, but since we can't see the bottom 70 or so lines of pixels of our image anyway, it's okay that they didn't quite make it into video memory.)
libnds also has a macro to make finding the memory for your background a bit simpler, called bgGetGfxPtr(id), which gets a pointer to your background gfx in video memory after you set it up, so you don't have to calculate it via an offset from BG_GFX.
In all, the changes to your code should look like this (I added a version of this to the libnds code FAQ at: http://answers.drunkencoders.com/wp-admin/post.php?post=289&action=edit&message=1)
int main(void) {
    // Top screen pic init
    videoSetMode(MODE_4_2D);
    vramSetBankA(VRAM_A_MAIN_BG);
    int bg = bgInit(3, BgType_Bmp16, BgSize_B16_256x256, 1, 0);
    decompress(drunkenlogoBitmap, bgGetGfxPtr(bg), LZ77Vram); // displays/decompresses top image
    //videoSetMode(MODE_4_2D);
    consoleInit(0, 0, BgType_Text4bpp, BgSize_T_256x256, 4, 0, true, true);

    iprintf("\x1b[1;1HThe garbage is up here ^^^^^.");
    iprintf("\x1b[21;1HTesting the text function...");

    while(1) {
        swiWaitForVBlank();
        scanKeys();
        if (keysDown() & KEY_START) break;
    }

    return 0;
}

Using cv::waitKey without having to call cv::namedWindow or cv::imshow first

I am writing a GUI program using Qt and doing some video processing with OpenCV. I am displaying the result of the OpenCV process (which is in a separate thread) in a label in the main GUI thread.
The problem I am having is that cv::waitKey doesn't work unless I open a native OpenCV window using cv::namedWindow or cv::imshow. Does anybody know how to solve this?
Short example:
void Thread::run()
{
    // needed variables
    cv::VideoCapture capture(0);
    cv::Mat image;

    // main loop
    //cv::namedWindow("test");
    forever
    {
        capture >> image;
        if (!image.data)
            break;
        emit paintToDisplay(convertToQImage(image));
        cv::waitKey(40);
    }
}
With cv::namedWindow("test"); commented out, the program crashes with an access violation error.
With cv::namedWindow("test"); uncommented, the program displays perfectly, but there's a window (named "test") that I don't want or need. Anybody?
cv::waitKey() only works with OpenCV windows, which you are not using here.
I suggest you investigate a Qt alternative, most probably qSleep(), which is provided by the QTest module:
QTest::qSleep(40);
cv::waitKey is part of OpenCV's GUI loop for showing windows.
If you simply want to wait for a key press, see QWaitCondition.
Or you could display another named window with no image in it, or with a small 1x1-pixel image, and just ignore that window.
I found a solution: use msleep(). It's easy to use since it's a member of QThread.
Just thought I'd update this in case someone with a similar problem finds this thread.
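For completeness, the question's loop with that change (a sketch; msleep() is a static member of QThread, callable inside run()):
    forever
    {
        capture >> image;
        if (!image.data)
            break;
        emit paintToDisplay(convertToQImage(image));
        msleep(40);  // pace the loop without needing an OpenCV window
    }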
You can call
qApp->processEvents();
instead of
cv::waitKey(40);
in the loop to make your application responsive and let the rest of the loop do its job.
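That is, the question's loop becomes (a sketch):
    forever
    {
        capture >> image;
        if (!image.data)
            break;
        emit paintToDisplay(convertToQImage(image));
        qApp->processEvents();  // handle this thread's pending events instead of blocking
    }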