Having problems displaying images in Code::Blocks using OpenCV - C++

Recently I changed my PC, and when I run my C++ program that uses the OpenCV library inside Code::Blocks on Ubuntu 12.04, I get the message "init done, opengl support available" and the window displaying the images looks different from the one on my previous PC with the same C++ program. I want to show two images in one window using OpenCV, but the quality of the images is low and sometimes a part of one image appears inside the other. I ran the exact same program on my old PC and there was no problem at all. Would someone please help me with this? I have also uploaded an image showing the display problem.
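For what it's worth, a minimal sketch of showing two images side by side in one window with cv::hconcat (the file names are placeholders; hconcat needs both inputs to have the same number of rows and the same type):

#include <opencv2/opencv.hpp>

int main()
{
    // Placeholder file names; replace with the actual images.
    cv::Mat left  = cv::imread("left.jpg");
    cv::Mat right = cv::imread("right.jpg");
    if (left.empty() || right.empty())
        return 1;

    // hconcat requires matching row counts and types, so scale the second
    // image to the height of the first before concatenating.
    cv::resize(right, right,
               cv::Size(right.cols * left.rows / right.rows, left.rows));

    cv::Mat both;
    cv::hconcat(left, right, both);

    cv::imshow("both", both);
    cv::waitKey(0);
    return 0;
}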

Related

OpenCV cv::imshow() GUI not showing

I am trying to display an image, where it is crucial to be able to zoom in.
On my Ubuntu Gnome 16.04 machine, the GUI always shows and the image is zoomable.
But on my Ubuntu 18.04 machine, the GUI never shows and is not zoomable. I've tried the following ways to create the Window:
cv::namedWindow("Name", CV_WINDOW_AUTOSIZE);
cv::namedWindow("Name", CV_GUI_NORMAL);
cv::namedWindow("Name", CV_GUI_EXTENDED);
using the cv::namedWindow() and the cvNamedWindow() commands. They all work on my 16.04 machine, yet none on my 18.04.
My OpenCV version is 3.2 and I'm using it in a ROS workspace, if that makes any difference.
I suspect the flags you are using are outdated. According to the documentation of OpenCV 3.2.0, the usable flags are as follows:
WINDOW_NORMAL or WINDOW_AUTOSIZE: WINDOW_NORMAL enables you to resize the window, whereas WINDOW_AUTOSIZE automatically adjusts the window size to fit the displayed image (see imshow), and you cannot change the window size manually.
WINDOW_FREERATIO or WINDOW_KEEPRATIO: WINDOW_FREERATIO adjusts the image with no respect to its ratio, whereas WINDOW_KEEPRATIO keeps the image ratio.
These flags might work for you.
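For example, a minimal sketch using the 3.2 flag names (the file name is a placeholder):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("image.png");   // placeholder path
    if (img.empty())
        return 1;

    // WINDOW_NORMAL makes the window resizable (and therefore zoomable);
    // WINDOW_KEEPRATIO preserves the aspect ratio while resizing.
    cv::namedWindow("Name", cv::WINDOW_NORMAL | cv::WINDOW_KEEPRATIO);
    cv::imshow("Name", img);
    cv::waitKey(0);
    return 0;
}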

Image doesn't change size when the window size is adjusted, OpenCV 3.4.1_5, C++

I'm new to OpenCV and I'm using 3.4.1_5 with C++ in Xcode on Mac. I used Homebrew to install OpenCV.
I'm following an OpenCV tutorial that is done in Visual Studio on Windows. Here's a link to the tutorial.
Basically, the tutorial shows how to create/resize a window, which is easy. So I have some code like this which is basically the same as in the tutorial:
#include "opencv2/opencv.hpp"
....
namedWindow("modi", CV_WINDOW_FREERATIO);
imshow("modi", modi);
resizeWindow("modi", modi.cols/2, modi.rows/2); //modi is a Mat
....
Supposedly, the image will look the same except that it will be 1/4 of the original size and fit into the resized window. That is what happens in the tutorial. However, it's not the case on my Mac: my image stays the same size, and only the window shrinks to 1/4. If I drag the border of the window, the "covered" part of the image is revealed.
This actually poses a problem for my project later on, so I want to figure out why. I want to achieve what the tutorial achieves, and I've tried all the window parameters, like AUTOSIZE, KEEPRATIO, OPENGL, etc. None of them worked. I suspect it may be a platform or version problem, but I have no way to test that.
Please help! Any hints would be greatly appreciated!
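For comparison, a self-contained version of that create/resize flow using cv::WINDOW_NORMAL (the path is a placeholder); whether the picture scales with the window can still depend on the highgui backend (Cocoa vs. Qt vs. GTK), which may be the difference between the tutorial's Windows setup and the Mac:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat modi = cv::imread("input.jpg");   // placeholder path
    if (modi.empty())
        return 1;

    // With WINDOW_NORMAL the backend is expected to scale the image to the
    // window, so resizeWindow() should shrink the picture as well.
    cv::namedWindow("modi", cv::WINDOW_NORMAL);
    cv::imshow("modi", modi);
    cv::resizeWindow("modi", modi.cols / 2, modi.rows / 2);
    cv::waitKey(0);
    return 0;
}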

Kinect v1 integration with Qt

I have been working with Qt and the Kinect v1 for a couple of weeks. I don't know how to create a view inside a Qt GUI and show the video stream captured by the Kinect.
I have searched for an in-depth tutorial on the Internet, but nothing seems to relate directly to my problem.
I prefer programming in C++, using Qt 5.6. I know how to use some basic Qt features but have never worked with OpenGL in Qt before. I can run the Kinect using the Developer Toolkit browser and it works perfectly. I then found a tutorial using the Kinect and C++ at this link: https://homes.cs.washington.edu/~edzhang/tutorials/kinect/kinect2.html.
I followed the instructions and I can create a new window showing the video stream captured by the Kinect sensor, but when I use the same code in the Qt project, it only shows a black video. I don't know how to resolve the problem at all. The code is too long to post because I wrote it into the project I created last week.
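Not the asker's code, but a minimal sketch of the usual way to put a raw BGRA frame buffer (which is what the Kinect v1 color stream produces in the linked tutorial) into a Qt widget; getKinectColorFrame() is a hypothetical placeholder for the SDK call that fills the buffer:

#include <QApplication>
#include <QImage>
#include <QLabel>
#include <QPixmap>
#include <QTimer>
#include <algorithm>
#include <vector>

static const int W = 640, H = 480;

// Hypothetical placeholder: copy the latest Kinect color frame (BGRA,
// W*H*4 bytes) into dst. Replace the body with the SDK call from the tutorial.
void getKinectColorFrame(std::vector<unsigned char>& dst)
{
    std::fill(dst.begin(), dst.end(), 128);   // grey test pattern
}

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QLabel view;
    view.resize(W, H);
    view.show();

    std::vector<unsigned char> buffer(W * H * 4);
    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, [&]() {
        getKinectColorFrame(buffer);
        // Format_RGB32 matches the BGRA byte order of the Kinect v1 color
        // stream on little-endian machines; copy() detaches from the buffer.
        QImage frame(buffer.data(), W, H, W * 4, QImage::Format_RGB32);
        view.setPixmap(QPixmap::fromImage(frame.copy()));
    });
    timer.start(33);   // roughly 30 fps

    return app.exec();
}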

OpenGL - Display a video stream of the desktop on Windows

So I am trying to figure out how to get a video feed (or screenshot feed if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolkit to make what is essentially a virtual screen. The only issue is that I have tried manually grabbing the pixels in OpenGL, but I have been unable to display them properly in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but due to all the dependencies and whatnot, trying to get an ARToolkit example running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are 2 different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation, you basically need a texture that gets updated each frame. I recall that ARToolkit for Unity has an example that plays video.
However, streaming the desktop to another device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me that what you want to do is make a VLC viewer and put it into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.
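If the capture half is the sticking point, a minimal sketch of grabbing the Windows desktop into a BGRA pixel buffer with plain GDI is below; the upload into the ARToolkit/OpenGL texture is only hinted at in the final comment, since it depends on the rendering setup:

#include <windows.h>
#include <vector>

// Grab the primary desktop into a top-down, 32 bpp BGRA buffer (alpha unused).
bool captureDesktop(std::vector<unsigned char>& pixels, int& width, int& height)
{
    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // Copy the screen contents into the memory bitmap.
    BOOL ok = BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);   // deselect the bitmap before GetDIBits

    BITMAPINFOHEADER bi = {};
    bi.biSize        = sizeof(bi);
    bi.biWidth       = width;
    bi.biHeight      = -height;   // negative height = top-down rows
    bi.biPlanes      = 1;
    bi.biBitCount    = 32;
    bi.biCompression = BI_RGB;

    pixels.resize(static_cast<size_t>(width) * height * 4);
    if (ok)
        ok = GetDIBits(memDC, bmp, 0, height, pixels.data(),
                       reinterpret_cast<BITMAPINFO*>(&bi), DIB_RGB_COLORS) != 0;

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return ok != FALSE;
}

// Each frame: call captureDesktop(...) and upload the buffer to the
// augmentation's texture, e.g. with glTexSubImage2D (convert to RGBA first
// if your GL headers lack a BGRA pixel format).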

Can't get supported resolutions of the camera for image capturing using QCamera in Qt 5.0.2 (Linux)

I am trying to write a simple program for taking photos from the webcam using Qt.
There is an example project in Qt Creator, where QCamera is used to take photos and record video. But it is not working the right way: I can't get the supported resolutions of the camera using the method QCameraImageCapture::supportedResolutions(). An empty QList is returned, and the camera always captures images at 640x480 resolution.
The OS is Ubuntu 11.04; the same problem occurs on Windows XP.
Can anyone help me?
I have answered nearly the same question.
https://stackoverflow.com/a/21140214/2452081
In short:
A portable solution could be GStreamer, but if a Windows DirectShow solution is enough for you, download my code from here
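For context, the query described in the question boils down to roughly this (whether the list comes back empty depends on the multimedia backend, which is exactly the problem being reported):

#include <QGuiApplication>
#include <QCamera>
#include <QCameraImageCapture>
#include <QSize>
#include <QDebug>

int main(int argc, char** argv)
{
    QGuiApplication app(argc, argv);

    QCamera camera;                        // default camera device
    QCameraImageCapture capture(&camera);
    camera.load();                         // resolutions are reported for a loaded camera

    // On backends without proper support this returns an empty list and
    // captures fall back to 640x480.
    const QList<QSize> resolutions = capture.supportedResolutions();
    for (const QSize& s : resolutions)
        qDebug() << s.width() << "x" << s.height();

    return 0;
}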