How can I combine Kinect data with Qt5? - c++

I'm currently trying to create a GUI that will display Kinect data as well as a couple of control features (buttons, etc.) and a graph. The control features and the graph are simple enough; however, I'm a bit stuck on how to proceed with the Kinect.
I've looked at numerous examples for MSVS and Kinect Studio, but I can't quite figure out how to get the Kinect data into my Qt application. I also need the Kinect data to be recorded and then replayed when a button is pressed.
How should I go about this? My hope is that somehow I can extract the Kinect data in its raw (x, y, z) format and then reconstruct it using OpenGL. I've seen references to OpenNI as well as OpenCV, but I'm unfamiliar with both. I'm pretty familiar with Qt but have virtually no experience with Kinect development.
I'm currently using Qt 5 with Qt Creator, MSVS 2015, and the Kinect SDK 2.0.
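
For reference, pulling raw (x, y, z) points out of the sensor with the Kinect SDK 2.0 looks roughly like the sketch below. COM cleanup and error handling are trimmed, and in a real app the sensor would be opened once and the reader polled from a QTimer so Qt's event loop keeps running:

    // Sketch only: grab one depth frame and map it to camera-space (x, y, z).
    #include <Kinect.h>   // Kinect for Windows SDK 2.0
    #include <vector>

    std::vector<CameraSpacePoint> grabPointCloud()
    {
        IKinectSensor* sensor = nullptr;
        GetDefaultKinectSensor(&sensor);
        sensor->Open();

        IDepthFrameSource* source = nullptr;
        sensor->get_DepthFrameSource(&source);
        IDepthFrameReader* reader = nullptr;
        source->OpenReader(&reader);

        ICoordinateMapper* mapper = nullptr;
        sensor->get_CoordinateMapper(&mapper);

        std::vector<CameraSpacePoint> cloud;
        IDepthFrame* frame = nullptr;
        if (SUCCEEDED(reader->AcquireLatestFrame(&frame))) {
            UINT count = 0;
            UINT16* buffer = nullptr;
            frame->AccessUnderlyingBuffer(&count, &buffer);

            // One (x, y, z) point in metres per depth pixel (512 x 424 on the v2).
            cloud.resize(count);
            mapper->MapDepthFrameToCameraSpace(count, buffer, count, cloud.data());
            frame->Release();
        }
        return cloud;  // e.g. upload into a VBO inside a QOpenGLWidget
    }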

Related

Recording desktop using Qt multimedia

I'm trying to record microphones, cameras, and monitors using Qt's multimedia module, but I couldn't find anything about screen recording in the documentation, so it seemed I'd have to do it myself.
I've searched around and found this answer:
How to display desktop in windows form using Qt?
It seems I'd have to use FFmpeg to merge the captured screenshots into an MP4 file, which doesn't seem like an efficient approach. I want to show the recording output in real time with a QVideoWidget. I was going to use QMediaCaptureSession and QMediaRecorder, but they only work with cameras. Is there any way to record the desktop screen with Qt's multimedia module?
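The only workaround I can see is polling QScreen::grabWindow() myself, outside the multimedia module entirely; a rough sketch (encoding the frames to MP4 would still need something like FFmpeg):

    // Sketch: poll the primary screen and preview it live in a QLabel.
    #include <QApplication>
    #include <QLabel>
    #include <QScreen>
    #include <QTimer>

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);

        QLabel preview;
        preview.resize(960, 540);
        preview.show();

        QScreen* screen = app.primaryScreen();
        QTimer timer;
        QObject::connect(&timer, &QTimer::timeout, [&]() {
            // grabWindow(0) captures the whole desktop of this screen.
            QPixmap frame = screen->grabWindow(0);
            preview.setPixmap(frame.scaled(preview.size(), Qt::KeepAspectRatio));
        });
        timer.start(33);  // ~30 fps; the real rate depends on the platform

        return app.exec();
    }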
P.S.: Please correct me if my question is wrong.

How to display rgb data in QtQuick 2?

I am writing an application for Android using Qt (with the Android build kit), Felgo, and QtQuick.
The app should be able to display a video which is decoded by libvlc. The problem is that I do not know how to display the frames provided by libvlc (the format is 24-bit RGB). I looked up several approaches on the internet, but they were all outdated (e.g. they used some OpenGL functions which are not available in the kit).
So does somebody know how to efficiently render the raw RGB data into a QtQuick item?
(I know it is possible with a PaintedItem and Painter, but this uses a QImage, which makes the process too inefficient.)
(I do not know much about OpenGL, so if your solution includes it, please be so kind as to explain the functions you are using.)
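
For reference, the usual scene-graph answer to this is a custom QQuickItem that uploads each frame as a texture in updatePaintNode(); a rough sketch (the class name VideoItem is made up, and the QImage handed in would be a copy of libvlc's RGB buffer wrapped as Format_RGB888):

    // Sketch: a custom QQuickItem that uploads the latest frame as a texture.
    #include <QImage>
    #include <QQuickItem>
    #include <QQuickWindow>
    #include <QSGSimpleTextureNode>

    class VideoItem : public QQuickItem {    // hypothetical name
        Q_OBJECT
    public:
        VideoItem() { setFlag(ItemHasContents, true); }

    public slots:
        // Call on the GUI thread with a ready frame.
        void setFrame(const QImage& frame) {
            m_frame = frame;
            update();   // schedules updatePaintNode() on the render thread
        }

    protected:
        QSGNode* updatePaintNode(QSGNode* old, UpdatePaintNodeData*) override {
            auto* node = static_cast<QSGSimpleTextureNode*>(old);
            if (!node)
                node = new QSGSimpleTextureNode;
            if (!m_frame.isNull()) {
                delete node->texture();   // drop the previous frame's texture
                node->setTexture(window()->createTextureFromImage(m_frame));
            }
            node->setRect(boundingRect());
            return node;
        }

    private:
        QImage m_frame;
    };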

Can I use OpenGL context in React Native for Windows?

I wonder if I can write my own native module, render something using OpenGL in C++, and finally display the rendered picture on the React Native side (by simply using a component).
If so, can I use that to render an animation at, for example, 60 fps?
My case is that I've got a custom, let's say, game renderer written in OpenGL, and I'm looking for some fancy solution to create an editor detached from the engine code.
I've already analyzed some React Native video libraries and discovered that frames are injected as textures of components, but I'm not sure it's the best solution (I can't find any documentation of those low-level mechanisms in React Native).
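What I have in mind on the native side is roughly the read-back path below (a sketch; the React Native bridging is omitted, and the glReadPixels round-trip through the CPU is likely too slow for 60 fps compared to sharing the texture directly, as those video libraries do):

    // Sketch: read back the pixels of the frame just rendered into the
    // currently bound framebuffer.
    #include <windows.h>   // must precede GL/gl.h on Windows
    #include <GL/gl.h>
    #include <cstdint>
    #include <vector>

    std::vector<std::uint8_t> readFrame(int width, int height)
    {
        std::vector<std::uint8_t> pixels(width * height * 4);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        // Origin is bottom-left, so rows usually need flipping before display.
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        return pixels;
    }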
Any advice? Thanks in advance!

Kinect v1 integration with Qt

I have been working with Qt and Kinect v1 for a couple of weeks. I don't know how to create a screen inside a Qt GUI and show the video stream captured by the Kinect.
I have searched for an in-depth tutorial on the internet, but nothing seems directly related to my problem.
I prefer programming in C++, using Qt 5.6. I know how to use some basic Qt features but have never worked with OpenGL in Qt before. I can also run the Kinect using the Developer Toolkit browser, and it works perfectly. Then I saw a tutorial using the Kinect and C++ at this link: https://homes.cs.washington.edu/~edzhang/tutorials/kinect/kinect2.html.
I followed the instructions and can create a new window showing the video stream captured by the Kinect sensor, but when I use the same code in a Qt project, it only shows a black video. I don't know how to resolve the problem at all. The code is too long to post because I wrote it into the project I created last week.
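
One common cause of the black video is that the tutorial drives its own Win32 render loop, which never runs once Qt's event loop takes over; the usual Qt pattern is to poll the sensor from a QTimer instead, roughly as below (getKinectRgbFrame() is a hypothetical placeholder for whatever fills a buffer from the Kinect v1 SDK):

    // Sketch: poll the sensor from a QTimer so Qt's event loop stays in charge.
    #include <QImage>
    #include <QLabel>
    #include <QPixmap>
    #include <QTimer>

    // Hypothetical helper: fills dst with one 640x480 BGRA frame using the
    // Kinect v1 SDK (e.g. NuiImageStreamGetNextFrame); returns false if no
    // new frame is ready yet.
    bool getKinectRgbFrame(uchar* dst);

    class KinectView : public QLabel {
        Q_OBJECT
    public:
        KinectView() {
            connect(&m_timer, &QTimer::timeout, this, &KinectView::pollFrame);
            m_timer.start(33);   // ~30 fps, matching the Kinect v1 colour stream
        }

    private slots:
        void pollFrame() {
            static uchar buffer[640 * 480 * 4];
            if (getKinectRgbFrame(buffer)) {
                QImage img(buffer, 640, 480, QImage::Format_RGB32);
                setPixmap(QPixmap::fromImage(img));
            }
        }

    private:
        QTimer m_timer;
    };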

OpenGL - Display a video stream of the desktop on Windows

So I am trying to figure out how to get a video feed (or a screenshot feed if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolkit to make essentially a virtual screen. The only issue is that I have tried manually getting the pixels in OpenGL, but I have been unable to display them properly in a 3D environment.
I apologize in advance that I do not have a minimal runnable example, but due to all the dependencies and whatnot, getting ARToolkit code running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are two different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation, you basically need to have a texture that gets updated on each frame. I recall that ARToolkit for Unity has an example that plays video.
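In plain OpenGL terms, that usually means allocating the texture once with glTexImage2D and then overwriting its pixels every frame; a rough sketch:

    // Sketch: the texture was created once with glTexImage2D; each new frame
    // only replaces the pixel data.
    #include <windows.h>   // must precede GL/gl.h on Windows
    #include <GL/gl.h>

    void uploadFrame(GLuint tex, int w, int h, const unsigned char* rgb)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        // glTexSubImage2D only replaces pixel data, so it is much cheaper per
        // frame than re-creating the texture with glTexImage2D.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGB, GL_UNSIGNED_BYTE, rgb);
    }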
Streaming the desktop to another device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me that what you want to do is make a VLC viewer and put that into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.