I am writing an application for Android using Qt (with the Android build kit), Felgo and QtQuick.
The app should be able to display a video that is decoded by libvlc. The problem is that I do not know how to display the frames provided by libvlc (the format is 24-bit RGB). I looked up several approaches on the internet, but they were all outdated (e.g. they used OpenGL functions that are not available in the kit).
So does somebody know how to efficiently render the raw RGB data into a QtQuick item?
(I know it is possible with a QQuickPaintedItem and a QPainter, but that goes through a QImage, which makes the process too inefficient.)
(I do not know much about OpenGL, so if your solution includes it, please be so kind as to explain the functions you are using.)
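To make the question more concrete, here is an untested sketch of the direction I have in mind: let libvlc decode into a buffer I own (libvlc_video_set_callbacks / libvlc_video_set_format, assuming libvlc 3.x) and upload that buffer as a scene-graph texture from a custom QQuickItem. All class and member names below are placeholders; the expensive part of QQuickPaintedItem is the QPainter software rasterization, so a texture node should avoid most of the cost even though a QImage briefly wraps the buffer:

    // Untested sketch: libvlc decodes into a buffer we own; a custom QQuickItem
    // uploads the latest frame as a scene-graph texture each update.
    #include <QQuickItem>
    #include <QQuickWindow>
    #include <QSGSimpleTextureNode>
    #include <QImage>
    #include <QMutex>
    #include <vlc/vlc.h>

    class VideoItem : public QQuickItem {
        Q_OBJECT
    public:
        VideoItem() { setFlag(ItemHasContents, true); }

        void attach(libvlc_media_player_t *mp, unsigned w, unsigned h) {
            m_w = w; m_h = h;
            m_frame.resize(int(w * h * 3));                  // RV24 = 3 bytes/pixel
            libvlc_video_set_format(mp, "RV24", w, h, w * 3);
            libvlc_video_set_callbacks(mp, lockCb, unlockCb, displayCb, this);
        }

    protected:
        QSGNode *updatePaintNode(QSGNode *old, UpdatePaintNodeData *) override {
            auto *node = static_cast<QSGSimpleTextureNode *>(old);
            if (!node) node = new QSGSimpleTextureNode;
            m_mutex.lock();
            // Wrap the buffer without copying; copy() below is the one CPU-side
            // copy per frame (the scene graph may upload after we unlock).
            QImage img(reinterpret_cast<const uchar *>(m_frame.constData()),
                       int(m_w), int(m_h), int(m_w * 3), QImage::Format_RGB888);
            QSGTexture *tex = window()->createTextureFromImage(img.copy());
            m_mutex.unlock();
            delete node->texture();                          // free previous frame
            node->setTexture(tex);
            node->setRect(boundingRect());
            return node;
        }

    private:
        // Called from a VLC thread before each frame is decoded.
        static void *lockCb(void *opaque, void **planes) {
            auto *self = static_cast<VideoItem *>(opaque);
            self->m_mutex.lock();
            planes[0] = self->m_frame.data();
            return nullptr;
        }
        static void unlockCb(void *opaque, void *, void *const *) {
            static_cast<VideoItem *>(opaque)->m_mutex.unlock();
        }
        static void displayCb(void *opaque, void *) {
            auto *self = static_cast<VideoItem *>(opaque);
            // update() must be triggered from the GUI thread (Qt >= 5.10 overload).
            QMetaObject::invokeMethod(self, [self] { self->update(); },
                                      Qt::QueuedConnection);
        }

        QByteArray m_frame;
        QMutex m_mutex;
        unsigned m_w = 0, m_h = 0;
    };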
Related
I wonder if I can write my own native module, render something with OpenGL in C++, and finally display the rendered picture on the React Native side (by simply using a component).
If so, can I use that to render an animation at, for example, 60 fps?
My case is that I've got a custom, let's say, game renderer written in OpenGL, and I'm looking for some fancy solution to create an editor detached from the engine code.
I've already analyzed some React Native video libraries and discovered that frames are injected as the texture of a component (a generic sketch of that mechanism follows below), but I'm not sure whether it is the best solution (I can't find any documentation of those low-level mechanisms in React Native).
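As far as I can tell, that texture-injection mechanism boils down to plain offscreen render-to-texture. A generic GLES2 sketch of the idea (nothing here is React Native specific; the host side still needs a platform surface such as a TextureView on Android):

    // Sketch: create a texture and an FBO that renders into it. The engine
    // draws into `fbo`; the host UI then samples `tex` like any other texture.
    #include <GLES2/gl2.h>

    GLuint createRenderTarget(int w, int h, GLuint *fboOut) {
        GLuint tex = 0, fbo = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);   // allocate, no data
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        *fboOut = fbo;
        return tex;
    }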
Any advice? Thanks in advance!
I'm currently trying to create a GUI that will display Kinect data as well as a couple of control features (buttons, etc.) and a graph. The control features and the graph are simple enough; however, I'm a bit stuck on how to proceed with the Kinect.
I've looked at numerous examples for MSVS and Kinect Studio, but I can't quite figure out how to get the Kinect data into my Qt code. Also, I need the data to be recordable and then replayable when a button is pressed.
How should I go about this? My hope is that I can somehow extract the Kinect data in its raw (x, y, z) format and then reconstruct it using OpenGL. I've seen references to OpenNI as well as OpenCV, but I'm unfamiliar with either. I'm pretty familiar with Qt but have virtually no experience with Kinect development.
I'm currently using Qt 5 with Qt Creator, MSVS2015, and the Kinect SDK 2.
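From reading the SDK headers, I think the polling side would look roughly like this (untested sketch against the Kinect SDK 2.0 C++ API; the struct is just my own wrapper, and grab() could be driven from a QTimer):

    // Untested sketch: poll Kinect v2 depth frames and map them to
    // camera-space (x, y, z) points with the Kinect SDK 2.0 API.
    #include <Kinect.h>   // Kinect for Windows SDK 2.0
    #include <vector>

    struct KinectGrabber {
        IKinectSensor *sensor = nullptr;
        IDepthFrameReader *reader = nullptr;
        ICoordinateMapper *mapper = nullptr;

        bool open() {
            if (FAILED(GetDefaultKinectSensor(&sensor)) || FAILED(sensor->Open()))
                return false;
            sensor->get_CoordinateMapper(&mapper);
            IDepthFrameSource *source = nullptr;
            sensor->get_DepthFrameSource(&source);
            source->OpenReader(&reader);
            source->Release();
            return true;
        }

        // Call e.g. from a QTimer every ~33 ms; returns false if no new frame.
        bool grab(std::vector<CameraSpacePoint> &points) {
            IDepthFrame *frame = nullptr;
            if (FAILED(reader->AcquireLatestFrame(&frame)))
                return false;                    // no frame ready yet
            UINT capacity = 0;
            UINT16 *buffer = nullptr;
            frame->AccessUnderlyingBuffer(&capacity, &buffer);
            points.resize(capacity);             // 512 x 424 for the v2 sensor
            // Convert raw depth values to metric (x, y, z) coordinates.
            mapper->MapDepthFrameToCameraSpace(capacity, buffer,
                                               capacity, points.data());
            frame->Release();
            return true;
        }
    };

Recording and replay could then be as simple as serializing those point vectors to disk and feeding them back through the same rendering path.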
So I am trying to figure out how to get a video feed (or screenshot feed, if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolkit to make, essentially, a virtual screen. The only issue is that I have tried manually grabbing the pixels in OpenGL, but I have been unable to display them properly in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but due to all the dependencies and whatnot, getting ARToolkit code running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know of another AR library that renders differently, or lets me achieve the same effect, I would be grateful.
There are 2 different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation, you basically need a texture that gets updated on each frame. I recall that ARToolkit for Unity has an example that plays video.
Streaming the desktop to the other device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
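If you do end up capturing the desktop yourself on Windows, the classic route is GDI plus a texture re-upload each frame. An untested sketch (function names are mine):

    // Untested sketch: captureDesktop grabs the primary monitor into a
    // 32-bit BGRA buffer; updateTexture re-uploads it each frame.
    #include <windows.h>
    #include <GL/gl.h>
    #include <vector>

    #ifndef GL_BGRA_EXT
    #define GL_BGRA_EXT 0x80E1
    #endif

    bool captureDesktop(std::vector<BYTE> &pixels, int &w, int &h) {
        HDC screen = GetDC(nullptr);
        w = GetSystemMetrics(SM_CXSCREEN);
        h = GetSystemMetrics(SM_CYSCREEN);
        HDC mem = CreateCompatibleDC(screen);
        HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
        HGDIOBJ old = SelectObject(mem, bmp);
        BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);
        SelectObject(mem, old);                 // deselect before GetDIBits

        BITMAPINFO bi = {};
        bi.bmiHeader.biSize = sizeof(bi.bmiHeader);
        bi.bmiHeader.biWidth = w;
        bi.bmiHeader.biHeight = -h;             // negative height = top-down rows
        bi.bmiHeader.biPlanes = 1;
        bi.bmiHeader.biBitCount = 32;
        bi.bmiHeader.biCompression = BI_RGB;
        pixels.resize(size_t(w) * h * 4);
        bool ok = GetDIBits(mem, bmp, 0, UINT(h), pixels.data(),
                            &bi, DIB_RGB_COLORS) == h;
        DeleteObject(bmp);
        DeleteDC(mem);
        ReleaseDC(nullptr, screen);
        return ok;
    }

    // Re-upload into an existing texture created with matching dimensions.
    void updateTexture(GLuint tex, const std::vector<BYTE> &pixels, int w, int h) {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());
    }

Note that GDI is slow for full-screen capture; for higher frame rates, look at the DXGI Desktop Duplication API.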
It sounds to me like what you want to do is make a VLC viewer and put that into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.
I'd like to make a detailed video list in my Qt application using vlc-qt. Other playback engines such as QtAV or QtMultimedia are not an option; it has to be vlc-qt (libvlc). That's why I need to get a small preview picture of each video, but I can't find anything suitable for this task except libvlc_video_take_snapshot. That method saves a picture locally, and I guess it needs a real render window to exist. That's not a good option for me; maybe there's some better solution?
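The closest I have come up with so far is to skip the snapshot API and use the raw libvlc memory-output callbacks instead, roughly like this (untested sketch; vlc-qt wraps libvlc, so the underlying libvlc_media_player_t should be reachable, and error handling, timeouts, and real thread synchronization are omitted):

    // Untested sketch: grab one frame into memory with the raw libvlc
    // callbacks, no render window needed.
    #include <vlc/vlc.h>
    #include <vector>

    struct Thumb {
        std::vector<unsigned char> rgb;     // RV24 pixels, w * h * 3 bytes
        unsigned w = 320, h = 180;
        volatile bool done = false;         // use a real atomic/condvar in production
    };

    static void *lockCb(void *opaque, void **planes) {
        auto *t = static_cast<Thumb *>(opaque);
        planes[0] = t->rgb.data();
        return nullptr;
    }

    static void displayCb(void *opaque, void *) {
        static_cast<Thumb *>(opaque)->done = true;   // first frame = our preview
    }

    Thumb makeThumbnail(const char *path) {
        Thumb t;
        t.rgb.resize(t.w * t.h * 3);
        libvlc_instance_t *vlc = libvlc_new(0, nullptr);
        libvlc_media_t *media = libvlc_media_new_path(vlc, path);
        libvlc_media_player_t *mp = libvlc_media_player_new_from_media(media);
        libvlc_video_set_format(mp, "RV24", t.w, t.h, t.w * 3);
        libvlc_video_set_callbacks(mp, lockCb, nullptr, displayCb, &t);
        libvlc_media_player_play(mp);
        // In real code, wait for the "playing" event before seeking.
        libvlc_media_player_set_position(mp, 0.1f);   // ~10% in, avoids black frames
        while (!t.done) { /* wait with a timeout instead of spinning */ }
        libvlc_media_player_stop(mp);
        libvlc_media_player_release(mp);
        libvlc_media_release(media);
        libvlc_release(vlc);
        return t;
    }

Is there a cleaner way to do this, or is the callback route the intended one?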
I'm writing a plugin in FireBreath, in C++.
I don't have any experience with either, so my question may be very basic.
How do I place a JPEG image inside my plugin window?
Or at least, how do I do it in C++ simple program?
Thanks,
RRR
There are a couple of other questions that may help you better understand this:
How to write a web browser plugin for IE, Firefox and Chrome
Directx control in browser plugin
Basically you'll get a drawing model from FireBreath with the AttachedEvent. Depending on your platform, you will draw to that window using platform-specific drawing APIs. On Windows, for example, you would get the HWND from the PluginWindow (cast it to a PluginWindowWin) and then draw to that. Just make sure you stop drawing when DetachedEvent shows up.
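For the Windows case, a rough sketch from memory (the event-map macros come from the FireBreath plugin template; GdiplusStartup() must have been called once at startup, and the image path is a placeholder):

    // Rough sketch of the FireBreath 1.x pattern on Windows.
    BEGIN_PLUGIN_EVENT_MAP()
        EVENTTYPE_CASE(FB::RefreshEvent, onRefresh, FB::PluginWindowWin)
        EVENTTYPE_CASE(FB::AttachedEvent, onWindowAttached, FB::PluginWindow)
        EVENTTYPE_CASE(FB::DetachedEvent, onWindowDetached, FB::PluginWindow)
    END_PLUGIN_EVENT_MAP()

    // Repaint handler: draw the JPEG with GDI+ into the plugin's HWND.
    bool MyPlugin::onRefresh(FB::RefreshEvent *evt, FB::PluginWindowWin *wnd)
    {
        HWND hwnd = wnd->getHWND();            // native window the browser gave us
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        {
            Gdiplus::Graphics g(hdc);
            Gdiplus::Image img(L"image.jpg");  // placeholder path
            g.DrawImage(&img, 0, 0,
                        wnd->getWindowWidth(), wnd->getWindowHeight());
        }
        EndPaint(hwnd, &ps);
        return true;                           // event handled
    }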
For more information, you'll need to be a lot more specific, but follow those links and do some reading; then you'll know better what questions to ask.
FireBreath 1.5.2 was just released, btw! Good luck!
You can also use OpenGL to display images in the plugin. There are several tutorials on loading a JPEG image into OpenGL as a texture, and the same code can be ported into a FireBreath plugin using the OpenGL sample plugin for Windows that already ships with it. Note that OpenGL context creation varies from one OS to another. If you want to load JPEG images from the web, you'll have to download the image before converting it into an OpenGL texture.
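Once the JPEG is decoded to raw RGB (with libjpeg, stb_image, or similar), the upload itself is only a few GL calls, for example:

    // `pixels` is assumed to be width*height*3 tightly packed RGB bytes,
    // e.g. the output of libjpeg or stb_image.
    #include <windows.h>
    #include <GL/gl.h>

    GLuint uploadJpegTexture(const unsigned char *pixels, int width, int height) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // RGB rows are not 4-byte aligned
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }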