OpenGL - Display a video stream of the desktop on Windows - C++

So I am trying to figure out how to get a video feed (or a screenshot feed if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolkit to make what is essentially a virtual screen. The only issue is that I have tried manually grabbing the pixels in OpenGL, but I have been unable to display them properly in a 3D environment.
I apologize in advance for not having a minimal runnable example, but with all the dependencies involved, getting an ARToolkit sample running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.

There are 2 different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation you basically need a texture that gets updated on each frame. I recall that ARToolkit for Unity has an example that plays video.
However, streaming the desktop to another device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me that what you want to do is make a VLC viewer and put that into an augmentation. If I am correct, I suggest you start by looking at existing open source VLC viewers.
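If you do want the manual route from the question (desktop pixels straight into an OpenGL texture), the Windows-side part could look roughly like the sketch below. It assumes an OpenGL context is already current (for example the one ARToolkit renders into); names such as captureDesktopToTexture and g_tex are placeholders, and error handling is omitted.

// Grab the desktop with GDI and upload it into an OpenGL texture once per frame.
#include <windows.h>
#include <GL/gl.h>
#include <vector>
#pragma comment(lib, "opengl32.lib")

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1
#endif

GLuint g_tex = 0;   // texture mapped onto the "virtual screen" quad

void captureDesktopToTexture(int width, int height)
{
    HDC screenDC = GetDC(NULL);                    // DC for the whole desktop
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);

    HGDIOBJ old = SelectObject(memDC, bmp);
    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);                      // deselect before GetDIBits

    // Pull the pixels out as 32-bit BGRA, top-down rows.
    BITMAPINFO bi = {};
    bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth       = width;
    bi.bmiHeader.biHeight      = -height;          // negative height => top-down
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    std::vector<unsigned char> pixels(width * height * 4);
    GetDIBits(memDC, bmp, 0, height, pixels.data(), &bi, DIB_RGB_COLORS);

    if (g_tex == 0) {
        glGenTextures(1, &g_tex);
        glBindTexture(GL_TEXTURE_2D, g_tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());
    } else {
        glBindTexture(GL_TEXTURE_2D, g_tex);       // reuse and just update it
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());
    }

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
}

Call this once per rendered frame and draw a textured quad with g_tex inside the augmentation. GDI capture like this is simple but not the fastest option; the DXGI Desktop Duplication API is the higher-performance alternative on Windows 8 and later.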

Related

Recording desktop using Qt multimedia

I'm trying to record microphones, cameras and monitors using Qt's multimedia module, but I couldn't find anything about screen recording in the documentation, so it seems I have to do it myself.
I've searched about that, found that answer:
How to display desktop in windows form using Qt?
It seems I would have to use FFmpeg to merge the captured screenshots into an mp4 file, which does not look like an efficient approach. I want to show the real-time recording output with QVideoWidget. I was going to use QMediaCaptureSession and QMediaRecorder, but they only work with cameras. Is there any way to record the desktop screen with Qt's multimedia module?
P.S.: Please correct me if my question is wrong.
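For reference, the screenshot-polling approach mentioned above would look roughly like this with QScreen::grabWindow. It is only a sketch: the widget is a plain QLabel preview, and the encoding step that would hand frames to FFmpeg is not shown.

// Poll the primary screen and show the frames as a live preview.
#include <QApplication>
#include <QLabel>
#include <QPixmap>
#include <QScreen>
#include <QTimer>

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QLabel view;                          // simple live preview widget
    view.resize(960, 540);
    view.show();

    QScreen* screen = QGuiApplication::primaryScreen();

    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, [&]() {
        QPixmap frame = screen->grabWindow(0);   // 0 = the whole screen
        view.setPixmap(frame.scaled(view.size(), Qt::KeepAspectRatio));
        // A recorder would pass frame.toImage() to an encoder here.
    });
    timer.start(33);                      // ~30 fps polling

    return app.exec();
}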

What's the best approach to design this simple ReactNative AR app?

I'm trying to write a simple AR app in ReactNative. It should detect 4 predefined markers and draw a rectangle as a boundary on the live camera preview. The catch is that I'm trying to do the processing in C++ using OpenCV, so that the logic of the app is in one place and accessible to both Android & iOS.
Here's what I've been thinking:
1. Write the OS-dependent code to open the camera and get permissions (Java/ObjC), plus the C++ part that does the processing on each frame.
2. Call the C++ code (from within the native code) on each frame; it should return, let's say, coordinates for the markers.
3. Draw the rectangle on the preview in native code if the 4 markers are found (no idea how to achieve this so far, but I think it will be native code).
4. Expose that preview (the live preview with the drawn overlay) to ReactNative (not sure about that or how to achieve it).
I've looked at the React Native camera component, but it doesn't provide access to frames, and even if that were possible, I'm not sure it would be a good idea to send frames over the bridge between JS & Java/ObjC.
The problem is that I'm not sure about the performance, or whether this is even possible.
If you know of any ReactNative library that does this, that would be great.
Your steps seem sound. After processing the frame in C++, you will need to set the application properties RCTRootView.appProperties in iOS, and emit an event using RCTDeviceEventEmitter on Android. So, you will need an Objective-C wrapper for your C++ code on iOS and a Java wrapper on Android. In either case, you should be able to use the same React Native code for actually drawing the rectangle on top of the camera preview. You're right that the React Native camera component does not have an API for getting individual frames from the camera, so you'll need to write that code natively for each platform.
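As a rough illustration of the shared C++ step, the frame-processing function that the Java/Objective-C wrappers would call could look like the sketch below. The use of OpenCV's ArUco contrib module and the name detectBoundaryMarkers are my assumptions; any marker detector that yields four points would slot into the same wrapper.

// Detect the four predefined markers in one camera frame and return their centres.
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <vector>

std::vector<cv::Point2f> detectBoundaryMarkers(const cv::Mat& frameBGR)
{
    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(frameBGR, dict, corners, ids);

    std::vector<cv::Point2f> centres;
    if (ids.size() < 4)
        return centres;                   // caller draws nothing this frame

    for (const auto& quad : corners) {
        cv::Point2f c(0.f, 0.f);
        for (const auto& p : quad) c += p;
        centres.push_back(c * 0.25f);     // average of the marker's 4 corners
    }
    return centres;
}

The wrapper would then forward those coordinates to React Native (appProperties on iOS, an emitted event on Android) so the JS side can position the rectangle.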

How to play HD videos on multiple monitors

I'm looking for a solution to play HD videos in a multi-monitor OS X environment for a projector/desktop application. It could be one huge video, or a video split into parts.
So far I've been using Flash StageVideo successfully to play 1080p and 720p on single monitors. This works great with Flash projectors. The problem with Flash projectors is that you can't span multiple monitors, or multiple windows. I still haven't tried opening multiple projectors, because I wouldn't know how to position each projector on a different monitor consistently.
In Adobe AIR you can have multiple windows and control their position, but AFAIK you can't use StageVideo to decode videos with the GPU... and using the classic Video class is really out of the question.
With C++ there are several frameworks (Cinder/openFrameworks), but AFAIK opening multiple windows or spanning multiple monitors is not such a good idea because of poor performance. I still haven't figured out whether it's possible, or even a good idea, to open one app per monitor and control its position.
Has anyone solved this problem using AS3, or any other language/framework like C++ with openFrameworks?
With AIR you can have a single window span multiple monitors.
I have a 1920x1080 video scaled by 2 on a 3840x2160 stage. I haven't yet tried using StageVideo to raise the resolution, but I am hopeful it will work.
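If you do end up trying the openFrameworks route mentioned in the question, a borderless window sized to cover two monitors is set up roughly as below. This is a sketch only, assuming openFrameworks 0.10+ with GLFW windowing; the file name and the 2 x 1920x1080 layout are placeholders, and whether playback stays smooth depends on the codec and GPU.

// One borderless window spanning two 1920x1080 displays, playing a single wide video.
#include "ofMain.h"

class WallApp : public ofBaseApp {
public:
    ofVideoPlayer player;

    void setup() override {
        player.load("wall_3840x1080.mp4");     // placeholder file name
        player.setLoopState(OF_LOOP_NORMAL);
        player.play();
    }
    void update() override { player.update(); }
    void draw() override   { player.draw(0, 0, ofGetWidth(), ofGetHeight()); }
};

int main()
{
    ofGLFWWindowSettings settings;
    settings.setSize(3840, 1080);              // span two 1920x1080 displays
    settings.setPosition(glm::vec2(0, 0));     // top-left of the leftmost monitor
    settings.decorated = false;                // borderless, so it sits flush
    ofCreateWindow(settings);
    ofRunApp(new WallApp());
}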

C++ DirectShow Video and Audio capture - beginning

I have finally managed to stop working with VFW after several problems I encountered during application development.
Thanks to StackOverflow, I am now aware that VFW is obsolete and wish to switch to DShow so that my application works on Vista/W7.
Unfortunately, the work was already done and the application shipped to the client, but as soon as we realized we had frame-rate troubles on Vista/W7, we decided to rewrite the video class and use DirectShow to build a solid audio/video capture engine for webcams.
This will be tricky, as we have never coded with DirectShow, and right now we are looking for a few specific examples of how to:
Connect to a selected webcamera
similar to: capDriverConnect
Set the camera resolution to 640x480 and RGB24 format (we need to convert RGB24 to YUV420 for each frame)
similar to: capSetVideoFormat / capCaptureSetSetup
Set audio capturing for this webcamera
similar to: capSetAudioFormat
Register two callbacks:
One for video frame ( we will pass frames to video encoder )
similar to: capSetCallbackOnVideoStream
One for wave buffer ( we will pass wave buffer to audio encoder )
similar to: capSetCallbackOnWaveStream
Be able to show a preview window somewhere on parent window
similar to: capPreview
Perform Start/Stop operation when needed
Start - would mean connect and start capturing audio/video frames
Stop - would mean disconnect and stop capturing audio/video frames
Perform drawing to the actual frame
similar to:
SetBitmapBits(CameraInput.GetFrameBitmap(), w * h * 3, vdhdr->lpData); // copy the captured frame into our bitmap
// draw something on the bitmap with GDI+
GetBitmapBits(CameraInput.GetFrameBitmap(), w * h * 3, vdhdr->lpData); // copy the annotated bits back into the frame buffer
All of the above was already done with VFW, but as I wrote before, we unfortunately need to switch to DirectShow.
Is there anyone who could help us put together a class that would rescue us from months of studying DirectShow?
Your best bet for examples will be the ones from Microsoft.
Your questions are still phrased in terms of VFW, so it's hard to answer them as written. For example, in DirectShow you wouldn't register a callback to encode a video frame. Instead, you'd develop an encoder filter that receives data from the capture source.
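For orientation only, the skeleton of a DirectShow capture graph that picks the first webcam and runs a preview looks roughly as follows. Error handling is omitted, and this is a sketch of the standard COM calls rather than a replacement for your VFW class.

// Build a capture graph, add the first video capture device, and run a preview.
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

void runFirstWebcamPreview()
{
    CoInitialize(NULL);

    IGraphBuilder* pGraph = NULL;
    ICaptureGraphBuilder2* pBuild = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void**)&pGraph);
    CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICaptureGraphBuilder2, (void**)&pBuild);
    pBuild->SetFiltergraph(pGraph);

    // Enumerate video capture devices (the rough analogue of capDriverConnect).
    ICreateDevEnum* pDevEnum = NULL;
    IEnumMoniker* pEnum = NULL;
    CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void**)&pDevEnum);
    pDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pEnum, 0);

    IMoniker* pMoniker = NULL;
    if (pEnum && pEnum->Next(1, &pMoniker, NULL) == S_OK)
    {
        IBaseFilter* pCap = NULL;
        pMoniker->BindToObject(NULL, NULL, IID_IBaseFilter, (void**)&pCap);
        pGraph->AddFilter(pCap, L"Capture");

        // Connect the preview pin to a default video renderer (capPreview equivalent).
        pBuild->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                             pCap, NULL, NULL);

        IMediaControl* pControl = NULL;
        pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
        pControl->Run();                   // Start; pControl->Stop() for the Stop case
    }
    // Release all interfaces and call CoUninitialize() on shutdown.
}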
As an alternative, if you're only targeting Vista and later, there is the Microsoft Media Foundation. I have no experience with it so I don't know how the learning curve compares to DirectShow.
I suggest you build a graph in GraphEdit using ffdshow filters.
GraphEdit demonstrates how a DirectShow graph is put together.
I don't think you need to build the filter class on your own. Once you've built the graph, you'll be able to watch the video in GraphEdit, and implementing that graph in code is a fairly simple task.
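To cover the 640x480 / RGB24 requirement from the question, the usual route is IAMStreamConfig on the capture pin. This is a sketch that assumes pBuild (the ICaptureGraphBuilder2) and pCap (the capture filter) already exist, as in the skeleton above; error handling is again left out.

// Ask the capture pin for its supported formats and pick 640x480 RGB24.
IAMStreamConfig* pConfig = NULL;
pBuild->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                      pCap, IID_IAMStreamConfig, (void**)&pConfig);

int count = 0, size = 0;
pConfig->GetNumberOfCapabilities(&count, &size);

for (int i = 0; i < count; ++i)
{
    VIDEO_STREAM_CONFIG_CAPS caps;
    AM_MEDIA_TYPE* pmt = NULL;
    if (pConfig->GetStreamCaps(i, &pmt, (BYTE*)&caps) != S_OK)
        continue;

    if (pmt->formattype == FORMAT_VideoInfo &&
        pmt->subtype == MEDIASUBTYPE_RGB24)
    {
        VIDEOINFOHEADER* vih = (VIDEOINFOHEADER*)pmt->pbFormat;
        if (vih->bmiHeader.biWidth == 640 && vih->bmiHeader.biHeight == 480)
            pConfig->SetFormat(pmt);       // camera now delivers 640x480 RGB24
    }

    // Free the media type (what the SDK samples' _DeleteMediaType helper does).
    if (pmt->cbFormat) CoTaskMemFree(pmt->pbFormat);
    if (pmt->pUnk) pmt->pUnk->Release();
    CoTaskMemFree(pmt);
}
pConfig->Release();

The RGB24-to-YUV420 conversion itself stays in your own code (or in a transform filter), exactly as it did with VFW.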

Firebreath placing JPEG image inside the plugin window

I'm writing a plugin in firebreath, C++.
I don't have any experience with both, so my question may be very basic.
How do I place a JPEG image inside my plugin window?
Or at least, how would I do it in a simple C++ program?
Thanks,
RRR
There are a couple of other questions that may help you better understand this:
How to write a web browser plugin for IE, Firefox and Chrome
Directx control in browser plugin
Basically you'll get a drawing model from FireBreath with the AttachedEvent. Depending on your platform, you will draw to that window using platform-specific drawing APIs. On Windows, for example, you would get the HWND from the PluginWindow (cast it to a PluginWindowWin) and then draw to that. Just make sure you stop drawing when DetachedEvent shows up.
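As a rough illustration of that flow on Windows, assuming GDI+ for decoding and drawing the JPEG: the plugin class name and the image path below are placeholders, GdiplusStartup is assumed to have been called at plugin load, and a real plugin would redraw on the window's RefreshEvent rather than only once at attach.

// Draw a JPEG into the plugin's HWND when the window is attached.
#include "Win/PluginWindowWin.h"   // FireBreath; the include path may differ per project setup
#include <windows.h>
#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")

bool MyPlugin::onWindowAttached(FB::AttachedEvent* evt, FB::PluginWindow* win)
{
    FB::PluginWindowWin* winWin = dynamic_cast<FB::PluginWindowWin*>(win);
    if (!winWin)
        return false;                       // not the Windows windowed drawing model

    HWND hwnd = winWin->getHWND();
    HDC hdc = GetDC(hwnd);
    {
        Gdiplus::Graphics graphics(hdc);
        Gdiplus::Image image(L"C:\\path\\to\\picture.jpg");   // placeholder path
        graphics.DrawImage(&image, 0, 0);   // draw at the window's top-left corner
    }
    ReleaseDC(hwnd, hdc);
    return true;                            // and stop drawing once DetachedEvent arrives
}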
For more information, you'll need to be a lot more specific; but follow those links and do some reading, then you'll know better what questions to ask.
FireBreath 1.5.2 was just released, btw! Good luck!
You can also use OpenGL to display images in the plugin. There are several tutorials on loading a JPEG image as an OpenGL texture, and the same code can be ported into the FireBreath plugin using the OpenGL sample plugin already provided for Windows, though OpenGL context creation varies from one OS to another. If you want to load JPEG images from the web, you'll have to download the image before converting it into an OpenGL texture.
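A sketch of the JPEG-to-texture step, using stb_image for decoding (the decoder choice is my assumption; any JPEG library works the same way) and assuming an OpenGL context is already current, as in the OpenGL sample plugin:

// Decode a JPEG file and upload it as an OpenGL texture.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#include <windows.h>
#include <GL/gl.h>

GLuint loadJpegTexture(const char* path)
{
    int w = 0, h = 0, channels = 0;
    unsigned char* pixels = stbi_load(path, &w, &h, &channels, 3);   // force RGB
    if (!pixels)
        return 0;                              // decode failed

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);     // rows are tightly packed RGB
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);

    stbi_image_free(pixels);
    return tex;                                // draw it on a quad inside the plugin
}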