Hide camera device name from Windows applications with MS Media Foundation? - c++

Context
I am trying to build an image filter application where the application gets the user's selected camera frames, applies some filters to the frames, creates a virtual camera device, and sends the filtered frames to that virtual camera. I have succeeded in all of this, except that I have to hide the actual camera device because it is being used by my application, and other applications (say Zoom/Meet) should see my virtual camera instead of the actual camera device.
I have managed to create a virtual camera and send frames to it with the help of obs-virtual-cam's obs-virtualsource.dll.
Desired Outcome
I need to create some kind of wrapper for Microsoft's device enumeration DLL. Once my wrapper is registered, it will modify the list of devices that the system returns to applications. The settings can be saved in the Registry and invoked in the context of other processes.
Answer I want
I am proficient in C/C++ but a newbie in COM and the MS Media Foundation API. So even if the problem cannot be solved right here in the answer, I welcome any link or guidance that points me in the right direction to solve this specific problem.

The Microsoft Media Foundation API does not offer you a means to hide cameras from applications: neither from applications that use Media Foundation to access the cameras, nor from applications that access cameras without Media Foundation.
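For context, here is roughly the enumeration path any interception approach would have to sit underneath: a minimal sketch (error handling trimmed) of how a Media Foundation application lists capture devices with MFEnumDeviceSources. A wrapper along the lines the question describes would have to filter the activation objects this call returns before the application sees them.

```cpp
#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <cstdio>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mf.lib")
#pragma comment(lib, "mfuuid.lib")
#pragma comment(lib, "ole32.lib")

// Minimal sketch: print the video capture devices an application would see.
int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    IMFAttributes* attrs = nullptr;
    MFCreateAttributes(&attrs, 1);
    attrs->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                   MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

    IMFActivate** devices = nullptr;
    UINT32 count = 0;
    if (SUCCEEDED(MFEnumDeviceSources(attrs, &devices, &count))) {
        for (UINT32 i = 0; i < count; ++i) {
            WCHAR* name = nullptr;
            UINT32 len = 0;
            if (SUCCEEDED(devices[i]->GetAllocatedString(
                    MF_DEVSOURCE_ATTRIBUTE_FRIENDLY_NAME, &name, &len))) {
                wprintf(L"%u: %s\n", i, name);  // the friendly name apps display
                CoTaskMemFree(name);
            }
            devices[i]->Release();
        }
        CoTaskMemFree(devices);
    }
    if (attrs) attrs->Release();
    MFShutdown();
    CoUninitialize();
    return 0;
}
```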

Related

What's the best approach to design this simple ReactNative AR app?

I'm trying to write a simple AR app in ReactNative. It should simply detect 4 predefined markers and draw a rectangle as a boundary on the live preview of the camera. The thing is, I'm trying to do the processing in C++ using OpenCV so as to have the logic of the app in one place, accessible to both Android & iOS.
Here's what I've been thinking:
Write the OS-dependent code to open the camera and get permissions in native code (Java/ObjC), plus the C++ part to do the processing on each frame.
Call the C++ code (from within the native code) on each frame; it should return, let's say, coordinates of the markers (a sketch of this per-frame function follows the answer below).
Draw the rect on the preview if 4 markers are found, in native code (no idea how to achieve this so far, but I think it will be native code).
Expose that preview (the live preview with the drawn overlay) to ReactNative (not sure about that or how to achieve it).
I've looked at the React Native camera component, but it doesn't provide access to frames. And even if that were possible, I'm not sure it would be a good idea to send frames over the bridge between JS & Java/ObjC.
The problem is that I'm not sure about the performance, or whether this is even possible.
If you know of any ReactNative library that would help, that would be great.
Your steps seem sound. After processing the frame in C++, you will need to set the application properties via RCTRootView.appProperties on iOS, and emit an event using RCTDeviceEventEmitter on Android. So you will need an Objective-C wrapper for your C++ code on iOS and a Java wrapper on Android. In either case, you should be able to use the same React Native code for actually drawing the rectangle on top of the camera preview. You're right that the React Native camera component does not have an API for getting individual frames from the camera, so you'll need to write that code natively for each platform.
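As an illustration of step 2 in the plan above, the per-frame C++ entry point might look like the following. This is only a sketch: it assumes ArUco markers and the opencv_contrib aruco module, and the function name, signature, and dictionary choice are illustrative, not anything prescribed by the question or answer.

```cpp
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>  // requires opencv_contrib

// Hypothetical per-frame entry point called from the Java/ObjC camera code.
// Takes a grayscale frame and writes the centers of up to 4 detected markers.
// Returns the number of markers found; the caller draws the boundary if it is 4.
extern "C" int detectMarkers(const unsigned char* data, int width, int height,
                             float* outXY /* 8 floats: x0,y0 .. x3,y3 */) {
    cv::Mat gray(height, width, CV_8UC1, const_cast<unsigned char*>(data));
    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(gray, dict, corners, ids);

    int n = std::min<int>(4, static_cast<int>(ids.size()));
    for (int i = 0; i < n; ++i) {
        cv::Point2f center(0.f, 0.f);
        for (const cv::Point2f& p : corners[i]) center += p;
        center *= 0.25f;                 // average of the 4 marker corners
        outXY[2 * i]     = center.x;
        outXY[2 * i + 1] = center.y;
    }
    return n;
}
```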

OpenGL - Display a video stream of the desktop on Windows

So I am trying to figure out how to get a video feed (or a screenshot feed if I must) of the desktop using OpenGL on Windows, and display it in a 3D environment. I plan to integrate this with ARToolkit to make essentially a virtual screen. The only issue is that I have tried manually getting the pixels in OpenGL, but I have been unable to properly display them in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but with all the dependencies involved, getting ARToolkit code running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are 2 different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation you basically need a texture that gets updated on each frame. I recall that ARToolkit for Unity has an example that plays video.
Streaming the desktop to another device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me that what you want to do is make a VLC viewer and put that into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.
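For the texture half on Windows, a minimal sketch of grabbing the primary desktop with GDI and uploading it into an existing OpenGL texture each frame could look like this (it assumes a current GL context and a texture already allocated at the desktop size; it is the simple route, not the fastest one):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <vector>
#pragma comment(lib, "opengl32.lib")

// Minimal sketch: copy the desktop into a DIB with BitBlt, then upload the
// pixels into an already-created OpenGL texture. Call once per frame.
void CaptureDesktopToTexture(GLuint tex, int w, int h) {
    HDC screen = GetDC(nullptr);
    HDC mem = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
    HGDIOBJ old = SelectObject(mem, bmp);
    BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);

    BITMAPINFO bi = {};
    bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth = w;
    bi.bmiHeader.biHeight = -h;            // negative height = top-down rows
    bi.bmiHeader.biPlanes = 1;
    bi.bmiHeader.biBitCount = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    std::vector<unsigned char> pixels(static_cast<size_t>(w) * h * 4);
    GetDIBits(mem, bmp, 0, h, pixels.data(), &bi, DIB_RGB_COLORS);

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());

    SelectObject(mem, old);
    DeleteObject(bmp);
    DeleteDC(mem);
    ReleaseDC(nullptr, screen);
}
```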

How to retrieve QML camera feed and send to C++ backend

I am trying different implementations to achieve further processing from the QML camera. I need to pass the feed on to the C++ end so it can be converted to a cv::Mat image and passed to a function for processing. I have tried setting up a QCamera on the C++ end and starting it on a button click, but it seems Qt cannot create two instances of the same camera. I have also tried an OpenCV method, but to no avail. What is the best way to do this?
This is not a fix but a workaround. I used an OpenCV plugin for the camera and made it visible to my QML using qmlRegisterType. I could then easily send the frames from the backend to my other class for processing. For anyone looking to do this, I used this plugin: https://github.com/rferrazz/CvCamView
Qt QML Camera to C++ QImage on Android
I just answered this question; I think you were having the same issue.
The basic idea here is to get the instance of the QML camera, access its QMediaObject pointer, and probe it with QVideoProbe. There are other solutions, but AFAIK they aren't really easy or fast on Android platforms; if that's not your case, you should probably try the QAbstractVideoFilter and QVideoFilterRunnable classes, which were developed specifically for post-processing the QML video feed.
Note that QAbstractVideoFilter and QVideoFilterRunnable are Qt 5.5 classes only.
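A minimal sketch of the QVideoProbe route (Qt 5 only; the function name and the assumption that you already hold the QML Camera item as a QObject are illustrative):

```cpp
#include <QCamera>
#include <QObject>
#include <QVariant>
#include <QVideoFrame>
#include <QVideoProbe>

// Minimal sketch (Qt 5): pull the QCamera out of a QML Camera item through its
// mediaObject property, then attach a QVideoProbe to receive every frame.
void attachProbe(QObject* qmlCameraItem, QObject* receiver) {
    QCamera* camera =
        qvariant_cast<QCamera*>(qmlCameraItem->property("mediaObject"));
    auto* probe = new QVideoProbe(receiver);
    if (camera && probe->setSource(camera)) {
        QObject::connect(probe, &QVideoProbe::videoFrameProbed,
                         [](const QVideoFrame& frame) {
            // map() the frame here and wrap its bits in a cv::Mat for processing
        });
    }
}
```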

Windows Media Foundation: use raw image to encode video

I'm working on a project that requires me to record the webcam, the microphone, and the screen. I have webcam recording working, audio is a work in progress, and I stumbled across a CMonitor wrapper (which I modified slightly) to grab RGB images of the desktop on a specified monitor (if there are multiple monitors).
How do I go about pushing my raw RGB frames into Windows Media Foundation to encode them into a video file? My current video encoding uses a slightly modified version of this MSDN sample, if that's easier to modify than writing a new class handler.
Or perhaps there is some Media Foundation route to recording the screen that I don't know of (which is possible; I'm not that great a Win32 programmer)?
Found PushSource in the Windows SDK samples, which does this.
Check the Desktop Duplication API for capturing the desktop. Media Foundation provides two solutions for encoding: the MF Sink Writer for simple encoding, and the Media Session for more flexible control of the media pipeline. Read this overview page first.
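For the Sink Writer route, pushing one raw RGB32 frame looks roughly like this. A sketch only: it assumes the IMFSinkWriter and its video stream are already configured with an RGB32 input media type, and the function name and parameters are illustrative.

```cpp
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

// Minimal sketch: wrap one raw RGB32 frame in an IMFSample and hand it to an
// already-configured Sink Writer. Timestamps/durations are in 100-ns units.
HRESULT WriteRgbFrame(IMFSinkWriter* writer, DWORD stream, const BYTE* rgb,
                      UINT32 width, UINT32 height,
                      LONGLONG timestamp, LONGLONG duration) {
    const LONG stride = 4 * width;           // 4 bytes per RGB32 pixel
    const DWORD cbBuffer = stride * height;

    IMFMediaBuffer* buffer = nullptr;
    HRESULT hr = MFCreateMemoryBuffer(cbBuffer, &buffer);

    BYTE* data = nullptr;
    if (SUCCEEDED(hr)) hr = buffer->Lock(&data, nullptr, nullptr);
    if (SUCCEEDED(hr)) {
        hr = MFCopyImage(data, stride, rgb, stride, stride, height);
        buffer->Unlock();
    }
    if (SUCCEEDED(hr)) hr = buffer->SetCurrentLength(cbBuffer);

    IMFSample* sample = nullptr;
    if (SUCCEEDED(hr)) hr = MFCreateSample(&sample);
    if (SUCCEEDED(hr)) hr = sample->AddBuffer(buffer);
    if (SUCCEEDED(hr)) hr = sample->SetSampleTime(timestamp);
    if (SUCCEEDED(hr)) hr = sample->SetSampleDuration(duration);
    if (SUCCEEDED(hr)) hr = writer->WriteSample(stream, sample);

    if (sample) sample->Release();
    if (buffer) buffer->Release();
    return hr;
}
```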

Optical Mouse as encoder

Recently I discovered the beauty of the optical mouse as an incremental position encoder.
An optical mouse usually contains a single component in which a camera is linked to an image processor, which in turn is linked to a USB interface. The resolution depends on the camera resolution. On the internet it is easy to find the datasheets for this type of component, which also describe how to read from and write to it.
The problem I first need to solve is how to make sure that an encoder mouse is not seen by the laptop/PC as a pointing device, without disabling the USB port to which it is connected. I need to use 2 encoders, which means 3 USB ports will be in use on my PC (running Windows XP): one for the mouse as a pointing device and two for the mice used as encoders.
A second question is how to read/write instructions/data from/to the encoder mouse over a USB port. Could someone send me a link to a tutorial/example in C++?
Thanks very much in advance,
Stefan
The USB mouse microcontroller is probably hardcoded to send USB frames identifying itself as a HID device. In that case there's little hope you can prevent Windows from using it as a mouse. After all, this IS a mouse.
If you are into DIY, you could try to hack the mouse board by desoldering components/wires and directly controlling the sensor with an Arduino. That way, the Arduino could read the data from the sensor and send it to the PC over its own USB serial port.
See an example there:
http://www.martijnthe.nl/2009/07/interfacing-an-optical-mouse-sensor-to-your-arduino/
For more info on HID device: http://en.wikipedia.org/wiki/USB_human_interface_device_class
Excerpt:
"There are two levels of APIs related to USB HID: the USB level and the operating system level. At the USB level, there is a protocol for devices to announce their capabilities and the operating system to parse the data it gets. The operating system then offers a higher-level view to applications, which do not need to include support for individual devices but for classes of devices. This abstraction layer allows a game to work with any USB controller, for example, even ones created after the game."
Take a look at the Raw Input API to see if you can pick up the events that way and block Windows from acting on them.
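A minimal sketch of that Raw Input approach; registering lets you tell the physical mice apart by device handle in WM_INPUT (adding RIDEV_NOLEGACY would additionally suppress the legacy mouse messages, but only for the registering application, not system-wide):

```cpp
#include <windows.h>
#include <cstdio>
#include <vector>

// Minimal sketch: register for raw mouse input so each physical mouse is
// reported separately. "hwnd" is an existing window that owns the messages.
void RegisterForRawMouse(HWND hwnd) {
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;        // HID usage page: generic desktop
    rid.usUsage = 0x02;            // HID usage: mouse
    rid.dwFlags = RIDEV_INPUTSINK; // deliver input even when not in focus
    rid.hwndTarget = hwnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// Handle WM_INPUT in the window procedure: each event carries the handle of
// the device that produced it, plus the relative motion deltas.
LRESULT OnWmInput(LPARAM lParam) {
    UINT size = 0;
    GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                    nullptr, &size, sizeof(RAWINPUTHEADER));
    std::vector<BYTE> buf(size);
    if (GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                        buf.data(), &size, sizeof(RAWINPUTHEADER)) == size) {
        const RAWINPUT* ri = reinterpret_cast<const RAWINPUT*>(buf.data());
        if (ri->header.dwType == RIM_TYPEMOUSE) {
            // hDevice distinguishes the "encoder" mice from the pointing mouse
            printf("device %p dx=%ld dy=%ld\n", ri->header.hDevice,
                   ri->data.mouse.lLastX, ri->data.mouse.lLastY);
        }
    }
    return 0;
}
```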