I am trying to build a desktop application using the Electron framework that can output the content of a particular <div> element to a DirectShow-based virtual camera.
With the help of the well-known Vivek's virtual camera and the Sample Push Source Filter, I was able to create a virtual camera that renders the desktop.
Now I want to use this virtual camera with the Electron app to output the content of the <div>, but I cannot figure out what the approach should be. I guess I have to develop a DirectShow capture filter that interacts with the virtual camera, but I am not sure.
What should the approach be to render the <div> content to the virtual camera?
https://learn.microsoft.com/en-us/windows/win32/directshow/step-2--declare-cvideorenderer-and-derived-classes
It appears you'd need to get the window handle (HWND) for the Electron window and write to a RECT within that window, similar to how you would with a DirectX swap chain. I'm not sure there's a way to render a specific element within the window, but you may be able to expose some variable that your C++ component could read to determine the viewport.
Context
I am trying to build an image-filter application: the application grabs frames from the user's selected camera, applies some filters to them, creates a virtual camera device, and sends the filtered frames to that virtual camera. I have succeeded at all of this, except that I need to hide the actual camera device, because it is in use by my application, and other applications (say, Zoom or Meet) should see my virtual camera instead of the actual device.
I was able to create a virtual camera and send frames to it with the help of obs-virtual-cam's obs-virtualsource.dll.
Desired Outcome
I think I need to create some kind of wrapper around Microsoft's device-enumeration DLL. Once my wrapper is registered, it would modify the list of devices returned by the system to applications. The settings could be saved in the registry and read in the context of other processes.
Answer I want
I am proficient in C/C++ but a newbie with COM and the Microsoft Media Foundation API. So even if the problem cannot be solved right here in the answer, I welcome any link or guidance that gets me started in the right direction on this specific problem.
The Microsoft Media Foundation API does not offer you a means to hide cameras from applications: neither for applications that use Media Foundation to access the cameras, nor for applications that access cameras without Media Foundation.
I am working on a GUI application that runs on Red Hat 7.2 with Qt 5.6. Originally the graphics were all rendered using native Qt functions; however, the video display and other widgets show tearing.
So I went about rewriting all the graphics, replacing the QWidget-based classes with classes derived from QOpenGLWidget and QOpenGLFunctions.
The tearing is still present. I've read online that calling:
window.setAnimating(true);
Use OpenGLWindow::setAnimating(true) for render() to be called at the vertical refresh rate, assuming vertical sync is enabled in the underlying OpenGL drivers.
Taken from: OpenGL Window Example
However, I can't find an equivalent method for QOpenGLWidget. Is there an equivalent? How do I ensure that QOpenGLWidgets are only rendered with vsync?
I have an OpenGL (Java) application deployed on a Linux machine. I need to start this application from a Windows machine, and display and operate it on Windows.
My application shows a full screen of 2D graphics that is constantly updated. Both machines are OpenGL-capable (video cards and drivers). When not remote, the application runs fine on both the Linux and the Windows machine.
I'm using SSH (PuTTY) and an X server (Xming 6.9) on Windows. All is fine, but when I display my graphics I get an "unsupported colour depth" error and nothing is displayed.
Questions:
1. Is there a way I can tweak Xming to display my graphics?
2. Is there a better-suited solution for my system?
I have a C++ application that uses the Win32 API on Windows, and I'm seeing GDI+ dithering when I don't see why there should be any.
I have a custom control (custom window). When I receive the WM_PAINT message, I draw some polygons using FillPolygon on a Graphics object. This Graphics object was created from the HDC returned by BeginPaint.
When the polygons appear on the screen, though, they are dithered instead of transparent, and only seem to use a few colors (maybe 256?). When I do the same thing in C# through the .NET wrapper over GDI+, it works fine, which leaves me wondering what's going on.
I'm not doing anything special, this is a simple example that should work fine, as far as I know. Am I doing something wrong?
Edit: Never mind. It only happens over Remote Desktop, even though the C# example doesn't dither over Remote Desktop. Remote Desktop is set to 32-bit color, so I don't know what's going on there.
Hmm... the filling capabilities are determined by the target device. When working over Remote Desktop, AFAIK Windows substitutes the display driver, and that can change the supported features of the display.
When drawing in WM_PAINT you actually draw directly onto the screen surface, while .NET usually uses double buffering (it draws to an in-memory bitmap and then blits the entire bitmap).
There are also some settings in GDI+ that affect drawing quality; maybe the defaults differ for on-screen, off-screen, and remote painting?
It only happens over Remote Desktop
Many remoting applications reduce colour depth in order to cut bandwidth requirements. While I haven't used Remote Desktop, the same happens on certain VNC connections. I'd check your RD server and client settings.