How to render to a second screen without being the DRM master? - opengl

I have an embedded process that renders to a screen directly using DRM & KMS APIs. It's running on a minimal Yocto distribution (no desktop or Wayland).
I would like to render, from another process, to a second screen that is attached to the same GPU. The first process opens '/dev/dri/card0', becomes the de facto DRM master, and can call drmModeSetCrtc & drmModePageFlip on the primary screen to display its framebuffer. However, if it calls drmDropMaster it can no longer perform the page flip, and as long as it keeps mastership the second process cannot become the DRM master and drive the other display using the same technique.
There's plenty of examples on how to render to one screen using the Direct Rendering Manager (DRM) and Kernel Mode Setting (KMS), but I found none that can render to a second screen from another process.
Ideally I would like to avoid needing a master at all once the display mode is set, but the page flip is also a restricted API. If that cannot be achieved, could someone give an example of how to grant the second process permission using drmAuthMagic?

It isn't possible to do a page flip without being the DRM master. The IOCTL is protected in drm_ioctl.c:
DRM_IOCTL_DEF(DRM_IOCTL_MODE_PAGE_FLIP, drm_mode_page_flip_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED)
DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
So I decided to put the flip into a critical section in which the application calls drmSetMaster, schedules the flip, and then calls drmDropMaster. It's heavy-handed and both processes need to run as root, but it works well enough for an embedded platform. The process does have to authorize itself using drmGetMagic and drmAuthMagic so that it can keep rendering while it isn't the master and can grab mastership again; I do this when it first becomes master and performs the mode set.
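A minimal sketch of that approach (names such as crtc_id and fb_id are illustrative, error handling and the event loop that consumes the flip completion are omitted):

#include <xf86drm.h>
#include <xf86drmMode.h>
#include <cstdint>

// Done once, while this process is still master right after drmModeSetCrtc():
// self-authorize so rendering keeps working after mastership is dropped.
void authorize_self(int fd)
{
    drm_magic_t magic;
    if (drmGetMagic(fd, &magic) == 0)
        drmAuthMagic(fd, magic);
}

// Critical section around the flip: only one process may be master at a time,
// so keep the window as short as possible.
bool flip_once(int fd, uint32_t crtc_id, uint32_t fb_id, void *user_data)
{
    if (drmSetMaster(fd) != 0)
        return false;                        // the other process still holds master

    int ret = drmModePageFlip(fd, crtc_id, fb_id,
                              DRM_MODE_PAGE_FLIP_EVENT, user_data);

    drmDropMaster(fd);                       // release mastership before waiting
    return ret == 0;                         // completion arrives via drmHandleEvent()
}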

Related

How to get the next frame presentation time in Vulkan

Is there a way to get an estimated (or exact) timestamp when the submitted frame will be presented on screen?
I'm interested in WSI windowed presentation as well as fullscreen on Windows and Linux.
UPD: One possible way on Windows is IDCompositionDevice::GetFrameStatistics (msdn), which is used for DirectComposition and DirectManipulation, but I'm not sure whether it is applicable to Vulkan WSI presentation.
The VK_GOOGLE_display_timing extension exposes the timings of past presents and allows supplying a timing hint for a subsequent present, but the extension is supported only on some Android devices.
VK_EXT_display_control provides a vsync counter and a fence signal when vblank starts, but it only works with a VkDisplayKHR-type swapchain and has only limited support on Linux.
The corresponding issue has been raised as Vulkan-Docs#370. Unfortunately, it is taking its time to be resolved.
I don't think you can get the exact presentation time (which would be tricky in any case, since monitors have some internal latency). I think you can get close, though: the docs for vkAcquireNextImageKHR say you can pass a fence that gets signaled when the driver is done with the image, which should be close to the time it gets sent off to the display. If you're using VK_PRESENT_MODE_FIFO_KHR you can then use the refresh rate to work out when later images in the queue get presented.
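A rough sketch of that idea (illustrative only; device, swapchain, and acquireSemaphore are assumed to already exist, the swapchain is assumed to use VK_PRESENT_MODE_FIFO_KHR, and error checks are omitted):

#include <vulkan/vulkan.h>
#include <chrono>

uint32_t acquire_with_timestamp(VkDevice device, VkSwapchainKHR swapchain,
                                VkSemaphore acquireSemaphore,
                                std::chrono::steady_clock::time_point *readyTime)
{
    VkFenceCreateInfo fenceInfo{};
    fenceInfo.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;

    VkFence fence = VK_NULL_HANDLE;
    vkCreateFence(device, &fenceInfo, nullptr, &fence);

    uint32_t imageIndex = 0;
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                          acquireSemaphore, fence, &imageIndex);

    // The fence signals when the presentation engine has released this image,
    // which (per the reasoning above) should be close to the time the
    // previously queued image was sent to the display.
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
    *readyTime = std::chrono::steady_clock::now();

    vkDestroyFence(device, fence, nullptr);
    return imageIndex;
}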

Which trigger should I use for UWP app to be persistently in the background

I am trying to make a simple app that works with the Corsair SDK to change the colors of my keyboard programmatically. I've already developed one using the straight Win32 API, which uses a notification icon to keep the process alive and let me stop it.
I'm now trying to make a UWP equivalent of the application, also in C++. What I am looking for is the appropriate trigger so that it runs in the background as soon as it is installed, much like the "Mail" app, which runs in the background and can create notifications (which I won't be doing yet).
I would also like it to have no forms at all (or at most be integrated with Settings).
For now, however, I would just like to figure out which trigger I should use.
I was thinking of SystemTrigger::userPresent and SystemTrigger::userNotPresent (to show different lighting effects on my keyboard depending on whether the user is present or not).
The only difference between the two modes is that when the user is present, I will read the keyboard state and some user-specific settings.
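A minimal sketch (C++/WinRT) of registering background tasks for those two triggers; note the actual enum values are UserPresent and UserAway, and the entry-point name below is a hypothetical placeholder:

#include <winrt/Windows.ApplicationModel.Background.h>

using namespace winrt::Windows::ApplicationModel::Background;

void RegisterPresenceTasks()
{
    auto registerTask = [](winrt::hstring const& name, SystemTriggerType type)
    {
        BackgroundTaskBuilder builder;
        builder.Name(name);
        builder.TaskEntryPoint(L"Tasks.KeyboardLightingTask"); // hypothetical WinRT component
        builder.SetTrigger(SystemTrigger(type, /* oneShot */ false));
        builder.Register();
    };

    // Note: BackgroundExecutionManager::RequestAccessAsync() should normally
    // be awaited once before registering any tasks.
    registerTask(L"UserPresentTask", SystemTriggerType::UserPresent);
    registerTask(L"UserNotPresentTask", SystemTriggerType::UserAway);
}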

C++ Code for Screen Capturing

I need to write code that does screen sharing, like WebEx or TeamViewer, for a Windows PC. The constraint is that I don't have admin access and I cannot install any application or software for this. I know the techniques below, but none of them works for me. I have tried all of the samples from this CodeProject article: http://www.codeproject.com/Articles/5051/Various-methods-for-capturing-the-screen
(1) GetDC(NULL) and BitBlt with SRCCOPY <= This does not capture transparent windows and it causes GDI to hang (just try drawing in Paint: your pencil gets stuck for a while when the BitBlt operation is performed).
(2) GetDC(NULL) and BitBlt with SRCCOPY and the CAPTUREBLT option <= This hides the cursor when I call BitBlt, and GDI also hangs when the BitBlt operation is performed.
(3) I also tried DirectX using GetFrontBufferData. This causes my transparent window to flicker.
(4) I tried the Windows Media API, but this requires Windows Media Encoder to be installed.
(5) I also tried a mirror driver, but this requires a driver to be installed with admin access.
Can anyone please suggest an API with which I can capture the entire screen without installing anything and without flicker or GDI hangs?
Thanks in advance.
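For reference, a minimal sketch of the GDI capture path described in (1) and (2) above (primary monitor only, error handling omitted):

#include <windows.h>
#include <vector>

std::vector<BYTE> CaptureScreen(int &width, int &height)
{
    HDC screenDC = GetDC(nullptr);                     // DC for the primary screen
    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);

    HDC memDC = CreateCompatibleDC(screenDC);
    HBITMAP bmp = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old = SelectObject(memDC, bmp);

    // CAPTUREBLT includes layered (transparent) windows but hides the cursor
    // and can stall GDI, as noted in (2).
    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);
    SelectObject(memDC, old);                          // deselect before GetDIBits

    BITMAPINFO bmi{};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;             // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
    GetDIBits(memDC, bmp, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
    return pixels;
}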
The problem is that whatever method you use, you have to hook into the system (intercept some OS-to-driver call) so that the system gives you the time to do your operation safely, and that requires the software to run with administrative rights.
All of the above methods fail because some internal call fails due to insufficient privileges.
If you think about it, if an exe running at user level could capture and share the screen even for non-system-level users, the system would have a serious security hole: I would just have to deliver an application that shares your screen without you noticing.
So, instead of trying to get around your company's security policies, just ask your admins: if you need this software for business purposes, they will do what is needed.

OpenGL and MultiGPU

We are trying to set up a server with multiple Tesla M2050s to run OpenGL.
The current setup is as follows: Ubuntu 12.04 with NVIDIA drivers. We have set up xorg.conf with separate devices identified by bus ID.
We have tied one X server to each display, which in turn is tied to each device, and our code is attached to each of these X servers. But somehow only one X session seems to work properly. The other one produces garbled output, and while watching it with nvidia-smi we notice that when the garbled output is being produced the GPUs are not being used at all.
Could someone verify that our setup seems reasonable? The other thing we noticed is that it is only the first X server that was started that has the issue.
EDIT: This is in headless mode.
A problem with multiple X servers is that each server may grab the active VT and hence disable the other X server's rendering output. This can be avoided, but I think that in your situation good old "Zaphod mode" would suit your needs far better:
Zaphod mode is a single X server controlling multiple Devices, each with its own Monitor forming a Screen, joined in a single screen layout. This is not TwinView or Xinerama! In Zaphod mode you cannot move windows between Screens, i.e. each Screen acts on its own.
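A hypothetical Zaphod-style xorg.conf fragment for two separate GPUs (identifiers and BusIDs are placeholders; adjust to the values reported by lspci):

Section "Device"
    Identifier "Tesla0"
    Driver     "nvidia"
    BusID      "PCI:3:0:0"
EndSection

Section "Device"
    Identifier "Tesla1"
    Driver     "nvidia"
    BusID      "PCI:4:0:0"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Tesla0"
EndSection

Section "Screen"
    Identifier "Screen1"
    Device     "Tesla1"
EndSection

Section "ServerLayout"
    Identifier "ZaphodLayout"
    Screen 0 "Screen0"
    Screen 1 "Screen1" RightOf "Screen0"
EndSection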

Off screen rendering when laptop shuts screen down?

I have a lengthy number-crunching process which takes advantage of quite a bit of OpenGL off-screen rendering. It all works well, but when I leave it to work on its own while I go make a sandwich, I usually find that it has crashed while I was away.
I was able to determine that the crash occurs very close to the moment the laptop I'm using decides to turn off the screen to conserve energy. The crash itself is deep inside the NVIDIA DLLs, so there is no hope of knowing what's going on.
The obvious solution is to turn off the power-management feature that shuts down the screen and video card, but I'm looking for something more user-friendly.
Is there a way to do this programmatically?
I know there's a SETI@home implementation which takes advantage of GPU processing. How does it keep the video card from going to sleep?
I'm not sure what OS you're on, but Windows sends a message when it is about to enter a new power state. You can listen for that and then either move processing to the CPU or deny the request to enter the lower-power state.
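A sketch of listening for those broadcasts, assuming this is the notification meant above (the ability to veto the transition via PBT_APMQUERYSUSPEND only exists on pre-Vista Windows; display-off events specifically can be received by registering GUID_CONSOLE_DISPLAY_STATE with RegisterPowerSettingNotification):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_POWERBROADCAST)
    {
        switch (wParam)
        {
        case PBT_APMSUSPEND:            // system is about to enter a low-power state
            // pause_gpu_work();        // hypothetical hook: fall back to the CPU
            break;
        case PBT_APMRESUMEAUTOMATIC:    // system has returned to the working state
            // resume_gpu_work();       // hypothetical hook
            break;
        }
        return TRUE;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}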
For the benefit of Linux users encountering a similar issue, I thought I'd add that you can obtain similar notifications and inhibit power-state changes using the DBus API. An example script in Python, taken from the link, that inhibits power-state changes:
#!/usr/bin/python
# Ask the org.freedesktop.PowerManagement service (on the session bus) to
# inhibit power management for 10 seconds, then release the inhibit again.
import dbus
import time

bus = dbus.Bus(dbus.Bus.TYPE_SESSION)
devobj = bus.get_object('org.freedesktop.PowerManagement',
                        '/org/freedesktop/PowerManagement')
dev = dbus.Interface(devobj, "org.freedesktop.PowerManagement.Inhibit")

# Inhibit() returns a cookie identifying this request; pass it to UnInhibit()
# to allow power-state changes again.
cookie = dev.Inhibit('Nautilus', 'Copying files from /media/SANVOL')
time.sleep(10)
dev.UnInhibit(cookie)
According to MSDN, there is an API that allows an application to tell Windows that it is still working and that Windows should not go to sleep or turn off the display.
The function is called SetThreadExecutionState (MSDN). It works for me, using the flags ES_SYSTEM_REQUIRED and ES_CONTINUOUS.
Note, however, that using this function does not stop the screen saver from running, which might interfere with your OpenGL app if the screen saver also uses OpenGL (or Direct3D).
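A minimal usage sketch with the flags mentioned above (ES_DISPLAY_REQUIRED added only if the display itself must stay on):

#include <windows.h>

void begin_long_gpu_job()
{
    // Keep the system (and optionally the display) awake until cleared.
    SetThreadExecutionState(ES_CONTINUOUS | ES_SYSTEM_REQUIRED | ES_DISPLAY_REQUIRED);
}

void end_long_gpu_job()
{
    // Restore normal power-management behavior.
    SetThreadExecutionState(ES_CONTINUOUS);
}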