Change HDMI color space using X11 API - c++

The nvidia-settings tool offers the possibility to change the HDMI color space to RGB or YCbCr444, as shown in this picture.
I wonder if there is a way to do the same using the X11 API (i.e., modifying the color space of the HDMI output/screen)?

Modifying your Screen Colorspace
1. Programmatically
Xlib doesn't provide a way to modify the screen hardware's color space.
So you have to use your vendor-specific API, if one exists.
Luckily, NVIDIA provides an API named NVAPI.
It was designed for Windows, but for some years now it has been available on Linux too:
nvapi-open-source-sdk
NVIDIA also provides a programmer's guide (a bit outdated):
PG-5116-001_v02_public.pdf
which is more explicit than the one you'll get from the SDK.
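For illustration only, here is a rough, untested sketch of what switching an output to YCbCr 4:4:4 could look like through NVAPI. It assumes the Linux SDK drop exposes the same color-control entry points as the Windows headers (NvAPI_Disp_ColorControl and NV_COLOR_DATA), and it leaves out how the display id is enumerated:

#include <nvapi.h>
#include <cstdio>

// Sketch: ask the driver to output YCbCr 4:4:4 on the given display.
// displayId is the NVAPI id of the HDMI output, obtained elsewhere.
bool setYCbCr444(NvU32 displayId)
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;

    NV_COLOR_DATA color = {};
    color.version = NV_COLOR_DATA_VER;
    color.size    = sizeof(NV_COLOR_DATA);
    color.cmd     = NV_COLOR_CMD_SET;
    color.data.colorFormat = NV_COLOR_FORMAT_YUV444;   // NV_COLOR_FORMAT_RGB to switch back

    NvAPI_Status status = NvAPI_Disp_ColorControl(displayId, &color);
    if (status != NVAPI_OK)
        std::fprintf(stderr, "NvAPI_Disp_ColorControl failed (%d)\n", (int)status);
    return status == NVAPI_OK;
}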
2. Or manually at startup
2.1 Using xorg.conf config options
According to xconfigoptions from the NVIDIA documentation (see your nvidia package),
these options may be specified either in the Screen or Device sections of the X config file.
So you could have something like:
Section "Device"
Identifier "card0"
Driver "nvidia"
# ...
Option "ColorSpace" "YCbCr444"
# ...
EndSection
2.2 Using the nvidia-settings tool:
You can query your current settings:
$ nvidia-settings --query=CurrentColorSpace
which will give you the display number and its current value.
Then you can modify it from a shell script (e.g. .xinitrc) or from the command line:
$ nvidia-settings --assign=:0[dpy:2]/ColorSpace=0
where :0 is your X11 $DISPLAY and dpy:2 is the display you want to modify.

Related

How to give an option to select graphics adapter in a DirectX 11 application?

I think I know how it should work, only it does not. I have a Lenovo laptop with an 860M and an Intel integrated card.
I can launch my application from outside with either GPU, and everything works fine: the selected GPU will be the adapter with index 0, it has the laptop screen as output, etc.
However, if I try to use the adapter with index 1 (if I run the app normally, that is the NVIDIA; if I run it with the NVIDIA GPU, that is the Intel), IDXGIAdapter::EnumOutputs does not find anything, so I can't configure the display settings properly.
I was thinking about simply skipping the configuration, or using the output from the other adapter, but then there is no way to filter out adapters without a real output; e.g. my PC has an integrated card too, but it does not have a monitor physically connected, so using it should not be possible.
I also tried to find out what exactly the "Run with graphics processor" context menu button does, but I could not find anything.
The goal is to give the user the ability to select the adapter inside the application; his/her choice is saved to a config file and used after restart. But I can't find a way to filter the possible adapters.
You likely have a 'heterogeneous adapter' system (a.k.a. NVIDIA Optimus or AMD PowerXpress). These solutions have the driver manipulate the default adapter and the device enumeration to control which card is used. You really don't have any programmatic control over this, but you can inject something into your Win32 'classic' desktop application which will encourage the driver to select the discrete part:
// Indicates to hybrid graphics systems to prefer the discrete part by default
extern "C"
{
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}
UPDATE: With Windows 10 April 2018 Update (build 17134) or later, you can use the DXGI 1.6 method IDXGIFactory6::EnumAdapterByGpuPreference. See GitHub for some example usage.
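A minimal sketch of that approach (assuming Windows 10 1803+ and the dxgi1_6.h header) might look like this:

#include <dxgi1_6.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

// Returns the adapter the OS ranks first for high performance
// (usually the discrete GPU); null if DXGI 1.6 is unavailable.
ComPtr<IDXGIAdapter1> PickHighPerformanceAdapter()
{
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return nullptr;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapterByGpuPreference(
            0, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE, IID_PPV_ARGS(&adapter))))
        return nullptr;
    return adapter;   // pass this to D3D11CreateDevice
}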

OpenGL rendering in server [duplicate]

And if so why? What does X do for me beyond piping my rendering commands to the graphics card driver?
I'm not clear on the relationship X - OpenGL. I've searched the internet but couldn't find a concise answer.
If it matters, assuming a minimal modern distribution, like a headless Ubuntu 13 machine.
With the current drivers: Yes.
And if so why?
Because the X server is the host for the actual graphics driver talking to the GPU. At the moment, Linux GPU drivers require an X server that gives them an environment to live in and a channel to the kernel interfaces through which to talk to the GPU.
On the DRI/DRM/Gallium front, a new driver model has been created that allows using the GPU without an X server, for example through the EGL API. However, only a small range of GPUs is supported by this right now: most Intel and AMD ones, but no NVIDIA.
I'm not clear on the relationship X - OpenGL
I covered that in detail in the SO answers found at https://stackoverflow.com/a/7967211/524368 and https://stackoverflow.com/a/8777891/524368
In short, the X server acts as a "proxy" to the GPU. You send the X server commands like "open a window" or "draw a line there". There is an extension to the X protocol called "GLX", in which each OpenGL command gets translated into a stream of GLX/X opcodes and the X server executes those commands on the GPU on behalf of the calling client. Most OpenGL/GLX implementations also provide a mechanism to bypass the X server if the client process can talk directly to the GPU (because it runs on the same machine as the X server and has permission to access the kernel API); that is called Direct Rendering. It still requires the X server for opening the window, creating the context and general housekeeping.
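To see which path a context actually ended up on, a small self-contained check using standard GLX calls might look like this:

#include <GL/glx.h>
#include <X11/Xlib.h>
#include <cstdio>

int main()
{
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) { std::fprintf(stderr, "no X display\n"); return 1; }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { std::fprintf(stderr, "no matching visual\n"); return 1; }

    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);   // True = request direct rendering

    std::printf("direct rendering: %s\n", glXIsDirect(dpy, ctx) ? "yes" : "no");

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}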
Update due to comment
Also, if you can live without GPU acceleration, you can use Mesa3D in its OSMesa (off-screen Mesa) mode together with the LLVMpipe software rasterizer.
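A minimal OSMesa sketch (link against -lOSMesa; assumes the classic OSMesaCreateContext entry point), rendering into a plain memory buffer with no X server or GPU involved:

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>
#include <cstdio>

int main()
{
    const int w = 640, h = 480;
    std::vector<unsigned char> buf(w * h * 4);          // RGBA framebuffer in RAM

    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, nullptr);
    if (!ctx || !OSMesaMakeCurrent(ctx, buf.data(), GL_UNSIGNED_BYTE, w, h)) {
        std::fprintf(stderr, "OSMesa context creation failed\n");
        return 1;
    }

    glClearColor(0.2f, 0.4f, 0.6f, 1.0f);               // ordinary GL calls from here on
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();                                          // buf now holds the rendered pixels

    OSMesaDestroyContext(ctx);
    return 0;
}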
With Linux 3.12: Not any more.
Offscreen rendering is what DRM render nodes are for, according to the commit. See the developer's blog for a better explanation.
TLDR:
A render node (/dev/dri/renderD<num>) appears as a GPU with no screens attached.
As for how exactly one is supposed to make use of this, the (kernel) developer only offers very general advice for userspace infrastructure. Nevertheless, it is fair to assume the feature is nothing short of a show-enabler for Wayland and Mir, as clients won't be able to render on-screen any more.
The Wikipedia entry has some more pointers.
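As a rough sketch of the idea (assuming libgbm plus an EGL 1.5 implementation with GBM platform support; the node name renderD128 is just the usual first node, not guaranteed), bringing up EGL on a render node looks roughly like this:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <gbm.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/dri/renderD128", O_RDWR);        // GPU node with no screens attached
    if (fd < 0) { std::perror("open render node"); return 1; }

    gbm_device *gbm = gbm_create_device(fd);
    EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_GBM_KHR, gbm, nullptr);

    EGLint major, minor;
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor)) {
        std::fprintf(stderr, "EGL init on render node failed\n");
        return 1;
    }
    std::printf("EGL %d.%d on a render node, no X server involved\n", major, minor);
    // From here you would pick a config and create a surfaceless or gbm_surface-backed context.

    eglTerminate(dpy);
    gbm_device_destroy(gbm);
    close(fd);
    return 0;
}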

DirectX11 Desktop duplication not working with NVIDIA

I'm trying to use the DirectX desktop duplication API.
I tried running the examples from
http://www.codeproject.com/Tips/1116253/Desktop-Screen-Capture-on-Windows-via-Windows-Desk
And from
https://code.msdn.microsoft.com/windowsdesktop/Desktop-Duplication-Sample-da4c696a
Both of these are examples of screen capture using DXGI.
I have an NVIDIA GeForce GTX 1060 with Windows 10 Pro on the machine. It has an Intel Core i7-6700HQ processor.
These examples work perfectly fine when NVIDIA Control Panel > 3D Settings is set to auto-select the processor.
However, if I set it manually to the NVIDIA graphics card, the samples stop working.
The error occurs at the following line.
//IDXGIOutput1* DxgiOutput1
hr = DxgiOutput1->DuplicateOutput(m_Device, &m_DeskDupl);
Error in hr(HRESULT) is DXGI_ERROR_UNSUPPORTED 0x887A0004
I'm new to DirectX and I don't know the issue here; is DirectX desktop duplication not supported on NVIDIA?
If that's the case, is there a way to select a particular processor at the start of the program so that it can run with any settings?
Edit
After looking around I asked the developer (Evgeny Pereguda) of the second sample project on codeproject.com
Here's a link to the discussion
https://www.codeproject.com/Tips/1116253/Desktop-Screen-Capture-on-Windows-via-Windows-Desk?msg=5319978#xx5319978xx
Posting the screenshot of the discussion on codeproject.com in case original link goes down
I also found an answer on Stack Overflow which unequivocally suggested that it cannot be done with the Desktop Duplication API, referring to a support ticket on Microsoft's support site: https://support.microsoft.com/en-us/help/3019314/error-generated-when-desktop-duplication-api-capable-application-is-ru
Quote from the ticket
This issue occurs because the DDA does not support being run against
the discrete GPU on a Microsoft Hybrid system. By design, the call
fails together with error code DXGI_ERROR_UNSUPPORTED in such a
scenario.
However, there are some applications which efficiently duplicate the desktop on Windows in both modes (integrated graphics and discrete) on my machine. (https://www.youtube.com/watch?v=bjE6qXd6Itw)
I have looked into the installation folder of Virtual Desktop on my machine and can see the following DLLs of interest:
SharpDX.D3DCompiler.dll
SharpDX.Direct2D1.dll
SharpDX.Direct3D10.dll
SharpDX.Direct3D11.dll
SharpDX.Direct3D9.dll
SharpDX.dll
SharpDX.DXGI.dll
SharpDX.Mathematics.dll
It's probably an indication that this application uses DXGI to duplicate the desktop, or maybe the application is capable of selecting a specific processor before it starts.
Anyway, the question remains: is there any other efficient method of duplicating the desktop in both modes?
The likely cause is a certain internal limitation of the Desktop Duplication API, described in Error generated when Desktop Duplication API-capable application is run against discrete GPU:
... when the application tries to duplicate the desktop image against the discrete GPU on a Microsoft Hybrid system, the application may not run correctly, or it may generate one of the following errors:
Failed to create windows swapchain with 0x80070005
CDesktopCaptureDWM: IDXGIOutput1::DuplicateOutput failed: 0x887a0004
The article does not suggest any workaround except use of a different GPU (without more specific detail as to whether this is at all achievable programmatically):
To work around this issue, run the application on the integrated GPU instead of on the discrete GPU on a Microsoft Hybrid system.
Microsoft introduced a registry value that can be set programmatically to control which GPU an application runs on. Full answer here.
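As a hedged sketch of that route: the registry key path, the value layout, and the meaning of the numbers below (1 = power saving / integrated, 2 = high performance / discrete) are assumptions based on what the Windows 10 "Graphics settings" page writes, not something taken from the linked answer:

#include <windows.h>
#include <string>

// Write a per-application GPU preference for the current executable.
// For Desktop Duplication on a hybrid system you would likely want
// preference = 1 so the app stays on the integrated GPU that owns the display.
bool SetGpuPreferenceForThisExe(int preference)
{
    wchar_t exePath[MAX_PATH];
    if (!GetModuleFileNameW(nullptr, exePath, MAX_PATH))
        return false;

    std::wstring data = L"GpuPreference=" + std::to_wstring(preference) + L";";

    HKEY key = nullptr;
    if (RegCreateKeyExW(HKEY_CURRENT_USER,
                        L"Software\\Microsoft\\DirectX\\UserGpuPreferences",
                        0, nullptr, 0, KEY_SET_VALUE, nullptr, &key, nullptr) != ERROR_SUCCESS)
        return false;

    LONG rc = RegSetValueExW(key, exePath, 0, REG_SZ,
                             reinterpret_cast<const BYTE*>(data.c_str()),
                             static_cast<DWORD>((data.size() + 1) * sizeof(wchar_t)));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS;
}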

How to use DRI instead of glx? [duplicate]

I want to use OpenGL rendering without X. With Google I found this: http://dvdhrm.wordpress.com/2012/08/11/kmscon-linux-kmsdrm-based-virtual-console/ where it says it is possible. I should use DRM and EGL. EGL can create an OpenGL context but requires a NativeWindow. Will DRM provide me a NativeWindow? Should I use KMS? I know that I must have an open-source video driver. I want specifically an OpenGL context, not OpenGL ES (Linux). Maybe someone knows a tutorial or example code?
Yes, you need the KMS stack (example). Here is a simple example under Linux; it uses OpenGL ES, but the steps to make it work against the OpenGL API are simple.
In the EGL attribs, set EGL_RENDERABLE_TYPE to EGL_OPENGL_BIT,
and tell EGL which API to bind to:
eglBindAPI(EGL_OPENGL_API);
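Put together, a sketch of those two changes (assuming an already-initialised EGLDisplay named dpy, e.g. from the KMS/GBM setup in the linked example):

#include <EGL/egl.h>

EGLConfig chooseDesktopGLConfig(EGLDisplay dpy)
{
    const EGLint attribs[] = {
        EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,   // desktop OpenGL instead of GLES
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg = nullptr;
    EGLint n = 0;
    eglChooseConfig(dpy, attribs, &cfg, 1, &n);
    eglBindAPI(EGL_OPENGL_API);                // bind the desktop GL API before creating the context
    return cfg;
}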
Be sure to have the latest kernel drivers and mesa-dev, libdrm-dev, libgbm-dev. This piece of code is portable to Android; it's just not so easy to silence the default Android graphics stack.
note: I had trouble with the 32-bit version, but I still don't know why. Those libs are actively developed, so I'm not sure it wasn't a bug.
note2: depending on your GLSL version, float precision is supported or not.
precision mediump float;
note3: if you get a permission failure with /dev/dri/card0, grant it with:
sudo chmod 666 /dev/dri/card0
or add the current user to the video group with
sudo adduser $user video
You may also set the setgid bit on your executable with its group set to video (maybe the best option).

How to set system brightness with the help of any API in Qt?

I want to create a QSlider with which I can control the brightness of the actual screen (not of the application).
You need a platform-specific function; there is nothing for this in the Qt library.
On Linux you can do something like:
xrandr --output LVDS1 --brightness 0.9
"LVDS1" is the name of the display you want to change. Run xrandr and check the name of the display you have. The line will look something like "LVDS1 connected 1920x1080+0+0".
You can also try this:
xbacklight -set 100
On Windows you can use the Gamma Ramp API as shown here. You can also use WinI2C/DDC, a professional tool that allows you to control display devices in the Windows environment via the DDC/CI protocol. It is free for personal use and non-free for commercial use. They may even allow you to use it for free if you contact them and explain it's for a non-profit organisation.