X hangs because of my application (C++, Qt, OpenGL) - c++

My application gets data from the network and draws it on the scene (the scene uses a handmade OpenGL engine).
It works for several hours. When I'm not using my desktop, my monitor turns off because of Display Power Management Signaling (DPMS). Then, when I touch the mouse or keyboard, the monitor turns on and the application hangs (X hangs too).
If I run
xset -dpms
the operating system doesn't use DPMS and the application works stably.
These issues occur on CentOS 6 and Arch Linux, but when I run the application under Ubuntu 12.10 it works fine!
I tried different NVIDIA drivers. No effect.
I tried using SSH to log in remotely and attach to the process with gdb. After the monitor is turned on, I can't find the application in the process table.
How can I diagnose the problem? What happens (in the OpenGL environment) when the monitor turns off and on? Does Ubuntu do something special when using DPMS?
We have a guess about the cause of the problem!
When the monitor is turned off, we lose the OpenGL context. When the monitor wakes up, the application hangs (no context).
And the difference in behavior between operating systems is due to different monitor connections: the monitor on Kubuntu is connected with a VGA cable, so (probably) it has no influence on X's behaviour.

Have you tried adding robustness support to your OpenGL implementation using GL_ARB_robustness?
2.6 "Graphics Reset Recovery"
Certain events can result in a reset of the GL context. Such a reset
causes all context state to be lost. Recovery from such events
requires recreation of all objects in the affected context. The
current status of the graphics reset state is returned by
enum GetGraphicsResetStatusARB();
The symbolic constant returned indicates if the GL context has been in
a reset state at any point since the last call to
GetGraphicsResetStatusARB. NO_ERROR indicates that the GL context has
not been in a reset state since the last call.
GUILTY_CONTEXT_RESET_ARB indicates that a reset has been detected that
is attributable to the current GL context. INNOCENT_CONTEXT_RESET_ARB
indicates a reset has been detected that is not attributable to the
current GL context. UNKNOWN_CONTEXT_RESET_ARB indicates a detected
graphics reset whose cause is unknown.
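In practice you would poll the reset status once per frame and rebuild everything when a reset is reported. A minimal sketch, assuming a context created with a robustness-enabled creation extension (e.g. GLX_ARB_create_context_robustness with the LOSE_CONTEXT_ON_RESET strategy) and the extension function loaded as glGetGraphicsResetStatusARB; recreateContextAndResources() is a hypothetical helper for your engine:
// Hedged sketch: check for a graphics reset once per frame.
GLenum status = glGetGraphicsResetStatusARB();
if (status != GL_NO_ERROR)
{
    // GUILTY/INNOCENT/UNKNOWN_CONTEXT_RESET_ARB: all GL state and
    // objects are gone. Tear down and recreate the context and
    // every object that lived in it.
    recreateContextAndResources(); // hypothetical application helper
    return;                        // skip rendering this frame
}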
Also, make sure you have a debug context when you initialize your context, and use the ARB_debug_output extension to receive log output.
void DebugMessageControlARB(enum source,
                            enum type,
                            enum severity,
                            sizei count,
                            const uint* ids,
                            boolean enabled);
void DebugMessageInsertARB(enum source,
                           enum type,
                           uint id,
                           enum severity,
                           sizei length,
                           const char* buf);
void DebugMessageCallbackARB(DEBUGPROCARB callback,
                             const void* userParam);
uint GetDebugMessageLogARB(uint count,
                           sizei bufSize,
                           enum* sources,
                           enum* types,
                           uint* ids,
                           enum* severities,
                           sizei* lengths,
                           char* messageLog);
void GetPointerv(enum pname,
                 void** params);
For example:
// Initialize GL_ARB_debug_output as soon as possible
if (glfwExtensionSupported("GL_ARB_debug_output"))
{
    typedef void (APIENTRY *glDebugMessageCallbackARBFunc)
        (GLDEBUGPROCARB callback, const void* userParam);
    typedef void (APIENTRY *glDebugMessageControlARBFunc)
        (GLenum source, GLenum type, GLenum severity,
         GLsizei count, const GLuint* ids, GLboolean enabled);

    auto glDebugMessageCallbackARB = (glDebugMessageCallbackARBFunc)
        glfwGetProcAddress("glDebugMessageCallbackARB");
    auto glDebugMessageControlARB = (glDebugMessageControlARBFunc)
        glfwGetProcAddress("glDebugMessageControlARB");

    glDebugMessageCallbackARB(debugCallback, this);
    glDebugMessageControlARB(GL_DONT_CARE, GL_DONT_CARE,
                             GL_DEBUG_SEVERITY_LOW_ARB, 0, nullptr, GL_TRUE);
}
...
std::string GlfwThread::severityString(GLenum severity)
{
    switch (severity)
    {
    case GL_DEBUG_SEVERITY_LOW_ARB:    return "LOW";
    case GL_DEBUG_SEVERITY_MEDIUM_ARB: return "MEDIUM";
    case GL_DEBUG_SEVERITY_HIGH_ARB:   return "HIGH";
    default: return "??";
    }
}

std::string GlfwThread::sourceString(GLenum source)
{
    switch (source)
    {
    case GL_DEBUG_SOURCE_API_ARB:             return "API";
    case GL_DEBUG_SOURCE_WINDOW_SYSTEM_ARB:   return "SYSTEM";
    case GL_DEBUG_SOURCE_SHADER_COMPILER_ARB: return "SHADER_COMPILER";
    case GL_DEBUG_SOURCE_THIRD_PARTY_ARB:     return "THIRD_PARTY";
    case GL_DEBUG_SOURCE_APPLICATION_ARB:     return "APPLICATION";
    case GL_DEBUG_SOURCE_OTHER_ARB:           return "OTHER";
    default: return "???";
    }
}

std::string GlfwThread::typeString(GLenum type)
{
    switch (type)
    {
    case GL_DEBUG_TYPE_ERROR_ARB:               return "ERROR";
    case GL_DEBUG_TYPE_DEPRECATED_BEHAVIOR_ARB: return "DEPRECATED_BEHAVIOR";
    case GL_DEBUG_TYPE_UNDEFINED_BEHAVIOR_ARB:  return "UNDEFINED_BEHAVIOR";
    case GL_DEBUG_TYPE_PORTABILITY_ARB:         return "PORTABILITY";
    case GL_DEBUG_TYPE_PERFORMANCE_ARB:         return "PERFORMANCE";
    case GL_DEBUG_TYPE_OTHER_ARB:               return "OTHER";
    default: return "???";
    }
}

// Note: this is static; it is called back from OpenGL
void GlfwThread::debugCallback(GLenum source, GLenum type,
                               GLuint id, GLenum severity,
                               GLsizei, const GLchar *message,
                               const GLvoid *)
{
    std::cout << "source=" << sourceString(source) <<
                 " type=" << typeString(type) <<
                 " id=" << id <<
                 " severity=" << severityString(severity) <<
                 " message=" << message <<
                 std::endl;
    AssertBreak(type != GL_DEBUG_TYPE_ERROR_ARB);
}
You almost certainly have both of those extensions available on any decent OpenGL implementation. They help a lot. Debug contexts validate everything and complain to the log; some OpenGL implementations even give performance suggestions in the log output. Using ARB_debug_output makes checking glGetError obsolete: it checks every call for you.

You can start by looking at X's logs, usually located in /var/log/, and at ~/.xsession-errors.
It's not out of the question that OpenGL is doing something screwy, so if your app has any logging, turn it on.
Enable core dumps by running ulimit -c unlimited. You can then analyze the dump by opening it in gdb like this:
gdb <executable file> <core dump file>
See if that produces anything useful, then research whatever you find.
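For example, a typical session might look like this (the binary and core file names are placeholders):
gdb ./myapp core
(gdb) bt                   # backtrace of the thread that crashed
(gdb) info threads         # list all threads
(gdb) thread apply all bt  # backtrace of every thread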

Related

BLE BlueZ: return a fixed PIN for a connection

I have BlueZ 5.48 running on an embedded device and I am able to connect Apple and Windows devices over Bluetooth. I have also been able to get Bluetooth pairing working using a DisplayOnly custom agent, which generates a random PIN/passkey for pairing.
The embedded device has no input/output peripherals, so I need to return a fixed PIN for all connections, but I am not finding the right way to do it. So far I have created a custom agent and registered it on D-Bus; it receives the RequestPinCode and DisplayPasskey calls (but they are set to return auto-generated PINs).
Here is a code snippet from my setup:
static void bluez_agent_method_call(GDBusConnection *con,
                                    const gchar *sender,
                                    const gchar *path,
                                    const gchar *interface,
                                    const gchar *method,
                                    GVariant *params,
                                    GDBusMethodInvocation *invocation,
                                    void *userdata)
{
    int pass;
    int entered;
    char *opath;
    GVariant *p = g_dbus_method_invocation_get_parameters(invocation);

    g_print("Agent method call: %s.%s()\n", interface, method);
    if(!strcmp(method, "RequestPinCode")) {
        ;
    }
    else if(!strcmp(method, "DisplayPinCode")) {
        ;
    }
    else if(!strcmp(method, "RequestPasskey")) {
        g_print("Getting the Pin from user: ");
        fscanf(stdin, "%d", &pass);
        g_print("\n");
        g_dbus_method_invocation_return_value(invocation, g_variant_new("(u)", pass));
    }
    else if(!strcmp(method, "DisplayPasskey")) {
        g_variant_get(params, "(ouq)", &opath, &pass, &entered);
        cout << "== pass = " << pass << endl;
        pass = 1234; // Changing the value here does not change the actual PIN for some reason.
        cout << "== pass = " << pass << " opath = " << opath << endl;
        g_dbus_method_invocation_return_value(invocation, NULL);
    }
    else if(!strcmp(method, "RequestConfirmation")) {
        g_variant_get(params, "(ou)", &opath, &pass);
        g_dbus_method_invocation_return_value(invocation, NULL);
    }
    else if(!strcmp(method, "RequestAuthorization")) {
        ;
    }
    else if(!strcmp(method, "AuthorizeService")) {
        ;
    }
    else if(!strcmp(method, "Cancel")) {
        ;
    }
    else
        g_print("We should not come here, unknown method\n");
}
I tried changing the pass variable in the DisplayPasskey handler to set a new PIN, but Bluetooth still connects with the auto-generated PIN.
I found this Stack Overflow question, which is exactly what I need: How to setup Bluez 5 to ask pin code during pairing. From the comments, there seems to be a solution for returning fixed PINs.
It would be great if somebody could provide some examples of returning a fixed PIN from the DisplayPasskey and RequestPinCode functions.
The Bluetooth standard does not contain a fixed-key association model. The standard does not use a PAKE (https://en.m.wikipedia.org/wiki/Password-authenticated_key_agreement) but a custom, ad-hoc, weaker protocol. The custom protocol used during passkey pairing is only secure for one-time passkeys: in particular, a passive eavesdropper learns the passkey after a successful pairing attempt, and the passkey can also be brute-forced in at most 20 pairing attempts.
BlueZ follows the Bluetooth standard, which says the passkey should be randomly generated, so you cannot set your own fixed passkey. If you don't have the required I/O capabilities, you should use the "Just Works" association model instead (which unfortunately does not give you MITM protection). If you want the higher security of a fixed passkey for MITM protection, you must implement your own security layer on top of the (insecure) application layer. This is, for example, what Apple's HomeKit does.
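If Just Works is acceptable, the key change is the capability string passed when registering the agent. A minimal sketch, assuming con is your GDBusConnection and AGENT_PATH is the object path your agent is exported on:
// Hedged sketch: register the agent as "NoInputNoOutput" so BlueZ
// falls back to the Just Works association model (no passkey at all).
GError *error = NULL;
g_dbus_connection_call_sync(con,
        "org.bluez", "/org/bluez", "org.bluez.AgentManager1",
        "RegisterAgent",
        g_variant_new("(os)", AGENT_PATH, "NoInputNoOutput"),
        NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
g_dbus_connection_call_sync(con,
        "org.bluez", "/org/bluez", "org.bluez.AgentManager1",
        "RequestDefaultAgent",
        g_variant_new("(o)", AGENT_PATH),
        NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
With NoInputNoOutput, BlueZ should not call the passkey/PIN methods of your agent at all.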
Please also see my post at https://stackoverflow.com/a/59282315.
This article, which explains why a static passkey is insecure, is also worth reading: https://insinuator.net/2021/10/change-your-ble-passkey-like-you-change-your-underwear/.

Qt - capturing internal qWarning or "failing makeCurrent()" error

I'm running a QWebEngineView in Qt 5.12 on a Linux machine with Wayland, where a bug in Qt (there are a few issues reported about this) causes the app's rendering to freeze if the screen is disconnected and reconnected, or if it goes into sleep mode.
The app still runs fine, but no further rendering is done and it's stuck at the last frame. The only errors visible when running the app are:
[14:52:07.809] connector 87 disconnected
QOpenGLFunctions created with non-current context
QPaintDevice::metrics: Device has no metric information
QQuickWidget: Cannot render due to failing makeCurrent()
QQuickWidget: Cannot render due to failing makeCurrent()
This is the place in Qt that emits the warning:
// qtdeclarative/src/quickwidgets/qquickwidget.cpp
void QQuickWidgetPrivate::render(bool needsSync)
{
    if (!useSoftwareRenderer) {
#if QT_CONFIG(opengl)
        // createFramebufferObject() bails out when the size is empty. In this case
        // we cannot render either.
        if (!fbo)
            return;

        Q_ASSERT(context);

        if (!context->makeCurrent(offscreenSurface)) {
            qWarning("QQuickWidget: Cannot render due to failing makeCurrent()");
            return;
        }
Unfortunately I cannot change the Qt version on this device, so I'd like to know if there is any reliable way of detecting this situation from inside the app, in order to do a proper crash/restart.
I couldn't find any way to capture qWarning messages from inside Qt or anything like that, and the code that causes this doesn't look as if it could emit any useful signal.
As a last resort, I could parse the output of my app and look for "Cannot render due to failing makeCurrent()", but I really don't like that my own app cannot detect that its rendering has frozen.
I could set the QT_FATAL_WARNINGS environment variable, which makes the app abort on any warning, but I'm worried that some harmless warnings would cause unintentional crashes.
You can use qInstallMessageHandler to override the default logging mechanism, and in there check the qWarning messages for the specific string you're looking for.
// The exact warning text emitted by QQuickWidgetPrivate::render()
static const char *theBadRenderError = "QQuickWidget: Cannot render due to failing makeCurrent()";

void myMessageOutput(QtMsgType type, const QMessageLogContext &context, const QString &msg)
{
    QByteArray localMsg = msg.toLocal8Bit();
    const char *file = context.file ? context.file : "";
    const char *function = context.function ? context.function : "";
    switch (type) {
    case QtWarningMsg:
        fprintf(stderr, "Warning: %s (%s:%u, %s)\n", localMsg.constData(), file, context.line, function);
        if (localMsg == theBadRenderError) {
            // Do something here
        }
        break;
    ...
    }
}

int main(int argc, char **argv)
{
    qInstallMessageHandler(myMessageOutput);
    QApplication app(argc, argv);
    ...
    return app.exec();
}
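As for the "Do something here" part: doing heavy work directly inside a message handler is risky, so one option is to defer the reaction to the event loop. A hedged sketch, assuming a full process restart is acceptable (the functor overload of QMetaObject::invokeMethod and QProcess::startDetached are both available in Qt 5.12; add #include <QProcess>):
// Hedged sketch: queue a relaunch of the current executable, then quit.
QMetaObject::invokeMethod(qApp, []() {
    QProcess::startDetached(QCoreApplication::applicationFilePath(),
                            QCoreApplication::arguments().mid(1));
    QCoreApplication::exit(1);
}, Qt::QueuedConnection);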

How to create an OpenGL context in a NodeJS native addon on macOS?

Follow-up to this question.
I'm trying to create a NodeJS native addon that uses OpenGL.
I'm not able to use OpenGL functions because CGLGetCurrentContext() always returns NULL.
When trying to create a new context to draw into, CGLChoosePixelFormat always returns the error kCGLBadConnection ("invalid CoreGraphics connection").
What is bugging me is that when I isolate the code that creates the OpenGL context into a standalone C++ project, it works! It only errors out when I run it inside the NodeJS addon!
I created this NodeJS native addon project to exemplify my error: https://github.com/Psidium/node-opengl-context-error-example
This is the code that works when executed in a standalone project and errors out when run inside NodeJS:
//
//  main.cpp
//  test_cli
//
//  Created by Borges, Gabriel on 4/3/20.
//  Copyright © 2020 Psidium. All rights reserved.
//
#include <iostream>
#include <OpenGL/OpenGL.h>

int main(int argc, const char * argv[]) {
    std::cout << "Context before creating it: " << CGLGetCurrentContext() << "\n";

    CGLContextObj context;
    CGLPixelFormatAttribute attributes[2] = {
        kCGLPFAAccelerated,  // no software rendering
        (CGLPixelFormatAttribute) 0
    };
    CGLPixelFormatObj pix;
    CGLError errorCode;
    GLint num; // stores the number of possible pixel formats

    errorCode = CGLChoosePixelFormat(attributes, &pix, &num);
    if (errorCode > 0) {
        std::cout << ": Error returned by choosePixelFormat: " << errorCode << "\n";
        return 10;
    }

    errorCode = CGLCreateContext(pix, NULL, &context);
    if (errorCode > 0) {
        std::cout << ": Error returned by CGLCreateContext: " << errorCode << "\n";
        return 10;
    }
    CGLDestroyPixelFormat(pix);

    errorCode = CGLSetCurrentContext(context);
    if (errorCode > 0) {
        std::cout << "Error returned by CGLSetCurrentContext: " << errorCode << "\n";
        return 10;
    }

    std::cout << "Context after being created is: " << CGLGetCurrentContext() << "\n";
    return 0;
}
I already tried:
Using fork() to create the context in a subprocess (didn't work);
Changing the pixel format attributes to something else that might create my context (didn't work).
I have a hunch that it may have something to do with the fact that a Node native addon is a dynamically linked library, or maybe my OpenGL context creation is not executing on the main thread (but if that were the case, the fork() would have solved it, right?).
Accessing graphics hardware requires extra permissions: Windows and macOS (I don't know about others) restrict the creation of a hardware-accelerated OpenGL context to the interactive user session (I may be wrong with the terms here). From one of the articles on the web:
In case the user is not logged in, the CGLChoosePixelFormat will return kCGLBadConnection
An interactive session is easier to recognize than to define; e.g. when you log in interactively and launch an application, that's an interactive session; when a process is started as a service, it is non-interactive. How this is actually managed by the system requires deeper reading. As far as I know, there is no easy way of "escaping" a non-interactive process flag.
NodeJS can be used as part of a web server, so I would expect that this may be exactly the problem: it is started as a service, by another non-interactive user, or under other special conditions making it non-interactive. More details on how you start NodeJS itself might clarify why the code doesn't work. I would expect that using OpenGL on the server side might not be a good idea anyway (if that is the target), although a software OpenGL implementation (without the kCGLPFAAccelerated flag) might work.
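If you want to try the software path, the change is in the pixel format attributes. A hedged sketch (kCGLRendererGenericFloatID from <OpenGL/CGLRenderers.h> is Apple's software renderer; whether it is allowed in a non-interactive session is exactly the assumption to verify):
#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLRenderers.h>

// Hedged sketch: request the software renderer instead of kCGLPFAAccelerated.
CGLPixelFormatAttribute attributes[] = {
    kCGLPFARendererID,
    (CGLPixelFormatAttribute) kCGLRendererGenericFloatID,
    (CGLPixelFormatAttribute) 0
};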
By the way, there are at least two OpenGL / WebGL extensions for NodeJS - have you tried their samples to see whether they behave the same or differently in your environment?
https://github.com/google/node-gles
https://github.com/mikeseven/node-webgl

WinAPI - Program getting stuck while a button is selected

I am creating a simple program using WinAPI that reads data from a sensor connected to the computer via USB.
In this program I am implementing some functions like read mode, test mode, etc.
The problem I am facing is that the program gets stuck when I select the button for continuous reading; the code follows below:
case WM_COMMAND:
    switch (wp)
    {
    case START_BUTTON:
        printf("START_BUTTON");
        while (SendDlgItemMessage(hWnd, START_BUTTON, BM_GETCHECK, TRUE, 0) == BST_CHECKED)
        {
            char* var = USB_Read();   // Get data from the sensor
            SetWindowText(hLux, var); // Display the data
            if (SendDlgItemMessage(hWnd, START_BUTTON, BM_GETCHECK, TRUE, 0) != BST_CHECKED) // Check if the button is no longer selected
                break;
        }
        break;
    }
    break;
I know that the problem is in the while loop: when I press the button the whole program gets stuck; the data is displayed correctly, but the other controls are frozen.
The question is: how can I display the data continuously and still have access to the other controls?
You have to create a thread of execution that reads the USB while Start is checked.
So we create a thread, started at program initialization, which runs continuously and reads the USB each time it finds the button checked.
Now in the message loop you simply check or uncheck the button.
DWORD WINAPI ThreadFunction(LPVOID lpParam)
{
    (void)lpParam;  // keep the compiler happy about the unused parameter
    while (TRUE)    // Once created, the thread runs forever
    {
        // If checked, read the USB on each iteration
        if (SendDlgItemMessage(hWnd, START_BUTTON, BM_GETCHECK, 0, 0) == BST_CHECKED)
        {
            char* var = USB_Read();   // Get data from the sensor
            SetWindowText(hLux, var); // Display the data
            Sleep(1);                 // Why this? To avoid furious CPU usage
        }
    }
}

.....

// WinMain
DWORD dwThreadId; // thread ID in case you'll need it

// Create and start the thread
CreateThread(
    NULL,           // default security attributes
    0,              // use default stack size
    ThreadFunction, // thread function name
    NULL,           // argument to thread function
    0,              // use default creation flags
    &dwThreadId);   // returns the thread identifier

......

case WM_COMMAND:
    switch (wp)
    {
    case START_BUTTON:
        printf("START_BUTTON");
        if (SendDlgItemMessage(hWnd, START_BUTTON, BM_GETCHECK, 0, 0) == BST_CHECKED)
            SendDlgItemMessage(hWnd, START_BUTTON, BM_SETCHECK, BST_UNCHECKED, 0);
        else
            SendDlgItemMessage(hWnd, START_BUTTON, BM_SETCHECK, BST_CHECKED, 0);
        break;
    }
    break;
EDIT: I modified the program to check/uncheck the radio button.
Please note the usage of the Sleep function with the minimal value of 1 ms. It gives control back to the OS to smooth out CPU usage. If the function that reads the USB already blocks on enough OS synchronization primitives, it can be omitted (check CPU usage). An event-based variant is sketched below.
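An alternative to polling with Sleep(1) is to block the worker on an event object and signal it from WM_COMMAND, so the thread consumes no CPU while the button is unchecked. A hedged sketch, assuming the globals hWnd and hLux from the code above are in scope; g_readEvent is a hypothetical addition:
// Hedged sketch: event-driven variant of the reader thread.
HANDLE g_readEvent; // create at startup: g_readEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

DWORD WINAPI ThreadFunction(LPVOID lpParam)
{
    (void)lpParam;
    while (TRUE)
    {
        WaitForSingleObject(g_readEvent, INFINITE); // sleeps until signaled
        char* var = USB_Read();
        SetWindowText(hLux, var);
    }
}

// In the WM_COMMAND handler, after toggling the check state, call
// SetEvent(g_readEvent) when checking the button and
// ResetEvent(g_readEvent) when unchecking it.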

I2C error when using the Windows Monitor Configuration Functions

I'm attempting to get/set the brightness level of the monitor through the Windows API. I've tried both the Low-Level Monitor Configuration Functions and the High-Level Monitor Configuration Functions, but they both break in the same place. In both cases I have no problem getting the HMONITOR handle and getting the physical monitor handle from the HMONITOR, but once I try to query the DDC/CI capabilities, I get an error saying "An error occurred while transmitting data to the device on the I2C bus."
The particular functions that cause this error are GetMonitorCapabilities among the high-level functions and GetCapabilitiesStringLength among the low-level functions. They both produce the same error.
This leads me to believe that maybe my monitor doesn't support DDC/CI, but I know my laptop's monitor brightness can be changed through the Control Panel, so it must be controllable through software somehow. Also, I can successfully use the WMI classes in a PowerShell script to get/set the brightness, as described on this page. Most things I've read suggest that most modern laptop screens do support DDC/CI.
Is there any way to find out what is causing this error, or to get more information about it? I'm currently working in C++ in Visual Studio 2013 on Windows 7. I could probably use WMI in my C++ program if I can't get the current method working, but I thought I would ask here first.
Here's the code I currently have:
#include "stdafx.h"
#include <windows.h>
#include <highlevelmonitorconfigurationapi.h>
#include <lowlevelmonitorconfigurationapi.h>
#include <physicalmonitorenumerationapi.h>
#include <iostream>
#include <string>
int _tmain(int argc, _TCHAR* argv[])
{
DWORD minBrightness, curBrightness, maxBrightness;
HWND curWin = GetConsoleWindow();
if (curWin == NULL) {
std::cout << "Problem getting a handle to the window." << std::endl;
return 1;
}
// Call MonitorFromWindow to get the HMONITOR handle
HMONITOR curMon = MonitorFromWindow(curWin, MONITOR_DEFAULTTONULL);
if (curMon == NULL) {
std::cout << "Problem getting the display monitor" << std::endl;
return 1;
}
// Call GetNumberOfPhysicalMonitorsFromHMONITOR to get the needed array size
DWORD monitorCount;
if (!GetNumberOfPhysicalMonitorsFromHMONITOR(curMon, &monitorCount)) {
std::cout << "Problem getting the number of physical monitors" << std::endl;
return 1;
}
// Call GetPhysicalMonitorsFromHMONITOR to get a handle to the physical monitor
LPPHYSICAL_MONITOR physicalMonitors = (LPPHYSICAL_MONITOR)malloc(monitorCount*sizeof(PHYSICAL_MONITOR));
if (physicalMonitors == NULL) {
std::cout << "Unable to malloc the physical monitor array." << std::endl;
return 1;
}
if (!GetPhysicalMonitorsFromHMONITOR(curMon, monitorCount, physicalMonitors)) {
std::cout << "Problem getting the physical monitors." << std::endl;
return 1;
}
std::cout << "Num Monitors: " << monitorCount << std::endl; // This prints '1' as expected.
wprintf(L"%s\n", physicalMonitors[0].szPhysicalMonitorDescription); // This prints "Generic PnP Monitor" as expected
// Call GetMonitorCapabilities to find out which functions it supports
DWORD monCaps;
DWORD monColorTemps;
// The following function call fails with the error "An error occurred while transmitting data to the device on the I2C bus."
if (!GetMonitorCapabilities(physicalMonitors[0].hPhysicalMonitor, &monCaps, &monColorTemps)) {
std::cout << "Problem getting the monitor's capabilities." << std::endl;
DWORD errNum = GetLastError();
DWORD flags = FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS;
LPVOID buffer;
FormatMessage(flags, NULL, errNum, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPWSTR)&buffer, 0, NULL);
wprintf(L"%s\n", buffer);
return 1;
}
// Same error when I use GetCapabilitiesStringLength(...) here.
// More code that is currently never reached...
return 0;
}
Edit: I should also note that physicalMonitors[0].hPhysicalMonitor is 0, even though the monitor count and the text description are valid and GetPhysicalMonitorsFromHMONITOR returns successfully. Any thoughts on why this might be?
This is a "wonky hardware" problem, the I2C bus it talks about is the logical interconnect between the video adapter and the display monitor. Primarily useful for plug & play. Underlying error code is 0xC01E0582, STATUS_GRAPHICS_I2C_ERROR_TRANSMITTING_DATA. It is generated by a the DxgkDdiI2CTransmitDataToDisplay() helper function in the video miniport driver. It is the vendor's video driver job to configure it, providing the functions that tickle the bus and to implement the IOCTL underlying GetMonitorCapabilities().
Clearly you are device driver land here, there isn't anything you can do about this failure in your C++ program. You can randomly spin the wheel of fortune by looking for a driver update from the video adapter manufacturer. But non-zero odds that the monitor is at fault here. Try another one first.
I know it's a bad time to reply, but I thought you should know.
The problem you are facing is caused by DDC/CI being disabled on your monitor, so you should go into the monitor settings and check whether DDC/CI is disabled; if it is, enable it and run your code again. It should work. If you cannot find the DDC/CI option (some monitors have a separate button for enabling/disabling DDC/CI; for example, BenQ's T71W monitor has a separate down-arrow button for it), you should refer to your monitor's manual or contact the manufacturer.
Hope that helps, and sorry for the late reply.
Best of luck. :)
As I read the original question, the poster wanted to control a laptop display using DDC/CI. Laptop displays do not support DDC/CI. They provide a stripped-down I2C bus sufficient to read the EDID at slave address 0x50, but that's it.
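For a laptop panel, the WMI route the asker already used from PowerShell is the usual fallback. A hedged C++ sketch (error handling and cleanup trimmed; the class and method names WmiMonitorBrightnessMethods / WmiSetBrightness are as documented for the root\wmi namespace, everything else is an assumption to adapt):
#include <windows.h>
#include <wbemidl.h>
#include <comdef.h>
#pragma comment(lib, "wbemuuid.lib")

// Hedged sketch: set a laptop panel's brightness (0-100) via WMI.
bool SetBrightnessViaWmi(BYTE brightness)
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator* locator = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, (LPVOID*)&locator);

    IWbemServices* services = nullptr;
    locator->ConnectServer(_bstr_t(L"ROOT\\WMI"), nullptr, nullptr, nullptr,
                           0, nullptr, nullptr, &services);

    // Find the brightness-methods instance for the built-in panel.
    IEnumWbemClassObject* enumerator = nullptr;
    services->ExecQuery(_bstr_t(L"WQL"),
                        _bstr_t(L"SELECT * FROM WmiMonitorBrightnessMethods"),
                        WBEM_FLAG_FORWARD_ONLY, nullptr, &enumerator);
    IWbemClassObject* instance = nullptr;
    ULONG returned = 0;
    enumerator->Next(WBEM_INFINITE, 1, &instance, &returned);
    if (returned == 0)
        return false; // no WMI brightness support either

    VARIANT path;
    instance->Get(L"__PATH", 0, &path, nullptr, nullptr);

    // Build the input parameters for WmiSetBrightness(Timeout, Brightness).
    IWbemClassObject *cls = nullptr, *inDef = nullptr, *inParams = nullptr;
    services->GetObject(_bstr_t(L"WmiMonitorBrightnessMethods"), 0, nullptr, &cls, nullptr);
    cls->GetMethod(L"WmiSetBrightness", 0, &inDef, nullptr);
    inDef->SpawnInstance(0, &inParams);

    VARIANT v;
    v.vt = VT_UI1; v.bVal = brightness;
    inParams->Put(L"Brightness", 0, &v, 0);
    v.vt = VT_I4;  v.lVal = 0; // timeout in seconds
    inParams->Put(L"Timeout", 0, &v, 0);

    HRESULT hr = services->ExecMethod(path.bstrVal, _bstr_t(L"WmiSetBrightness"),
                                      0, nullptr, inParams, nullptr, nullptr);
    return SUCCEEDED(hr);
}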