Qt app works on desktop, but not laptop? - c++

I am writing an OpenGL app using Qt. It builds and runs fine on my desktop, but when I run the exact same code on my laptop, it builds but does not display anything. Here is my main.cpp:
#include <QtGui/QApplication>
#include <QtOpenGL/QGLWidget>
#include "GLWidget.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    GLWidget window;
    window.resize(1050, 700);
    window.setFixedSize(1050, 700);
    window.show();
    return app.exec();
}
I do not want the user to be able to resize the window, hence the fixed size. If I set a breakpoint on the last line of main, it is never reached on my laptop. I have stepped through the code, and right after show() is called (which is just an inline function) the debugger finishes with code 0. I checked all the project build and run settings; they are the same on both machines.
My desktop has a 1920x1080 monitor, but my laptop is only 1366x768; could this have anything to do with it? Is there some internal check going on under the hood in Qt that depends on the screen's resolution? That is pretty much the only thing I can think of.

I do not want the user to be able to resize the window
May I ask why? I presume you want the window to be a fixed size because you want to use OpenGL to generate an image of exactly this size. If so, then I must tell you it will not work that way. OpenGL implementations only render what will become visible on the screen (the pixel ownership test). If parts of the window are not visible (and on the laptop this will be the case), those pixels are simply not rendered, and reading back the framebuffer leaves them undefined.
The proper way to tackle this problem is to use either a PBuffer or a Frame Buffer Object (FBO). FBOs are easier to use, but not as widely supported on Windows (Intel graphics on Windows have rather poor FBO support). FBOs are supported by all Linux OpenGL implementations (Mesa (which also covers Intel), ATI/AMD and NVidia). There are numerous FBO and PBuffer tutorials on the web.
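For illustration, here is a minimal sketch of the FBO approach using Qt's own wrapper, QGLFramebufferObject from the QtOpenGL module. renderFixedSize is a hypothetical helper, and it must be called while the widget's GL context is current, e.g. from inside paintGL():

#include <QtOpenGL/QGLWidget>
#include <QtOpenGL/QGLFramebufferObject>
#include <QImage>

// Sketch only: render at a fixed 1050x700 into an off-screen buffer,
// so every pixel is produced no matter how much of the window is
// actually visible, then read the result back as a QImage.
QImage renderFixedSize()
{
    QGLFramebufferObject fbo(1050, 700, QGLFramebufferObject::Depth);
    fbo.bind();                    // redirect rendering into the FBO
    glViewport(0, 0, 1050, 700);
    // ... draw the scene here ...
    fbo.release();                 // back to the window framebuffer
    return fbo.toImage();          // read the rendered pixels back
}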

Related

GLUT problem with HiDPI/Retina support on macOS (C++)

I am porting my 3D program to macOS.
On Windows I use C++ and FreeGLUT; on macOS I've started with plain GLUT. I don't use Cocoa, and I create the OpenGL window context via GLUT.
There is a problem with HiDPI/Retina support:
GLUT's reshape callback reports the logical point resolution (half the actual Retina pixel resolution in each dimension), which is why the image looks pixelated.
How do I turn on Retina support in GLUT (or FreeGLUT)?
I've tried the solution from this article http://iihm.imag.fr/blanch/software/glut-macosx/ (adding "hidpi" to the glutInitDisplayString string and GLUT_3_2_CORE_PROFILE to glutInitDisplayMode), but it doesn't help.
Is it possible to do this without big changes to the program? It's quite a big program (3D software).
Thank you
You should only use one of glutInitDisplayString and glutInitDisplayMode; glutInitDisplayMode overrides glutInitDisplayString if it comes after it.
try:
glutInitDisplayString("hidpi core rgba double")

Qt renders this SVG correctly in "debug" mode but not in "release"

I've got a weird problem: when I build for debug and link against the debug DLLs (Qt 5.12.2, open source), I get the expected rendering.
When I build for release and link against the release DLLs, the image is completely blank. The program is run from MSVC, so the DLL paths should be set up correctly. Anybody know what's going on?
#include <QApplication>
#include <QSvgRenderer>
#include <QPainter>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    // https://commons.wikimedia.org/wiki/File:USAF-1951.svg
    QSvgRenderer renderer(QString("USAF-1951.svg"));
    QImage image(512, 512, QImage::Format_Grayscale8);
    QPainter painter(&image);
    renderer.render(&painter);
    image.save("USAF-1951.bmp");
    return 0;
}
I tried a few other SVG images and they seem to work. Not sure what's up with this image.
The OP provided the correct fix in his self-answer but neglected to explain why it is necessary, so I will step in.
The Qt documentation for QImage::QImage() says:
QImage::QImage(int width, int height, QImage::Format format)
Constructs an image with the given width, height and format.
A null image will be returned if memory cannot be allocated.
Warning: This will create a QImage with uninitialized data. Call fill() to fill the image with an appropriate pixel value before drawing onto it with QPainter.
(Emphasis mine.)
"Uninitialized" means there might be any value in the image's pixel bytes. If they were all 0s, the alpha values would be 0s as well. That might explain why nothing appeared.
Now, an additional note on why this may have worked in debug mode: the OP explicitly mentioned MSVC. The MS folks tried to provide the best possible debugging support and decided to fill all allocated memory (in debug builds) with patterns such as 0xCD, standing for "allocated but not yet initialized". (More about this here: SO: In Visual Studio C++, what are the memory allocation representations?)
Sometimes this is really helpful: interpreting such a bit pattern as a float or double yields quite strange numbers (easy to recognize with a little experience), and integral values become quite obvious in a hex view.
However, this has some drawbacks: an uninitialized bool will always be "initialized" to true in debug mode, while it has arbitrary values in release. That produces the worst imaginable accident: debug runs fine but release fails sporadically. (The bugs I fear most.)
In the OP's case it's (probably) similar: in debug mode the image always has a light gray background, with an opacity high enough to mask the unexpected transparency, while in release mode... see above. (Alternatively, the OP could just as well have gotten a noise pattern, like TV static after midnight in the old days. Not sure whether that would have helped any further...)
So apparently, if I set the background, it works as expected in both debug and release.
#include <QApplication>
#include <QSvgRenderer>
#include <QPainter>

// https://stackoverflow.com/questions/55895293/qt-renders-this-svg-correctly-in-debug-mode-but-not-in-release
int main(int argc, char *argv[])
{
    // https://commons.wikimedia.org/wiki/File:USAF-1951.svg
    QApplication a(argc, argv);
    QSvgRenderer renderer(QString("USAF-1951.svg"));
    QImage image(512, 512, QImage::Format_Grayscale8);
    image.fill(255); // <- need to set the background
    QPainter painter(&image);
    renderer.render(&painter);
    image.save("Test.bmp");
    return 0;
}

Design user interfaces with automatically scaling font sizes in Qt

Qt 5.7 claims to have improved high-DPI support. With modern Qt it is possible to create an app starter like:
#include <QApplication>

int main(int argc, char *argv[])
{
    QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
    QApplication app(argc, argv);
    return app.exec();
}
I expect the UI to scale automatically when running at high DPI, but the scaling doesn't necessarily work as expected. At least it doesn't scale the UI for me under Linux. What I am seeing is that the layout scales up, but the fonts stay right where they were, at the sizes Qt Creator assigned them in the form layout tool.
If you want a larger font for some element and you set it in the form Design screen, there seems to be no way to say "twice as big"; instead it injects a font property with an absolute point size.
It seems to be the same for the QMessageBox static methods too. Display a static QMessageBox, like QMessageBox::information, and its text and icon do not get scaled up to compensate for the high DPI.
So what exactly are you supposed to do to allow a Qt application, designed in Creator at a standard DPI, to adjust automatically to a high-DPI environment, fonts, QMessageBoxes and all?
I've gotten some traction by setting the application's style sheet to use a larger font for QMessageBox, but it feels ugly, and I'm not sure how to trigger it automatically.
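For reference, the style-sheet workaround mentioned above looks something like this (just a sketch; the 18pt size is an arbitrary illustration, and app is the QApplication instance from the starter above):

// Sketch of the style-sheet workaround: force a larger font on every
// QMessageBox in the application.
app.setStyleSheet("QMessageBox { font-size: 18pt; }");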
EDIT:
Manually setting the environment variable
declare -x QT_SCALE_FACTOR=2
does seem to invoke the sort of behavior I am looking for. But how do I do that automatically, only in a high-DPI environment, and preferably from inside the program itself? (setenv(3) could work under Linux, I guess.)
As of Qt 5.11, the following seems to be good enough for my Ubuntu 18.04 laptop with a 4K screen:
1. Download and install Qt 5.11 from the official website (not from apt).
2. Open the ~/.local/share/applications/DigiaQt-qtcreator-community.desktop file.
3. Change the line Exec=/path/to/Qt/Tools/QtCreator/bin/qtcreator to Exec=env QT_AUTO_SCREEN_SCALE_FACTOR=1 /path/to/Qt/Tools/QtCreator/bin/qtcreator
4. Restart Qt Creator.
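To set this from inside the program itself, as the question's edit asks, one option (a sketch, not verified on every platform) is to set the environment variable with qputenv before the QApplication object is constructed:

#include <QApplication>
#include <QtGlobal>

int main(int argc, char *argv[])
{
    // Sketch only: equivalent to exporting QT_AUTO_SCREEN_SCALE_FACTOR
    // in the shell, but done from within the program. This must happen
    // before the QApplication is created, or it has no effect.
    qputenv("QT_AUTO_SCREEN_SCALE_FACTOR", "1");
    QApplication::setAttribute(Qt::AA_EnableHighDpiScaling); // Qt >= 5.6
    QApplication app(argc, argv);
    return app.exec();
}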

SDL OpenGL Textures Lost

I have a fully working engine that uses SDL and OpenGL. I have a textured box on my OpenGL/SDL screen; however, when I change the video mode (e.g. toggle fullscreen with F11) the texturing is lost (the box is just plain white), and if I toggle back to windowed mode the box is still white (the texture image is lost). Does this mean I cannot change the video mode in the middle of the application (e.g. toggle fullscreen), or does it mean I have to reload my OpenGL textures every time I do so?
Some extra notes: I am using Code::Blocks with MinGW on Windows 7. The libraries I have linked are SOIL (a library for easily loading textures in OpenGL - http://www.lonesock.net/soil.html), OpenGL32, Glu32 and SDL.
I have some images demonstrating my problem (the first one shows windowed mode, and the second what happens when I try to change to fullscreen with a call to SDL_SetVideoMode(...); SDL_WM_ToggleFullScreen doesn't work).
I have a textured box on my OpenGL/SDL screen; however, when I change the video mode (e.g. toggle fullscreen with F11) the texturing is lost (the box is just plain white), and if I toggle back to windowed mode the box is still white (the texture image is lost). Does this mean I cannot change the video mode in the middle of the application (e.g. toggle fullscreen), or does it mean I have to reload my OpenGL textures every time I do so?
It strongly depends on how the framework you use implements video mode changes.
In general, when an OpenGL context is deleted, all its associated data is lost, except if there is another OpenGL context with which "sharing" has been established; that can be used to keep all uploaded data persistent across context recreation. However, a mere video mode change usually doesn't require a context recreation, and usually not a window recreation either.
However, the framework you use (SDL) completely tears down the window and the context when changing the video mode, thus losing the loaded resources. Unstable development versions of SDL have better OpenGL support, allowing video mode changes without a context teardown in between.
Unfortunately, the problem stems from the way SDL recreates the window. I had this problem before, and the solution for me was to set up a pair of uninitialize/initialize functions that only destroy and recreate the images.
Essentially, when SDL's resize event fires (http://www.libsdl.org/docs/html/sdlresizeevent.html) you would unload any art assets required and then re-initialize them after entering or leaving fullscreen, as in the sketch below.
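A sketch of that reload-on-mode-change approach, assuming SDL 1.2 and SOIL as in the question; the texture variable and file name are placeholders:

#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
#include "SOIL.h"

GLuint boxTexture = 0;

void unloadTextures()
{
    glDeleteTextures(1, &boxTexture); // about to lose the GL context
    boxTexture = 0;
}

void loadTextures()
{
    // SOIL uploads the image into the current context and returns
    // the new texture name
    boxTexture = SOIL_load_OGL_texture("box.png", SOIL_LOAD_AUTO,
                                       SOIL_CREATE_NEW_ID, 0);
}

void setVideoMode(int w, int h, bool fullscreen)
{
    unloadTextures(); // SDL 1.2 destroys the context in SDL_SetVideoMode
    SDL_SetVideoMode(w, h, 32,
                     SDL_OPENGL | (fullscreen ? SDL_FULLSCREEN : 0));
    loadTextures();   // re-upload into the fresh context
}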

Compiz and OpenGL window

I've written an OpenGL application on Linux using GLX. It uses double buffering with glXSwapBuffers, and Sync to VBlank is set via NVIDIA X Server Settings. I'm using Compiz and have smooth window movement and no tearing (Sync to VBlank is enabled in the Compiz settings).
But when I
- try to move or resize the OpenGL window, or
- move other windows through the area occupied by the OpenGL window,
the system stutters and freezes for 3-4 seconds. Moving other windows outside the area occupied by the OpenGL window is smooth as always.
Moreover, the problem only arises while the OpenGL application is in its loop producing frames of animation, and therefore swapping the buffers. If the content is static and the application is not swapping the buffers, there are no problems; moving the various windows is smooth.
Could it be a synchronization issue between my application and Compiz?
I don't know if it's still in the same shape as a few years ago, but…
Your description matches a Compiz SNAFU very well. Every window resize triggers the recreation of the texture that receives the window contents. Texture creation is a costly operation and hence should be avoided. Unfortunately, the Compiz developers do not seem to have realized that there is an obvious solution to this problem: windows in X11 can be reparented without much cost (every window manager reparents its clients several times over a window's lifetime), and Compiz is a window manager.
So why doesn't Compiz keep a desktop-sized window around, reparent windows that are about to be resized into it, take its constant-sized window texture from there, and only after the resize operation finishes reparent the window back into its decoration frame?
I don't know why this is the case. Anyway, some things Compiz does are not very smart.
If you want to fix this, well: Compiz is open source and I just described what to do.