I'm taking over some legacy code that uses Xlib + glX to create its drawing windows. However, window creation fails whenever the display name is set to anything other than :0.0.
I was able to reproduce this behavior in a minimal example:
#include <X11/Xlib.h>
#include <GL/glew.h>
#include <GL/glx.h>

int main()
{
    Display* display = XOpenDisplay(":0.1");
    GLint vi_att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo* vi = glXChooseVisual(display, 0, vi_att);
    Window root = DefaultRootWindow(display);
    Colormap cmap = XCreateColormap(display, root, vi->visual, AllocNone);

    XSetWindowAttributes swa;
    swa.colormap = cmap;
    swa.event_mask = ExposureMask;

    Window window = XCreateWindow(display, root, 0, 0, 200, 400, 0,
                                  vi->depth, InputOutput, vi->visual,
                                  CWColormap | CWEventMask, &swa);

    GLXContext context = glXCreateContext(display, vi, NULL, GL_TRUE);
    glXMakeCurrent(display, window, context);
    XMapWindow(display, window);
    XFlush(display);
    return 0;
}
Executing this example, I get a console message:
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 78 (X_CreateColormap)
Serial number of failed request: 21
Current serial number in output stream: 23
Stepping through the various routines, I find that I do get a valid display, and I do get a valid visual. Problems arise at glXCreateContext.
To make things clear: :0.1 is a valid display (I've set up separate X displays for my monitors to test this), and, interestingly, it makes no difference from which display I'm executing the code. At first I thought it wouldn't work to set up the window on a different display, but running the example with :0.0 from display :0.1 works fine. Running :0.1 from :0.1 does not.
More interestingly, passing NULL as the XOpenDisplay parameter and running the program on the :0.1 display also produces the same error.
Having multiple screens indicates that your X server has been set up in Zaphod mode. Depending on which driver you're using and how graphics output has been configured, OpenGL may not work at all, or may work on only one of the screens.
Please post your /var/log/Xorg.0.log so that I can give you more details.
But I can tell you already that Zaphod mode and OpenGL are on difficult terms with most drivers and system configurations.
Update due to comment
Okay, given your Xorg.0.log it's no surprise that you can create OpenGL contexts on only one of the screens: you have only a single GPU, as indicated by the following lines (note the identical PCI bus ID):
[ 18.192] (II) NVIDIA(0): NVIDIA GPU GeForce GTS 450 (GF116) at PCI:1:0:0 (GPU-0)
…
[ 18.214] (II) NVIDIA(1): NVIDIA GPU GeForce GTS 450 (GF116) at PCI:1:0:0 (GPU-0)
and it is being used for multiple X screens with different outputs. That configuration is simply not supported by the drivers, and this is well documented:
(…) windows cannot be dragged between X screens, hardware accelerated OpenGL cannot span the (…) X screens (…)
However, the actual question is: why for Bob's sake are you using multiple-screen (= Zaphod) mode in the first place? There's only one situation in which doing this is sensible: when you have multiple graphics cards in a single machine that you cannot interconnect (no SLI or CrossFire, or different models or vendors) and you'd like to use them all together in a single X display, multiple-screen configuration.
Apart from that you should be using TwinView, because that does what you trivially expect from it.
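For the legacy NVIDIA driver in use here, TwinView was enabled through the Device section of xorg.conf. A sketch of the relevant fragment, assuming two 1920x1080 outputs (option names per the NVIDIA driver README; treat the MetaModes string as an illustration, not a drop-in config):

```
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "TwinView"  "true"
    Option     "MetaModes" "1920x1080,1920x1080"
EndSection
```

With TwinView both outputs belong to a single X screen, so windows can be dragged between monitors and a single OpenGL context can span them.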
Note that if your goal is to get more screen real estate by plugging several graphics cards into a box, you can use a combination of DMX (Distributed Multihead X) and Xpra or Chromium, but this requires some serious tinkering, and AFAIK nobody has documented the Xpra method to date (could be a nice weekend project, though).
The Magnification API works well when capturing the primary screen, but when I use it to capture a secondary screen, MagSetImageScalingCallback() is not triggered after MagSetWindowSource() is called.
I checked that the window position is set correctly (on my computer it is {-1080, -250, 0, 1670}), and when I showed the window it was placed in the right position.
I use the following code to capture the secondary screen, essentially the same as the WebRTC code, but MagSetImageScalingCallback() is not triggered:
// Create the host window.
host_window_ = CreateWindowExW(WS_EX_LAYERED, kMagnifierHostClass,
                               kHostWindowName, 0, 0, 0, 0, 0,
                               nullptr, nullptr, hInstance, nullptr);

// Create the magnifier control.
magnifier_window_ = CreateWindowW(kMagnifierWindowClass, kMagnifierWindowName,
                                  WS_CHILD | WS_VISIBLE, 0, 0, 0, 0,
                                  host_window_, nullptr, hInstance, nullptr);

BOOL result = SetWindowPos(magnifier_window_, NULL, rect.left(), rect.top(),
                           rect.width(), rect.height(), 0);

// rect value is {-1080, -250, 0, 1670}.
RECT native_rect = {rect.left(), rect.top(), rect.right(), rect.bottom()};
result = set_window_source_func_(magnifier_window_, native_rect);
The working environment is Windows 10 Professional 64-bit, and my application is also 64-bit. My primary screen is plugged into the discrete graphics card, and the secondary screen into the integrated graphics.
The documentation of MagSetImageScalingCallback, which can be found here, specifies a few things to take into account:
"This function requires Windows Display Driver Model (WDDM)-capable video cards." You might want to check if the other screen is running via a different videocard, maybe the driver isn't WDDM capable.
"This function works only when Desktop Window Manager (DWM) is off." I'm not sure how much of this statement is true, but as of Windows 8 DWM can no longer be programmatically disabled. You might come into some border cases, I do not think the documentation of this API is kept up to date with all the quicks it might have.
One very important note: you might want to look for a different solution, as the documentation states: "The MagSetImageScalingCallback function is deprecated in Windows 7 and later, and should not be used in new applications. There is no alternate functionality."
Is there any specific reason why you want to use this Windows API? Maybe an alternative is a better choice. I am quite familiar with taking screenshots, though I'm more of a C# guy; some C++ examples that look usable can be found here. The GDI way pretty much describes what I use in Greenshot, and it works without any big issues.
P.S.
If my information doesn't help, you might want to extend your question with more information on your setup: Windows version, whether the OS and your application are 32- or 64-bit, graphics card, screens, etc.
When using the following BGFX code:
#include "GLFW/glfw3.h"
#include <bgfx/bgfx.h>
int main() {
glfwInit();
GLFWwindow* window = glfwCreateWindow(800, 600, "Hello, bgfx!", NULL, NULL);
bgfx::Init bgfxInit;
bgfxInit.type = bgfx::RendererType::Count; // Automatically choose a renderer.
bgfxInit.resolution.width = 800;
bgfxInit.resolution.height = 600;
bgfxInit.resolution.reset = BGFX_RESET_VSYNC;
bgfx::init(bgfxInit);
}
A black OpenGL window pops up and appears fine for a second; however, a GLXBadDrawable error then occurs. I do not know the cause of this error, and the other question has no answers and has not been active for some time now.
I believe that this is not an issue with the code but rather with my machine; however, I may be wrong.
I currently have a Lenovo T400 laptop with a Core 2 Duo P9500. It has two built-in GPUs: a Mobile 4 Series Chipset integrated graphics chip and an ATI Mobility Radeon HD 3450/3470. I am running Artix Linux with the 6.0.7-artix1-1 kernel. I am also using the glfw-x11 and glfw packages, if that helps, along with the i3-gaps window manager.
I have also attempted to use SDL2 instead of GLFW, and the same issue occurs; however, with GLFW a black window shows up, while with SDL2 a transparent(?) window shows up instead. Searching the GitHub issues page also yielded no results.
I have a vertical dual-screen setup, each monitor being 1920x1080.
My software must run across both screens with a single fullscreen SDL window (1920x2160).
The SDL_WindowFlags mask used in the window creation is the following: (SDL_WINDOW_FULLSCREEN | SDL_WINDOW_FULLSCREEN_DESKTOP).
Since SDL_WINDOW_FULLSCREEN_DESKTOP uses the actual hardware resolution (as far as I know), I am presented with a single screen of 1920x1080 (the first half of the software's GUI), so the second screen is not drawn.
A workaround is to change the mask to (SDL_WINDOW_FULLSCREEN | SDL_WINDOW_BORDERLESS) to run in borderless windowed mode, but that is not applicable to the software's needs (real fullscreen is required, and it should not be done like this).
Any recommendations for running the software in real fullscreen, excluding splitting the logic into many SDL windows, are welcomed.
I am trying to run a programming club in my school, and it's not practical to physically connect the Pis to keyboard, mouse and monitor, so they all auto-run VNC and we connect to the machines using UltraVNC. The programs are written in a shared directory and Eclipse C++ runs on the host; therefore all program output is viewed via VNC.
Everything was fine while programming in Python and when we started to use C++. However, I hit a brick wall when trying to get graphics to display. I could build a program that appeared to run, but it only gave terminal output; it would never display drawings on the screen. While trying to solve the problem I at one point connected a keyboard and mouse and noticed that they seemed to be recognised (mouse laser came on, Caps Lock toggled, etc.) but they didn't do anything when moved/typed on.
Eventually the penny began to teeter on the edge as I got increasingly confused as to why no one else was having this problem, given that there seem to be a lot of people using openvg, and I began to wonder more about the keyboard/mouse issue.
I tried plugging the HDMI output into a monitor at home (school ones are still analogue D-sub!) and lo and behold, the physical keyboard and mouse worked. Then it got really strange!
Somehow I have two desktops running at the same time. The physical keyboard and mouse control one, and VNC controls the other. If I start a terminal window on the 'physical' desktop, it doesn't show up on the 'VNC' desktop and vice versa; they seem to be independent, although that's not quite true.
When I run the graphics executable on the 'physical' desktop, it works fine and can be controlled only with the physical keyboard. When I run it on the 'VNC' desktop, it can be controlled only with the VNC keyboard, but the output displays on the physical screen.
I really don't get this!
I kind of need to be able to run the programs over VNC, but I need to be able to tell the code which desktop to output to, as it seems to default to the wrong one. Actually, it would be preferable to get VNC to connect to the existing HDMI desktop rather than starting a new one, but I cannot find out how to tell TightVNC to do that.
The code is here, but I think the problem might be in the init() function, which is in a library, so it is probably better to get VNC onto the right desktop...
Thanks in advance for any help!
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

extern "C" {
#include "VG/openvg.h"
#include "VG/vgu.h"
#include "fontinfo.h"
#include "shapes.h"
}

using namespace std;

int main(void) {
    int width, height;
    VGfloat w2, h2, w;
    char s[3];

    init(&width, &height);   // Graphics initialization
    w2 = (VGfloat)(width / 2);
    h2 = (VGfloat)(height / 2);
    w  = (VGfloat)width;     // was "w = (VGfloat)w;", a self-assignment bug

    Start(width, height);    // Start the picture
    Background(0, 0, 0);     // Black background
    Fill(44, 77, 232, 1);    // Big blue marble
    Circle(w2, 0, w);        // The "world"
    Fill(255, 255, 255, 1);  // White text
    TextMid(w2, h2, "hello, world", SerifTypeface, width / 10); // Greetings
    End();                   // End the picture

    fgets(s, 2, stdin);      // Pause until [RETURN]
    finish();                // Graphics cleanup
    exit(0);
}
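Regarding telling a program which desktop to draw on: each X server (the physical HDMI one and the one the VNC server starts) has its own display number, and X clients pick theirs from the DISPLAY environment variable, which can be overridden per command. A sketch (the display numbers :0/:1 are assumptions; check which servers are actually running on your Pi):

```shell
# Each running X server owns a display number; a typical setup is:
#   DISPLAY=:0 ./hello    # draw on the physical (HDMI) desktop
#   DISPLAY=:1 ./hello    # draw on the VNC server's desktop
# The override affects only the one command. Demonstration that a
# per-command DISPLAY assignment is what the child process sees:
out=$(DISPLAY=:1 sh -c 'printf %s "$DISPLAY"')
echo "$out"
```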
See last comment: abandoned openvg and now using X Windows.
I am trying to change the size of the window of my app with:
mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL);
Although I am using vsynced swap buffers (in the xorg-video-ati driver), I can see flickering when the window size changes (I guess one or more black frames):
void Video::draw()
{
    if (videoChanged) {
        mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL);
        scene->init(); // Update glFrustum & glViewport
    }
    scene->draw();
    SDL_GL_SwapBuffers();
}
So please, can someone tell me:
Is SDL_SetVideoMode not vsync'ed the way SDL_GL_SwapBuffers() is?
Or does it destroy the window and create a new one, leaving the buffer black in the meantime?
Does anyone know working code to do this? Maybe in freeglut?
In SDL-1, when you're using a windowed video mode, the window is completely torn down and a new one created when changing the video mode. Of course there's some undefined data in between, which is perceived as flicker. This issue has been addressed in SDL-2. Either use that, or use a different OpenGL framework that resizes windows without going through a full window recreation.
If you're using a FULLSCREEN video mode then something different happens additionally:
A change of the video mode actually changes the video signal timings going from the graphics card to the display. After such a change the display has to synchronize with the new settings, and that takes some time. This of course comes with some flickering, as the display may try to show a frame with the old settings until it detects that the timings no longer match. It's a physical effect, and there's nothing you can do in software to fix it, other than not changing the video mode at all.