GLXBadDrawable when using bgfx - C++

When running the following bgfx code:
#include "GLFW/glfw3.h"
#include <bgfx/bgfx.h>

int main() {
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(800, 600, "Hello, bgfx!", NULL, NULL);
    bgfx::Init bgfxInit;
    bgfxInit.type = bgfx::RendererType::Count; // Automatically choose a renderer.
    bgfxInit.resolution.width = 800;
    bgfxInit.resolution.height = 600;
    bgfxInit.resolution.reset = BGFX_RESET_VSYNC;
    bgfx::init(bgfxInit);
}
A black OpenGL window pops up and appears fine for a second; however, a GLXBadDrawable error then appears. I do not know what is causing this error, and the other question has no answers and has not been active for some time now.
I believe this is not an issue with the code but rather with my machine; however, I may be wrong.
I currently have a Lenovo T400 laptop with a Core 2 Duo P9500. It has two built-in GPUs: a Mobile 4 Series Chipset integrated graphics chip and an ATI Mobility Radeon HD 3450/3470. I am running Artix Linux with the 6.0.7-artix1-1 kernel, using the glfw-x11 and glfw packages along with the i3-gaps window manager, if that helps.
I have also attempted to use SDL2 instead of GLFW, and the same issue occurs; however, with GLFW a black window shows up, while with SDL2 a transparent(?) window shows up instead. Searching the GitHub issues page also yielded no results.
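One detail worth checking, since the snippet above never tells bgfx which window to render into: on X11, bgfx usually needs the native display and window handles passed through bgfx::PlatformData, and calling bgfx::renderFrame() before bgfx::init() keeps rendering on the calling thread. The following is only a sketch of that setup, assuming GLFW's native-access header is available; it is not a confirmed fix for the error above:
#define GLFW_EXPOSE_NATIVE_X11
#include <GLFW/glfw3.h>
#include <GLFW/glfw3native.h>
#include <bgfx/bgfx.h>
#include <bgfx/platform.h>
#include <cstdint>

int main() {
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(800, 600, "Hello, bgfx!", NULL, NULL);

    bgfx::renderFrame(); // ask bgfx to render on the calling thread (sketch)

    bgfx::Init bgfxInit;
    bgfxInit.type = bgfx::RendererType::Count; // automatically choose a renderer
    bgfxInit.resolution.width = 800;
    bgfxInit.resolution.height = 600;
    bgfxInit.resolution.reset = BGFX_RESET_VSYNC;
    bgfxInit.platformData.ndt = glfwGetX11Display();                         // native X11 display
    bgfxInit.platformData.nwh = (void*)(uintptr_t)glfwGetX11Window(window);  // native X11 window
    bgfx::init(bgfxInit);

    // ... frame loop and bgfx::shutdown() omitted
}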

Related

How to make Nvidia the default graphics card?

March 27 2020: The question boils down to how to run applications on the Nvidia graphics card. If the Intel graphics card is enabled, the OpenGL version is 4.6 for both the Nvidia and Intel GPUs according to the GPU-Z software. But if I disable Intel so that the application runs on Nvidia, the application crashes, and GPU-Z shows OpenGL version 1.1. So, how can I run the application on the Nvidia graphics card?
Notes: 1. I tried adding the application in the graphics settings to use the high-performance GPU, but the application still uses the Intel GPU.
2. I also tried adding the application in the Nvidia Control Panel, with no luck.
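For the question of running the application on the Nvidia card, one commonly documented approach on Windows (not mentioned in the original post, and it may or may not apply to this particular desktop setup) is to export the vendor hint symbols from the executable itself; a minimal sketch:
// Hint symbols documented by NVIDIA (Optimus) and AMD (PowerXpress) that ask the
// driver to run this executable on the high-performance GPU. Sketch only; these
// exports must live in the .exe itself, not in a DLL.
extern "C" {
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}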
March 16 2020: I was executing the example1 code from NanoGUI on Windows 10. The program works when I connect my display with an HDMI cable (connected to the motherboard), but it crashes without any errors when using a DP cable (connected to the NVIDIA graphics card). I have an Intel UHD Graphics 630 and an NVIDIA GeForce GT 730 in my system. The NVIDIA driver version is 26.21.14.4250.
I ran a simple OpenGL program in debug mode, and it crashes at the glfwInit() function.
The error is at
libEGL!eglDestroyImageKHR
Here is a sample program that crashes with the DP port and works with the HDMI port.
// #include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <iostream>

void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void processInput(GLFWwindow* window);

// settings
const unsigned int SCR_WIDTH = 800;
const unsigned int SCR_HEIGHT = 600;

int main()
{
    // glfw: initialize and configure
    // ------------------------------
    glfwInit();

    // glfw window creation
    // --------------------
    GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", NULL, NULL);
    if (window == NULL)
    {
        std::cout << "Failed to create GLFW window" << std::endl;
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);

    // render loop
    // -----------
    while (!glfwWindowShouldClose(window))
    {
        // glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
        // -------------------------------------------------------------------------------
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    // glfw: terminate, clearing all previously allocated GLFW resources.
    // ------------------------------------------------------------------
    glfwTerminate();
    return 0;
}
The issue was solved by another update of the Nvidia drivers, to the 445.75 standard release.
Also, I found that remote desktop has issues with Nvidia drivers. Remote software programs sometimes install their own display drivers. More can be found here.
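Separately, since the crash reportedly happens inside glfwInit(), registering a GLFW error callback before initialization can sometimes turn a silent failure into a readable message; a minimal sketch, not taken from the original post:
#include <GLFW/glfw3.h>
#include <cstdio>

// Print any error GLFW reports instead of failing silently.
static void glfw_error_callback(int code, const char* description)
{
    std::fprintf(stderr, "GLFW error %d: %s\n", code, description);
}

int main()
{
    glfwSetErrorCallback(glfw_error_callback); // may be called before glfwInit()
    if (!glfwInit())
    {
        std::fprintf(stderr, "glfwInit() failed\n");
        return -1;
    }
    glfwTerminate();
    return 0;
}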

SDL OpenGL segmentation fault when using SDL_CreateWindow

I've got a weird problem that has suddenly appeared across all the projects I'm working on. I'm using C++, SDL2 and OpenGL, and one of the first things that happens in my int main is creating an SDL window with the OpenGL flag, like below:
int main( int argc, char* args[] )
{
    // Minor stuff here e.g. initialising SDL
    mainwindow = SDL_CreateWindow("...", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_OPENGL);
}
For some reason this has started to cause a segmentation fault. If I change the flag from SDL_WINDOW_OPENGL to anything else, it does create a window, but it obviously fails shortly afterwards given the lack of an OpenGL context to do anything with. I've gone as far as stripping out all code except the SDL and OpenGL initialisation, and it still fails with a segfault.
This issue started today across two projects that share the same basic int main structure. This leads me to believe it's not a code issue (largely because the code hasn't actually changed), but that something in my setup / environment has gone wrong. So far I've tried the following to no avail:
Redownloaded latest SDL library
Redownloaded latest GLEW library
Reinstalled Codeblocks
Any ideas for a) what might be causing this and b) where I should start looking to fix it?
Thanks
Nathan
And like so many other problems in life, the answer turned out to be drivers. A system-wide update of some kind interfered with the graphics driver's ability to render any kind of OpenGL. Directly downloading and installing the latest graphics drivers fixed it.
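One way to make this kind of failure diagnosable rather than a bare segfault is to check every SDL call and print SDL_GetError() before touching the window; a minimal sketch along those lines, not from the original post:
#include <SDL.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    // A driver or environment problem usually shows up here as an error string
    // rather than a crash further down.
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
    {
        std::fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    SDL_Window* mainwindow = SDL_CreateWindow("test",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        640, 480, SDL_WINDOW_OPENGL);
    if (mainwindow == NULL)
    {
        std::fprintf(stderr, "SDL_CreateWindow failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    SDL_GLContext context = SDL_GL_CreateContext(mainwindow);
    if (context == NULL)
        std::fprintf(stderr, "SDL_GL_CreateContext failed: %s\n", SDL_GetError());
    else
        SDL_GL_DeleteContext(context);

    SDL_DestroyWindow(mainwindow);
    SDL_Quit();
    return 0;
}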

How do I Control Which Desktop C++ Graphics Output to on Raspberry Pi?

I am trying to run a programming club in my school, and it's not practical to physically connect the Pis to a keyboard, mouse and monitor, so they all auto-run VNC and we connect to the machines using UltraVNC. The programs are written in a shared directory and Eclipse C++ runs on the host; therefore all program output is viewed via VNC.
Everything was fine while programming in Python and when we started to use C++. However, I hit a brick wall when trying to get graphics to display. I could build a program that appeared to run, but it only gave terminal output; it would never display drawings on the screen. While trying to solve the problem I at one point connected a keyboard and mouse and noticed that they seemed to be recognised (the laser came on, Caps Lock toggled, etc.), but they didn't do anything when moved or typed on.
Eventually the penny began to teeter on the edge as I got increasingly confused as to why nobody else was having this problem, given that there seem to be a lot of people using openvg, and I began to wonder more about the keyboard/mouse issue.
I tried plugging the HDMI output into a monitor at home (the school ones are still analogue D-sub!) and, lo and behold, the physical keyboard and mouse worked. Then it got really strange!
Somehow I have two desktops running at the same time. The physical keyboard and mouse control one, and VNC controls the other. If I start a terminal window on the 'physical' desktop, it doesn't show up on the 'VNC' desktop and vice versa; they seem to be independent, although that's not quite true.
When I run the graphics executable on the 'physical' desktop, it works fine and can be controlled only with the physical keyboard. When I run it on the 'VNC' desktop, it can be controlled only with the VNC keyboard, but the output displays on the physical screen.
I really don't get this!
I kind of need to be able to run the programs over VNC, but I need to be able to tell the code I run which desktop to output to, as it seems to default to the wrong one. Actually, it would be preferable to get VNC to connect to the existing HDMI desktop rather than starting a new one, but I cannot find out how to tell TightVNC to do that.
The code is below, but I think the problem might be in the init() function, which is in a library, so it is probably better to get VNC onto the right desktop...
Thanks in advance for any help!
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

extern "C" {
#include "VG/openvg.h"
#include "VG/vgu.h"
#include "fontinfo.h"
#include "shapes.h"
}

using namespace std;

int main(void) {
    int width, height;
    VGfloat w2, h2, w;
    char s[3];

    init(&width, &height);                  // Graphics initialization
    w2 = (VGfloat)(width / 2);
    h2 = (VGfloat)(height / 2);
    w  = (VGfloat)width;

    Start(width, height);                   // Start the picture
    Background(0, 0, 0);                    // Black background
    Fill(44, 77, 232, 1);                   // Big blue marble
    Circle(w2, 0, w);                       // The "world"
    Fill(255, 255, 255, 1);                 // White text
    TextMid(w2, h2, "hello, world", SerifTypeface, width / 10); // Greetings
    End();                                  // End the picture

    fgets(s, 2, stdin);                     // Pause until [RETURN]
    finish();                               // Graphics cleanup
    exit(0);
}
See the last comment: I abandoned openvg and am using X Windows instead.
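A note on the X Windows route mentioned above: for an X-based program, the desktop it draws to is determined by the X display it connects to, selectable via the DISPLAY environment variable or an explicit display string (the openvg shapes library above likely bypasses X entirely and draws to the GPU's display layer, which would explain the output always landing on the physical screen). The snippet below is only a sketch of that idea; ':1' is a placeholder for whichever display the VNC server exports:
#include <X11/Xlib.h>
#include <cstdio>

int main()
{
    // NULL honours the DISPLAY environment variable (e.g. run as `DISPLAY=:1 ./prog`);
    // an explicit string such as ":1" targets that display directly.
    Display* display = XOpenDisplay(NULL);
    if (display == NULL)
    {
        std::fprintf(stderr, "Could not open X display\n");
        return 1;
    }

    std::printf("Connected to display %s, default screen %d\n",
                DisplayString(display), DefaultScreen(display));

    XCloseDisplay(display);
    return 0;
}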

Opening an X11 window for GL on a specific display

I'm taking over some legacy code that uses Xlib + glX to create its drawing windows. However, window creation fails when the display name is set to anything other than :0.0.
I was able to reproduce this behavior in a minimal example:
#include <X11/Xlib.h>
#include <GL/glew.h>
#include <GL/glx.h>

int main()
{
    Display* display = XOpenDisplay(":0.1");
    GLint vi_att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo* vi = glXChooseVisual(display, 0, vi_att);
    Window root = DefaultRootWindow(display);
    Colormap cmap = XCreateColormap(display, root, vi->visual, AllocNone);

    XSetWindowAttributes swa;
    swa.colormap = cmap;
    swa.event_mask = ExposureMask;
    Window window = XCreateWindow(display, root, 0, 0, 200, 400, 0,
                                  vi->depth, InputOutput, vi->visual,
                                  CWColormap | CWEventMask, &swa);

    GLXContext context = glXCreateContext(display, vi, NULL, GL_TRUE);
    glXMakeCurrent(display, window, context);
    XMapWindow(display, window);
    XFlush(display);
    return 0;
}
Executing this example I get a console message
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 78 (X_CreateColormap)
Serial number of failed request: 21
Current serial number in output stream: 23
Stepping through the various routines, I find that I do get a valid display and a valid visual. Problems arise at glXCreateContext.
To make things clear, :0.1 is a valid display (I've set up separate X screens for my monitors to test this), and, interestingly, it does not make a difference which display I execute the code from. At first I thought it wouldn't work to set up the window on a different display, but running the example with :0.0 from display :0.1 works fine. Running :0.1 from :0.1 does not.
More interestingly, passing NULL to XOpenDisplay and running it on the :0.1 display also produces the same error.
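One detail that may be worth double-checking in the minimal example, independent of the driver discussion below: the screen index passed to glXChooseVisual is hard-coded to 0, while DefaultRootWindow() refers to the display's default screen (screen 1 when the display string is ":0.1"). A visual chosen for one screen but used to create a colormap on another screen's root window can itself produce a BadMatch. A hedged sketch of keeping the two consistent:
#include <X11/Xlib.h>
#include <GL/glx.h>

int main()
{
    Display* display = XOpenDisplay(":0.1");
    if (display == NULL)
        return 1;

    // Use the same screen for the visual, the root window and the colormap.
    int screen = DefaultScreen(display);
    GLint vi_att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo* vi = glXChooseVisual(display, screen, vi_att);
    if (vi == NULL)
        return 1;

    Window root = RootWindow(display, screen);
    Colormap cmap = XCreateColormap(display, root, vi->visual, AllocNone);
    // ... window and context creation as in the original example
    (void)cmap;
    return 0;
}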
Having multiple screens indicates that your X server has been set up in Zaphod mode. Depending on which driver you're using and how graphics output has been configured, OpenGL may not work at all, or may only work on one of the screens.
Please post your /var/log/Xorg.0.log so that I can give you more details.
But I can tell you already that Zaphod mode and OpenGL are on difficult terms with most drivers and system configurations.
Update due to comment
Okay, given your Xorg.0.log it's no surprise that you can create OpenGL contexts on only one of the screens: you have only a single GPU, as indicated by the following lines (note the identical PCI Bus-ID):
[ 18.192] (II) NVIDIA(0): NVIDIA GPU GeForce GTS 450 (GF116) at PCI:1:0:0 (GPU-0)
…
[ 18.214] (II) NVIDIA(1): NVIDIA GPU GeForce GTS 450 (GF116) at PCI:1:0:0 (GPU-0)
and it is being used for multiple X screens with different outputs. That is simply not supported by the drivers, and it's perfectly well documented:
(…) windows cannot be dragged between X screens, hardware accelerated OpenGL cannot span the (…) X screens (…)
However, the actual question is: why, for bob's sake, are you using multiple-screen (=Zaphod) mode in the first place? There's only one situation in which doing this is sensible, and that is when you have multiple graphics cards in a single machine that you cannot interconnect (SLI or CrossFire, or different models or vendors) and you'd like to use them all together in a single multi-screen X display.
Apart from that, you should be using TwinView, because that does what you trivially expect from it.
Note that if your desire is to get more screen real estate by plugging several graphics cards into a box, you can use a combination of DMX (Distributed Multihead X) and Xpra or Chromium, but this requires some serious tinkering, and AFAIK nobody has documented the Xpra method to date (it could be a nice weekend project, though).

Use SDL inside Irrlicht

I know you can do the same in Irrlicht, but I want to use SDL code/functions to draw text and images inside Irrlicht (to handle 2D) and use Irrlicht to do the hardcore 3D work. How can I apply text or images from SDL to the Irrlicht engine? Can you show me simple code so that I can understand?
In SDL you can do something like this:
// Start by declaring the SDL video surface.
SDL_Surface *screen;

// Set the video mode:
screen = SDL_SetVideoMode(640, 480, 32, SDL_DOUBLEBUF | SDL_FULLSCREEN);
if (screen == NULL) {
    printf("Unable to set video mode: %s\n", SDL_GetError());
    return 1;
}

// Data, images and text can now be displayed using the declared surface "screen":
SDL_BlitSurface(my_image, &src, screen, &dest);
If you are using or targeting Windows, and are a little familiar with the WinAPI, you may be able to use SDL with Irrlicht by running both inside a Win32 window, using a combination of the SDL window ID hack (also see this thread) and running Irrlicht in a Win32 window. The only problem you may face is that running SDL in a Win32 window is extremely difficult and unpredictable.
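For reference, the window ID hack in SDL 1.2 amounts to exporting an existing native window handle through the SDL_WINDOWID environment variable before SDL_Init; the sketch below assumes a hypothetical HWND named hwnd taken from the window Irrlicht is rendering into:
#include <SDL.h>
#include <cstdio>
#include <cstdint>

// hwnd is a placeholder for the Win32 window handle Irrlicht was created in.
void attach_sdl_to_window(void* hwnd)
{
    // SDL 1.2 reads SDL_WINDOWID at init time and attaches its video surface
    // to that window instead of creating its own.
    static char buffer[64];
    std::snprintf(buffer, sizeof(buffer), "SDL_WINDOWID=%lu",
                  (unsigned long)(uintptr_t)hwnd);
    SDL_putenv(buffer);              // must happen before SDL_Init
    SDL_Init(SDL_INIT_VIDEO);
}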
You may also be able to achieve a similar result using GTK+ (for cross-platform purposes), but I have personally never succeeded in running SDL or Irrlicht in a GTK+ window.
Also, if you want a light graphics and media library like SDL, may I suggest SFML. It can run in Win32 and X11 windows without any problems and may work easily with Irrlicht.