I have an application which I usually run on an Nvidia graphics card. I thought I'd try running it on a Sandy Bridge Intel HD Graphics 3000.
However, when I run on the Intel hardware I get a "framebuffer not complete" error from the following initialization code:
glGenFramebuffers(1, &fbo_);
glBindFramebuffer(GL_FRAMEBUFFER_EXT, fbo_);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glDisable(GL_MULTISAMPLE_ARB);
// Error: "the object bound to FRAMEBUFFER_BINDING_EXT is not 'framebuffer complete'"
Any ideas why?
You need at least one color attachment for the framebuffer to be complete (before OpenGL 4.3, at least, which added support for framebuffers without attachments). Your code binds the FBO but never attaches anything to it.
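For example, attaching a color texture and then checking completeness could look roughly like this (a minimal sketch; the texture size and format here are just example values):
GLuint fbo = 0, colorTex = 0;
// Create a texture to serve as the color attachment.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 768, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Create the framebuffer and attach the texture as color attachment 0.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
// Only now should the framebuffer be complete.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error
}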
I recently switched my laptop's OS from Windows 10 to Fedora Linux. After this I tried to load and run my current C++ SFML project. However, when it attempts to load my geometry shader, I just get this:
Failed to create a shader: your system doesn't support geometry shaders (you should test Shader::isGeometryAvailable() before trying to use geometry shaders)
I know my system should support geometry shaders, as it worked just fine previously on Windows. My laptop has a quad-core AMD Ryzen 5 2500U with Radeon Vega Mobile graphics, using the amdgpu driver. Is this an issue with drivers? Could my software have caused this issue? (If so, let me know and I will edit this post.)
The result of sf::Shader::isAvailable() is true. The result of sf::Shader::isGeometryAvailable() is false.
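For reference, a minimal check along these lines reproduces those results (a sketch; the window setup here is just an example, the queries are what I'm actually calling):
#include <SFML/Graphics.hpp>
#include <iostream>

int main() {
    // Creating a window also creates the OpenGL context the queries below run against.
    sf::RenderWindow window(sf::VideoMode(800, 600), "Shader check");

    std::cout << std::boolalpha;
    std::cout << "Shaders available:          " << sf::Shader::isAvailable() << "\n";
    std::cout << "Geometry shaders available: " << sf::Shader::isGeometryAvailable() << "\n";

    // The context version SFML actually got from the driver.
    sf::ContextSettings settings = window.getSettings();
    std::cout << "GL context version: " << settings.majorVersion << "."
              << settings.minorVersion << std::endl;
    return 0;
}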
If anyone knows how to fix this issue it would be excellent.
I'm making a multiplayer game using SDL, but at some point I stopped being able to run two instances of it concurrently. The first instance runs without problems, but once the second one is launched, its render thread hangs. This manifests as a system-wide graphics freeze: I can no longer move the mouse, and nothing on the screen is updated inside or outside the SDL window(s). After a couple of seconds the render thread recovers, only to freeze again moments later. SDL does still manage to catch a quit event if I send one, and the program exits. The terminal window with the program's stdout is then updated; that's how I can tell the update thread was running fine, since there is a long interval where only its debugging output is present.
By removing pieces of code from the render procedure, I was able to determine that the three uncommented SDL calls below are what is causing the delay:
void Renderer::render() {
    SDL_SetRenderDrawColor(sdlRenderer, 0, 0, 0, 255);
    SDL_RenderClear(sdlRenderer);
    // for (auto target : targets) {
    //     target->render(this);
    //     // std::cout << "rendered object with sceneId " << target->target->sceneId << std::endl;
    // }
    // auto targetCopy = newTargets;
    // for (auto newTarget : targetCopy) {
    //     targets.push_back(newTarget);
    //     // std::cout << "adding render target" << std::endl;
    // }
    // newTargets.clear();
    SDL_RenderPresent(sdlRenderer);
}
What could be causing this behavior?
For further information, this is the SDL initialization code (I also tried it without the accelerated renderer flag):
SDL_Init(SDL_INIT_VIDEO);
int fullscreenType = 0; // SDL_WINDOW_FULLSCREEN_DESKTOP;
int windowFlags = fullscreenType | SDL_WINDOW_OPENGL | SDL_WINDOW_BORDERLESS |
SDL_WINDOW_ALLOW_HIGHDPI;
int rendererFlags = SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC;
SDL_Window *window =
SDL_CreateWindow("Game", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
1000, 1000, windowFlags);
SDL_Renderer *sdlRenderer = SDL_CreateRenderer(window, -1, rendererFlags);
SDL_RenderSetLogicalSize(sdlRenderer, 1000, 1000);
IMG_Init(IMG_INIT_PNG);
I'm running Manjaro with GNOME on Wayland, on an Acer Swift 3. Output of glxinfo | grep OpenGL:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 620 (Kaby Lake GT2)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 17.3.5
OpenGL core profile shading language version string: 4.50
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 17.3.5
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 17.3.5
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:
On X.Org this behavior is slightly different (I can move the mouse around but everything is unresponsive), but the same underlying freeze issue is present.
An educated guess, based mostly on the fact that you're on Intel graphics: no proper buffer swap command is being issued. The Intel drivers are a little bit annoying, because they rely entirely on the buffer swap to flush and synchronize the presentation queue.
What I suppose is happening is that your render loops are running unthrottled and push a lot of frames per display refresh interval. But why would this happen if you're asking for a double-buffered window? The answer: Wayland. The Wayland model (which actually makes a lot of sense!) is that clients render into off-screen surfaces managed by the compositor, and the compositor itself is responsible for doing the "composition" (hence the name), i.e. putting it all together on the screen and synchronizing with the display. However, for this to work, the clients' render results must be ready before composition starts.
Obviously an off-screen surface doesn't swap, so any "buffer swap" or synchronization request must be forwarded to the compositor. And if that doesn't work properly, trouble begins.
With just one process constantly pushing frames it more or less gets rate limited; but with several processes stuffing the queue, it looks like most of the GPU time is consumed by the clients, with the compositor starving for a chance to insert a buffer swap that could flush/sync the presentation queue.
For a quick check, add a usleep(20000) right after SDL_RenderPresent.
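In code, that quick check would look something like this (a sketch of the render function from the question, with the throttle added):
#include <unistd.h>  // usleep

void Renderer::render() {
    SDL_SetRenderDrawColor(sdlRenderer, 0, 0, 0, 255);
    SDL_RenderClear(sdlRenderer);
    // ... draw the render targets here ...
    SDL_RenderPresent(sdlRenderer);
    // Crude throttle: sleep ~20 ms so the compositor gets a chance to
    // flush/sync the presentation queue between frames.
    usleep(20000);
}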
I have the following code to begin a frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Without the last line the program runs (but obviously does not blend); with it, it segfaults. GDB is not a lot of help, as it looks like the stack is corrupted. After the segfault, running:
set $pc = *(void**)$rsp
set $rsp = $rsp+8
points to the ending brace of the function as the last frame.
I have a small suspicion that this is a bug in the driver, but I couldn't find a bug report on their tracker. The driver is fglrx-updates running on Ubuntu. glxinfo gives:
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon R9 200 Series
OpenGL core profile version string: 4.3.13399 Core Profile Context 15.201.1151
OpenGL core profile shading language version string: 4.40
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
Okay, I still don't really know why this fixes it, but the segfault was caused by the order in which the object files were specified in the linking command. Specifying the object file that loads the OpenGL function pointers before the file that uses those functions makes everything work nicely.
I'm developing a 3D stereoscopic OpenGL app specifically for Windows 7 and nVidia Quadro K5000 cards. Rendering the scene from left and right-eye perspectives using glDrawBuffer(GL_BACK_LEFT) and glDrawBuffer(GL_BACK_RIGHT) works fine, and the 3D effect is displayed nicely.
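The per-frame stereo rendering is essentially this (a minimal sketch; setLeftEyeProjection(), setRightEyeProjection(), drawScene(), and hdc are placeholders for my actual code):
// Render the left eye into the left back buffer.
glDrawBuffer(GL_BACK_LEFT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
setLeftEyeProjection();   // placeholder: set up the left-eye camera
drawScene();              // placeholder: draw the scene

// Render the right eye into the right back buffer.
glDrawBuffer(GL_BACK_RIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
setRightEyeProjection();  // placeholder: set up the right-eye camera
drawScene();

// Present both eyes at once.
SwapBuffers(hdc);         // hdc: the window's device context (placeholder)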
While this works, I'd like to use nVidia's nSight Graphics local debugging. However, I get the error "Cannot enter frame debugging. nSight only supports frame debugging for ... OpenGL 4.2. Reason: glDrawBuffer(bufs[i] = 0x00000402)"
If the calls to glDrawBuffer are removed, nSight local debugging works.
Going through the OpenGL 4.2 spec, DrawBuffer is described in section 4.2.1.
So, two questions:
1) Is there some other way (besides DrawBuffer) to specify BACK_RIGHT or BACK_LEFT buffers for drawing to quad-buffers?
2) Is nSight capable of doing frame-level debugging on quad-buffered stereoscopic setups? If so, how?
I use glGetTexImage to read my texture back from the graphics card, but for some reason it doesn't return anything on some cards if miplevel > 0.
Here is the code I'm using to get the image:
glGetTexImage(GL_TEXTURE_2D, miplevel, GL_RGB, GL_UNSIGNED_BYTE, data);
Here is the code I use to check which method to use for mipmapping:
ext = (char*)glGetString(GL_EXTENSIONS);
if (strstr(ext, "SGIS_generate_mipmap") == NULL) {
    // use gluBuild2DMipmaps()
} else {
    // use GL_GENERATE_MIPMAP
}
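On the GL_GENERATE_MIPMAP branch, the texture setup is roughly this (a sketch; textureId, width, height, and pixels stand in for my actual variables):
glBindTexture(GL_TEXTURE_2D, textureId);
// Ask the driver to (re)generate all mip levels automatically whenever level 0 is uploaded.
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);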
So far the extension check has worked properly, and it reports that GL_GENERATE_MIPMAP is supported on the ATI cards listed below.
Here are the tested cards:
ATI Radeon 9550 / X1050 Series
ATI Mobility Radeon HD 3470
ATI Radeon X700
ATI Radeon HD 4870
ATI Radeon HD 3450
At the moment I am taking mip level 0 and generating the mipmaps with my own code. Is there a better fix for this?
Also, glGetError() returns 0 on all cards, so no error occurs; it just doesn't work. Probably a driver problem?
I'm still looking for a better fix than resizing it myself on the CPU...
Check the error that glGetTexImage is reporting. It most probably tells you what the problem is.
Edit: Sounds like the joys of using ATI's poorly written OpenGL drivers. Assuming your drivers are up to date, your only real options are to use an nVidia card, work around it, or accept that it won't work. It might be worth hassling ATI about it, but they will most likely do nothing, alas.
Edit 2: On the problem cards, are you using GL_GENERATE_MIPMAP? It might be that you can't grab the mip levels unless they are explicitly built...? i.e. try gluBuild2DMipmaps() for everything.
Edit 3: The thing is, though, it "may" be the cause of your problems. It doesn't sound unlikely to me that the ATI card reads the texture back from a local copy, whereas if you use automatic mipmap generation it is done entirely on the card and the levels are never copied back. Try explicitly building the mipmaps locally and see if that fixes your issues. It may not, but you do need to try these things or you will never figure out the problem. Alas, trial and error is all that works with problems like this. This is why a fair few games have large databases of driver name, card name, and driver version to decide whether a feature will or won't work.
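For reference, the explicit path would look roughly like this (a sketch; textureId, width, height, pixels, miplevel, and data stand in for your variables):
glBindTexture(GL_TEXTURE_2D, textureId);
// Build every mip level explicitly at upload time instead of relying on GL_GENERATE_MIPMAP.
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                  GL_RGB, GL_UNSIGNED_BYTE, pixels);
// Read a specific level back; 'data' must be large enough for that level's dimensions.
glGetTexImage(GL_TEXTURE_2D, miplevel, GL_RGB, GL_UNSIGNED_BYTE, data);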