glGenerateMipmap raises GL_INVALID_OPERATION on specific platform [closed] - opengl

I call glGenerateMipmap(GL_TEXTURE_2D) on a texture that is used as an FBO rendering target. It works well on several Windows computers, but when I test it on a Linux laptop with an Intel HD 3000 graphics card, it raises a GL_INVALID_OPERATION error.
I checked the program with AMD CodeXL. When the error is raised, GL_TEXTURE_BINDING_2D has the correct value, and the bound texture has the correct properties:
# of mipmaps: 10 levels
dimensions: 600x520
internal format: GL_DEPTH_STENCIL
GL_TEXTURE_MIN_FILTER: GL_LINEAR_MIPMAP_LINEAR
GL_TEXTURE_MIN_LOD: -1000
GL_TEXTURE_MAX_LOD: 1000
GL_TEXTURE_BASE_LEVEL: 0
GL_TEXTURE_MAX_LEVEL: 1000

It seems to be caused by generating mipmaps on a DEPTH24_STENCIL8 texture. As a temporary workaround I disabled mipmap generation for all of these depth-stencil textures, and all warnings of this kind disappeared.
A non-power-of-two texture size does not look like the cause, because I have many other textures of the same size that work fine.
I already knew that the Intel HD Graphics Linux drivers have many limitations, such as no support for #version 150 GLSL. Now I've found one more :)
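Roughly, the workaround is a guard like the following sketch; the Texture struct and the helper name are made up for illustration, and only the GL calls are real:

struct Texture {
    GLuint id;
    GLenum internalFormat;   // e.g. GL_RGBA8 or GL_DEPTH24_STENCIL8, stored at creation time
};

// Skip mipmap generation for depth-stencil textures, which triggered
// GL_INVALID_OPERATION on this Intel Linux driver.
void generateMipmapsIfSupported(const Texture& tex)
{
    if (tex.internalFormat == GL_DEPTH24_STENCIL8 ||
        tex.internalFormat == GL_DEPTH_STENCIL) {
        return;
    }
    glBindTexture(GL_TEXTURE_2D, tex.id);
    glGenerateMipmap(GL_TEXTURE_2D);
}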

Your texture dimensions are not a power of two; see https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glGenerateMipmap.xml
Errors
GL_INVALID_OPERATION is generated if either the width or height of the zero level array is not a power of two.
To fix it, use a power-of-two texture, e.g. 512x512.
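If you want to keep mipmaps on drivers with this restriction, one option is to allocate the render target rounded up to the next power of two and only use a sub-rectangle of it. A small sketch of the rounding; the helper name is just for illustration:

#include <cstdint>

uint32_t nextPowerOfTwo(uint32_t v)
{
    // Classic bit-twiddling round-up; returns v unchanged if it is
    // already a power of two.
    v--;
    v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
    v |= v >> 8;  v |= v >> 16;
    return v + 1;
}

// nextPowerOfTwo(600) == 1024 and nextPowerOfTwo(520) == 1024, so the
// 600x520 target would be allocated as 1024x1024 and only the 600x520
// sub-rectangle rendered to and sampled from.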

Related

Strange behavior with texture rendering when switching from Intel to nVidia graphics card [closed]

I'm developing a little image visualizer just to learn some OpenGL graphics.
I'm using Dear ImGui for the GUI.
I'm struggling with an issue that happens when I switch from the Intel graphics card to the nVidia graphics card.
I'm working on a Dell precision 7560 laptop running Windows 10.
I'm rendering an image texture to a framebuffer object (FBO) and then displaying the FBO texture in an ImGui::Image widget.
What happens is that with the Intel graphics card everything works fine:
texture displayed with the Intel graphics card
but as soon as I switch to nVidia the texture is rendered incorrectly:
texture displayed with the nVidia graphics card
I'm using the most basic shader code needed to deal with textures.
Has anyone experienced the same?
I've checked and rechecked the code but everything seems OK, so I suspect that something Intel handles by default has to be specified explicitly for nVidia...
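Without the actual code it is hard to say more, but for comparison here is a minimal sketch of the FBO-to-ImGui::Image path; the handle names and sizes are placeholders, and the ImTextureID cast assumes the classic void*-based ImTextureID:

// Create a color texture and attach it to an FBO (names are placeholders).
GLuint fbo = 0, colorTex = 0;
int texW = 800, texH = 600;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texW, texH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
// Drivers differ in how forgiving they are, so always check completeness.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle the error */
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Each frame: bind the FBO, set glViewport(0, 0, texW, texH), draw the
// image, unbind, then show the texture. The flipped UVs account for
// OpenGL's bottom-left texture origin.
ImGui::Image((ImTextureID)(intptr_t)colorTex,
             ImVec2((float)texW, (float)texH),
             ImVec2(0, 1), ImVec2(1, 0));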

Inconsistent OpenGL rendering bug with 3D objects [closed]

So I've been hammering away at my code for a while, trying to resolve this bug, with absolutely no progress being made, mostly because of how utterly random and unpredictable the bug is.
This is how the scene looks when everything is working fine:
And when the bug kicks in:
As you can see, the bug only prevents my cubemap skybox, model, and light-source mesh from rendering; the orthographically projected 2D elements are just fine.
I've ruled out shaders, as even the simplest of shader programs still experience this problem. I use ASSIMP to load in mesh files and SOIL to load textures, but up until about a day ago they have worked flawlessly.
There is absolutely no pattern to when this happens; the only way to resolve it is to keep restarting the program until the desired output appears, which is obviously not a good solution. I'm at a complete loss and need help, as OpenGL doesn't report any error. I don't know where to even begin looking for a solution. Could EBOs or framebuffers cause this? I have started implementing those recently.
I have searched far and wide for anything that could be related to this, but I have come up with nothing so far.
TL;DR: 3D objects render on some runs and not on others; possibly related to recently implemented framebuffers and EBOs.
UPDATE:
It turns out that the mouse-look code in my Camera class was causing odd issues: the calculated change in camera angles was sometimes set to an extraordinarily large negative value. Turning mouse look off permanently resolved the issue.
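A common way to guard against that kind of runaway delta is to skip the first mouse sample and clamp the pitch. A sketch with made-up member names, not the asker's Camera class:

#include <algorithm>

struct Camera {
    float yaw = -90.0f, pitch = 0.0f;
    double lastX = 0.0, lastY = 0.0;
    bool firstMouse = true;

    void onMouseMove(double x, double y)
    {
        if (firstMouse) {              // avoid one enormous initial delta
            lastX = x; lastY = y;
            firstMouse = false;
            return;
        }
        const float sensitivity = 0.1f;
        float dx = (float)(x - lastX) * sensitivity;
        float dy = (float)(lastY - y) * sensitivity;   // window y grows downward
        lastX = x; lastY = y;

        yaw += dx;
        pitch = std::clamp(pitch + dy, -89.0f, 89.0f); // keep the pitch sane
    }
};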

Intel OpenGL Driver bug?

Whenever I try to render my terrain with point lights, it only works on my Nvidia GPU and driver, and not on the integrated Intel GPU and driver. I believe the problem is in my own code and that the Nvidia driver is hiding it, since I've heard Nvidia's OpenGL implementation is lenient and will let you get away with things you're not supposed to do. And since I get no errors, I need help debugging my shaders.
Link:
http://pastebin.com/sgwBatnw
Note:
I use OpenGL 2 and GLSL Version 120
Edit:
I was able to fix the problem on my own. To anyone with similar problems: it was not because I used the regular transformation matrix, because when I did that I set the normal's w value to 0.0. The problem was that with the Intel integrated graphics there is apparently a maximum number of array elements in a uniform (or a maximum uniform size in general), and I was going over that limit, but the driver was deciding not to report it. Another thing wrong with the code was that I was doing implicit type conversion (dividing vec3s by floats), so I corrected those things and it started to work. Here's my updated code.
Link: http://pastebin.com/zydK7YMh
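For anyone hitting the same wall, the relevant limits can be queried up front with standard GL 2.0 queries; a minimal sketch (counts are in individual float components, so a vec3 array element costs 3):

// Query the driver's uniform limits so you can tell whether a uniform
// array will exceed them on a given GPU.
GLint maxVertexComponents = 0, maxFragmentComponents = 0;
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &maxVertexComponents);
glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &maxFragmentComponents);
printf("max vertex uniform components:   %d\n", maxVertexComponents);
printf("max fragment uniform components: %d\n", maxFragmentComponents);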

finding allocated texture memory on intel

As a follow-up to a different question (opengl: adding higher resolution mipmaps to a texture), I'd like to emulate a feature of gDEBugger: finding the total size of the currently allocated textures, which would be used to decide between the different ways of solving that question.
Specifically, I'd like to figure out how gDEBugger fills in the info in its view/viewers/"graphic and compute memory analysis viewer" window, in particular where it shows the sum of the sizes of all currently loaded textures.
For Nvidia cards it appears I can call glGetIntegerv(GL_GPU_MEM_INFO_CURRENT_AVAILABLE_MEM_NVX, &mem_available); just before starting the texture test and just after, take the difference, and get the desired result.
For ATI/AMD it appears I can call wglGetGPUInfoAMD(0, WGL_GPU_RAM_AMD, GL_UNSIGNED_INT, 4, &mem_available); before and after the texture test to get the wanted result.
For Intel video cards, however, I can't find the right keywords to put into search engines to figure this out.
So, can anyone help me figure out how to do this with Intel cards, and confirm the methods I'll use for ATI/AMD and Nvidia cards?
Edit: it appears that for AMD/ATI cards what I wrote earlier might be for total memory; for currently free memory I should instead use glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, &mem_avail);
Edit 2: for reference, here's what seems to be the most concise and precise source for what I wrote about the ATI/AMD and Nvidia cards: http://nasutechtips.blogspot.ca/2011/02/how-to-get-gpu-memory-size-and-usage-in.html
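A sketch of how those two queries might be wrapped behind extension checks; the enum values come from the NVX_gpu_memory_info and ATI_meminfo extension specs in case your GL headers don't define them, and as far as I know there is no comparable Intel query, so the fallback is simply "unknown":

#ifndef GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif
#ifndef GL_TEXTURE_FREE_MEMORY_ATI
#define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
#endif

#include <cstring>

// Returns available video memory in KiB, or -1 if no query is supported.
long queryAvailableVideoMemoryKiB()
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    if (ext && strstr(ext, "GL_NVX_gpu_memory_info")) {
        GLint kib = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &kib);
        return kib;
    }
    if (ext && strstr(ext, "GL_ATI_meminfo")) {
        GLint info[4] = {0};   // info[0] = total free memory in the pool, KiB
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, info);
        return info[0];
    }
    return -1;                 // e.g. Intel: no comparable query available
}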

Multi-headed display system [closed]

What tools, APIs and libraries are out there that I could use to create a system capable of rendering high-resolution 3D scenes in real time on a display made of 4, 8, 9, 16, etc. screens/projectors? For a setup with 8 projectors, should I go for a clustered solution or stay with a single node with 4 dual-headed video cards? Does anyone have experience with that?
Equalizer is probably one of the better solutions you'll find.
It's specifically designed for splitting up renders and distributing them across displays.
Description:
Equalizer allows the user to scale rendering performance, visual quality and display size. An Equalizer-based application runs unmodified on any visualization system, from a simple workstation to large scale graphics clusters, multi-GPU workstations and Virtual Reality installations.
Example usage of Equalizer (image source: equalizergraphics.com).
I've worked on projects trying to do similar things without Equalizer, and I can honestly say it was pretty bad; we only barely got it working. After finding Equalizer later, I can't imagine how much easier it would have been with such a tool.
You can use Xinerama or XRandR when working with X11/Xorg. But to quote Wikipedia on Xinerama:
In most implementations, OpenGL (3D) direct-rendering only works on one of the screens. Windows that should show 3D graphics on other screens tend to just appear black. This is most commonly seen with 3D screen savers, which show on one of the screens and black on the others. (The Solaris SPARC OpenGL implementation allows direct rendering to all screens in Xinerama mode, as does the nvidia driver when both monitors are on the same video card.)
I suggest you read the Wikipedia article first.
You should have a look at the "AMD Radeon HD 5870 Eyefinity 6-edition" graphics card. It supports output to six displays simultaneously and lets you set several driver options regarding the arrangement of the outputs (3 in a row, 2x3 horizontal/vertical, etc.).
Regarding APIs: with a card like this (but also with a TripleHead2Go) you get a single virtual canvas, which supports full 3D acceleration without performance loss (much better than an extended desktop). AMD calls this a Single Large Surface (probably equivalent to what NVidia calls a horizontal/vertical span). The caveat is that all outputs need to have the same resolution, frame rate and color depth. Such a surface could have a resolution of 5760x3240 or higher, depending on settings, so it's a good thing that the 5870 is so fast.
Then, in your application, you render to this large virtual canvas (using OpenGL, Direct3D or some other way) and you're done... except that you did not say whether the displays will be at an angle to each other or in a flat configuration. In the latter case you can just use a single perspective camera and render to the entire backbuffer. But if you have more of a 'surround' setup, then you need multiple cameras in your scene, all looking out from the same point.
The fastest loop to do this rendering is then probably:
for all objects
    set textures, shader and render state
    for all viewports
        render object to viewport
and not
for all viewports
    for all objects
        set textures, shader and render state
        render object to viewport
because switching objects causes the GPU to lose much more useful information from its state and caches than switching viewports.
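As a concrete sketch of the recommended ordering in C++ (Object, Viewport and the helper functions are placeholders; only glViewport is a real GL call):

#include <vector>

struct Object   { /* mesh, textures, shader ... (placeholder) */ };
struct Viewport { int x, y, width, height; /* plus its own camera (placeholder) */ };

void renderFrame(const std::vector<Object>& objects,
                 const std::vector<Viewport>& viewports)
{
    for (const Object& obj : objects) {
        bindTexturesShaderAndState(obj);       // expensive state change, once per object
        for (const Viewport& vp : viewports) {
            glViewport(vp.x, vp.y, vp.width, vp.height);  // cheap state change
            setCameraForViewport(vp);          // upload this viewport's view/projection
            drawObject(obj);
        }
    }
}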
You could contact AMD to check whether it's possible to add two of these cards (power supply permitting) to a single system to drive up to 12 displays.
Note that not all configurations are supported (e.g. 5x1 is not, according to the FAQ).
A lot of my experience with this was gathered during the creation of the Future Flight Experience project, which used three projectors (each with its own camera in the 3D scene), a dual Nvidia GTX 280 in SLI, and a Matrox TripleHead2Go on Windows XP.
I use one of these nifty TripleHead2Go units at home on my gaming rig to drive 3 displays from one video card (even in Vista). Two displays with a bezel in the middle is kind of a bummer for gaming.
(image source: maximumpc.com)
I found out about them because we were looking at using several of them at work to drive a system of ours that has about 9 displays. I think for that we ended up going with a system with 5 PCI-X slots and a dual-head card in each. If you have trouble getting that many PCI slots on a motherboard, there are PCI-X bus expansion systems available.
I know that the pyglet OpenGL wrapper (http://www.pyglet.org) for Python has multi-platform multi-monitor support; you might want to look at their source code to figure out how it is implemented.