When using software rendering, or any graphics card in our development office, our little coloured GL_POINTS render in exactly the colour we expect. Out in the field, some users report points rendered in the wrong colours. Getting them to turn off hardware acceleration fixed their problem, so we've been putting the whole thing down to a third-party issue and using a workaround (tiny pixel-sized rectangles whose colour remains unproblematic). The snag: we are taking a huge performance hit.
My question is, has anyone else had a similar issue, and, if so, did they come up with a way to keep their GL_POINTS and get the colour right?
I haven't encountered a similar problem, but the solution seems simple: get the same card your users are using and reproduce their environment.
Maybe the problem is something as mundane as old drivers. I don't see what else could render the wrong colour.
I'm playing around with OpenGL (3.3 on OS X Mavericks), and I'm getting random parts of my screen being rendered to my window. I'm assuming that's probably clear evidence that I'm doing SOMETHING wrong... but what? Is it something with uninitialized values in a buffer? Am I using a buffer I didn't create? Some weird memory management thing? Or something like that?
Sorry if the question is a bit vague; I'm just betting that this is one of those bugs that OpenGL vets will hear and go "Of course! That means that {insert thing I'm doing wrong}".
Here's a screen shot to get an idea for what I'm talking about:
The black circle is what I'm attempting to render; the upside-down Google logo is what I don't understand. Also, every time I run it I get different random textures.
Thanks! And I'd be happy to supply more details, I just don't know what other relevant info to include...
Thanks to @Andon M. Coleman (in the comments above), I've realized that this was simply a result of me not properly clearing the color buffer.
Specifically, my pipeline involved rendering to a texture and then blitting that texture to the screen. I WAS correctly clearing the SCREEN'S color buffer, but I never cleared the intermediate framebuffer's color buffer.
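For anyone hitting the same thing, here's a minimal sketch of the fix, assuming a render-to-texture pipeline like mine (fbo, drawScene and blitToScreen are hypothetical names):

/* 1. Render the scene into the intermediate framebuffer. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);             /* hypothetical FBO handle */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); /* the clear that was missing */
drawScene();

/* 2. Blit (or draw) that texture to the default framebuffer. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);                       /* this one was already in place */
blitToScreen();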
I have a question about picking. I found a really neat way to do picking: render a single frame with lighting and everything else turned off and a unique colour for each object, then simply find what colour the pixel under the mouse pointer is. It works fine, and from what I understand this has also been used by others in the past for object picking. I thought rendering a single frame that way would not be noticeable; however, it is noticeable. I should say that I am using the JOGL bindings.
So I was wondering: what factors affect the performance in this case? I read in Apple's dev guide NOT to use glBegin and glEnd, and to use buffers instead. Is that right? Am I right to think that it is possible to render a single frame without anyone noticing the flicker?
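For reference, a minimal sketch of the colour-picking pass described above, in C-style OpenGL (the JOGL calls map one-to-one; numObjects, drawObject, mouseX/mouseY and windowHeight are assumptions). The key point is to render to the back buffer and read the pixel back before swapping, so the picking frame is never displayed:

/* Render each object with a unique colour, lighting and texturing off. */
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (int i = 0; i < numObjects; ++i) {
    int id = i + 1;                                /* 0 is the background */
    glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
    drawObject(i);
}
glFlush();

/* Read the pixel under the mouse; GL's origin is bottom-left. */
unsigned char p[3];
glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1,
             GL_RGB, GL_UNSIGNED_BYTE, p);
int picked = p[0] | (p[1] << 8) | (p[2] << 16);    /* 0 means nothing was hit */

/* Do NOT swap buffers here; just render the normal frame afterwards. */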
I am trying to visualize a CAD geometry where GL_QUADS is used for the geometry and glutBitmapCharacter to annotate it with text.
The GL_QUADS hide the text partially (e.g. 33, 32, ... here) for some view orientations (picture 1).
If I use glDisable(GL_DEPTH_TEST) to get the text displayed properly, then the text that is supposed to annotate the back surfaces is also displayed (picture 2).
My objective is to annotate the visible front surfaces without the text being obscured, while not showing the annotations that belong to the back surfaces.
(I am able to solve this by slightly offsetting the annotation along the quad's normal, but that causes me other issues in my program, so I'd prefer a different solution.)
Could somebody please suggest a solution?
Well, as I expect you already know, it looks like the text is getting cut off because of the way it's positioned/oriented: it is drawn from a point, right to left on the screen.
If you don't want to offset it (which you already mentioned, but which I still suggest as the simple solution), then one way might be to rotate the text the same way the object is being rotated. This would, I'd expect, simply be a matter of drawing the text at the same place you draw each quad, thus using the same matrix. Of course, the text won't be as legible then. This solution also requires using a different object for rendering the text, such as FreeType fonts.
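If it helps, a rough sketch of the rotate-with-the-quad idea, assuming quadTransform is the quad's own modelview matrix and drawTextQuad is a hypothetical routine that renders texture-mapped glyphs (bitmap fonts can't be rotated, hence the FreeType suggestion):

glPushMatrix();
glMultMatrixf(quadTransform);         /* same transform as the quad */
glTranslatef(labelX, labelY, 0.01f);  /* small lift off the surface */
drawTextQuad("33");                   /* hypothetical textured-text call */
glPopMatrix();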
EDIT 2: another solution would be texture-mapped text
Could somebody please suggest a solution?
You need to implement a collision detection engine.
If the point in 3D space at which the label must be displayed is not obscured, render the text with the depth test disabled. This will fix your problem completely.
As far as I can tell, there's no other way to solve the problem if you want to keep the letters oriented towards the viewer: no matter what you do, there will always be a good chance of them being partially obscured by something else.
Since you need a very specific kind of collision detection (detecting the visibility of a point), you could try to solve this using the selection buffer. On the other hand, ray/triangle collision (see gluUnProject/gluProject) isn't too hard to implement, although on complex scenes things will quickly get more complicated and you'll need to implement a scene graph and use algorithms similar to octrees.
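As a concrete starting point for the point-visibility test, one simple approach (a sketch, not a full engine) is to project the label's anchor point and compare it against the depth buffer after the scene has been drawn; the epsilon here is an assumption you'd tune per scene:

/* Returns non-zero if the 3D point is not hidden by scene geometry. */
int pointVisible(double x, double y, double z)
{
    GLdouble model[16], proj[16];
    GLint view[4];
    GLdouble wx, wy, wz;
    GLfloat depth;

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    /* Project the point into window coordinates. */
    if (!gluProject(x, y, z, model, proj, view, &wx, &wy, &wz))
        return 0;

    /* Compare against what the scene already wrote to the depth buffer. */
    glReadPixels((GLint)wx, (GLint)wy, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    return wz <= depth + 1e-4;  /* epsilon: tune per scene */
}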
We've been creating several half-transparent 3D cubes in an OpenGL scene, which displays very well on Windows 7 and Fedora 15 but looks quite awful on a Meego system.
This is what it looks like on my Fedora 15 system:
This is what it looks like on Meego. We changed the colour of the lines ourselves; otherwise the cubes you see would look even more pathetic:
The effect is implemented just by using the normal glColor4f function, with transparency obtained just by setting the alpha value. How could it end up like this?
Both freeglut and openglut have been tried on the Meego system and failed to display any better.
I've even tried to use an engine like Irrlicht to implement this instead, but there would be nothing but black on the screen when the zBuffer argument of the beginScene method was set to false (and normal rendering when it's true, but that is not what we want).
This should not be a problem with the graphics card or the driver, because we've seen a 3D game with a transparent ball run on the very same netbook and system.
We have failed to find the reason. Could anyone give any help on why this might be happening?
It sounds as if you may be relying on default settings (or behavior), which may be different between platforms.
Are you explicitly setting any of OpenGL's blend properties, such as glBlendFunc? If you are, it may help to post the relevant code that does this.
One of the comments mentioned sorting your transparent objects. If you aren't, that's something you might want to consider to achieve more accurate results. In either case, that behavior should be the same from platform to platform so I would have guessed that's not your issue.
Edit:
One other thought. Are you setting glCullFace? It could be that your transparent faces are being culled because of your vertex winding.
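For completeness, a sketch of pinning this state down explicitly instead of relying on per-platform defaults (the blend function shown is the common alpha-blending choice, not necessarily what your Windows/Fedora builds were using; drawTransparentCubesSorted is a hypothetical call):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* set one explicitly */
glDisable(GL_CULL_FACE);                           /* rules out winding issues */

/* Draw opaque geometry first, then the transparent cubes sorted
   back-to-front with depth writes off: */
glDepthMask(GL_FALSE);
drawTransparentCubesSorted();
glDepthMask(GL_TRUE);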
Both freeglut and openglut have been tried on the Meego system and failed to display any better.
Those are just simple windowing frameworks and have no effect whatsoever on the OpenGL execution.
Somewhere in your blending code you're messing up. From the looks of the correct rendering I'd say your blend function there is glBlendFunc(GL_ONE, GL_ONE), while on Meego it's something like glBlendFunc(GL_SRC_ALPHA, GL_ONE).
When I'm using the following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 6);
and I then enable multisampling, I notice that my program no longer cares about the max mip level.
Edit: It renders the last mip levels as well; that is the problem, I don't want them to be rendered.
Edit 2: Screenshots. This is the world map zoomed out a lot, using a low angle to show the effect in the worst possible way; there is also a water plane rendered under the map, so there is no possibility of black pixels coming from anywhere other than the map textures:
http://img511.imageshack.us/img511/6635/multisamplingtexturelim.png
Edit 3: I tested and confirmed that it doesn't forget the mip limits at all, so it does follow my GL_TEXTURE_MAX_LEVEL setting. ...So the problem isn't mipmap related, I guess...
Edit 4: All those pics should look like the top-right corner pic (just with smoother edges, depending on the multisampling). But apparently there's something horribly wrong in my code. I have to use mipmaps; the mipmaps aren't the problem, they work perfectly.
What am I doing wrong, and how can I fix this?
Ok. So the problem was not TEXTURE_MAX_LEVEL after all. Funny how a simple test helped figure that out.
I had 2 theories that were about the LOD being picked differently, and both of those seem to be disproved by the solid color test.
Onto a third theory then. If I understand your scene correctly, you have a model that's using a texture atlas, and what we're observing is that some polygons that should fetch from a specific item of the atlas actually fetch from a different one. Is that right?
This can be explained by the fact that a multisampled fragment usually gets sampled at the middle of the pixel, even when that center is not inside the triangle that generated the sample. See the bottom of this page for an illustration.
The usual way to get around that is called centroid sampling (this page has nice illustrations of the issue too). It forces the sampling point back inside the triangle.
Now the bad news: I'm not aware of any way to turn on centroid sampling outside of the programmable pipeline, and you're not using it. Do you think you want to switch, to get access to that feature?
Edit to add:
Also, not using texture atlases would be a way to work around this. The reason it is so visible is because you start fetching from another part of the atlas with the "out-of-triangle" sampling pattern.
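If you do decide to move to shaders, the change itself is small. A sketch, assuming GLSL 1.20 and hypothetical names (uv, atlas); the centroid qualifier is the whole trick:

/* The 'centroid' qualifier keeps the interpolated UV inside the
   triangle for multisampled fragments. */
const char *vsSrc =
    "#version 120\n"
    "centroid varying vec2 uv;\n"
    "void main() {\n"
    "    uv = gl_MultiTexCoord0.xy;\n"
    "    gl_Position = ftransform();\n"
    "}\n";
const char *fsSrc =
    "#version 120\n"
    "centroid varying vec2 uv;\n"
    "uniform sampler2D atlas;\n"
    "void main() { gl_FragColor = texture2D(atlas, uv); }\n";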
Also check what you have set for the MIN_FILTER:
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, ... );
Try the different settings (a list is here).
However, if you're dissatisfied with the results of gluBuild2DMipmaps I advise you to look at alternatives:
glGenerateMipmap/glGenerateMipmapEXT (yes, it works without FBOs)
SGIS_generate_mipmap extension (widely supported)
The latter in particular is highly customizable. And, something that was not mentioned: this extension is fired up by setting GL_GENERATE_MIPMAP to true. It is automatic, so you don't need to redo the calculation when the data changes.
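A sketch of that route (GL_GENERATE_MIPMAP has been core since OpenGL 1.4; tex, w, h and pixels are assumed to exist):

glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);

/* Any later upload to level 0 regenerates the whole mip chain. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);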
You should enable multisampling through your application, not the NVIDIA control panel, if you want your rendering to always have it. That might even fix your issue.
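If you're on a GLUT-style setup, requesting multisampling from the application looks roughly like this (a sketch; whatever windowing path you actually use has an equivalent):

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_MULTISAMPLE);
glutCreateWindow("scene");
glEnable(GL_MULTISAMPLE);   /* GL_MULTISAMPLE_ARB on older headers */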
As for the GL_TEXTURE_MAX_LEVEL setting being ignored when using the control-panel multisampling, it sounds like a driver bug/feature. It's weird, because this setting can be used to limit what you actually load into the texture (the so-called texture completeness criteria). What if you don't load the lowest mipmap levels at all? What gets rendered?
Edit: From the picture you're showing, it does not really look like it ignores the setting. For one thing, MAX_LEVEL=0 is different from MAX_LEVEL=6. Now, considering the noise in your textures, I don't even get why your MAX_LEVEL=6/MS-off picture looks that clean; it should be noisy, based on the MAX_LEVEL=16/MS-off picture. At this point, I'd advise putting distinct solid colors in each mip level of your diffuse texture (and not relying on GL to build your mips), to see exactly which mip levels you're getting.
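A sketch of that debug texture, filling each mip level of a 64x64 texture with its own solid color so you can see exactly which level gets fetched (the color table is arbitrary):

/* One solid color per mip level of a 64x64 texture (7 levels). */
static const unsigned char colors[7][3] = {
    {255,0,0}, {0,255,0}, {0,0,255}, {255,255,0},
    {255,0,255}, {0,255,255}, {255,255,255}
};
unsigned char buf[64 * 64 * 3];
for (int level = 0, size = 64; size >= 1; ++level, size /= 2) {
    for (int i = 0; i < size * size; ++i) {
        buf[i*3+0] = colors[level][0];
        buf[i*3+1] = colors[level][1];
        buf[i*3+2] = colors[level][2];
    }
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, size, size, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, buf);
}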