I apologise for the limited code in this question, but it's tied into a personal project with much of the OpenGL functionality abstracted behind classes. I'm hoping someone visually recognises the problem and can offer direction.
During the first execution of my animation loop, I'm creating a GL_R32F texture (format: GL_RED, type: GL_FLOAT) and rendering an orthographic projection of a Utah teapot to it (for the purposes of debugging this, I'm writing the same float to every fragment).
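For reference, the setup is roughly along these lines (an illustrative sketch, not my actual abstraction; width and height are whatever the target's dimensions are):

/* single-channel float texture as an FBO colour attachment */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);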
The texture, however, renders incorrectly: it should be a solid silhouette.
Re-running the program causes the patches to move around.
I've spent a good few hours tweaking things to work out the cause, and I've compared the code to my working shadow-mapping example, which similarly writes to a GL_R32F texture, yet I can't find the culprit.
I've narrowed it down and found that it's only during the first render pass to the texture that this occurs. This wouldn't be so much of an issue, except that I don't require more than a single render (and looping bindFB, setViewport, render, unbindFB doesn't fix it).
If anyone has any suggestions for specific code extracts to provide, I'll try and edit the question.
This was caused by a rogue call to glEnable(GL_BLEND) during an earlier stage of the algorithm.
This makes sense because I was writing to a single channel; the alpha channel therefore contained random garbage, and blending against it garbled the texture.
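In other words, the fix was simply to make sure blending is off before the pass (a sketch; fbo, width and height stand in for my own abstractions):

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDisable(GL_BLEND);             /* the rogue glEnable(GL_BLEND) was the culprit */
glViewport(0, 0, width, height);
glClear(GL_COLOR_BUFFER_BIT);
/* ... render the teapot ... */
glBindFramebuffer(GL_FRAMEBUFFER, 0);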
I am developing an application with OpenGL+GLFW, with Linux as the target platform.
The default rasterization has VERY strong aliasing. I have implemented FXAA on top of my pipeline and I still get pretty strong aliasing, especially when there's some kind of animation or movement: the edges of meshes flicker. This literally renders the whole project useless.
So I thought I would also add supersampling, and I have been trying to implement it for two weeks already and still can't make it work. I'm starting to think it's not possible with the combination PyOpenGL+GLFW+Ubuntu 18.04.
So, the question is: can I do supersampling by hand (without OpenGL extensions)? At the end of my (deferred) rendering pipeline I save all the data from the different passes to the hard drive, so I thought I would do something like this (see the sketch after the list):
1. Render the image at 2x/3x resolution to a texture.
2. Read the texture buffer back into an array.
3. Average each 2x2/3x3/4x4 block of this array into one pixel.
4. Save the result to the hard drive.
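In code, I imagine the averaging step would look something like this (a C sketch of the idea; my actual pipeline is PyOpenGL, and the tightly packed GL_RGB/GL_FLOAT read-back layout is an assumption):

/* src is the (width*factor) x (height*factor) image read back with
   glReadPixels(0, 0, width*factor, height*factor, GL_RGB, GL_FLOAT, src);
   dst receives the width x height box-filtered result */
void downsample(const float *src, float *dst, int width, int height, int factor)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float r = 0.0f, g = 0.0f, b = 0.0f;
            for (int sy = 0; sy < factor; ++sy) {
                for (int sx = 0; sx < factor; ++sx) {
                    int i = ((y * factor + sy) * width * factor + (x * factor + sx)) * 3;
                    r += src[i]; g += src[i + 1]; b += src[i + 2];
                }
            }
            float n = (float)(factor * factor);
            int o = (y * width + x) * 3;
            dst[o] = r / n; dst[o + 1] = g / n; dst[o + 2] = b / n;
        }
    }
}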
Obviously, it's going to be slower than multisampling with an OpenGL extension and require more memory, but I don't need a high FPS and I have a pretty small resolution (like 480x640 or similar), so it might work out.
Do you guys have any thoughts about it? I would be glad of any advice.
I have a question about picking. I found a really neat way to do picking: render a single display call with lights turned off and a unique colour for each object, then simply find what colour the pixel under the mouse pointer is. It works fine, and from what I understand it has also been used by others in the past for object picking. I thought rendering a single frame in that way would not be noticeable; however, it is noticeable. I should say I am using the JOGL bindings.
So I was wondering, what factors affect the performance in this case? I read in Apple's dev guide NOT to use glBegin and glEnd and instead use buffers. Is that right? Am I right to think that it is possible to render a single frame without anyone noticing the flicker?
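For clarity, the technique I'm describing boils down to something like this (a C sketch; my real code is JOGL, and drawObjectFlat is a hypothetical stand-in for my own draw routine):

/* one pass with lighting/texturing off, each object in a unique flat colour */
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
for (int id = 0; id < objectCount; ++id) {
    glColor3ub((id >> 16) & 0xFF, (id >> 8) & 0xFF, id & 0xFF);  /* id encoded in RGB */
    drawObjectFlat(id);
}
/* read the pixel under the cursor (window y is flipped) */
GLubyte pixel[3];
glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
int picked = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];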
I'm trying, in JOGL, to pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen the post OpenGL GL_SELECT or manual collision detection? and have found it helpful. However, it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing twice (for colour picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batched rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make colour picking feasible?
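For clarity, this is the kind of switch I mean (a C sketch; vertices and quadCount stand in for my own data):

/* upload all quad vertices once... */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, quadCount * 4 * 3 * sizeof(float), vertices, GL_STATIC_DRAW);
/* ...then draw the whole batch with one call instead of glBegin/glEnd */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glDrawArrays(GL_QUADS, 0, quadCount * 4);
glDisableClientState(GL_VERTEX_ARRAY);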
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth using it anyway?
I've investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be easily defined. I don't like colour picking or similar techniques very much, because they seem too impractical for most situations. I never understood why so many tutorials aimed at people new to OpenGL, or even to programming, focus on a picking technique that is useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh in a model you clicked on: impractical.
If you have a large number of quads, you will probably need good spatial partitioning, or at least (better: also) a scene graph. OK, you don't strictly need this, but it helps A LOT. Look at some scene graph tutorials for further information; it's a good thing to know when you start with 3D programming, because you get to know a lot of concepts and not only OpenGL code.
So what do you do now to start with some picking? Take the inverse of your modelview matrix (IIRC with gluUnProject(...)) at the position where your mouse cursor is. With the orientation of your camera you can now cast a ray into your spatial structure (or the scene graph that holds a spatial structure) and check for collisions with your quads. I currently have no link, but if you search for "inverse modelview matrix" you should find some pages that explain this better and in more detail than would be practical here.
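A rough sketch of building such a ray (fixed-function GL; mouseX/mouseY are the cursor position in window coordinates):

GLdouble model[16], proj[16];
GLint view[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, view);
GLdouble winY = view[3] - mouseY - 1;    /* window y is flipped */
GLdouble nx, ny, nz, fx, fy, fz;
gluUnProject(mouseX, winY, 0.0, model, proj, view, &nx, &ny, &nz);  /* near plane */
gluUnProject(mouseX, winY, 1.0, model, proj, view, &fx, &fy, &fz);  /* far plane  */
/* ray origin = (nx, ny, nz); direction = normalize((fx, fy, fz) - origin);
   intersect this ray with your spatial structure */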
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific) you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
I've never faced such a problem myself, but in my opinion CPU-based picking is the best approach to try.
If you have a large set of quads, you can group them by space to avoid testing every quad. For example, you could group the quads into two boxes and first test which box the pick ray hits, then only test the quads inside that box.
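A sketch of that coarse test (the standard slab method for a ray against an axis-aligned box; names are illustrative):

typedef struct { float min[3], max[3]; } AABB;

int rayHitsBox(const float orig[3], const float dir[3], const AABB *b)
{
    float tmin = -1e30f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        if (dir[i] != 0.0f) {
            float t1 = (b->min[i] - orig[i]) / dir[i];
            float t2 = (b->max[i] - orig[i]) / dir[i];
            if (t1 > t2) { float t = t1; t1 = t2; t2 = t; }
            if (t1 > tmin) tmin = t1;
            if (t2 < tmax) tmax = t2;
        } else if (orig[i] < b->min[i] || orig[i] > b->max[i]) {
            return 0;  /* ray is parallel to this slab and outside it */
        }
    }
    return tmax >= tmin && tmax >= 0.0f;  /* only test quads in boxes that pass */
}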
I just implemented colour picking, but glReadPixels is slow here (I've read somewhere that it might be bad because it breaks the asynchronous behaviour between the GL and the CPU).
Another possibility seems to be using transform feedback and a geometry shader that does the "scissor test". The GS can discard all faces that do not contain the mouse position; the transform feedback buffer then contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance ID to the buffer).
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
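To make the idea concrete, the geometry shader could look roughly like this (an untested GLSL sketch, written as the string you'd hand to glShaderSource; mouseNDC and pickData are names I made up):

const char *pickGS =
    "#version 150\n"
    "layout(triangles) in;\n"
    "layout(points, max_vertices = 1) out;\n"
    "uniform vec2 mouseNDC;   // cursor in normalized device coordinates\n"
    "out vec2 pickData;       // (primitive id, depth), captured by transform feedback\n"
    "void main() {\n"
    "    vec2 a = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;\n"
    "    vec2 b = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;\n"
    "    vec2 c = gl_in[2].gl_Position.xy / gl_in[2].gl_Position.w;\n"
    "    // same-sign edge tests => mouseNDC lies inside the triangle\n"
    "    float d1 = (b.x-a.x)*(mouseNDC.y-a.y) - (b.y-a.y)*(mouseNDC.x-a.x);\n"
    "    float d2 = (c.x-b.x)*(mouseNDC.y-b.y) - (c.y-b.y)*(mouseNDC.x-b.x);\n"
    "    float d3 = (a.x-c.x)*(mouseNDC.y-c.y) - (a.y-c.y)*(mouseNDC.x-c.x);\n"
    "    if ((d1 >= 0.0 && d2 >= 0.0 && d3 >= 0.0) ||\n"
    "        (d1 <= 0.0 && d2 <= 0.0 && d3 <= 0.0)) {\n"
    "        pickData = vec2(float(gl_PrimitiveIDIn),\n"
    "                        gl_in[0].gl_Position.z / gl_in[0].gl_Position.w);\n"
    "        EmitVertex();\n"
    "        EndPrimitive();\n"
    "    }\n"
    "}\n";
/* before linking: const char *v = "pickData";
   glTransformFeedbackVaryings(prog, 1, &v, GL_INTERLEAVED_ATTRIBS);
   then draw with GL_RASTERIZER_DISCARD enabled inside
   glBeginTransformFeedback(GL_POINTS) / glEndTransformFeedback() */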
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example there of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.
When I'm using the following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 6);
and then I enable multisampling, I notice that my program no longer respects the max mip level.
Edit: It renders the last mip levels as well; that is the problem, I don't want them being rendered.
Edit3: I tested and confirmed that it doesn't forget the mip limits at all, so it does follow my GL_TEXTURE_MAX_LEVEL setting. ...So the problem isn't mipmap-related, I guess...
Edit2: Screenshots. This is the world map zoomed out a lot, using a low angle to show the effect in the worst possible way. There is also a water plane rendered under the map, so there's no possibility of the black pixels coming from anywhere other than the map textures:
Screenshot: http://img511.imageshack.us/img511/6635/multisamplingtexturelim.png
Edit4: All those pics should look like the top-right corner pic (just with smoother edges, depending on the multisampling). But apparently there's something horribly wrong in my code. I have to use mipmaps; the mipmaps aren't the problem, they work perfectly.
What am I doing wrong, or how can I fix this?
Ok. So the problem was not TEXTURE_MAX_LEVEL after all. Funny how a simple test helped figure that out.
I had 2 theories that were about the LOD being picked differently, and both of those seem to be disproved by the solid color test.
On to a third theory, then. If I understand your scene correctly, you have a model that's using a texture atlas, and what we're observing is that some polygons that should fetch from a specific item of the atlas actually fetch from a different one. Is that right?
This can be explained by the fact that a multisampled fragment usually gets sampled at the centre of the pixel, even when that centre is not inside the triangle that generated the sample. See the bottom of this page for an illustration.
The usual way to get around this is called centroid sampling (this page has nice illustrations of the issue too). It forces the sampling point back inside the triangle.
Now the bad news: I'm not aware of any way to turn on centroid sampling outside of the programmable pipeline, and you're not using it. Do you think you'd want to switch to get access to that feature?
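For reference, in GLSL it is just a qualifier on the varying (a sketch against GLSL 1.20, written as the string you'd hand to glShaderSource; the same centroid declaration has to appear in the vertex shader too):

const char *fsSource =
    "#version 120\n"
    "centroid varying vec2 texCoord;  // interpolated at a point inside the primitive\n"
    "uniform sampler2D atlas;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(atlas, texCoord);\n"
    "}\n";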
Edit to add:
Also, not using texture atlases would be a way to work around this. The reason it is so visible is that you start fetching from another part of the atlas with the "out-of-triangle" sampling pattern.
Also check what you have set for the MIN_FILTER:
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, ... );
Try the different settings (a list is here).
However, if you're dissatisfied with the results of gluBuild2DMipmaps, I advise you to look at the alternatives:
glGenerateMipmap/glGenerateMipmapEXT (yes, it works without FBOs)
SGIS_generate_mipmap extension (widely supported)
Especially the latter is highly customizable. And, which was not mentioned, this extension is activated by setting GL_GENERATE_MIPMAP to true. It is automatic, so you don't need to recalculate the mip chain yourself when the data changes.
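For example (a sketch; tex, width, height and pixels are whatever you already have):

glBindTexture(GL_TEXTURE_2D, tex);
/* SGIS_generate_mipmap: every upload below also refreshes the mip chain */
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* or, explicitly, on GL 3.0+ / with EXT_framebuffer_object: */
glGenerateMipmap(GL_TEXTURE_2D);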
You should enable multisampling through your application, not the NVIDIA control panel, if you want your rendering to always have it. That might even fix your issue.
As for the GL_TEXTURE_MAX_LEVEL setting being ignored when using the control-panel multisampling, it sounds like a driver bug/feature. It's weird, because this setting can be used to limit what you actually load into the texture (the so-called texture completeness criteria). What if you don't load the lowest mipmap levels at all? What gets rendered?
Edit: From the picture you're showing, it does not really look like it ignores the setting. For one thing, MAX_LEVEL=0 is different from MAX_LEVEL=6. Now, considering the noise in your textures, I don't even get why your MAX_LEVEL=6/MS-off picture looks that clean; it should be noisy, based on the MAX_LEVEL=16/MS-off picture. At this point, I'd advise putting distinct solid colours in each mip level of your diffuse texture (and not relying on GL to build your mips), to see exactly which mip levels you're getting.
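Something along these lines (a debugging sketch; assumes a 64x64 base level):

static const GLubyte colors[7][3] = {
    {255,0,0}, {0,255,0}, {0,0,255}, {255,255,0},
    {255,0,255}, {0,255,255}, {255,255,255}
};
GLubyte buf[64 * 64 * 3];
int size = 64;   /* assumed base-level size */
for (int level = 0; (size >> level) >= 1; ++level) {
    int s = size >> level;
    for (int i = 0; i < s * s; ++i) {           /* fill the level with one colour */
        buf[i * 3 + 0] = colors[level % 7][0];
        buf[i * 3 + 1] = colors[level % 7][1];
        buf[i * 3 + 2] = colors[level % 7][2];
    }
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, s, s, 0, GL_RGB, GL_UNSIGNED_BYTE, buf);
}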
So I've got this class where I have to make a simple game in OpenGL.
I want to make Space Invaders (basically).
So how in the world should I make anything appear on my screen that looks decent at all? :(
I finally found some code that let me import a 3DS object (http://www.spacesimulator.net/tut4_3dsloader.html). I thought it was sweet, and went and put it in a class to make it a little more modular and reusable.
However, either the program I use (Cheetah3D) is exporting the UV map incorrectly, and/or the code fails when reading in a .bmp that ISN'T the one that came with the demo. The image comes out all weird; it's very hard to explain.
So I arrive at my question: what approach should I use to draw objects? Should I honestly expect to spend hours guessing at vertices to make a Space Invaders ship, and then also try to map a decent texture onto that object? The code I am using draws the untextured object just fine, but I can't begin to map the texture onto it because I don't know which vertices correspond to which polygons, etc.
Thanks SO for any suggestions on what I should do. :D
You could draw textured quads, provided you have a texture loader.
I really wouldn't worry too much about your UV map: if you can get your vertices right, then you can generally kludge something together anyway. That's what I'd do.
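Something as simple as this would do for a sprite (an immediate-mode sketch; loadTexture is a stand-in for whatever loader you use):

GLuint tex = loadTexture("invader.bmp");   /* hypothetical loader */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);                          /* one textured quad at (x, y), size w x h */
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
glEnd();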