How to achieve a result like DestAtop on the following picture?
In what order should I render each image and what blending functions to use? Tried literally every combination, checked every similar question, still can't get desired result.
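In case it helps to pin down the operator itself: below is a minimal sketch of how Porter-Duff DstAtop maps onto OpenGL blend state, assuming both images use premultiplied alpha and are drawn as full-screen or textured quads; the draw order and the premultiplication are assumptions on my part, not something stated in the question.

    /* Porter-Duff DstAtop with premultiplied alpha:
     *   out_rgb = Dst * srcAlpha + Src * (1 - dstAlpha),  out_a = srcAlpha
     * In GL terms, "Dst" is what is already in the framebuffer and "Src"
     * is the incoming fragment. */

    /* 1. Draw the image playing the "destination" role first, blending off,
     *    so the framebuffer holds its premultiplied color and alpha. */
    glDisable(GL_BLEND);
    /* ... draw destination image ... */

    /* 2. Draw the "source" image with these factors; the same factors also
     *    give the correct result alpha, since
     *    src_a * (1 - dst_a) + dst_a * src_a = src_a. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_SRC_ALPHA);
    /* ... draw source image ... */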
So I have a question about whether there is a way to get a correct alpha result when drawing something with alpha to coverage in OpenGL. I am drawing into a buffer that will be composited on top of a video, so I render to a black, fully transparent buffer and then un-premultiply it for compositing.
However, some of the objects are drawn with alpha to coverage. The issue is that for the coverage resolve to produce the correct alpha value, each covered sample needs an alpha of 1. But for alpha to coverage to work, the shader has to output the real alpha, for example 0.75, and if 0.75 is also written into the covered samples, the resolve averages 0.75 over 75% of the samples and gives 0.5625.
So basically I'm wondering if there is some way to output an alpha of 1 to the samples I am writing to, or if not, whether there is another way to achieve the result I want (ideally still using alpha to coverage, because I need the order-independent transparency).
I don't mind using very modern OpenGL features or NVIDIA extensions for this, due to the very specific hardware requirements.
Ok I figured it out - there is
GL_SAMPLE_ALPHA_TO_ONE
which does exactly what I want - sets the alpha to the max value no matter what you output
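For completeness, a minimal sketch of the state setup this describes (the enables are standard core OpenGL; the surrounding draw calls are placeholders):

    glEnable(GL_MULTISAMPLE);
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);  /* fragment alpha drives the coverage mask   */
    glEnable(GL_SAMPLE_ALPHA_TO_ONE);       /* force alpha of covered samples to maximum */
    /* ... draw the alpha-to-coverage objects into the transparent buffer ... */
    glDisable(GL_SAMPLE_ALPHA_TO_ONE);
    glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);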
I want to blend multiple photo shots of the same scene, but one object is in a different position in every shot. I want to know what kind of algorithm would give the desired results. Here is an example
Well, what you are looking for is called Image Fusion. There are many methods that do this, but it is still a fairly active research area. Based on the images you have, you should select the method that performs best. Because your images will have imperfections and differences in lighting and shadowing, this goes well beyond a simple cut and paste.
Here is a little more information and some algorithm explanations: Image Fusion by Image Blending.
I am trying to visualize a CAD geometry where GL_QUADS is used for the geometry and glutBitmapCharacter is used to annotate it with text.
The GL_QUADS partially hide the text (e.g. 33, 32, ... here) for some view orientations (picture 1).
If I use glDisable(GL_DEPTH_TEST) to get the text displayed properly, the text that is supposed to annotate the back surfaces is also displayed (picture 2).
My objective is to annotate the visible front surfaces without the text being obscured, while keeping the annotations on the back surfaces hidden.
(I am able to solve this by slightly offsetting the annotation along the quad's normal, but this causes some other issues in my program, so I would prefer not to use that solution.)
Could somebody please suggest a solution?
Well, as I expect you already know, it looks like the text is getting cut off because of the way it's positioned/oriented - it is drawing from a point and from right-to-left on the screen.
If you don't want to offset it (as you already mentioned, though I still suggest it as the simple solution), then one way might be to rotate the text the same way the object is being rotated. This would (I'd expect) simply be a matter of drawing the text at the same place you draw each quad (thus using the same matrix). Of course, the text then won't be as legible. This solution also requires a different object for rendering the text, such as FreeType fonts.
EDIT 2: another solution would be texture-mapped text
Could somebody please suggest a solution?
You need to implement a collision detection engine.
If the point in 3D space at which the label must be displayed is not obscured, render the text with the depth test disabled. This will fix your problem completely.
As far as I can tell, there's no other way to solve the problem if you want to keep the letters oriented towards the viewer - no matter what you do, there will always be a good chance of them being partially obscured by something else.
Since you need a very specific kind of collision detection (detecting the visibility of a point), you could try to solve this problem using the select buffer. On the other hand, detecting a ray/triangle collision (see gluUnProject/gluProject) isn't too hard to implement, although on complex scenes things will quickly get more complicated and you'll need to implement a scene graph and use algorithms similar to octrees.
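If the select buffer route feels heavy, a depth-buffer readback is one simple (if not very fast) way to test the visibility of a label's anchor point. This is only a sketch of that alternative, and isLabelVisible is a hypothetical helper, not something from the question:

    #include <GL/glu.h>

    /* Hypothetical helper: returns non-zero if nothing nearer to the camera
     * has been drawn at the anchor point's pixel.  Call it after the geometry
     * has been rendered, then draw the label with the depth test disabled. */
    int isLabelVisible(double x, double y, double z)
    {
        GLdouble model[16], proj[16], winX, winY, winZ;
        GLint view[4];
        GLfloat storedDepth;

        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        /* Project the anchor point into window coordinates. */
        if (!gluProject(x, y, z, model, proj, view, &winX, &winY, &winZ))
            return 0;

        /* Read back the depth the geometry left at that pixel. */
        glReadPixels((GLint)winX, (GLint)winY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &storedDepth);

        /* A small bias avoids the quad occluding its own annotation. */
        return winZ <= storedDepth + 1e-4;
    }

Note that glReadPixels stalls the pipeline, so on large scenes the select-buffer or ray-test approaches mentioned above will scale better.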
I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL.
I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush.
More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on.
Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"?
I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern based upon the vertices of the polygon, whereas I want the pattern tiled.
Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn?
Thanks in advance for any help.
If I understood correctly, you're looking for glPolygonStipple() or glLineStipple().
glPolygonStipple is very limited, as it allows only a 32x32 pattern, but it should work like a PatternBrush. I have no idea how to call it from VB, though.
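For reference, a rough sketch of the fixed-function stipple path (legacy OpenGL, so treat it as an illustration rather than a recommendation); the mask is 32x32 bits stored as 128 bytes, and the example pattern is only partially filled in:

    static const GLubyte hatch[128] = {
        0x88, 0x88, 0x88, 0x88,   /* row 0 of a diagonal hatch            */
        0x44, 0x44, 0x44, 0x44,   /* row 1                                */
        /* ... remaining 30 rows of your 32x32 bit pattern ... */
    };

    glColor3f(1.0f, 0.0f, 0.0f);     /* "on" bits appear in this color     */
    glEnable(GL_POLYGON_STIPPLE);
    glPolygonStipple(hatch);         /* pattern is window-aligned, never
                                        stretched with the polygon         */
    glBegin(GL_QUADS);               /* "off" bits leave the background    */
    glVertex2f(10.0f, 10.0f);
    glVertex2f(200.0f, 10.0f);
    glVertex2f(200.0f, 120.0f);
    glVertex2f(10.0f, 120.0f);
    glEnd();
    glDisable(GL_POLYGON_STIPPLE);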
First of all, are you sure it's the drawing operation itself that is the bottleneck here? Visual Basic is known for being very slow (especially if your program is compiled to intermediate VM code, which is the default AFAIR - be sure to check the option to compile to native code!), and if it is your code that is the bottleneck, then OpenGL won't help you much - you'll need to rewrite your code in some other language, probably C or C++, but any .NET language should also do.
OpenGL contains functions that allow you to draw stippled lines and polygons, but you shouldn't use them. They have been deprecated for a long time and were removed from OpenGL in version 3.1 of the spec. And that's for a reason - these functions don't map well to the modern rendering paradigm and are not supported by modern graphics hardware, meaning you will most likely get a slow software fallback if you use them.
The way to go is to use a small texture as a mask and tile it over the drawn polygons. The texture will get stretched or compressed to match the texture coordinates you specify with the vertices. You have to set the wrapping mode to GL_REPEAT for both texture coordinates, and calculate the right coordinates for each vertex so that the texture appears at its original size, repeated the right number of times.
You could also use the stencil buffer as you described, but... how would you fill that buffer with the pattern, and do it fast? You would need a texture anyway. Remember that you need to clear the stencil buffer every frame, before you start drawing. Not doing so could cost you a massive performance hit (the exact value of "massive" depending on the graphics hardware and driver version).
It's also possible to achieve the desired effect using a fragment shader, but learning shaders for that would be overkill for an OpenGL beginner like yourself :-).
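To make the texture route above concrete, here is a hedged sketch; the 8x8 size, the GL_ALPHA format and the alpha-test trick for keeping the "off" bits transparent are my own choices, not part of the answer:

    GLuint patternTex;
    GLubyte pattern[8][8];              /* 0x00 = "off" texel, 0xFF = "on" texel */
    /* ... fill pattern[][] with the monochrome design ... */

    glGenTextures(1, &patternTex);
    glBindTexture(GL_TEXTURE_2D, patternTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 8, 8, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, pattern);

    glEnable(GL_TEXTURE_2D);
    glEnable(GL_ALPHA_TEST);            /* discard the "off" texels entirely     */
    glAlphaFunc(GL_GREATER, 0.5f);
    glColor3f(0.0f, 0.0f, 1.0f);        /* "on" texels show up in this color     */

    /* Texture coordinate = position / pattern size, so the pattern keeps its
     * original scale and simply repeats across the polygon. */
    glBegin(GL_QUADS);
    glTexCoord2f( 10.0f / 8.0f,  10.0f / 8.0f); glVertex2f( 10.0f,  10.0f);
    glTexCoord2f(200.0f / 8.0f,  10.0f / 8.0f); glVertex2f(200.0f,  10.0f);
    glTexCoord2f(200.0f / 8.0f, 120.0f / 8.0f); glVertex2f(200.0f, 120.0f);
    glTexCoord2f( 10.0f / 8.0f, 120.0f / 8.0f); glVertex2f( 10.0f, 120.0f);
    glEnd();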
Ah, I think I've found it! I can make a stencil across the entire viewport in the shape of the pattern I want (or its mask, I guess), and then enable that stencil when I want to draw with that pattern.
You could just use a texture. Put the pattern in as an image, turn on texture repeating, and you are good to go.
Figured this out a year or two ago.
When I'm using the following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 6);
and then I enable multisampling, I notice that my program no longer respects the max mip level.
Edit: It renders the last mip levels as well; that is the problem, I don't want them rendered.
Edit3:
I tested and confirmed that it doesn't forget the mip limits at all, so it does follow my GL_TEXTURE_MAX_LEVEL setting. ...So the problem isn't mipmap related, I guess...
Edit2: Screenshots: this is the world map zoomed out a lot and at a low angle, to show the effect in the worst possible way. There is also a water plane rendered under the map, so the black pixels cannot come from anywhere other than the map textures:
(Screenshot: http://img511.imageshack.us/img511/6635/multisamplingtexturelim.png)
Edit4: All those pics should look like the top-right corner pic (just with smoother edges, depending on the multisampling). But apparently there's something horribly wrong in my code. I have to use mipmaps; the mipmaps aren't the problem, they work perfectly.
What am I doing wrong, and how can I fix this?
Ok. So the problem was not TEXTURE_MAX_LEVEL after all. Funny how a simple test helped figure that out.
I had 2 theories that were about the LOD being picked differently, and both of those seem to be disproved by the solid color test.
Onto a third theory then. If I understand your scene correctly, you have a model that's using a texture atlas, and what we're observing is that some polygons that should fetch from a specific item of the atlas actually fetch from a different one. Is that right?
This can be explained by the fact that a multisampled fragment usually gets its attributes sampled at the center of the pixel, even when that center is not inside the triangle that generated the sample. See the bottom of this page for an illustration.
The usual way to get around that is called centroid sampling (this page has nice illustrations of the issue too). It forces the sampling point back inside the triangle.
Now the bad news: I'm not aware of any way to turn on centroid sampling outside of the programmable pipeline, and you're not using it. Do you think you want to switch in order to get access to that feature?
Edit to add:
Also, not using texture atlases would be a way to work around this. The reason it is so visible is because you start fetching from another part of the atlas with the "out-of-triangle" sampling pattern.
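For what it's worth, here is a rough sketch of what the centroid qualifier looks like if you do move to the programmable pipeline (GLSL 1.20-style declarations; the shader code and variable names are mine, not from the answer):

    const char *vertexSrc =
        "#version 120\n"
        "centroid varying vec2 atlasCoord;  /* interpolated at the centroid */\n"
        "void main() {\n"
        "    atlasCoord  = gl_MultiTexCoord0.xy;\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "}\n";

    const char *fragmentSrc =
        "#version 120\n"
        "uniform sampler2D atlas;\n"
        "centroid varying vec2 atlasCoord;\n"
        "void main() {\n"
        "    /* 'centroid' keeps the coordinate inside the triangle, so\n"
        "       covered samples cannot fetch from a neighbouring atlas tile. */\n"
        "    gl_FragColor = texture2D(atlas, atlasCoord);\n"
        "}\n";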
Also check what you have set for the MIN_FILTER:
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, ... );
Try the different settings (a list is here).
However, if you're dissatisfied with the results of gluBuild2DMipmaps, I advise you to look at alternatives:
glGenerateMipmap/glGenerateMipmapEXT (yes, it works without FBO's)
SGIS_generate_mipmap extension (widely supported)
The latter in particular is highly customizable. And, what was not mentioned, this extension is enabled simply by setting GL_GENERATE_MIPMAP to true. It is automatic, so you don't need to recalculate the mipmaps when the data changes.
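A rough sketch of both alternatives, in case it helps (width, height and pixels stand in for your own texture data):

    GLsizei width = 256, height = 256;          /* placeholder dimensions        */
    const GLubyte *pixels = NULL;               /* your RGBA image data          */

    /* (a) SGIS_generate_mipmap / GL 1.4: the mip chain is rebuilt
     *     automatically every time level 0 is (re)uploaded. */
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* (b) GL 3.0 / EXT_framebuffer_object: generate the chain on demand. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);            /* or glGenerateMipmapEXT        */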
You should enable multisampling through your application, not the NVIDIA control panel, if you want your rendering to always have it. That might even fix your issue.
As for the GL_TEXTURE_MAX_LEVEL setting being ignored when using the control-panel multisampling, it sounds like a driver bug/feature. It's weird, because this setting can be used to limit what you actually load into the texture (the so-called texture completeness criteria). What if you don't load the lowest mipmap levels at all? What gets rendered?
Edit: From the picture you're showing, it does not really look like it ignores the setting. For one thing, MAX_LEVEL=0 is different from MAX_LEVEL=6. Now, considering the noise in your textures, I don't even get why your MAX_LEVEL=6/MS off picture looks that clean; it should be noisy, based on the MAX_LEVEL=16/MS off picture. At this point, I'd advise putting distinct solid colors in each mip level of your diffuse texture (and not relying on GL to build your mips), to see exactly which mip levels you're getting.
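A hedged sketch of that debugging idea: upload a distinct solid color into every mip level by hand, so the rendered colors tell you directly which level is being sampled (the 64x64 base size and the color choices are mine):

    GLubyte solid[64 * 64 * 4];
    const GLubyte colors[7][3] = {
        {255,   0,   0}, {  0, 255,   0}, {  0,   0, 255},
        {255, 255,   0}, {  0, 255, 255}, {255,   0, 255}, {255, 255, 255}
    };

    /* Levels 0..6 of a 64x64 texture: 64, 32, 16, 8, 4, 2, 1. */
    for (int level = 0, size = 64; level <= 6; ++level, size /= 2) {
        for (int i = 0; i < size * size; ++i) {
            solid[i * 4 + 0] = colors[level][0];
            solid[i * 4 + 1] = colors[level][1];
            solid[i * 4 + 2] = colors[level][2];
            solid[i * 4 + 3] = 255;
        }
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, size, size, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, solid);
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 6);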