Seamless Cubemap Generation to Map onto a Quadcube - OpenGL

I have a quadcube generated from a cube that I want to texture.
Six textures are generated randomly.
If I mapped them directly onto the quadcube, they would look very distorted, so I pre-distort them using pincushion distortion so that the two distortions cancel each other out. The problem is that pre-distorting the texture cuts parts of it off, so very ugly seams appear.
How could I modify the texture generation to achieve seamless transitions between the six textures? Or am I going about this all wrong?
Because I cannot post more than two links in one question, here is a gallery containing some images of my problem.
I'm thankful for any kind of help. Thanks.

Related

Can you apply transformation matrices after running your pixel shaders?

I'm working with images, and I was tasked with extending the number of post-processing effects that we can perform on our images. Certain required effects need pixel data for their calculations, so I created a few pixel shaders to do the job, and they work fine.
The problem is that the images need to be transformable, i.e. they need to be able to rotate, zoom in and out, pan, etc. The creation of all these textures and the algorithms doing the post-processing are slowing the program down. I need a way to do these transformations without completely redoing every effect. Some of the images the program works on are multi-gigabyte images, so I can't really do the obvious thing of caching the images after transformation for later use.
I'm looking for some sort of reasonable solution here. I'm not a graphics guy, but I can't imagine that similar programs with post-processing redo the post-processing every time you pan. My best guess is saving off the last texture and applying the transformations to that, but I don't really know how to do that.
By "images" I assume you mean 2D textures that you load and apply some post-processing effects to. If that's the case, just create a render target and render to it with all the post-effects.
Then rotate/pan a quad with that texture attached (a simple texture-fetching fragment shader is all that's required). Re-render the texture only when the post-processing parameters change.
If, on the other hand, you have a 3D scene, then there is no way around it: you have to render it each frame.
If my assumptions are wrong, it would be best if you provided more details on your case.
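A minimal sketch of that render-to-texture approach, assuming an OpenGL 3.x context and function loader (e.g. GLEW) are already set up; renderWithPostEffects() and drawTexturedQuad() are hypothetical stand-ins for your existing post-processing passes and quad-drawing code:
// Sketch only: bake the post-processed image into an offscreen texture once,
// then reuse that texture while the user pans/rotates/zooms.
#include <cassert>

void renderWithPostEffects();    // hypothetical: your existing post-processing passes
void drawTexturedQuad();         // hypothetical: quad + trivial texture-fetch shader

GLuint fbo = 0, colorTex = 0;

void createRenderTarget(int width, int height)
{
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void bakePostProcessing()        // run only when the post-processing parameters change
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    renderWithPostEffects();
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void drawFrame()                 // run every frame: cheap, it only samples the baked texture
{
    glBindTexture(GL_TEXTURE_2D, colorTex);
    drawTexturedQuad();
}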

Algorithm to correct wrong normal direction on 3D model

I am new to 3D math, but I am facing a problem: when I import a model into a CAD program (single-sided lighting / OpenSceneGraph), a lot of the faces of the meshes are black!
When I flip the normals manually for those faces, I get the correct material and texture.
My question is: if I know the vertex table and the normal table for every mesh in the model, can I write an algorithm that corrects all the wrong normal directions automatically? I mean, it must detect the wrong normals without any help from the user and correct them.
I thought about an idea that needs image processing. I know nothing about image processing, so maybe you can help me with where to start to achieve this:
First, I will assume every black face has a wrong normal.
Second, I will direct a light from the camera towards the mesh, and if all of its pixels are black, flip the normal.
Then do this for all meshes.
I think I will have a speed issue, but that is all I have come up with.
Thanks.
The red plane and the black one are in the same model, and both of them must be red, but as I mentioned, the black one has its normals flipped.
OSG has a visitor class in osgUtil that can calculate all the vertex normals for you, called SmoothingVisitor. You just need to point it at your model after you've loaded it, e.g.:
// recalculate normals with the smoothing visitor
osgUtil::SmoothingVisitor sv;
loadedModel->accept(sv);
This will clobber existing vertex normals, or create new ones if they don't exist. Just looking quickly at it, it looks like it assumes your geometry is triangles, not quads as you've drawn, but I could be wrong.
I suspect something is up with the indexing; if not, then it is probably better to ignore the normals table entirely. I'm not really sure about the details, but I assume it is one normal per vertex, so an approach would involve calculating polygon normals, which you can do with:
normalize(cross_product(vert0 - vert1, vert0 - vert2));
After that, averaging the normals of the polygons sharing each vertex should do.
Calculating the dot product of the freshly calculated normal and the original normal would reveal whether the original normal is way off.
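A minimal sketch of that approach, assuming indexed triangles and one stored normal per vertex (the data layout and names are illustrative, not from the original post):
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                            a.z * b.x - a.x * b.z,
                                            a.x * b.y - a.y * b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(dot(v, v));
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// Flip any stored vertex normal that disagrees with the averaged geometric
// normal of the triangles sharing it (detected via a negative dot product).
void fixFlippedNormals(const std::vector<Vec3>& vertices,
                       const std::vector<unsigned>& indices, // 3 per triangle
                       std::vector<Vec3>& normals)           // 1 per vertex
{
    std::vector<Vec3> accum(vertices.size(), Vec3{0, 0, 0});
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        Vec3 v0 = vertices[indices[i]];
        Vec3 v1 = vertices[indices[i + 1]];
        Vec3 v2 = vertices[indices[i + 2]];
        // Polygon normal from the cross product, as above.
        Vec3 faceNormal = normalize(cross(sub(v1, v0), sub(v2, v0)));
        for (int k = 0; k < 3; ++k) {
            Vec3& a = accum[indices[i + k]];
            a = {a.x + faceNormal.x, a.y + faceNormal.y, a.z + faceNormal.z};
        }
    }
    for (std::size_t v = 0; v < normals.size(); ++v) {
        Vec3 averaged = normalize(accum[v]);
        if (dot(normals[v], averaged) < 0.0f)   // "way off": flip the stored normal
            normals[v] = {-normals[v].x, -normals[v].y, -normals[v].z};
    }
}
Note that this only works if the triangle winding order itself is consistent; if the winding is what is broken, the geometric normals will be wrong too.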
Anyway, Stack Overflow isn't really the right site for your problem; there are other Stack Exchange sites that actually specialize in similar problems. Also, there is too little information about the issue, so helping isn't easy either.

Clarification needed on Bloom and Post-Processing (DirectX 10 / 11)

Over the last few days I have been reading a lot of articles about post-processing with bloom etc., and I was able to implement render-to-texture functionality, with that texture running through a separate shader.
Now I have some questions regarding the whole thing:
1. Do I have to render both the scene and the texture put on a full-screen quad?
2. How does bloom, or any other post-processing effect (DOF, blur), work with this render-to-texture functionality? Or is this something completely different?
3. I don't really understand the concept of the back and front buffer and how to make use of them for post-processing.
4. I have read something about volumetric light rendering where they render the scene something like six times with different color settings. Isn't this quite inefficient? Or was my understanding just incorrect?
Thanks to anyone who cares to explain these things to me ;)
Let me try to answer some of your questions:
1. Yes, you have to render both.
2. DOF is typically implemented by rendering a "blurriness" factor into an offscreen buffer, where a post-processing filter then uses this factor to blur certain pixels more than others (with some compensation for color leaking between sharp and blurred objects). So yes, the basic idea is the same: render to a buffer, process it, and then display it (with or without blending it on top of the original scene). A rough sketch follows this list.
3. The back buffer is what you render to (what the user will see on the next frame). All offscreen rendering is done to other render targets that you create and use.
4. I don't quite understand what you mean. Please provide a link to what you read so I can try to understand and perhaps explain it.
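Regarding point 2, here is a rough CPU-side sketch of the "blurriness factor" idea (the names and the linear blur-radius mapping are illustrative, not from any particular engine; a real implementation does this per pixel in a shader):
#include <algorithm>
#include <cmath>
#include <vector>

// Naive depth-of-field sketch: each pixel's blur radius grows with its
// distance from the focal depth, then the pixel is replaced by the average
// of a neighborhood of that radius.
std::vector<float> depthOfField(const std::vector<float>& color,  // grayscale, w*h
                                const std::vector<float>& depth,  // 0..1, w*h
                                int w, int h,
                                float focusDepth, int maxRadius)
{
    std::vector<float> out(color.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // The "blurriness" factor that would normally live in an offscreen buffer.
            float blurriness = std::fabs(depth[y * w + x] - focusDepth);
            int radius = static_cast<int>(blurriness * maxRadius);
            float sum = 0.0f;
            int count = 0;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += color[sy * w + sx];
                    ++count;
                }
            }
            out[y * w + x] = sum / count;
        }
    }
    return out;
}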
Suppose that:
you have the "luminance" of each rendered pixel in a single texture
this texture holds floating-point values that can be greater than 1.0
Now:
You do a blur pass (possibly a separable blur), only considering pixels with a value greater than 1.0, and put the blur result in another texture.
Finally:
In a last shader you do the final presentation to the screen. You sample from both the "luminance" texture (clamped to 1.0) and the "blurred excess luminance" texture and add them, obtaining the so-called bloom effect.
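A CPU-side sketch of that recipe, just to make the data flow concrete (in practice each step is a fragment shader pass, and the box blur below is a stand-in for a separable Gaussian):
#include <algorithm>
#include <cstddef>
#include <vector>

// Bloom sketch on a single-channel HDR "luminance" buffer (values may exceed 1.0).
std::vector<float> bloom(const std::vector<float>& luminance, int w, int h, int radius)
{
    // 1) Bright pass: keep only the energy above 1.0.
    std::vector<float> excess(luminance.size());
    for (std::size_t i = 0; i < luminance.size(); ++i)
        excess[i] = std::max(0.0f, luminance[i] - 1.0f);

    // 2) Blur the excess (simple box blur here).
    std::vector<float> blurred(excess.size(), 0.0f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += excess[sy * w + sx];
                    ++count;
                }
            blurred[y * w + x] = sum / count;
        }

    // 3) Final combine: clamped base + blurred excess = bloom.
    std::vector<float> result(luminance.size());
    for (std::size_t i = 0; i < luminance.size(); ++i)
        result[i] = std::min(luminance[i], 1.0f) + blurred[i];
    return result;
}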

Outline / Silhouette rendering with OpenGL

I know there are several techniques to achieve this, but none of them seems sufficient.
Using a Sobel / Laplace filter doesn't find all the correct edges (and finds unwanted edges), is slow, and doesn't give me control over the outline width.
What I have settled on for now is rendering the back faces of my objects first, in a solid color and a little bigger than the actual objects (a sketch of this pass is below). The result does look good, but I really want my outlines to have a constant width.
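Rough sketch of that pass (legacy OpenGL; drawModel() is just a placeholder for whatever submits the object's geometry):
// Scaled back-face outline pass. The uniform scale only behaves as intended
// if the model is centered on its own origin.
void drawOutlinedObject()
{
    // Pass 1: slightly enlarged back faces in the outline color.
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);              // keep only the back faces
    glDisable(GL_LIGHTING);
    glColor3f(0.0f, 0.0f, 0.0f);       // outline color
    glPushMatrix();
    glScalef(1.04f, 1.04f, 1.04f);     // "a little bigger"
    drawModel();                       // placeholder for the actual draw call
    glPopMatrix();

    // Pass 2: the object itself, rendered normally on top.
    glEnable(GL_LIGHTING);
    glCullFace(GL_BACK);
    drawModel();
}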
I already tried rendering the back faces of my objects with thick wireframe lines. That gives me a constant outline width, but line width is deprecated, produces rendering artifacts, and leaves gaps if the outline abruptly changes direction (like on a cube, for example). I have not yet tried using a third rendering pass that draws a point the size of the wireframe lines for each vertex, because of the other problems with this technique.
Any ideas?
Edit: I even looked at finding the edges myself using a geometry shader, as described in http://prideout.net/blog/?p=54, but it suffers from the same gaps as the wireframe technique.
Edit: I was able to get rid of the rendering artifacts with the wireframe technique by disabling GL_DEPTH_TEST while drawing the outlines. Unfortunately, I also lost the outlines on overlapping objects...
My goal is to get the same effect they use on characters in the Dragons Lair 3 game. Does anyone know how they did it?
In case you're after real edge detection, I've found that you can get pretty good results with a 5x5 LoG (Laplacian of Gaussian) convolution kernel, applied to the depth buffer and blended over the rendered object (possibly with decent FSAA). You need some tuning in the fragment shader in order to clamp the blended outline, but the results are good. (And it's a matter of what you really want, by the way.)
Note that:
1) Laplace filtering and LoG filtering are different things and produce different results.
2) If you apply the convolution to the depth buffer instead of the rendered image, you end up with totally different results. Furthermore, if control over the outline width is desired, a dilate filter followed by a selective-erode pass can be applied. This way you will end up with a render that closely matches a hand-drawn sketch made with a marker, and you get fine control over the tip size, but at the cost of two extra passes.
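A CPU-side sketch of the LoG-on-depth idea, using one common discrete 5x5 approximation of the kernel (the kernel values and threshold are illustrative; in practice this would run in a fragment shader over the depth texture):
#include <cmath>
#include <vector>

// Mark outline pixels by convolving the depth buffer with a 5x5 LoG
// approximation and thresholding the response.
std::vector<unsigned char> depthOutline(const std::vector<float>& depth,
                                        int w, int h, float threshold)
{
    // One common discrete 5x5 Laplacian-of-Gaussian approximation (sums to 0).
    static const int kernel[5][5] = {
        { 0,  0, -1,  0,  0},
        { 0, -1, -2, -1,  0},
        {-1, -2, 16, -2, -1},
        { 0, -1, -2, -1,  0},
        { 0,  0, -1,  0,  0},
    };

    std::vector<unsigned char> outline(depth.size(), 0);
    for (int y = 2; y < h - 2; ++y)
        for (int x = 2; x < w - 2; ++x) {
            float response = 0.0f;
            for (int ky = -2; ky <= 2; ++ky)
                for (int kx = -2; kx <= 2; ++kx)
                    response += kernel[ky + 2][kx + 2] * depth[(y + ky) * w + (x + kx)];
            // A large response means a strong depth discontinuity, i.e. a silhouette edge.
            outline[y * w + x] = std::fabs(response) > threshold ? 255 : 0;
        }
    return outline;
}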

OpenGL texturing via vertex alphas, how to avoid following diagonal lines?

http://img136.imageshack.us/img136/3508/texturefailz.png
This is my current program. I know it's terribly ugly, I found two random textures online ('lava' and 'paper') which don't even seem to tile. That's not the problem at the moment.
I'm trying to figure out the first steps of an RPG. This is a top-down screenshot of a 10x10 heightmap (currently set to all 0s, so it's just a plane), and I texture it by making one pass per texture per quad, and each vertex has alpha values for each texture so that they blend with OpenGL.
The problem is that, as you can see, the textures trend along the diagonals. Even though I'm drawing with GL_QUADS, this is presumably because the quads are turned into pairs of triangles and the alpha values at the corners then have more weight along the hypotenuses... But I wasn't expecting that to matter at all. By drawing quads, I was hoping that even though they were split into triangles at some low level, the vertex alphas would cause each texture to radiate in a circular outward gradient from its vertices.
How can I fix this to make it look better? Do I need to scrap this and try a completely different approach? Is there a different approach for something like this? I'd love to hear alternatives as well.
Feel free to ask questions and I'll be here refreshing until I get a valid answer, so I'll comment as fast as I can.
Thanks!!
EDIT:
Here is the kind of thing I'd like to achieve. No, I'm obviously not one of the billions of noobs out there "trying to make an MMORPG"; I'm using it as an example because it's very much like what I want:
http://img300.imageshack.us/img300/5725/runescapehowdotheytile.png
How do you think this is done? Part of it must be vertex alphas like I'm doing, because of the smooth gradients... But maybe they have a list of different triangle configurations within a tile, and each tile stores which configuration it uses? So, for example, configuration 1 is a triangle in the top-left and one in the bottom-right, 2 is the top-right and bottom-left, 3 is a quad on the top and a quad on the bottom, etc.? Can you think of any other way I'm missing? Or if you've got it all figured out, then please share how they do it!
The diagonal artefacts are caused by having all of your quads split into triangles along the same diagonal. You define points [0,1,2,3] for your quad. Each quad is split into triangles [0,1,2] and [1,2,3]. Try drawing with GL_TRIANGLES and alternating your choice of diagonal. There are probably more efficient ways of doing this using GL_TRIANGLE_STRIP or GL_QUAD_STRIP.
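For example, a sketch of building the index buffer with GL_TRIANGLES and an alternating diagonal per cell (assuming the grid vertices are stored row by row; the names are illustrative):
#include <vector>

// Build triangle indices for a (cols x rows) vertex grid, flipping the split
// diagonal in a checkerboard pattern so the blend artefacts don't all line up.
std::vector<unsigned> buildGridIndices(int cols, int rows)
{
    std::vector<unsigned> indices;
    for (int y = 0; y < rows - 1; ++y) {
        for (int x = 0; x < cols - 1; ++x) {
            unsigned i0 = y * cols + x;   // top-left
            unsigned i1 = i0 + 1;         // top-right
            unsigned i2 = i0 + cols;      // bottom-left
            unsigned i3 = i2 + 1;         // bottom-right
            if ((x + y) % 2 == 0) {
                // Diagonal from top-left to bottom-right.
                indices.insert(indices.end(), {i0, i2, i3, i0, i3, i1});
            } else {
                // Diagonal from top-right to bottom-left.
                indices.insert(indices.end(), {i0, i2, i1, i1, i2, i3});
            }
        }
    }
    return indices; // draw with glDrawElements(GL_TRIANGLES, ...)
}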
I think you are doing it right, but you should increase the resolution of your heightmap a lot to get finer tessellation!
For example, look at this heightmap renderer:
mdterrain
It shows the same artifacts at low resolution but gets better if you increase the iterations.
I've never done this myself, but I've read several guides (which I can't find right now), and it seems pretty straightforward; it can even be optimized by using shaders.
Create a master texture to control the mixing of four sub-textures. Use the r, g, b, a components of the master texture as the percentage mix of each sub-texture (lava, paper, etc.). You can easily paint a master texture using Paint.NET, Photoshop, or GIMP and just paint into each color channel. You can compute the resulting texture beforehand using all five textures, OR you can calculate the result on the fly with a fragment shader. I don't have a good example of either, but I think you can figure it out given how far you've come.
The end result will be "pixel-perfect" blending (depending on the textures' resolution and filtering) and will avoid the vertex blending issues.
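For illustration, a minimal CPU-side sketch of the weighted mix the master texture drives ('grass' and 'rock' are just stand-ins for two more detail textures; in a fragment shader it is the same weighted sum of four texture fetches):
struct RGBA { float r, g, b, a; };
struct RGB  { float r, g, b; };

// Blend four detail texels using the master (splat) texture's channels as weights.
// In GLSL this is essentially:
//   color = mask.r * tex0 + mask.g * tex1 + mask.b * tex2 + mask.a * tex3;
RGB blendSplat(RGBA mask, RGB lava, RGB paper, RGB grass, RGB rock)
{
    float total = mask.r + mask.g + mask.b + mask.a;
    if (total <= 0.0f) total = 1.0f;   // avoid division by zero on unpainted pixels
    return {
        (mask.r * lava.r + mask.g * paper.r + mask.b * grass.r + mask.a * rock.r) / total,
        (mask.r * lava.g + mask.g * paper.g + mask.b * grass.g + mask.a * rock.g) / total,
        (mask.r * lava.b + mask.g * paper.b + mask.b * grass.b + mask.a * rock.b) / total,
    };
}
The normalization by the channel sum is just a convenience so hand-painted masks don't need to add up to exactly 1.0.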