This is what I'm seeing:
To provide some perspective for the image, the torus is behind the model. The model is transparent. Those lines appear on the model. I don't want those lines to appear.
Can someone explain what I'm seeing? I don't know what to search for. I tried:
Weird lines
Line artifacts
Artifacts
etc. etc., but I could find nothing relevant. I understand that my question is vague, but if someone could name my problem, I think I can identify the problematic code!
If you render transparencies, you need to keep a few things in mind.
Normally you render in OpenGL with z-buffer testing and writing enabled.
So when a face is rendered, OpenGL checks which of its pixels are visible by testing them against the z-buffer. If a pixel is visible, it is drawn with the current blending settings and its z value is written into the z-buffer; if not, it is discarded.
If you don't render your faces in the correct z-order (back to front relative to the view direction), they are rendered in whatever order they arrive in the pipeline.
The artifacts appear when, for some areas, the pixels of the back faces are rendered before the overlapping pixels of the front faces, while for other areas the pixels of the front faces are rendered first, so the later back-face pixels fail the depth test and are discarded. As a result, some areas of your object show a blend of background, back face, and front face, while others show only background and front face.
I know that explanation is not accurate, but I hope you get what I mean. Otherwise, feel free to ask.
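The order dependence can be seen with a few lines of arithmetic. Here is a toy sketch in Python (not GL code; the `blend` helper and the color values are made up for illustration) that applies standard "over" blending in the two orders described above:

```python
# Standard "over" blending: result = src * alpha + dst * (1 - alpha)
def blend(dst, src, alpha):
    return tuple(s * alpha + d * (1 - alpha) for s, d in zip(src, dst))

background = (0.0, 0.0, 1.0)   # blue background
back_face  = (1.0, 0.0, 0.0)   # red face, 50% opaque
front_face = (0.0, 1.0, 0.0)   # green face, 50% opaque

# Correct order: background, then back face, then front face
correct = blend(blend(background, back_face, 0.5), front_face, 0.5)

# Wrong order: the front face arrives first; with depth writes on,
# the back face then fails the depth test and is discarded entirely.
wrong = blend(background, front_face, 0.5)

print(correct)  # (0.25, 0.5, 0.25)
print(wrong)    # (0.0, 0.5, 0.5)
```

The two pixels end up with visibly different colors, which is exactly the patchy-line artifact: neighboring areas of the mesh happened to draw their faces in different orders.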
Blender noob trying to render an object but certain parts of it keep coming out black. I'm not sure why. My image is here which might help:
A few extra details:
The shelves I've been trying to render are just a collection of planes which I've aligned to form shelves
All of them have no material or texture on them
Despite this, the furthest plane (the back of the shelves) and the front one that says "laundry" render properly in white, but the others render in black
I've tried adding material/texture to all of them, but they still come out black
My light source is set to "Sun" and sits a little bit behind the camera and shines directly onto the shelves
Is this a lighting issue? If so, any suggestions for how to fix it?
Your "normals" are flipped. In other words, each surface has a "front" and "back" side. The backs of yours are facing in (which makes sense, when you consider the method you created the volume...you're looking at the inside surfaces).
To fix your issue, select the dark faces and do Mesh->Normals->Flip
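For background on why a flipped normal shades dark: the renderer decides front vs. back (and how much light a face receives) from the dot product between the face normal and the direction toward the viewer or light. A tiny illustrative sketch in Python (the `is_backfacing` helper and the vectors are hypothetical, not Blender API):

```python
# A face is back-facing when its normal points away from the viewer,
# i.e. dot(normal, direction_to_viewer) < 0.
def is_backfacing(normal, view_dir):
    # view_dir points from the surface toward the camera
    return sum(n * v for n, v in zip(normal, view_dir)) < 0

# Inward-pointing normal, camera in front of the surface:
print(is_backfacing((0, 0, -1), (0, 0, 1)))  # True  -> shades black
# After flipping the normal:
print(is_backfacing((0, 0, 1), (0, 0, 1)))   # False -> shades normally
```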
It looks like they are not getting any light on their faces because of their angle relative to the light source. Try adding a point light source to your scene just in front of the shelf objects to see if that changes the outcome.
Flipping the 'normals' worked for me, but only after switching from Blender Render to Cycles. Hope that helps someone else.
I have a model with transparent quads for a beard. I cannot tell which triangles belong to the beard because their color comes from the texture passed to the fragment shader. I have tried pre-sorting the triangles back to front during export of the model, but this does not help. So I implemented MSAA and alpha-to-coverage, but this did not help either. My last attempt was to draw the model with the depth mask off, skipping any transparent data, so the color buffer would have non-clear color values to blend with, and then draw the model a second time with depth testing on, drawing only the alpha pieces.
Nothing I have tried so far has worked. What other techniques can I try to get the beard of the model to draw properly? I am looking for a way to handle this that doesn't use a bunch of extensions; I'd prefer techniques that can be handled with plain old OpenGL 4.
Here is an image of what I am dealing with.
This is what I got after I applied the selected answer.
What you're trying to do there is a still largely unsolved problem: order-independent transparency. MSAA is something entirely different, as is alpha-to-coverage.
So far the best working solution is to separate the model into an opaque and a hairy part. Draw the opaque parts of your scene first, then draw everything (semi-)translucent, ordered far to near in a second pass.
From the way your image looks, it seems the beard is rendered first, which is quite the opposite of what you actually want.
Simple way:
Enable depth write (depth mask), disable alpha-blending, draw model without the beard.
Disable depth write, enable alpha-blending, draw the beard. Make sure face culling is enabled.
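The simple way can be mimicked with a toy single-pixel simulation (plain Python, not GL calls; the `render` helper, depth values, and grayscale colors are all made up to illustrate the two passes):

```python
# Single-pixel simulation of the two-pass scheme: opaque pass with depth
# test + depth write, then transparent pass sorted back-to-front with
# depth test on but depth write off.
def blend(dst, src, alpha):
    return src * alpha + dst * (1 - alpha)

def render(opaque, transparent, clear_color=0.0):
    color, depth = clear_color, float("inf")
    # Pass 1: opaque geometry (color, z); depth test and depth write
    for c, z in opaque:
        if z < depth:
            color, depth = c, z
    # Pass 2: transparent geometry (color, z, alpha), sorted far-to-near;
    # depth test still rejects fragments behind opaque geometry,
    # but the depth buffer is not updated.
    for c, z, a in sorted(transparent, key=lambda t: -t[1]):
        if z < depth:
            color = blend(color, c, a)
    return color

opaque = [(1.0, 5.0)]                              # head surface at depth 5
transparent = [(0.0, 2.0, 0.5), (0.5, 3.0, 0.5)]   # two beard quads in front
print(render(opaque, transparent))                 # 0.375
```

Because the transparent fragments are blended far-to-near on top of the already-complete opaque image, each beard layer darkens what is behind it in the right order.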
Hard way:
Because order-independent transparency in renderers that use a z-buffer is an unsolved problem (as datenwolf said), you could try depth peeling. I believe the paper is available within the OpenGL SDK. It will most likely be slower than the "simple way", and there will be a limit on the maximum number of overlapping transparent polygons. Also check the Wikipedia article on order-independent transparency.
Over the last few days I have been reading a lot of articles about post-processing with bloom etc., and I was able to implement render-to-texture functionality, with the texture then running through a separate shader.
Now I have some questions regarding the whole thing.
Do I have to render both? The scene and the texture put on a full-screen quad?
How does bloom, or any other post-processing effect (DOF, blur), work with this render-to-texture functionality? Or is this something completely different?
I don't really understand the concept of the back and front buffer and how to make use of them for post-processing.
I have read something about volumetric light rendering where they render the scene something like six times with different color settings. Isn't this quite inefficient? Or was my understanding there just incorrect?
Thanks to anyone who cares to explain these things to me ;)
Let me try to answer some of your questions
Yes, you have to render both: first render the scene into a texture, then draw that texture on a full-screen quad with the post-processing shader applied.
DOF is typically implemented by rendering a "blurriness" factor into an offscreen buffer, where a post-processing filter then uses this factor to blur certain pixels more than others (with some compensation for color-leaking between sharp and blurred objects). So yes, the basic idea is the same, render to a buffer, process it and then display it (with or without blending it on top of the original scene).
The back buffer is what you render stuff to (what the user will see on the next frame). All offscreen rendering is done to other rendertargets that you will create and use.
I don't quite understand what you mean. Please provide a link to what you read so I can try to understand and perhaps explain it.
Suppose that:
you have the "luminance" of each rendered pixel in a single texture
this texture holds floating-point values that can be greater than 1.0
Now:
you do a blur pass (possibly a separable blur), only considering pixels with a value greater than 1.0, and put the blur result in another texture.
Finally:
in a last shader you do the final presentation to the screen. You sample from both the "luminance" texture (clamped to 1.0) and the "blurred excess luminance" texture and add them, obtaining the so-called bloom effect.
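These steps can be sketched on a one-dimensional "image" (plain Python, not shader code; the `box_blur` helper and the luminance values are illustrative assumptions, and a real implementation would typically use a Gaussian-weighted separable blur in a shader):

```python
# Per-pixel "luminance"; HDR values can exceed 1.0 (the bright spot is 3.0)
luminance = [0.2, 0.5, 3.0, 0.5, 0.2]

# Step 1: keep only the excess above 1.0, so that ordinary pixels
# contribute nothing to the bloom
excess = [max(v - 1.0, 0.0) for v in luminance]

# Step 2: blur the excess into a second "texture" (simple 3-tap box blur)
def box_blur(img):
    out = []
    for i in range(len(img)):
        window = img[max(i - 1, 0): i + 2]
        out.append(sum(window) / len(window))
    return out

blurred = box_blur(excess)

# Step 3: final presentation - clamp the base luminance and add the
# blurred excess, so brightness bleeds into neighboring pixels
final = [min(v, 1.0) + b for v, b in zip(luminance, blurred)]
print(final)
```

Note how the pixels next to the bright one end up brighter than their clamped base value: that halo around bright spots is the bloom.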
We are working on porting some software from Windows to MacOS.
When we bring up a texture with an alpha channel, the pixels that are fully Opaque work as expected, pixels that are Fully transparent work as expected (You can see the wall behind).
However, pixels that are semi-transparent (opacity greater than 0% and less than 100%) render poorly: you can see through the wall behind them, and the skybox shows through both the texture and the wall behind it.
I know you will likely need more information, and I will be happy to provide it. I am not looking for a quick-fix solution; I have simply run out of ideas and need someone else to take a guess at what's wrong.
I will post the solution and correct answer goes to whoever pointed me that way.
The texture is not placed directly on the wall; it is placed on a static mesh close to the wall.
(Unable to post images as this is my first question here)
You are sorting transparent objects by depth, yes? I gather from your question that the answer is no.
You cannot just render transparent objects the way you do opaque ones. Your renderer is just a fancy triangle drawer. As such, it has no real concept of objects, or even transparency. You achieve transparency by blending the transparent pixels with whatever happens to be in the framebuffer at the time you draw the transparent triangle.
It simply cannot know what it is you intend to draw behind the triangle later. Therefore, the general method for transparent objects is to:
Render all opaque objects first.
Render transparent objects sorted back-to-front. Also, turn off depth writes (depth tests are fine).
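The back-to-front ordering in step 2 is just a sort on distance from the camera. A minimal sketch (illustrative Python; the `depth_sorted` helper and the object layout are made up):

```python
# Sort transparent objects by squared distance from the camera, farthest
# first, so each one blends against everything already behind it.
def depth_sorted(objects, camera):
    def dist2(obj):
        return sum((p - c) ** 2 for p, c in zip(obj["pos"], camera))
    return sorted(objects, key=dist2, reverse=True)

camera = (0.0, 0.0, 0.0)
objects = [
    {"name": "near_pane", "pos": (0.0, 0.0, -2.0)},
    {"name": "far_pane",  "pos": (0.0, 0.0, -8.0)},
    {"name": "mid_pane",  "pos": (0.0, 0.0, -5.0)},
]
print([o["name"] for o in depth_sorted(objects, camera)])
# ['far_pane', 'mid_pane', 'near_pane']
```

Sorting per object (by center or nearest point) is an approximation; it can still fail for intersecting or interleaved geometry, which is where the order-independent techniques mentioned elsewhere come in.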
This might not be an answer, but it might be useful.
What about making an object that applies a transparent texture in Maya/3ds Max, exporting it as FBX, and importing it into Unreal?
I am going through the NeHe tutorials for OpenGL... I am at lesson 8 (drawing a cube with blending). http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=08
I wanted to experiment and change half of the faces to be opaque, so that there is always a semi-transparent face opposite an opaque one, and be able to rotate the cube...
I changed the code a little bit, the entire source is there : http://pastebin.com/uzfSk2wB
I changed a few things :
enable blending by default and set the blending function to glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
changed the order of drawing the faces and colors for each face.
I enable depth testing
I draw all the opaque faces
I disable depth test
I draw all the transparent faces
Now, it's hard to tell exactly what is going wrong, but it definitely does not look right: I cannot tell which face is opaque compared to the transparent ones, and sometimes some faces do not seem to get drawn when they should, etc.
It seems like calculating which face is in front of which would not be trivial (although I'm sure it's possible); I hope there is a way of doing this that does not require it.
Looking either for what is wrong in my code or whether this is not the right way of doing it in the first place.
If you disable depth testing before drawing the transparent faces, they will be drawn with no regard for their correct z-ordering relative to the rest of the cube; it probably looks like the transparent faces are being drawn on top of all the other faces. Leave depth testing on (you can disable depth writes instead, so the transparent faces are still occluded by the opaque ones).