How do I achieve an interlaced rendering effect (an output where we draw one scanline and skip, or darken, the next, and so on) in OpenGL?
I have read that we can use the stencil buffer to mask the lines: fill every odd line with 1 in the stencil buffer.
But I am not able to figure out the code for it. If someone could please show me how this can be done.
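The idea from the question can be sanity-checked in plain Python before touching OpenGL: fill every odd scanline of a simulated stencil buffer with 1, then only let fragments through where the stencil equals 0 (or 1, for the other field). The buffer size and values below are made up for illustration; in real OpenGL the mask pass would draw 1-pixel-high quads with GL_REPLACE, and the scene pass would use glStencilFunc(GL_EQUAL, 0, 0xFF).

```python
# Software simulation of stencil-based interlacing (not real GL calls).

WIDTH, HEIGHT = 8, 6  # tiny pretend framebuffer

# Pass 1: build the stencil mask -- 1 on every odd scanline.
stencil = [[1 if y % 2 == 1 else 0 for _ in range(WIDTH)] for y in range(HEIGHT)]

# Pass 2: "draw" the scene; the stencil test only lets even lines through.
scene_color = 255
framebuffer = [[scene_color if stencil[y][x] == 0 else 0
                for x in range(WIDTH)] for y in range(HEIGHT)]
```

To render the other field (the darkened lines), you would flip the comparison to test for stencil value 1 instead of 0.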
To learn OpenGL I'm trying to draw a little real-time line chart (about 20,000 new points/s is the goal).
I thought I could optimize things by not clearing the whole framebuffer, but instead translating it in one direction and then drawing only the new, missing step of the graph.
I can see it is possible to copy the needed area to an FBO, edit that FBO and then draw the result, but in the end it still needs to copy every pixel.
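Whether done with glBlitFramebuffer, glCopyPixels, or a texture ping-pong, the scroll-and-append idea boils down to shifting the existing columns one pixel left and writing one new column on the right. A minimal software sketch of that logic, with a made-up single-channel buffer:

```python
WIDTH, HEIGHT = 6, 4

def scroll_and_append(framebuffer, new_column):
    """Shift every row one pixel left and write the new rightmost column.

    This mirrors what a framebuffer blit with a 1 px horizontal offset
    does, followed by drawing the newest chart segment on the right edge.
    """
    for y in range(HEIGHT):
        framebuffer[y] = framebuffer[y][1:] + [new_column[y]]
    return framebuffer

fb = [[0] * WIDTH for _ in range(HEIGHT)]
fb = scroll_and_append(fb, [1, 2, 3, 4])
```

The per-frame cost is still proportional to the number of pixels copied, which is exactly the concern raised in the question.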
I am generating 3D polycrystal structures based on the cellular automata method. My rendered structure looks like this:
http://www-e.uni-magdeburg.de/dzoellne/simulation/Bilder/3D_structure.gif
Is there any way to mark the boundaries of each color? Each color should be outlined by a black line, something like this:
http://web.boun.edu.tr/jeremy.mason/research/images/monte_carlo.png
Unfortunately I'm using old OpenGL 1.1.
Well, I might have a solution, but it is slow.
Read your current image back from the framebuffer and store it in an array. Then go over every pixel in the array and, wherever a pixel should be black, put a black dot on the screen. Reading the image back is slow and plotting the dots is also slow, but I don't see another way around it in 1.1.
Maybe some use of a Stencil buffer?
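The readback approach amounts to reading the color buffer (glReadPixels in GL 1.1) and marking any pixel whose right or bottom neighbour has a different color. A software sketch of just that boundary test, on a made-up grain-label image:

```python
def mark_boundaries(image):
    """Return a same-sized mask that is True wherever a pixel's right or
    bottom neighbour belongs to a different grain (different color)."""
    h, w = len(image), len(image[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if x + 1 < w and image[y][x] != image[y][x + 1]:
                mask[y][x] = True
            if y + 1 < h and image[y][x] != image[y + 1][x]:
                mask[y][x] = True
    return mask

# Two grains side by side; only the boundary column gets marked.
grains = [[1, 1, 2, 2],
          [1, 1, 2, 2]]
edges = mark_boundaries(grains)
```

In the 1.1 version you would then plot a black point (GL_POINTS) at every True position.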
I'd try to render the image twice, with a slight (1 px) offset in X and Y. During the rendering, assign a different stencil value to each color. Then if you render the first pass with an 'add' operation on the stencil buffer and the second pass with 'subtract', you should get simple edge detection in the stencil buffer. Then you just need to render a black quad with the stencil test enabled.
I realize that this approach may not be pixel-perfect and may give some artifacts, but it's the best that comes to my mind ATM :).
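The two-pass add/subtract idea can be checked in software: write each color's value into a simulated stencil buffer, subtract a copy shifted by one pixel, and treat every non-zero result as an edge. The 1-pixel diagonal shift and the label values below are assumptions for illustration:

```python
def edge_stencil(labels):
    """Add the label image into a stencil buffer, subtract a copy shifted
    by (1, 1), and report where the difference is non-zero (an edge).

    Note: border pixels, where the shifted copy has no data, are also
    flagged -- one of the artifacts the answer warns about.
    """
    h, w = len(labels), len(labels[0])
    stencil = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            stencil[y][x] += labels[y][x]        # first pass: add
            sy, sx = y - 1, x - 1                # second pass: subtract shifted
            if 0 <= sy < h and 0 <= sx < w:
                stencil[y][x] -= labels[sy][sx]
    return [[v != 0 for v in row] for row in stencil]

labels = [[1, 1, 2],
          [1, 1, 2],
          [1, 1, 2]]
edges = edge_stencil(labels)
```

A final black quad drawn with the stencil test set to "pass where non-zero" would then darken exactly the flagged pixels.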
I have a function that renders a triangle of a desired color. I am trying to render a yellow triangle and then a red triangle over it with the stencil test enabled. I am using a circle as my stencil window. What should my stencil test comparisons and operations be to get the output below? All rendering is in DirectX 9 only.
Desired output
Kindly point me to a few good and simple examples for the APIs below...
SetRenderState(D3DRS_STENCILFUNC,
SetRenderState(D3DRS_STENCILREF,
SetRenderState(D3DRS_STENCILMASK,
SetRenderState(D3DRS_STENCILWRITEMASK,
SetRenderState(D3DRS_STENCILZFAIL,
SetRenderState(D3DRS_STENCILFAIL,
SetRenderState(D3DRS_STENCILPASS,
How do we use stencil operations in a DirectX 9 shader effect file (vs_3_0 and ps_3_0)?
The documentation of the render states should answer most of your related questions.
For creating the stencil mask, you need the calls
SetRenderState(D3DRS_STENCILZFAIL,D3DSTENCILOP_KEEP)
SetRenderState(D3DRS_STENCILFAIL,D3DSTENCILOP_INCRSAT)
SetRenderState(D3DRS_STENCILPASS,D3DSTENCILOP_INCRSAT)
SetRenderState(D3DRS_STENCILFUNC,D3DCMP_ALWAYS)
because they increment the stencil buffer while rendering your circle. Then you draw your yellow triangle without using the stencil buffer. After that you draw the red triangle with
SetRenderState(D3DRS_STENCILZFAIL,D3DSTENCILOP_KEEP)
SetRenderState(D3DRS_STENCILFAIL,D3DSTENCILOP_KEEP)
SetRenderState(D3DRS_STENCILPASS,D3DSTENCILOP_KEEP)
SetRenderState(D3DRS_STENCILFUNC,D3DCMP_LESS)
SetRenderState(D3DRS_STENCILREF,0)
so that the stencil test only passes where your circle was drawn before (the stencil value there is greater than 0). If nothing is drawn properly after that, try deactivating the Z-test; maybe the order of your triangles isn't right.
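A quick way to convince yourself that D3DCMP_LESS with a reference value of 0 does what this answer claims is to emulate the stencil test in software. Direct3D 9 evaluates the comparison as (ref CMP stencil), so LESS with ref 0 passes exactly where the stencil is non-zero. The buffer sizes and "circle" pixels below are made up:

```python
# Emulate the two stencil configurations from the answer.

stencil = [[0] * 4 for _ in range(4)]

# Pass 1: draw the "circle" with D3DCMP_ALWAYS / D3DSTENCILOP_INCRSAT --
# every covered pixel saturating-increments its stencil value.
circle_pixels = [(1, 1), (1, 2), (2, 1), (2, 2)]
for y, x in circle_pixels:
    stencil[y][x] = min(stencil[y][x] + 1, 255)

# Pass 2: the red triangle uses D3DCMP_LESS with D3DRS_STENCILREF = 0.
# The test is (ref < stencil), which only holds inside the circle.
def stencil_test_less(ref, value):
    return ref < value

inside = stencil_test_less(0, stencil[1][1])    # circle pixel: test passes
outside = stencil_test_less(0, stencil[0][0])   # elsewhere: test fails
```

So the red triangle's fragments survive only where the circle incremented the stencil first, which is the masking effect the question asks for.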
How do we use stencil operations in a DirectX 9 shader effect file (vs_3_0 and ps_3_0)?
Stencil operations are only set from your main program code. Shaders cannot have any effect on the stencil test.
Over the last few days I've been reading a lot of articles about post-processing with bloom etc., and I was able to implement render-to-texture functionality, with the texture running through a separate shader.
Now I have some questions regarding the whole thing.
Do I have to render both, the scene and the texture put on a full-screen quad?
How does bloom, or any other post-processing effect (DOF, blur), work with this render-to-texture functionality? Or is this something completely different?
I don't really understand the concept of the back and front buffer and how to make use of them for post-processing.
I have read something about volumetric light rendering where they render the scene about six times with different color settings. Isn't this quite inefficient? Or was my understanding just incorrect?
Thanks to anyone who cares to explain these things to me ;)
Let me try to answer some of your questions:
Yes, you have to render both.
DOF is typically implemented by rendering a "blurriness" factor into an offscreen buffer, where a post-processing filter then uses this factor to blur certain pixels more than others (with some compensation for color-leaking between sharp and blurred objects). So yes, the basic idea is the same, render to a buffer, process it and then display it (with or without blending it on top of the original scene).
The back buffer is what you render stuff to (what the user will see on the next frame). All offscreen rendering is done to other rendertargets that you will create and use.
I don't quite understand what you mean. Please provide a link to what you read so I can try to understand and perhaps explain it.
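The DOF description above can be sketched as a per-pixel blend between the sharp frame and a pre-blurred copy, weighted by the stored "blurriness" factor. This deliberately ignores the color-leaking compensation mentioned in the answer, and all names and values are invented for illustration:

```python
def apply_dof(sharp, blurred, blur_factor):
    """Lerp each pixel between the sharp frame and a blurred copy,
    using the per-pixel blurriness written during the main pass."""
    return [s * (1.0 - f) + b * f
            for s, b, f in zip(sharp, blurred, blur_factor)]

sharp       = [1.0, 1.0, 1.0]   # the scene render target
blurred     = [0.5, 0.5, 0.5]   # a blurred copy of it
blur_factor = [0.0, 0.5, 1.0]   # in focus ... fully out of focus
result = apply_dof(sharp, blurred, blur_factor)
```

In a real pipeline each of these three arrays would be a texture sampled by the post-processing fragment shader.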
Suppose that:
you have the "luminance" for each rendered pixel in a single texture
this texture holds floating-point values that can be greater than 1.0
Now:
You do a blur pass (possibly a separable blur), only considering pixels with a value greater than 1.0, and put the blur result in another texture.
Finally:
In a last shader you do the final presentation to screen. You sample from both the "luminance" (clamped to 1.0) and the "blurred excess luminance" and add them, obtaining the so-called bloom effect.
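These three steps can be checked numerically. The sketch below thresholds the HDR luminance at 1.0, "blurs" the excess with a simple box average, and adds it back onto the clamped base image. The 1-D buffer and the 3-tap box blur are stand-ins for real render targets and a separable Gaussian:

```python
def bloom(luminance):
    """Threshold at 1.0, box-blur the excess, add it back onto the clamp."""
    excess = [max(v - 1.0, 0.0) for v in luminance]         # bright-pass
    blurred = []
    for i in range(len(excess)):                            # 3-tap box blur
        taps = excess[max(i - 1, 0):i + 2]
        blurred.append(sum(taps) / len(taps))
    base = [min(v, 1.0) for v in luminance]                 # clamped scene
    return [b + g for b, g in zip(base, blurred)]           # final combine

hdr = [0.5, 0.5, 4.0, 0.5, 0.5]   # one very bright pixel
result = bloom(hdr)
```

Note how the bright pixel's excess energy bleeds into its neighbours, which is exactly the halo that makes bloom read as "glow".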
I have implemented masking in OpenGL according to the following concept:
The mask is composed of black and white colors.
A foreground texture should only be visible in the white parts of the mask.
A background texture should only be visible in the black parts of the mask.
I can make the white part or the black part work as supposed by using glBlendFunc(), but not the two at the same time, because the foreground layer not only blends onto the mask, but also onto the background layer.
Is there anyone who knows how to accomplish this in the best way? I have been searching the net and read something about fragment shaders. Is this the way to go?
This should work:
glEnable(GL_BLEND);
// Use a simple blendfunc for drawing the background
glBlendFunc(GL_ONE, GL_ZERO);
// Draw entire background without masking
drawQuad(backgroundTexture);
// Next, we want a blendfunc that doesn't change the color of any pixels,
// but rather replaces the framebuffer alpha values with values based
// on the whiteness of the mask. In other words, if a pixel is white in the mask,
// then the corresponding framebuffer pixel's alpha will be set to 1.
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
// Now "draw" the mask (again, this doesn't produce a visible result, it just
// changes the alpha values in the framebuffer)
drawQuad(maskTexture);
// Finally, we want a blendfunc that makes the foreground visible only in
// areas with high alpha.
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawQuad(foregroundTexture);
This is fairly tricky, so tell me if anything is unclear.
Don't forget to request an alpha buffer when creating the GL context. Otherwise it's possible to get a context without an alpha buffer.
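You can verify what the three passes do to a single pixel by evaluating the blend equations in software. Writing the framebuffer state as (color, alpha) and using the mask's whiteness as the alpha it writes (per the answer's own description of pass 2), the end result is mask*foreground + (1-mask)*background. Single-channel sketch, names invented:

```python
def masked_composite(background, mask, foreground):
    """Evaluate the three blend passes from the answer for one pixel.

    background, mask and foreground are single-channel values in [0, 1].
    """
    # Pass 1: glBlendFunc(GL_ONE, GL_ZERO) -- plain overwrite.
    dst_color, dst_alpha = background, 1.0
    # Pass 2: glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO)
    # Color is untouched; framebuffer alpha becomes the mask's whiteness.
    dst_alpha = mask
    # Pass 3: glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA)
    dst_color = foreground * dst_alpha + dst_color * (1.0 - dst_alpha)
    return dst_color

white = masked_composite(0.2, 1.0, 0.9)   # white mask: foreground wins
black = masked_composite(0.2, 0.0, 0.9)   # black mask: background wins
```

Grey mask values fall linearly between the two, which is why the technique also gives you soft mask edges for free.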
Edit: Here, I made an illustration.
Edit: Since writing this answer, I've learned that there are better ways to do this:
If you're limited to OpenGL's fixed-function pipeline, use texture environments
If you can use shaders, use a fragment shader.
The way described in this answer works and is not particularly worse in performance than these 2 better options, but is less elegant and less flexible.
Stefan Monov's answer is great! But for those who still have issues getting it to work:
you need to check GLES20.glGetIntegerv(GLES20.GL_ALPHA_BITS, ib) - you need a non-zero result.
if you got 0 - go to your EGLConfig and ensure that you request alpha bits:
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_ALPHA_SIZE, 8, // <- I didn't have this and lost a lot of time
EGL14.EGL_DEPTH_SIZE, 16,