Write to texture GLSL - C++

I want to be able to add one texture to another (in the fragment shader). Right now I have projective texturing working and want to expand on that.
Here is what I have so far:
I'm also drawing the view frustum along which the blue/gray test image is projected onto the geometry, which is in constant rotation.
My vertex shader:
ProjTexCoord = ProjectorMatrix * ModelTransform * raw_pos;
My Fragment Shader:
vec4 diffuse = texture(texture1, vs_st);
vec4 projTexColor = textureProj(texture2, ProjTexCoord);
vec4 shaded = diffuse; // max(intensity * diffuse, ambient); -- no shadows for now
if (ProjTexCoord[0] > 0.0 ||
    ProjTexCoord[1] > 0.0 ||
    ProjTexCoord[0] < ProjTexCoord[2] ||
    ProjTexCoord[1] < ProjTexCoord[2]) {
    diffuse = shaded;
} else if (dot(n, projector_aim) < 0.0) {
    diffuse = projTexColor;
} else {
    diffuse = shaded;
}
What I want to achieve:
When, for example, the user presses a button, I want the blue/gray texture to be written into the gray texture on the sphere so that it rotates with it. Imagine it as sort of "taking a picture" or painting on top of the sphere, so that the blue/gray texture spins with the sphere after a button is pressed.
Since the fragment shader operates on each pixel, it should be possible to copy pixel by pixel from one texture to the other, but I have no clue how; I might be googling for the wrong terms.
How can I achieve this technically? What method is most versatile? Suggestions are very much appreciated; please let me know if more code is necessary.

Just to be clear, you'd like to bake decals into your sphere's grey texture.
The trouble with writing to the grey texture while drawing another object is that the mapping isn't one to one. You may be writing twice or more to the same texel, or a single fragment may need to write to many texels in your grey texture. It may sound attractive since you already have the coordinates of everything in one place, but I wouldn't do this.
I'd start by creating a texture containing the object-space position of each texel in your grey texture. This is key: when you click, you can render to your grey texture (using an FBO) and know where each texel is in your current view or in your projective texture's view. You can create this position texture by rendering your sphere into it, using the texture coordinates as the vertex positions. You'll probably need a floating-point texture for this. There may be edge cases where the same bit of texture appears on multiple triangles.
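A minimal sketch of that "unwrap" pass in GLSL (attribute/varying names are made up): the vertex shader places each vertex at its UV coordinate in clip space, and the fragment shader writes the interpolated object-space position into the floating-point target.
// Vertex shader: unwrap the sphere into texture space.
#version 330 core
in vec3 in_position;   // object-space position
in vec2 in_uv;         // texture coordinate of the grey texture
out vec3 objectPos;
void main() {
    objectPos = in_position;
    // map UV [0,1] to clip space [-1,1] so the target texture is covered 1:1
    gl_Position = vec4(in_uv * 2.0 - 1.0, 0.0, 1.0);
}
// Fragment shader: store the object-space position in an RGBA32F attachment.
#version 330 core
in vec3 objectPos;
out vec4 outPos;
void main() {
    outPos = vec4(objectPos, 1.0);
}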
So when you click, you render a full-screen quad to your grey texture with alpha blending enabled. Using the grey texture's object-space positions, each fragment computes its image-space position within the blue texture's projection. Discard the fragments that are outside the texture and sample/blend in those that are inside.
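A rough sketch of that bake-pass fragment shader, assuming the position texture from above and the same projector matrix used for the projective texturing (uniform names are invented):
#version 330 core
in vec2 uv;                        // UV of the grey texture (from the full-screen quad)
out vec4 fragColor;
uniform sampler2D positionTex;     // object-space positions baked earlier
uniform sampler2D decalTex;        // the blue/gray image
uniform mat4 ProjectorMatrix;      // bias * projector projection * projector view
uniform mat4 ModelTransform;       // sphere's model matrix at the moment of the click
void main() {
    vec3 objectPos = texture(positionTex, uv).xyz;
    vec4 projCoord = ProjectorMatrix * ModelTransform * vec4(objectPos, 1.0);
    vec2 decalUV = projCoord.xy / projCoord.w;
    // keep only fragments that fall inside the projector's frustum
    if (projCoord.w <= 0.0 ||
        any(lessThan(decalUV, vec2(0.0))) ||
        any(greaterThan(decalUV, vec2(1.0))))
        discard;
    fragColor = texture(decalTex, decalUV);   // blended into the grey texture via glBlendFunc
}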

I think you are overcomplicating things.
Writes to textures inside classic shaders (i.e. not compute shaders) are only available on recent hardware and very recent OpenGL versions and extensions.
It could be terribly slow if used wrong. It's very easy to introduce pipeline stalls and CPU-GPU sync points.
The pixel shader could become a terribly slow, unmaintainable mess of branches and texture fetches.
And all of this would be done for every single pixel, every single frame.
Solution: KISS
Just update your texture on the CPU side.
Write to the texture, replacing parts of it with the desired content (see the glTexSubImage2D sketch below).
The update only needs to be done once, and only when you need it. The data persists until you rewrite it (not even once per frame, only once per change request).
The pixel shader stays dead simple: no branching, one texture.
To find the target texels, implement ray picking (you will need it anyway for any non-trivial interactive 3D graphics program).
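A minimal sketch of that CPU-side update, assuming the grey texture is an RGBA8 GL_TEXTURE_2D and that the decal pixels and target region (x, y, w, h) have already been computed, e.g. from ray picking:
// Replace a rectangular region of the sphere's grey texture with new pixels.
// 'decalPixels' is assumed to hold w*h RGBA bytes prepared on the CPU.
glBindTexture(GL_TEXTURE_2D, greyTextureId);
glTexSubImage2D(GL_TEXTURE_2D,
                0,              // mip level
                x, y,           // offset of the region inside the texture
                w, h,           // size of the region
                GL_RGBA, GL_UNSIGNED_BYTE,
                decalPixels);
glGenerateMipmap(GL_TEXTURE_2D); // only needed if the texture uses mipmapped filtering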
P.S. "Everything should be made as simple as possible, but not simpler." Albert Einstein.

Related

GLSL shader: occlusion order and culling

I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3D coordinates of points). The drawing itself works as I want, except that occlusion does not work correctly: under certain viewpoints, the curve that is supposed to be in the very front still appears occluded, and the reverse: the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assumed it was occlusion culling (update: it actually indicates a problem with the depth buffer; terminology confusion!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use an OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass the color info on to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry into the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update solution
So, the initial problem, as I learned, was not about finding the culling algorithm, but that I did not handle the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth buffer set-up, OpenGL handles the occlusions correctly by itself, so I do not need to reinvent the wheel.
I searched through my GLSL program and found that I was effectively setting the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2(vertex.xy / vertex.w) * Viewport;). I fixed it by calculating the z-values (vertex.z / vertex.w) separately and assigning them to the emitted points (gl_Position = vec4(screenCoords[i], zValues[i], 1.0);). That solved my problem.
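A condensed sketch of that fix inside the geometry shader, using the names from the description above (the rest of the shader is omitted):
// Per emitted vertex of the triangle strip ('vertex' is the clip-space position,
// Viewport the scale used for the screen coordinates):
vec2 screenCoord = vec2(vertex.xy / vertex.w) * Viewport; // as before
float zValue = vertex.z / vertex.w;                       // previously this was implicitly 0
gl_Position = vec4(screenCoord, zValue, 1.0);
EmitVertex();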
Regarding the depth buffer settings, I didn't have to explicitly specify them since the library I use set them up by default correctly as I need.
If you don't use the depth buffer, then the most recently rendered object will be on top always.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
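Put together, a typical setup looks roughly like this (a sketch; your wrapper library may already do parts of it):
// Once, at initialization:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
// Every frame, before drawing:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the curves and any other geometry; the depth test now resolves occlusion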
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).

Fully transparent torus in OpenGL

I have a torus rendered in OpenGL and can map a texture onto it; there is no problem as long as the texture is opaque. But it doesn't work when I make the color selectively transparent in the fragment shader. Or rather: it works, but only in some areas, depending on the order of the triangles in the vertex buffer; see the difference along the outer equator.
The torus should be evenly covered by spots.
The source image will be a PNG; however, for now I work with a BMP as it is easier to load (the texture-loading function is part of a tutorial).
The image has a white background with spots of different colors on top of it; it is not transparent.
The desired result is a nearly transparent torus with spots. Spots on both the front and the back side must be visible.
The rendering will be done offline, so I don't require speed; I just need to generate an image of the torus from an image of its surface.
So far my code looks like this (it is a merge of two examples):
https://gist.github.com/juriad/ba66f7184d12c5a29e84
Fragment shader is:
#version 330 core
// Interpolated values from the vertex shaders
in vec2 UV;
// Output data
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
void main(){
    // Output color = color of the texture at the specified UV
    color.rgb = texture2D( myTextureSampler, UV ).rgb;
    if (color.r == 1 && color.g == 1 && color.b == 1) {
        color.a = 0.2;
    } else {
        color.a = 1;
    }
}
I know that the issue is related to order.
What I could do (but don't know what will work):
Add transparency to the input image (and find code which loads such an image).
Do something in vertex shader (see Fully transparent OpenGL model).
Sorting would solve my problem, but if I understand it correctly, I have to implement it myself. I would have to find the center of each triangle (easy), project it with my matrix and compare the z values.
Change somehow blending and depth handling:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0f, 1.0f);
I need advice on how to continue.
Update, this nearly fixes the issue:
glDisable(GL_DEPTH_TEST);
//glDepthMask(GL_TRUE);
glDepthFunc(GL_NEVER);
//glDepthRange(0.0f, 1.0f);
I wanted to write that it doesn't distinguish spots in the front from those in the back, but then I realized they are nearly white, and blending with white doesn't make a difference.
The new image of torus with colorized texture is:
The remaining problems are:
the red spots are blue; this is related to the BMP-loading function (doesn't matter)
since all the spots in the input image are the same size, the spots that appear bigger (those closer to the camera) should be on top, and therefore saturated and not blended with the white body of the torus. It seems that the order is the opposite of what it should be. If you compare it to the previous image, the spots that appeared big were drawn correctly there and the small ones were hidden (the back side of the torus).
How to fix the latter one?
The first problem was solved by disabling the depth test (see the update in the question).
The second problem was solved by manually sorting the array of all triangles (by view-space depth). This works well even in real time for 20,000 triangles, which is more than sufficient for my purpose.
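A sketch of what such a sort can look like (the data layout here is an assumption: one index triple per triangle plus a CPU-side copy of the vertex positions; the real code is in the gist linked below):
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

struct Triangle { unsigned int i0, i1, i2; };

// Sort triangles back to front by the view-space depth of their centroids,
// then re-upload the index buffer afterwards.
void sortTriangles(std::vector<Triangle>& tris,
                   const std::vector<glm::vec3>& positions,
                   const glm::mat4& modelView)
{
    auto depth = [&](const Triangle& t) {
        glm::vec3 c = (positions[t.i0] + positions[t.i1] + positions[t.i2]) / 3.0f;
        return (modelView * glm::vec4(c, 1.0f)).z;   // more negative = farther away
    };
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle& a, const Triangle& b) { return depth(a) < depth(b); });
    // afterwards: glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, ...) with the sorted indices
}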
The resulting source code: https://gist.github.com/juriad/80b522c856dbd00d529c
It is based on and uses includes from OpenGL tutorial: http://www.opengl-tutorial.org/download/.

GLSL - how to force rendering a fragment?

Is there any way to force rendering a particular fragment? As far as I know, fragment shaders are called only for pixels within rasterized triangles. What I need to do is draw a mark (say, a single red pixel) at a constant position on the viewport. I mean something like this:
void main(void) {
    if (gl_FragCoord.xy == vec2(300.5, 300.5)) {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
    else {
        gl_FragColor = getColorFromSampler();
    }
}
...while there's no quad or anything else behind fragment (300.5, 300.5). I don't want to affect performance (no fake background and such). How do I proceed in such a situation?
Is there anything speaking against just rendering a single point on top of the other stuff?
So just render a single GL_POINTS primitive at the given pixel. Either use an appropriate orthographic projection to specify the vertex position directly in window space, or just compute the clip space position of the pixel and use an identity vertex transformation. And then all that you need is a simple passthrough fragment shader writing your color of choice.
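A sketch of the clip-space variant (the target pixel and the viewport size names are assumptions):
// CPU side: convert the target pixel (300.5, 300.5) to clip space and draw one point.
// viewportW/viewportH are assumed to hold the current viewport size.
float clipX = (300.5f / viewportW) * 2.0f - 1.0f;
float clipY = (300.5f / viewportH) * 2.0f - 1.0f;
float point[2] = { clipX, clipY };
// ... upload 'point' to a small VBO bound to attribute 0, then:
glDrawArrays(GL_POINTS, 0, 1);
// Passthrough shaders for the mark:
// vertex:   gl_Position = vec4(position, 0.0, 1.0);
// fragment: fragColor   = vec4(1.0, 0.0, 0.0, 1.0);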
While you say you want "nothing behind the fragment", I still think that a single fragment rendered under the red mark doesn't hurt anyone, at least not more (rather less) than your branch inside the fragment shader for just a single pixel (or any other more elaborate technique using the stencil buffer). If you have any other, stricter reason why you cannot render anything else at that position except for the mark, you might want to clarify your problem a bit more.

Use shader on texture instead of screen

I've written a simple GL fragment shader which performs an RGB gamma adjustment on an image:
uniform sampler2D tex;
uniform vec3 gamma;
void main()
{
    vec3 texel = texture2D(tex, gl_TexCoord[0].st).rgb;
    texel = pow(texel, gamma);
    gl_FragColor.rgb = texel;
}
The texture covers most of the screen, and it's occurred to me that this is applying the adjustment per output pixel on the screen, instead of per input pixel of the texture. Although this doesn't change its appearance, the texture is small compared to the screen.
For efficiency, how can I make the shader process the texture pixels instead of the screen pixels? If it helps, I am changing/reloading this texture's data on every frame anyway, so I don't mind if the texture gets permanently altered.
and it's occurred to me that this is applying the adjustment per output pixel on the screen
Almost. Fragment shaders are executed per output fragment (hence the name). A fragment is the smallest unit of rasterization, before it's written into a pixel. Every pixel that's covered by a piece of visible rendered geometry is turned into one or more fragments (yes, there may be even more fragments than covered pixels, for example when drawing to an antialiased framebuffer).
For efficiency,
Modern GPUs won't even "notice" the slightly reduced load. This is a kind of micro-optimization that's on the brink of non-measurability. My advice: don't worry about it.
how can I make the shader process the texture pixels instead of the screen pixels?
You could preprocess the texture by first rendering it through a texture-sized, non-antialiased framebuffer object into an intermediate texture. However, if your change is nonlinear, and a gamma adjustment is exactly that, then you should not do this. You want to process images in a linear color space and apply nonlinear transformations as late as possible.
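If you did want to preprocess a texture this way (for a linear operation), the FBO setup would look roughly like this (a sketch; the texture size and the draw call for the full-screen quad are assumed to exist elsewhere):
// Render the source texture through the shader into an intermediate texture
// of the same size, attached to an FBO.
GLuint fbo, processedTex;
glGenTextures(1, &processedTex);
glBindTexture(GL_TEXTURE_2D, processedTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, processedTex, 0);

glViewport(0, 0, texWidth, texHeight);
// bind the shader and the source texture, then draw a full-screen quad here

glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer
// 'processedTex' now holds the processed image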

Outline effects in OpenGL

In OpenGL, I can outline objects by drawing the object normally, then drawing it again as a wireframe, using the stencil buffer so the original object is not drawn over. However, this results in outlines with one solid color.
In this image, the pixels of the creature's outline seem to get more transparent the further they are from the creature they outline. How can I achieve a similar effect with OpenGL?
They did not use wireframe for this. I guess it is heavily shader-related and requires something like this:
Rendering the object to a stencil buffer
Rendering the stencil buffer with the color of choice while applying a blur
Rendering the model on top of it
I'm late for an answer but I was trying to achieve the same thing and thought I'd share the solution I'm using.
A similar effect can be achieved in a single draw operation with a not so complex shader.
In the fragment shader, you calculate the color of the fragment based on lighting and texture, giving you the un-highlighted color 'colorA'.
Your second color is the outline color, 'colorB'.
You should obtain the fragment-to-camera vector, normalize it, then take the dot product of this vector with the fragment's normal.
The fragment-to-camera vector is simply the negation of the fragment's position in eye space.
The colour of the fragment is then:
float cameraFacing = dot(v_fragmentToCamera, v_Normal);
gl_FragColor = colorA * cameraFacing + colorB * (1.0 - cameraFacing);
This is the basic idea, but you'll have to play around to get more or less of the outline color. The concave parts of your model will also be highlighted, but that is the case in the image posted in the question as well.
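A compact GLSL sketch of that idea (the uniform and varying names are invented; lighting is reduced to a plain texture lookup):
#version 330 core
in vec3 v_eyePos;            // fragment position in eye space (from the vertex shader)
in vec3 v_normal;            // eye-space normal
in vec2 v_uv;
out vec4 fragColor;
uniform sampler2D diffuseTex;
uniform vec4 outlineColor;   // 'colorB'
void main() {
    vec3 toCamera = normalize(-v_eyePos);          // fragment-to-camera vector
    float facing  = clamp(dot(toCamera, normalize(v_normal)), 0.0, 1.0);
    vec4 colorA   = texture(diffuseTex, v_uv);     // un-highlighted color
    // silhouettes (normal nearly perpendicular to the view) get more of the outline color
    fragColor = mix(outlineColor, colorA, facing);
}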
Detect edges in GLSL shader using dotprod(view,normal)
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
As far as I can see, the effect in the screenshot and many "edge" effects are not pure edges, as in a comic outline. What is mostly done is this: you have one pass where you render the object normally, then a pass with only the geometry (no textures) and a GLSL shader. In the fragment shader you take the normal, and where that normal is perpendicular to the camera vector you color the object. The effect is then smoothed by including areas close to perfectly perpendicular.
I would have to look up the exact math, but I think if you take the dot product of the camera vector and the normal you get the amount of "perpendicularity". You can then run that through a function like exp to get a bias towards 1.
So (without guarantee that it is correct):
exp(dot(vec3(0, 0, 1), normal));
(Note: everything is in screenspace.)