Send data from OpenGL shader to CPU memory - c++

I'm highlighting every pixel on my 3D objects in red if the dot product of the mesh triangle's normal with the vector (0, 0, -1) is larger than 0.73, i.e. if the mesh triangle has a slope larger than a certain value.
I'm using a fragment/pixel shader like this:
#define FP highp
varying FP vec3 worldNormal; // comes from vertex shader
void main()
{
    vec3 n = normalize(worldNormal);
    vec3 z = vec3(0.0, 0.0, -1.0); // Normalized already
    float zDotN = dot(z, n); // Dot-product of "n" and "z"
    if ( zDotN > 0.73 ) {
        vec3 oc = vec3(1.0, 1.0-zDotN, 1.0-zDotN); // Color: shades of red!
        gl_FragColor = vec4(oc, 1.0);
    } else {
        gl_FragColor = vec4( /* Compute color according to Phong material */ );
    }
}
I need to be compatible with older versions of OpenGL and OpenGL ES 2.0 for embedded devices.
My problem is: I need access to the vertex coordinates (of the mesh triangles) of the highlighted areas from within my C++ code. One option might be to use SSBOs, but they are not available in older versions of OpenGL. Can anybody suggest an approach or workaround?

Related

How to apply Texture-Mapping to a Maya Object using OpenGL?

I am currently learning how to map 2D textures to 3D objects using GLSL. I have a main.cpp, a fragment shader, and a vertex shader to achieve this, as well as a Sphere.obj I made in Maya and some PNG images.
I just created a basic sphere poly model in Maya then exported it as a ".obj".
My fragment shader code is listed below for reference:
#version 410
// Inputs from application.
// Generally, "in" like the eye and normal vectors for things that change frequently,
// and "uniform" for things that change less often (think scene versus vertices).
in vec3 position_eye, normal_eye;
uniform mat4 view_mat;
// This light setup would usually be passed in from the application.
vec3 light_position_world = vec3 (10.0, 25.0, 10.0);
vec3 Ls = vec3 (1.0, 1.0, 1.0); // neutral, full specular color of light
vec3 Ld = vec3 (0.8, 0.8, 0.8); // neutral, lessened diffuse light color of light
vec3 La = vec3 (0.12, 0.12, 0.12); // ambient color of light - just a bit more than dk gray bg
// Surface reflectance properties for Phong or Blinn-Phong shading models below.
vec3 Ks = vec3 (1.0, 1.0, 1.0); // fully reflect specular light
vec3 Kd = vec3 (0.32, 0.18, 0.5); // purple diffuse surface reflectance
vec3 Ka = vec3 (1.0, 1.0, 1.0); // fully reflect ambient light
float specular_exponent = 400.0; // specular 'power' -- controls "roll-off"
// These come from the VAO for texture coordinates.
in vec2 texture_coords;
// And from the uniform outputs for the textures setup in main.cpp.
uniform sampler2D texture00;
uniform sampler2D texture01;
out vec4 fragment_color; // color of surface to draw
void main ()
{
    // Ambient intensity
    vec3 Ia = La * Ka;
    // These next few lines sample the current texture coord (s, t) in texture00 and 01 and mix.
    vec4 texel_a = texture (texture00, fract(texture_coords*2.0));
    vec4 texel_b = texture (texture01, fract(texture_coords*2.0));
    vec4 mixed = mix (texel_a, texel_b, texture_coords.x);
    Kd.x = mixed.x;
    Kd.y = mixed.y;
    Kd.z = mixed.z;
    // Transform light position to view space.
    // Vectors here are appended with _eye as a reminder once in view space versus world space.
    // "Eye" is used instead of "camera" since reflectance models often phrased that way.
    vec3 light_position_eye = vec3 (view_mat * vec4 (light_position_world, 1.0));
    vec3 distance_to_light_eye = light_position_eye - position_eye;
    vec3 direction_to_light_eye = normalize (distance_to_light_eye);
    // Diffuse intensity
    float dot_prod = dot (direction_to_light_eye, normal_eye);
    dot_prod = max (dot_prod, 0.0);
    vec3 Id = Ld * Kd * dot_prod; // final diffuse intensity
    // Specular is view dependent; get vector toward camera.
    vec3 surface_to_viewer_eye = normalize (-position_eye);
    // Phong
    //vec3 reflection_eye = reflect (-direction_to_light_eye, normal_eye);
    //float dot_prod_specular = dot (reflection_eye, surface_to_viewer_eye);
    //dot_prod_specular = max (dot_prod_specular, 0.0);
    //float specular_factor = pow (dot_prod_specular, specular_exponent);
    // Blinn
    vec3 half_way_eye = normalize (surface_to_viewer_eye + direction_to_light_eye);
    float dot_prod_specular = max (dot (half_way_eye, normal_eye), 0.0);
    float specular_factor = pow (dot_prod_specular, specular_exponent);
    // Specular intensity
    vec3 Is = Ls * Ks * specular_factor; // final specular intensity
    // final color
    fragment_color = vec4 (Is + Id + Ia, 1.0);
}
I type the following command into the terminal to run my package:
./go fs.glsl vs.glsl Sphere.obj image.png image2.png
I am trying to map a world map.jpg to my sphere using this method and ignore the second image input, but it won't run. Can someone tell me what I need to comment out in my fragment shader to ignore the second texture input so my code will run?
PS: How would I go about modifying my fragment shader to implement various types of 'tiling'? I'm a bit lost on this as well. Any examples or tips are appreciated.
Here is the texture portion of my main.cpp code.
// load textures
GLuint tex00;
int tex00location = glGetUniformLocation (shader_programme, "texture00");
glUniform1i (tex00location, 0);
glActiveTexture (GL_TEXTURE0);
assert (load_texture (argv[4], &tex00));
//assert (load_texture ("ship.png", &tex00));
GLuint tex01;
int tex01location = glGetUniformLocation (shader_programme, "texture01");
glUniform1i (tex01location, 1);
glActiveTexture (GL_TEXTURE1);
assert (load_texture (argv[5], &tex01));
/*---------------------------SET RENDERING DEFAULTS---------------------------*/
// Choose vertex and fragment shaders to use as well as view and proj matrices.
glUniformMatrix4fv (view_mat_location, 1, GL_FALSE, view_mat.m);
glUniformMatrix4fv (proj_mat_location, 1, GL_FALSE, proj_mat.m);
// The model matrix stores the position and orientation transformations for the mesh.
mat4 model_mat;
model_mat = translate (identity_mat4 () * scale(identity_mat4(), vec3(0.5, 0.5, 0.5)), vec3(0, -0.5, 0)) * rotate_y_deg (identity_mat4 (), 90 );
// Setup basic GL display attributes.
glEnable (GL_DEPTH_TEST); // enable depth-testing
glDepthFunc (GL_LESS); // depth-testing interprets a smaller value as "closer"
glEnable (GL_CULL_FACE); // cull face
glCullFace (GL_BACK); // cull back face
glFrontFace (GL_CCW); // set counter-clock-wise vertex order to mean the front
glClearColor (0.1, 0.1, 0.1, 1.0); // non-black background to help spot mistakes
glViewport (0, 0, g_gl_width, g_gl_height); // make sure correct aspect ratio
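A minimal sketch of what a single-texture version of the fragment shader above might look like (assumptions: a hypothetical "tiles" uniform is added to answer the tiling question, the specular term is omitted for brevity, and the texture01 sampler plus its load_texture/glUniform1i calls in main.cpp are simply removed; this is a sketch, not a tested drop-in):
#version 410
in vec3 position_eye, normal_eye;
in vec2 texture_coords;
uniform mat4 view_mat;
uniform sampler2D texture00;            // texture01 is no longer declared or sampled
uniform vec2 tiles = vec2 (1.0, 1.0);   // hypothetical: how many times to repeat the texture in s and t
out vec4 fragment_color;
vec3 light_position_world = vec3 (10.0, 25.0, 10.0);
vec3 Ld = vec3 (0.8, 0.8, 0.8);
vec3 La = vec3 (0.12, 0.12, 0.12);
vec3 Ka = vec3 (1.0, 1.0, 1.0);
void main ()
{
    // Diffuse reflectance now comes straight from the single texture.
    // fract() wraps the scaled coordinates, so tiles = vec2(2.0, 2.0) repeats the image 2x2.
    vec3 Kd = texture (texture00, fract (texture_coords * tiles)).rgb;
    vec3 Ia = La * Ka;
    vec3 light_position_eye = vec3 (view_mat * vec4 (light_position_world, 1.0));
    vec3 direction_to_light_eye = normalize (light_position_eye - position_eye);
    vec3 Id = Ld * Kd * max (dot (direction_to_light_eye, normal_eye), 0.0);
    fragment_color = vec4 (Id + Ia, 1.0);
}
Setting tiles to vec2(1.0, 1.0) reproduces a single untiled mapping; larger values repeat the image across the sphere.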

How can I add different color overlays to each iteration of the texture in this glsl shader?

So I'm working on this shader right now, and the goal is to have a series of camera-based tiles, each overlaid with a different color (think Andy Warhol). I've got the tiles working (side issue: the tiles on the ends are currently being cut off by ~50%), but I want to add a color filter to each iteration. I'm looking for the cleanest possible way of doing this. Any ideas?
frag shader:
#define N 3.0 // number of columns
#define M 3. // number of rows
#import "GPUImageWarholFilter.h"
NSString *const kGPUImageWarholFragmentShaderString = SHADER_STRING
(
precision highp float;
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
    vec4 color = texture2D(inputImageTexture,
                           vec2(fract(textureCoordinate.x * N) / (M / N),
                                fract(textureCoordinate.y * M) / (M / N)));
    gl_FragColor = color;
}
);
Let's say your texture is getting applied to a quad and your vertices are:
float quad[] = {
    0.0, 0.0,
    1.0, 0.0,
    1.0, 1.0, // First triangle
    1.0, 1.0,
    0.0, 1.0,
    0.0, 0.0  // Second triangle
};
You can specify texture coordinates that correspond to each of these vertices and describe the relationship in your vertex array object.
Now, if you quadruple the number of vertices, you'll have access to midpoints between the extents of the quad (creating four sub-quads), and you can associate (0,0) -> (1,1) texture coordinates with each of these sub-quads. The effect would be a rendering of the full texture in each sub-quad.
You could then do math on your vertices in your vertex shader to calculate a color component to assign to each sub-quad. The color component would be passed to your fragment shader. Use uniforms to specify the number of rows and columns, then define the color component based on which cell you've calculated the current vertex to be in.
You could also try to compute everything in the fragment shader, but I think it could get expensive pretty fast. Just a hunch though.
You could also try the GL_REPEAT texture parameter, but you'll have less control of the outcome: https://open.gl/textures
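As a rough sketch of the "compute everything in the fragment shader" variant mentioned above (the per-tile tint formula is made up purely for illustration), the cell index can be derived from the texture coordinate and used to pick an overlay color:
#define N 3.0 // number of columns
#define M 3.0 // number of rows
precision highp float;
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
    // Which tile (column, row) this fragment falls in.
    vec2 cell = floor(textureCoordinate * vec2(N, M));
    // Repeat the full image inside each tile.
    vec2 tiledCoord = fract(textureCoordinate * vec2(N, M));
    vec4 color = texture2D(inputImageTexture, tiledCoord);
    // Placeholder per-tile tint: vary the color with the cell index.
    vec3 tint = vec3(0.5 + 0.5 * sin(cell.x * 2.1),
                     0.5 + 0.5 * sin(cell.y * 1.3 + 2.0),
                     0.5 + 0.5 * sin((cell.x + cell.y) * 1.7 + 4.0));
    gl_FragColor = vec4(color.rgb * tint, color.a);
}
The sin()-based tint is just a stand-in; any function of cell, or a color supplied per tile by the application, would work the same way.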

Phong Illumination in OpenGL website

I was reading through the following Phong illumination shader, which is on opengl.org:
Phong Illumination in Opengl.org
The vertex and fragment shaders were as follows:
vertex shader:
varying vec3 N;
varying vec3 v;
void main(void)
{
    v = vec3(gl_ModelViewMatrix * gl_Vertex);
    N = normalize(gl_NormalMatrix * gl_Normal);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
fragment shader:
varying vec3 N;
varying vec3 v;
void main (void)
{
    vec3 L = normalize(gl_LightSource[0].position.xyz - v);
    vec3 E = normalize(-v); // we are in Eye Coordinates, so EyePos is (0,0,0)
    vec3 R = normalize(-reflect(L,N));
    // calculate Ambient Term:
    vec4 Iamb = gl_FrontLightProduct[0].ambient;
    // calculate Diffuse Term:
    vec4 Idiff = gl_FrontLightProduct[0].diffuse * max(dot(N,L), 0.0);
    Idiff = clamp(Idiff, 0.0, 1.0);
    // calculate Specular Term:
    vec4 Ispec = gl_FrontLightProduct[0].specular
                 * pow(max(dot(R,E),0.0), 0.3*gl_FrontMaterial.shininess);
    Ispec = clamp(Ispec, 0.0, 1.0);
    // write Total Color:
    gl_FragColor = gl_FrontLightModelProduct.sceneColor + Iamb + Idiff + Ispec;
}
I was wondering about the way the viewer vector v is calculated here, because by multiplying the vertex position with gl_ModelViewMatrix, the result is in view (eye) space, and view coordinates are usually rotated compared to world coordinates.
So, we cannot simply subtract the light position from v to calculate the L vector, because they are not in the same coordinate system. Also, the result of the dot product between L and N won't be correct because their coordinate systems are not the same. Am I right about it?
"So, we cannot simply subtract the light position from v to calculate the L vector, because they are not in the same coordinate system. Also, the result of the dot product between L and N won't be correct because their coordinate systems are not the same. Am I right about it?"
No.
The gl_LightSource[0].position.xyz is not the value you set GL_POSITION to. The GL will automatically multiply the position by the current GL_MODELVIEW matrix at the time of the glLight() call. Lighting calculations are done completely in eye space in fixed-function GL. So both V and N have to be transformed to eye space, and gl_LightSource[].position will already be transformed to eye-space, so the code is correct and is actually not mixing different coordinate spaces.
The code you are using relies on deprecated functionality, using lots of the old fixed-function features of the GL, including that particular convention. In modern GL, those built-in uniforms and attributes do not exist, and you have to define your own - and you can interpret them as you like.
You could of course also ignore that convention and still use a different coordinate space for the lighting calculation with the built-ins, interpreting gl_LightSource[].position differently by simply choosing some other matrix when setting a position. (Typically, the light's world-space position is set while the GL_MODELVIEW matrix contains only the view transformation, so that the eye-space position of a world-stationary light source emerges, but you can do whatever you like.) However, the code as presented is meant to work as a "drop-in" replacement for the fixed-function pipeline, so it interprets those built-in uniforms and attributes the same way the fixed-function pipeline did.
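As a rough sketch of the modern-GL approach mentioned above (the uniform names are made up for illustration, not taken from the original code), the application transforms the light position into eye space itself and uploads it, instead of relying on gl_LightSource:
#version 150 core
// Hypothetical user-defined replacements for the fixed-function light state.
uniform vec3 lightPositionEye;   // light position already transformed to eye space by the application
uniform vec4 lightDiffuse;       // replaces gl_FrontLightProduct[0].diffuse
uniform vec4 lightAmbient;       // replaces gl_FrontLightProduct[0].ambient
in vec3 N;   // eye-space normal from the vertex shader
in vec3 v;   // eye-space position from the vertex shader
out vec4 fragColor;
void main()
{
    vec3 n = normalize(N);
    vec3 L = normalize(lightPositionEye - v); // both operands are in eye space
    vec4 Iamb  = lightAmbient;
    vec4 Idiff = clamp(lightDiffuse * max(dot(n, L), 0.0), 0.0, 1.0);
    fragColor = Iamb + Idiff;
}
The specular term would follow the same pattern; the key point is that the application decides the coordinate space of each uniform and must keep the shader consistent with it.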

Why is the default viewspace in OpenTK from -1 to 1?

I'm rendering a texture to screen with this code:
if (beganDraw)
{
    beganDraw = false;
    GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
    if (CameraMaterial != null)
    {
        GL.BindBuffer(BufferTarget.ArrayBuffer, screenMesh.VBO);
        GL.BindVertexArray(VAO);
        GL.BindBuffer(BufferTarget.ElementArrayBuffer, screenMesh.VEO);
        CameraMaterial.Use();
        screenMesh.ApplyDrawHints(CameraMaterial.Shader);
        GL.DrawElements(PrimitiveType.Triangles, 6, DrawElementsType.UnsignedInt, 0);
        GL.BindBuffer(BufferTarget.ElementArrayBuffer, 0);
        GL.BindVertexArray(0);
        GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
        GL.UseProgram(0);
    }
}
As you see there is no transformation matrix.
I create the mesh to render the surface like this:
screenMesh = new Mesh();
screenMesh.SetVertices(new float[] {
    -1, -1,
     1, -1,
     1,  1,
    -1,  1
});
screenMesh.SetIndices(new uint[] {
    2, 3, 0,
    0, 1, 2
});
And my question is: why do I have to go from -1 to 1 in order to fill the screen? Shouldn't it default to 0 to 1? Also, how can I make it go from 0 to 1? Or is that even advised?
This is the shader:
[Shader vertex]
#version 150 core
in vec2 pos;
out vec2 texCoord;
uniform float _time;
uniform sampler2D tex;
void main() {
    gl_Position = vec4(pos, 0, 1);
    texCoord = pos/2+vec2(0.5,0.5);
}
[Shader fragment]
#version 150 core
#define PI 3.1415926535897932384626433832795
out vec4 outColor;
uniform float _time;
uniform sampler2D tex;
in vec2 texCoord;
//
void main() {
    outColor = texture2D(tex, texCoord);
}
The OpenTK GL. calls are just a thin layer on top of OpenGL. So your question is really about OpenGL coordinate systems.
After you applied all the transformations in your vertex shader, and assigned the desired vertex coordinates to gl_Position, those coordinates are in what the OpenGL documentation calls clip coordinates. They are then divided by the w component of the 4-component coordinate vector to obtain normalized device coordinates (commonly abbreviated NDC).
NDC are in the range [-1.0, 1.0] for each coordinate direction. I don't know what the exact reasoning was, but this is just the way it was defined when OpenGL was originally designed. I always thought it was kind of natural to have the origin of the coordinate system in the center of the view for 3D rendering, so it seems at least as reasonable as anything else that could have been used.
Since you're not applying any transformations in your vertex shader, and the w component of your coordinates is 1.0, your input positions need to be in NDC, which means a range of [-1.0, 1.0].
You can easily use a different range if you apply the corresponding mapping/transformation in your vertex shader. If you like to use [0.0, 1.0] for your x- and y-ranges, you simply add that mapping to your vertex shader by changing the value you assign to gl_Position:
gl_Position = vec4(2.0 * pos - 1.0, 0.0, 1.0);
This linearly maps 0.0 to -1.0, and 1.0 to 1.0.
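Applied to the vertex shader from the question, a [0.0, 1.0] position range would look roughly like this (a sketch: the quad's vertex array would also need to change from the -1..1 values to 0..1, and the texture-coordinate mapping simplifies because the positions already span [0, 1]):
[Shader vertex]
#version 150 core
in vec2 pos;       // now expected in the range [0, 1]
out vec2 texCoord;
void main() {
    // Map [0, 1] input positions to the [-1, 1] range of normalized device coordinates.
    gl_Position = vec4(2.0 * pos - 1.0, 0, 1);
    // The positions already cover [0, 1], so they can serve directly as texture coordinates.
    texCoord = pos;
}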

Wide lines in geometry shader behaves oddly

I'm trying to render arbitrarily wide lines (in screen space) using a geometry shader. At first everything seems fine, but from certain view positions the lines are rendered incorrectly:
The image on the left shows the correct rendering (three lines on the positive X, Y and Z axes, 2 pixels wide).
When the camera moves near the origin (and thus near the lines), the lines are rendered like in the right image. The shader seems straightforward, and I don't understand what's going on in my GPU:
--- Vertex Shader
#version 410 core
// Modelview-projection matrix
uniform mat4 ds_ModelViewProjection;
// Vertex position
in vec4 ds_Position;
// Vertex color
in vec4 ds_Color;
// Processed vertex color
out vec4 ds_VertexColor;
void main()
{
    gl_Position = ds_ModelViewProjection * ds_Position;
    ds_VertexColor = ds_Color;
}
--- Geometry Shader
#version 410 core
// Viewport size, in pixels
uniform vec2 ds_ViewportSize;
// Line width, in pixels
uniform float ds_LineWidth = 2.0;
// Processed vertex color (from VS, in clip space)
in vec4 ds_VertexColor[2];
// Processed primitive vertex color
out vec4 ds_GeoColor;
layout (lines) in;
layout (triangle_strip, max_vertices = 4) out;
void main()
{
    vec3 ndc0 = gl_in[0].gl_Position.xyz / gl_in[0].gl_Position.w;
    vec3 ndc1 = gl_in[1].gl_Position.xyz / gl_in[1].gl_Position.w;
    vec2 lineScreenForward = normalize(ndc1.xy - ndc0.xy);
    vec2 lineScreenRight = vec2(-lineScreenForward.y, lineScreenForward.x);
    vec2 lineScreenOffset = (vec2(ds_LineWidth) / ds_ViewportSize) * lineScreenRight;
    gl_Position = vec4(ndc0.xy + lineScreenOffset, ndc0.z, 1.0);
    ds_GeoColor = ds_VertexColor[0];
    EmitVertex();
    gl_Position = vec4(ndc0.xy - lineScreenOffset, ndc0.z, 1.0);
    ds_GeoColor = ds_VertexColor[0];
    EmitVertex();
    gl_Position = vec4(ndc1.xy + lineScreenOffset, ndc1.z, 1.0);
    ds_GeoColor = ds_VertexColor[1];
    EmitVertex();
    gl_Position = vec4(ndc1.xy - lineScreenOffset, ndc1.z, 1.0);
    ds_GeoColor = ds_VertexColor[1];
    EmitVertex();
    EndPrimitive();
}
--- Fragment Shader
// Processed primitive vertex color
in vec4 ds_GeoColor;
// The fragment color.
out vec4 ds_FragColor;
void main()
{
    ds_FragColor = ds_GeoColor;
}
Your mistake is in this:
gl_Position = vec4(ndc0.xy + lineScreenOffset, ndc0.z, 1.0 /* WRONG */);
To fix it:
vec4 cpos = gl_in[0].gl_Position;
gl_Position = vec4(cpos.xy + lineScreenOffset*cpos.w, cpos.z, cpos.w);
What the original code does is lose the information about W, and thus detune the hardware clipper, downgrading it from a 3D clipper into a 2D one.
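Applying that correction to every vertex emitted by the geometry shader from the question gives roughly the following (a sketch, not a tested drop-in; the uniform names are the ones used above, and degenerate cases with w near zero in the direction computation would still need extra care):
#version 410 core
uniform vec2 ds_ViewportSize;
uniform float ds_LineWidth = 2.0;
in vec4 ds_VertexColor[2];
out vec4 ds_GeoColor;
layout (lines) in;
layout (triangle_strip, max_vertices = 4) out;
void main()
{
    vec4 cpos0 = gl_in[0].gl_Position;
    vec4 cpos1 = gl_in[1].gl_Position;
    // Screen-space direction of the line, still computed from NDC.
    vec2 ndc0 = cpos0.xy / cpos0.w;
    vec2 ndc1 = cpos1.xy / cpos1.w;
    vec2 lineScreenForward = normalize(ndc1 - ndc0);
    vec2 lineScreenRight = vec2(-lineScreenForward.y, lineScreenForward.x);
    vec2 lineScreenOffset = (vec2(ds_LineWidth) / ds_ViewportSize) * lineScreenRight;
    // Offsets are added back in clip space, scaled by w, so the hardware
    // clipper still receives proper 4D clip coordinates.
    gl_Position = vec4(cpos0.xy + lineScreenOffset * cpos0.w, cpos0.z, cpos0.w);
    ds_GeoColor = ds_VertexColor[0];
    EmitVertex();
    gl_Position = vec4(cpos0.xy - lineScreenOffset * cpos0.w, cpos0.z, cpos0.w);
    ds_GeoColor = ds_VertexColor[0];
    EmitVertex();
    gl_Position = vec4(cpos1.xy + lineScreenOffset * cpos1.w, cpos1.z, cpos1.w);
    ds_GeoColor = ds_VertexColor[1];
    EmitVertex();
    gl_Position = vec4(cpos1.xy - lineScreenOffset * cpos1.w, cpos1.z, cpos1.w);
    ds_GeoColor = ds_VertexColor[1];
    EmitVertex();
    EndPrimitive();
}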
Today I found the answer myself. I don't understand it entirely, but it solved the problem.
The problem arises when the line vertices go beyond the near plane of the projection matrix defined for the scene (in my case, the ending vertices of all three lines). The solution is to manually clip the line vertices against the view frustum (that way the vertices cannot go beyond the near plane!).
What happens to ndc0 and ndc1 when they are outside the view frustum? Looking at the images, it seems that the XY components have their sign flipped (after they are transformed to clip space!): that would mean the W coordinate has the opposite sign of the normal one, wouldn't it?
Without the geometry shader, the rasterizer would have taken responsibility for clipping primitives outside the view frustum, but since I've introduced the geometry shader, I need to handle this myself. Can anyone suggest some links about this topic?
I met a similar problem when trying to draw vertex normals as colored lines. The way I was drawing the normals was to draw all the vertices as points and then use the GS to expand each vertex into a line. The GS was straightforward, but I found random incorrect lines running through the entire screen. Then I added the check below into the GS (marked by the comment), and the problem was fixed. It seems the problem occurred because one end of the line was inside the frustum while the other was outside, so I ended up with lines running across the entire screen.
// Draw normal of a vertex by expanding a vertex into a line
[maxvertexcount(2)]
void GSExpand2( point PointVertex points[ 1 ], inout LineStream< PointInterpolants > stream )
{
    PointInterpolants v;
    float4 pos0 = mul(float4(points[0].pos, 1), g_viewproj);
    pos0 /= pos0.w;
    float4 pos1 = mul(float4(points[0].pos + points[0].normal * 0.1f, 1), g_viewproj);
    pos1 /= pos1.w;
    // seems like I need to manually clip the lines, otherwise I end up with
    // incorrect lines running across the entire screen
    if (pos0.z < 0 || pos1.z < 0 || pos0.z > 1 || pos1.z > 1)
        return;
    v.color = float3(0, 0, 1);
    v.pos = pos0;
    stream.Append(v);
    v.color = float3(1, 0, 0);
    v.pos = pos1;
    stream.Append(v);
    stream.RestartStrip();
}