OpenGL 440 - controlling line thickness in the fragment shader

I am drawing a 3D spherical grid in OpenGL using a VBO of vertex points and GL_LINES. What I want to achieve is to have one line - the zenith - brighter than the rest.
I obviously store x, y, z coordinates and normals, and figured I might be able to use the texture coordinates to "tag" locations where, at creation time, the y coordinate is 0. Like so:
if (round(y) == 0.0f) {
    _varray[nr].tex[0] = -1.0; // setting the s variable (of the s,t texcoords)
                               // passed in with the VBO
}
Now in the fragment shader I receive this value and do:
if (vs_st[0] == -1) {
    diffuse = gridColor * 2.f;
} else {
    diffuse = gridColor;
}
And the result looks kind of awful:
(screenshot)
I realize that this is probably due to the fragment shader having to interpolate between two points. Can you guys think of a good way to identify the zenith line and make it brighter? I'd rather avoid using geometry shaders...

The solution was this:
if (round(y) == 0.0f) _varray[nr].tex[0] = -2; // set an arbitrary tag value
And then do not set that variable anywhere else! Then in the fragment shader:
if (floor(vs_st[0]) == -2) {
    diffuse = gridColor * 2.f;
} else {
    diffuse = gridColor;
}
Don't know how neat that is, but it works.
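For completeness, a minimal sketch of how the tag travels through the pipeline, assuming the texcoord attribute is called inTexCoord, the matrix uniform is uMVP, and gridColor is a vec4 uniform (these names are illustrative, not from the original code):

// vertex shader - forwards the tag untouched
#version 440
layout(location = 0) in vec3 inPosition;
layout(location = 2) in vec2 inTexCoord;   // s == -2 only on zenith vertices
uniform mat4 uMVP;
out vec2 vs_st;
void main()
{
    vs_st = inTexCoord;
    gl_Position = uMVP * vec4(inPosition, 1.0);
}

// fragment shader - floor() keeps the test stable under interpolation,
// because -2 is never assigned anywhere else
#version 440
in vec2 vs_st;
uniform vec4 gridColor;
out vec4 fragColor;
void main()
{
    vec4 diffuse = (floor(vs_st[0]) == -2.0) ? gridColor * 2.0 : gridColor;
    fragColor = diffuse;
}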

Related

Simple, most basic SSR in GLSL

I'm experimenting, trying to implement "as simple as possible" SSR in GLSL. Any chance someone could please help me set up an extremely basic SSR shader?
I do not need (for now) any roughness/metalness calculations, no Fresnel, no fading in-out effect, nothing fancy - I just want the most simple setup that I can understand, learn and maybe later improve upon.
I have 4 source textures: Color, Position, Normal, and a copy of the previous final frame's image (Reflection)
attributes.position is the position texture (FLOAT16F) - WorldSpace
attributes.normal is the normal texture (FLOAT16F) - WorldSpace
sys_CameraPosition is the eye's position in WorldSpace
rd is supposed to be the reflection direction in WorldSpace
texture(SOURCE_2, projectedCoord.xy) is the position texture at the current reflection-dir endpoint
vec3 rd = normalize(reflect(attributes.position - sys_CameraPosition, attributes.normal));
vec2 uvs;
vec4 projectedCoord;

for (int i = 0; i < 10; i++)
{
    // Calculate screen space position from ray's current world position:
    projectedCoord = sys_ProjectionMatrix * sys_ViewMatrix * vec4(attributes.position + rd, 1.0);
    projectedCoord.xy /= projectedCoord.w;
    projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;

    // this bit is tripping me up
    if (distance(texture(SOURCE_2, projectedCoord.xy).xyz, (attributes.position + rd)) > 0.1)
    {
        rd += rd;
    }
    else
    {
        uvs = projectedCoord.xy;
        break;
    }
}
out_color += texture(SOURCE_REFL, uvs).rgb;
Is this even possible using world-space coordinates? When I multiply the first pass' outputs with the view matrix as well as the model matrix, my light calculations go tits up, because they are also in world space...
Unfortunately there are no basic SSR tutorials on the internet that explain just the SSR bit and nothing else, so I thought I'd give it a shot here. I really can't seem to get my head around this...
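For comparison, here is a minimal sketch of the fixed-step world-space march described above, not a drop-in fix; maxSteps, stepSize and the thickness threshold are illustrative values that are not in the question:

vec3 rd = normalize(reflect(attributes.position - sys_CameraPosition, attributes.normal));
vec3 samplePos = attributes.position;
vec2 uvs = vec2(-1.0);                  // sentinel: no hit found yet
const int   maxSteps  = 32;             // assumed march length
const float stepSize  = 0.25;           // assumed world-space step
const float thickness = 0.1;            // assumed hit tolerance

for (int i = 0; i < maxSteps; i++)
{
    // advance the sample point along the ray instead of growing rd itself
    samplePos += rd * stepSize;

    // project the current world-space sample into screen space
    vec4 projectedCoord = sys_ProjectionMatrix * sys_ViewMatrix * vec4(samplePos, 1.0);
    projectedCoord.xy /= projectedCoord.w;
    projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;

    // compare against the world-space position stored in the G-buffer
    vec3 scenePos = texture(SOURCE_2, projectedCoord.xy).xyz;
    if (distance(scenePos, samplePos) < thickness)
    {
        uvs = projectedCoord.xy;        // hit: remember where to fetch the reflection
        break;
    }
}

if (uvs.x >= 0.0)
    out_color += texture(SOURCE_REFL, uvs).rgb;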

How can I draw surface normals in OpenGL?

I have a vertex format which are all floats, and looks like this:
POSITION POSITION POSITION NORMAL NORMAL NORMAL TEXCOORD TEXCOORD
I was thinking I need to draw lines from the first three floats to the next three floats, then skip the next two floats and continue on. Is there any way of doing this without creating another buffer, in the correct layout, for each object?
I know I could draw just one line per draw call and loop over them, but that is a lot of draw calls. What is the usual way to draw normals for things like debugging?
Also, I've thought about indexing, but indexing only helps select specific vertices; in this case I want to draw between two attributes of my regular vertex layout.
This cannot be done just by setting an appropriate glVertexAttribPointer, since you have to skip the texcoords. Additionally, you don't want to draw a line from position to normal, but from position to position + normal, since normals just describe a direction, not a point in space.
What you can do is use a geometry shader. Basically, you set up two attributes, one for position and one for normal (as you would do for rendering the model), and issue a draw command with the GL_POINTS primitive type. In the geometry shader you then generate a line from position to position + normal.
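A minimal sketch of that approach, under the assumption that the uniforms are called uModelView, uProjection and uNormalLength (none of these names come from the question):

// --- vertex shader ---
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;
uniform mat4 uModelView;
out vec3 vNormal;
void main()
{
    // keep everything in view space; the geometry shader does the projection
    gl_Position = uModelView * vec4(aPosition, 1.0);
    vNormal = mat3(uModelView) * aNormal;   // fine for a debug sketch (assumes uniform scale)
}

// --- geometry shader ---
#version 330 core
layout(points) in;
layout(line_strip, max_vertices = 2) out;
in vec3 vNormal[];
uniform mat4 uProjection;
uniform float uNormalLength;                // e.g. 0.1
void main()
{
    vec4 p = gl_in[0].gl_Position;          // view-space position of the point
    gl_Position = uProjection * p;          // line start: the vertex itself
    EmitVertex();
    gl_Position = uProjection * (p + vec4(normalize(vNormal[0]) * uNormalLength, 0.0));
    EmitVertex();                           // line end: position + normal
    EndPrimitive();
}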
Normally, to draw surface normals you would set up either a separate buffer or a geometry shader to do the work. Setting up a separate buffer for a mesh to draw just the normals is trivial and doesn't require a draw call for every normal; all of your surface normals can be drawn in a single draw call.
Since you'll be doing it for debugging purposes, there's no need to worry too much about performance; just stick with the quicker method that gets things on screen.
The way I'd personally do it depends on whether the mesh has vertex or face normals. For vertex normals, we could for instance fill a buffer with a line for each vertex in the mesh, whose offset from the vertex itself represents the normal you need to debug, using the following pseudocode:
var normal_buffer = [];
// tweak to your liking
var normal_length = 10.0;
// this assumes your mesh has 2 arrays of the same length,
// containing structs of vertices and normals
for (var i = 0; i < mesh.vertices.length; i++) {
    // retrieve the normal associated with this vertex
    var nx = mesh.normals[i].x;
    var ny = mesh.normals[i].y;
    var nz = mesh.normals[i].z;
    // retrieve the vertex itself, it'll be the first point of our line
    var v1x = mesh.vertices[i].x;
    var v1y = mesh.vertices[i].y;
    var v1z = mesh.vertices[i].z;
    // second point of our line, representing the normal direction
    var v2x = v1x + nx * normal_length;
    var v2y = v1y + ny * normal_length;
    var v2z = v1z + nz * normal_length;
    normal_buffer.push(v1x, v1y, v1z, v2x, v2y, v2z);
}
You can later proceed as usual: attach the buffer to a vertex buffer object and use whatever program you like to issue one single draw call that draws all of your mesh normals:
vertbuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertbuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(normal_buffer), gl.STATIC_DRAW);
/* later on in your program */
gl.drawArrays(gl.LINES, 0, normal_buffer.length / 3);
A cool feature of normal debugging is that you can use the normal itself as an output color in a fragment shader, to quickly check whether it points in the expected direction.
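For example, a tiny WebGL-style fragment shader doing just that, assuming the interpolated normal arrives in a varying called vNormal:

precision mediump float;
varying vec3 vNormal;
void main()
{
    // remap from [-1,1] to [0,1] so the direction shows up as a colour
    gl_FragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}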

Deferred Rendering Shadows Strange Behaviour (almost working, not acne)

I have a simple deferred rendering setup and I'm trying to get depth-map shadows to work. I know that I have a simple error in my code, because I can see the shadows working, but they seem to move and lose resolution when I move my camera. This is not a shadow acne problem; the shadow is shifted entirely. I have also seen most of the similar questions here, but none solve my problem.
In my final shader, I have a texture for world space positions of pixels, and the depth map rendered from the light source, as well as the light source's model-view-projection matrix. The steps I take are:
Get worldspace position of pixel from pre-rendered pass.
Multiply the world-space position with the light source's model-view-projection matrix. I am using an orthographic projection (a directional light).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-0.0025, 0.0025, -0.0025, 0.0025, -0.1, 0.1);
glGetFloatv(GL_PROJECTION_MATRIX,m_lightViewProjectionMatrix);
I place the directional light at a fixed distance around my hero. Here is how I get my light's modelview matrix:
CSRTTransform lightTrans = CSRTTransform();
CVector camOffset = CVector(m_vPlanet[0].m_vLightPos);
camOffset.Normalize();
camOffset *= 0.007;
camOffset += hero.GetLocation();
CVector LPN = m_vLightPos*(-1); LPN.Normalize();
CQuaternion lRot;
lRot = CQuaternion( CVector(0,1,0), asin(LPN | CVector(-1,0,0) ) );
lightTrans.m_qRotate = lRot;
lightTrans.m_vTranslate = camOffset;
m_worldToLightViewMatrix = lightTrans.BuildViewMatrix();
And the final light mvp matrix is:
CMatrix4 lightMVPMatrix = m_lightViewProjectionMatrix * m_worldToLightViewMatrix;
m_shHDR.SetUniformMatrix("LightMVPMatrix", lightMVPMatrix);
For now I am only rendering the shadow casters in my shadow pass. As you can see, the shadow pass seems fine: my hero is centered in the frame and rotated correctly. It is worth noting that these two matrices are also passed to the hero's vertex shader, and since the hero renders correctly in the shadow pass (shown here with more contrast for visibility), they seem to be correct.
https://lh3.googleusercontent.com/-pxBZ5jnmlfM/V0SBT75yB1I/AAAAAAAABEY/007j_toVO7M41iyEiEnJgvr7K1m5GSceQCCo/s1024/shadow_pass.jpg
And finally, in my deferred shader I do:
vec4 projectedWorldPos = LightMVPMatrix * vec4(worldPos,1.0);
vec3 projCoords = projectedWorldPos.xyz; // / projectedWorldPos.w;
projCoords.xyz = (projCoords.xyz * 0.5) + 0.5;
float calc_depth = projCoords.z;
float tD = texture2DRect(shadowPass,vec2(projCoords.x*1280.0,projCoords.y*720.0)).r ;
return ( calc_depth < tD ) ;
I have the /w division commented out because I'm using an orthographic projection. Having it in produces the same result anyway.
https://lh3.googleusercontent.com/-b2i7AD_Nnf0/V0SGPyfelsI/AAAAAAAABFE/mvWnhcdQSbsU3l8sd0974jWDA94r6PkxACCo/s1024/render3.jpg
In certain positions for my hero (close to initial position) (1), the shadow looks fine. But as soon as I move the hero, the shadow moves incorrectly (2), and the further away I move, the shadow starts to lose resolution (3).
It is worth noting all my textures and passes are the same size (1280x720). I'm using an NVidia graphics card. The problem seems to be matrix related, but as mentioned, the shadow pass renders OK so I'm not sure what's going on...

WebGL: Particle engine using FBO, how to correctly write and sample particle positions from a texture?

I suspect I'm not correctly rendering particle positions to my FBO, or correctly sampling those positions when rendering, though that may not be the actual problem with my code, admittedly.
I have a complete jsfiddle here: http://jsfiddle.net/p5mdv/53/
A brief overview of the code:
Initialization:
Create an array of random particle positions in x,y,z
Create an array of texture sampling locations (e.g. for 2 particles, first particle at 0,0, next at 0.5,0)
Create a Frame Buffer Object and two particle position textures (one for input, one for output)
Create a full-screen quad (-1,-1 to 1,1)
Particle simulation:
Render a full-screen quad using the particle program (bind frame buffer, set viewport to the dimensions of my particle positions texture, bind input texture, and draw a quad from -1,-1 to 1,1). Input and output textures are swapped each frame.
Particle fragment shader samples the particle texture at the current fragment position (gl_FragCoord.xy), makes some modifications, and writes out the modified position
Particle rendering:
Draw using the vertex buffer of texture sampling locations
Vertex shader uses the sampling location to sample the particle position texture, then transforms them using view projection matrix
Draw the particle using a sprite texture (gl.POINTS)
Questions:
Am I correctly setting the viewport for the FBO in the particle simulation step? I.e. am I correctly rendering a full-screen quad?
// 6 2D vertices = 12 floats
var vertexBuffer = new Float32Array(12);
// -1,-1 to 1,1 screen quad
vertexBuffer[0] = -1;
vertexBuffer[1] = -1;
vertexBuffer[2] = -1;
vertexBuffer[3] = 1;
vertexBuffer[4] = 1;
vertexBuffer[5] = 1;
vertexBuffer[6] = -1;
vertexBuffer[7] = -1;
vertexBuffer[8] = 1;
vertexBuffer[9] = 1;
vertexBuffer[10] = 1;
vertexBuffer[11] = -1;
// Create GL buffers with this data
g.particleSystem.vertexObject = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, g.particleSystem.vertexObject);
gl.bufferData(gl.ARRAY_BUFFER, vertexBuffer, gl.STATIC_DRAW);
...
gl.viewport(0, 0,
            g.particleSystem.particleFBO.width,
            g.particleSystem.particleFBO.height);
...
// Set the quad as vertex buffer
gl.bindBuffer(gl.ARRAY_BUFFER, g.screenQuad.vertexObject);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
// Draw!
gl.drawArrays(gl.TRIANGLES, 0, 6);
Am I correctly setting the texture coordinates to sample the particle positions?
for (var i = 0; i < numParticles; i++)
{
    // Coordinates of particle within texture (normalized)
    var texCoordX = Math.floor(i % texSize.width) / texSize.width;
    var texCoordY = Math.floor(i / texSize.width) / texSize.height;
    particleIndices[ pclIdx ] = texCoordX;
    particleIndices[ pclIdx + 1 ] = texCoordY;
    particleIndices[ pclIdx + 2 ] = 1; // not used in shader
}
The relevant shaders:
Particle simulation fragment shader:
precision mediump float;
uniform sampler2D mParticleTex;
void main()
{
// Current pixel is the particle's position on the texture
vec2 particleSampleCoords = gl_FragCoord.xy;
vec4 particlePos = texture2D(mParticleTex, particleSampleCoords);
// Move the particle up
particlePos.y += 0.1;
if(particlePos.y > 2.0)
{
// Reset
particlePos.y = -2.0;
}
// Write particle out to texture
gl_FragColor = particlePos;
}
Particle rendering vertex shader:
attribute vec4 vPosition;
uniform mat4 u_modelViewProjMatrix;
uniform sampler2D mParticleTex;
void main()
{
vec2 particleSampleCoords = vPosition.xy;
vec4 particlePos = texture2D(mParticleTex, particleSampleCoords);
gl_Position = u_modelViewProjMatrix * particlePos;
gl_PointSize = 10.0;
}
Let me know if there's a better way to go about debugging this, if nothing else. I'm using webgl-debug to find gl errors and logging what I can to the console.
Your quad is facing away from the view, so I tried adding gl.disable(gl.CULL_FACE), but still no result.
Then I noticed that while resizing the window panel with the canvas, it actually shows one black, square-shaped particle. So it seems that the rendering loop is not right.
If you look at the console log, it fails to load the particle image, and it also says that the FBO size is 512x1, which is not good.
Some function declarations do not exist, such as getTexSize. (?!)
The code needs tidying and grouping, and always check the console if you're already using it.
Hope this helps a bit.
Found the problem.
gl_FragCoord ranges from [0,0] to [screenwidth, screenheight]; I was wrongly thinking it was from [0,0] to [1,1].
I had to pass in shader variables for width and height, then normalize the sample coordinates before sampling from the texture.
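For reference, a sketch of that fix applied to the simulation fragment shader above; the uniform names uTexWidth and uTexHeight are assumptions:

precision mediump float;
uniform sampler2D mParticleTex;
uniform float uTexWidth;
uniform float uTexHeight;

void main()
{
    // gl_FragCoord is in pixels, so divide by the texture size to get [0,1] coords
    vec2 particleSampleCoords = gl_FragCoord.xy / vec2(uTexWidth, uTexHeight);
    vec4 particlePos = texture2D(mParticleTex, particleSampleCoords);

    // Move the particle up, reset when it leaves the range
    particlePos.y += 0.1;
    if (particlePos.y > 2.0)
        particlePos.y = -2.0;

    gl_FragColor = particlePos;
}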

Light on everything is gone (DirectX)

When I turn off the light, I can see my object, but without the 3D lighting.
I set my object's position to (0, 0, 10).
Here is my code to set up my light:
D3DLIGHT9 light;
ZeroMemory( &light, sizeof(D3DLIGHT9) );
light.Type = D3DLIGHT_DIRECTIONAL;
light.Diffuse.r = 1.0f;
light.Diffuse.g = 1.0f;
light.Diffuse.b = 1.0f;
light.Diffuse.a = 1.0f;
light.Range = 1000.0f;
// Create a direction for our light - it must be normalized
D3DXVECTOR3 vecDir;
vecDir = D3DXVECTOR3(0.0f,10.0f,10);
D3DXVec3Normalize( (D3DXVECTOR3*)&light.Direction, &vecDir );
// Tell the device about the light and turn it on
d3ddev->SetLight( 0, &light );
d3ddev->LightEnable( 0, TRUE );
We can't possibly tell you exactly what the problem is without a more complete source listing. Having said that, I have an idea.
Check the normals on your object. If your normals are incorrect, turning on lighting might cause the object to render as black, which may make it disappear if your background is black.
There are a few things to check. The main thing is whether you set D3DRS_LIGHTING to TRUE.
Another thing to check is your material settings. Set D3DRS_COLORVERTEX to TRUE. Also, if you don't have vertex colours, you will need to set D3DRS_DIFFUSEMATERIALSOURCE to D3DMCS_MATERIAL and set a material with a call to SetMaterial.
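A sketch of those calls, using the same d3ddev device as in the question; the material values are only illustrative:

// enable fixed-function lighting and take the diffuse colour from the material
d3ddev->SetRenderState( D3DRS_LIGHTING, TRUE );
d3ddev->SetRenderState( D3DRS_COLORVERTEX, TRUE );
d3ddev->SetRenderState( D3DRS_DIFFUSEMATERIALSOURCE, D3DMCS_MATERIAL );

// a plain white material so the directional light is visible
D3DMATERIAL9 mtrl;
ZeroMemory( &mtrl, sizeof(D3DMATERIAL9) );
mtrl.Diffuse.r = mtrl.Ambient.r = 1.0f;
mtrl.Diffuse.g = mtrl.Ambient.g = 1.0f;
mtrl.Diffuse.b = mtrl.Ambient.b = 1.0f;
mtrl.Diffuse.a = mtrl.Ambient.a = 1.0f;
d3ddev->SetMaterial( &mtrl );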
I always found the lighting pipeline infuriatingly complicated in D3D9. I HIGHLY recommend just using shaders, as fixed-function is not used any more.
Are you rendering in a shader? If so, the fixed-function lighting will not work; you'll have to code your own lighting into the shader you're using to render the model.