Multitarget rendering in two different spaces in one shader - opengl

I'm writing a program that works with a model in world space. In the shader I want to produce two textures:
One in world space, for postprocessing.
One in screen space (as a UV unwrap), for saving the shader's output in the current scene into a new texture for the object.
I can't figure out how to do this in one shader with multitarget rendering.
In the vertex shader I set the position for both spaces:
layout (location = 0) in vec3 position;
out vec4 scrn_Position;
...
gl_Position = projection * view * model * vec4(position,1.0);
scrn_Position = model * vec4(position,1.0);
In the fragment shader I set two outputs:
in vec4 scrn_Position;
...
layout (location = 0) out vec4 wrld_Color;
layout (location = 1) out vec4 scrn_Color;
But how do I work with scrn_Position in the fragment shader? Is it possible to do this in one pass?
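Since scrn_Position is a regular interpolated input, the fragment shader can read it like any other varying and write both outputs in the same invocation. A minimal sketch, assuming both targets are attached to the current framebuffer as GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1 and enabled with glDrawBuffers (the shading logic here is a placeholder):

#version 330 core
in vec4 scrn_Position;
layout (location = 0) out vec4 wrld_Color;
layout (location = 1) out vec4 scrn_Color;
void main()
{
    // scrn_Position arrives interpolated per fragment; since it was set to
    // model * position, it holds this fragment's world-space position
    vec3 worldPos = scrn_Position.xyz;
    wrld_Color = vec4(worldPos, 1.0);              // placeholder: visualize the position
    scrn_Color = vec4(worldPos * 0.5 + 0.5, 1.0);  // placeholder: a remapped variant
}

Note, however, that all outputs of a multitarget pass are rasterized at the single position given by gl_Position, so the two attachments cannot have different layouts (such as a UV unwrap) in one pass; baking into texture space normally takes a second pass in which the mesh's UV coordinates are fed to gl_Position.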

Related

GLSL vertex shader breaks when trying to access gl_Position

In trying to create a weak depth effect for a wireframe model, where distance from the camera plane changes the colour, I first tried
gl_Position = camMatrix * vec4(aPos, 1.0);
color = vec4(0.0f,pow(2.0f, -gl_Position.z),0.0f,1.0f);
However, this renders the wireframe entirely black, with its edges touching the edges of the window, and my camera controls have no effect.
If I instead recalculate the depth when computing the colour vector, i.e.
color = vec4(0.0f,pow(2.0f, -(camMatrix * vec4(aPos, 1.0)).z),0.0f,1.0f);
everything works fine. Is accessing gl_Position invalid in some way? This is with GLSL version 330 core.
The entire shader file is
#version 330 core
layout (location = 0) in vec3 aPos;
out vec4 color;
uniform mat4 camMatrix;
void main()
{
gl_Position = camMatrix * vec4(aPos, 1.0);
color = vec4(0.0f,pow(2.0f, -(camMatrix * vec4(aPos, 1.0)).z),0.0f,1.0f);
}
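
Computing the position once in a local variable and deriving both gl_Position and the colour from it sidesteps the read-back entirely, and also avoids the duplicate matrix multiply from the working version. A sketch of that workaround:

#version 330 core
layout (location = 0) in vec3 aPos;
out vec4 color;
uniform mat4 camMatrix;
void main()
{
    // store the transformed position in a local, write it to gl_Position,
    // and compute the colour from the local instead of reading gl_Position back
    vec4 pos = camMatrix * vec4(aPos, 1.0);
    gl_Position = pos;
    color = vec4(0.0, pow(2.0, -pos.z), 0.0, 1.0);
}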

How to change gl_PointSize in the vertex shader?

I'm optimizing my particle renderer to work with GL_POINTS, and now I need to adjust the size of the points using gl_PointSize in the vertex shader so the particles are scaled the right amount.
This is the vertex shader I have now:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in uint uv;
uniform mat4 projection;
uniform mat4 view;
void main(){
gl_PointSize = 10; // No difference with gl_PointSize = 1000
gl_Position = projection * view * vec4(position, 1.0);
}
Changing gl_PointSize in the vertex shader doesn't seem to make any difference.
You have to enable GL_PROGRAM_POINT_SIZE (see glEnable and gl_PointSize):
glEnable(GL_PROGRAM_POINT_SIZE);
OpenGL Reference - gl_PointSize
Description:
In the vertex, tessellation evaluation and geometry languages, a single global instance of the gl_PerVertex named block is available and its gl_PointSize member is an output that receives the intended size of the point to be rasterized, in pixels. It may be written at any time during shader execution. If GL_PROGRAM_POINT_SIZE is enabled, gl_PointSize is used to determine the size of rasterized points, otherwise it is ignored by the rasterization stage.
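
With the enable set, the written value takes effect. If the goal is particles that shrink with distance, one common pattern is to derive gl_PointSize from the clip-space w component; a sketch, where pointWorldSize and viewportHeight are hypothetical uniforms you would supply (desired particle size in world units and window height in pixels):

#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 projection;
uniform mat4 view;
uniform float pointWorldSize;  // hypothetical: particle diameter in world units
uniform float viewportHeight;  // hypothetical: window height in pixels
void main(){
    vec4 pos = projection * view * vec4(position, 1.0);
    gl_Position = pos;
    // pos.w is the view-space depth, so dividing by it scales points
    // inversely with distance; projection[1][1] is the projection's y scale,
    // and the 0.5 converts the 2-unit NDC span to the pixel span
    gl_PointSize = 0.5 * viewportHeight * projection[1][1] * pointWorldSize / pos.w;
}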

opengl glsl bug in which model goes invisible if i use glsl texture function with different parameters

I want to replicate a game. The goal of the game is to create a path between any two squares that have the same color.
Here is the game: www.mypuzzle.org/3d-logic-2
The cube has 6 faces. Each face has 3×3 squares.
The cube has different square types: empty squares (which reflect the environment), wall squares (you can't color them), and start/finish squares (which have a black square in the middle, with the rest colored).
I'm close to finishing my project, but I'm stuck on a bug. I'm using C++, SFML, OpenGL, and GLM.
The problem is in the shaders.
Vertex shader:
#version 330 core
layout (location = 0) in vec3 vPosition;
layout (location = 1) in vec3 vColor;
layout (location = 2) in vec2 vTexCoord;
layout (location = 3) in float vType;
layout (location = 4) in vec3 vNormal;
out vec3 Position;
out vec3 Color;
out vec2 TexCoord;
out float Type;
out vec3 Normal;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(vPosition, 1.0f);
Position = vec3(model * vec4(vPosition, 1.0f));
Color=vColor;
TexCoord=vTexCoord;
Type=vType;
Normal = mat3(transpose(inverse(model))) * vNormal;
}
Fragment shader:
#version 330 core
in vec3 Color;
in vec3 Normal;
in vec3 Position;
in vec2 TexCoord;
in float Type;
out vec4 color;
uniform samplerCube skyboxTexture;
uniform sampler2D faceTexture;
uniform sampler2D centerTexture;
void main()
{
color=vec4(0.0,0.0,0.0,1.0);
if(Type==0.0)
{
vec3 I = normalize(Position);
vec3 R = reflect(I, normalize(Normal));
if(texture(faceTexture, TexCoord)==vec4(1.0,1.0,1.0,1.0))
color=mix(texture(skyboxTexture, R),vec4(1.0,1.0,1.0,1.0),0.3);
}
else if(Type==1.0)
{
if(texture(centerTexture, TexCoord)==vec4(1.0,1.0,1.0,1.0))
color=vec4(Color,1.0);
}
else if(Type==-1.0)
{
color=vec4(0.0,0.0,0.0,1.0);
}
else if(Type==2.0)
{
if(texture(faceTexture, TexCoord)==vec4(1.0,1.0,1.0,1.0))
color=mix(vec4(Color,1.0),vec4(1.0,1.0,1.0,1.0),0.5);
}
}
/*
Type== 0 ---> blank square(reflects light)
Type== 1 ---> start/finish square
Type==-1 ---> wall square
Type== 2 ---> colored square that was once a black square
*/
In the fragment shader I draw the pixels of a square that has a certain type, so the shader enters only one of the four ifs for each square. The program works fine if I only use the GLSL texture function with the same texture. If I use this function twice with different textures, in two different ifs, my model goes invisible. Why is that happening?
https://postimg.org/image/lximpl0bz/
https://postimg.org/image/5dzvqz2r7/
The red square is of type 1. When I modified the code in the Type==0 if, my model went invisible.
Texture samplers in OpenGL should only be accessed within (at least) dynamically uniform control flow. This basically means that all invocations of a shader must execute the same code path; if they don't, no automatic gradients are available, and mipmapping or anisotropic filtering will fail.
In your program this problem happens exactly when you try to use multiple textures. One solution might be not to use anything that requires gradients. There are a number of other options as well: for example, packing all the textures into a texture atlas and selecting the appropriate UV coordinates in the shader, or drawing each quad separately and providing the type through a uniform variable.
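A third option, which keeps the structure of the original shader, is to perform every texture fetch unconditionally so the fetches stay in uniform control flow, and branch only on the fetched values. A sketch of the rewritten fragment shader body, keeping the question's exact-equality comparisons:

void main()
{
    // all fetches execute in every invocation, so the implicit derivatives
    // (and therefore mipmapping/anisotropic filtering) remain well defined
    vec3 I = normalize(Position);
    vec3 R = reflect(I, normalize(Normal));
    vec4 faceTexel   = texture(faceTexture, TexCoord);
    vec4 centerTexel = texture(centerTexture, TexCoord);
    vec4 skyTexel    = texture(skyboxTexture, R);
    color = vec4(0.0, 0.0, 0.0, 1.0); // wall squares (Type==-1) keep this default
    if (Type == 0.0 && faceTexel == vec4(1.0))
        color = mix(skyTexel, vec4(1.0), 0.3);
    else if (Type == 1.0 && centerTexel == vec4(1.0))
        color = vec4(Color, 1.0);
    else if (Type == 2.0 && faceTexel == vec4(1.0))
        color = mix(vec4(Color, 1.0), vec4(1.0), 0.5);
}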

how to reuse same vertex shader to draw different gl_Position

I'm reading these tutorials about modern OpenGL. In tutorial 5, there is an exercise for drawing an extra triangle besides a cube. What I understand is that I can reuse the same vertex shader for drawing multiple triangles (i.e. the cube's triangles and an extra, independent triangle). My problem is with the vertex shader, which is:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec3 vertexColor;
layout(location = 2) in vec3 vertexTriangle_modelspace;
// Output data ; will be interpolated for each fragment.
out vec3 fragmentColor;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP * vec4(vertexPosition_modelspace,1); // Cube
//gl_Position = MVP * vec4(vertexTriangle_modelspace,1); // Triangle
// The color of each vertex will be interpolated
// to produce the color of each fragment
fragmentColor = vertexColor;
}
It draws with only one gl_Position, the last one assigned. Is it possible to output multiple gl_Positions from one vertex shader?
Vertex shaders don't render triangles. Or cubes. Or whatever. They simply perform operations on a vertex. Whether that vertex is part of a triangle, line strip, cube, Optimus Prime, whatever. It doesn't care.
Vertex shaders take a vertex in and write a vertex out. Objects are made up of multiple vertices. So when you issue a rendering command, each vertex in that command goes to the VS. And the VS writes one vertex for each vertex it receives.
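
Concretely, a vertex shader emits exactly one gl_Position per incoming vertex, so the cube and the extra triangle are drawn with two draw calls. If you want to keep both attribute streams in this one shader, one option is to select the input with a uniform that you flip between the two draw calls; a sketch, where drawTriangle is a hypothetical uniform (not part of the tutorial's code):

#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec3 vertexColor;
layout(location = 2) in vec3 vertexTriangle_modelspace;
out vec3 fragmentColor;
uniform mat4 MVP;
uniform bool drawTriangle; // hypothetical: false for the cube call, true for the triangle call
void main(){
    // pick which attribute feeds gl_Position for this draw call
    vec3 pos = drawTriangle ? vertexTriangle_modelspace : vertexPosition_modelspace;
    gl_Position = MVP * vec4(pos, 1.0);
    fragmentColor = vertexColor;
}

The simpler route, though, is a single position attribute at location 0 and a second draw call with the triangle's vertex buffer bound.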

GLSL - Passing coordinates to fragment shader?

I have the following shaders
VertexShader
#version 330
layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec4 inColour;
layout (location = 2) in vec2 inTex;
layout (location = 3) in vec3 inNorm;
uniform mat4 transformationMatrix;
uniform mat4 projectionMatrix;
out vec4 ShadowCoord;
void main()
{
gl_Position = projectionMatrix * transformationMatrix * vec4(inPosition, 1.0);
ShadowCoord = gl_Position;
}
FragmentShader
#version 330
in vec4 ShadowCoord;
out vec4 FragColor;
uniform sampler2D heightmapTexture;
void main()
{
FragColor = vec4(vec3(clamp(ShadowCoord.x/500,0.0,1.0),clamp(gl_FragCoord.y/500,0.0,1.0),0.0),1.0);
}
What I would expect this to do is draw the scene with green in the top left, yellow in the top right, red in the bottom right, and black in the bottom left. Indeed, this is what happens when I replace ShadowCoord.x with gl_FragCoord.x in my fragment shader. Why, then, do I only get a vertical gradient of green? Evidently something screwy is happening to my ShadowCoord. Anyone know what it is?
Yes, gl_FragCoord is "derived from gl_Position". That does not mean that it is equal to it.
gl_Position is the clip-space position of a particular vertex. gl_FragCoord.xyz contains the window-space position of the fragment. These are not the same spaces. To get from one to the other, the graphics hardware performs a number of transformations.
Those transformations do not happen to other values. That's why the fragment shader doesn't call it gl_Position: it's not that value anymore. It's a new value, computed from the original based on various state (the viewport and depth range) that the user has set.
User-defined vertex parameters are interpolated as they are. Nothing "screwy" is happening with your user-defined interpolated parameter. The system is doing exactly what it should: interpolating the value you passed across the triangle's surface.
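
If the goal is for ShadowCoord to behave like gl_FragCoord, you can apply the missing transformations yourself: divide by w (the perspective divide) and remap from normalized device coordinates to pixels. A sketch, where viewportSize is a hypothetical uniform holding the window dimensions in pixels:

#version 330
in vec4 ShadowCoord;
out vec4 FragColor;
uniform vec2 viewportSize; // hypothetical: window width and height in pixels
void main()
{
    // perspective divide: interpolated clip space -> NDC in [-1,1]
    vec2 ndc = ShadowCoord.xy / ShadowCoord.w;
    // viewport transform: NDC -> window-space pixels, like gl_FragCoord.xy
    vec2 windowPos = (ndc * 0.5 + 0.5) * viewportSize;
    FragColor = vec4(clamp(windowPos.x / 500.0, 0.0, 1.0),
                     clamp(windowPos.y / 500.0, 0.0, 1.0),
                     0.0, 1.0);
}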