OpenGL: Passing random positions to the Vertex Shader

I am starting to learn OpenGL (3.3+), and now I am trying to write an algorithm that draws 10000 points at random positions on the screen.
The problem is that I don't know exactly where to do this. Since the points are random, I can't declare them in a VBO (or can I?), so I was thinking of passing a uniform value to the vertex shader with the varying position (I would loop, changing the uniform value each iteration). That way I would do the operation 10000 times. I would also pass a random color value to the shader.
Here is roughly my thought:
#version 330 core

uniform vec3 random_position;
uniform vec3 random_color;

out vec3 Color;

void main() {
    gl_Position = vec4(random_position, 1.0);
    Color = random_color;
}
This way I would do the calculations outside the shaders and just pass the results through uniforms, but I think a better way would be to do these calculations inside the vertex shader. Would that be right?

The vertex shader is called once for every vertex you pass to the vertex shader stage, and the uniforms are the same for each of these calls. Hence you shouldn't pass the vertices - be they random or not - as uniforms. Uniforms are for global transformations (e.g. a camera rotation, a model matrix, etc.).
Your vertices should be passed in a vertex buffer object. Just generate them randomly in your host application and draw them; they automatically become the in variables of your shader.
You can change the array in every iteration, but it might be a good idea to keep its size constant. For this it is sometimes useful to store the 3D position as a 4-component vector, with the fourth component set to 1 if the vertex is used and 0 otherwise. This way you can simply check whether a vertex should be drawn or not.
Then just clear GL_COLOR_BUFFER_BIT and draw the arrays before updating the screen.
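For instance, a minimal host-side sketch could look like the following. This is C++ against a plain OpenGL 3.3 core context; it assumes a GL loader is already initialized, and names such as pointVAO, pointVBO, createRandomPoints and drawFrame are placeholders for this sketch, not an existing API. Positions and colors are interleaved in one buffer, with position at attribute location 0 and color at location 1.

// Requires an OpenGL 3.3 context and a loader (e.g. glad or GLEW) already set up.
#include <cstdlib>
#include <vector>

GLuint pointVAO = 0, pointVBO = 0;
const int kNumPoints = 10000;

void createRandomPoints()
{
    std::vector<float> data;
    data.reserve(kNumPoints * 6);
    for (int i = 0; i < kNumPoints; ++i) {
        for (int c = 0; c < 3; ++c)                        // x, y, z in clip space [-1, 1]
            data.push_back(rand() / float(RAND_MAX) * 2.0f - 1.0f);
        for (int c = 0; c < 3; ++c)                        // r, g, b in [0, 1]
            data.push_back(rand() / float(RAND_MAX));
    }

    glGenVertexArrays(1, &pointVAO);
    glGenBuffers(1, &pointVBO);
    glBindVertexArray(pointVAO);
    glBindBuffer(GL_ARRAY_BUFFER, pointVBO);
    glBufferData(GL_ARRAY_BUFFER, data.size() * sizeof(float),
                 data.data(), GL_DYNAMIC_DRAW);            // DYNAMIC because we may re-randomize
    glEnableVertexAttribArray(0);                           // position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(1);                           // color
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                          (void*)(3 * sizeof(float)));
}

void drawFrame(GLuint program)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(program);
    glBindVertexArray(pointVAO);
    glDrawArrays(GL_POINTS, 0, kNumPoints);
}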
In your vertex shader just set gl_Position from your in variable (i.e. the vertex position) and pass the color on to the fragment shader - the color is not applied in the vertex shader yet.
In the fragment shader, the color is whatever you write to the output at the end. So just take the variable you passed from the vertex shader and write it to e.g. gl_FragColor.
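A matching vertex/fragment shader pair could look like the sketch below (the attribute locations follow the host-side sketch above; all variable names are arbitrary):

// Vertex shader
#version 330 core

layout(location = 0) in vec3 in_position;   // random position from the VBO
layout(location = 1) in vec3 in_color;      // random color from the VBO

out vec3 Color;

void main() {
    gl_Position = vec4(in_position, 1.0);   // already in clip space, no transform needed
    Color = in_color;
}

// Fragment shader
#version 330 core

in vec3 Color;
out vec4 FragColor;

void main() {
    FragColor = vec4(Color, 1.0);
}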
By the way, if you draw something as GL_POINTS the points come out as little squares. There are lots of tricks to make them actually round; the easiest is probably this simple if in the fragment shader. In the compatibility profile you have to enable point sprites (glEnable(GL_POINT_SPRITE)) for gl_PointCoord to be defined; in the core profile this behavior is always on.
if (dot(gl_PointCoord - vec2(0.5, 0.5), gl_PointCoord - vec2(0.5, 0.5)) > 0.25)
    discard;
I suggest you read up a little on what the fragment and vertex shaders do, what vertices and fragments are, and what their respective in/out/uniform variables represent.
Since programs with full vertex buffer objects, shader programs etc. get quite large, you can also start out with glBegin() and glEnd() to draw vertices directly. However, this should only be a very early starting point for understanding what you are drawing where and how the different shaders affect it.
The lighthouse3d tutorials (http://www.lighthouse3d.com/tutorials/) usually are a good start, though they might be a bit outdated. Also a good reference is the glsl wiki (http://www.opengl.org/wiki/Vertex_Shader) which is up to date in most cases - but it might be a bit technical.
Whether you are working with C++, Java, or another language, the concepts of OpenGL are usually the same, so almost all tutorials will serve you well.

Related

Handling per-primitive normals when there are fewer vertices than primitives in OpenGL 4.5

I've had a bit of trouble coming up with a way to pass the correct normal for each triangle primitive to the fragment shader in OpenGL 4.5 while doing indexed triangle rendering, so that I can use per-triangle normals. (I want to use an IBO.)
My current solution, which works for some models, is to make the first vertex of each primitive the provoking vertex and treat the primitive's normal as the normal of that provoking vertex (adding the flat qualifier to the normal attribute in the shaders, of course).
This should work for most models, but I've realized that it simply doesn't work when a model has more triangle primitives than vertices. The simplest example I can come up with is a triangular bipyramid.
Is there a typical way this is done in industry for OpenGL? Or are models in practice so large that per-vertex normals are easier to implement and look better anyway?
As others mentioned in the comments, "in the industry" one would often duplicate vertices that have discontinuous normals. This is unavoidable when only parts of your geometry are flat shaded and parts are smooth, or there are creases in it.
If your geometry is entirely flat shaded, an alternative is to use gl_PrimitiveID to fetch the per-primitive normal from an SSBO in the fragment shader:
layout(std430, binding = 0) buffer NormalsBuffer {
    vec4 NORMALS[];
};

void main() {
    vec3 normal = NORMALS[gl_PrimitiveID].xyz;
    // ...
}
You can also use unpackSnorm2x16 or similar functions to read normals stored in smaller data types and thus reduce bandwidth, much like with vertex array attributes.
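As a sketch of that idea (the packing layout here is an assumption: each normal stored as a uvec2, with x/y in the first word and z plus padding in the second):

#version 430 core

layout(std430, binding = 0) buffer PackedNormalsBuffer {
    uvec2 PACKED_NORMALS[];
};

out vec4 fragColor;

void main() {
    uvec2 words = PACKED_NORMALS[gl_PrimitiveID];
    vec2 xy = unpackSnorm2x16(words.x);      // normal.x, normal.y
    float z = unpackSnorm2x16(words.y).x;    // normal.z (the second component is padding)
    vec3 normal = normalize(vec3(xy, z));
    // ... shade with 'normal' as before
    fragColor = vec4(normal * 0.5 + 0.5, 1.0);
}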

Calculating surface normals of dynamic mesh (without geometry shader)

I have a mesh whose vertex positions are generated dynamically by the vertex shader. I've been using https://www.khronos.org/opengl/wiki/Calculating_a_Surface_Normal to calculate the surface normal for each primitive in the geometry shader, which seems to work fine.
Unfortunately, I'm planning on switching to an environment where using a geometry shader is not possible. I'm looking for alternative ways to calculate surface normals. I've considered:
Using compute shaders in two passes. One to generate the vertex positions, another (using the generated vertex positions) to calculate the surface normals, and then passing that data into the shader pipeline.
Using ARB_shader_image_load_store (or related) to write the vertex positions to a texture (in the vertex shader), which can then be read from the fragment shader. The fragment shader should be able to safely access the vertex positions (since it will only ever access the vertices used to invoke the fragment), and can then calculate the surface normal per fragment.
I believe both of these methods should work, but I'm wondering if there is a less complicated way of doing this, especially considering that this seems like a fairly common task. I'm also wondering if there are any problems with either of the ideas I've proposed, as I've had little experience with both compute shaders and image_load_store.
See Diffuse light with OpenGL GLSL. If you just want the face normals, you can use the partial derivative functions dFdx and dFdy. A basic fragment shader that calculates the normal vector (N) in the same space as the position:
in vec3 position;

void main()
{
    vec3 dx = dFdx(position);
    vec3 dy = dFdy(position);
    vec3 N = normalize(cross(dx, dy));
    // [...]
}

What values can an OpenGL shader access that are set before rendering?

If I have a simple OpenGL shader that is applied to many cubes at once, setting uniform values for each cube gets slow. I have noticed that things like glColor3f don't slow it down as much (at least from what I have tried), so I am currently using glColor3f as a sort of hack: the shader reads gl_Color and I use it similarly to a uniform for determining which side of the cube is being rendered, for face-independent flat lighting.
I am using display lists, so I used glColor3f because it gets baked into the list; I simply set a different color before every face while building the list. Now I want to set more values (not in the display list this time) before rendering.
Which OpenGL calls set values that can be read in the shader? I need to send 6 ints in the range 0-8 into the shader before rendering, but I could probably shrink that later on.
I recommend using uniform blocks, or uniforms with the data types ivec2, ivec3 or ivec4, which let you set 2, 3 or 4 ints at once.
Apart from this there are some built-in uniforms (in the compatibility profile). Besides the matrices, these are gl_DepthRange, gl_Fog, gl_LightSource[gl_MaxLights], gl_LightModel, gl_FrontLightModelProduct, gl_BackLightModelProduct, gl_FrontLightProduct[gl_MaxLights], gl_BackLightProduct[gl_MaxLights], gl_FrontMaterial, gl_BackMaterial, gl_Point, gl_TextureEnvColor[gl_MaxTextureUnits], gl_ClipPlane[gl_MaxClipPlanes], gl_EyePlaneS[gl_MaxTextureCoords], gl_EyePlaneT[gl_MaxTextureCoords], gl_EyePlaneR[gl_MaxTextureCoords], gl_EyePlaneQ[gl_MaxTextureCoords], gl_ObjectPlaneS[gl_MaxTextureCoords], gl_ObjectPlaneT[gl_MaxTextureCoords], gl_ObjectPlaneR[gl_MaxTextureCoords] and gl_ObjectPlaneQ[gl_MaxTextureCoords].
But the intended way to get global data into shaders is uniform variables. Another option is to encode the information in textures.
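For the six ints in the question, a minimal sketch could look like this (the names are made up for illustration; the host sets the plain uniforms with glUniform3i, or fills a UBO bound to the block):

// Two plain integer vector uniforms, set per draw call:
uniform ivec3 faceValuesLo;   // values for faces 0..2, each in 0-8
uniform ivec3 faceValuesHi;   // values for faces 3..5

// Or a uniform block, which can be shared by several programs:
layout(std140) uniform FaceInfo {
    ivec4 faceValuesA;        // faces 0..3
    ivec2 faceValuesB;        // faces 4..5
};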

Access world-space primitive size in fragment shader

It is essential for my fragment shader to know the world-space size of the primitive it belongs to. The shader is intended to be used solely for rendering rectangles (pairs of triangles).
Naturally, I can compute the width and height on the CPU and pass them as a uniform, but such a shader can be uncomfortable to use in the long run - one has to remember what to compute and how, or go search the documentation. Is there any "automated" way of finding the primitive size?
I have the idea of using a kind of pass-through geometry shader to do this (since it is the only part of the pipeline I know of that has access to the whole primitive), but would that be a good idea?
Is there any "automated" way of finding primitive size?
No, because the concept of "primitive size" depends entirely on you, namely on how your shaders work. As far as OpenGL is concerned, there's not even such a thing as world space. There's just clip space and NDC space, and your vertex shaders take a formless bunch of data, called vertex attributes, and throw it out into clip space.
Since the vertex shader operates on a per-vertex basis and doesn't see the other vertices (unless you pass them in as additional vertex attributes with shifted indices; in that case the output must be declared flat and the computation skipped when gl_VertexID % 3 != 0 for triangles), the only viable shader stages for fully automating this are the geometry and tessellation shaders: do the preparatory calculation there and emit the result as a per-vertex output.
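As a rough sketch of that geometry shader idea (assuming the vertex shader forwards a world-space position named world_pos, and that vertex 0 of each triangle is the rectangle's right-angle corner - both are assumptions of this sketch):

#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 world_pos[];            // world-space position forwarded by the vertex shader
flat out vec2 primitive_size;   // world-space width/height, constant over the primitive

void main() {
    // The two edges leaving vertex 0 give the rectangle's extents.
    vec3 e0 = world_pos[1] - world_pos[0];
    vec3 e1 = world_pos[2] - world_pos[0];
    vec2 size = vec2(length(e0), length(e1));

    for (int i = 0; i < 3; ++i) {
        primitive_size = size;                // written before every EmitVertex
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}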

GLSL shader for each situation

In my game I want to create separate GLSL shaders for each situation. For example, if I had 3 models - a character, a shiny sword and a blurry ghost - I would like to assign renderShader, animationShader and lightingShader to the character, renderShader, lightingShader and specularShader to the shiny sword, and finally renderShader, lightingShader and blurShader to the blurry ghost.
The renderShader should multiply the positions of vertices by the projection, world and other matrices, and its fragment shader should simply apply the texture to the model.
animationShader should transform vertices by the given bone transforms.
lightingShader should do the lighting and specularShader should do the specular lighting.
blurShader should do the blur effect.
Now, first of all, how can I do multiple vertex transforms in different shaders? The animationShader should calculate the animated positions of the vertices, and then renderShader should take those positions and transform them by some matrices.
Secondly, how can I modify the color of fragments in different shaders?
The basic idea is that I want to be able to use different shaders for different situations/effects, and I don't know how to achieve that.
I need to know how I should use these shaders in OpenGL, and how I should write the GLSL so that the shaders complement each other and don't care whether another shader is used or not.
What you're asking for is decidedly non-trivial, and is probably extreme overkill for the relatively limited number of "shader" types you describe.
Doing what you want will require developing what is effectively your own shading language. It may be a highly #defined version of GLSL, but the shaders you write would not be pure GLSL. They would have specialized hooks and be written in ways that code could be expected to flow into other code.
You'll need to have your own way of specifying the inputs and outputs of your language. When you want to connect shaders together, you have to say which shader's outputs go to which shader's inputs. Some inputs can come from actual shader stage inputs, while others come from other shaders. Some outputs written by a shader will be actual shader stage outputs, while others will feed other shaders.
Therefore, a shader that needs an input from another shader must execute after that other shader. Your system will have to work out the dependency graph.
Once you've figured out all of the inputs and outputs for a specific sequence of shaders, you have to take all of those shader text files and compile them into GLSL, as appropriate. Obviously, this is a non-trivial process.
Your shader language might look like this:
INPUT vec4 modelSpacePosition;
OUTPUT vec4 clipSpacePosition;

uniform mat4 modelToClipMatrix;

void main()
{
    clipSpacePosition = modelToClipMatrix * modelSpacePosition;
}
Your "compiler" will need to do textual transformations on this, converting references to modelSpacePosition into an actual vertex shader input or a variable written by another shader, as appropriate. Similarly, if clipSpacePosition is to be written to gl_Position, you will need to convert all uses of clipSpacePosition to gl_Position. Also, you will need to remove the explicit output declaration.
In short, this will be a lot of work.
If you're going to do this, I would strongly urge you to avoid trying to merge the concepts of vertex and fragment shaders. Keep this shader system working within the well-defined shader stages. So your "lightingShader" would need to be either a vertex shader or a fragment shader. If it's a fragment shader, then one of the components in the vertex shader that feeds into it will need to provide a normal in some way, or you'll need a fragment shader component that computes the normal via some mechanism.
Effectively, for every combination of shader stages you'll have to create an individual shader program. To save work and avoid redundancy, use a caching structure so that a program for each requested combination is created only once and reused whenever it is requested again.
You can do something similar with the shader stages themselves. Shader stages cannot (yet) be linked from several compilation units - this is an ongoing effort in OpenGL development, and the separable shaders of OpenGL 4 are a stepping stone there - but you can compile a shader from several source strings. So you'd write the functions for each desired effect into a separate source and combine them at compile time, again using a caching structure to map source module combinations to shader objects.
Update due to comment
Let's say you want to have some modularity. For this we can exploit the fact that glShaderSource accepts multiple source strings and simply concatenates them. You write a number of shader modules, one doing the per-vertex illumination calculations:
uniform vec3 light_positions[N_LIGHT_SOURCES];

out vec3 light_directions[N_LIGHT_SOURCES];
out vec3 light_halfdirections[N_LIGHT_SOURCES];

void illum_calculation()
{
    for (int i = 0; i < N_LIGHT_SOURCES; i++) {
        light_directions[i] = ...;
        light_halfdirections[i] = ...;
    }
}
You put this into illum_calculation.vs.glslmod (the filename and extension are arbitrary). Next you have a small module that does bone animation:
uniform vec4 armature_pose[N_ARMATURE_BONES];
uniform vec3 armature_bones[N_ARMATURE_BONES];

in vec3 vertex_position;

void skeletal_animation()
{
    /* ... */
}
Put this into illum_skeletal_anim.vs.glslmod. Then you have some common header:
#version 330
uniform ...;
in ...;
and a common tail that contains the main function, which invokes the different modules:
void main() {
    skeletal_animation();
    illum_calculation();
}
and so on. Now you can load all those modules, in the right order, into a single shader stage; you can do the same for every shader stage. The fragment shader is special, since it can write to several framebuffer targets at the same time (in sufficiently recent OpenGL versions). Technically you can pass a lot of varyings between the stages, so you could even pass a separate set of varyings for each framebuffer target; however, the geometry and the transformed vertex positions are common to all of them.
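On the host side, assembling one stage from such modules could look roughly like this (a C++ sketch; loadFile is a hypothetical helper that returns a file's contents as a std::string, and the file names follow the examples above):

std::string header   = loadFile("common_header.vs.glslmod");
std::string skeletal = loadFile("illum_skeletal_anim.vs.glslmod");
std::string illum    = loadFile("illum_calculation.vs.glslmod");
std::string tail     = loadFile("common_tail.vs.glslmod");

const char* sources[] = {
    header.c_str(), skeletal.c_str(), illum.c_str(), tail.c_str()
};

GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 4, sources, nullptr);   // the strings are concatenated in order
glCompileShader(vs);
// check GL_COMPILE_STATUS, attach to a program, link, and cache the result
// keyed by the module combination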
You have to provide different shader programs for each Model you want to render.
You can switch between different shader combinations using the glUseProgram function.
So before rendering your character or shiny sword or whatever you have to initialize the appropriate shader attributes and uniforms.
So it is just a question of how you design your game's code: you need to provide all the uniforms the shader expects, for example light information and texture samplers, and you must enable all the necessary vertex attributes of the shader in order to assign position, color and so on.
These attributes can differ between shaders, and your client-side models can also have different kinds of vertex attribute structures.
That means your model code directly influences which shader is assigned, and depends on it.
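For example, a per-model draw loop could look roughly like this (a sketch; the Model structure and its fields are assumptions, not an existing API):

for (const Model& model : models) {
    glUseProgram(model.program);                         // this model's shader combination
    glUniformMatrix4fv(model.mvpLocation, 1, GL_FALSE,   // the uniforms this program expects
                       model.mvpMatrix);
    glBindVertexArray(model.vao);                        // the vertex attribute setup for this model
    glDrawArrays(GL_TRIANGLES, 0, model.vertexCount);
}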
If you want to share common code between different shader programs, e.g. an illuminateDiffuse function,
you have to factor this function out and provide it to your shaders by simply inserting the string literal that represents the function into your shader code, which is itself nothing more than a string. This way you can achieve a kind of modularity, or include-like behavior, through string manipulation of your shader code.
In any case, the shader compiler will tell you what's wrong.
Best regards