How do you iterate through scene objects when raytracing in GLSL?

I'm using GLSL for raytracing because this is all happening in the browser via WebGL. I pass my object information to the fragment shader via floating-point textures. When walking the texture to read out the objects, I tried a for loop whose end condition was a variable (the object count), but it didn't compile: the compiler wants a constant expression there. I could hard-code a constant, but it is a dynamic scene, so I don't know in advance how many objects there will be.
What's the correct way to find all the objects in the scene?

You could compile your shader so that it includes every object in your scene, with the appropriate intersection test called for each one. Then, whenever the scene changes, regenerate the shader source with the new set of objects and recompile.
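A minimal sketch of that pattern, assuming the objects are spheres packed one per texel into the floating-point texture (the texture layout, uniform names and the vUv varying are illustrative, not from the original post). The host rewrites the NUM_OBJECTS define and recompiles whenever the scene changes:

precision mediump float;

// Rewritten by the host with the current object count before each compile.
#define NUM_OBJECTS 8

uniform sampler2D uObjects;  // RGBA float texture: xyz = sphere center, w = radius
varying vec2 vUv;            // 0..1 coordinates from a full-screen quad

void main() {
    vec3 ro = vec3(0.0, 0.0, 5.0);                     // ray origin
    vec3 rd = normalize(vec3(vUv * 2.0 - 1.0, -1.0));  // ray direction
    float bestT = 1e9;
    // The loop bound is now a compile-time constant, so GLSL ES accepts it.
    for (int i = 0; i < NUM_OBJECTS; ++i) {
        vec4 s = texture2D(uObjects, vec2((float(i) + 0.5) / float(NUM_OBJECTS), 0.5));
        vec3 oc = ro - s.xyz;                          // ray-sphere intersection test
        float b = dot(oc, rd);
        float c = dot(oc, oc) - s.w * s.w;
        float disc = b * b - c;
        if (disc > 0.0) {
            float t = -b - sqrt(disc);
            if (t > 0.0 && t < bestT) bestT = t;       // keep the nearest hit
        }
    }
    gl_FragColor = bestT < 1e9 ? vec4(1.0, 0.0, 0.0, 1.0) : vec4(0.0);
}

If recompiling on every scene change is too heavy, a common alternative is to keep the loop bound at a fixed maximum and break out early once a uniform object count is reached; GLSL ES allows the break even though the bound itself must stay constant.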

Related

OpenGL object manipulating from shader

This may seem like a basic question, but how can you work with/manipulate objects created with the help of shaders in OpenGL?
I constantly need the coordinates of various objects in my host program, so that I can create or manipulate other objects based on them and then send the results back to the vertex/fragment/geometry shaders.
I have my initial vertex coordinates, that I have defined in my main program, but once they reach the geometry shader, the position is computed via:
gl_Position = projection_matrix * view_matrix * vec4(square_point,1);
EmitVertex();
Now, for example, I need to select and move them with the mouse on the screen, but I can't think of an easy way to get their exact coordinates.
I've tried to do the position math in my main program, but I don't get the same coordinates as the ones computed by the geometry shader, and doing everything on the CPU is not practical for the number of objects I have.
I've thought of retrieving the data from the GPU back to the CPU via buffers, but there are so many objects and so many coordinates that it seems prohibitively expensive.
I imagine there is another way to approach this; I may just not know enough about how OpenGL works.
You can use so-called shader storage buffer objects (SSBOs). You declare one in your shader with the buffer qualifier, do the necessary computation into it, and then read the data back on the CPU via glMapBufferRange and memcpy.
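A minimal sketch of the shader side, assuming a points-in/points-out geometry shader like the one in the question (SSBOs need GLSL 4.30+; the block and variable names here are illustrative):

#version 430

layout(points) in;
layout(points, max_vertices = 1) out;

uniform mat4 projection_matrix;
uniform mat4 view_matrix;
in vec3 square_point[];  // assumed pass-through from the vertex shader

// The SSBO the CPU reads back after the draw call.
layout(std430, binding = 0) buffer ComputedPositions {
    vec4 positions[];
};

void main() {
    vec4 p = projection_matrix * view_matrix * vec4(square_point[0], 1.0);
    positions[gl_PrimitiveIDIn] = p;  // record the computed position for the host
    gl_Position = p;
    EmitVertex();
    EndPrimitive();
}

On the host side, after the draw call you would issue glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT), then map the buffer with glMapBufferRange and memcpy the positions out, as described above.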

Design for good relation between shader program and VAO

I am learning OpenGL and trying to get a grasp of best practices. I am working on a simple demonstration project in C++ which aims to be a bit more generic and better structured (not everything just put in main()) than most of the tutorials I have seen on the web. I want to use the modern OpenGL way, which means VAOs and shaders. My biggest concern is the relation between VAOs and shader programs. Maybe I am missing something here.
I am now thinking about the best design. Consider the following scenario:
there is a scene which contains multiple objects
each object has its individual size, position and rotation (i.e. transform matrix)
each object has a certain basic shape (e.g. box, ball), there can be multiple objects of the same shape
there can be multiple shader programs (e.g. one with plain interpolated RGBA colors, another with textures)
This leads me to the basic three components of my design:
ShaderProgram class - each instance contains a vertex shader and fragment shader (initialized from given strings)
Object class - has transform matrix and reference to a shape instance
Shape base class - and derived classes e.g. BoxShape, SphereShape; each derived class knows how to generate its mesh, turn it into a buffer and map it to vertex attributes, in other words it initializes its own VAO; it also knows which glDraw... function(s) to use to render itself
When a scene is being rendered, I will call glUseProgram(rgbaShaderProgram). Then I will go through all objects which can be rendered using this program and render them. Then I will switch to glUseProgram(textureShaderProgram) and go through all textured objects.
When rendering an individual object:
1) I will call glUniformMatrix4fv() to set the individual transformation matrix (of course including projection matrix etc.)
2) then I will ask the shape associated with the object to render itself
3) when the shape is rendered, it will bind its VAO, call its specific glDraw...() function and then unbind the VAO
In my design I wanted to decouple Shape from ShaderProgram, since in theory they are interchangeable. But some dependency still seems to be there. When generating vertices in a specific ...Shape class and filling buffers for them, I already need to know that, for example, I must generate texture coordinates rather than RGBA components for each vertex. And when setting vertex attribute pointers with glVertexAttribPointer, I already must know that the shader program will use, for example, floats rather than integers (otherwise I would have to call glVertexAttribIPointer). I also need to know which attribute will be at which location in the shader program. In other words, I am mixing the responsibility for pure shape geometry with prior knowledge about how it will be rendered. And as a consequence, I cannot render a shape with a shader program that is not compatible with it.
So finally my question: how can I improve my design to achieve the goal (render the scene) while keeping versatility (interchangeability of shaders and shapes), forcing correct usage (not allowing wrong shapes to be mixed with incompatible shaders), getting the best performance possible (avoiding unnecessary program or context switches) and maintaining good design principles (one class, one responsibility)?
What I would do is prepare ShaderProgram templates, which I adapt to the VAO attributes.
After all, shader programs are just text initially. You might write the main functions for your vertex and fragment programs, and attach a generated "header" to them depending on the bound VAO. It helps to use standardized names for the variables inside the shaders, such as InPos, InNor, InTan, InTex.
That way, you can scan for attributes that the shader uses but that are missing from the VAO, and simply append them to the header as const values with some default setting.
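For example, the generated header for a VAO that provides positions and normals but no texture coordinates might look like this (a sketch; the locations, defaults and the uMvp uniform are illustrative):

#version 330 core

// Attributes present in the bound VAO get real declarations:
layout(location = 0) in vec3 InPos;
layout(location = 1) in vec3 InNor;

// Attributes the template uses but this VAO lacks are injected
// as constants with a default value instead:
const vec2 InTex = vec2(0.0, 0.0);

// --- generated header ends; the unchanged template main() follows ---
uniform mat4 uMvp;
out vec2 TexCoord;

void main() {
    TexCoord = InTex;  // compiles whether InTex is an attribute or a constant
    gl_Position = uMvp * vec4(InPos, 1.0);
}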
I do this via a ShaderManager, with a RequestShader(template, VAO) kind of function which adapts the template to the VAO and caches the compiled shader for later use.
If another VAO has the same attributes and requires the same template, the precompiled cached version is returned, avoiding a repeat of the adaptation process.

Opengl glsl multiple shaders freeze screen

I'm writing a little OpenGL program with GLSL.
Now I have two objects which I need to draw, each with its own shader.
Normally, I think I should do something like this in my draw() method:
void draw() {
    shaderObjektOne.bind();
    glBegin(xxx);
    // draw object one
    ...
    glEnd();

    shaderObjektTwo.bind();
    glBegin(xxx);
    // draw object two
    ...
    glEnd();
}
If I do it this way, my screen freezes.
Binding a shader for only one object works without problems.
I've been looking around but I couldn't find a real explanation why this error occurs.
Is it because a render target can only be rendered with one shader?
How can I avoid a giant shader file or having multiple render targets?
Thanks.
EDIT:
I want a separately compiled shader program for each object, bound right before I draw that object's vertices. I want to avoid one big shader in which I have to set parameters to choose per-object functionality. I use GLUT, and currently all drawing is done before glutSwapBuffers().
'Freezing' means that something is still visible on the screen (the last object drawn with the last bound shader) but my input no longer works; I cannot move the camera in the world, even though the program is still running normally (verified with a debugger).
Got it.
It was a problem with my program design. I had accidentally created a copy of the shader I wanted to bind.
Every time I tried to bind the shader, the copy was bound instead.
Thanks for your help guys :)

How to apply a vertex shader to all vertices in a scene in OpenGL?

I'm working on a small engine in OpenTK right now, and I've got shaders working so far. I wonder, though, how it is possible to apply a shader to an entire scene. I've seen this done in Minecraft, for example, where someone created a shader that warped the entire scene. But since every object is rendered with its own shader active, how would I achieve this?
You seem to be referring to a technique called post-processing. It works like this: first you render the entire scene to a texture using the shaders you already have; then you render that texture to the screen with a fragment shader that applies effects like motion blur, warping or depth of field.
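The second pass is then just a full-screen quad that samples this texture; a minimal warp shader might look like this (a sketch; the uniform name and the ripple effect itself are illustrative):

#version 330 core

uniform sampler2D uScene;  // the texture the whole scene was rendered into
in vec2 vUv;               // 0..1 coordinates from the full-screen quad
out vec4 fragColor;

void main() {
    // Illustrative effect: a horizontal sine ripple across the image.
    vec2 warped = vUv + vec2(0.01 * sin(vUv.y * 40.0), 0.0);
    fragColor = texture(uScene, warped);
}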
"But since every object is rendered with its own shader active"
That's not how OpenGL works. In fact there's no such thing as "models" (what you probably mean by "object") in OpenGL. OpenGL draws primitives (points, lines and triangles) one at a time. Furthermore there's no hard association between a set of primitives and the shaders being used.
It's trivial to bind a single shader program at the beginning of a batch so that every primitive of that batch is processed by this shader. If the batch consists of the whole scene, then the whole scene uses that shader.
AFAIK, you can only bind one vertex shader at a time.
What you may want to try is to render to a texture first, then re-render that texture onto the screen while applying changes to it (warping it, for example). You can also extract the depth buffer and use it if the effect you want is more complex.
If you bind the shader program you want before the render loop, it will affect all draw calls until you bind a different program (or unbind with glUseProgram(0)).

Render different OpenGL point sprite at each vertex

Basic question: with OpenGL, can a shader program be made to use a single glDrawArrays call (with GL_POINTS) to draw a different point sprite at each vertex?
More info:
I have an OpenGL desktop program (using SharpGL in a WPF application) that can display thousands of 2D tracks. In part, a track is a series of time-stamped points, and a portion of the points is displayed depending on a time span around a changeable CurrentTime. Other properties of each point determine the point's color. Each track binds its vertex array, color array and time-stamp array and calls glDrawArrays to render its points. A shader program does the rest.
I've recently started using point sprites to give different types of tracks different symbols for their points. I'd like to give different points on a track different symbols depending on other attributes of each point. I'd like to do this with a single glDrawArrays call for each track. So the thought is that an array of sprites would do the trick (applying a different sprite to each vertex). Is this possible? Am I missing a better solution?
Sure. OpenGL automatically generates texture coordinates (gl_PointCoord) running from 0.0 to 1.0 across each sprite, but there's no reason your fragment shader can't remap them. I'd put all the sprite images into one large texture atlas, and pass the attribute(s) that determine which image to use into the fragment shader.
In OpenGL 3.3 or better, there is a built-in gl_PrimitiveID variable you can use within the fragment shader, which is a counter incremented for each point, line, or triangle drawn. (It's a bit more complicated with geometry or tessellation shaders.) This might also be useful.
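A sketch of the fragment shader for the atlas approach (the names are illustrative): a per-vertex attribute selects the sprite, though gl_PrimitiveID could serve the same role as described above.

#version 330 core

uniform sampler2D uAtlas;    // all sprite images laid out in one horizontal strip
uniform float uSpriteCount;  // number of images in the strip
flat in float vSpriteIndex;  // per-point attribute forwarded by the vertex shader

out vec4 fragColor;

void main() {
    // gl_PointCoord runs 0..1 across the sprite; squeeze x into the chosen cell.
    vec2 uv = vec2((vSpriteIndex + gl_PointCoord.x) / uSpriteCount,
                   gl_PointCoord.y);
    fragColor = texture(uAtlas, uv);
    if (fragColor.a < 0.01) discard;  // optional: cut transparent sprite edges
}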
Hope this helps.