Map texture to single points rather than quads? - opengl

Using OpenGL, is it possible to apply a fragment shader to a specified region around a single vertex, i.e. a point drawn with GL_POINTS, rather than creating an array of quads and mapping a texture coordinate of a "duck" to each vertex?
I guess it would be more efficient as it would only require one vertex to be sent to the GPU per duck displayed, rather than the four vertices currently needed.
I have achieved a similar effect by using a geometry shader to build a quad around each vertex. Still, I am wondering if it is possible to achieve the same result without using a geometry shader.
I played around with gl_PointSize, but it is a limited feature, and I don't think it is the proper way to do it.
To summarize, I would like to know if OpenGL would allow filling a region around a single vertex using the fragment shader rather than explicitly creating a quad. Is that possible?
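For reference, a minimal sketch of the gl_PointSize route (which may well be the "limited" approach mentioned above), assuming the duck texture is bound to a sampler called uDuckTex; gl_PointCoord gives each fragment its position inside the point's screen-space square.

    // vertex shader: one vertex per duck
    #version 330 core
    layout(location = 0) in vec2 aPos;
    uniform mat4 uViewProj;
    void main() {
        gl_Position = uViewProj * vec4(aPos, 0.0, 1.0);
        gl_PointSize = 64.0;   // region size in pixels; needs glEnable(GL_PROGRAM_POINT_SIZE)
    }

    // fragment shader: gl_PointCoord runs from 0 to 1 across the point's square
    #version 330 core
    uniform sampler2D uDuckTex;   // assumed sampler name
    out vec4 fragColor;
    void main() {
        fragColor = texture(uDuckTex, gl_PointCoord);
    }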

Related

Efficiently transforming many different models in modern OpenGL

Suppose I want to render many different models, each with a different transformation matrix I want to be applied to their vertices. As far as I understand, the naive approach is to specify a matrix uniform in the vertex shader, the value of which is updated for each mesh during rendering.
It's obvious to me that this is a bad idea, due to the expense of many uniform updates and draw calls. So, what is the most efficient way to achieve this in modern OpenGL?
I've genuinely tried to find a straight, clear answer to this question. Most answers I find vaguely mention UBOs, or instance drawing (which afaik won't work unless you are drawing instances of the same mesh many times, which is not my goal).
With OpenGL 4.6 or with ARB_shader_draw_parameters, each draw in a multi-draw rendering command (functions of the form glMultiDraw*) is assigned a draw index from 0 to the number of draw calls specified by that function. This index is provided to the Vertex Shader via the gl_DrawID input value. You can then use this index to fetch a matrix from any number of constructs: UBOs, SSBOs, buffer textures, etc.
This works for multi-draw indirect rendering as well. So in theory, you can have a compute shader operation generate a bunch of rendering commands, then render your entire scene with a single draw call (assuming that all of your objects live in the same vertex buffers and can use the same shader and other state). Or at the very least, a large portion of the scene.
Furthermore, this index is considered dynamically uniform, so you can also use it (or values derived from it and other dynamically uniform values) to index into arrays of textures, fetch a texture from an array of bindless textures, or the like.
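As a rough sketch of that idea (the SSBO layout and names below are illustrative, not prescribed), the vertex shader can simply index a buffer of matrices with gl_DrawID:

    #version 460 core
    layout(location = 0) in vec3 aPos;

    // one model matrix per sub-draw of the glMultiDraw* command
    layout(std430, binding = 0) readonly buffer ModelMatrices {
        mat4 uModel[];
    };

    uniform mat4 uViewProj;

    void main() {
        gl_Position = uViewProj * uModel[gl_DrawID] * vec4(aPos, 1.0);
    }

On the CPU side the matrices are uploaded once into the SSBO and the whole set of meshes is submitted with a single glMultiDrawElementsIndirect (or similar) call.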

Can I use different shader programs for the same rendering job?

EDIT:
My question was unclear at first, so I'll try to rephrase it:
How do I use different shaders to do different rendering operations on the same mesh polygons? For example, I want to add lighting using one shader and add fog using another shader. I need to use the color interpolated from the first shader in the calculation of the second shader, but I don't know how to do that if I can't (or rather am not supposed to) pass the color buffer around between shaders.
Also (and that was where my question started), I need the same world-view-projection calculations for both shaders, so am I supposed to calculate them in every shader separately? Am I supposed to use one big shader for all my rendering operations?
Original question:
Say I have two different shader programs. The first one calculates the vertex positions in the vertex shader and does some operations in the fragment shader.
Let's say I want to use the fragment shader to do different calculations, but I still want to use the same vertex positions calculated by the first vertex shader. Do I have to calculate the vertex positions again or is there a way to share state between different shader programs?
You have several options:
multi pass
This approach usually renders the geometry into depth and "color" buffers first, and then later passes use those as input textures while rendering a single rectangle covering the whole screen/view. Deferred shading is an example of this, but there are many other multi-pass effects that have nothing to do with deferred shading. Here is an example of multi pass:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js?
In the first pass the planets, stars and other objects are rendered; in the second the atmosphere is added.
You can combine the passes either by blending or by direct rendering. Direct rendering requires that you render each pass to a texture and compose them in the last pass. Blending instead changes the color of the output in each pass.
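A sketch of what a later-pass fragment shader might look like in the render-to-texture variant (texture and variable names are assumptions): it samples the first pass's output over a full-screen rectangle and layers its own effect on top.

    #version 330 core
    in vec2 vUV;                      // texture coordinates of the full-screen rectangle
    uniform sampler2D uColorPass0;    // color output of the first pass
    uniform sampler2D uDepthPass0;    // depth output of the first pass
    out vec4 fragColor;
    void main() {
        vec3 base   = texture(uColorPass0, vUV).rgb;
        float depth = texture(uDepthPass0, vUV).r;
        // placeholder effect: add a depth-based haze on top of the first pass
        vec3 result = mix(base, vec3(0.5, 0.6, 1.0), 0.25 * (1.0 - depth));
        fragColor = vec4(result, 1.0);
    }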
single pass
What you describe sounds more like you should encode the different shaders as functions of a single fragment shader... Yes, you can combine several shaders into a single one if they are compatible, and combine their results into the final output color.
A big shader is a performance hit, but I think it would still be faster than multiple passes doing the same thing.
Take a look at this example:
Normal mapping gone horribly wrong
This one computes environmental reflection, lighting and geometry color, and combines them into a single output color.
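For the lighting-plus-fog case from the question, a minimal single-pass sketch might look like this (the uniform names and the fog model are assumptions, not a fixed recipe):

    #version 330 core
    in vec3 vNormal;
    in vec3 vWorldPos;
    uniform vec3 uLightDir;          // direction the light shines in
    uniform vec3 uCamPos;
    uniform vec3 uFogColor;
    uniform float uFogDensity;
    out vec4 fragColor;

    vec3 applyLighting(vec3 baseColor) {
        float diff = max(dot(normalize(vNormal), normalize(-uLightDir)), 0.0);
        return baseColor * (0.2 + 0.8 * diff);               // ambient + diffuse
    }

    vec3 applyFog(vec3 litColor) {
        float dist = length(vWorldPos - uCamPos);
        float f = clamp(exp(-uFogDensity * dist), 0.0, 1.0); // exponential fog factor
        return mix(uFogColor, litColor, f);
    }

    void main() {
        vec3 color = applyLighting(vec3(1.0));   // the "first shader" result
        fragColor = vec4(applyFog(color), 1.0);  // the "second shader" uses it directly
    }

The world-view-projection calculation then lives in one shared vertex shader instead of being duplicated across programs.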
Exotic shaders
There are also exotic shaders that work around the pipeline's limitations, like this one:
Reflection and refraction impossible without recursive ray tracing?
These are used for things believed to be impossible to implement in the GL/GLSL pipeline. Anyway, if the limitations are too restrictive you can still use a compute shader...

Use triangle normals in OpenGL to get vertex normals

I have a list of vertices and their arrangement into triangles as well as the per-triangle normalized normal vectors.
Ideally, I'd like to do as little work as possible in somehow converting the (triangle,normal) pairs into (vertex,vertex_normal) pairs that I can stick into my VAO. Is there a way for OpenGL to deal with the face normals directly? Or do I have to keep track of each face a given vertex is involved in (which more or less happens already when I calculate the index buffers) and then manually calculate the averaged normal at the vertex?
Also, is there a way to skip per-vertex normal calculation altogether and just find a way to inform the fragment shader of the face-normal directly?
Edit: I'm using something that should be portable to ES devices, so the fixed-function stuff is unusable.
I can't necessarily speak as to the latest full-fat OpenGL specifications, but certainly in ES you're going to have to do the work yourself.
Although the normal was modal state under the old fixed pipeline, like just about everything else it was attached to each vertex. If you opted for the flat shading model then GL would use the colour at the first vertex on the face across the entire thing rather than interpolating it. There's no way to recreate that behaviour under ES.
Attributes are per vertex and uniforms are, at best, per batch. In ES there's no way to specify per-triangle properties, and there's no stage of the rendering pipeline where you have an overview of the whole geometry and could distribute them to each vertex individually. Each vertex is processed separately, varyings are interpolated, and then each fragment is processed separately.

glDrawElements and flat shading

Is it possible to achieve flat shading in OpenGL when using glDrawElements to draw objects, and if so how? The ideal way would be to calculate a normal for each triangle only once, if possible.
The solution must only use the programmable pipeline (core profile).
There are indeed ways around this without duplicating vertices, with some limitations for each one (at least those I can think of with my limited OpenGL experience).
I can see two solutions that would give you a constant value for the normal over each triangle:
Declare the input as flat in your shader and pick which vertex provides its value via glProvokingVertex (see the sketch after this list); this is fast, but you'll get one vertex's normal as the normal for the whole triangle, which might not look right.
Use a geometry shader taking triangles and outputting triangles to calculate a single normal per face. This is the most flexible way, allowing you to control the resulting effect, but it might be slow (and requires geometry-shader-capable hardware, obviously).
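A minimal sketch of the first option (variable names are my own): the normal is passed through as a flat varying, and glProvokingVertex on the application side selects which of the triangle's vertices supplies it.

    // vertex shader
    #version 330 core
    layout(location = 0) in vec3 aPos;
    layout(location = 1) in vec3 aNormal;
    uniform mat4 uMVP;
    flat out vec3 vNormal;            // no interpolation: one value per triangle
    void main() {
        vNormal = aNormal;
        gl_Position = uMVP * vec4(aPos, 1.0);
    }

    // fragment shader
    #version 330 core
    flat in vec3 vNormal;             // the provoking vertex's normal, constant over the face
    out vec4 fragColor;
    void main() {
        fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0); // visualize the flat normal
    }

    // application side: glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);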
Sadly, the only way to do that is to duplicate all your vertices, since attributes are per-vertex and not per-triangle.
When you think about it, this is what we did in immediate mode...

OpenGL: Using shaders to create vertex lighting by using pre-calculated colormap?

First of all, I have very little knowledge of what shaders can do, and I am very interested in doing vertex lighting. I am attempting to use a 3D colormap which would be used to calculate the vertex color at that position of the world, and also interpolate the color by using the nearby colors from the colormap.
I can't use typical OpenGL lighting because it's probably too slow and there are a lot of lights I need to render. I am going to "render" the lights into the colormap first, and then I could either manually map every vertex drawn to the corresponding color from the colormap...
...or I could somehow automate this process, so I wouldn't have to change the color values of the vertices myself; could a shader perhaps do this for me?
The question is: is this possible, and if it is, what do I need to know to make it possible?
Edit: Note that I also need to update the lightmap efficiently, without caring about the size of the lightmap, so the update should touch only the specific part of the lightmap I want to change.
It almost sounds like what you want to do is render the lights to your color map, then use your color map as a texture, but instead of decal mode set it to modulate mode, so it's multiplied with the existing color instead of just replacing it.
That is different in one way though: instead of just affecting the vertexes, it'll map to the individual fragments (pixels, in essence).
Edit: What I had in mind wasn't a 3D texture -- it was a cube map. Basically, create a virtual cube surrounding everything in your "world". Create a 2D texture for each face of that cube. Render your coloring to the cube map. Then, to color a vertex you (virtually) extend a ray outward from the center, through the vertex, to the cube. The pixel you hit on the cube map gives you the color of lighting for that vertex.
Updating should be relatively efficient -- you have normal 2D textures for the top, bottom, front, etc., and you update them as needed.
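A sketch of how the lookup might be done in a vertex shader (the cube map, centre uniform and variable names are assumptions): the lookup direction is simply the ray from the cube's centre through the vertex.

    #version 330 core
    layout(location = 0) in vec3 aPos;
    uniform mat4 uModel;
    uniform mat4 uViewProj;
    uniform samplerCube uLightCube;   // the six 2D faces with the "rendered" lighting
    uniform vec3 uCubeCenter;         // centre of the virtual cube around the world
    out vec3 vLightColor;
    void main() {
        vec4 worldPos = uModel * vec4(aPos, 1.0);
        vec3 dir = worldPos.xyz - uCubeCenter;      // ray from the centre through the vertex
        vLightColor = texture(uLightCube, dir).rgb; // cube-map lookup picks the face and texel
        gl_Position = uViewProj * worldPos;
    }

The fragment shader can then multiply the interpolated vLightColor with the surface colour, which is the modulate idea described above.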
If you can't use the fixed-function pipeline functionality, the best way to do per-vertex lighting is to do all the lighting calculations per vertex in the vertex shader; when you then pass the result on to the fragment shader it will be correctly interpolated across the face.
Another way to deal with performance issues when using a lot of light sources is deferred rendering, as it only does lighting calculations on the geometry that is actually visible.
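A sketch of that per-vertex (Gouraud-style) lighting in the vertex shader; the light count, attenuation and uniform names are assumptions.

    #version 330 core
    layout(location = 0) in vec3 aPos;
    layout(location = 1) in vec3 aNormal;
    uniform mat4 uModel;
    uniform mat4 uViewProj;
    uniform mat3 uNormalMatrix;
    const int NUM_LIGHTS = 8;                    // assumed upper bound
    uniform vec3 uLightPos[NUM_LIGHTS];
    uniform vec3 uLightColor[NUM_LIGHTS];
    out vec3 vColor;                             // interpolated across the face
    void main() {
        vec4 worldPos = uModel * vec4(aPos, 1.0);
        vec3 n = normalize(uNormalMatrix * aNormal);
        vec3 lit = vec3(0.1);                    // small ambient term
        for (int i = 0; i < NUM_LIGHTS; ++i) {
            vec3 toLight = uLightPos[i] - worldPos.xyz;
            float atten = 1.0 / (1.0 + dot(toLight, toLight));  // simple distance falloff
            lit += uLightColor[i] * max(dot(n, normalize(toLight)), 0.0) * atten;
        }
        vColor = lit;
        gl_Position = uViewProj * worldPos;
    }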
That is possible, but it will not be effective on current hardware.
You want to render light volumes into a 3D texture. The rasterizer works on a 2D surface, so your volumes have to be split along one of the axes. The split can be done in one of the following ways:
Different draw calls for each split
Instanced draw, with layer selection based on gl_InstanceID (will require a geometry shader; sketched below)
Branching in the geometry shader directly from a single draw call
In order to implement it, I would suggest reading the GL 3 specification and examples. It's not going to be easy, nor will the result be fast enough for complex scenes.
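For the instanced variant (names here are illustrative), each instance is routed to one slice of the 3D texture by writing gl_Layer in the geometry shader; the 3D texture is attached as a layered framebuffer attachment with glFramebufferTexture.

    // vertex shader: forward the instance ID
    #version 330 core
    layout(location = 0) in vec3 aPos;
    flat out int vInstanceID;
    void main() {
        vInstanceID = gl_InstanceID;
        gl_Position = vec4(aPos, 1.0);
    }

    // geometry shader: route the triangle to the 3D-texture slice for this instance
    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;
    flat in int vInstanceID[];
    void main() {
        gl_Layer = vInstanceID[0];   // selects the layer (slice) of the layered attachment
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }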