I have many positions and directions stored in 1D textures on the GPU. I want to use those as render sources in a GLSL geometry shader. To do this, I need to create corresponding view matrices from those textures.
My first thought is to take a detour to the CPU, read the textures into memory and create a bunch of view matrices from there with something like glm::lookAt(), then send the matrices as uniform variables to the shader.
My question is whether it is possible to skip this detour and instead create the view matrices directly in the GLSL geometry shader. Also, is this feasible performance-wise?
Nobody says (or nobody should say) that your view matrix has to come from the CPU through a uniform. You can just generate the view matrix from the vectors in your texture right inside the shader. Maybe the implementation of the good old gluLookAt is of help to you there.
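For instance, a gluLookAt-style construction right in the shader could look roughly like this. This is only a minimal sketch: the sampler names, the texel coordinate and the fixed world up vector are assumptions, not anything given in the question.

uniform sampler1D uPositions;   // hypothetical 1D texture holding the positions
uniform sampler1D uDirections;  // hypothetical 1D texture holding the view directions

mat4 lookAtFromTexture(float coord)
{
    vec3 eye = texture(uPositions, coord).xyz;
    vec3 dir = normalize(texture(uDirections, coord).xyz);
    vec3 up  = vec3(0.0, 1.0, 0.0);      // assumed world up

    vec3 z = -dir;                       // camera looks down negative z
    vec3 x = normalize(cross(up, z));
    vec3 y = cross(z, x);

    // Rotation goes in the upper 3x3, translation is -R * eye, as in gluLookAt.
    return mat4(vec4(x.x, y.x, z.x, 0.0),
                vec4(x.y, y.y, z.y, 0.0),
                vec4(x.z, y.z, z.z, 0.0),
                vec4(-dot(x, eye), -dot(y, eye), -dot(z, eye), 1.0));
}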
Whether this approach is a good idea performance-wise is another question, but if the texture is quite large or changes frequently, it might well be better than reading the data back to the CPU.
But maybe you can pre-generate the matrices into another texture/buffer using a simple GPGPU-like shader that does nothing more than generate a matrix for each position/direction in the textures and store it in another texture (using FBOs) or buffer (using transform feedback). This way you don't need a roundtrip to the CPU and you don't need to generate the matrices anew for each vertex/primitive/whatever. On the other hand this will increase the required memory, as a 4x4 matrix is a bit heavier than a position and a direction.
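A rough sketch of what such a baking pass could look like as a fragment shader, rendering into a float texture that is four texels wide per matrix; the output layout and uCount are assumptions, and lookAtFromTexture is the sketch above.

uniform float uCount;   // number of matrices to bake, assumed to match the input textures

void main()
{
    float texel  = gl_FragCoord.x - 0.5;
    float index  = floor(texel / 4.0);     // which matrix this texel belongs to
    int   column = int(mod(texel, 4.0));   // which of its four columns
    mat4  view   = lookAtFromTexture((index + 0.5) / uCount);
    gl_FragColor = view[column];           // one matrix column per output texel
}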
Sure. Read the texture, and build the matrices from the values...
vec4 x = texture(YourSampler, WhateverCoords1);
vec4 y = texture(YourSampler, WhateverCoords2);
vec4 z = texture(YourSampler, WhateverCoords3);
vec4 w = texture(YourSampler, WhateverCoords4);
mat4 matrix = mat4(x,y,z,w);
Any problem with this? Or did I miss something?
The view matrix is a uniform, and uniforms don't change in the middle of a render batch, nor can they be written to from a shader (directly). As such, I don't see how generating it could be possible, at least not directly.
Also note that the geometry shader runs after the vertex shader, where vertices have typically already been transformed with the modelview matrix, so it does not make much sense (at least during the same pass) to re-generate that matrix or part of it.
You could of course still do some hack with transform feedback, writing some values to a buffer, and either copy/bind this as a uniform buffer later or just read the values from within a shader and multiply them as a matrix. That would at least avoid a roundtrip to the CPU -- the question is whether such an approach makes sense and whether you really want to do such an obscure thing. It is hard to tell what's best without knowing exactly what you want to achieve, but quite probably just transforming things in the vertex shader (read those textures, build a matrix, multiply) will be easier and work better.
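To illustrate that last option, a minimal vertex-shader sketch; the uniform names and the idea of selecting the view via a per-draw texel coordinate are assumptions, and lookAtFromTexture is the sketch from the first answer.

uniform mat4  uProjection;
uniform float uInstanceCoord;   // texel coordinate selecting which view to build
in vec3 aPosition;

void main()
{
    mat4 view   = lookAtFromTexture(uInstanceCoord);
    gl_Position = uProjection * view * vec4(aPosition, 1.0);
}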
I am learning GLSL and, in general, some OpenGL, and I am having some trouble with vertex movement and management.
I am good with camera rotations and translation but now I need to move a few vertices and have them stay in their new positions.
What I would like to do is move them using the vertex shader, but also not keep track of their new positions through matrices (as I need to move them around independently, and it would be very pricey in terms of memory and computing power to store that many matrices).
If there were a way to change their position values in the VBO directly from the vertex shader, that would be optimal.
Is there a way to do that? What other ways do you suggest?
Thanks in advance.
PS I am using GLSL version 1.30
While it's possible to write values from a shader into a buffer and later read them back on the CPU/client side (e.g., by using glReadPixels()), I don't think that is your case.
You can move a group of vertices, all with the same movement, with a single matrix. Why don't you do it on the CPU and store the results, updating their GL buffer when needed? (The VAO remains unchanged if you just update the buffer.) Once they are moved, you don't need that matrix anymore, right? Or, if you want to undo the movement, then yes, you also need to store the matrix.
It seems that transform feedback is exactly what you need.
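A minimal sketch of the shader side, assuming GLSL 1.30 / OpenGL 3.0 transform feedback; the names and the simple constant offset are placeholders. On the CPU side you would name newPosition in glTransformFeedbackVaryings() before linking and wrap the draw in glBeginTransformFeedback()/glEndTransformFeedback().

in vec3 position;
uniform vec3 offset;     // the movement to apply in this pass (assumed)
out vec3 newPosition;    // captured into the transform feedback buffer

void main()
{
    newPosition = position + offset;
    gl_Position = vec4(newPosition, 1.0);  // rasterization can be skipped with GL_RASTERIZER_DISCARD
}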
What I would like to do is move them using the vertex shader but also not keep track of their new positions trough matrices
If I understand you correctly, what you want is to send some vertices to the GPU and then have the vertex shader move them. You can't, because a vertex shader is only able to read from the vertex buffer; it isn't able to write back to it.
it would be very pricey in terms of memory and computing power to store that many matrices.
Considering:
I am good with camera rotations and translation
Then it wouldn't be expensive at all. Since you already have a view and projection matrix for the camera and viewport, having a model matrix contain the translation, rotation and scaling of each object is nowhere near a bottleneck.
In the vertex shader you'd simply have:
uniform mat4 mvp; // model view projection matrix
...
gl_Position = mvp * vec4(position, 1.0);
On the CPU side of things you'd do:
mvp = projection * view * model;
GLint mvpLocation = glGetUniformLocation(shaderGeometryPass, "mvp");
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, (const GLfloat*)&mvp);
If this gives you performance issues then the problem lies elsewhere.
If you really want to "save" whichever changes you make on the GPU side of things, then you'd have to look into Shader Storage Buffer Objects and/or Transform Feedback.
I've read about how to use UBOs in OpenGL from this tutorial, in which it is suggested that both the projection and camera position matrix be stored in a single UBO, shared between shaders, like this:
layout(std140) uniform global_transforms
{
mat4 camera_orientation;
mat4 camera_projection;
};
But it seems to me to make more sense to store the product of the camera projection * position matrix multiplication in the UBO, so that the matrix multiplication doesn't have to occur in every vertex shader invocation. You'd be sending the same amount of data to the buffer each step, but trading potentially many matrix multiplications on the GPU for just one on the CPU.
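In other words, something like this; just a sketch of the layout being described, with a made-up member name:

layout(std140) uniform global_transforms
{
    mat4 camera_view_projection;   // camera_projection * camera_orientation, computed once on the CPU
};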
My question: am I right in thinking that this would be even just a wee bit more performant? Perhaps the shader compiler is smart enough to perform operations involving only uniforms once per draw?
(I'd just test it myself with a few thousand polygons, but it's my first time working with a programmable pipeline and I haven't quite gotten UBOs working yet :P)
I am using OpenGL ES2 to render a fairly large number of mostly 2D items, and so far I have gotten away with sending a premultiplied model/view/projection matrix to the vertex shader as a uniform and then multiplying my vertices by the resulting MVP in there.
All items are batched using texture atlases and I use one MVP per batch. So all my vertices are relative to the translation of that MVP.
Now I want to have rotation and scaling for each of the separate items, which means I need a different model for each of them. So I modified my vertex to include the model (16 floats!) and added a mat4 attribute in my shader and it all works well. But I'm kinda dissapointed with this solution since it dramatically increased the vertex size.
So as I was staring at my screen trying to think of a different solution I thought about transforming my vertices to world space before I send them over to the shader. Or even to screen space if its possible. The vertices I use are unnormalized coordinates in pixels.
So the question is, is such a thing possible? And if yes how do you do it? I can't think why it shouldn't be since its just maths but after a fairly long search on google, it doesn't look like a lot of people are actually doing this...
Strange cause if it is indeed possible, it would be quite a major optimization in cases like this one.
If the number of matrices per batch is limited, then you can pass all those matrices as uniforms (preferably in a UBO) and expand the vertex data with an index which specifies which matrix to use.
This is similar to GPU skinning used for skeletal animation.
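A sketch of what that could look like in an ES2-style vertex shader; the array size, the names, and the use of a plain uniform array instead of a UBO are assumptions.

uniform mat4 uViewProjection;    // the per-batch premultiplied view-projection matrix
uniform mat4 uModels[32];        // one model matrix per item in the batch
attribute vec2 aPosition;
attribute float aMatrixIndex;    // selects which model matrix this vertex uses

void main()
{
    mat4 model  = uModels[int(aMatrixIndex)];
    gl_Position = uViewProjection * model * vec4(aPosition, 0.0, 1.0);
}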
So, I have a simple project right now. Basically it's just a bunch of cuboids that are all axis-aligned... so it has really simple geometry.
Anyway, I am considering adding a better shader to it. Currently I am using the "flat shader" that is a stock shader in GLShaderManager; it colors everything with a flat color. However, I would love it if I could build a shader like the following.
Basically I want a shader that has an array of point lights at various positions with varying intensities.
Probably defined like this.
struct Light {
float x;
float y;
float z;
float intensity;
};
Light Lighting[20];
And basically, based on the level geometry and lights, I would love to simulate basic lighting and shadows; also it would be cool to have a circle under the player (like the player is actually there).
How hard would this be to make? How would I pass it my level geometry and light array? (Note: even though each cuboid is its own QUADS batch, it will be easy to make any kind of variable that stores the data.)
I am using GLEW, GLTools, GLShaderManager, GLBatch, Visual Studio 2010, and probably whatever "GLSL" is.
If you could just let me know how complicated a shader like this would be, that would be great. Also, if it is easy to find a shader that works like this online, could you link it?
Also, what is the difference between the two types of shaders (vertex and fragment)?
I would say it's relatively simple, but the thing about modern GL is that the initial learning curve is quite steep. At first it seems like you have to roll up your sleeves and learn how to do everything (essentially true) but later, it starts to seem like it made things easier than ever before with much more predictable behaviors since you're in the driver's seat.
One of the first things you want to learn is how to pass data from the CPU to the GPU. For values which don't vary on a per-vertex or per-fragment basis, such as your light positions and intensities, you want uniforms. Check out examples utilizing the glUniform* functions to see how to do this. This will allow you to experiment, passing values from the CPU side to the GPU side and seeing how they affect the shader, which accelerates your learning.
After that, it's worth learning how direct lighting is computed given a ray bouncing off a surface with Phong shading, separating ambient, diffuse, and specular terms.
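A minimal fragment-shader sketch of those two ideas together: the lights from the question become a uniform array (filled from the CPU with glUniform* calls), and each one contributes a simple diffuse term. All names and varyings are assumptions, the per-light x/y/z is folded into a vec3, and only ambient + diffuse are shown.

struct Light {
    vec3  position;
    float intensity;
};
uniform Light uLights[20];
uniform int   uLightCount;
uniform vec3  uBaseColor;      // the current flat color

in vec3 vWorldPos;             // interpolated from the vertex shader
in vec3 vNormal;
out vec4 fragColor;

void main()
{
    vec3 n     = normalize(vNormal);
    vec3 color = 0.1 * uBaseColor;                      // small constant ambient term
    for (int i = 0; i < uLightCount; ++i) {
        vec3  toLight = uLights[i].position - vWorldPos;
        float diff    = max(dot(n, normalize(toLight)), 0.0);
        color        += uBaseColor * diff * uLights[i].intensity;
    }
    fragColor = vec4(color, 1.0);
}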
Later you might even want to store this light data into an environment map. That'll give you the ability to use as many lights as you want without affecting the speed of the shader.
About vertex vs. fragment shaders: vertex shaders compute things on a vertex-by-vertex basis, including data for the fragment shader to then use. The fragment shader is kind of like a pixel shader (in HLSL it's actually just called a 'pixel shader'). It deals with shading what's in between those vertices and operates on a pixel-by-pixel basis (though with some potential overdraw). Often for lighting, the real heart of the logic will be in the fragment shader, while the vertex shader serves as an intermediary step that computes all the relevant values for the fragment shader to interpolate and use. The vertex shader is part of the 3D geometry pipeline, while the fragment shader is part of 2D rasterization.
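A companion vertex-shader sketch of that split: it only prepares the values the fragment-shader sketch above interpolates and uses (uModel and uViewProjection are again made-up names).

uniform mat4 uModel;
uniform mat4 uViewProjection;
in vec3 aPosition;
in vec3 aNormal;
out vec3 vWorldPos;
out vec3 vNormal;

void main()
{
    vec4 world  = uModel * vec4(aPosition, 1.0);
    vWorldPos   = world.xyz;
    vNormal     = mat3(uModel) * aNormal;     // fine for rotations and uniform scale
    gl_Position = uViewProjection * world;
}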
It shouldn't take too long or be too hard to get the hang of this, but you want to approach this kind of thing slowly and in baby steps. There's a lot of setup work involved in establishing a lighting/shading pipeline for your software with the precise characteristics you want, and for the final work you want to plan ahead. So it's good to establish a separate scrap project and start experimenting away to figure out how things work.
Say I wanted to build matrices inside the GPU pipeline for vertex transforms. I realized that my current implementation is quite inefficient because it rebuilds the matrices from the source material for every single vertex (while it really only needs to build them once per group of affected vertices). Is there any way to modify the whole array of vertices that gets drawn in a single draw call? Calculating the matrices and storing them in VRAM doesn't seem to be a very good option, since multiple vertices will be getting processed at the same time and I don't think I can sync them efficiently. The only other option I can think of is a compute shader; I haven't looked into its uses yet, but would it be possible to have it calculate the matrices and store them on the GPU so I can access them later on when drawing?
Do you have any source code? I never calculate matrices in the shaders; I normally do it on the CPU and pass them over in a constant buffer.
One way of achieving this is to precompute the matrix and send it to the shader as a uniform variable. For example, if your shaders only ever need to multiply the MVP matrix with the vertex positions, then you could pre-compute the MVP matrix outside the shader and send it as a float4x4 uniform to the shader; all the vertex shader does then is multiply that single matrix with each vertex. It doesn't get much more optimal than that, since vertices are processed in parallel on the GPU and the GPU has instruction sets optimized for vector math.