Okay forgive me if this is at all vague, but I've been up all night trying to catch up with some coding.
I have a reasonably large, already-defined terrain with minimal optimisations in place, and I have just started to introduce predefined meshes (.X objects) which come with materials and textures. Previously I was working with a fixed-function-pipeline approach, as I have only recently started working with DirectX 9. It has become apparent that the FFP is old school and no longer available in DirectX 10, so I have been moving the relevant code over to an HLSL approach.
In my initial approach I had loaded my models into a std::vector container of models and created another std::vector of objects, each of which contains a reference to which model to display. In my render loop, I would iterate this container and check whether the objects were within the field of view of my camera. If so, I would first translate the meshes to their positions using a SetTransform() call, then call DrawSubset().
However, it has become clear that SetTransform() is not applicable to the HLSL approach; therefore I'm a little stumped as to how I can pre-translate these meshes to their relevant positions, or whether I should be translating them within the vertex shader. The meshes are stored within an ID3DXMESH type, and it seems that I can access the index and vertex buffers of these meshes. Am I supposed to take the contents of these buffers, translate them and then draw them? Or am I going about this the wrong way?
I am familiar with the vertex buffer approach, but I'm not sure what the vertex format is within the mesh itself.
Any help would be appreciated as I'm about to tear my eyeballs out.
Edit
I'll accept Sergio's answer as it pushed me in the right direction although the solution came when I realised in my debug output a line about committing changes.
Solution
After transforming my mesh I needed to call
g_pEffect->CommitChanges();
As someone said earlier, you should be doing the translation in your vertex shader. If you have a worldViewProjection matrix constant within your shader, then you need to multiply the vertex position by that matrix and return the transformed position in your output before you move on to your pixel shader.
Make sure the world transform you are passing in along with your view and projection is not just an identity matrix, as that won't move the vertices into their world positions at all.
This is a sample of how you can achieve this in your vertex shader; it essentially multiplies the incoming untransformed vertex position by the worldViewProj matrix.
float4x4 g_worldViewProj;                      // world * view * projection, set from the application
struct VS_OUTPUT { float4 pos : POSITION; };   // declared here for completeness

VS_OUTPUT vs_main( float4 inPos : POSITION )
{
    VS_OUTPUT output;
    output.pos = mul( inPos, g_worldViewProj );   // object space -> clip space
    return output;
}
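For anyone landing here later, this is a minimal sketch of what the matching C++ render loop can look like once SetTransform() is gone, assuming an ID3DXEffect* g_pEffect like the one above; the object and container names (visibleObjects, obj.pos, obj.mesh) and the "RenderScene" technique are placeholders, not code from the question.
UINT numPasses = 0;
g_pEffect->SetTechnique("RenderScene");              // hypothetical technique name
g_pEffect->Begin(&numPasses, 0);
g_pEffect->BeginPass(0);

for (const SceneObject& obj : visibleObjects)        // objects that passed the field-of-view test
{
    D3DXMATRIX world, worldViewProj;
    D3DXMatrixTranslation(&world, obj.pos.x, obj.pos.y, obj.pos.z);
    worldViewProj = world * view * proj;             // D3DX matrices are row-major, hence this order

    g_pEffect->SetMatrix("g_worldViewProj", &worldViewProj);
    g_pEffect->CommitChanges();                      // needed because the parameter changes inside a pass
    obj.mesh->DrawSubset(0);                         // ID3DXMesh*
}

g_pEffect->EndPass();
g_pEffect->End();
The important points are that the world matrix is rebuilt per object (so it is never just an identity) and that CommitChanges() follows SetMatrix() because the change happens between BeginPass() and EndPass().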
Related
I am learning GLSL and, in general, some OpenGL, and I am having some trouble with vertex movement and management.
I am good with camera rotations and translation but now I need to move a few vertices and have them stay in their new positions.
What I would like to do is move them using the vertex shader, but without keeping track of their new positions through matrices (as I need to move them around independently, and it would be very costly in terms of memory and computing power to store that many matrices).
If there were a way to change their position values in the VBO directly from the vertex shader, that would be optimal.
Is there a way to do that? What other ways do you suggest?
Thanks in advance.
PS I am using GLSL version 1.30
While it's possible to write values from a shader into a buffer and later read them back on the CPU/client side (e.g. by using glReadPixels()), I don't think that applies to your case.
You can move a group of vertices, all with the same movement, with a single matrix. Why not do it on the CPU and store the results, updating their GL buffer when needed? (The VAO remains unchanged if you just update the buffer's contents.) Once they are moved, you don't need that matrix anymore, right? If you want to be able to undo the movement, then yes, you also need to store the matrix.
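As a rough illustration of that approach (a sketch only: positions, vbo, move, first and count are assumed names, and glm is used for the vector maths; none of this comes from the question):
for (std::size_t i = first; i < first + count; ++i)
    positions[i] = glm::vec3(move * glm::vec4(positions[i], 1.0f));   // bake the movement into the vertex data

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER,
                first * sizeof(glm::vec3),   // byte offset of the changed range
                count * sizeof(glm::vec3),   // byte size of the changed range
                &positions[first]);          // the VAO setup itself stays untouched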
It seems that transform feedback is exactly what you need.
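For reference, a minimal sketch of the transform feedback route (core since OpenGL 3.0, which matches GLSL 1.30); updateProgram, posBufferOut and vertexCount are placeholder names:
const char* varyings[] = { "newPosition" };                       // an `out vec3` written by the vertex shader
glTransformFeedbackVaryings(updateProgram, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(updateProgram);                                     // must (re)link after declaring the varyings

glEnable(GL_RASTERIZER_DISCARD);                                  // skip rasterization, we only want the data
glUseProgram(updateProgram);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, posBufferOut);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertexCount);                          // the shader's outputs land in posBufferOut
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);
After the draw, posBufferOut holds the moved positions and can be bound as the position VBO for the following frames, so the vertices keep their new positions without storing any per-vertex matrices.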
What I would like to do is move them using the vertex shader but also not keep track of their new positions trough matrices
If I understand you correctly, what you want is to send some vertices to the GPU and then have the vertex shader move them. You can't, because a vertex shader is only able to read from the vertex buffer; it isn't able to write back to it.
it would be very pricey in terms of memory and computing power to store that many matrices.
Considering:
I am good with camera rotations and translation
Then it wouldn't be expensive at all. Considering that you already have a view and projection matrix for the camera and viewport, having a model matrix that contains the translation, rotation and scaling of each object isn't anywhere near a bottleneck.
In the vertex shader you'd simply have:
uniform mat4 mvp; // model view projection matrix
...
gl_Position = mvp * vec4(position, 1.0);
On the CPU side of things you'd do:
mvp = projection * view * model;
GLint mvpLocation = glGetUniformLocation(shaderGeometryPass, "mvp");
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, (const GLfloat*)&mvp);
If this gives you performance issues, then the problem lies elsewhere.
If you really want to "save" whichever changes you make on the GPU side of things, then you'd have to look into Shader Storage Buffer Objects and/or Transform Feedback.
This may seem like a basic question, but how can you work with/manipulate objects created with the help of shaders in OpenGL?
I am always in need of the coordinates of different objects to use in my host program, so that I can create/manipulate objects based on those coordinates and then send them back to the vertex/fragment/geometry shader.
I have my initial vertex coordinates, which I have defined in my main program, but once they reach the geometry shader, the position is computed via:
gl_Position = projection_matrix * view_matrix * vec4(square_point,1);
EmitVertex();
Now, for example, I need to select and move them with the mouse on the screen, but there is no easy way I can think of to get the exact coordinates.
I've tried to do the position math in my main program, but I do not seem to get the same coordinates as the ones computed by the geometry shader. And calculating everything on the CPU is not really optimal for the number of objects that I have.
I've thought of doing some GPU->CPU data retrieval via buffers, but there are so many objects and so many coordinates that it quickly becomes unmanageable.
I imagine that there is another way to approach this, just that I may not have the proper knowledge of how OpenGL works.
You can use so-called shader storage buffer objects (SSBOs). You create them in your shader with the buffer qualifier. Then you do the necessary computation and download your data via glMapBufferRange and memcpy.
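A hedged sketch of what that readback can look like (ssbo and vertexCount are placeholder names; SSBOs require OpenGL 4.3 or ARB_shader_storage_buffer_object):
// In the shader:  layout(std430, binding = 0) buffer Positions { vec4 worldPos[]; };
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);       // make the shader's writes visible to the mapping below
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
const GLsizeiptr bytes = vertexCount * 4 * sizeof(float);
void* mapped = glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, bytes, GL_MAP_READ_BIT);
std::vector<float> positions(vertexCount * 4);       // needs <vector> and <cstring>
std::memcpy(positions.data(), mapped, bytes);        // copy the GPU results into CPU memory
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);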
I am using OpenGL ES2 to render a fairly large number of mostly 2D items, and so far I have gotten away with sending a premultiplied model/view/projection matrix to the vertex shader as a uniform and then multiplying my vertices by the resulting MVP in there.
All items are batched using texture atlases and I use one MVP per batch. So all my vertices are relative to the translation of that MVP.
Now I want to have rotation and scaling for each of the separate items, which means I need a different model matrix for each of them. So I modified my vertex format to include the model matrix (16 floats!) and added a mat4 attribute in my shader, and it all works well. But I'm kind of disappointed with this solution, since it dramatically increased the vertex size.
So as I was staring at my screen trying to think of a different solution, I thought about transforming my vertices to world space before I send them over to the shader, or even to screen space if that's possible. The vertices I use are unnormalized coordinates in pixels.
So the question is: is such a thing possible? And if yes, how do you do it? I can't think why it shouldn't be, since it's just maths, but after a fairly long search on Google it doesn't look like a lot of people are actually doing this...
Strange, because if it is indeed possible, it would be quite a major optimization in cases like this one.
If the number of matrices per batch is limited, then you can pass all those matrices as uniforms (preferably in a UBO) and expand the vertex data with an index which specifies which matrix to use.
This is similar to GPU skinning used for skeletal animation.
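A hedged sketch of that idea (uModels, aMatrixIndex, itemCount and itemMatrices are placeholder names; since OpenGL ES 2.0 has no UBOs, a plain uniform array is used here, its size limited by MAX_VERTEX_UNIFORM_VECTORS):
// Vertex shader side, shown as a comment:
//   attribute vec2 aPos;
//   attribute float aMatrixIndex;              // one extra float per vertex instead of sixteen
//   uniform mat4 uModels[16];                  // per-item model matrices for the current batch
//   uniform mat4 uViewProj;
//   void main() { gl_Position = uViewProj * uModels[int(aMatrixIndex)] * vec4(aPos, 0.0, 1.0); }

// C++ side: upload the whole palette once per batch instead of 16 floats per vertex.
GLint paletteLoc = glGetUniformLocation(program, "uModels[0]");
glUniformMatrix4fv(paletteLoc, itemCount, GL_FALSE, (const GLfloat*)itemMatrices);   // itemCount <= 16
Each vertex then only carries the index of its item's matrix, so the vertex size barely grows while every item can still be rotated and scaled independently, exactly like bone indices in GPU skinning.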
In the vertex shader program of a WebGL application, I am doing the following:
Calculate gl_Position P using a function f(t) that varies in time.
My question is:
Is it possible to store the updated P(t) computed in the vertex shader so I can use it in the next time step? This will be useful for performing some boundary tests.
I have read some information on how textures can be used to store and update vertex positions, but is this feasible in WebGL, given that texture access from a vertex shader is only optional in the OpenGL ES 2.0 specification that WebGL is based on, and so is not universally supported?
For a more concrete example, let us say that we are trying to move a point according to the equation R(t) = (k*t, 0, 0). These positions are updated in the vertex shader, which makes the point move. Now suppose I want to make the point bounce off a wall located at R = (C, 0, 0). To do that, we need the position of the point at t - dt (the previous time step).
Any ideas appreciated.
Regards
In addition to the previous answers, you can circumvent vertex texture fetch with PBOs, but I do not know if they are supported in WebGL or GLES, as I only have desktop GL experience. You write the vertex positions into the framebuffer; then, instead of using these as a vertex texture, you copy them into a vertex buffer (which works best via PBOs) and use them as a usual vertex attribute. That's the old way of doing transform feedback, which I suppose is not supported.
There's no way to store anything in the vertex shader. You can only pass values from it to the fragment shader and write those to the framebuffer pixels. And as you said, vertex texture fetch isn't universally supported (for instance, ANGLE started supporting it only a few days ago), so even that is a bit unworkable.
You can do one of two things: either do all the position math in JS and pass p1 and p0 in as uniforms, or keep track of the previous time value and do the position math twice in the shader, for both t1 and t0 (this shouldn't have much of a performance impact unless you're vertex-shader bound).
Is your dt a constant? If so, you could retrieve the previous position of your point by evaluating R(t - dt), i.e. (k*(t - dt), 0, 0), so nothing needs to be stored between frames. If it is not a constant, then you could use a uniform to pass it along on every rendering cycle.
Could someone explain to me the basics of pixel and vertex shader interaction?
The obvious part is that vertex shaders receive the basic vertex properties and then pass some of them on to the actual pixel shader.
But how does the actual vertex->pixel transition happen? I know that all pipelines include the rasterizer stage, which is capable of interpolating the vertex parameters and can apply textures based on certain texture coordinates.
As far as I understand, those are also interpolated (I'm not quite sure about this part; I've heard something about complex UV derivative math, but I assume we can say that they are interpolated).
So, here are some "targeted" questions.
How does the pixel shader operate? I mean, the pixel shader obviously does its work "per pixel", but because the vertex->pixel transition isn't obvious, this raises some questions.
Can I assume that if I evaluate a matrix-vector product once in my pixel shader, it would only be evaluated once when the image is rasterized? Or would it be better to evaluate everything I can in my vertex shader and then pass the results to the pixel shader?
Also, if someone could point articles / abstracts on this topic, I would really appreciate that.
Thank you.
UPDATE
I thought it didn't actually matter, because the interaction should be pretty much the same everywhere. I'm developing visualization applications and games for desktop, using HLSL / GLSL / NVIDIA Cg for shaders and mostly C++ as the base language.
The vertex shader is executed once for every vertex. It allows you to transform the vertex from world-space coordinates (or whichever other coordinate system it might be in) into screen-space coordinates.
That is, if you have a triangle, each vertex is transformed, so it ends up with a position on the screen.
And given these positions, the rasterizer determines which pixels are covered by the triangle spanned by those three vertices.
And then, for each pixel inside the triangle, the pixel shader is invoked. The output from the vertex shader is usually interpolated for each pixel, so pixels close to vertex v0 will receive values very close to those computed by the vertex shader for v0.
And this means that everything you do in the pixel shader is executed once per pixel covered by the primitive being rasterized.