Is there a way to pass an array of vertex data from the CPU straight to a geometry shader?
If I want to build a sphere inside a GS, I'd rather have a pre-calculated array of points that I can access from within the shader instead of generating them on the fly every frame.
Maybe there's a way to access a vertex buffer from a shader that I don't know about?
I'm using Panda3D, but pure OpenGL explanations are more than welcome as well!
Thanks in advance.
I have a rather simple OpenGL workflow. I just use display lists (no shaders attached to them):
glNewList(list, GL_COMPILE);
// ... add vertices and normals here ...
glEndList();
glCallList(list);
I want to get some information from OpenGL about the faces of the created object. In particular, I need to know whether they are lit at a given moment in time. Something like glReadPixels, but reading from the 3D world rather than the framebuffer.
Is this possible via gl* functions?
Without using any shaders, it is not possible to query any information about the geometry itself. OpenGL is not designed for geometry processing; it is a rendering API.
There are several ways to achieve what you need by using shaders:
Perform the whole computation in a compute shader (probably the option with the best performance).
Use a geometry shader and transform feedback (see the sketch below).
How exactly you would implement it depends on which data you have and which computations should be performed.
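For the transform-feedback route, here is a minimal sketch (C++, desktop OpenGL 3.0+; GL headers, a current context and <vector> are assumed). The names program, numVerts and the varying outLitFlag are illustrative, not from the question:

// Buffer to receive one captured float per vertex ("lit or not").
GLuint tfBuffer;
glGenBuffers(1, &tfBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, numVerts * sizeof(GLfloat),
             nullptr, GL_DYNAMIC_READ);

// The captured outputs must be declared before the program is linked.
const char* varyings[] = { "outLitFlag" };
glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(program);

// Capture pass; rasterization is not needed here, so discard it.
glUseProgram(program);
glEnable(GL_RASTERIZER_DISCARD);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);
glBeginTransformFeedback(GL_TRIANGLES);
glDrawArrays(GL_TRIANGLES, 0, numVerts);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Read the per-vertex results back on the CPU.
std::vector<GLfloat> results(numVerts);
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                   numVerts * sizeof(GLfloat), results.data());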
I'm trying to use modern OpenGL and shaders instead of the immediate mode I have been using so far. I recently learned about VBOs and VAOs, and I'm still trying to get my head around them, but I know that a VBO takes an array of floats that are vertices, which it then passes to the GPU, etc.
What is the best way to draw multiple identical objects in different positions using VBOs? Will I have to draw one, then modify the array passed in beforehand, then draw it again, and modify and draw and modify and so on... for all blocks on the screen every frame? Or is there a better way?
I'm trying to achieve this: http://imgur.com/cBgJ0sK
Any help is appreciated - I don't want to learn bad (deprecated, old) immediate-mode habits when I could be learning a more modern way!
You should not modify the vertices in your program; that should be done in the shaders. For this, you will create a matrix that represents the transformation and use that matrix in the vertex shader.
The main idea is:
You create a VAO holding the information of your VBO (vertices, normals, texture coordinates, tangent information, etc.).
Then, for every different object, you generate a model matrix that holds its position, orientation and scale (and other homogeneous transformations) and send that to your shader to perform the transformation.
The idea is that you bind your VAO just once and then draw all the different objects, sending only the information that changes (the model matrix, maybe textures) for each draw.
To learn about how to use the model matrix, read tutorials like this:
http://ogldev.atspace.co.uk/www/tutorial06/tutorial06.html
There are even better ways to do this, but you can start from here.
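As a hedged illustration of that loop (GLM for the matrix math; cubeVAO, cubePositions and the uniform name "model" are assumed names, not from the question):

// Requires <glm/glm.hpp>, <glm/gtc/matrix_transform.hpp>,
// <glm/gtc/type_ptr.hpp> and a current GL context.
glUseProgram(program);
glBindVertexArray(cubeVAO);  // bind the shared geometry once
GLint modelLoc = glGetUniformLocation(program, "model");
for (const glm::vec3& pos : cubePositions) {
    glm::mat4 model = glm::translate(glm::mat4(1.0f), pos);
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
    glDrawArrays(GL_TRIANGLES, 0, 36);  // same 36-vertex cube every time
}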
Another technique that would be good for your case is instancing:
http://ogldev.atspace.co.uk/www/tutorial33/tutorial33.html
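A sketch of the instanced path, under the same assumptions (attribute locations 3-6 and instanceVBO are illustrative; note that a mat4 attribute occupies four consecutive vec4 slots):

// With the object's VAO bound, add a per-instance model-matrix attribute.
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * instanceCount,
             modelMatrices.data(), GL_STATIC_DRAW);
for (int i = 0; i < 4; ++i) {
    glEnableVertexAttribArray(3 + i);
    glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (void*)(sizeof(glm::vec4) * i));
    glVertexAttribDivisor(3 + i, 1);  // advance once per instance, not per vertex
}
glDrawArraysInstanced(GL_TRIANGLES, 0, 36, instanceCount);  // one call for all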
Later, you can move on to indirect drawing for even better performance. Later...
I wonder if there is a way to bind a Texture Buffer Object (TBO) directly to a certain range of data, as is possible with a Uniform Buffer Object (UBO -> glBindBufferRange).
Currently, I store my matrices within a TBO, and to recover each of them within my vertex shader I need to send a GLuint offset as a uniform variable. So I wonder if it's possible to do a kind of glBindBufferRange on my TBO. That way I wouldn't need to send the offset where my matrices are stored to each vertex shader every time.
I did a lot of research on the subject and did not find any conclusive information (just glTexBufferRange, but this function does not seem to be used for this purpose...).
Thanks a lot in advance for your help!
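For what it's worth, glTexBufferRange (OpenGL 4.3 / ARB_texture_buffer_range) does attach a sub-range of a buffer to a buffer texture, analogous to glBindBufferRange for UBOs. A minimal sketch, with tbo, matrixBuffer, offsetBytes and rangeBytes as placeholder names:

// The offset must be a multiple of the implementation's alignment.
GLint align = 1;
glGetIntegerv(GL_TEXTURE_BUFFER_OFFSET_ALIGNMENT, &align);
glBindTexture(GL_TEXTURE_BUFFER, tbo);
glTexBufferRange(GL_TEXTURE_BUFFER, GL_RGBA32F, matrixBuffer,
                 offsetBytes /* multiple of align */, rangeBytes);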
Q1: I have a 3D model with a vertex array; each element has x, y and z values.
I have created a dynamic VBO to render this array in OpenGL. The problem is that I have to update all the vertices every frame (the update depends on some logic, but it is certainly not a simple transformation: it cannot be done with a single transformation matrix applied to all vertices). So for each frame I map the VBO, update the data in a for loop, unmap the buffer, and then render it.
Now I was wondering: is there any faster way to do that?
Some points:
I have to update all the vertices. That is my requirement; I cannot work with a subset of the vertices.
Q2: I have to recalculate the normals because the vertices have been updated, and for smooth shading I need to combine the normals of all the faces meeting at each vertex, which is slow.
Is there any faster way to do so? Basically, faster recalculation of the normals for smooth shading.
Some things I already know about:
Use of SSE to optimize the normal calculation.
Use of TBB or OpenMP to parallelize the loops.
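For illustration, a minimal sketch of the OpenMP route (not the asker's code; Vec3, the index layout and the facesOfVertex adjacency list are assumptions). Splitting the work into two passes - per-face normals first, then a per-vertex gather over an adjacency list built once up front - avoids data races, since the mesh topology is fixed and only the positions change:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// pos: vertex positions; idx: 3 indices per triangle;
// facesOfVertex[v]: indices of the faces touching vertex v (topology is
// constant, so this adjacency is built once, not per frame).
void recomputeNormals(const std::vector<Vec3>& pos,
                      const std::vector<unsigned>& idx,
                      const std::vector<std::vector<int>>& facesOfVertex,
                      std::vector<Vec3>& faceNormal,
                      std::vector<Vec3>& vertexNormal)
{
    const int faceCount = static_cast<int>(idx.size() / 3);
    #pragma omp parallel for
    for (int f = 0; f < faceCount; ++f) {
        Vec3 e1 = pos[idx[3 * f + 1]] - pos[idx[3 * f + 0]];
        Vec3 e2 = pos[idx[3 * f + 2]] - pos[idx[3 * f + 0]];
        faceNormal[f] = cross(e1, e2);  // left unnormalized => area-weighted
    }
    #pragma omp parallel for
    for (int v = 0; v < static_cast<int>(pos.size()); ++v) {
        Vec3 n{0.0f, 0.0f, 0.0f};
        for (int f : facesOfVertex[v])
            n = n + faceNormal[f];
        vertexNormal[v] = normalize(n);  // each v written by exactly one thread
    }
}

Compile with OpenMP enabled (e.g. -fopenmp with GCC/Clang).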
I think that transform feedback is what you are looking for.
Using transform feedback, you can modify your vertex data at runtime and reuse it for another rendering pass, and all of these operations can be performed on the GPU itself, inside a vertex shader.
It is supported in OpenGL 3.0 and above.
Here is a simple example of how to use transform feedback.
Here are some details about feedback buffer usage.
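To give a flavour of the shader side, a hedged sketch of a vertex shader whose outputs would be captured into a buffer via transform feedback (the output names and the update rule are placeholders; they would be registered with glTransformFeedbackVaryings before linking the program):

const char* updateShaderSrc = R"(
    #version 330 core
    layout(location = 0) in vec3 inPos;
    layout(location = 1) in vec3 inNormal;
    out vec3 updatedPos;      // captured into the feedback buffer
    out vec3 updatedNormal;   // captured into the feedback buffer
    void main() {
        // Placeholder update logic; the real per-vertex logic goes here.
        updatedPos    = inPos + inNormal * 0.01;
        updatedNormal = inNormal;
        gl_Position   = vec4(updatedPos, 1.0);  // unused with rasterizer discard
    }
)";

The captured buffer can then be bound as the vertex source for the actual render pass, so the updated positions never round-trip through the CPU.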
I have a vertex shader that transforms vertices to create a fisheye effect. Is it possible to use just the vertex shader and the fixed pipeline for the fragment portion?
So basically I have an application that doesn't use shaders. I want to apply a fisheye effect using a vertex shader to transform all the vertices, and then leave it to the application to take care of lighting, texturing, etc.
If this is not possible, is it possible to get a fisheye effect by messing with the contents of the GL back buffer?
Thanks
If your code is on the fixed-function pipeline, then what you described is a problem - that's why having your graphics code in shaders is good: they let you change anything easily. Remember to use them in your next project. :)
OK, but for this particular case I assume that you don't want to rewrite your whole renderer from scratch in shaders now...
You mentioned you want a "fisheye effect". It seems you're lucky, because I believe you don't need shaders for that effect! If we're talking about the same effect, then you can achieve it just by replacing the GL_PROJECTION matrix of OpenGL's fixed function with a perspective matrix that has a wider angle of vision.
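A minimal sketch of that fixed-function approach (the field-of-view and clip-plane values are illustrative):

// Widen the vertical field of view on the fixed-function projection matrix.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(120.0, aspectRatio, 0.1, 1000.0);  // wide FOV in degrees
glMatrixMode(GL_MODELVIEW);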
Yes, it's possible, although some cards (notably ATI) don't support using a vertex shader without a fragment shader.
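If the driver allows it, a hedged sketch of linking a program with only a vertex shader attached (compatibility profile; fragment processing then falls back to the fixed-function pipeline):

// vertexShader is an already-compiled shader object (assumed name).
GLuint prog = glCreateProgram();
glAttachShader(prog, vertexShader);  // deliberately no fragment shader
glLinkProgram(prog);
GLint linked = GL_FALSE;
glGetProgramiv(prog, GL_LINK_STATUS, &linked);  // some drivers refuse this
if (linked) glUseProgram(prog);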