I am writing an OpenGL program in C++ on Linux where I need to draw several spheres in 3D space. To draw them I use glutSolidSphere(GLdouble radius, GLint slices, GLint stacks); every time, in the draw function, glutSolidSphere is called many times, and afterwards each sphere is translated to the right position.
But I have noticed that when the program draws several spheres there is a framerate problem, so I was wondering whether there is a way to "store" the model of the sphere instead of recreating it every time, and just change its position.
I am not an OpenGL expert; sorry for any English mistakes.
The answer by datenwolf in this topic is perfect. He said:
In OpenGL you don't create objects, you just draw them. Once they are drawn, OpenGL no longer cares about what geometry you sent it.
glutSolidSphere is just sending drawing commands to OpenGL.
So if you have performance issues, you will need to look for other ways to improve performance, like multithreading, or maybe implement your own sphere-drawing function (search Google or Stack Overflow).
I have heard that glutSolidSphere calls glBegin(…)/glEnd(), and these can be slow, so maybe draw all your spheres in a loop so that you have only one glBegin(…)/glEnd() pair.
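One classic way to "store" the sphere in legacy OpenGL is a display list: the glutSolidSphere commands are compiled once and replayed cheaply every frame. A minimal sketch, assuming a GLUT setup; the spherePositions container is a hypothetical stand-in for however the positions are kept:

#include <GL/glut.h>
#include <vector>

struct Pos { float x, y, z; };
std::vector<Pos> spherePositions;   // hypothetical: filled elsewhere

GLuint sphereList = 0;

void initSphereList() {             // call once, after the GL context exists
    sphereList = glGenLists(1);
    glNewList(sphereList, GL_COMPILE);
    glutSolidSphere(1.0, 32, 32);   // radius, slices, stacks baked in once
    glEndList();
}

void drawSpheres() {                // call every frame
    for (const Pos& p : spherePositions) {
        glPushMatrix();
        glTranslatef(p.x, p.y, p.z);
        glCallList(sphereList);     // replay the precompiled geometry
        glPopMatrix();
    }
}

Note that display lists are deprecated in modern OpenGL (vertex buffer objects are the forward-looking equivalent), but for a GLUT program this is the smallest change.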
I would like to implement a moving scene in OpenGL.
Scene description: the terrain is static, but all other objects move towards the -x axis.
The terrain is a plane lying in the xz plane.
I have a mesh that will appear many times on the terrain, in several places.
All of these instances will move towards the -x axis at a specific speed.
I've thought of these possible implementations:
Create one mesh only and display it several times (I prefer this one)
Create several meshes, save them to a vector and then move them. After they leave the viewport, maybe destroy them?
The problem with the 1st way is that meshes will be created with some probability x%, so the number of meshes that will be needed is not known in advance. So how can I display them?
For example, if I knew I would create 3 meshes, I would do this:
glPushMatrix();
glTranslatef(mesh1.x + speed, mesh1.y, mesh1.z);   // mesh1 position + speed
mesh.draw();
glPopMatrix();
glPushMatrix();
glTranslatef(mesh2.x + speed, mesh2.y, mesh2.z);   // mesh2 position + speed
mesh.draw();
glPopMatrix();
glPushMatrix();
glTranslatef(mesh3.x + speed, mesh3.y, mesh3.z);   // mesh3 position + speed
mesh.draw();
glPopMatrix();
Now, in the case where meshes need to be created for as long as the animation continues, how would I implement that? And secondly, what about the meshes that have left the viewport? Do they continue to exist?
This answer is useless if you intend to code this in pure OpenGL.
However, if you are willing to try a 3rd-party library, try www.ogre3d.org - I would find this very easy to do in Ogre.
In fact, the challenges in 'Intermediate Tutorial One', if I remember correctly, address the equivalent Ogre concept of the problem you are having with OpenGL.
http://www.ogre3d.org/tikiwiki/tiki-index.php?page=Intermediate+Tutorial+1&structure=Tutorials
(Would have made this a comment, but I only recently became active!)
Use option 2, but don't delete them; just move them back and use them again, as sketched below. For example, if I wanted to count sheep, I wouldn't create 1,000,000 sheep meshes; I would create maybe 1 or 2 and just rotate between using those.
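A minimal sketch of this recycling (object pooling) idea, reusing the question's mesh.draw() style; the Mesh stub and the spawn/despawn bounds are illustrative assumptions:

#include <GL/gl.h>
#include <vector>

struct Mesh { void draw() const; };  // stand-in for the question's mesh type
struct Instance { float x, y, z; };

const float kSpawnX   = 50.0f;   // illustrative: where recycled meshes re-enter
const float kDespawnX = -50.0f;  // illustrative: past this, a mesh is reused
const float kSpeed    = 0.1f;    // units moved per frame along -x

void updateInstances(std::vector<Instance>& instances) {
    for (Instance& inst : instances) {
        inst.x -= kSpeed;            // everything drifts towards -x
        if (inst.x < kDespawnX)
            inst.x = kSpawnX;        // recycle instead of destroying
    }
}

void drawInstances(const std::vector<Instance>& instances, const Mesh& mesh) {
    for (const Instance& inst : instances) {
        glPushMatrix();
        glTranslatef(inst.x, inst.y, inst.z);
        mesh.draw();                 // one shared mesh, many placements
        glPopMatrix();
    }
}

Spawning "with x% probability" then just means appending a new Instance to the vector (or reusing a free one) instead of building a whole new mesh.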
I would like to draw a simple 2D stickman on the screen. I also want it to be anti-aliased.
The problem is that I want to use a bones system, which will be written once I know how to draw the stickman itself based on the joint positions. This means I can't use sprites - I want my stickman to be fully controllable in code.
It would be great if it will be possible to draw curves too.
Drawing a 3D stickman using a model would also be great if not better. The camera will be positioned like it's 2D, but I would still have depth. The problem is that I only have experience in Maya, and exporting and vertex weighting of the model in OpenGL seems like a mess...
I tried to find libraries for 2D anti-aliased drawing, or to enable multisampling and draw normally, but I had no luck. I also tried OpenGL's native anti-aliasing, but it seems deprecated and the line joins are bad...
I don't want it to be too complicated because, well, it shouldn't be - it's just the first part of my program, and it's drawing a stickman...
I hope you guys can help me, I'm sure you know better than me :)
You could enable GL_LINE_SMOOTH. To check whether your device supports the line width you need for smooth lines, you can query glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, range);
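A minimal sketch of the legacy line-smoothing setup; note that blending must be enabled for GL_LINE_SMOOTH to have any visible effect:

#include <GL/gl.h>

void enableSmoothLines() {
    glEnable(GL_LINE_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_BLEND);                              // required for the effect
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    GLfloat range[2];
    glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, range);  // [min, max] smooth width
    glLineWidth(range[1] < 3.0f ? range[1] : 3.0f);  // clamp the desired width
}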
If you want your code to be generic, you can also use antialiased textures.
Take a look at this link
http://www.opengl.org/resources/code/samples/advanced/advanced97/notes/node62.html
The only way to get antialiasing is to use a GL library that knows how to create an antialiased GL context, for example SDL. As for the stickman, you can draw him with colored polygons.
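For example, with SDL2 you can request a multisampled context before creating the window; whether the samples are actually granted depends on the driver, and the window title and size here are placeholders:

#include <SDL2/SDL.h>
#include <SDL2/SDL_opengl.h>

int main(int, char**) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);   // request 4x MSAA
    SDL_Window* win = SDL_CreateWindow("stickman",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);
    glEnable(GL_MULTISAMPLE);        // usually on by default when granted
    // ... draw the stickman's polygons here ...
    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}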
I just started learning OpenGL and am writing a first-person shooter, but I'm getting horrible framerates when I draw 5000 cubes. So now I'm attempting to perform occlusion culling using an octree. What I'm confused about is where to cast the rays from. Do I only cast them from the frustum's near plane? It seems like I would miss the part of the frustum that expands beyond it. Any help is appreciated.
If 5000 cubes already gives bad framerates, you should consider changing the way you render your cubes.
It's very unclear to us what you are drawing the cubes for. If they are static (i.e. they don't move), then it's best to pack them all into a single vertex buffer. If the cubes are supposed to move, then you should go for instancing. If you're going for a landscape made of cubes like Minecraft, then you should create vertex buffers but only include the faces of cubes that are actually visible.
I'd like to help more, but I'm unsure what you're doing.
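In case the static route above applies, here is a minimal sketch of packing all cubes into one vertex buffer, assuming a context with buffer-object support; appendCubeVertices is a hypothetical helper that writes one cube's 36 triangle vertices:

#include <GL/gl.h>
#include <vector>

struct Cube { float x, y, z; };

// hypothetical helper: appends the 36 vertices of one cube at (x, y, z)
void appendCubeVertices(std::vector<GLfloat>& out, const Cube& c);

GLuint packStaticCubes(const std::vector<Cube>& cubes, GLsizei& vertexCount) {
    std::vector<GLfloat> verts;
    for (const Cube& c : cubes)
        appendCubeVertices(verts, c);       // done once, not per frame

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLfloat),
                 verts.data(), GL_STATIC_DRAW);
    vertexCount = GLsizei(verts.size() / 3);
    return vbo;  // per frame: one glDrawArrays(GL_TRIANGLES, 0, vertexCount)
}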
I would like to draw voxels using OpenGL, but it doesn't seem to be directly supported. I made a cube-drawing function with 24 vertices (4 vertices per face), but the frame rate drops when you draw 2500 cubes. I was hoping there was a better way. Ideally, I would just like to send a position, an edge size, and a color to the graphics card. I'm not sure if I can do this by using GLSL to generate the geometry as part of the fragment shader or vertex shader.
I searched Google and found out about point sprites and billboard sprites (same thing?). Could those be used as a quicker alternative to drawing a cube? If I use 6, one for each face, it seems like that would send much less information to the graphics card and hopefully get me a better frame rate.
Another thought: maybe I can draw multiple cubes using one glDrawElements call?
Maybe there is a better method altogether that I don't know about? Any help is appreciated.
Drawing voxels with cubes is almost always the wrong way to go (the exceptional case is ray-tracing). What you usually want to do is put the data into a 3D texture and render slices depending on camera position. See this page: https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch39.html and you can find other techniques by searching for "volume rendering gpu".
EDIT: When writing the above answer I didn't realize that the OP was, most likely, interested in how Minecraft does it. For techniques to speed up Minecraft-style rasterization, check out Culling techniques for rendering lots of cubes. Though with recent advances in graphics hardware, rendering Minecraft through raytracing may become a reality.
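For the 3D-texture approach from the first paragraph, the upload step looks roughly like this; the voxelData layout (N³ RGBA bytes) is an assumption:

#include <GL/gl.h>

// voxelData: N*N*N RGBA bytes, filled elsewhere (assumption)
GLuint createVolumeTexture(const unsigned char* voxelData, int N) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, N, N, N, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, voxelData);
    return tex;  // then render view-aligned slices sampling this texture
}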
What you're looking for is called instancing. You could take a look at glDrawElementsInstanced and glDrawArraysInstanced for a couple of possibilities. Note that these only became core relatively recently (OpenGL 3.1), but they have been available as extensions for quite a while longer.
nVidia's OpenGL SDK has an example of instanced drawing in OpenGL.
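The draw call itself is a one-liner once the buffers are set up. A minimal sketch, assuming a sufficiently recent context with an extension loader; cubeVAO, indexCount, and instanceCount are set up elsewhere, and per-instance positions live in a vertex attribute whose divisor was set to 1 with glVertexAttribDivisor:

void drawCubesInstanced(GLuint cubeVAO, GLsizei indexCount,
                        GLsizei instanceCount) {
    glBindVertexArray(cubeVAO);
    // one call draws every cube; the instanced attribute advances per instance
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                            nullptr, instanceCount);
}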
First, you really should be looking at OpenGL 3+ using GLSL; this has been the standard for quite some time. Second, most Minecraft-esque implementations create the mesh on the CPU side. This technique looks at all of the block positions and creates a vertex buffer object containing the triangles of all exposed faces. The VBO is only regenerated when the voxels change and is persisted between frames. An ideal implementation would combine coplanar faces with the same texture into larger faces.
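A minimal sketch of that exposed-face test; the chunk size, grid layout, and emitFace helper are illustrative assumptions:

const int N = 16;                        // illustrative chunk size
bool solid[N][N][N];                     // filled elsewhere (assumption)

// hypothetical helper: appends one quad for the face of the voxel at
// (x, y, z) whose outward normal is (nx, ny, nz) to a CPU-side buffer
void emitFace(int x, int y, int z, int nx, int ny, int nz);

bool isSolid(int x, int y, int z) {
    if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N) return false;
    return solid[x][y][z];
}

void buildChunkMesh() {
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            for (int z = 0; z < N; ++z) {
                if (!isSolid(x, y, z)) continue;
                // emit a quad only where the neighbour is empty (face exposed)
                if (!isSolid(x + 1, y, z)) emitFace(x, y, z, +1, 0, 0);
                if (!isSolid(x - 1, y, z)) emitFace(x, y, z, -1, 0, 0);
                if (!isSolid(x, y + 1, z)) emitFace(x, y, z, 0, +1, 0);
                if (!isSolid(x, y - 1, z)) emitFace(x, y, z, 0, -1, 0);
                if (!isSolid(x, y, z + 1)) emitFace(x, y, z, 0, 0, +1);
                if (!isSolid(x, y, z - 1)) emitFace(x, y, z, 0, 0, -1);
            }
    // upload the collected vertices into a VBO here; regenerate only when
    // the voxels change, and keep the VBO between frames
}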
I'm trying to build a (simple) game engine using C++, SDL and OpenGL, but I can't seem to figure out the next step. This is what I have so far...
An engine object which controls the main game loop
A scene renderer which will render the scene
A stack of game states that can be pushed and popped
Each state has a collection of actors and each actor has a collection of triangles.
The scene renderer successfully sets up the view projection matrix
I'm not sure if the problem I am having relates to how to store an actor's position or how to create a rendering queue.
I have read that it is efficient to create a rendering queue that draws opaque polygons front to back and then draws transparent polygons back to front. Because of this, my actors call the "queueTriangle" method of the scene renderer object, which stores a pointer to each of the actors' triangles, sorts them based on their position, and then renders them.
The problem I am facing is that for this to happen, each triangle needs to know its position in world coordinates, but if I'm using glTranslatef and glRotatef I don't know these coordinates!
Could someone please, please, please offer me a solution, or perhaps link me to a (simple) guide on how to solve this?
Thank you!
If you write a camera class and use its functions to move/rotate it in the world, you can use the matrix derived from its internal quaternion to transform the vertices, giving you positions in camera space so you can sort triangles from back to front.
A 'queueTriangle' call sounds very inefficient to me. Modern engines often work with many thousands of triangles at a time, so you would hardly ever be working at the level of a single triangle. And if you are changing textures a lot to accomplish this ordering, that is even worse.
I'd recommend a simpler approach: draw your opaque polygons in a much less rigorous order by sorting the actor positions in world space, rather than the positions of individual triangles, and render the actors front to back, one actor at a time. Your transparent/translucent polygons still require the back-to-front approach (provided you're not using premultiplied alpha), but everything else should be simpler and faster.
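A minimal sketch of that per-actor sorting; the Actor fields and the camera position array are illustrative assumptions:

#include <algorithm>
#include <vector>

struct Actor { float x, y, z; bool translucent; /* ... */ };

float distSq(const Actor& a, const float cam[3]) {
    float dx = a.x - cam[0], dy = a.y - cam[1], dz = a.z - cam[2];
    return dx * dx + dy * dy + dz * dz;   // squared distance: no sqrt needed
}

void sortForRendering(std::vector<Actor*>& opaque,
                      std::vector<Actor*>& translucent,
                      const float cam[3]) {
    std::sort(opaque.begin(), opaque.end(),
              [&](const Actor* a, const Actor* b) {
                  return distSq(*a, cam) < distSq(*b, cam);   // near first
              });
    std::sort(translucent.begin(), translucent.end(),
              [&](const Actor* a, const Actor* b) {
                  return distSq(*a, cam) > distSq(*b, cam);   // far first
              });
}

Render the opaque list first, then the translucent list, applying each actor's transform with glPushMatrix/glTranslatef around its draw call as usual.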