I want to use OpenGL for the graphics in my project, and I really want to do it in good style. How can I declare a member function draw() for each class so that it can be called from the OpenGL display function?
For example, I want something like this:
class Triangle
{
public:
    void draw()
    {
        glBegin(GL_TRIANGLES);
        ...
        glEnd();
    }
};
Well, it also depends on how much time you have and what is required. Your approach is not bad; a little old-fashioned, though. Modern OpenGL uses shaders, but I guess that is not covered by your (school?) project. For that purpose, and for starters, your approach should be completely OK.
Besides shaders, if you wanted to progress a little further, you could also go in the direction of more generic polygon objects that simply store a list of vertices, combined with a separate 'Renderer' class capable of rendering polygons made of triangles. The code would look like this:
renderer.draw(triangle);
Of course, a polygon can have additional attributes like color, texture, transparency, etc. You can also have more specific polygon classes like TriangleStrip, TriangleFan, and so on. Then all you need to do is write a generic draw() method in your OpenGL Renderer that sets all the states and pushes the vertices to the rendering pipeline.
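A minimal sketch of that idea, still in immediate mode; the Polygon and Renderer shapes here are illustrative assumptions, not a fixed API:

#include <GL/gl.h>
#include <vector>

struct Vertex { float x, y, z; };

// A generic polygon: a list of vertices (3 per triangle) plus a color.
struct Polygon {
    std::vector<Vertex> vertices;
    float r = 1.0f, g = 1.0f, b = 1.0f;
};

class Renderer {
public:
    // Sets the state and pushes the polygon's vertices to the pipeline.
    void draw(const Polygon& p) const {
        glColor3f(p.r, p.g, p.b);
        glBegin(GL_TRIANGLES);
        for (const Vertex& v : p.vertices)
            glVertex3f(v.x, v.y, v.z);
        glEnd();
    }
};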
When I was working on my PhD, I wrote a simulator which did what you want to do. Just remember that even though your code may look object oriented, the OpenGL engine still renders things sequentially. Also, the sequential nature of matrix algebra, which is under the hood in OpenGL, is sometimes not in the order you would logically expect (when do I translate, when do I draw, when do I rotate, etc.?).
Remember LOGO back in the old days? It had a turtle, which was a pen, and you moved the turtle around and it would draw lines. If the pen was down, it drew; if the pen was up, it did not. That was my mindset when I worked on this program. I would start the "turtle" at a familiar coordinate (0, 0, 0), use the math to translate it (move it to the center of the object I want to draw), then call the draw() methods you are trying to write, drawing the shape relative to the coordinate system where the "turtle" is, not from the absolute (0, 0, 0). Then I would move the turtle, draw again, and so on. Hope that helps...
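In fixed-function OpenGL, that turtle pattern maps naturally onto the matrix stack. A rough sketch (drawTriangle() stands in for a hypothetical draw method like the one in the question):

// Save the turtle's position, move it, draw relative to it, move back.
glPushMatrix();                       // remember where the turtle was
glTranslatef(2.0f, 0.0f, 0.0f);       // walk to the object's center
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);   // turn the local coordinate system
drawTriangle();                       // draw in relative coordinates
glPopMatrix();                        // restore the previous position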
No, it won't work like this. The problem is that the GLUT display function is exactly one function. So if you wanted to draw a bunch of triangles, you could still only register one of their draw() functions as the GLUT display function. (Besides, pointers to member functions in C++ are a tricky topic.)
So as suggested above, go for a dedicated Renderer class. This class would know about all drawable objects in your application.
#include <list>

class Renderer {
    std::list<Drawable*> _objects;  // pointers, so no slicing of derived types
public:
    void drawAllObjects() {
        // iterate through _objects and call the respective draw() functions
        for (Drawable* obj : _objects)
            obj->draw();
    }
};
Your GLUT display function would then be a static function that calls drawAllObjects() on a (global or not) renderer object.
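For example (a minimal sketch; g_renderer and the wiring are assumptions about your setup):

Renderer g_renderer;  // the renderer that knows all drawable objects

// Free function (or static member) registered as the GLUT display callback.
void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    g_renderer.drawAllObjects();
    glutSwapBuffers();  // assuming a double-buffered window
}

// During initialization:
// glutDisplayFunc(display);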
Ah, good old immediate-mode OpenGL. :) That routine there looks fine.
I would probably make the 'draw' method virtual, though, and inherit from a 'Drawable' base type that specifies the methods such classes have.
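Something along these lines (a sketch, not a fixed interface):

class Drawable {
public:
    virtual ~Drawable() {}          // virtual destructor for polymorphic use
    virtual void draw() const = 0;  // every drawable must implement this
};

class Triangle : public Drawable {
public:
    void draw() const override {
        glBegin(GL_TRIANGLES);
        // ... vertices ...
        glEnd();
    }
};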
Related
I am making a basic implementation of Asteroids using SFML in C++, to practice using a component-entity-system framework.
Conceptually, it makes sense for objects like the player ship, floating asteroids etc. to share common 'components', such as a graphics component, a velocity component, and an orientation/positional component. This keeps concerns separate and has a whole range of benefits.
However, in SFML, Sprites are rendered to a fixed position that only they know about! This immediately means that my graphics component and orientation/positional component must be combined or must know about each other, which goes against the whole idea of the component-entity-system approach. In SDL, on the other hand, you can easily render the texture to a separate rectangle constructed from anywhere.
My question is this: There must be some concrete reasoning behind why Sprites in SFML hang onto their own positional information - what is this reasoning? Perhaps if I understood this better, I could form a good solution.
The sf::Sprite class is basically meant to be a quick way to draw sprites in an easy-to-use manner.
They're not necessarily the best choice for more advanced use cases, mostly because they're rather slow (since they're unbatched).
sf::Sprite is primarily geared towards someone who wants to get a sprite on screen easily without worrying too much about implementation details (as are other sf::Drawable-derived classes).
What you should do instead is implement your own drawable or visual component that stores color, texture, and UV coordinates. Maybe something like this:
struct DrawableComponent {
    sf::Color color;
    sf::Texture *texture;
    sf::IntRect uv;
};
Of course there could be other approaches with more options or various components (e.g. vector graphics vs. textured quads).
Then, when drawing, iterate over all entities that share the same texture (so they can be batched), put their vertices into a std::vector or sf::VertexArray, and use that for quick, batched rendering.
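A rough sketch of that batching loop, assuming SFML 2.x; the entities container, the Entity fields, and currentTexture are assumptions about your ECS, not SFML API:

// Collect all quads that share one texture into a single vertex array.
sf::VertexArray batch(sf::Quads);
for (const Entity& e : entities) {
    if (e.drawable.texture != currentTexture)
        continue;                          // batch one texture at a time
    sf::FloatRect uv(e.drawable.uv);       // sub-rectangle of the texture
    sf::Vector2f pos = e.position;         // from the position component
    batch.append(sf::Vertex(pos, e.drawable.color, sf::Vector2f(uv.left, uv.top)));
    batch.append(sf::Vertex(sf::Vector2f(pos.x + uv.width, pos.y), e.drawable.color, sf::Vector2f(uv.left + uv.width, uv.top)));
    batch.append(sf::Vertex(sf::Vector2f(pos.x + uv.width, pos.y + uv.height), e.drawable.color, sf::Vector2f(uv.left + uv.width, uv.top + uv.height)));
    batch.append(sf::Vertex(sf::Vector2f(pos.x, pos.y + uv.height), e.drawable.color, sf::Vector2f(uv.left, uv.top + uv.height)));
}
window.draw(batch, sf::RenderStates(currentTexture)); // one draw call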
SFML follows an object-oriented design. A sf::Sprite models a visible thing, which has a texture and a transformation. Thus, as usual in OOD, it holds both of these attributes.
This is directly at odds with ECS design, which strives to turn this inside-out by not having entities hold onto anything. You won't really be able to integrate the sf::Sprite class into your design -- it's fundamentally incompatible. The best you can do is create a temporary sf::Sprite at display time, when you have gathered all of the data you need.
As for SDL... Well, unlike SFML, it's just a low-ish-level graphics API (among others). It does not try to model anything: take a texture, slap it on the framebuffer, that's it. Two very different tools for very different goals.
I'm using cocos2d-x and want to create a dynamic shape as part of my user interface. I need a circle with an adjustable section removed. I attempted this using the draw method, but the item would be redrawn every frame, which required too much processing power. What would be an efficient way to achieve this without drawing the shape every frame? Is it possible to clip a circle sprite to remove a section?
The mathematics behind the implementation is fine; I'm just looking for a high-level explanation of how I should approach this.
You can try CCTransitionProgressRadialCW. This class does something similar to what you want.
Turns out there's a class specifically designed for this: CCProgressTimer.
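For reference, a minimal sketch of how CCProgressTimer might be used in cocos2d-x 2.x (the sprite filename is a placeholder):

// A radial progress timer displays a circular sprite with a pie-shaped
// section removed; changing the percentage changes the removed section.
CCProgressTimer* circle = CCProgressTimer::create(CCSprite::create("circle.png"));
circle->setType(kCCProgressTimerTypeRadial);
circle->setPercentage(75.0f);  // show 75% of the circle
this->addChild(circle);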
Is there a simple way to detect collisions between two GL objects, e.g. glutSolidCylinder and glutSolidTorus?
If there is no simple way, how do I refer to these objects and their locations?
And once I have their locations, what mathematical considerations should I take into account?
No, there is no simple way. Those aren't GL objects anyway; OpenGL doesn't know of any objects, as it is not a scene graph or geometry library. It just draws simple shapes, like triangles or points, onto the screen, and that's exactly what glutSolidTorus and friends do. They don't construct some abstract object with properties like position and the like. They draw a bunch of triangles to the screen, transforming the vertices using the current transformation matrices.
When you want to do things like collision detection, or even just simple object and scene management, you won't get around managing the objects yourself, complete with positions, geometry, and whatnot, as again OpenGL only draws triangles, with no notion of any abstract objects they may compose.
Once you have complete control over your objects' geometry (the triangles and vertices they're composed of), you can draw them yourself and/or feed them to any collision detection algorithms/libraries. For such mathematically describable objects as spheres, cylinders, or even tori, you may also find specialized algorithms. But keep in mind: it's up to you to manage those things as objects with whatever abstract properties you like them to have. OpenGL just draws them, and those glutSolid... functions are nothing more than helper functions containing a simple glBegin/glEnd block.
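As an illustration of managing that state yourself, a sketch of a conservative bounding-sphere test; the SceneObject type is an assumption, and a torus or cylinder would eventually need a tighter, shape-specific test:

// Position and bounding radius live in your own data, not in OpenGL;
// you use them both for glTranslatef when drawing and for collision.
struct SceneObject {
    float x, y, z;   // world position
    float radius;    // radius of a sphere enclosing the shape
};

// True if the two bounding spheres overlap.
bool collides(const SceneObject& a, const SceneObject& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    float r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;
}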
You will need some system that checks for and manages collisions. If you insist on using the GLUT objects, you will need to wrap them in some other class/geometry representation to check for intersections.
Some interesting reads/links on physics/collision detection:
www.realtimerendering.com/intersections.html
http://www.wildbunny.co.uk/blog/2011/04/20/collision-detection-for-dummies/ (he also has other articles; the principles for 2D can easily be extended to three dimensions)
http://www.dtecta.com/files/GDC2012_vandenBergen_Gino_Physics_Tut.pdf
Edit: this book is good, in my opinion: http://www.amazon.co.uk/gp/product/1558607323/ref=wms_ohs_product
Ok, I have a renderer class which has all kinds of special functions called by the rest of the program:
DrawBoxFilled
DrawText
DrawLine
About 30 more...
Each of these functions calls glBegin/glEnd separately, which I know can be very inefficient (it's even deprecated). So anyway, I am planning a total rewrite of the renderer, and I need to know the most efficient way to set up the functions so that when something calls one, it draws everything at once, or whatever else it needs to do to run most efficiently. Thanks in advance :)
The efficient way to render is generally to use VBOs (vertex buffer objects) to store your vertex data, but that is only really meaningful if you are rendering (mostly) static data.
Without knowing more about what your application is supposed to render, it's hard to say how you should structure it. But ideally, you should never draw individual primitives, but rather draw the contents (or a subset) of a vertex buffer.
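A minimal VBO sketch along those lines, assuming an OpenGL 1.5+ context and fixed-function vertex arrays; vertices and vertexCount are placeholders for your own data:

// One-time setup: upload the vertex data into GPU memory.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// Each frame: draw from the buffer instead of glBegin/glEnd.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);          // 2D positions, tightly packed
glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // draw a subset of the buffer
glDisableClientState(GL_VERTEX_ARRAY);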
The most efficient way is not to expose such low-level methods at all. Instead, what you want to do is build a scene graph, which is a data structure that contains a representation of the entire scene. You update the scene graph in your "update" method, then render the whole thing in one go in your "render" method.
Another, slightly different approach is to re-build the entire scene graph each frame. This has the advantage that once the scene graph is composed, it doesn't change. So you can call your "render" method on another thread while your "update" method is going through and constructing the scene for the next frame at the same time.
Many of the more advanced effects are simply not possible without a complete scene graph. You can't do shadow mapping, for instance (which requires you to render the scene multiple times from different angles); you can't do deferred rendering; and anything which relies on sorted draw order (e.g. alpha blending) becomes very difficult.
From your method names, it looks like you're working in 2D, so while shadow mapping is probably not high on your feature list, alpha blending and deferred rendering might be.
I'm having a rough time trying to set up this behavior in my program.
Basically, I want it so that when the user presses the "a" key, a new sphere is displayed on the screen.
How can you do that?
I would probably do it by simply having some kind of data structure (array, linked list, whatever) holding the current "scene". Initially this is empty. Then when the event occurs, you create some kind of representation of the new desired geometry, and add that to the list.
On each frame, you clear the screen and go through the data structure, mapping each representation into a suitable set of OpenGL commands. This is really standard.
The data structure is often referred to as a scene graph, it is often in the form of a tree or graph, where geometry can have child-geometries and so on.
If you're using the GLUT library (which is pretty standard), you can take advantage of its automatic primitive generation functions, like glutSolidSphere. You can find the API docs here. Take a look at section 11, 'Geometric Object Rendering'.
As unwind suggested, your program could keep some sort of list, but of the parameters for each primitive rather than the actual geometry. In the case of the sphere, this would be position/radius/slices. You can then use the GLUT functions to easily draw the objects. Obviously this limits you to what GLUT can draw, but that's usually fine for simple cases.
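Putting the two answers together, a minimal GLUT sketch; the Sphere struct and its fields are assumptions:

#include <GL/glut.h>
#include <vector>

// Store parameters only, as suggested above; GLUT generates the geometry.
struct Sphere { float x, y, z, radius; };
std::vector<Sphere> g_scene;  // the current "scene", initially empty

void keyboard(unsigned char key, int x, int y) {
    if (key == 'a') {
        Sphere s = {0.0f, 0.0f, -5.0f, 1.0f};
        g_scene.push_back(s);   // add a new sphere to the scene
        glutPostRedisplay();    // ask GLUT to redraw the frame
    }
}

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (const Sphere& s : g_scene) {
        glPushMatrix();
        glTranslatef(s.x, s.y, s.z);
        glutSolidSphere(s.radius, 16, 16);  // radius, slices, stacks
        glPopMatrix();
    }
    glutSwapBuffers();
}

// During setup: glutKeyboardFunc(keyboard); glutDisplayFunc(display);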
Without some more details of what environment you are using it's difficult to be specific, but here are a few pointers to things that can easily go wrong when setting up OpenGL:
Make sure you have the camera set up to look at the point where you are drawing the sphere. This can be surprisingly hard, and the simplest approach is to use gluLookAt from the OpenGL Utility Library (GLU). Make sure your near and far planes are set to sensible values.
Turn off backface culling, at least to start with. Sure, in production code backface culling gives you a quick performance gain, but it's remarkably easy to set up normals incorrectly on an object and not see it because you're looking at the invisible face.
Remember to call glFlush to make sure that all commands are executed. Drawing to the back buffer and then failing to call glutSwapBuffers is also a common mistake.
Occasionally you can run into issues with buffer formats - although if you copy from sample code that works on your system this is less likely to be a problem.
Graphics coding tends to be quite straightforward to debug once you have the basic environment correct, because the output is visual, but setting up the rendering environment on a new system can always be a bit tricky until you have that first cube or sphere rendered. I would recommend obtaining a sample or template and modifying that rather than trying to set up the rendering window from scratch; a minimal skeleton along those lines is sketched below. Using GLUT to check out first drafts of OpenGL calls is a good technique too.
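A minimal GLUT skeleton incorporating those pointers (window title, camera position, and clipping planes are placeholder values):

#include <GL/glut.h>

void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,   // camera position
              0.0, 0.0, 0.0,   // look at the origin, where we draw
              0.0, 1.0, 0.0);  // up vector
    glutSolidSphere(1.0, 16, 16);
    glutSwapBuffers();         // double-buffered: swap, don't just glFlush
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("First sphere");
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);   // backface culling off to start with
    glMatrixMode(GL_PROJECTION);
    gluPerspective(60.0, 1.0, 0.1, 100.0);  // sensible near/far planes
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}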