How to separate OpenGL drawing into classes - c++

Say I wanted to draw just a simple OpenGL triangle. I know that I can draw a triangle in the main file, where all my OpenGL stuff is setup by doing something along the lines of:
glBegin( GL_TRIANGLES );
glVertex3f( 0.0f, 1.0f, 0.0f );
glVertex3f( -1.0f,-1.0f, 0.0f );
glVertex3f( 1.0f,-1.0f, 0.0f);
glEnd();
But instead of having all that clutter in my main file, I would like to draw a triangle instead by using a class named "Triangle" with a "Draw" function, so my code would look something like this:
Triangle TheTriangle;
TheTriangle.draw();
In short, how can I make a class holding some OpenGL shapes that can be drawn by calling a member function?

The usual way is as follows:
TriangleArray tri;
tri.push_back(...);
tri.prepare();
while (1) {
    clear();
    tri.draw();
    swapbuffers();
}
But usually the same class should handle an array of objects, not just one, so TriangleArray is a good class name. prepare() is for setting up textures or vertex arrays. (Note: if your world is built from cubes, you'll create a CubeArray instead.)

Like a few others have said, OpenGL doesn't fit object-oriented programming very well, but that doesn't mean it can't be done. To put it a little more theoretically: you could have a container of "meshes" which you loop through every frame, rendering each to the screen. The render class can be thought of as a manager of state and as the container of the various scene modules. In reality, most systems are much more complex than this and implement structures such as a scene graph.
To get started, try creating a mesh class and an object class (which perhaps points to a mesh to be drawn). Add functionality to add and remove objects from a container. Every frame, loop through the container and render each triangle (or whatever else you want), and there you have a very simple OO architecture. That would be a way to get you started.
It's very normal to find it odd to wrap a very procedural API in OOP, but you do get used to it, and if done correctly it can make your code much more maintainable and scalable. That said, the example I gave was quite simple, so here is an architecture you may want to explore once you have that down.
The following link gives some useful info on what exactly a scene graph is: Chapter 6 covers the scene graph
It's a very powerful architecture that will allow you to partition and order your scenes in a very complex and efficient manner (if you take advantage of its benefits). There are many other techniques, but I find this one the most powerful overall for game development. It totally depends on what type of application you are seeking to create. Having said all this, I would not advise making a FULLY object-oriented renderer; depending on your application, an OO scene graph could be enough. Anyway, good luck!

You can just put the OpenGL code in the Triangle::draw() function:
void Triangle::draw() {
glBegin( GL_TRIANGLES );
glVertex3f( 0.0f, 1.0f, 0.0f );
glVertex3f( -1.0f,-1.0f, 0.0f );
glVertex3f( 1.0f,-1.0f, 0.0f);
glEnd();
}
Of course, this assumes that you have correctly declared the draw() method in the Triangle class and that you have initialized the OpenGL environment.

OpenGL doesn't really map well onto OOP paradigms. It's perfectly possible to implement an object-oriented rendering system, but the OpenGL API and many of its lower-level concepts are very hard, if not impossible, to cast into classes.
See this answer to a similar question for details: https://stackoverflow.com/a/12091766/524368

Related

Capturing the screen behind the window

I want to write a Windows C++ application where the contents of the window are whatever is behind the window (as if the window were transparent). That is, I want to retrieve the bounding box of my window, capture what is on screen within those coordinates, and draw it on my window. Therefore it is crucial that I can exclude the window itself during the capture.
"Why not just make the window transparent?" you ask. Because the next step for me is to make modifications to that image. I want to apply some arbitrary filters on it. For example, let's just say that I want to blur that image, so that my window looks like a frosted glass.
I tried to use the magnification API sample at https://code.msdn.microsoft.com/windowsdesktop/Magnification-API-Sample-14269fd2 which actually provides me the screen contents excluding my window. However, re-rendering the image is done in a timer, which causes a very jittery image; and I couldn't figure out how to retrieve and apply arbitrary transformations to that image.
I don't know where to start and really could use some pointers at this point. Sorry if I'm approaching this from a stupid perspective.
Edit: I am adding a mock-up of what I mean:
Edit 2: Just like in the magnification API example, view would be constantly refreshed (as frequently as possible, say every 16 ms just for argument's sake). See Visolve Deflector for an example; although it does not apply any effects on the captured region.
Again, I will be modifying the image data afterwards; therefore I cannot use the Magnification API's kernel matrix support.
You did not specify whether this is a one-time activity or whether you need a continuous stream of what's behind your window (like the Magnifier etc.), and, if continuous, what update frequency you may need.
Anyway in either case I see two primary use cases:
The contents behind your app are constant: you may not believe it, but most of the time the contents behind your window will not change.
The contents behind your window are changing/animating: this is a trickier case.
Thus, if you can let go of the non-constant/animated background use case, the solution is pretty simple in both the one-shot and continuous-stream cases:
Hide your application window
Take a screenshot, and cache it!
Show your app back (cropping everything apart from your application main window's bounding box), and now the user can apply the filter.
Even if the user changes the filter, reapply it to the cached image.
Track your window's WM_MOVE/WM_SIZE and repeat the above process for the new dimensions.
Additionally if you need to be precise, use SetWindowsHookEx for CBT/etc.
Corner cases off the top of my head:
Notify icon/Balloon tool tips
Desktop background scheduling (windows third party app)
Application specific message boxes etc!
Hope this helps!
You can start by modifying MAGCOLOREFFECT. In MagnifierSample.cpp we have:
if (ret)
{
    MAGCOLOREFFECT magEffectInvert =
    {{ // MagEffectInvert
        { -1.0f,  0.0f,  0.0f,  0.0f,  0.0f },
        {  0.0f, -1.0f,  0.0f,  0.0f,  0.0f },
        {  0.0f,  0.0f, -1.0f,  0.0f,  0.0f },
        {  0.0f,  0.0f,  0.0f,  1.0f,  0.0f },
        {  1.0f,  1.0f,  1.0f,  0.0f,  1.0f }
    }};
    ret = MagSetColorEffect(hwndMag, &magEffectInvert);
}
See also: Using a Color Matrix to Transform a Single Color.
For more advanced effects, you can blit the contents to a memory device context.
I've achieved something akin to this using GetForegroundWindow and PrintWindow.
It's kind of involved, but here is a picture. The image updates with its source, but it's slow, so there is a significant lag (roughly 0.2 to 0.5 seconds).
Rather than a blur effect I opted for a sine-wave effect. Also, using GetForegroundWindow basically means it can only copy the contents of one window. If you want to hear more, just respond and I'll put together some steps and an example repo.

Opengl surface rendering issue

I just started loading some .obj files and rendering them with OpenGL. When I render these meshes I get this result (see pictures).
I think it's some kind of depth problem, but I can't figure it out by myself.
These are the parameters for rendering:
// Dark blue background
glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
// Enable depth test
glEnable( GL_DEPTH_TEST );
// Cull triangles which normal is not towards the camera
glEnable(GL_CULL_FACE);
I used this Tutorial code as template. https://code.google.com/p/opengl-tutorial-org/source/browse/#hg%2Ftutorial08_basic_shading
The problem is simple: you have FRONT or BACK face culling enabled.
An object file can contain faces wound CCW (counter-clockwise) or CW (clockwise), i.e. with vertices written from left to right or from right to left.
Your OpenGL code is expecting the opposite winding, so it hides the surfaces you are looking at.
To check whether this solves your problem, just take out glEnable(GL_CULL_FACE); as that seems to be exactly what is producing the problem.
Additionally you can use glCullFace(ENUM);, where ENUM has to be GL_FRONT or GL_BACK.
If with both settings (GL_FRONT and GL_BACK) you still only see part of the mesh, then there is a problem in your code that interprets the .obj, or the .obj does not use consistent winding (a mix of CCW and CW).
I am actually unsure what you mean; however, glEnable(GL_CULL_FACE); followed by glCullFace(GL_BACK); will cull (remove) the back faces of the object. This reduces rendering work, and it only makes a visible difference when you are inside or "behind" the object.
Also, have you tried glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); before your render code?

Matrix operations on only part of vertex buffer (opengl-tutorial.org)

I recently learned that:
glBegin(GL_TRIANGLES);
glVertex3f( 0.0f, 1.0f, 0.0f);
glVertex3f(-1.0f,-1.0f, 0.0f);
glVertex3f( 1.0f,-1.0f, 0.0f);
glEnd();
is not really how it's done. I've been working with the tutorials at opengl-tutorial.org, which introduced me to VBOs. Now I'm migrating it all into a class and trying to refine my understanding.
My situation is similar to this. I understand how to use matrices for rotations and I could do it all myself and then hand it over to GL and friends. But I'm sure that's far less efficient and it would involve more communication with the graphics card. Tutorial 17 on that website shows how to rotate things; it uses:
// Send our transformation to the currently bound shader,
// in the "MVP" uniform
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glUniformMatrix4fv(ModelMatrixID, 1, GL_FALSE, &ModelMatrix[0][0]);
glUniformMatrix4fv(ViewMatrixID, 1, GL_FALSE, &ViewMatrix[0][0]);
to rotate objects. I assume this is more efficient than anything I could ever produce. What I want is to do something like this, but only multiply the matrix by some of the mesh, without breaking the mesh into pieces (because that would disrupt the triangle-vertex-index relationship and I'd end up stitching it back together manually).
Is there a separate function for that? Is there some higher-level library that handles meshes and bones that I should be using (as some of the replies to the other guy's post seem to suggest)? I don't want to get stuck using something outdated and inefficient again, only to end up redoing everything later.
Uniforms are so named because they are uniform: unchanging over the course of a render call. Shaders can only operate on input values (which are provided per input type: per-vertex for vertex shaders, per-fragment for fragment shaders, etc.), uniforms (which are fixed for a single rendering call), and global variables (which are reset to their original values for every instantiation of a shader).
If you want to do different stuff for different parts of an object within a single rendering call, you must do this based on input variables, because only inputs change within a single rendering call. It sounds like you're trying to do something with matrix skinning or hierarchies of objects, so you probably want to give each vertex a matrix index or something as an input. You use this index to look up a uniform matrix array to get the actual matrix you want to use.
OpenGL is not a scene graph. It doesn't think in meshes or geometry. When you specify a uniform, it won't get "applied" to the mesh. It merely sets a register to be accessed by a shader. Later when you draw primitives from a Vertex Array (maybe contained in a VBO), the call to glDraw… determines which parts of the VA are batched for drawing. It's perfectly possible and reasonable to glDraw… just a subset of the VA, then switch uniforms, and draw another subset.
In any case OpenGL will not change the data in the VA.

opengl rendering half a cylinder

OK, so I'm new to OpenGL and I'm creating a pool game using only core OpenGL and GLUT.
I am writing in C++.
I know how to draw a cylinder:
{
    GLUquadric *quadric = gluNewQuadric();
    // gluCylinder issues its own vertices, so it must not be
    // wrapped in glBegin/glEnd
    gluCylinder(quadric, 0.5f, 0.5f, 5.0f, 40, 40);
    gluDeleteQuadric(quadric);
}
I want to know if I can halve this cylinder, so I can use the curve to round off my table/pocket edges.
Any help would be appreciated, thanks.
The function gluCylinder is too specific to accomplish this.
GLU is built as a layer on top of OpenGL, so you can always drop down to the lower-level drawing functions when the high-level ones don't solve your problem.
This tutorial should give you an introduction to some of the lower-level drawing functionality in OpenGL: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=05

C++ OpenGL load image in GL_QUAD, glVertex2f

Using WIN32_FIND_DATA and FindFirstFile I'm searching for files in a directory, and with fileName.find(".jpg") != std::string::npos I filter out the jpg images.
I'm using OpenGL to create boxes with a red color:
glBegin( GL_QUADS );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f(  0.35f,  0.7f );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( -0.35f,  0.7f );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( -0.35f, -0.3f );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f(  0.35f, -0.3f );
glEnd();
This is the box in the center with a red color.
My question is: how can I load one of the images onto each quad instead of the red color (glColor4f)?
I think this is not the best way to do it, but this code is not my own; I'm trying to improve it for a friend.
Thank you!
You need to learn about texturing. See NeHe's tutorial on the subject as an example.
However, that tutorial is a bit old (as is your code, since you use glVertex()), so it might not matter to you right now... :)
Anyway, starting from OpenGL 3.1 and OpenGL ES 2.0, you should do this using GLSL, fragment shaders and samplers instead. See another tutorial for that. It's actually simpler than learning all the fixed-function stuff.
It's not really good practice to use the WinAPI in OpenGL applications unless you really have a reason to, and loading textures from disk is not a good reason.
Think of it this way: OpenGL is a platform-independent API, so why diminish this advantage by using non-portable subroutines when portable alternatives exist and are more convenient to use in most cases?
For loading textures, I recommend the SOIL library. This is likely to be a much better solution than what the NeHe tutorials recommend.
For finding files on the disk, you might want to use boost::filesystem if you want to get rid of the WinAPI dependency. But that's not a priority now.
When you have the texture loaded by SOIL (a GLuint value being the texture ID), you can do the following:
enable 2D texturing (glEnable(GL_TEXTURE_2D)),
bind the texture as active 2D texture (glBindTexture(GL_TEXTURE_2D,tex);),
set the active color to pure white so that the texture image will be full-bright,
draw the vertices as usual, but for each vertex you'll need to specify a texture coordinate (glTexCoord2f) instead of a color. (0,0) is upper left coord of the texture image, (1,1) is the lower right.
Note that the texture image must have dimensions that are powers of two (like 16x16 or 256x512). If you want to use arbitrary texture sizes, switch to a newer OpenGL version that supports GL_TEXTURE_RECTANGLE.
Not really a lot of explaining, as far as the basics are concerned. :)
BTW, +1 for what Marcus said in his answer. You're learning an outdated OpenGL version right now; while you can do a lot of fun things with it, you can do more with at least OpenGL 2 and shaders... and it's usually easier with shaders, too.