C++ OpenGL load image in GL_QUAD, glVertex2f - c++

Using WIN32_FIND_DATA and FindFirstFile I'm searching for files in a directory, and with fileName.find(".jpg") != std::string::npos I filter out the jpg images.
I'm using OpenGL for creating Boxes with a red color:
glBegin( GL_QUADS );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( 0.35f, 0.7f );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( -0.35f, 0.7f );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( -0.35f, -0.3f );
glColor4f( 1.0f, 0.0f, 0.0f, 0.0f ); glVertex2f( 0.35f, -0.3f );
glEnd();
This is the box in the center with a red color.
My question is: how can I load each of the images onto one of these boxes instead of the red color (glColor4f)?
I know this is probably not the best way to do it, but this code is not my own; I'm trying to improve it for a friend.
Thank you!

You need to learn about texturing. See NeHe's tutorial on the subject as an example.
However, that tutorial is a bit old (as is your code, since you use glVertex()), so it might not matter to you right now... :)
Anyway, starting from OpenGL 3.1 and OpenGL ES 2.0, you should do it using GLSL, fragment shaders, and samplers instead. See another tutorial for that. It's actually simpler than learning all the fixed-function stuff.
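For a flavour of the programmable path, here is a minimal fragment shader with a sampler, stored as a C++ string literal. This is just a sketch; the identifiers (uTexture, vTexCoord, fragColor) are placeholders of my own, not from any particular tutorial:
// Minimal GLSL fragment shader source as a C++ raw string literal.
// It samples a texture instead of using a fixed per-vertex color.
const char* fragmentSrc = R"GLSL(
    #version 150
    uniform sampler2D uTexture;   // the bound texture unit
    in vec2 vTexCoord;            // interpolated from the vertex shader
    out vec4 fragColor;
    void main() {
        fragColor = texture(uTexture, vTexCoord);
    }
)GLSL";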

It's not really a good practice to use WinAPI together with OpenGL applications unless you really have reasons to - and loading textures from the disk is not a good reason.
Think of it this way: OpenGL is a platform-independent API, so why diminish this advantage by using non-portable subroutines when portable alternatives exist and are more convenient to use in most cases?
For loading textures, I recommend the SOIL library. This is likely to be a much better solution than what the NeHe tutorials recommend.
For finding files on the disk, you might want to use boost::filesystem if you want to get rid of the WinAPI dependency. But that's not a priority now.
When you have the texture loaded by SOIL (a GLuint value being the texture ID), you can do the following:
enable 2D texturing (glEnable(GL_TEXTURE_2D)),
bind the texture as active 2D texture (glBindTexture(GL_TEXTURE_2D,tex);),
set the active color to pure white so that the texture image will be full-bright,
draw the vertices as usual, but for each vertex you'll need to specify a texture coordinate (glTexCoord2f) instead of a color. (0,0) is upper left coord of the texture image, (1,1) is the lower right.
Note that the texture image must have power-of-two dimensions (like 16x16 or 256x512). If you want to use arbitrary texture sizes, switch to a newer OpenGL version, which supports non-power-of-two textures and the GL_TEXTURE_RECTANGLE target.
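Putting those steps together, a minimal fixed-function sketch could look like this (assuming tex is the GLuint texture ID returned by SOIL, and reusing the quad coordinates from the question):
glEnable(GL_TEXTURE_2D);               // step 1: enable 2D texturing
glBindTexture(GL_TEXTURE_2D, tex);     // step 2: bind the loaded texture
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);     // step 3: pure white, full-bright image
glBegin(GL_QUADS);                     // step 4: texture coords instead of colors
glTexCoord2f(1.0f, 0.0f); glVertex2f(  0.35f,  0.7f); // upper right
glTexCoord2f(0.0f, 0.0f); glVertex2f( -0.35f,  0.7f); // upper left
glTexCoord2f(0.0f, 1.0f); glVertex2f( -0.35f, -0.3f); // lower left
glTexCoord2f(1.0f, 1.0f); glVertex2f(  0.35f, -0.3f); // lower right
glEnd();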
That's really all there is to it, as far as the basics are concerned. :)
BTW- +1 for what Marcus said in his answer. You're learning an outdated OpenGL version right now; while you can do a lot of fun things with it, you can do more with at least OpenGL 2 and shaders... and it's usually easier with shaders too.

Related

Drawing a simple rectangle in OpenGL 4

According to this wikibook it used to be possible to draw a simple rectangle as easily as this (after creating and initializing the window):
glColor3f(0.0f, 0.0f, 0.0f);
glRectf(-0.75f,0.75f, 0.75f, -0.75f);
This has, however, been removed in OpenGL 3.2 and later versions.
Is there some other simple, quick and dirty, way in OpenGL 4 to draw a rectangle with a fixed color (without using shaders or anything fancy)?
Is there some ... way ... to draw a rectangle ... without using shaders ...?
Yes. In fact, AFAIK, it is supported on all OpenGL versions in existence: you can draw a solid rectangle by enabling the scissor test and clearing the framebuffer:
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
This is different from glRect in multiple ways:
The coordinates are specified in pixels relative to the window origin.
The rectangle must be axis aligned and cannot be transformed in any way.
Most of the per-sample processing is skipped. This includes blending, depth and stencil testing.
However, I'd rather discourage you from doing this. You're likely to be better off building a VAO with all the rectangles you want to draw on the screen, then drawing them all with a very simple shader.
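For reference, a rough sketch of that approach (OpenGL 3.2 core; compileProgram is an assumed helper, not a real GL call, that compiles and links the two sources and binds "pos" to attribute location 0):
// Fixed-color rectangle as two triangles in a VAO, drawn with a trivial shader.
const char* vs =
    "#version 150\n"
    "in vec2 pos;\n"
    "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";
const char* fs =
    "#version 150\n"
    "out vec4 color;\n"
    "void main() { color = vec4(0.0, 0.0, 0.0, 1.0); }\n";

float verts[] = { -0.75f,  0.75f,   0.75f,  0.75f,  -0.75f, -0.75f,   // triangle 1
                   0.75f,  0.75f,   0.75f, -0.75f,  -0.75f, -0.75f }; // triangle 2

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);                          // attribute 0 == "pos"
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

GLuint program = compileProgram(vs, fs);               // assumed helper
glUseProgram(program);
glDrawArrays(GL_TRIANGLES, 0, 6);                      // draw the rectangle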

How to separate OpenGL drawing into classes

Say I wanted to draw just a simple OpenGL triangle. I know that I can draw a triangle in the main file, where all my OpenGL stuff is setup by doing something along the lines of:
glBegin( GL_TRIANGLES );
glVertex3f( 0.0f, 1.0f, 0.0f );
glVertex3f( -1.0f,-1.0f, 0.0f );
glVertex3f( 1.0f,-1.0f, 0.0f);
glEnd();
But instead of having all that clutter in my main file, I would like to draw a triangle by using a class named "Triangle" with a "Draw" function, so my code would look something like this:
Triangle TheTriangle;
TheTriangle.draw();
In short, how can I make a class with some OpenGL shapes that can be drawn by calling a function?
The usual way is as follows:
TriangleArray tri;
tri.push_back(...);
tri.prepare();
while (1) {
    clear();
    tri.draw();
    swapbuffers();
}
But usually the same class should handle an array of objects, not just one object, so TriangleArray is a good class name. prepare() is for setting up textures or vertex arrays. (Note: if your world is built from cubes, you'll create a CubeArray instead.)
Like a few have said, OpenGL doesn't go well with object oriented programming, but that doesn't mean it can't be done. To be a little more theoretical, simply put, you could have a container of "Meshes" in which every frame you loop through and render each to the screen. The Render class could be thought of as a manager of the states, and the container of the various scene modules. In reality, most systems are much more complex than this and implement structures such as the scene graph.
To get started, try creating a mesh class and an object class (one that perhaps points to a mesh to be drawn). Add functionality to add and remove objects from a container. Every frame, loop through it and render each triangle (or whatever else you want), and there you have a very simple OO architecture. This would be a way to get you started.
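As a minimal, hypothetical starting point (the class names here are mine, not from the question):
#include <vector>

// A Mesh knows how to draw itself; a Scene owns a container of meshes
// and renders each one every frame.
class Mesh {
public:
    virtual ~Mesh() {}
    virtual void draw() const = 0;   // each shape issues its own GL calls
};

class Scene {
    std::vector<Mesh*> objects;      // non-owning for brevity; prefer smart pointers
public:
    void add(Mesh* m) { objects.push_back(m); }
    void render() const {
        for (const Mesh* m : objects)
            m->draw();               // loop through and render every object
    }
};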
It's very normal to find it odd to wrap a very functional architecture with OOP, but you do get used to it, and if done correctly, it can make your code much more maintainable and scalable. Having said that, the example I gave was quite simple, so here is an architecture that you may want to explore once you have that down.
The following link gives some useful info on what exactly a scene graph is: Chapter 6 covers the scene graph
It's a very powerful architecture that will allow you to partition and order your scenes in a very complex and efficient manner (if you take advantage of its benefits). There are many other techniques, but I find this one to be the most powerful overall for game dev. It can totally depend on what type of application you are seeking to create. Having said all this, though, I would not advise making a FULLY object-oriented renderer. Depending on your application, an OO scene graph could be enough. Anyways, good luck!
You can just put the OpenGL code in the Triangle::draw() function:
void Triangle::draw() {
glBegin( GL_TRIANGLES );
glVertex3f( 0.0f, 1.0f, 0.0f );
glVertex3f( -1.0f,-1.0f, 0.0f );
glVertex3f( 1.0f,-1.0f, 0.0f);
glEnd();
}
Of course, this assumes that you have correctly declared the draw() method in the Triangle class and that you have initialized the OpenGL environment.
OpenGL doesn't really map well onto OOP paradigms. It's perfectly possible to implement an object-oriented rendering system, but the OpenGL API and many of its lower-level concepts are very hard, if not impossible, to cast into classes.
See this answer to a similar question for details: https://stackoverflow.com/a/12091766/524368

How to draw transparent BMP with GDI+

I'm currently editing some old GDI code to use GDI+ and ran into a problem when it comes to drawing a BMP file with a transparent background. The old GDI code did not use any obvious extra code to draw the background transparently, so I'm wondering how to achieve this using GDI+.
My current code looks like this
HINSTANCE hinstance = GetModuleHandle(NULL);
Gdiplus::Bitmap* bmp = Gdiplus::Bitmap::FromResource(hinstance, MAKEINTRESOURCEW(IDB_BMP));
Gdiplus::Graphics graphics(pDC->m_hDC);
graphics.DrawImage(bmp, posX, posY);
I also tried to create a new bitmap from the resource by using the Clone method and by drawing the bitmap to a newly created one, but neither helped. Both times I used PixelFormat32bppPARGB.
Then I tried to use alpha blending but this way the whole image gets transparent and not only the background:
Gdiplus::ColorMatrix clrMatrix = {
1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.5f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f, 1.0f
};
Gdiplus::ImageAttributes imgAttr;
imgAttr.SetColorMatrix(&clrMatrix);
graphics.DrawImage(bmp, destRect, 0, 0, width(), height(), Gdiplus::UnitPixel, &imgAttr);
The transparency information is already contained in the image but I don't have a clue how to apply it when drawing the image. How does one achieve this?
A late answer but:
ImageAttributes imAtt;
imAtt.SetColorKey(Color(255,255,255), Color(255,255,255), ColorAdjustTypeBitmap);
This will make white (255,255,255) transparent on any bitmap you use this image attribute with.
The simplest solution is to use some format other than BMP.
You need the bits to contain alpha data, and you need the Bitmap to be in a format that has alpha data. When you load a BMP with GDI+, it will always use a format without alpha, even if the BMP has an alpha channel. I believe the data is there in the image bits, but it's not being used.
The problem when you clone or draw to a PixelFormat32bppPARGB Bitmap is that GDI+ will convert the image data to the new format, which means discarding the alpha data.
Assuming it's loading the bits correctly, what you need to do is copy the bits over directly to another bitmap with the correct format. You can do this with Bitmap::LockBits and Bitmap::UnlockBits. (Make sure you lock each bitmap with its native pixel format so no conversion is done.)
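A sketch of that LockBits copy, assuming src is the Bitmap loaded from the BMP, dst is a new Bitmap created with PixelFormat32bppPARGB of the same size, and both are 32 bits per pixel (error handling omitted):
Gdiplus::Rect rc(0, 0, src->GetWidth(), src->GetHeight());
Gdiplus::BitmapData srcData, dstData;

// Lock each bitmap in its *native* pixel format so GDI+ performs no conversion.
src->LockBits(&rc, Gdiplus::ImageLockModeRead,  src->GetPixelFormat(), &srcData);
dst->LockBits(&rc, Gdiplus::ImageLockModeWrite, dst->GetPixelFormat(), &dstData);

// Copy the raw rows, including the alpha bytes GDI+ was ignoring.
for (UINT y = 0; y < srcData.Height; ++y) {
    memcpy(static_cast<BYTE*>(dstData.Scan0) + y * dstData.Stride,
           static_cast<BYTE*>(srcData.Scan0) + y * srcData.Stride,
           srcData.Width * 4);      // assumes 4 bytes per pixel in both bitmaps
}

src->UnlockBits(&srcData);
dst->UnlockBits(&dstData);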
I had the same problem. Transparent BMPs were not shown correctly, and unfortunately PNGs cannot be loaded directly from resources (except by adding quite a bit of code that copies them into a stream and loads them from there). I wanted to avoid that code.
The bitmaps that I'm using also use only two colours (background and logo). Having an alpha channel would mean saving them with a much higher colour depth instead of only a 2-bit colour depth.
Evan's answer was exactly what I was looking for :-)
Instead of white, I'm using the colour of the top-left pixel as the transparent colour:
Gdiplus::Color ColourOfTopLeftPixel;
Gdiplus::Status eStatus = m_pBitmap->GetPixel(0, 0, &ColourOfTopLeftPixel);
_ASSERTE(eStatus == Gdiplus::Ok);
// The following makes every pixel with the same colour as the top left pixel (ColourOfTopLeftPixel) transparent.
Gdiplus::ImageAttributes ImgAtt;
ImgAtt.SetColorKey(ColourOfTopLeftPixel, ColourOfTopLeftPixel, Gdiplus::ColorAdjustTypeBitmap);
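The attribute object is then passed to DrawImage, much like the color-matrix attempt in the question (posX/posY as used there):
// Every pixel matching the top-left colour is skipped, leaving the background visible.
Gdiplus::Rect destRect(posX, posY, m_pBitmap->GetWidth(), m_pBitmap->GetHeight());
graphics.DrawImage(m_pBitmap, destRect,
                   0, 0, m_pBitmap->GetWidth(), m_pBitmap->GetHeight(),
                   Gdiplus::UnitPixel, &ImgAtt);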

OpenGL - PBuffer render to Texture

After my last post, when someone recommended pBuffers to me, I dug around a bit on Google and found some cool examples of offscreen rendering using pbuffers. One example, available on nVidia's website, does simple offscreen rendering: it renders into the pbuffer context, reads the pixels into an array, and then calls the OpenGL DrawPixels function to display them.
I changed this example in order to create a texture from the pixels read: render offscreen, read the pixels into the array, and then initialize a texture with this color-bit array. But this looks very redundant to me - we render the image, copy it from graphics card memory into our memory (the array), only to copy it back to the graphics card to display it on screen, just in a different rendering context. All this copying just to display the rendered texture seems rather silly, so I tried a different approach using glCopyTexImage2D(), which unfortunately doesn't work. I'll show the code and explain:
mypbuffer.Initialize(256, 256, false, false);
- The false values are for sharing the context and sharing objects. They are false because this fantastic graphics card doesn't support them.
Then I perform the usual initializations to enable blending and GL_TEXTURE_2D.
CreateTexture();
mypbuffer.Activate();
int viewport[4];
glGetIntegerv(GL_VIEWPORT,(int*)viewport);
glViewport(0,0,xSize,ySize);
DrawScene(hDC);
//save data to texture using glCopyTexImage2D
glBindTexture(GL_TEXTURE_2D,texture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
0,0, xSize, ySize, 0);
glClearColor(.0f, 0.5f, 0.5f, 1.0f); // Set The Clear Color To Medium Blue
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(viewport[0],viewport[1],viewport[2],viewport[3]);
// glBindTexture(GL_TEXTURE_2D,texture);
first = false;
mypbuffer.Deactivate();
- The DrawScene function is very simple: it just renders a triangle and a rectangle, which is supposed to be rendered offscreen (I HOPE). CreateTexture() creates an empty texture. The function should work, as it was tested with the previous approach I described, and it works.
After this, in the main loop, I just do the following:
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D,texture);
glRotatef(theta, 0.0f, 0.0f, 0.01f);
glBegin(GL_QUADS);
//Front Face
glTexCoord2f(0.0f, 0.0f);
glVertex3f(-0.5, -0.5f, 0.5f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f( 0.5f, -0.5f, 0.5f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f( 0.5f, 0.5f, 0.5f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(-0.5f, 0.5f, 0.5f);
glEnd();
SwapBuffers(hDC);
theta = 0.10f;
Sleep (1);
The final result is just a window with a blue background; nothing actually got rendered. Any idea why this is happening? My graphics card doesn't support the WGL_ARB_render_texture extension, but that shouldn't be a problem when calling glCopyTexImage2D(), right?
My card doesn't support FBOs either.
What you must do is, sort of, "connect" your two OpenGL contexts so that the textures of your PBuffer context also show up in the main render context. The term you need to look for is "display list sharing". On Windows you connect the contexts retroactively using wglShareLists; on X11 and Mac OS X you must supply the handle of the context to be shared at context creation.
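On Windows that can look roughly like this (mainDC and pbufferDC are assumed to be the two device contexts; the sharing call should happen before the PBuffer context creates any objects of its own):
// Sketch: share textures/display lists between the two contexts.
HGLRC mainRC    = wglCreateContext(mainDC);
HGLRC pbufferRC = wglCreateContext(pbufferDC);

// The second context must not own any objects yet when lists are shared.
if (!wglShareLists(mainRC, pbufferRC)) {
    // handle failure; sharing can fail for incompatible pixel formats
}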
An entirely different possibility, which works just as well, is reusing the same context on the PBuffer. It's a little-known fact that you can use an OpenGL render context not only with the drawable it was first created on, but with any drawable that has compatible settings. So if your PBuffer matches your main window's pixel format, you can detach the render context from the main window and attach it to the PBuffer. Of course, you then need low-level access to the main window's device context/drawable, which is normally hidden behind a framework.
You should check whether your OpenGL implementation supports framebuffer objects: these objects can serve as render targets and can have textures attached as color buffers, rendering directly into a texture.
That would be the way to go; otherwise, your current method is the alternative.
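For reference only (the question states this card lacks FBO support), the FBO route looks roughly like this in OpenGL 3.0+ (older cards expose the same calls with EXT/ARB suffixes), with texture being the already-created empty texture:
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Attach the texture as the color buffer: rendering now goes straight into it.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texture, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    DrawScene(hDC);                       // renders directly into 'texture'
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);     // back to the window's framebuffer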

opengl rendering half a cylinder

OK, so I'm new to OpenGL and I'm creating a pool game using only core OpenGL and GLUT.
I am writing in C++.
I know how to draw a cylinder:
{
    GLUquadric *quadric = gluNewQuadric();
    // gluCylinder issues its own glBegin/glEnd internally
    gluCylinder(quadric, 0.5f, 0.5f, 5.0f, 40, 40);
    gluDeleteQuadric(quadric);
}
I want to know if I can halve this cylinder, so I can use the curve to round off my table/pocket edges.
Any help would be appreciated, thanks!
The function gluCylinder is too specific to accomplish this.
GLU is built as a layer on top of OpenGL, so you can always drop down to lower-level drawing functions if the high-level ones don't solve your problem.
This tutorial should give you an introduction to some of the lower-level drawing functionality in OpenGL: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=05
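gluCylinder sweeps a full circle, but you can generate the vertices yourself and stop halfway. A rough sketch with a triangle strip, matching gluCylinder's convention of extruding along +z (radius, height and segment count are the question's values; the rest is my own illustration):
#include <cmath>

const float radius = 0.5f, height = 5.0f;
const float PI = 3.14159265f;
const int segments = 40;

// Half a cylinder: sweep the angle from 0 to PI instead of 0 to 2*PI.
glBegin(GL_TRIANGLE_STRIP);
for (int i = 0; i <= segments; ++i) {
    float angle = PI * i / segments;
    float x = radius * std::cos(angle);
    float y = radius * std::sin(angle);
    glNormal3f(std::cos(angle), std::sin(angle), 0.0f); // outward-facing normal
    glVertex3f(x, y, 0.0f);     // bottom rim
    glVertex3f(x, y, height);   // top rim
}
glEnd();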