How to drag an image with OpenGL (MFC) - C++

I am using OpenGL to drag an image (a loaded bitmap) and I am wondering whether there are methods/functions to transform the image on the screen.
So far I have this code to load and draw the image:
void CDisplayControlPanelView::OnDraw(CDC* /*pDC*/)
{
    CDisplayControlPanelDoc* pDoc = GetDocument();
    ASSERT_VALID(pDoc);
    if (!pDoc)
        return;

    wglMakeCurrent(m_hDC, m_hRC);
    RenderScene();
    SwapBuffers(m_hDC);
    wglMakeCurrent(m_hDC, NULL);
}

void CDisplayControlPanelView::RenderScene()
{
    AUX_RGBImageRec* pRGBImage;

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);

    // Note: loading the bitmap on every redraw leaks memory; load it once
    // (e.g. in OnInitialUpdate) and cache the pointer instead.
    pRGBImage = auxDIBImageLoadA("D:\\map.bmp");
    glDrawPixels(pRGBImage->sizeX, pRGBImage->sizeY, GL_RGB, GL_UNSIGNED_BYTE, pRGBImage->data);

    glFlush();
}

Use glTranslate. There are many other ways, but this is the simplest. Check out some tutorials if you are new to OpenGL; they could help.
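For example, a minimal sketch of the idea (m_dragX and m_dragY are hypothetical offsets you would update in mouse-move handlers; they are not part of the code above):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(m_dragX, m_dragY, 0.0f); // shifts everything drawn after this call
// ... draw the image here, e.g. as a textured quad ...

This works for geometry such as quads, which is one more reason to draw the image as a textured quad (see the next answer).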

The first thing you must understand is that OpenGL is not a scene graph. It is a drawing API, very much like Windows GDI; the function glDrawPixels is not unlike a BitBlt from a memory DC.
Anyway: you shouldn't use glDrawPixels. It's slow and deprecated. The way to draw images in OpenGL is to upload the image into a texture and draw a textured quad, which you can then move around freely as you like.
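A rough sketch of that approach, reusing pRGBImage from the question (the upload belongs in one-time init code, not in the per-frame path; x, y, w and h are placeholder position/size variables, and on old GL versions without NPOT support the image dimensions must be powers of two):

// once, at init time: upload the bitmap into a texture object
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pRGBImage->sizeX, pRGBImage->sizeY,
             0, GL_RGB, GL_UNSIGNED_BYTE, pRGBImage->data);

// every frame: draw a quad at (x, y); changing x and y drags the image
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
glEnd();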

Related

On OpenGL is it possible to have a small viewport map to the entire window? [duplicate]

I would like to make a game that is internally 320x240 but renders to the screen at whole-number multiples of this (640x480, 960x720, etc.). I am going for retro 2D pixel graphics.
I have achieved this by setting the internal resolution via glOrtho():
glOrtho(0, 320, 240, 0, 0, 1);
And then I scale up the output resolution by a factor of 3, like this:
glViewport(0,0,960,720);
window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 960, 720, SDL_WINDOW_OPENGL);
I draw rectangles like this:
glBegin(GL_LINE_LOOP);
glVertex2f(rect_x,          rect_y);
glVertex2f(rect_x + rect_w, rect_y);
glVertex2f(rect_x + rect_w, rect_y + rect_h);
glVertex2f(rect_x,          rect_y + rect_h);
glEnd();
It works perfectly at 320x240 (not scaled).
When I scale up to 960x720, the pixel rendering all works just fine! However, it seems the GL_LINE_LOOP is not drawn on a 320x240 canvas and then scaled up, but drawn on the final 960x720 canvas. The result is 1px lines in a 3px world :(
How do I draw lines to the 320x240 glOrtho canvas instead of the 960x720 output canvas?
There is no "320x240 glOrtho canvas". There is only the window's actual resolution: 960x720.
All you are doing is scaling up the coordinates of the primitives you render. So your code says to render a line from, for example, (20, 20) to (40, 40), and OpenGL (eventually) scales those coordinates by 3 in each dimension: (60, 60) and (120, 120).
But that's only dealing with the end points. What happens in the middle is still based on the fact that you're rendering at the window's actual resolution.
Even if you employed glLineWidth to change the width of your lines, that would only fix the line widths. It would not fix the fact that the rasterization of lines is based on the actual resolution you're rendering at. So diagonal lines won't have the pixelated appearance you likely want.
The only way to do this properly is to, well, do it properly. Render to an image that is actually 320x240, then draw it to the window's actual resolution.
You'll have to create a texture of that size, then attach it to a framebuffer object. Bind the FBO for rendering and render to it (with the viewport set to the image's size). Then unbind the FBO, and draw that texture to the window (with the viewport set to the window's resolution).
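A minimal sketch of that setup, assuming a context with framebuffer-object support (GL 3.0+ or ARB_framebuffer_object; fbo and lowResTex are placeholder names):

// once, at init time: a 320x240 color texture attached to an FBO
GLuint lowResTex, fbo;
glGenTextures(1, &lowResTex);
glBindTexture(GL_TEXTURE_2D, lowResTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 320, 240, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // keep pixels crisp
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, lowResTex, 0);

// every frame:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 320, 240);
// ... render the scene at 320x240 ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, 960, 720);
// ... draw a fullscreen quad textured with lowResTex ...

GL_NEAREST is what preserves the blocky look; GL_LINEAR would smear the upscaled pixels.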
As I mentioned in my comment, Intel OpenGL drivers have problems with direct rendering to texture, and I do not know of any workaround that works. In such a case the only way around this is to use glReadPixels to copy the screen content into CPU memory and then copy it back to the GPU as a texture. Of course, that is much, much slower than direct rendering to texture. So here is the deal:

1. Set the low-res view: do not change the resolution of your window, just the glViewport values, then render your scene in low resolution (in just a fraction of the screen space).
2. Copy the rendered screen into a texture.
3. Set the target-resolution view.
4. Render the texture, and do not forget to use the GL_NEAREST filter. The most important thing is that you swap buffers only after this, not before!!! Otherwise you would get flickering.

Here is the C++ source for this:
void gl_draw()
{
    // render resolution and multiplier
    const int xs=320, ys=200, m=2;

    // [low res render pass]
    glViewport(0, 0, xs, ys);
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);

    // 50 random lines (100 vertices)
    RandSeed = 0x12345678;
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_LINES);
    for (int i = 0; i < 100; i++)
        glVertex2f(2.0 * Random() - 1.0, 2.0 * Random() - 1.0);
    glEnd();

    // [multiplied resolution render pass]
    static bool _init = true;
    static GLuint txrid = 0;   // texture id (static so it survives between frames)
    BYTE map[xs*ys*3];         // RGB buffer

    // init texture
    if (_init)                 // you should also delete the texture on app exit ...
    {
        // create texture
        _init = false;
        glGenTextures(1, &txrid);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, txrid);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // must be nearest !!!
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COPY);
        glDisable(GL_TEXTURE_2D);
    }

    // copy low res screen to CPU memory
    glReadPixels(0, 0, xs, ys, GL_RGB, GL_UNSIGNED_BYTE, map);
    // and then to GPU texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, txrid);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, xs, ys, 0, GL_RGB, GL_UNSIGNED_BYTE, map);

    // set multiplied resolution view
    glViewport(0, 0, m*xs, m*ys);
    glClear(GL_COLOR_BUFFER_BIT);

    // render low res screen as a texture
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(-1.0, -1.0);
    glTexCoord2f(0.0, 1.0); glVertex2f(-1.0, +1.0);
    glTexCoord2f(1.0, 1.0); glVertex2f(+1.0, +1.0);
    glTexCoord2f(1.0, 0.0); glVertex2f(+1.0, -1.0);
    glEnd();
    glDisable(GL_TEXTURE_2D);

    glFlush();
    SwapBuffers(hdc); // swap buffers only here !!!
}
I tested this on some Intel HD graphics (God knows which version) I had at my disposal, and it works (while the standard render-to-texture approaches do not).

How to display an RGB24 video frame using OpenGL

My task is to render a set of 50 RGB frames using OpenGL's GLUT library.
What I have tried: for 3D cube rotation I have a set of vertices with which I render the cube to the window. However, what should be done in the case of rendering RGB frames? Below is the code with which I render my 3D cube:
#include <glut.h>

GLfloat vertices[24] = {-1.0,-1.0,-1.0, 1.0,-1.0,-1.0, 1.0,1.0,-1.0, -1.0,1.0,-1.0,
                        -1.0,-1.0, 1.0, 1.0,-1.0, 1.0, 1.0,1.0, 1.0, -1.0,1.0, 1.0};
GLfloat colors[24]   = {-1.0,-1.0,-1.0, 1.0,-1.0,-1.0, 1.0,1.0,-1.0, -1.0,1.0,-1.0,
                        -1.0,-1.0, 1.0, 1.0,-1.0, 1.0, 1.0,1.0, 1.0, -1.0,1.0, 1.0};
GLubyte cubeIndices[24] = {0,3,2,1, 2,3,7,6, 0,4,7,3, 1,2,6,5, 4,5,6,7, 0,1,5,4};
static GLfloat theta[3] = {0, 0, 0};
static GLint axis = 2;

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glRotatef(theta[0], 1.0, 0.0, 0.0);
    glRotatef(theta[1], 0.0, 1.0, 0.0);
    glRotatef(theta[2], 0.0, 0.0, 1.0);
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);
    glutSwapBuffers();
    glFlush();
}

void spinCube()
{
    theta[axis] += 2.0;
    if (theta[axis] > 360.0)
        theta[axis] -= 360.0;
    display();
}

void init()
{
    glMatrixMode(GL_PROJECTION);
    glOrtho(-2.0, 2.0, -2.0, 2.0, -10.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
}

void mouse(int btn, int state, int x, int y)
{
    if (btn == GLUT_LEFT_BUTTON   && state == GLUT_DOWN) axis = 0;
    if (btn == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN) axis = 1;
    if (btn == GLUT_RIGHT_BUTTON  && state == GLUT_DOWN) axis = 2;
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutCreateWindow("Simple YUV Player");
    init();
    glutDisplayFunc(display);
    glutIdleFunc(spinCube);
    glutMouseFunc(mouse);
    glEnable(GL_DEPTH_TEST);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    //glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(3, GL_FLOAT, 0, colors); // was glVertexPointer, which overwrote the vertex array
    glutMainLoop();
    return 0;
}
Can anyone suggest an example or tutorial showing how I can modify the above code to display RGB frames?
Once you have your RGB frame as raw data in memory, things are pretty straightforward. Create a texture using glGenTextures, bind it using glBindTexture, and upload the data via glTexImage2D or glTexSubImage2D. Then render a fullscreen quad, or whatever you like, with that texture. The benefit is that you could render multiple 'virtual' TVs in your scene just by rendering multiple quads with that same texture; imagine a TV store where the same video runs on dozens of TVs.
glDrawPixels might also work, but it is much less versatile.
I don't know if uploading via a texture is the way to go (hardware-accelerated movie playback programs like VLC are most likely doing something far more advanced), but it should be a good start.
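A rough sketch of that idea (WIDTH, HEIGHT and frame are placeholders for your frame size and the raw RGB24 data of the current frame):

// once, at init time
GLuint videoTex;
glGenTextures(1, &videoTex);
glBindTexture(GL_TEXTURE_2D, videoTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // RGB24 rows are not 4-byte aligned in general
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);

// for each new frame: replace the texel data, then draw a quad
glBindTexture(GL_TEXTURE_2D, videoTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, frame);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f); // t flipped: video rows are usually top-down
glTexCoord2f(1.0f, 1.0f); glVertex2f(+1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(+1.0f, +1.0f);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, +1.0f);
glEnd();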
As Marius already suggested, implement texture mapping first. It's rather straightforward; any texture mapping tutorial will do.
Rendering video frames is not what OpenGL is best at, and you should avoid it as much as you can, since it may involve a client-to-host memory copy, which is really costly (takes too much time), or it simply takes up too much memory. Anyway, if you really have to do it, just generate as many textures as you need with glGenTextures, load the frames into them with glTexImage2D, and then flip through the frames with a simple loop each frame.
P.S. Judging by your application's name, "YUV Player", you may also need to convert the input data, since OpenGL mostly uses RGB, not YUV.
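For reference, a common integer approximation of the BT.601 YUV-to-RGB conversion looks roughly like this (a sketch only; the exact coefficients and value ranges depend on your source format):

unsigned char clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

void yuv_to_rgb(int y, int u, int v,
                unsigned char &r, unsigned char &g, unsigned char &b)
{
    int d = u - 128, e = v - 128;             // center the chroma components
    r = clamp8(y + (359 * e) / 256);          // + 1.402 * V
    g = clamp8(y - (88 * d + 183 * e) / 256); // - 0.344 * U - 0.714 * V
    b = clamp8(y + (454 * d) / 256);          // + 1.772 * U
}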

Untextured Quads appear dark

I just started working with OpenGL, but I ran into a problem after implementing a font system.
My plan is simply to visualize several pathfinding algorithms.
Currently OpenGL gets set up like this (OnSize gets called once manually on window creation):
void GLWindow::OnSize(GLsizei width, GLsizei height)
{
    // set size
    glViewport(0, 0, width, height);
    // orthographic projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, width, height, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    m_uiWidth = width;
    m_uiHeight = height;
}

void GLWindow::InitGL()
{
    // enable 2D texturing
    glEnable(GL_TEXTURE_2D);
    // choose a smooth shading model
    glShadeModel(GL_SMOOTH);
    // set the clear color to black
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.0f);
}
In theory I don't need blending, because I will only use untextured quads to visualize obstacles, and lines etc. to draw paths... So everything will be untextured, except the fonts...
The font class has push and pop functions that look like this (if I remember correctly, my font system is based on a NeHe tutorial that I was following quite a while ago):
inline void GLFont::pushScreenMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(viewport[0], viewport[2], viewport[1], viewport[3], -1.0, 1.0);
    glPopAttrib();
}

inline void GLFont::popProjectionMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glPopAttrib();
}
So, the problem:
If I don't draw any text, I can see the quads I want to draw, but they are quite dark, so there must be something wrong with my general OpenGL matrix properties.
If I draw text (so the font-related push and pop functions get called), I can't see any quads.
The question:
How do I solve this problem? Some background information on why this happens would also be nice, because I am still a beginner/student who has just started.
If you draw untextured quads while texturing is still enabled, you will run into undefined behaviour. What will probably happen is that the previously bound texture will be used, and the colour at texture coordinate (0,0) will be applied, which could be what is causing them to appear dark or invisible.
Really, you need to disable texturing, using glDisable(GL_TEXTURE_2D), before trying to draw untextured quads. Again, if you don't, it'll just use the previous texture and texture coordinates, which, without seeing your draw() loop, I'm assuming to be undefined.
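For example, something along these lines in the draw loop (a sketch):

glDisable(GL_TEXTURE_2D);      // untextured geometry: obstacle quads, path lines
glColor3f(1.0f, 0.0f, 0.0f);   // set an explicit colour instead of relying on leftover state
// ... draw quads and lines here ...

glEnable(GL_TEXTURE_2D);       // re-enable texturing only for the font rendering
// ... draw text here ...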

OpenGL Texture transparency doesn't work

I have an OpenGL texture that is bound to a simple quad.
My problem: my texture is a 128x128-pixel image. I'm only filling up about 100x60 pixels of that image; the other pixels are transparent. I saved it in a .png file. When I'm drawing, the transparent part of the bound texture is white.
Let's say I have a background. When I draw this new quad on that background, I can't see through the transparent part of my texture.
Any suggestions?
Code:
// Init code...
gl.glEnable(gl.GL_TEXTURE_2D);
gl.glDisable(gl.GL_DITHER);
gl.glDisable(gl.GL_LIGHTING);
gl.glDisable(gl.GL_DEPTH_TEST);
gl.glTexEnvi(gl.GL_TEXTURE_ENV, gl.GL_TEXTURE_ENV_MODE, gl.GL_MODULATE);
// Drawing code...
gl.glBegin(gl.GL_QUADS);
gl.glTexCoord2d(0.0, 0.0);
gl.glVertex3f(0.0f, 0.0f, 0.0f);
gl.glTexCoord2d(1.0, 0.0);
gl.glVertex3f(1.0f, 0.0f, 0.0f);
gl.glTexCoord2d(1.0, 1.0);
gl.glVertex3f(1.0f, 1.0f, 0.0f);
gl.glTexCoord2d(0.0, 1.0);
gl.glVertex3f(0.0f, 1.0f, 0.0f);
gl.glEnd();
I've tried almost everything, from enabling blending to changing to GL_REPLACE; however, I can't get it to work.
Edit:
// Texture. Have tested both gl.GL_RGBA and gl.GL_RGB8.
gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, (int)gl.GL_RGBA, imgWidth, imgHeight,
0, gl.GL_BGR_EXT, gl.GL_UNSIGNED_BYTE, bitmapdata.Scan0);
Check that your texture is in RGBA format, then enable blending and set the blend function:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And draw the texture. If your texture is not RGBA, then there is no alpha and blending won't do anything.
EDIT: Since you posted your code, I can spot a serious error:
glTexImage2D(gl.GL_TEXTURE_2D, 0, (int)gl.GL_RGBA, imgWidth, imgHeight, 0, gl.GL_BGR_EXT, gl.GL_UNSIGNED_BYTE, bitmapdata.Scan0);
You're telling GL that the texture has internalFormat RGBA, but the bitmap data is in BGR format, so no alpha comes from your texture data; GL then assumes alpha = 1.0.
To correct it, load your PNG with RGBA format and use GL_RGBA as internalFormat and format parameters for glTexImage2D.
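A sketch of the corrected upload in C-style GL (rgbaData stands for your pixel buffer after loading the PNG with its alpha channel; with the C# binding from the question, keep the gl. prefixes):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imgWidth, imgHeight,
             0, GL_RGBA, GL_UNSIGNED_BYTE, rgbaData);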
When I'm drawing, the transparent part of the bound texture is white.
That means your PNG parser converted the transparent regions to white. If you want to render transparent layers with OpenGL, you don't typically depend on the texture files to hold the transparency, but instead use glBlendFunc(). More information here:
http://www.opengl.org/resources/faq/technical/transparency.htm
Also, should you render to a frame buffer and copy the result into a texture, check that the frame buffer has alpha turned on. For example, when using osgViewer this can be achieved as follows (do this before calling setUpViewInWindow):
osg::DisplaySettings *pSet = myviewer.getDisplaySettings();
if (pSet == NULL)
    pSet = new osg::DisplaySettings();
pSet->setMinimumNumAlphaBits(8);
myviewer.setDisplaySettings(pSet);
and under Qt it should work with (from http://forum.openscenegraph.org/viewtopic.php?t=6411):
QGLFormat f;
f.setAlpha( true ); //enables alpha channel for this format
QGLFormat::setDefaultFormat( f ); //set it as default before instantiations
setupUi(this); //instantiates QGLWidget (ViewerQT)
Normally it is better to render directly into a frame buffer, but I came across this while preparing some legacy code, and in the beginning it was very hard to find.