How to display RGB24 video frame using OpenGL

My task is to render a set of 50 RGB frames using OpenGL's GLUT library.
Here is what I have tried: for a rotating 3D cube I have a set of vertices that I render to the window. But what should be done to render RGB frames instead? Below is the code I use to render my 3D cube:
#include <GL/glut.h>
GLfloat vertices[24]={-1.0,-1.0,-1.0,1.0,-1.0,-1.0,1.0,1.0,-1.0,-1.0,1.0,-1.0,-1.0,-1.0,1.0,1.0,-1.0,1.0,1.0,1.0,1.0,-1.0,1.0,1.0};
GLfloat colors[24]={0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,0.0,0.0,1.0,0.0,0.0,0.0,1.0,1.0,0.0,1.0,1.0,1.0,1.0,0.0,1.0,1.0}; // per-vertex RGB, valid range [0,1]
GLubyte cubeIndices[24]={0,3,2,1,2,3,7,6,0,4,7,3,1,2,6,5,4,5,6,7,0,1,5,4};
static GLfloat theta[3]={0,0,0};
static GLint axis=2;
void display()
{
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glRotatef(theta[0],1.0,0.0,0.0);
glRotatef(theta[1],0.0,1.0,0.0);
glRotatef(theta[2],0.0,0.0,1.0);
glDrawElements(GL_QUADS,24,GL_UNSIGNED_BYTE,cubeIndices);
glutSwapBuffers(); // the swap already flushes; no glFlush needed
}
void spinCube()
{
theta[axis]+=2.0;
if(theta[axis]>360.0)
theta[axis]-=360.0;
glutPostRedisplay(); // ask GLUT to schedule a redraw
}
void init()
{
glMatrixMode(GL_PROJECTION);
glOrtho(-2.0,2.0,-2.0,2.0,-10.0,10.0);
glMatrixMode(GL_MODELVIEW);
}
void mouse(int btn,int state, int x,int y)
{
if(btn==GLUT_LEFT_BUTTON&& state==GLUT_DOWN) axis=0;
if(btn==GLUT_MIDDLE_BUTTON&& state==GLUT_DOWN) axis=1;
if(btn==GLUT_RIGHT_BUTTON&& state==GLUT_DOWN) axis=2;
}
int main(int argc, char **argv)
{
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGB|GLUT_DEPTH);
glutInitWindowSize(500,500);
glutCreateWindow("Simple YUV Player");
init();
glutDisplayFunc(display);
glutIdleFunc(spinCube);
glutMouseFunc(mouse);
glEnable(GL_DEPTH_TEST);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3,GL_FLOAT,0,vertices);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3,GL_FLOAT,0,colors); // was a second glVertexPointer, which overwrote the vertex array
glutMainLoop();
}
Can anyone suggest an example or tutorial showing how to modify the above code so it displays RGB frames?

Once you have your RGB frame as raw data in memory, things are pretty straightforward. Create a texture using glGenTextures, bind it using glBindTexture, and upload the data via glTexImage2D or glTexSubImage2D. Then render a fullscreen quad (or whatever you like) with that texture. The benefit is that you could render multiple 'virtual' TVs in your scene just by rendering multiple quads with that same texture; imagine a TV store where the same video runs on dozens of TVs.
glDrawPixels might also work but it is much less versatile.
I don't know if uploading via a texture is the way to go (hardware-accelerated movie playback programs like VLC are most likely doing something far more advanced), but it should be a good start.
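For illustration, a minimal sketch of that approach (frameWidth, frameHeight and rgbData are placeholders for your own frame source, not names from the question):
// One-time setup: create the texture object and allocate storage for one frame.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // RGB24 rows are not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frameWidth, frameHeight, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL); // allocate only, no data yet
// Per frame: upload the new pixels and draw a textured quad.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight,
                GL_RGB, GL_UNSIGNED_BYTE, rgbData);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0,1); glVertex2f(-1,-1); // t flipped: video frames are usually stored top-down
glTexCoord2f(1,1); glVertex2f( 1,-1);
glTexCoord2f(1,0); glVertex2f( 1, 1);
glTexCoord2f(0,0); glVertex2f(-1, 1);
glEnd();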

As Marius already suggested, implement texture mapping first. It's rather straightforward; any texture-mapping tutorial will do.
Rendering video frames is not OpenGL's strong point, and you should avoid per-frame uploads where you can, since each one involves a client-to-host memory copy, which is costly (takes too much time); keeping every frame resident instead simply takes up too much memory. Anyway, if you really have to do it, just generate as many textures as you need with glGenTextures, fill them with the frame data via glTexImage2D, and then flip through them with a simple loop, one texture per rendered frame.
P.S. Judging by your application's name, "YUV Player", you may also need to convert the input data, since OpenGL mostly uses RGB, not YUV.
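On that note, a sketch of one common CPU-side YUV-to-RGB conversion using BT.601 coefficients (full-range luma assumed; clamp8 is a hypothetical helper, not part of any library):
// clamp8: clip a value into the [0,255] byte range
static unsigned char clamp8(double x){ return x<0.0 ? 0 : (x>255.0 ? 255 : (unsigned char)x); }
// Convert one pixel; y, u and v are 8-bit samples, chroma centered at 128.
int d = u - 128, e = v - 128;
unsigned char r = clamp8(y + 1.402*e);
unsigned char g = clamp8(y - 0.344*d - 0.714*e);
unsigned char b = clamp8(y + 1.772*d);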

Related

Drawing a monochrome 2D array in OpenGL

I need to use OpenGL for a very specific purpose. I've got an array of floats of size [SIZE][SIZE] (it's always square) that represents a 2D image. Drawing is just an extra here, since I've been doing it with third-party programs by outputting the array to a text file, but I would like to give the option of doing it in the program itself.
This array is constantly updated in a loop, as it represents values of a simulated field, the details of which are quite irrelevant; the important point is that the value of each element is a float between -1 and 1. Now, I would like to draw this array as a 2D image (in real time), every N steps of the main loop. I tried using the pixel drawing tool of X11 (I'm doing this on Linux), drawing the array by just looping over it and going pixel by pixel in a SIZE x SIZE window, but this was very slow and took much longer than the simulation itself. I've been looking into OpenGL, and from what I've read the ideal solution would be to reinterpret my array as a 2D texture and then print it on a quad. Apparently, to use bare OpenGL I would have to adapt my code to work within OpenGL's main drawing loop, which is a bit impractical, so if the same can be done in GLFW, I'm happy with it.
The image to draw is always square, and its orientation is completely irrelevant, it doesn't matter if it's drawn mirrored, upside down, transposed, etc, as it's supposed to be completely isotropic.
The main backbone of the program follows this scheme:
#include <iostream>
#include <GLFW/glfw3.h>
using namespace std;
int main(int argc, char** argv)
{
if (GFX) //GFX is a bool; only draw stuff if it's 1 (its value doesn't change)
{
//Initialize GLFW
}
float field[2*SIZE][SIZE] = {0}; //This is the array to print (only the first SIZE * SIZE components)
for (int i = 0; i < totalTime; i++)
{
for (int x=0; x < SIZE; x++)
{
for (int y=0; y < SIZE; y++)
{
//Each position of the array is updated here
}
}
if (GFX)
{
//The drawing should be done here
}
}
return 0;
}
I've tried some code snippets and modified some other samples I've found around, but haven't been able to make them work: either they have to call a GL loop that breaks my own simulation loop, or they just print a pixel in the centre.
So my main question is how to make a texture out of the first SIZE x SIZE components of field, and then draw it on a quad.
Thanks!
The simplest approach for a rookie is to use the old API without shaders. To make that work, you simply encode your data into a 1D linear array of floats in the range <0.0,1.0>, which can be done from <-1,+1> pretty fast on the CPU side with a single for loop like this:
for (i=0;i<size*size;i++) data[i]=0.5*(data[i]+1.0);
I do not use GLUT or code for your platform, so I will stick to just the rendering:
//---------------------------------------------------------------------------
const int size=512; // data resolution
const int size2=size*size;
float data[size2]; // your float size*size data
GLuint txrid=0; // GL texture ID (generated in init())
//---------------------------------------------------------------------------
void init() // this must be called once (after GL is initialized)
{
int i;
// generate float data (standard C rand() used here so it compiles anywhere)
srand((unsigned)time(NULL));
for (i=0;i<size2;i++) data[i]=(float)rand()/(float)RAND_MAX;
// create texture
glGenTextures(1,&txrid); // a texture name must be generated before binding
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,txrid);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_MODULATE);
glDisable(GL_TEXTURE_2D);
}
//---------------------------------------------------------------------------
void gl_exit() // this must be called once (before GL is uninitialized)
{
// release texture
glDeleteTextures(1,&txrid);
}
//---------------------------------------------------------------------------
void gl_draw()
{
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// bind texture
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,txrid);
// copy your actual data into it
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, size, size, 0, GL_LUMINANCE, GL_FLOAT, data);
// render single textured QUAD
glColor3f(1.0,1.0,1.0);
glBegin(GL_QUADS);
glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
glEnd();
// unbind texture (so it does not mess with other rendering)
glBindTexture(GL_TEXTURE_2D,0);
glDisable(GL_TEXTURE_2D);
glFlush();
SwapBuffers(hdc); // ignore this line; GLUT does the buffer swap on its own
}
//---------------------------------------------------------------------------
Here is a preview:
In order to make this work you need to call init() at the start of your app, after GLUT creates the GL context, and gl_exit() at the app's end, before GLUT closes the GL context. gl_draw() renders your data, so it must be called in GLUT's drawing event.
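In GLUT terms that wiring could look roughly like this (a sketch only; window size and title are arbitrary):
int main(int argc,char **argv)
{
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGB);
glutInitWindowSize(512,512);
glutCreateWindow("field view"); // the GL context exists from here on
init(); // create the texture once
glutDisplayFunc(gl_draw); // renders the textured quad each frame
glutMainLoop(); // note: classic GLUT never returns from here,
gl_exit(); // with freeglut this runs once the loop ends
return 0;
}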
In case you do not want to do the range conversion to <0,1> on the CPU side, you can move it to shaders (a very simple vertex and fragment shader), but I get the feeling you're a rookie and shaders would simply be too much to start with. If you really want to go that way, see:
complete GL+GLSL+VAO/VBO C++ example
It also covers the GL initialization without GLUT but on Windows ...
Now some notes on the program above:
I used the GL_LUMINANCE32F_ARB texture format extension.
It's a 32-bit floating-point texture format that is not clamped, so your data stays as-is. It should be present on all of today's graphics hardware. I did this to ease the later transition to shaders, where you can operate on your raw data directly...
size
In the original GL specification the texture size should be a power of 2, so 16, 32, 64, 128, 256, 512, ... If not, you need the rectangle texture extension, but that has been native in graphics hardware for years now, so no change is needed. Still, on Linux and Mac there may be problems with the GL implementation, so if something does not work, try a power-of-2 size (just in case)...
Also do not go too crazy with the size, as graphics cards have limits; 2048 is usually a safe limit for low-end hardware. If you need more, make a mosaic of more quads/textures.
GL_CLAMP_TO_EDGE
This is also an extension (now native to hardware), so your texture coordinates go from 0 to 1 instead of from 0+pixel/2 to 1-pixel/2...
However, none of these are GL 1.0 features, so you need to add extensions to your app (if GLUT, or whatever you use, does not do so already). All of these are just tokens/constants, not function calls, so if the compiler complains it should be enough to:
#include <gl\glext.h>
after gl.h is included, or add the defines directly instead:
#define GL_CLAMP_TO_EDGE 0x812F
#define GL_LUMINANCE32F_ARB 0x8818
By the way, your code does not look like a GLUT app (but I might be wrong, as I do not use it); see this for example:
simple GLUT app example
Your header suggests GLFW3, which is something entirely different from GLUT (unless it's derived from it), so maybe you should edit the tags and the OP to match what you really have/use.
Now the shaders:
If you generate your data in the <-1,+1> range:
for (i=0;i<size2;i++) data[i]=(2.0*(float)rand()/(float)RAND_MAX)-1.0;
And use these shaders:
Vertex:
// Vertex
#version 400 core
layout(location = 0) in vec2 pos; // position
layout(location = 8) in vec2 tex; // texture
out vec2 vpos;
out vec2 vtex;
void main()
{
vpos=pos;
vtex=tex;
gl_Position=vec4(pos,0.0,1.0);
}
Fragment:
// Fragment
#version 400 core
uniform sampler2D txr;
in vec2 vpos; // position
in vec2 vtex; // texture
out vec4 col;
void main()
{
vec4 c;
c=texture(txr,vtex);
c=(c+1.0)*0.5;
col=c;
}
Then the result is the same (apart from the faster conversion on the GPU side). However, you need to convert the GL_QUADS into a VAO/VBO (unless an nVidia card is used, but even then you definitely should use a VBO/VAO).
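A minimal sketch of that conversion for the fullscreen quad (attribute locations 0 and 8 match the vertex shader above; two triangles stand in for the deprecated GL_QUADS):
// Interleaved per-vertex data: x, y, s, t - two triangles forming one quad
const GLfloat quad[]={
-1.0,-1.0, 0.0,0.0,  +1.0,-1.0, 1.0,0.0,  +1.0,+1.0, 1.0,1.0,
-1.0,-1.0, 0.0,0.0,  +1.0,+1.0, 1.0,1.0,  -1.0,+1.0, 0.0,1.0 };
GLuint vao,vbo;
glGenVertexArrays(1,&vao);
glGenBuffers(1,&vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER,vbo);
glBufferData(GL_ARRAY_BUFFER,sizeof(quad),quad,GL_STATIC_DRAW);
glEnableVertexAttribArray(0); // pos
glVertexAttribPointer(0,2,GL_FLOAT,GL_FALSE,4*sizeof(GLfloat),(void*)0);
glEnableVertexAttribArray(8); // tex
glVertexAttribPointer(8,2,GL_FLOAT,GL_FALSE,4*sizeof(GLfloat),(void*)(2*sizeof(GLfloat)));
// per frame: glUseProgram(prog); glBindVertexArray(vao); glDrawArrays(GL_TRIANGLES,0,6);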

OpenGL programming in C++: anti-aliasing

I am quite a newbie in OpenGL and all my knowledge comes from tutorials on the Internet. For now I can program a rotating cube lit by a point source. My new goal is to implement anti-aliasing, possibly without any magical techniques like a hypothetical glEnable(Smooth_Lines) function. The method I'm currently using does not work.
My first attempt was to try the glAccum function and implement jittered anti-aliasing; I don't know whether this is impossible on my Radeon graphics card or I just wrote something wrong.
My second attempt is to use glGenFramebuffers (and the rest of that family). But now (after ~10 hours of research) I just don't have any strength left to continue. Can you tell me where I should implement the anti-aliasing process, or which functions I should use? Here is a cut-down version of my whole code:
class Window;
static Window* ptr;
class Window{
GLuint vbo[4],program;
GLint att[5],mvp,mvp2;
void init_res(){
a0=glutGet(GLUT_ELAPSED_TIME);
GLfloat cube[]= ... something
GLfloat colors[]= ... still something
GLushort elements[]= ... oh you can just guess how it should look :)
//now I am binding buffers and send cube data to shaders
program=glCreateProgram();
//little more code here
}
void Draw(){
// here is subfunction for drawing all cube.
}
void onDisplay(){
//here is main function which is send to glutDisplayFunc
Draw();
//should I add something here?
glutSwapBuffers();
}
Window(int s_w,int s_h){
glutInit(&ac, av);
glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH);
glutInitWindowSize(s_w, s_h);
glutCreateWindow("My Rotating Cube");
glewInit();
init_res();
ptr=this;
glutDisplayFunc(Display);
glutReshapeFunc(Reshape);
glutIdleFunc(Idle);
glutKeyboardFunc(Key);
glEnable(GL_LINE_SMOOTH_HINT);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
glEnable(GL_BLEND);
glHint(GL_POLYGON_SMOOTH_HINT,GL_NICEST);
glEnable(GL_DEPTH_TEST);
glEnable(GL_MULTISAMPLE);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
//here is a lot of garbage - you can call it 'voodoo programming'
glutMainLoop();
free_resources();
}
};
int main(int argc, char** argv){
ac=argc;
av=argv;
Window *okno;
okno = new Window(800,600);
}
For now, everything in this program works (well, before the cuts, of course :P ). I did not attach the shaders; they are as simple as possible (with lighting and added cube borders).
Any ideas how to implement anti-aliasing here?
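For what it's worth, glEnable(GL_MULTISAMPLE) (already present in the code above) only takes effect if the window's framebuffer was created with multisample buffers; with GLUT that means requesting GLUT_MULTISAMPLE up front. A minimal sketch:
// Request a multisampled framebuffer before creating the window.
glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH|GLUT_MULTISAMPLE);
glutCreateWindow("My Rotating Cube");
glEnable(GL_MULTISAMPLE); // MSAA then applies to everything that is rasterized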

wxWidgets OpenGL shaders, trouble getting anything to draw

I've been trying to incorporate shaders and OpenGL into a wxWidgets program, using the links below:
http://nehe.gamedev.net/article/glsl_an_introduction/25007/
http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/
Now, in a test program, I've been trying to use the shaders provided by the lighthouse3d tutorial and recreate the output (a blue teapot spinning slowly on a white background). I can't seem to get anything to draw, though, and all I can see is a black screen. My code so far is below (I'm going to ignore most of the shaders initially, as I'm 99% sure they're fine):
void BasicGLPane::render( wxPaintEvent& evt )
{
//wxGLCanvas::SetCurrent(*m_context);
wxPaintDC(this);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//prepare2DViewport(0,0,getWidth()/2, getHeight());
glLoadIdentity();
gluLookAt(0.0,0.0,5.0,
0.0,0.0,-1.0,
0.0f,1.0f,0.0f);
glLightfv(GL_LIGHT0, GL_POSITION, lpos);
//glRotatef(a,0,1,1);
glutSolidTeapot(1);
glFlush();
//a+=0.1;
SwapBuffers();
}
void BasicGLPane::InitializeGLEW()
{
//prepare2DViewport(0,0,getWidth(), getHeight());
// The current canvas has to be set before GLEW can be initialized.
wxGLCanvas::SetCurrent(*m_context);
GLenum err = glewInit();
// If Glew doesn't initialize correctly.
if(GLEW_OK != err)
{
std::cerr << "Error:" << glewGetString(err) << std::endl;
const GLubyte* String = glewGetErrorString(err);
wxMessageBox("GLEW is not initialized");
}
}
BasicGLPane::BasicGLPane(wxFrame* parent, int* args) :
wxGLCanvas(parent, wxID_ANY, args, wxDefaultPosition, wxDefaultSize, wxFULL_REPAINT_ON_RESIZE)
{
m_context = new wxGLContext(this);
// To avoid flashing on MSW
SetBackgroundStyle(wxBG_STYLE_CUSTOM);
}
I've had some thoughts as to why I'm not getting any output. One thought is that it is something to do with m_context: I have to set the current context for wxWidgets before I can run GLEW. There are also a number of properties that the tutorial initializes whose equivalents I'm not calling in my wxWidgets version, and I'm wondering if I should. These are:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(100,100);
glutInitWindowSize(320,320);
glutCreateWindow("MM 2004-05");
glutDisplayFunc(renderScene);
glutIdleFunc(renderScene);
glutReshapeFunc(changeSize);
glutKeyboardFunc(processNormalKeys);
glEnable(GL_DEPTH_TEST);
glClearColor(1.0,1.0,1.0,1.0);
glEnable(GL_CULL_FACE);
But I'm quite keen to avoid using glut and have managed to avoid it up until now. The only reason I've previously added it is to try and replicate the tutorial's behaviour.
Edit:
I'm going to add a bit more as I have noticed one or two bits of odd behaviour. If I call this function in my draw:
void BasicGLPane::prepare2DViewport(int topleft_x, int topleft_y, int bottomrigth_x, int bottomrigth_y)
{
glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // white background
glEnable(GL_TEXTURE_2D); // textures
glEnable(GL_COLOR_MATERIAL);
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glViewport(topleft_x, topleft_y, bottomrigth_x-topleft_x, bottomrigth_y-topleft_y);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(topleft_x, bottomrigth_x, bottomrigth_y, topleft_y);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
I can get the background to change colour when I change the window size. I should also mention that it is NOT refreshing every frame; it only draws one frame and then won't call the render function again until I change the window size.
Your code looks good so far. One thing you find in a lot of tutorials, but which is bad practice, is that there's apparently some initialization happening. This is not the case: OpenGL is not initialized, it's a state machine, and you're supposed to set state when you need it. The lines
glEnable(GL_DEPTH_TEST);
glClearColor(1.0,1.0,1.0,1.0);
glEnable(GL_CULL_FACE);
are perfectly happy in the drawing function. You also need to set up a projection. In tutorials you often find it set in the window-resize handler. Please don't fall into this bad habit: projection and viewport are drawing state, so set them in the drawing function.
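A rough sketch of that advice applied to the paint handler from the question (the gluPerspective parameters are placeholder values):
void BasicGLPane::render(wxPaintEvent& evt)
{
wxPaintDC dc(this); // named object, so it lives for the whole handler
wxGLCanvas::SetCurrent(*m_context);
// Viewport and projection are drawing state: set them every frame.
int w, h;
GetClientSize(&w, &h);
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, (double)w / (double)h, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the scene here ...
SwapBuffers();
}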
If you're using OpenGL-3 (core profile) or later, you must supply at least a vertex and a fragment shader. In the older versions each shader stage is optional, and there are built-in variables to provide common ground for communication between the fixed-function and programmable pipelines. However, I strongly advise against mixed operation: always use shaders, and use both a vertex and a fragment shader. In the long term they make things so much easier.
Turns out I didn't need the gluLookAt in my render.

Efficient integration of Qt and OpenCV

I am working on an interactive application which needs to read and manipulate several very large images at once (25 images at a time, roughly 350 MB total). OpenCV is quite speedy and handles the algorithms with relative ease, but drawing the images with Qt is proving to be a problem. Here are two less-than-ideal solutions I have tried.
Solution 1 (too slow)
Every time you need to draw a different OpenCV image, convert it to a QImage and draw that. The conversion, unfortunately, takes a while, and we cannot switch between images at interactive speeds.
Solution 2 (too memory-intensive)
Maintain two stacks of images, one for OpenCV and one for Qt. Use the appropriate one at the appropriate time.
I have direct access to the OpenCV pixel data. I know the width and height of the image, and I know that pixels are 3-byte RGB values. It seems like it should be possible to draw the OpenCV image quickly without copying it to a QImage container that (as far as I can tell) just contains a duplicate of the data.
Where do I need to look to get this kind of capability out of Qt?
I don't know if this might still be useful to you now, after 3 months, but I am building the same kind of application, where I have to manipulate a stream of images using OpenCV and display it in a Qt interface. After googling around quite a bit, I came across a very slick solution: use OpenGL's glDrawPixels to draw raw image data directly on the Qt interface. Best part: you don't have to write any extra conversion code, just the basic OpenGL code for setting up a viewport and coordinates. Check out the code below, which has a function that takes an IplImage* pointer and uses that data to draw the image. You might need to tweak the parameters (especially the WIDTH and HEIGHT variables) a bit to display an image with a specific size.
And yeah, I don't know what build system you are using. I used CMake and had to set up dependencies for OpenGL, although I am using Qt's OpenGL libraries.
I have implemented a class QIplImage which derives from QGLWidget and overridden its paintGL method to draw the pixel data onto the frame.
//File qiplimage.h
class QIplImage : public QGLWidget
{
Q_OBJECT
public:
QIplImage(QWidget *parent = 0,char *name=0);
~QIplImage();
void paintGL();
void initializeGL();
void resizeGL(int,int);
bool drawing;
public slots:
void setImage(IplImage);
private:
Ui::QIplImage ui;
IplImage* original;
GLenum format;
GLuint texture;
QColor bgColor;
char* name;
bool hidden;
int startX,startY,endX,endY;
QList<QPointF*> slopes;
QWidget* parent;
int mouseX,mouseY;
};
//End of file qiplimage.h
//file qiplimage.cpp
#include "qiplimage.h"
#include <Globals.h>
QIplImage::QIplImage(QWidget *parent) :
QGLWidget(parent)
{
}
QIplImage::QIplImage(QWidget *parent,char* name): QGLWidget(parent)
{
ui.setupUi(this);
//This is required if you need to transmit IplImage over
// signals and slots. (That's what I am doing in my application.)
qRegisterMetaType<IplImage>("IplImage");
resize(384,288);
this->name=name;
this->parent=parent;
hidden=false;
bgColor= QColor::fromRgb(0xe0,0xdf,0xe0);
original=cvCreateImage(cvSize(this->width(),this->height()),IPL_DEPTH_8U,3);
cvZero(original);
switch(original->nChannels) {
case 1:
format = GL_LUMINANCE;
break;
case 2:
format = GL_LUMINANCE_ALPHA;
break;
case 3:
format = GL_BGR;
break;
default:
return;
}
drawing=false;
setMouseTracking(true);
mouseX=0;mouseY=0;
initializeGL();
}
void QIplImage::initializeGL()
{
qglClearColor(bgColor);
//glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,this->width(),this->height(),0.0f,0.0f,1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glGenTextures(1,&texture); // texture is a single GLuint, so generate exactly one name
glBindTexture(GL_TEXTURE_2D,texture);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glBindTexture(GL_TEXTURE_2D,texture);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,this->width(),this->height(),0,GL_BGR,GL_UNSIGNED_BYTE,NULL);
glDisable(GL_TEXTURE_2D);
}
void QIplImage::setImage(IplImage image){
original=ℑ
//cvShowImage(name,original);
updateGL();
}
void QIplImage::paintGL (){
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
if(!hidden){
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f,this->width(),this->height(),0.0f,0.0f,1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,texture);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,original->width,original->height,0,GL_BGR_EXT,GL_UNSIGNED_BYTE,original->imageData);
glBegin(GL_QUADS);
glTexCoord2i(0,1); glVertex2i(0,this->height());
glTexCoord2i(0,0); glVertex2i(0,0);
glTexCoord2i(1,0); glVertex2i(this->width(),0);
glTexCoord2i(1,1); glVertex2i(this->width(),this->height());
glEnd();
glFlush();
}
}
void QIplImage::resizeGL(int width,int height){
glViewport(0,0,this->width(),this->height());
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f,this->width(),this->height(),0.0f,0.0f,1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}
Hope that helps.
You can share the data between QImage and OpenCV; both of them have a constructor which uses existing data, supplied by a pointer:
cv::Mat(int _rows, int _cols, int _type, void* _data, size_t _step=AUTO_STEP)
QImage ( uchar * data, int width, int height, int bytesPerLine, Format format)
There might be an issue with padding if the rows don't end up being multiples of 4 bytes, but I would expect the padding to align on both types with the same pixel size, at least on the same hardware.
One issue is that OpenCV uses BGR by default, which isn't very optimal for QImage (or any other display). I'm not sure, though, that QImage::Format_ARGB32_Premultiplied is still that much quicker on Qt versions that use accelerated OpenGL for rendering QImage.
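Putting the two constructors together, a sketch of the zero-copy wrap (modern cv::Mat API; the QImage does not take ownership, so the Mat must outlive it):
#include <opencv2/core/core.hpp>
#include <QImage>
// Wrap an 8-bit, 3-channel cv::Mat in a QImage without copying the pixels.
QImage wrapMat(const cv::Mat& m)
{
// Format_RGB888 expects RGB byte order; OpenCV stores BGR, so either
// cvtColor to RGB beforehand (one copy) or call rgbSwapped() when displaying.
return QImage(m.data, m.cols, m.rows,
static_cast<int>(m.step), QImage::Format_RGB888);
}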
An alternative is to use OpenCV, then copy the resulting data directly to an OpenGL texture, and then use QGLWidget to display the image without another copy.

read pixels from back buffer

I want to read the pixels from the back buffer, but all I get so far is a black screen (the clear color).
The thing is, I don't need a GLUT window to draw to. Once I have the pixel information, I pass it to another program, which draws the image for me.
My init function looks like this:
// No main function, so no real argv argc
char fakeParam[] = "nothing";
char *fakeargv[] = { fakeParam, NULL };
int fakeargc = 1;
glutInit( &fakeargc, fakeargv );
GLenum err = glewInit();
if (GLEW_OK != err)
{
MessageBoxA(NULL, "Failed to initialize OpenGL", "ERROR", NULL);
}
else
{
glEnable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);
// Not sure if this call is needed, since I don't use a GLUT window to render to.
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
}
Then in my render function i do:
void DisplayFunc(void)
{
/* Clear the buffer, clear the matrix */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
// TEAPOT
glTranslatef(0.0f, 0.0f, -5.0f); // Translate back 5 units
glRotatef(rotation_degree, 1.0f, 1.0f, 0.0f); // Rotate according to our rotation_degree value
glFrontFace(GL_CW);
glutSolidTeapot(1.0f); // Render a teapot
glFrontFace(GL_CCW);
glReadBuffer(GL_BACK);
glReadPixels(0, 0, (GLsizei)1024, (GLsizei)768, GL_RGB, GL_UNSIGNED_BYTE, pixels);
int r = glGetError();
}
This is basically all I do. At the end of the last function I try to read back all the pixels, but the output is just a black image, and glGetError() doesn't report any errors.
Does anyone have any idea what the problem could be?
I want to read the pixels from the back buffer, but all I get so far is a black screen (the clear color).
The thing is, I don't need a GLUT window to draw to. Once I have the pixel information, I pass it to another program, which draws the image for me.
It doesn't work like that. The back buffer is not some kind of off-screen rendering area; it's part of the on-screen window. Actually, the whole double-buffer concept only makes sense for on-screen windows. Each pixel of a double-buffered window has two color values but only one depth, stencil, etc.; upon buffer swap, just the pointers to the back and front pixel planes are exchanged. But because we're still talking about a window, all fragments go through the pixel ownership test when rasterizing, i.e. they are checked for whether they are actually visible on screen. If not, they're not rendered.
But your problems go further: you don't even create a window, so you don't have an OpenGL context at all. Your OpenGL calls have no effect whatsoever, and glReadPixels doesn't return anything, because there's nothing to read from.
The bad news is that the only way to get a context with GLUT is by creating a window. The good news is that you don't have to use GLUT. People, why don't you get this: GLUT is not part of OpenGL; it's a quick and dirty framework for writing small tutorials, nothing more.
What you want is either:
not a window, but a PBuffer, i.e. an off-screen drawable that doesn't go through pixel ownership tests,
or
a hidden window with an OpenGL context created on it, and in this context a framebuffer object (FBO) as an off-screen rendering target (a sketch follows below).
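A minimal sketch of that second option using a framebuffer object (GL 3.0 / ARB_framebuffer_object entry points; creating the hidden window and context is platform-specific and omitted, and pixels is assumed to be a large enough buffer):
// Create a 1024x768 off-screen render target: color + depth renderbuffers.
GLuint fbo, colorRb, depthRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, 1024, 768);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 1024, 768);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
; /* handle the error */
// ... render the scene into the FBO here ...
// Read back: FBO pixels are not subject to pixel ownership tests.
glReadPixels(0, 0, 1024, 768, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);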
Try calling glFlush before glReadPixels.
Also, where do you set the size of your window?