Stitching multiple textures/frames together in OpenGL using a Kinect - opengl

I came across the following situation:
I have a Kinect camera and I keep taking frames (but they are stored only when the user presses a key).
I am using the freenect library in order to retrieve the depth and the color of the frame (I am not interested in skeleton tracking or anything like that).
For a single frame I am using the glpclview example that comes with the freenect library.
After retrieving the spatial data from the Kinect sensor, the glpclview example draws the current frame like this:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_SHORT, 0, xyz);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(3, GL_SHORT, 0, xyz); // texture coordinates reuse the xyz array
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, gl_rgb_tex);
glTexImage2D(GL_TEXTURE_2D, 0, 3, 640, 480, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb);
glPointSize(2.0f);
glDrawElements(GL_POINTS, 640*480, GL_UNSIGNED_INT, indices);
where
static unsigned int indices[480][640];
static short xyz[480][640][3];
char *rgb = 0;
short *depth = 0;
where:
rgb is the color information for the current frame
depth is the depth information for the current frame
xyz is constructed as:
xyz[i][j][0] = j
xyz[i][j][1] = i
xyz[i][j][2] = depth[i*640+j]
indices is (I guess) just an index array that maps into the rgb/depth data and is constructed as:
indices[i][j] = i*640+j
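Putting those together, the per-frame setup boils down to something like this (a minimal sketch of my reading of the glpclview example, not its exact code):

// Build vertex positions and the index array for one 640x480 frame.
// xyz holds (column, row, depth) per pixel; indices just enumerates the pixels.
for (int i = 0; i < 480; i++) {
    for (int j = 0; j < 640; j++) {
        xyz[i][j][0] = j;                 // x = column
        xyz[i][j][1] = i;                 // y = row
        xyz[i][j][2] = depth[i*640 + j];  // z = raw Kinect depth sample
        indices[i][j] = i*640 + j;        // one point per pixel
    }
}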
So far, so good, but now I need to render more than just one frame (some of them rotated and translated by a certain angle/offset). How can I do this?
I've tried to increase the size of the arrays and keep reallocating memory for each new frame, but how can I render them?
Should I change this current line to something else?
glTexImage2D(GL_TEXTURE_2D, 0, 3, 640, 480, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb)
If so, to what values should I change 640 and 480, since now xyz and rgb are contiguous buffers of 640x480x(number of frames)?
To get a better idea, I am trying to get something similar to this in the end (except the robot :D ).
If someone has a better idea or any hint on how I should approach this problem, please let me know.

It isn't as simple as allocating a bigger array.
If you want to stitch together multiple point clouds to make a bigger map, you should look into SLAM algorithms (that is what is running in the video you linked to). You can find many implementations at http://openslam.org. You might also look into ICP (Iterative Closest Point) algorithms and KinectFusion from Microsoft (and the open-source KinFu implementation from PCL).
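That said, once you have (or estimate) a rigid transform per frame, you don't need one giant vertex array: keep one xyz/rgb buffer and one texture per captured frame and draw each cloud under its own modelview transform. A minimal fixed-function sketch, where frames[], tex[], angleY[] and offset[] are hypothetical per-frame storage for the captured data and its estimated pose:

// Draw every captured frame's point cloud with its own pose.
for (int f = 0; f < numFrames; f++) {
    glPushMatrix();
    glTranslatef(offset[f].x, offset[f].y, offset[f].z); // per-frame translation
    glRotatef(angleY[f], 0.0f, 1.0f, 0.0f);              // per-frame rotation about Y
    glBindTexture(GL_TEXTURE_2D, tex[f]);                // RGB texture of frame f
    glVertexPointer(3, GL_SHORT, 0, frames[f].xyz);
    glTexCoordPointer(3, GL_SHORT, 0, frames[f].xyz);
    glDrawElements(GL_POINTS, 640*480, GL_UNSIGNED_INT, indices);
    glPopMatrix();
}

The hard part is obtaining those transforms in the first place; that is exactly what ICP/SLAM estimate.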

Related

draw part of image with openGL glDrawPixels

I have a function to draw an image in an OpenGL context (used in this case to render to a texture). It works for the whole image, but it should also be able to render only a rectangular part. Rendering a part works if that part has the same width as the image; for parts that are narrower than the image data it fails.
Here is the function (reduced to only the part for smaller widths; no cleanup, etc.):
void drawImage(uint32 imageWidth, uint32 imageHeight, uint8* pData,
               uint32 offX, uint32 partWidth) // (offX+partWidth<=imageWidth)
{
    uint8* p(pData);
    if (partWidth != imageWidth)
    {
        glPixelStorei(GL_PACK_ROW_LENGTH, imageWidth);
        p = calcFrom(offX, pData); // point at the first pixel of the part in the row
    }
    glDrawPixels(partWidth, imageHeight, GL_BGRA, GL_UNSIGNED_BYTE, p);
}
As said: if (partWidth==imageWidth) the rendering works fine. For some combinations of partWidth and imageWidth it also works, but that seems to be a very special case, mainly with very small images and some particular partWidths.
I found no examples for this, but from the docs I think it should be possible to do it somehow like this. Did I misunderstand the whole thing, or have I just overlooked a small pitfall?
Thanks,
Moritz
P.S.: it's running on Windows.
[Edited:] P.P.S.: by now I have tried to do it as a texture. If I replace glDrawPixels with glTexImage2D I have the same problem... (I could upload the whole image and render only a part, but for small parts of big pictures that might not be the best way...)
AAArrrghh!!
GL_UNPACK_ROW_LENGTH not GL_PACK_ROW_LENGTH!!!!
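For the record, the corrected branch would look roughly like this (a sketch; using GL_UNPACK_SKIP_PIXELS is an assumed alternative to computing the row offset pointer by hand):

if (partWidth != imageWidth)
{
    // UNPACK state affects pixel data read FROM client memory
    // (glDrawPixels, glTexImage2D); PACK affects data written TO
    // client memory (glReadPixels).
    glPixelStorei(GL_UNPACK_ROW_LENGTH, imageWidth);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, offX); // skip offX pixels per row
}
glDrawPixels(partWidth, imageHeight, GL_BGRA, GL_UNSIGNED_BYTE, pData);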

Conversion to Cinder gl::Texture

I am trying to create a gl::Texture object using image data as a BYTE * with the parameters below.
FreeGLUT - I use this to create a 2D texture and bind it to a quad:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, textureWidth, textureHeight, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, data);
glBindTexture(GL_TEXTURE_2D, 0); etc etc
This works fine; however, I cannot find a way to create a gl::Texture object in Cinder.
text = gl::Texture(loadImage(loadAsset("text.jpg"))); // works with image files
text = gl::Texture(data, GL_RGBA8, 640, 480); // BYTE * data - this gives me a grey screen
The constructor below seemed the most likely; however, I have no idea what to do with the Format format = Format() parameter:
Texture (const unsigned char *data, int dataFormat, int aWidth, int aHeight, Format format=Format())
I really have no understanding of how this works and can't find any better tutorials online. Thanks.
Apparently dataFormat is the format parameter to glTexImage2D and should be something like GL_RGBA or GL_RGB (or GL_BGRA_EXT as in the example you provided).
What you want is the "internal format", which is set via the Format struct. So you set it like:
gl::Texture::Format fmt;
fmt.setInternalFormat(GL_RGBA8);
text = gl::Texture(data, GL_RGBA, 640, 480, fmt);
The GL_UNSIGNED_BYTE data type is pre-set in that constructor of the Cinder wrapper.
Sometimes it can be easier to just look at the code instead of trying to find 100%-applicable documentation.
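Put together with the BGRA data from the original glTexImage2D call, the whole thing would look roughly like this (a sketch against the classic pre-0.9 Cinder API, untested; if your bytes really are BGRA as in the FreeGLUT snippet, pass GL_BGRA_EXT as the dataFormat):

gl::Texture::Format fmt;
fmt.setInternalFormat(GL_RGBA8);       // GPU-side storage format

// dataFormat describes the client-memory layout; GL_BGRA_EXT matches
// the format used in the original FreeGLUT version.
text = gl::Texture(data, GL_BGRA_EXT, 640, 480, fmt);

// later, in draw():
gl::draw(text, getWindowBounds());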

opengl glReadPixels

I want to get the color of a pixel at the mouse position. I use glReadPixels, but it doesn't work:
POINT pt;
GetCursorPos(&pt);
unsigned char pixel[3];
glReadPixels(pt.x, pt.y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
After this code runs, the value of pixel is: 'Ì'
Any idea?
204 is CC in hex representation. This value is often used to fill uninitialized memory. If you initialize pixel with zeros, for example
unsigned char pixel[3] = {0};
then 99.(9)% of the time you'll see zero after the call to glReadPixels. According to the glReadPixels documentation:
If an error is generated, no change is made to the contents of data.
That is, your data in pixel was not changed because of an error. Follow @OlegTitov's fourth piece of advice (check what glGetError() tells you).
Upd: If you want to get a pixel value from the main screen using only glReadPixels, and you didn't create any GL framebuffer, I'm not sure, but I think you'll fail. I'll repeat - I'm not sure, but I think glReadPixels can read pixel values only from framebuffers that were previously created with GL functions.
When outputting a char to the console, it will be printed as a symbol, not as a number. This is done so that C-style strings can be printed the normal way. To print the integer value, first cast the variable to an integer type.
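For instance (a trivial sketch):

// Cast to int so the numeric value is printed instead of a character.
printf("R=%d G=%d B=%d\n", (int)pixel[0], (int)pixel[1], (int)pixel[2]);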
When reading from the screen: note that OpenGL's coordinate origin is the bottom-left corner, while window systems use the upper-left corner, so you need to convert from one coordinate system to the other:
glReadPixels(pt.x, window_height - pt.y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
If you experience further problems, make sure the correct pixel buffer is bound as the read buffer. For window output:
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
If still have some problems start checking your code with glGetError();
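Putting these points together, a read that covers the usual pitfalls might look like this (a sketch; window_height is whatever your windowing code reports):

// Read the pixel under the cursor from the default (window) framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);   // window is the read target
unsigned char pixel[3] = {0};                // initialized, so failures are visible
glReadPixels(pt.x, window_height - pt.y - 1, // -1 gives the exact row flip
             1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
GLenum err = glGetError();                   // GL_NO_ERROR (0) means the read succeeded
printf("err=0x%x R=%d G=%d B=%d\n", err, (int)pixel[0], (int)pixel[1], (int)pixel[2]);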

Faster encoding of realtime 3d graphics with opengl and x264

I am working on a system that renders 3D graphics on a server and sends them to a client as compressed video as soon as they are rendered.
I already have the code working, but I feel it could be much faster (and it is already a bottleneck in the system).
Here is what I am doing:
First I grab the framebuffer:
glReadBuffer( GL_FRONT );
glReadPixels( 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer );
Then I flip the framebuffer, because there is a weird bug with sws_scale (which I am using for colorspace conversion) that flips the image vertically when I convert. I am flipping in advance; nothing fancy.
void VerticalFlip(int width, int height, byte* pixelData, int bitsPerPixel)
{
    // NB: despite the name, bitsPerPixel is used here as bytes per pixel
    byte* temp = new byte[width*bitsPerPixel];
    height--; // remember: the row index ends at height-1
    for (int y = 0; y < (height+1)/2; y++)
    {
        memcpy(temp, &pixelData[y*width*bitsPerPixel], width*bitsPerPixel);
        memcpy(&pixelData[y*width*bitsPerPixel], &pixelData[(height-y)*width*bitsPerPixel], width*bitsPerPixel);
        memcpy(&pixelData[(height-y)*width*bitsPerPixel], temp, width*bitsPerPixel);
    }
    delete[] temp;
}
Then I convert it to YUV420p:
convertCtx = sws_getContext(width, height, PIX_FMT_RGB24, width, height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
uint8_t *src[3]= {buffer, NULL, NULL};
sws_scale(convertCtx, src, &srcstride, 0, height, pic_in.img.plane, pic_in.img.i_stride);
Then I pretty much just call the x264 encoder. I am already using the zerolatency preset.
int frame_size = x264_encoder_encode(_encoder, &nals, &i_nals, _inputPicture, &pic_out);
My guess is that there should be a faster way to do this: capturing the frame and converting it to YUV420p. It would be nice to do the conversion to YUV420p on the GPU and only copy the result to system memory afterwards, and hopefully there is a way to do the color conversion without needing the flip.
If there is no better way, at least this question may help someone trying to do this, to do it the same way I did.
First, use asynchronous readback with PBOs (pixel buffer objects). Using two PBOs speeds up the read because they work asynchronously, without stalling the pipeline the way a direct glReadPixels call does. In my app I got an 80% performance boost when I switched to PBOs.
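A minimal double-buffered readback sketch (my own illustration, not the poster's code; the two PBOs alternate roles every frame):

// Setup: two PBOs, each sized for one RGB frame.
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; i++) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width*height*3, NULL, GL_STREAM_READ);
}

// Per frame: start an async read into one PBO, then map the OTHER
// PBO, which already holds the previous frame's pixels.
int index = frameNumber % 2;
int next  = (frameNumber + 1) % 2;

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0); // returns immediately

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[next]);
void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (src) {
    // hand 'src' to the flip / sws_scale / x264 stage here
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);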
Additionally, on some GPUs glGetTexImage() works faster than glReadPixels(), so try it out.
But if you really want to take the video encoding to the next level, you can do it via CUDA using the NVIDIA codec library. I recently asked the same question, so this may be helpful.
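As for the vertical flip: libswscale can usually do that for you, so the memcpy pass can be dropped entirely. The common trick is to point the source pointer at the last row and pass a negative stride (a sketch, untested against the poster's exact setup):

// Feed the RGB frame to sws_scale bottom-up instead of flipping it first:
// start at the last row and walk backwards with a negative stride.
uint8_t* src[3]  = { buffer + (height - 1) * width * 3, NULL, NULL };
int srcstride[3] = { -(width * 3), 0, 0 };  // RGB24: 3 bytes per pixel
sws_scale(convertCtx, src, srcstride, 0, height, pic_in.img.plane, pic_in.img.i_stride);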

Load bmp file as texture using auxDIBImageLoad in OpenGL

I am learning OpenGL from the NeHe Productions tutorials. While reading lesson 22 (Bump-Mapping, Multi-texture) I ran into a problem.
When I load the logo, I need to load two bmp files: one stores the color information and the other stores the alpha information.
Here are the two bmp files:
OpenGL_Alpha.bmp:
and OpenGL.bmp :
Here is the code:
if (Image=auxDIBImageLoad("Data/OpenGL_ALPHA.bmp")) {
    alpha=new char[4*Image->sizeX*Image->sizeY];
    for (int a=0; a<Image->sizeX*Image->sizeY; a++)
        alpha[4*a+3]=Image->data[a*3]; // ???????
    if (!(Image=auxDIBImageLoad("Data/OpenGL.bmp"))) status=false;
    for (a=0; a<Image->sizeX*Image->sizeY; a++) {
        alpha[4*a]=Image->data[a*3]; // ??????????
        alpha[4*a+1]=Image->data[a*3+1];
        alpha[4*a+2]=Image->data[a*3+2];
    }
    glGenTextures(1, &glLogo);
    glBindTexture(GL_TEXTURE_2D, glLogo);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, Image->sizeX, Image->sizeY, 0, GL_RGBA, GL_UNSIGNED_BYTE, alpha);
    delete[] alpha; // must be delete[], since alpha was allocated with new[]
}
My question is: why is the index of Image->data a*3?
Could someone explain this to me?
I am learning OpenGL from the NeHe Productions tutorials. While reading lesson 22 (Bump-Mapping)
Why? The NeHe tutorials are terribly outdated, and the bump-mapping technique outlined there is completely obsolete. It has been superseded by shader-based normal mapping for well over 13 years (until 2003, texture combiners were used instead of shaders).
Also, instead of BMPs you should use an image file format better suited for textures (one with an alpha channel), like:
TGA
PNG
OpenEXR
The various compressed DX texture formats are also a good choice for several applications.
My question is: why is the index of Image->data a*3?
It's extracting the red channel of an RGB DIB.
a*3 is the byte offset of pixel a: the RGB data is stored as three consecutive bytes per pixel, so 'a' selects the pixel (a group of 3 bytes: one for R, one for G, one for B).
Think of a*3 as a pointer to an array of 3 bytes:
char* myPixel = Image->data + (a*3);
char red = myPixel[0];
char green = myPixel[1];
char blue = myPixel[2];
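So the asker's loop is simply interleaving two 3-byte-per-pixel images into one 4-byte-per-pixel RGBA buffer. The same logic with descriptive comments (a sketch, not NeHe's original code):

// Pass 1: use the red channel of the greyscale alpha BMP as the A component.
for (int a = 0; a < Image->sizeX * Image->sizeY; a++)
    alpha[4*a + 3] = Image->data[a*3];      // a*3 = red byte of pixel a -> alpha

// Pass 2: copy R, G, B of the color BMP into the same RGBA buffer.
for (int a = 0; a < Image->sizeX * Image->sizeY; a++) {
    alpha[4*a + 0] = Image->data[a*3 + 0];  // red
    alpha[4*a + 1] = Image->data[a*3 + 1];  // green
    alpha[4*a + 2] = Image->data[a*3 + 2];  // blue
}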