Conversion to Cinder gl::Texture - c++

I am trying to create a gl::Texture object using image data as a BYTE * with the parameters below.
FreeGLUT - I use this to create a 2D texture and bind it to a quad:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, textureWidth, textureHeight, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, data);
glBindTexture(GL_TEXTURE_2D, 0); etc etc
This works fine, however I cannot find a way to create a gl::Texture object in Cinder.
text = gl::Texture(loadImage(loadAsset("text.jpg"))); // works with image files
text = gl::Texture(data, GL_RGBA8, 640, 480); // BYTE * data - this gives me a grey screen
This seemed the most likely candidate, however I have no idea what to do with the Format format = Format() parameter:
Texture (const unsigned char *data, int dataFormat, int aWidth, int aHeight, Format format=Format())
I really have no understanding of how this works and can't find any better tutorials online. Thanks.

Apparently dataFormat is the format parameter to glTexImage2D and should be something like GL_RGBA or GL_RGB (or GL_BGRA_EXT as in the example you provided).
What you want is the "internal format" which is set via the Format struct. So you set it like:
gl::Texture::Format fmt;
fmt.setInternalFormat(GL_RGBA8);
text = gl::Texture(data, GL_RGBA, 640, 480, fmt);
The GL_UNSIGNED_BYTE data type is pre-set in that constructor of the Cinder wrapper.
Sometimes it can be easier to just look at the code instead of trying to find the 100% applicable documentation.
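Putting the pieces together for the original BGRA byte buffer, a minimal sketch (assuming data, 640 and 480 match the FreeGLUT setup above):
gl::Texture::Format fmt;
fmt.setInternalFormat(GL_RGBA8); // internal format, like the 3rd argument of glTexImage2D
text = gl::Texture(data, GL_BGRA_EXT, 640, 480, fmt); // dataFormat describes the source pixel layout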

Related

How to save texture to image using soil2 library

I am trying to save texture to a image using SOIL2 library.
int _width, _height;
unsigned char* _image = SOIL_load_image("C:\\Temp\\RED.png", &_width, &_height, 0, SOIL_LOAD_RGB);
int save_result = SOIL_save_image
(
"C:\\temp\\atlas.png",
SOIL_SAVE_TYPE_PNG,
_width, _height, GL_RGB,
_image
);
But the image does not get saved; the return value from the save function is 0.
In my case (GL_RGBA) I needed to set the channels parameter to 4 (not GL_RGBA) to get it working. In your case, you probably need to change it to 3 (not GL_RGB).
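In other words, a sketch of the corrected call from the question (same paths and variables, only the fifth argument changed):
int save_result = SOIL_save_image
(
    "C:\\temp\\atlas.png",
    SOIL_SAVE_TYPE_PNG,
    _width, _height, 3, // channel count, matching SOIL_LOAD_RGB
    _image
);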

The Internal format of OpenGL CUDA interop

I use CUDA surface references to write and read an OpenGL 3D texture, following this post: Write to 3D OpenGL textures in CUDA ... .
There is a problem that confuses me.
For example, in this function
glTexImage3D(GL_TEXTURE_3D, 0, GL_INTENSITY, volDim_.x, volDim_.y,
             volDim_.z, 0, GL_LUMINANCE, getVolumeTypeGL(vol), vol->getData());
the third parameter is the internal format.
In the CUDA kernel I need to write or read the texture, like the code below:
ElementType v;
surf3Dread(&v, surfaceRef, x * sizeof(ElementType), y, z);
surf3Dwrite(v, surfaceRef, x * sizeof(ElementType), y, z); // note: surf3Dwrite takes the value, not a pointer
I don't know how to match Internal Format with ElementType.
Example
glTexImage3D(GL_TEXTURE_3D, 0, GL_INTENSITY, volDim_.x, volDim_.y,
             volDim_.z, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, vol->getData());
In host memory, the volume data type is unsigned char.
Which ElementType should I choose?
uchar4, uchar, or float?
Some of my code can be found here: My GitHub
UPDATE:
I have solved this problem.
GL_RGBA32F ---- float4
GL_RGBA ---- uchar4
GL_INTENSITY -- uchar
If the internal format of glTexImage3D is GL_INTENSITY, the element type for surf3Dread or surf3Dwrite is unsigned char; GL_RGBA32F matches float4 and GL_RGBA matches uchar4.
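A minimal kernel sketch for the GL_INTENSITY + GL_UNSIGNED_BYTE case above, using the legacy surface-reference API from the linked post (the kernel name and the inversion are just for illustration):
surface<void, cudaSurfaceType3D> surfaceRef; // bound to the registered GL texture

__global__ void invertVoxels(dim3 volDim)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    unsigned int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x >= volDim.x || y >= volDim.y || z >= volDim.z) return;

    unsigned char v; // one byte per texel for GL_INTENSITY + GL_UNSIGNED_BYTE
    surf3Dread(&v, surfaceRef, x * sizeof(unsigned char), y, z); // x coordinate is in bytes
    surf3Dwrite((unsigned char)(255 - v), surfaceRef, x * sizeof(unsigned char), y, z);
}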

Save OpenGL output to image using cross-platform libraries

I want to take a screenshot of an OpenGL window and save it to an image file of any type. The DevIL method described here gives a correct PNG; replace ilSaveImage with ilSave and you can save the image in different formats. The SOIL method here gives a vertically flipped image. Replacing the code below
vector< unsigned char > buf( w * h * 3 );
glPixelStorei( GL_PACK_ALIGNMENT, 1 );
glReadPixels( 0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, &buf[0] );
int err = SOIL_save_image ("img.bmp", SOIL_SAVE_TYPE_BMP, w, h, 3, &buf[0]);
with only the one line below creates the correct image:
int err = SOIL_save_screenshot("img.bmp",SOIL_SAVE_TYPE_BMP, 0, 0, w, h);
Q1: Are there any more convenient alternatives using other libraries?
Q2: Which one is the best way? A comparison is appreciated, e.g. performance/compatibility.
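As one possible answer to Q1 (my suggestion, not part of the original post): the single-header stb_image_write library is cross-platform and can flip at write time, so no manual row swap is needed. A minimal sketch:
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"
#include <vector>

void saveScreenshotPNG(const char* path, int w, int h)
{
    std::vector<unsigned char> buf(w * h * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, buf.data());
    stbi_flip_vertically_on_write(1); // glReadPixels returns rows bottom-up
    stbi_write_png(path, w, h, 3, buf.data(), w * 3);
}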

Faster encoding of realtime 3d graphics with opengl and x264

I am working on a system that sends compressed video to a client, built from 3D graphics that are rendered on the server as soon as they are ready.
I already have the code working, but I feel it could be much faster (and it is already a bottleneck in the system).
Here is what I am doing:
First I grab the framebuffer
glReadBuffer( GL_FRONT );
glReadPixels( 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer );
Then I flip the framebuffer, because there is a weird bug with sws_scale (which I am using for colorspace conversion) that flips the image vertically when I convert. I am flipping in advance; nothing fancy.
// Note: despite its name, bitsPerPixel is used as *bytes* per pixel here (3 for RGB24).
void VerticalFlip(int width, int height, byte* pixelData, int bitsPerPixel)
{
    byte* temp = new byte[width*bitsPerPixel]; // scratch space for one row
    height--; // remember the row index range ends at height-1
    for (int y = 0; y < (height+1)/2; y++)
    {
        // swap row y with row (height - y)
        memcpy(temp, &pixelData[y*width*bitsPerPixel], width*bitsPerPixel);
        memcpy(&pixelData[y*width*bitsPerPixel], &pixelData[(height-y)*width*bitsPerPixel], width*bitsPerPixel);
        memcpy(&pixelData[(height-y)*width*bitsPerPixel], temp, width*bitsPerPixel);
    }
    delete[] temp;
}
Then I convert it to YUV420p
convertCtx = sws_getContext(width, height, PIX_FMT_RGB24, width, height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
uint8_t *src[3]= {buffer, NULL, NULL};
sws_scale(convertCtx, src, &srcstride, 0, height, pic_in.img.plane, pic_in.img.i_stride);
Then I pretty much just call the x264 encoder. I am already using the zerolatency preset.
int frame_size = x264_encoder_encode(_encoder, &nals, &i_nals, _inputPicture, &pic_out);
My guess is that there should be a faster way to do this: capturing the frame and converting it to YUV420p. It would be nice to convert it to YUV420p on the GPU and only then copy it to system memory, and hopefully there is a way to do the color conversion without the need to flip.
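For the flip specifically, a commonly used libswscale trick (my suggestion, not something from the post) is to feed sws_scale the source image bottom-up: point the plane pointer at the last row and pass a negative stride, which makes the separate VerticalFlip pass unnecessary. A sketch, assuming srcstride == width * 3 for packed RGB24:
uint8_t *flippedSrc[3] = { buffer + (height - 1) * srcstride, NULL, NULL };
int flippedStride[3] = { -srcstride, 0, 0 };
sws_scale(convertCtx, flippedSrc, flippedStride, 0, height, pic_in.img.plane, pic_in.img.i_stride);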
If there is no better way, at least this question may help someone trying to do this, to do it the same way I did.
First, use asynchronous texture reads with PBOs. Here is an example. It speeds up the read by using 2 PBOs which work asynchronously, without stalling the pipeline the way glReadPixels does when used directly. In my app I got an 80% performance boost when I switched to PBOs.
Additionally, on some GPUs glGetTexImage() works faster than glReadPixels(), so try that out.
But if you really want to take the video encoding to the next level, you can do it via CUDA using the Nvidia Codec Library. I recently asked the same question, so this may be helpful.
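A minimal sketch of the double-buffered PBO readback pattern the answer describes (buffer names and the RGB format are my assumptions):
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 3, NULL, GL_STREAM_READ);
}

// Each frame: start a read into one PBO, map the other (filled last frame).
static int index = 0;
int next = (index + 1) % 2;

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0); // returns immediately

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[next]);
unsigned char* ptr = (unsigned char*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (ptr) {
    // hand ptr to sws_scale / x264 here
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
index = next;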

Stitching multiple textures/frames together in OpenGL using a Kinect

I came across the following situation:
I have a Kinect camera and I keep taking frames (but they are stored only when the user presses a key).
I am using the freenect library in order to retrieve the depth and the color of each frame (I am not interested in skeleton tracking or anything like that).
For a single frame I am using the glpclview example that comes with the freenect library.
After retrieving the spatial data from the Kinect sensor, in the glpclview example, the current frame is drawn like this:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_SHORT, 0, xyz);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(3, GL_SHORT, 0, xyz);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, gl_rgb_tex);
glTexImage2D(GL_TEXTURE_2D, 0, 3, 640, 480, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb);
glPointSize(2.0f);
glDrawElements(GL_POINTS, 640*480, GL_UNSIGNED_INT, indices);
where
static unsigned int indices[480][640];
static short xyz[480][640][3];
char *rgb = 0;
short *depth = 0;
where:
rgb is the color information for the current frame
depth is the depth information for the current frame
xyz is constructed as :
xyz[i][j][0] = j
xyz[i][j][1] = i
xyz[i][j][2] = depth[i*640+j]
indices is (I guess) just an index array that keeps track of the rgb/depth data and is constructed as:
indices[i][j] = i*640+j
So far, so good, but now I need to render more than just one frame (some of them rotated and translated by a certain angle/offset). How can I do this?
I've tried to increase the size of the arrays and to keep reallocating memory for each new frame, but how can I render them?
Should I change this current line to something else?
glTexImage2D(GL_TEXTURE_2D, 0, 3, 640, 480, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb)
If so, to what values should I change 640 and 480, since xyz and rgb are now contiguous buffers of 640x480x(number of frames)?
To get a better idea, I am trying to get something similar to this in the end (except the robot :D ).
If someone has a better idea or any hint on how I should approach this problem, please let me know.
It isn't as simple as allocating a bigger array.
If you want to stitch together multiple point clouds to make a bigger map, you should look into SLAM algorithms (that is what is running in the video you link to). You can find many implementations at http://openslam.org. You might also look into ICP (Iterative Closest Point) algorithms and Microsoft's KinectFusion (and the open source KinFu implementation from PCL).
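That said, for the rendering half of the question: once per-frame poses are known (e.g. from SLAM/ICP), one possible approach (a sketch of my own, not from the answer) is to keep one 640x480 cloud per frame and draw each with its own modelview transform instead of growing the arrays:
// hypothetical per-frame record; the indices array can be shared by all frames
struct CloudFrame { GLuint tex; const short* xyz; float tx, ty, tz, angleDeg, ax, ay, az; };
std::vector<CloudFrame> frames;

for (size_t f = 0; f < frames.size(); ++f) {
    glPushMatrix();
    glTranslatef(frames[f].tx, frames[f].ty, frames[f].tz);
    glRotatef(frames[f].angleDeg, frames[f].ax, frames[f].ay, frames[f].az);

    glBindTexture(GL_TEXTURE_2D, frames[f].tex); // one texture per frame
    glVertexPointer(3, GL_SHORT, 0, frames[f].xyz);
    glTexCoordPointer(3, GL_SHORT, 0, frames[f].xyz);
    glDrawElements(GL_POINTS, 640*480, GL_UNSIGNED_INT, indices);

    glPopMatrix();
}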