DevIL PNG format for display in OpenGL

I'm doing some pixel work in OpenGL, and all was going well until I had to load a PNG. I found a library called 'DevIL' and followed an example. It does display something, but it's just kind of a random rainbow. As far as I can tell I'm doing it right; I output the data to a text file to check it. I tried some other libraries, but building them is a little beyond my capabilities. Here's my setup:
// In global scope
unsigned char pixels[160000*3];
ILubyte *bytes;
ILint size;

// In main
ilInit();
ilLoadImage("test.png");
size = ilGetInteger(IL_IMAGE_SIZE_OF_DATA);
bytes = ilGetData();
And here's my drawing routine:
// Color the screen all pretty
for (int i = 0; i < 160000*3;)
{
    pixels[i] = a; i++;
    pixels[i] = b; i++;
    pixels[i] = c; i++;
}
// Break the png
for (int i = 0; i < size;)
{
    if (bytes[i+3] != 255)
    {
        i += 4;
        continue;
    }
    pixels[i] = bytes[i]; i++;
    pixels[i] = bytes[i]; i++;
    pixels[i] = bytes[i]; i++;
    i++;
}
glDrawPixels(400, 300, GL_RGB, GL_UNSIGNED_BYTE, pixels);
I know the alignment is wrong; I'll fix that later. The problem is that the colors are completely wrong.
P.S. I know you're supposed to use textured quads.

PNG is a compressed format. It seems like ilGetData returns a pointer to the compressed data. To get the decoded image, use ilCopyPixels.
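A minimal sketch of that suggestion, assuming the image really is 400x300 to match the glDrawPixels call above (ilGenImages/ilBindImage are included because most DevIL examples bind an image before loading; error handling omitted):
ilInit();
ILuint img;
ilGenImages(1, &img);
ilBindImage(img);
if (ilLoadImage("test.png")) {
    // Ask DevIL for the decoded pixels as tightly packed RGB bytes,
    // matching the GL_RGB / GL_UNSIGNED_BYTE upload below.
    ilCopyPixels(0, 0, 0, 400, 300, 1, IL_RGB, IL_UNSIGNED_BYTE, pixels);
}
glDrawPixels(400, 300, GL_RGB, GL_UNSIGNED_BYTE, pixels);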

Related

Gdiplus on C++ configuration for binary image?

Is there any setting that makes the GDI+ Graphics or Bitmap output only black and white (0,0,0 / 255,255,255) for a binary image? I've already tried the color palette option (GetPalette / SetPalette) of the Bitmap class, but it doesn't work at all.
I've also modified the image data itself, but that doesn't work either.
for (int bufidx = 0; bufidx < m_BufferSize; bufidx++)
{
    if (m_pImage[BINARY_VID][bufidx] > m_Threshold)
    {
        m_pImage[BINARY_VID][bufidx] = 255;
    }
    else
    {
        m_pImage[BINARY_VID][bufidx] = 0;
    }
} // Algorithm for thresholding the image
This is the code that thresholds the data. m_BufferSize is the size of the image (width * height), and m_pImage[BINARY_VID] is the image data itself: 8-bit values in the range 0-255 coming from the camera module.
m_pBitmap[vidType]->LockBits(&rc, 0, PixelFormat8bppIndexed, &bitmapdata);
memcpy(bitmapdata.Scan0, m_pImage[vidType], m_BufferSize);
m_pBitmap[vidType]->UnlockBits(&bitmapdata);
and this is the part where I copy the data into the Bitmap.
int paletteSize = m_pBitmap[vidType]->GetPaletteSize();
ColorPalette* pPalette = new ColorPalette[paletteSize];
m_pBitmap[vidType]->GetPalette(pPalette, paletteSize);
// gets palette info of the bitmap to set the color info of the bitmap
switch (vidType)
{
case NORMAL_VID:
case ROI_VID:
    // Normal video || ROI video color set
    for (unsigned int i = 0; i < pPalette->Count; i++)
    {
        pPalette->Entries[i] = Color::MakeARGB(255, i, i, i);
    }
    m_pBitmap[vidType]->SetPalette(pPalette);
    break;
case BINARY_VID:
    for (unsigned int i = 0; i < pPalette->Count; i++)
    {
        if (i > m_Threshold)
        {
            pPalette->Entries[i] = Color::MakeARGB(255, 255, 255, 255);
        }
        else
        {
            pPalette->Entries[i] = Color::MakeARGB(255, 0, 0, 0);
        }
    }
    m_pBitmap[vidType]->SetPalette(pPalette);
    break;
default:
    AfxMessageBox(TEXT("vidtype error : on converting palette!"));
    delete[] pPalette;
    return;
    break;
}
delete[] pPalette;
MemoryLeakCheck();
MemoryLeakCheck();
and this is the part that I use for converting the color.
And this is the result image that I get. I just don't know why I have gray noise on the binary image when it only contains values of 0 or 255.
There is no gray in this image, that is a false impression.
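One way to confirm that, sketched under the assumption that the question's 8bpp indexed Bitmap is accessible here (m_pBitmap and BINARY_VID come from the question, and a using namespace Gdiplus directive is assumed as in the question's code): lock the bits read-only and count any index value that is neither 0 nor 255.
BitmapData bd;
Rect rc(0, 0, (INT)m_pBitmap[BINARY_VID]->GetWidth(), (INT)m_pBitmap[BINARY_VID]->GetHeight());
m_pBitmap[BINARY_VID]->LockBits(&rc, ImageLockModeRead, PixelFormat8bppIndexed, &bd);
int oddPixels = 0;
for (UINT y = 0; y < bd.Height; ++y)
{
    const BYTE* row = static_cast<const BYTE*>(bd.Scan0) + y * bd.Stride;
    for (UINT x = 0; x < bd.Width; ++x)
    {
        if (row[x] != 0 && row[x] != 255)
        {
            ++oddPixels;
        }
    }
}
m_pBitmap[BINARY_VID]->UnlockBits(&bd);
// oddPixels should stay 0; any gray visible on screen is then a scaling/display artifact.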

Convert FreeType GlyphSlot Bitmap To Vulkan BGRA

I'm trying to convert a FreeType GlyphSlot Bitmap to Vulkan BGRA format.
void DrawText(const std::string &text) {
    // WIDTH & HEIGHT == dst image dimensions
    FT_GlyphSlot Slot = face->glyph;
    buffer.resize(WIDTH * HEIGHT * 4);
    int dst_Pitch = WIDTH * 4;
    for (auto c : text) {
        FT_Error error = FT_Load_Char(face, c, FT_LOAD_RENDER);
        if (error) {
            printf("FreeType: Load Char Error\n");
            continue;
        }
        auto char_width = Slot->bitmap.width;
        auto char_height = Slot->bitmap.rows;
        uint8_t* src = Slot->bitmap.buffer;
        uint8_t* startOfLine = src;
        for (int y = 0; y < char_height; ++y) {
            src = startOfLine;
            for (int x = 0; x < char_width; ++x) {
                // y * dst_Pitch == destination image row
                // x * 4        == destination image column
                int dst = (y * dst_Pitch) + (x * 4);
                // Break if we have no more space to draw on our
                // destination texture.
                if (dst + 4 > buffer.size()) { break; }
                auto value = *src;
                src++;
                buffer[dst]     = 0xff;   // +0 == B
                buffer[dst + 1] = 0xff;   // +1 == G
                buffer[dst + 2] = 0xff;   // +2 == R
                buffer[dst + 3] = value;  // +3 == A
            }
            startOfLine += Slot->bitmap.pitch;
        }
    }
}
This is giving me garbled output. I'm not sure what I need to do to properly convert to Vulkan B8G8R8A8. I feel like the way I move left to right through the buffer we write to the Vulkan texture is incorrect, and maybe Vulkan expects the pixels to be added to the buffer in a different way?
I understand this code will write each letter on top of the others; I will implement Slot->advance after I can properly draw at least a single letter.
One problem is that you resize the buffer with every character (which keeps the previous data at the start of the newly allocated space), but when storing the data for the new character c you overwrite the start of the buffer, since dst starts at 0. You probably want to base dst on the buffer.size() from before the resize call:
int dst = /*previous buffer size*/;
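A rough sketch of that idea, assuming (as the answer describes) that the buffer is grown once per character; the names buffer, char_height, dst_Pitch, x and y come from the question's code:
const size_t glyphBase = buffer.size();             // size before growing for this glyph
buffer.resize(glyphBase + char_height * dst_Pitch); // add room for the glyph's rows
// ... then, inside the x/y loops:
int dst = (int)glyphBase + (y * dst_Pitch) + (x * 4);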
The issue was that I had the VkImageCreateInfo tiling set to VK_IMAGE_TILING_OPTIMAL. After changing it to VK_IMAGE_TILING_LINEAR I received the correct output.
Taken straight from https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkImageTiling.html
VK_IMAGE_TILING_OPTIMAL specifies optimal tiling (texels are laid out in an implementation-dependent arrangement, for more optimal memory access).
VK_IMAGE_TILING_LINEAR specifies linear tiling (texels are laid out in memory in row-major order, possibly with some padding on each row).
While I'm not rendering garbage any more, my letters are still backwards, seemingly drawing from right to left instead of left to right.
You can see the green 'the' in the top right corner.
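For reference, a minimal sketch of the relevant VkImageCreateInfo fields under this approach (WIDTH and HEIGHT come from the question; the format matches the BGRA byte order written above, while the usage flags and initial layout are assumptions about the surrounding setup):
VkImageCreateInfo imageInfo{};
imageInfo.sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.imageType     = VK_IMAGE_TYPE_2D;
imageInfo.format        = VK_FORMAT_B8G8R8A8_UNORM;
imageInfo.extent.width  = WIDTH;
imageInfo.extent.height = HEIGHT;
imageInfo.extent.depth  = 1;
imageInfo.mipLevels     = 1;
imageInfo.arrayLayers   = 1;
imageInfo.samples       = VK_SAMPLE_COUNT_1_BIT;
imageInfo.tiling        = VK_IMAGE_TILING_LINEAR; // row-major, matches the CPU-filled buffer
imageInfo.usage         = VK_IMAGE_USAGE_SAMPLED_BIT;
imageInfo.sharingMode   = VK_SHARING_MODE_EXCLUSIVE;
imageInfo.initialLayout = VK_IMAGE_LAYOUT_PREINITIALIZED;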

DDS texture transparency rendered black in OpenGL

I am currently trying to render textured objects in OpenGL. Everything worked fine until I wanted to render a texture with transparency. Instead of showing the object as transparent, it just rendered totally black.
The method for loading the texture file is this:
// structures for reading and information variables
char magic[4];
unsigned char header[124];
unsigned int width, height, linearSize, mipMapCount, fourCC;
unsigned char* dataBuffer;
unsigned int bufferSize;
fstream file(path, ios::in|ios::binary);
// read magic and header
if (!file.read((char*)magic, sizeof(magic))){
    cerr << "File " << path << " not found!" << endl;
    return false;
}
if (magic[0]!='D' || magic[1]!='D' || magic[2]!='S' || magic[3]!=' '){
    cerr << "File does not comply with dds file format!" << endl;
    return false;
}
if (!file.read((char*)header, sizeof(header))){
    cerr << "Not able to read file information!" << endl;
    return false;
}
// derive information from header
height = *(int*)&(header[8]);
width = *(int*)&(header[12]);
linearSize = *(int*)&(header[16]);
mipMapCount = *(int*)&(header[24]);
fourCC = *(int*)&(header[80]);
// determine dataBuffer size
bufferSize = mipMapCount > 1 ? linearSize * 2 : linearSize;
dataBuffer = new unsigned char[bufferSize*2];
// read data and close file
if (file.read((char*)dataBuffer, bufferSize/1.5))
    cout << "Loading texture " << path << " successful" << endl;
else{
    cerr << "Data of file " << path << " corrupted" << endl;
    return false;
}
file.close();
// check pixel format
unsigned int format;
switch(fourCC){
case FOURCC_DXT1:
    format = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
    break;
case FOURCC_DXT3:
    format = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
    break;
case FOURCC_DXT5:
    format = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
    break;
default:
    cerr << "Compression type not supported or corrupted!" << endl;
    return false;
}
glGenTextures(1, &ID);
glBindTexture(GL_TEXTURE_2D, ID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
unsigned int blockSize = (format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) ? 8 : 16;
unsigned int offset = 0;
/* load the mipmaps */
for (unsigned int level = 0; level < mipMapCount && (width || height); ++level) {
    unsigned int size = ((width+3)/4)*((height+3)/4)*blockSize;
    glCompressedTexImage2D(GL_TEXTURE_2D, level, format, width, height,
                           0, size, dataBuffer + offset);
    offset += size;
    width /= 2;
    height /= 2;
}
textureType = DDS_TEXTURE;
return true;
In the fragment shader I just set gl_FragColor = texture2D( myTextureSampler, UVcoords ).
I hope there is an easy explanation, such as some code missing.
In the OpenGL initialization I enabled GL_BLEND and set a blend function.
Does anyone have an idea of what I did wrong?
Make sure the blend function is the correct one for what you are trying to accomplish. For what you've described, that should be glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
You probably shouldn't set the blend function in your OpenGL initialization function, but should instead wrap it around your draw calls like:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// gl draw functions (glDrawArrays, glDrawElements, etc.)
glDisable(GL_BLEND);
Are you clearing the 2D texture binding before you swap buffers? i.e.:
glBindTexture(GL_TEXTURE_2D, 0);
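Putting the two suggestions together, the draw for the transparent object might be wrapped roughly like this (ID is the texture handle from the loader above; the draw call itself is a placeholder):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, ID);
// draw the textured object here (glDrawArrays / glDrawElements / ...)
glBindTexture(GL_TEXTURE_2D, 0); // clear the binding before swapping buffers
glDisable(GL_BLEND);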

Trouble fitting depth image to RGB image using Kinect 1.0 SDK

I'm trying to get the Kinect depth camera pixels to overlay onto the RGB camera image. I am using the C++ Kinect 1.0 SDK with an Xbox Kinect and OpenCV, and I'm trying to use the new "NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution" method.
I have watched the image render itself in slow motion, and it looks as if pixels are being drawn multiple times in the one frame. It first draws itself from the top and left borders, then it gets to a point (you can see a 45 degree angle in there) where the drawing goes wrong.
I have been trying to base my code off the C# code written by Adam Smith at the MSDN forums, but no dice. I have stripped out the overlay stuff and just want to draw the normalized depth pixels where they "should" be in the RGB image.
The image on the left is what I get when trying to fit the depth image to RGB space, and the image on the right is the "raw" depth image as I like to see it. I was hoping my method would create an image similar to the one on the right, with slight distortions.
This is the code and object definitions that I have at the moment:
// From initialization
INuiSensor *m_pNuiInstance;
NUI_IMAGE_RESOLUTION m_nuiResolution = NUI_IMAGE_RESOLUTION_640x480;
HANDLE m_pDepthStreamHandle;
IplImage *m_pIplDepthFrame;
IplImage *m_pIplFittedDepthFrame;
m_pIplDepthFrame = cvCreateImage(cvSize(640, 480), 8, 1);
m_pIplFittedDepthFrame = cvCreateImage(cvSize(640, 480), 8, 1);

// Method
IplImage *Kinect::GetRGBFittedDepthFrame() {
    static long *pMappedBits = NULL;
    if (!pMappedBits) {
        pMappedBits = new long[640*480*2];
    }
    NUI_IMAGE_FRAME pNuiFrame;
    NUI_LOCKED_RECT lockedRect;
    HRESULT hr = m_pNuiInstance->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &pNuiFrame);
    if (FAILED(hr)) {
        // return the older frame
        return m_pIplFittedDepthFrame;
    }
    bool hasPlayerData = HasSkeletalEngine(m_pNuiInstance);
    INuiFrameTexture *pTexture = pNuiFrame.pFrameTexture;
    pTexture->LockRect(0, &lockedRect, NULL, 0);
    if (lockedRect.Pitch != 0) {
        cvZero(m_pIplFittedDepthFrame);
        hr = m_pNuiInstance->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(
            m_nuiResolution,
            NUI_IMAGE_RESOLUTION_640x480,
            640 * 480, /* size is previous */ (unsigned short*) lockedRect.pBits,
            (640 * 480) * 2, /* size is previous */ pMappedBits);
        if (FAILED(hr)) {
            return m_pIplFittedDepthFrame;
        }
        for (int i = 0; i < lockedRect.size; i++) {
            unsigned char* pBuf = (unsigned char*) lockedRect.pBits + i;
            unsigned short* pBufS = (unsigned short*) pBuf;
            unsigned short depth = hasPlayerData ? ((*pBufS) & 0xfff8) >> 3 : ((*pBufS) & 0xffff);
            unsigned char intensity = depth > 0 ? 255 - (unsigned char) (256 * depth / 0x0fff) : 0;
            long x = pMappedBits[i],     // tried with *(pMappedBits + (i * 2)),
                 y = pMappedBits[i + 1]; // tried with *(pMappedBits + (i * 2) + 1);
            if (x >= 0 && x < m_pIplFittedDepthFrame->width && y >= 0 && y < m_pIplFittedDepthFrame->height) {
                m_pIplFittedDepthFrame->imageData[x + y * m_pIplFittedDepthFrame->widthStep] = intensity;
            }
        }
    }
    pTexture->UnlockRect(0);
    m_pNuiInstance->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &pNuiFrame);
    return m_pIplFittedDepthFrame;
}
Thanks
I have found that the problem was that the loop,
for (int i = 0; i < lockedRect.size; i++) {
    // code
}
was iterating on a per-byte basis, not on a per-short (2 bytes) basis. Since lockedRect.size returns the number of bytes, the fix was simply changing the increment to i += 2; even better is changing it to sizeof(short), like so:
for (int i = 0; i < lockedRect.size; i += sizeof(short)) {
    // code
}
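For what it's worth, the existing pMappedBits[i] / pMappedBits[i + 1] lookups still line up once i advances in byte steps, assuming (as the 640*480*2 allocation suggests) that the coordinate buffer holds an x,y pair of longs per depth pixel:
// With i even (stepping by sizeof(short)), the depth pixel index is i / 2,
// and that pixel's coordinate pair starts at element 2 * (i / 2) == i:
int pixel = i / (int)sizeof(short);
long x = pMappedBits[2 * pixel];     // same element as pMappedBits[i]
long y = pMappedBits[2 * pixel + 1]; // same element as pMappedBits[i + 1]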

OpenGL Issue Drawing a Large Image Texture causing Skewing

I'm trying to store a 1365x768 image on a 2048x1024 texture in OpenGL ES, but the resulting image appears skewed once drawn. If I run the same 1365x768 image through gluScaleImage() and fit it onto the 2048x1024 texture it looks fine when drawn, but this OpenGL call is slow and hurts performance.
I'm doing this on an Android device (Motorola Milestone) which has 256MB of memory. I'm not sure memory is a factor, though, since it works fine when scaled using gluScaleImage() (it's just slower).
Mapping smaller textures (854x480 onto 1024x512, for example) works fine, though. Does anyone know why this happens, and have suggestions for what I can do about it?
Update
Some code snippets to help understand context...
// uiImage is loaded. The texture dimensions are determined from upsizing the image
// dimensions to a power of two size:
// uiImage->_width  = 1365
// uiImage->_height = 768
// width  = 2048
// height = 1024
// Once the image is loaded:
// INT retval = gluScaleImage(GL_RGBA, uiImage->_width, uiImage->_height, GL_UNSIGNED_BYTE, uiImage->_texels, width, height, GL_UNSIGNED_BYTE, data);
copyImage(GL_RGBA, uiImage->_width, uiImage->_height, GL_UNSIGNED_BYTE, uiImage->_texels, width, height, GL_UNSIGNED_BYTE, data);
if (pixelFormat == RGB565 || pixelFormat == RGBA4444)
{
    unsigned char* tempData = NULL;
    unsigned int* inPixel32;
    unsigned short* outPixel16;
    tempData = new unsigned char[height*width*2];
    inPixel32 = (unsigned int*)data;
    outPixel16 = (unsigned short*)tempData;
    if (pixelFormat == RGB565)
    {
        // "RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA" --> "RRRRRGGGGGGBBBBB"
        for (unsigned int i = 0; i < numTexels; ++i, ++inPixel32)
        {
            *outPixel16++ = ((((*inPixel32 >> 0)  & 0xFF) >> 3) << 11) |
                            ((((*inPixel32 >> 8)  & 0xFF) >> 2) << 5)  |
                            ((((*inPixel32 >> 16) & 0xFF) >> 3) << 0);
        }
    }
    if (tempData != NULL)
    {
        delete [] data;
        data = tempData;
    }
}
// [snip..]
// Copy function (mostly)
static void copyImage(GLint widthin, GLint heightin, const unsigned int* datain, GLint widthout, GLint heightout, unsigned int* dataout)
{
    unsigned int* p1 = const_cast<unsigned int*>(datain);
    unsigned int* p2 = dataout;
    int nui = widthin * sizeof(unsigned int);
    for (int i = 0; i < heightin; i++)
    {
        memcpy(p2, p1, nui);
        p1 += widthin;
        p2 += widthout;
    }
}
In the render code, without changing my texture coordinates I should see the correct image when using gluScaleImage(), and a smaller image (that requires some later correction factors) when using the copyImage() code. This is what happens when the image is small (854x480, for example, works fine with copyImage()), but when I use the 1365x768 image, that's when the skewing appears.
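For context, the "correction factors" mentioned above are presumably just the ratio of the image size to the texture size (an assumption based on the description rather than anything stated explicitly):
// Texture coordinates for a 1365x768 image stored in a 2048x1024 texture:
const float uMax = 1365.0f / 2048.0f; // ~0.6665
const float vMax = 768.0f / 1024.0f;  // 0.75
// The quad's texture coordinates would then run from (0, 0) to (uMax, vMax)
// instead of (0, 0) to (1, 1).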
Finally solved the issue. The first thing to know is the maximum texture size allowed by the device:
GLint texSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &texSize);
When I ran this, the maximum texture size for the Motorola Milestone was 2048x2048, which was fine in my case.
After messing with the texture mapping to no end, I finally decided to try opening and re-saving the image... and voilà, it suddenly began working. I don't know what was wrong with the format the original image was stored in, but as advice to anyone else experiencing a similar problem: it might be worth looking at the image itself.