glTexImage2D slicing and alignment issues appearing in window - c++

I'm working on making my own topographic map, and I have been using .hgt files from NASA.
I'm loading the files using:
void MapImage::load_map_file(const char* filename) {
    std::ifstream file(filename, std::ios::in | std::ios::binary);
    if (!file) {
        std::cout << "Error opening file!" << std::endl;
    }
    std::vector<short> tempHeight(TOTAL_SIZE);
    unsigned char buffer[2];
    int x, y;
    for (int i = 0; i < TOTAL_SIZE; i++) {
        if (!file.read(reinterpret_cast<char*>(buffer), sizeof(buffer))) {
            std::cout << "Error reading file!" << std::endl;
        }
        tempHeight[i] = (buffer[0] << 8) | buffer[1];
    }
    height = tempHeight;
}
And then I'm converting them into an in-memory bitmap using:
void MapImage::loadTextureImage() {
    img_tex = 0;
    glGenTextures(1, &img_tex);
    int x, y, w, h;
    w = h = SRTM_SIZE;
    unsigned char* img;
    img = (unsigned char *)malloc(3 * w * h);
    memset(img, 0, sizeof(img));
    int g = 0;
    double height_color;
    /*
    for (int i = 0; i < TOTAL_SIZE; i++) {
        height_color = (float)height[i] / 10.0;
        g = (height_color * 255);
        if (g > 255) g = 255;
        img[i * 3 + 2] = (unsigned char)0;
        img[i * 3 + 1] = (unsigned char)g;
        img[i * 3 + 0] = (unsigned char)0;
    }
    */
    for (int i = 0; i < w; i++) {
        for (int j = 0; j < h; ++j) {
            x = i;
            y = (h - 1) - j;
            height_color = (float)height[j + (i * w)] / 10.0;
            g = (height_color * 255);
            if (g > 255) g = 255;
            img[(x + y * w) * 3 + 2] = (unsigned char)0;
            img[(x + y * w) * 3 + 1] = (unsigned char)g;
            img[(x + y * w) * 3] = (unsigned char)0;
        }
    }
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, img_tex);
    glTexImage2D(
        GL_TEXTURE_2D,
        0,
        GL_RGB,
        w,
        h,
        0,
        GL_RGB,
        GL_UNSIGNED_BYTE,
        img
    );
}
However, this results in an image with the corner sliced off, like this.
Using the commented-out version in loadTextureImage() looks slightly different, but with the same sliced corner.
Does anyone have a hint as to what's going on? I've tried using an image as a texture, loaded with the stb_image library, and that works fine, so I'm not sure where it's going wrong.
(The coordinates for the image are N10E099.)

This looks like row misalignment, caused by the 3-wide colour data. Try using the following call just before glTexImage2D:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This alignment value, which is 4 by default, is used by glTexImage2D and friends whenever texture data is read to be sent to the GPU.
There is no verification that it matches what the data actually looks like, so in cases like yours where a row doesn't end on a 4-byte boundary, the first few bytes of the next row will be skipped, leading to this diagonal distortion.
Texture data transfers in the other direction (from the GPU to client memory) are aligned via glPixelStorei(GL_PACK_ALIGNMENT, 1);.
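For reference, here is a minimal sketch of the fixed upload path, using the w, h, img, and img_tex variables from the question:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows of 3-byte pixels are tightly packed
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, img_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, img);

An alternative, if changing the unpack state is undesirable, is to pad each row of img up to a multiple of 4 bytes so that the default alignment of 4 is satisfied.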

Related

How to generate a 2D texture into a 1D buffer and load it in OpenGL?

For some context, I'm getting real-time image data from a camera in the form of 1D binary data with a specified format. I want to convert this format to RGBA or BGRA and use it to texture a screen-aligned quad. However, I seem to be misunderstanding some core concepts about how generating and loading textures in OpenGL works, since I can't get the following example to work correctly:
void OpenGLRenderer::renderScreenAlignedQuad(const XrCompositionLayerProjectionView& view)
{
    CHECK_GL_ERROR(glBindVertexArray(m_screenAlignedQuad.vao));
    CHECK_GL_ERROR(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_screenAlignedQuad.indexBuffer));
    // Update texture
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glBindTexture(GL_TEXTURE_2D, m_screenAlignedQuad.texture);
#define BUFF_HEIGHT 1152
#define BUFF_WIDTH 1152
    unsigned char *buffer = new unsigned char[BUFF_HEIGHT * BUFF_WIDTH * 4];
    for (int32_t y = 0; y < BUFF_HEIGHT; y++) {
        for (int32_t x = 0; x < BUFF_WIDTH; x++) {
            int32_t ind = y * BUFF_WIDTH + x * 4;
            buffer[ind] = 255;     // R
            buffer[ind + 1] = 0;   // G
            buffer[ind + 2] = 0;   // B
            buffer[ind + 3] = 255; // A
        }
    }
    { // =! Critical section !=
        // The mutex will be unlocked when this object goes out of scope.
        // Note that it blocks other threads from writing, but allows reading.
        std::shared_lock<std::shared_mutex> sl(m_videoStreamContext.g_currentFrameDataMutex);
        CHECK_GL_ERROR(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, BUFF_HEIGHT, BUFF_WIDTH, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer));
    } // =! Critical section !=
    CHECK_GL_ERROR(glDrawElements(GL_TRIANGLES, m_screenAlignedQuad.indexCount, GL_UNSIGNED_SHORT, 0));
}
What I wanted to achieve here is to texture the whole screen red. Instead, I get this:
The texture coordinates seem to be alright (I was able to texture a loaded image correctly before).
For some more debugging information, I added some more colors:
void OpenGLRenderer::renderScreenAlignedQuad(const XrCompositionLayerProjectionView& view)
{
    CHECK_GL_ERROR(glBindVertexArray(m_screenAlignedQuad.vao));
    CHECK_GL_ERROR(glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_screenAlignedQuad.indexBuffer));
    // Update texture
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glBindTexture(GL_TEXTURE_2D, m_screenAlignedQuad.texture);
    unsigned char *buffer = new unsigned char[BUFF_HEIGHT * BUFF_WIDTH * 4];
    for (int32_t y = 0; y < BUFF_HEIGHT; y++) {
        for (int32_t x = 0; x < BUFF_WIDTH; x = x + 4) {
            int32_t ind = y * BUFF_WIDTH + x;
            if (y < BUFF_HEIGHT / 2) {
                buffer[ind] = 255;     // R
                buffer[ind + 1] = 0;   // G
                buffer[ind + 2] = 0;   // B
                buffer[ind + 3] = 255; // A
            } else if (x < BUFF_WIDTH / 2) {
                buffer[ind] = 0;       // R
                buffer[ind + 1] = 0;   // G
                buffer[ind + 2] = 255; // B
                buffer[ind + 3] = 255; // A
            } else {
                buffer[ind] = 0;       // R
                buffer[ind + 1] = 255; // G
                buffer[ind + 2] = 0;   // B
                buffer[ind + 3] = 255; // A
            }
        }
    }
    { // =! Critical section !=
        // The mutex will be unlocked when this object goes out of scope.
        // Note that it blocks other threads from writing, but allows reading.
        std::shared_lock<std::shared_mutex> sl(m_videoStreamContext.g_currentFrameDataMutex);
        CHECK_GL_ERROR(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, BUFF_HEIGHT, BUFF_WIDTH, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer));
    } // =! Critical section !=
    CHECK_GL_ERROR(glDrawElements(GL_TRIANGLES, m_screenAlignedQuad.indexCount, GL_UNSIGNED_SHORT, 0));
    delete buffer;
}
The output looks like this:
So it looks like the texture is rendered too small in both directions. The wrap setting on the texture is set to clamp, so it should not repeat as it does. What am I doing wrong here?
Edit: Please, ignore any obvious inefficiencies or ugly code structure as long as it does not affect the correctness of the program. I'm trying to get the simplest possible version working for now.
Your 2d-to-1d index calculation is just broken:
int32_t ind = y * BUFF_WIDTH + x * 4;
That is supposed to be
int32_t ind = (y * BUFF_WIDTH + x) * 4;
You're making the same basic mistake also in your second approach, just obfuscated a bit more:
for (int32_t y = 0; y < BUFF_HEIGHT; y++) {
    for (int32_t x = 0; x < BUFF_WIDTH; x = x + 4) {
        int32_t ind = y * BUFF_WIDTH + x;
x here is now what x * 4 was before (but your loop bound should then be x < 4 * BUFF_WIDTH), and ind = y * 4 * BUFF_WIDTH + x would be correct.
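For clarity, here is a sketch of the first fill loop with the index fixed, using the question's buffer and macros (one pixel per iteration, four bytes per pixel):

for (int32_t y = 0; y < BUFF_HEIGHT; y++) {
    for (int32_t x = 0; x < BUFF_WIDTH; x++) {
        // Combine the 2D coordinates first, then scale by the pixel size.
        int32_t ind = (y * BUFF_WIDTH + x) * 4;
        buffer[ind] = 255;     // R
        buffer[ind + 1] = 0;   // G
        buffer[ind + 2] = 0;   // B
        buffer[ind + 3] = 255; // A
    }
}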

Find the average colour on screen in SDL

In SDL we're trying to find the average colour of the screen. To do so, we're reading all the pixel colour values and putting them into an array (performance is not a concern); for some reason, however, GetPixel always returns the colour (0,0,0,0). I've already established that RenderReadPixels works correctly, since saving a screenshot works just fine.
const Uint32 format = SDL_PIXELFORMAT_ARGB8888;
SDL_Surface* surface = SDL_CreateRGBSurfaceWithFormat(0, width, height, 32, format);
SDL_RenderReadPixels(renderer, NULL, format, surface->pixels, surface->pitch);
float* coverage = new float[width * height]; // * allocates memory
coverage[0] = 1;
for (int i = 0; i < width; i++)
{
    for (int j = 0; j < height; j++)
    {
        SDL_Color col;
        col = GetPixel(surface, i, j);
        coverage[i * height + j] = (1/3)(col.r + col.b + col.g); // Coverage value at i, j
        std::cout << coverage[i * height + j]; // Always prints 0
        std::cout << "\n";
    }
}

SDL_Color GetPixel(SDL_Surface* srf, int x, int y)
{
    SDL_Color color;
    SDL_GetRGBA(get_pixel32(srf, x, y), srf->format, &color.r, &color.g, &color.b, &color.a);
    return color;
}

Uint32 get_pixel32(SDL_Surface* surface, int x, int y)
{
    // Convert the pixels to 32 bit
    Uint32* pixels = (Uint32*)surface->pixels;
    // Get the requested pixel
    return pixels[(y * surface->w) + x];
}
1/3 is always 0 because of the way integer division works in C++: both operands are ints, so the result is truncated to zero before it is used.
Best be explicit about what you want:
coverage[i * height + j] = float(col.r + col.b + col.g) / 3.0;
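With that fixed, the average colour the question is ultimately after could be accumulated directly; a sketch, reusing the question's GetPixel helper and the surface, width, and height variables:

// Sum each channel over the whole surface, then divide by the pixel count.
double sumR = 0, sumG = 0, sumB = 0;
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        SDL_Color col = GetPixel(surface, i, j);
        sumR += col.r;
        sumG += col.g;
        sumB += col.b;
    }
}
const double n = double(width) * height;
SDL_Color average = { Uint8(sumR / n), Uint8(sumG / n), Uint8(sumB / n), 255 };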

How to read tiled images with OpenImageIO and read_tiles()

I have a problem reading a tiled image on Windows with Visual Studio and OIIO 2.0.8.
For testing, I rendered an image with Arnold with the tiled option checked and without it. While reading the scanline image works fine, the tiled version does not read anything: I can see in debug mode that the tilePixels array does not change at all before and after reading a tile, yet the result of the read_tiles call is always true.
Maybe someone can have a look and tell me if there is an obvious problem.
This is the still somewhat chaotic code I use:
std::string filename = "C:/daten/images/tiledRender.exr";
auto in = ImageInput::open(filename);
if (in)
{
    const ImageSpec &spec = in->spec();
    int tw = spec.tile_width;
    int th = spec.tile_height;
    int w = spec.width;
    int h = spec.height;
    int numBytesPerPixel = 3;
    size_t numBytesPerImage = w * h * numBytesPerPixel;
    size_t numBytesPerLine = w * numBytesPerPixel;
    std::vector<unsigned char> pixels(numBytesPerImage, 120);
    unsigned char* line = &pixels[0];
    unsigned char *bit = image->bits(); // this comes from QImage
    if (tw == 0) // no tiles, read scanlines
    {
        qDebug() << "Found scanline rendering.\n";
        for (int i = 0; i < h; i++)
        {
            bool success = in->read_scanlines(0, 0, i, i + 1, 0, 0, 3, TypeDesc::UCHAR, line);
            if (!success)
                qDebug() << "read scanline problem at scanline " << i << "\n";
            line += numBytesPerLine;
        }
        memcpy(bit, &pixels[0], numBytesPerImage);
    }
    else
    {
        qDebug() << "Found tiled rendering.\n";
        int numTilePixels = tw * th;
        int numBytesPerTile = numTilePixels * 3;
        std::vector<unsigned char> tilePixels(numBytesPerTile, 80);
        unsigned char* tilePtr = &tilePixels[0];
        for (int x = 0; x < w; x += tw)
        {
            for (int y = 0; y < h; y += th)
            {
                int ttw = tw;
                int tth = th;
                if ((x + tw) >= w)
                    ttw = w - x;
                if ((y + th) >= h)
                    tth = h - y;
                bool success = in->read_tiles(0, 0, x, x + ttw, y, y + tth, 0, 0, 0, 3, TypeDesc::UCHAR, tilePtr);
                if (!success)
                    qDebug() << "read tiles problem\n";
            }
        }
    }
}
The solution lies in the way the tiles are read: OIIO's range parameters are half-open, [begin, end), so for a 2D image the z range has to be zbegin = 0 and zend = 1, not zend = 0.
So instead of:
bool success = in->read_tiles(0, 0, x, x+ttw, y, y+tth, 0, 0, 0, 3, TypeDesc::UCHAR, tilePtr);
it has to be:
bool success = in->read_tiles(0, 0, x, x+ttw, y, y+tth, 0, 1, 0, 3, TypeDesc::UCHAR, tilePtr);
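A sketch of the tile branch with this fix applied, which also copies each tile into the full-size pixels buffer (the original loop read the tiles but never stored them anywhere); variable names are taken from the question's code:

for (int y = 0; y < h; y += th)
{
    for (int x = 0; x < w; x += tw)
    {
        int ttw = tw;
        int tth = th;
        if ((x + tw) >= w) ttw = w - x; // clip tiles at the right edge
        if ((y + th) >= h) tth = h - y; // clip tiles at the bottom edge
        if (in->read_tiles(0, 0, x, x + ttw, y, y + tth, 0, 1, 0, 3,
                           TypeDesc::UCHAR, tilePtr))
        {
            // Copy the tile into place in the full image, one row at a time.
            for (int row = 0; row < tth; row++)
                memcpy(&pixels[((y + row) * w + x) * 3],
                       &tilePixels[row * ttw * 3],
                       ttw * 3);
        }
        else
        {
            qDebug() << "read tiles problem\n";
        }
    }
}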

How do I create a dynamic array of arrays (of arrays)?

I'm trying to create a dynamic array of arrays (of arrays). But for some reason the data gets corrupted. I'm using the data to generate a texture in an OpenGL application.
The following code works fine:
unsigned char imageData[64][64][3];
for (int i = 0; i < 64; i++)
{
    for (int j = 0; j < 64; j++)
    {
        unsigned char r = 0, g = 0, b = 0;
        if (i < 32)
        {
            if (j < 32)
                r = 255;
            else
                b = 255;
        }
        else
        {
            if (j < 32)
                g = 255;
        }
        imageData[i][j][0] = r;
        imageData[i][j][1] = g;
        imageData[i][j][2] = b;
    }
    std::cout << std::endl;
}
glTexImage2D(target, 0, GL_RGB, 64, 64, 0, GL_RGB, GL_UNSIGNED_BYTE, imageData);
Problem is, I want to be able to create a texture of any size (not just 64*64). So I'm trying this:
unsigned char*** imageData = new unsigned char**[64]();
for (int i = 0; i < 64; i++)
{
    imageData[i] = new unsigned char*[64]();
    for (int j = 0; j < 64; j++)
    {
        imageData[i][j] = new unsigned char[3]();
        unsigned char r = 0, g = 0, b = 0;
        if (i < 32)
        {
            if (j < 32)
                r = 255;
            else
                b = 255;
        }
        else
        {
            if (j < 32)
                g = 255;
        }
        imageData[i][j][0] = r;
        imageData[i][j][1] = g;
        imageData[i][j][2] = b;
    }
    std::cout << std::endl;
}
glTexImage2D(target, 0, GL_RGB, 64, 64, 0, GL_RGB, GL_UNSIGNED_BYTE, imageData);
But that doesn't work; the image gets all messed up, so I assume I'm creating the array of arrays (of arrays) incorrectly. What am I doing wrong?
Also, I guess I should be using vectors instead. But how can I cast the vector-of-vectors-of-vectors data into a (void *)?
This line contains multiple bugs:
unsigned char* pixel = &(imageData[(y * height) + x]);
You should multiply x by height and add y. And there's also the fact that each pixel is actually 3 bytes. Some issues that led to this bug in your code (and will lead to others):
- You should be using std::vector. You can call std::vector::data to get a pointer to the underlying data, to interface with C APIs.
- You should have a class that represents a pixel. This will handle the offsetting correctly, give things names, and make the code clearer.
- Whenever you are working with a multi-dimensional array that you encode into a single-dimensional one, you should carefully write an access function that takes care of the indexing, so you can test it separately.
struct Pixel {
    unsigned char red;   // declared in R, G, B order to match the GL_RGB byte layout
    unsigned char green;
    unsigned char blue;
};

struct TwoDimPixelArray {
    TwoDimPixelArray(int width, int height)
        : m_width(width), m_height(height)
    {
        m_vector.resize(m_width * m_height);
    }

    Pixel& get(int x, int y) {
        return m_vector[x * m_height + y];
    }

    Pixel* data() { return m_vector.data(); }

private:
    int m_width;
    int m_height;
    std::vector<Pixel> m_vector;
};
int width = 64;
int height = 64;

TwoDimPixelArray imageData(width, height);

for (int x = 0; x != width; ++x) {
    for (int y = 0; y != height; ++y) {
        auto& pixel = imageData.get(x, y);
        // ... pixel.red = something, pixel.blue = something, etc.
    }
}

glTexImage2D(target, 0, GL_RGB, 64, 64, 0, GL_RGB, GL_UNSIGNED_BYTE, imageData.data());
You need to use contiguous memory for it to work with OpenGL.
My solution is inspired by the previous answers, with a different indexing system:
unsigned char* imageData = new unsigned char[width * height * 3];
unsigned char r = 0, g = 0, b = 0; // assign the per-pixel colours as needed
const unsigned int row_size_bytes = width * 3;
for (unsigned int x = 0; x < width; x++) {
    unsigned int current_row_offset_bytes = x * 3; // offset of column x within a row
    for (unsigned int y = 0; y < height; y++) {
        unsigned int one_dim_offset = y * row_size_bytes + current_row_offset_bytes;
        unsigned char* pixel = &(imageData[one_dim_offset]);
        pixel[0] = r;
        pixel[1] = g;
        pixel[2] = b;
    }
}
Unfortunately it's untested, but I'm confident, assuming sizeof(char) is 1 (which the standard guarantees).
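For completeness, the flat buffer can then be handed to OpenGL directly; a sketch using the question's target plus the width and height variables from above:

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // 3-byte pixels: don't assume 4-byte row alignment
glTexImage2D(target, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, imageData);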

C++: BMP rotate image

Ok guys, it's the third time I'm posting the same question (the previous ones are here and here).
This time I will try to explain what my problem is:
So first of all, I need to rotate a .bmp image, and it's not rotated correctly. And I don't need to rotate just any random image with the .bmp extension, I need to rotate this one. I've tried with many other images and all of them were rotated correctly, except mine.
At the moment my code only works for 180 degrees. How could I make it work for any multiple of 90 degrees (I need to rotate my image by 90, 180 or 270 degrees only, not more)?
I don't want any kind of external library for this code, like CImage, OpenCV, ImageMagick and so on... I need to make this code work.
So yeah, that's it. And here you can find my actual result.
CODE:
#include <array>
#include <cstdio>
#include <string>
using namespace std;

struct BMP {
    int width;
    int height;
    unsigned char header[54];
    unsigned char *pixels;
    int row_padded;
    int size_padded;
};

void writeBMP(string filename, BMP image) {
    string fileName = "Output Files\\" + filename;
    FILE *out = fopen(fileName.c_str(), "wb");
    fwrite(image.header, sizeof(unsigned char), 54, out);
    unsigned char tmp;
    for (int i = 0; i < image.height; i++) {
        for (int j = 0; j < image.width * 3; j += 3) {
            // Convert (B, G, R) to (R, G, B)
            tmp = image.pixels[j];
            image.pixels[j] = image.pixels[j + 2];
            image.pixels[j + 2] = tmp;
        }
    }
    fwrite(image.pixels, sizeof(unsigned char), image.size_padded, out);
    fclose(out);
}

BMP readBMP(string filename) {
    BMP image;
    string fileName = "Input Files\\" + filename;
    FILE *in = fopen(fileName.c_str(), "rb");
    fread(image.header, sizeof(unsigned char), 54, in); // read the 54-byte header
    // extract image height and width from header
    image.width = *(int *) &image.header[18];
    image.height = *(int *) &image.header[22];
    image.row_padded = (image.width * 3 + 3) & (~3); // size of a single row rounded up to a multiple of 4
    image.size_padded = image.row_padded * image.height; // padded full size
    image.pixels = new unsigned char[image.size_padded]; // yeah !
    if (fread(image.pixels, sizeof(unsigned char), image.size_padded, in) == image.size_padded) {
        unsigned char tmp;
        for (int i = 0; i < image.height; i++) {
            for (int j = 0; j < image.width * 3; j += 3) {
                // Convert (B, G, R) to (R, G, B)
                tmp = image.pixels[j];
                image.pixels[j] = image.pixels[j + 2];
                image.pixels[j + 2] = tmp;
            }
        }
    }
    fclose(in);
    return image;
}

BMP rotate(BMP image, double degree) {
    BMP newImage = image;
    unsigned char *pixels = new unsigned char[image.size_padded];
    int height = image.height;
    int width = image.width;
    for (int x = 0; x < height; x++) {
        for (int y = 0; y < width; y++) {
            pixels[(x * width + y) * 3 + 0] = image.pixels[((height - 1 - x) * width + (width - 1 - y)) * 3 + 0];
            pixels[(x * width + y) * 3 + 1] = image.pixels[((height - 1 - x) * width + (width - 1 - y)) * 3 + 1];
            pixels[(x * width + y) * 3 + 2] = image.pixels[((height - 1 - x) * width + (width - 1 - y)) * 3 + 2];
        }
    }
    newImage.pixels = pixels;
    return newImage;
}

int main() {
    BMP image = readBMP("Input-1.bmp");
    image = rotate(image, 180);
    writeBMP("Output.bmp", image);
    return 0;
}
You have a major memory leak: pixels = new unsigned char[size]; is never freed, so there is potentially a leak of several megabytes with every rotation. You have to rewrite the function to keep track of memory allocations.
When you rotate the image by 90 or 270 degrees, the width and height of the image swap. The size may change too, because of padding. The new dimensions have to be recorded in the header.
In C++ you can use fopen, but std::fstream is preferred.
Here is an example which works on Windows, for 24-bit images only. On big-endian systems you can't read the headers directly into the structs the way I do below.
Note, this is for practice only. As #datenwolf explained, you should use a library for real applications. Most platform libraries, such as the Windows GDI library (basic drawing functions), offer solutions for these common tasks.
#include <iostream>
#include <fstream>
#include <string>
#include <Windows.h>

bool rotate(char *src, char *dst, BITMAPINFOHEADER &bi, int angle)
{
    // In a 24-bit image, the length of each row must be a multiple of 4
    int padw = 4 - ((bi.biWidth * 3) % 4);
    if (padw == 4) padw = 0;
    int padh = 4 - ((bi.biHeight * 3) % 4);
    if (padh == 4) padh = 0;
    int pad2 = 0;
    if (padh == 1 || padh == 3) pad2 = 2;
    bi.biHeight += padh;
    int w = bi.biWidth;
    int h = bi.biHeight;
    if (angle == 90 || angle == 270)
    {
        std::swap(bi.biWidth, bi.biHeight);
    }
    else
    {
        bi.biHeight -= padh;
    }
    for (int row = 0; row < h; row++)
    {
        for (int col = 0; col < w; col++)
        {
            int n1 = 3 * (col + w * row) + padw * row;
            int n2 = 0;
            switch (angle)
            {
            case 0:   n2 = 3 * (col + w * row) + padw * row; break;
            case 90:  n2 = 3 * ((h - row - 1) + h * col) + pad2 * col; break;
            case 180: n2 = 3 * (col + w * (h - row - 1)) + padw * (h - row - 1); break;
            case 270: n2 = 3 * (row + h * col) + pad2 * col; break;
            }
            dst[n2 + 0] = src[n1 + 0];
            dst[n2 + 1] = src[n1 + 1];
            dst[n2 + 2] = src[n1 + 2];
        }
    }
    // Zero out the padding bytes at the end of each destination row
    int dpad = (angle == 90 || angle == 270) ? pad2 : padw;
    int dstride = bi.biWidth * 3 + dpad;
    for (int row = 0; row < bi.biHeight; row++)
        for (int col = 0; col < dpad; col++)
            dst[row * dstride + bi.biWidth * 3 + col] = 0;
    bi.biSizeImage = dstride * bi.biHeight;
    return true;
}

int main()
{
    std::string input = "input.bmp";
    std::string output = "output.bmp";
    BITMAPFILEHEADER bf = { 0 };
    BITMAPINFOHEADER bi = { sizeof(BITMAPINFOHEADER) };
    std::ifstream fin(input, std::ios::binary);
    if (!fin) return 0;
    fin.read((char*)&bf, sizeof(bf));
    fin.read((char*)&bi, sizeof(bi));
    int size = 3 * (bi.biWidth + 3) * (bi.biHeight + 3);
    char *src = new char[size];
    char *dst = new char[size];
    fin.read(src, bi.biSizeImage);
    // use 0, 90, 180, or 270 for the angle
    if (rotate(src, dst, bi, 270))
    {
        bf.bfSize = 54 + bi.biSizeImage;
        std::ofstream fout(output, std::ios::binary);
        fout.write((char*)&bf, 14);
        fout.write((char*)&bi, 40);
        fout.write((char*)dst, bi.biSizeImage);
    }
    delete[] src;
    delete[] dst;
    return 0;
}
The BMP file format is a complicated, convoluted beast and there's no such thing as a "simple" BMP file reader. The code you have there makes certain hard-coded assumptions about the files you're trying to read (24bpp true colour, tightly packed, no compression) and it will fall flat on its face when it encounters anything that isn't that specific format. Unfortunately for you, the majority of BMP files out there are not of that kind. To give you an idea of what a fully conforming BMP reader must support, have a look at this page:
http://entropymine.com/jason/bmpsuite/bmpsuite/html/bmpsuite.html
And the code you have up there does not even check whether there's a valid magic-bytes file signature and whether the header is valid. So that's your problem right there: you don't have a BMP file reader. You have something that spits out pixels if you're lucky enough to feed it something that by chance happens to be in the right format.
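As a minimal illustration of the kind of checking being talked about here (a sketch only, not a conforming reader), the header could at least be validated before any pixels are touched, following the question's own header-peeking style:

// Sketch: reject files this simple reader cannot handle.
// Assumes the 54-byte header has already been read into header[54].
bool headerLooksSupported(const unsigned char header[54]) {
    if (header[0] != 'B' || header[1] != 'M')          // magic bytes "BM"
        return false;
    unsigned short bpp = *(const unsigned short*)&header[28];  // biBitCount
    unsigned int compression = *(const unsigned int*)&header[30]; // biCompression
    return bpp == 24 && compression == 0;              // 24bpp, uncompressed (BI_RGB) only
}

Anything that fails this check should be rejected with an error instead of being decoded as if it were tightly packed 24bpp data.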