Let's assume I have a big image whose size is 2560*800 and whose format is RGBA.
I'd like to split this big image into 2 textures whose size is 1280*800 each.
The simple, but stupid, way is:
#define BPP_RGBA 4

GLuint* makeNTexturesFromBigRgbImage(const uint8_t *srcImg,
                                     Size srcSize,
                                     uint32_t numTextures,
                                     uint32_t texWidth,
                                     uint32_t texHeight) {
    uint32_t i, h;
    const uint8_t *pSrcPos;
    uint8_t *pDstPos;
    uint32_t srcStride = srcSize.w * BPP_RGBA;
    GLuint *texIds = (GLuint *)malloc(numTextures * sizeof(GLuint));

    glGenTextures(numTextures, texIds);
    for (i = 0; i < numTextures; i++) {
        uint8_t *subImageBuf = (uint8_t *)malloc(texWidth * texHeight * BPP_RGBA);
        pSrcPos = srcImg + (texWidth * BPP_RGBA) * i; /* left edge of tile i */
        pDstPos = subImageBuf;
        /* the rows of one tile are not contiguous in the source image,
           so copy the tile row by row */
        for (h = 0; h < texHeight; h++) {
            memcpy(pDstPos, pSrcPos, texWidth * BPP_RGBA);
            pSrcPos += srcStride;
            pDstPos += texWidth * BPP_RGBA;
        }
        glBindTexture(GL_TEXTURE_2D, texIds[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, subImageBuf);
        free(subImageBuf);
    }
    return texIds;
}
But, as I mentioned above, this approach is very stupid.
So, I'd like to know the best way that does NOT require a copy operation on the CPU like the one above.
Is it possible with only OpenGL APIs?
For example, is this possible?
Step 1. Make a texture, 2560*800, from the big image.
Step 2. Make 2 textures, 1280*800, from the texture in step 1.
Thanks.
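For reference, one way to avoid the CPU-side row copies (though not the upload itself) is the GL_UNPACK_ROW_LENGTH pixel-store parameter, available on desktop GL and OpenGL ES 3.0+. It tells glTexImage2D the row stride of the source buffer, so each tile can be read directly out of the big image. A minimal sketch, reusing the names from the code above:

glPixelStorei(GL_UNPACK_ROW_LENGTH, srcSize.w); /* source stride, in pixels */
for (i = 0; i < numTextures; i++) {
    glBindTexture(GL_TEXTURE_2D, texIds[i]);
    /* point directly at the left edge of tile i inside the big image */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE,
                 srcImg + (size_t)i * texWidth * BPP_RGBA);
}
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); /* restore the default */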
I'm having an access violation on every GL call after this texture initialization (actually the last GLCALL(glBindTexture(m_Target, bound)); also causes an access violation, so the code at the top is probably what's causing it):
Texture2D::Texture2D(unsigned int format, unsigned int width, unsigned int height, unsigned int unit, unsigned int mipmapLevels, unsigned int layers)
    : Texture(GL_TEXTURE_2D_ARRAY, unit)
{
    unsigned int internalFormat;
    if (format == GL_DEPTH_COMPONENT)
    {
        internalFormat = GL_DEPTH_COMPONENT32;
    }
    else
    {
        internalFormat = format;
    }
    m_Format = format;
    m_Width = width;
    m_Height = height;

    unsigned int bound = 0;
    glGetIntegerv(GL_TEXTURE_BINDING_2D_ARRAY, (int*)&bound);

    GLCALL(glGenTextures(1, &m_ID));
    GLCALL(glActiveTexture(GL_TEXTURE0 + m_Unit));
    GLCALL(glBindTexture(m_Target, m_ID));
    GLCALL(glTexParameteri(m_Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
    GLCALL(glTexParameteri(m_Target, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
    GLCALL(glTexStorage3D(m_Target, mipmapLevels, internalFormat, width, height, layers));
    for (size_t i = 0; i < layers; i++)
    {
        glTexSubImage3D(m_Target, 0, 0, 0, i, m_Width, m_Height, 1, m_Format, s_FormatTypeMap[internalFormat], NULL);
    }
    GLCALL(glBindTexture(m_Target, bound));
}
OGL pointers are initialized with glad at the beginning of the program:
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
{
std::cout << "Failed to initialize GLAD" << std::endl;
return -1;
}
And this only happens with GL_TEXTURE_2D_ARRAY, even when this is the first line of my code (after initialization, of course). Example code:
auto t = Texture2D(GL_DEPTH_COMPONENT, 1024, 1024, 10, 1, 4);
Any idea what may be causing it?
Thanks in advance!
You're passing a NULL for the last argument of glTexSubImage3D, but OpenGL does not allow that:
TexSubImage*D and TextureSubImage*D arguments width, height, depth, format, type, and data match the corresponding arguments to the corresponding TexImage*D command (where those arguments exist), meaning that they accept the same values, and have the same meanings. The exception is that a NULL data pointer does not represent unspecified image contents.
...and there's no text that allows a NULL pointer, therefore you cannot pass NULL.
It's unclear what you're trying to achieve with those glTexSubImage3D calls. Since you're using an immutable texture (glTexStorage3D), you don't need to do anything extra. If instead you want to clear your texture, then you can use glClearTexSubImage, which does accept NULL for data to mean 'clear with zeros'.
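For illustration, a minimal sketch of that clearing approach, reusing the member names from the question. glClearTexSubImage is core since GL 4.4, and the GL_FLOAT type here is an assumption for the depth-component case:

/* clear all layers of the immutable array texture in one call;
   passing NULL for data means "fill with zeros" */
glClearTexSubImage(m_ID, 0,             /* texture object, mip level */
                   0, 0, 0,             /* x, y, layer offsets       */
                   m_Width, m_Height, layers,
                   m_Format, GL_FLOAT, NULL);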
I have a problem with the stb_image library and I thought maybe you have an idea why this isn't working. I have declared a function like this:
bool LoadTextureFile(std::string file, unsigned char ** pixel_data, int * width, int * height, int * n);
In this function I save the result of stbi_load directly into the *pixel_data variable:
*pixel_data = stbi_load(file.c_str(), width, height, n, 0);
// Do some more stuff till return
return true;
So now my pixel_data pointer points to the memory of the result of stbi_load. Now I want to use this result with the glTexImage2D method in my previous function, which calls the LoadTextureFile method before calling the glTexImage2D method of OpenGL, like this:
bool LoadTexture(...)
{
    int tex_width, tex_height, tex_n;
    unsigned char * pixel_data = NULL;
    LoadTextureFile(filename, &pixel_data, &tex_width, &tex_height, &tex_n);
    // Do something special ...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_width, tex_height, 0, GL_RGB, GL_UNSIGNED_BYTE, &pixel_data);
    stbi_image_free(&pixel_data);
    // ...
}
But if I do it like that, then I get a memory access violation at the point of calling glTexImage2D.
If I move this whole magic into LoadTextureFile, right after loading the texture file with stbi_load, then it works:
bool LoadTextureFile(std::string file, unsigned char ** pixel_data, int * width, int * height, int * n)
{
    unsigned char * = = stbi_load(file.c_str(), width, height, n, 0);
    // Do some magic ...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 80, 80, 0, GL_RGB, GL_UNSIGNED_BYTE, pixel_data);
    stbi_image_free(pixel_data);
    return true;
}
Can someone tell me why I get this message and how to solve the problem?
I guess it is a matter of keeping the reserved memory safe, but I'm not really sure how to solve it. I tried this in a simple console application before, and there it works.
Thank you for your help!
This:
unsigned char * pixel_data = NULL;
[...]
glTexImage2D(..., &pixel_data);
is certainly not what you want. You are using the address of the pointer to your pixel data, not the value of the pointer, so you are basically telling the GL to use some random segment of your stack memory as the source for the texture. It should be just
glTexImage2D(..., pixel_data);
and, for the same reason, stbi_image_free(&pixel_data) should be stbi_image_free(pixel_data).
In your second variant, what actually happens is unclear, since the line
unsigned char * = = stbi_load(file.c_str(), width, height, n, 0);
just doesn't make sense and will never compile. So I assume it is a copy-and-paste error from writing the question. But it is hard to guess what your real code would do.
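Putting the two fixes together, a corrected sketch of the first variant (the declarations are taken from the question; the error handling is illustrative):

bool LoadTexture(const std::string& filename)
{
    int tex_width = 0, tex_height = 0, tex_n = 0;
    unsigned char* pixel_data = NULL;
    if (!LoadTextureFile(filename, &pixel_data, &tex_width, &tex_height, &tex_n))
        return false;
    // pass the pointer value, not its address
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_width, tex_height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel_data);
    stbi_image_free(pixel_data); // likewise: the pointer itself, not &pixel_data
    return true;
}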
I am having trouble in OpenGL due to the fact that textures have to be a power of 2 in OpenGL.
What I am doing is the following:
I load a PNG file into an array of unsigned char, using PNGLIB or SOIL. The idea is that I can run through this array and "select" the parts that are relevant for me. For example, imagine I've loaded a person, but I just want to store the head in a separate texture. So I'm looping through the array and selecting only the necessary parts.
First question: I believe that the data in the array is stored in RGBA mode, but I'm not yet sure whether the data is filled row-wise or column-wise. Is it possible to know this?
Second question: Since there is the need to always create power-of-2 textures, it can happen that I have an image with a width of 513 pixels, so that I will need a texture with a width of 1024 px. What happens is that the picture looks like it gets completely "destroyed", because the pixels are not in the places they should be: the texture has a different size than the relevant data filled into the array. So how can I reorganize the array in order to get the contents of the image back? I tried the following, but it doesn't work:
unsigned char* new_memory = 0;
int index = 0;
int new_index = 0;
new_memory = new unsigned char[new_tex_width * new_tex_height * 4];
for(int i=0; i<picture.width; i++) // WIDTH
{
    for(int j=0; j<picture.height; j++) // HEIGHT
    {
        for(int k=0; k<4; k++) // DEPTH
            new_memory[new_index++] = picture.memory[index++];//picture.memory[i + picture.height * (j + 4 * k)];
    }
    new_index += new_tex_height - picture.height;
}
glGenTextures(1, &png_texture);
glBindTexture(GL_TEXTURE_2D, png_texture);
glTexImage2D(GL_TEXTURE_2D, 0, 3, new_tex_width, new_tex_height, 0 , GL_RGBA, GL_UNSIGNED_BYTE, new_memory);
Non-power-of-two textures have been supported for a good while now. However, creating texture atlases and rearranging textures still has a lot of merit; the way we do it is to simply use FreeImage, as it handles all of this for you and supports some of the compressed formats.
If you want to do it your way, and you know it's just a bitmap, then I'd do it more along the lines of (not tested, and does not check inputs, but it should give you an idea):
void Blit( int xOffset, int yOffset, int targetW, int sourceW, int sourceH, unsigned char* source, unsigned char* target, unsigned int bpp )
{
    for( int i = 0; i < sourceH; ++i )
    {
        // copy one whole source row into the right place in the target
        memcpy( target + bpp * ( targetW * ( yOffset + i ) + xOffset ), source + sourceW * i * bpp, sourceW * bpp );
    }
}
Basically, just take each row and memcpy it over.
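For instance, padding a 513-pixel-wide image into a 1024-wide texture could look like this (the sizes and the source pointer are illustrative; the buffer is zero-initialized so the padding stays blank):

const int srcW = 513, srcH = 400, texW = 1024, texH = 512, bpp = 4;
unsigned char* staging = new unsigned char[texW * texH * bpp](); // zeroed
Blit(0, 0, texW, srcW, srcH, source, staging, bpp); // copy into top-left corner
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, staging);
delete[] staging;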
I'm trying to store pixel data using glReadPixels, but so far I've only managed to store it one pixel at a time. I'm not sure if this is the way to go. I currently have this:
unsigned char pixels[3];
glReadPixels(50,50, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixels);
What would be a good way to store it in an array, so that I can get the values like this:
pixels[20][50][0]; // x=20 y=50 -> R value
pixels[20][50][1]; // x=20 y=50 -> G value
pixels[20][50][2]; // x=20 y=50 -> B value
I guess I could simply put it in a loop:
unsigned char pixels[width][height][3];
for ( all pixels on Y axis )
{
    for ( all pixels in X axis )
    {
        glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixels[x][y]);
    }
}
But I have the feeling that there must be a much better way to do this. I do, however, need my array to be laid out as described above the code. So would the for-loop idea be good, or is there a better way?
glReadPixels simply returns bytes in the order R, G, B, R, G, B, ... (based on your setting of GL_RGB) from the bottom left of the screen going up to the top right. From the OpenGL documentation:
glReadPixels returns pixel data from the frame buffer, starting with
the pixel whose lower left corner is at location (x, y), into client
memory starting at location data. Several parameters control the
processing of the pixel data before it is placed into client memory.
These parameters are set with three commands: glPixelStore,
glPixelTransfer, and glPixelMap. This reference page describes the
effects on glReadPixels of most, but not all of the parameters
specified by these three commands.
The overhead of calling glReadPixels thousands of times will most likely take a noticeable amount of time (depending on the window size; I wouldn't be surprised if the loop took 1-2 seconds).
It is recommended that you call glReadPixels only once and store the result in a byte array of size (width - x) * (height - y) * 3. From there you can reference a pixel's component location with data[(py * width + px) * 3 + component], where px and py are the coordinates of the pixel you want to look up, and component is the R (0), G (1), or B (2) component of that pixel.
If you absolutely must have it in a 3-dimensional array, you can write some code to rearrange the 1d array after the glReadPixels call.
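A minimal sketch of that single-call approach, assuming width and height hold the size of the region and you read from the origin:

// one call reads the whole region, bottom row first
unsigned char* data = new unsigned char[width * height * 3];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);

// components 0/1/2 = R/G/B of the pixel at (px, py)
unsigned char r = data[(py * width + px) * 3 + 0];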
If you define your pixel array like this:
unsigned char pixels[MAX_Y][MAX_X][3];
and then access it like this:
pixels[y][x][0] = r;
pixels[y][x][1] = g;
pixels[y][x][2] = b;
then you'll be able to read the pixels with one glReadPixels call:
glReadPixels(left, top, MAX_X, MAX_Y, GL_RGB, GL_UNSIGNED_BYTE, pixels);
What you can do is declare a simple one dimensional array in a struct and use operator overloading for convenient subscript notation
struct Pixel2d
{
    static const int SIZE = 50;
    // subscript helper: (column, row, channel) -> flat index into pixels
    unsigned char& operator()( int nCol, int nRow, int RGB )
    {
        return pixels[ ( nCol * SIZE + nRow ) * 3 + RGB ];
    }
    unsigned char pixels[ SIZE * SIZE * 3 ];
};
int main()
{
    Pixel2d p2darray;
    // read a SIZE x SIZE block of pixels in one call
    glReadPixels(50, 50, Pixel2d::SIZE, Pixel2d::SIZE, GL_RGB, GL_UNSIGNED_BYTE, p2darray.pixels);
    for( int i = 0; i < Pixel2d::SIZE; ++i )
    {
        for( int j = 0; j < Pixel2d::SIZE; ++j )
        {
            unsigned char rpixel = p2darray(i, j, 0);
            unsigned char gpixel = p2darray(i, j, 1);
            unsigned char bpixel = p2darray(i, j, 2);
        }
    }
}
Here you are reading a 50*50 block of pixels in one shot, and the operator()(int nCol, int nRow, int RGB) overload provides the needed convenience. For performance reasons you don't want to make too many glReadPixels calls.
Currently, I'm able to load a static-sized texture which I have created. In this case it's 512 x 512.
This code is from the header:
#define TEXTURE_WIDTH 512
#define TEXTURE_HEIGHT 512
GLubyte textureArray[TEXTURE_HEIGHT][TEXTURE_WIDTH][4];
Here's the usage of glTexImage2D:
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    TEXTURE_WIDTH, TEXTURE_HEIGHT,
    0, GL_RGBA, GL_UNSIGNED_BYTE, textureArray);
And here's how I'm populating the array (rough example, not exact copy from my code):
for (int i = 0; i < getTexturePixelCount(); i++)
{
    int row = i / TEXTURE_WIDTH, column = i % TEXTURE_WIDTH;
    textureArray[row][column][0] = (GLubyte)pixelValue1;
    textureArray[row][column][1] = (GLubyte)pixelValue2;
    textureArray[row][column][2] = (GLubyte)pixelValue3;
    textureArray[row][column][3] = (GLubyte)pixelValue4;
}
How do I change that so that there's no need for TEXTURE_WIDTH and TEXTURE_HEIGHT? Perhaps I could use a pointer-style array and dynamically allocate the memory...
Edit:
I think I see the problem: in C++ it can't really be done. The workaround, as pointed out by Budric, is to use a single-dimensional array whose size is the three dimensions multiplied together:
GLbyte *array = new GLbyte[xMax * yMax * zMax];
And to access, for example, x/y/z of 1/2/3, you'd need to do:
GLbyte byte = array[1 + 2 * xMax + 3 * xMax * yMax];
However, the problem is, I don't think the glTexImage2D function supports this. Can anyone think of a workaround that would work with this OpenGL function?
Edit 2:
Attention OpenGL developers: this can be overcome by using a single-dimensional array of pixels, laid out row after row with each pixel's channels stored consecutively, so that the flat index is (row * width + column) * channels + channel. There is no need to use a 3-dimensional array. In this case I've had to use this workaround, as dynamically sized 3-dimensional arrays are apparently not directly possible in C++.
OK, since this took me ages to figure out, here it is:
My task was to implement the example from the OpenGL Red Book (9-1, p. 373, 5th ed.) with a dynamically allocated texture array.
The example uses:
static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
Trying to allocate a 3-dimensional array, as you would guess, won't do the job. Something like this does NOT work:
GLubyte*** checkImage;
checkImage = new GLubyte**[HEIGHT];
for (int i = 0; i < HEIGHT; ++i)
{
    checkImage[i] = new GLubyte*[WIDTH];
    for (int j = 0; j < WIDTH; ++j)
        checkImage[i][j] = new GLubyte[DEPTH];
}
You have to use a one-dimensional array:
unsigned int depth = 4;
GLubyte *checkImage = new GLubyte[height * width * depth];
You can access the elements using these loops:
for(unsigned int ix = 0; ix < height; ++ix)
{
    for(unsigned int iy = 0; iy < width; ++iy)
    {
        // checkerboard: flip the colour every 8 pixels in each direction
        int c = (((ix & 0x8) == 0) ^ ((iy & 0x8) == 0)) * 255;
        checkImage[ix * width * depth + iy * depth + 0] = c;   //red
        checkImage[ix * width * depth + iy * depth + 1] = c;   //green
        checkImage[ix * width * depth + iy * depth + 2] = c;   //blue
        checkImage[ix * width * depth + iy * depth + 3] = 255; //alpha
    }
}
Don't forget to delete it properly:
delete [] checkImage;
Hope this helps...
You can use
int width = 1024;
int height = 1024;
GLubyte * texture = new GLubyte[4 * width * height];
...
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    width, height,
    0, GL_RGBA, GL_UNSIGNED_BYTE, texture);
delete [] texture; // remove the no-longer-needed local copy of the texture
However, you still need to specify the width and height to OpenGL in the glTexImage2D call. This call copies the texture data, and that copy is managed by OpenGL. You can delete, resize, or change your original texture array all you want and it won't make a difference to the texture you gave to OpenGL.
Edit:
C/C++ deals only with one-dimensional arrays. The fact that you can do texture[a][b] is hidden and converted by the compiler at compile time. The compiler must know the number of columns and will do texture[a * cols + b].
Use a class to hide the allocation, access to the texture.
For academic purposes, if you really want dynamic multi dimensional arrays the following should work:
int rows = 16, cols = 16;
char * storage = new char[rows * cols];
char ** accessor2D = new char *[rows];
for (int i = 0; i < rows; i++)
{
    accessor2D[i] = storage + i*cols;
}
accessor2D[5][5] = 2;
assert(storage[5*cols + 5] == accessor2D[5][5]);
delete [] accessor2D;
delete [] storage;
Notice that in all these cases I'm still using 1D arrays; they are just arrays of pointers, and arrays of pointers to pointers. There's memory overhead to this, and it's shown here for a 2D array without colour components; for 3D dereferencing this gets really messy. Don't use this in your code.
You could always wrap it up in a class. If you are loading the image from a file, you get the height and width out with the rest of the data (how else could you use the file?), so you could store them in a class that wraps the file loading instead of using preprocessor defines. Something like:
class ImageLoader
{
    ...
    ImageLoader(const char* filename, ...);
    ...
    int GetHeight();
    int GetWidth();
    void* GetDataPointer();
    ...
};
Even better you could hide the function calls to glTexImage2d in there with it.
class GLImageLoader
{
    ...
    GLImageLoader(const char* filename, ...);
    ...
    GLuint LoadToTexture2D(); // returns texture id
    ...
};
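A rough sketch of what LoadToTexture2D might do, assuming GLImageLoader exposes the same GetWidth/GetHeight/GetDataPointer accessors as the ImageLoader sketch above and that the decoded data is RGBA (all of this is illustrative, not a fixed API):

GLuint GLImageLoader::LoadToTexture2D()
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // upload the decoded file data; OpenGL keeps its own copy
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, GetWidth(), GetHeight(), 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, GetDataPointer());
    return tex;
}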