Related
I'm trying to get the RGBA and Z (depth) channels from a stereo 3D EXR file, but I'm not sure how to do it; I'm very new to working with OpenEXR. I found this Python code (https://gist.github.com/jadarve/de3815874d062f72eaf230a7df41771b) where the author extracts the color and depth channels, but I don't understand how it works. I also read the official OpenEXR documentation, but I still can't get the colors out of the file, and I don't understand where Array2D is coming from in this example (https://openexr.readthedocs.io/en/latest/ReadingAndWritingImageFiles.html#reading-an-image-file). I wrote my code as below, and I'm getting errors like "cannot cast from type 'float' to pointer type 'char *'", even though the code in the example looks like it works fine. (I want the RGBA and Z values to be floats.)
InputFile file(filename.c_str());
Box2i dw = file.header().dataWindow();
int width  = dw.max.x - dw.min.x + 1;
int height = dw.max.y - dw.min.y + 1;
printf("rank: %d, width: %d, height: %d\n", rank, width, height);

FrameBuffer framebuffer;
Array2D<float> r;
Array2D<float> g;
Array2D<float> b;
Array2D<float> a;
r.resizeErase(height, width);
g.resizeErase(height, width);
b.resizeErase(height, width);
a.resizeErase(height, width);

// insert into FB
framebuffer.insert("R",
    Slice(HALF,
        (char *)(r[0][0] - dw.min.x - dw.min.y * width),
        sizeof(r[0][0]) * 1,     // xStride
        sizeof(r[0][0]) * width, // yStride
        1, 1,                    // x/y sampling
        0.0));
framebuffer.insert("G",
    Slice(HALF,
        (char *)(g[0][0] - dw.min.x - dw.min.y * width),
        sizeof(g[0][0]) * 1,     // xStride
        sizeof(g[0][0]) * width, // yStride
        1, 1,                    // x/y sampling
        0.0));
framebuffer.insert("B",
    Slice(HALF,
        (char *)(b[0][0] - dw.min.x - dw.min.y * width),
        sizeof(b[0][0]) * 1,     // xStride
        sizeof(b[0][0]) * width, // yStride
        1, 1,                    // x/y sampling
        0.0));
framebuffer.insert("A",
    Slice(HALF,
        (char *)(a[0][0] - dw.min.x - dw.min.y * width),
        sizeof(a[0][0]) * 1,     // xStride
        sizeof(a[0][0]) * width, // yStride
        1, 1,                    // x/y sampling
        0.0));

file.setFrameBuffer(framebuffer);
file.readPixels(dw.min.y, dw.max.y);
Do you have any advice on how to go about it?
I need to update an array of pixels on the screen every frame. It works initially; however, when I try to resize the window, it glitches and eventually throws EXC_BAD_ACCESS (code 1). I have already checked that the buffer is allocated to the correct size before every frame, but that does not seem to affect the result.
#include <stdio.h>
#include <stdlib.h>
#include <GLUT/glut.h>

unsigned char *buffer = NULL;
int width = 400, height = 400;
unsigned int screenTexture;

void Display()
{
    for (int y = 0; y < height; y += 4) {
        for (int x = 0; x < width; x++) {
            buffer[(x + y * width) * 3] = 255;
        }
    }

    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    // This call results in EXC_BAD_ACCESS (code 1), although the buffer is always correctly allocated
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, buffer);

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, height, 0, 0, 1);
    glMatrixMode(GL_MODELVIEW);

    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2i(0, 0);
    glTexCoord2f(1, 0); glVertex2i(width, 0);
    glTexCoord2f(1, 1); glVertex2i(width, height);
    glTexCoord2f(0, 1); glVertex2i(0, height);
    glEnd();

    glFlush();
    glutPostRedisplay();
}

void Resize(int w, int h)
{
    width = w;
    height = h;
    buffer = (unsigned char *)realloc(buffer, sizeof(unsigned char) * width * height * 3);
    if (!buffer) {
        printf("Error reallocating buffer\n");
        exit(1);
    }
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
    glutInitWindowSize(width, height);
    glutCreateWindow("Rasterizer");
    glutDisplayFunc(Display);
    glutReshapeFunc(Resize);

    glGenTextures(1, &screenTexture);
    glBindTexture(GL_TEXTURE_2D, screenTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glDisable(GL_DEPTH_TEST);

    buffer = (unsigned char *)malloc(sizeof(unsigned char) * width * height * 3);
    glutMainLoop();
}
After resizing, the screen does not display properly either.
What is causing this problem? The code compiles and runs; you just have to link GLUT and OpenGL.
As @genpfault mentioned, OpenGL's default unpack alignment expects each row of pixel data to be a multiple of 4 bytes, not the tightly packed 3 bytes per pixel your code assumes.
Instead of changing GL_UNPACK_ALIGNMENT, you can also change your code to the matching assumption of 4 bytes per pixel via a simple struct:
struct pixel {
    unsigned char r, g, b;
    unsigned char unused;
};
Then, instead of using the magic constant 3, you can use the much clearer sizeof(struct pixel). This makes the code easier to read and conveys its intent, and it doesn't cost anything extra (the structure is effectively an array of 4 bytes).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, buffer);
                                                         ^^^^^^
GL_UNPACK_ALIGNMENT defaults to 4, not 1. So OpenGL expects every row of your pixel data to start on a 4-byte boundary, which your tightly packed 3-byte pixels don't guarantee.
Set GL_UNPACK_ALIGNMENT to 1 using glPixelStorei().
It sounds like you found something that works, but I don't think the problem was properly diagnosed. I believe the biggest issue is in the way you initialize your texture data here:
for (int y = 0; y < height; y += 4) {
    for (int x = 0; x < width; x++) {
        buffer[(x + y * width) * 3] = 255;
    }
}
This only writes every 4th row, and within those rows only the first (red) byte of each pixel. To initialize all the data to white, you need to increment the row number (y) by 1 instead of 4 and set all 3 components inside the loop:
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        buffer[(x + y * width) * 3    ] = 255;
        buffer[(x + y * width) * 3 + 1] = 255;
        buffer[(x + y * width) * 3 + 2] = 255;
    }
}
You also need to set GL_UNPACK_ALIGNMENT to 1:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This controls the row alignment (not the pixel alignment, as suggested in a couple of other answers). The default value of GL_UNPACK_ALIGNMENT is 4, but with the 3 bytes per pixel of the GL_RGB format you are using, the size of a row is a multiple of 4 bytes only when the number of pixels per row is a multiple of 4. So for tightly packed rows with 3 bytes per pixel, the value needs to be set to 1.
So I have this piece of code, which pretty much draws various 2D textures on the screen, though multiple sprites have to be 'dissected' from one texture (a spritesheet). The problem is that rotation is not working properly: while the sprite rotates, it does not rotate around the center of the texture, which is what I am trying to achieve. I have narrowed it down to the translation being incorrect:
glTranslatef(x + sr->x/2 - sr->w/2,
             y + sr->y/2 - sr->h/2, 0);
glRotatef(ang, 0, 0, 1.f);
glTranslatef(-x + -sr->x/2 - -sr->w/2,
             -y + -sr->y/2 - -sr->h/2, 0);
X and Y are the position it is being drawn to; the sheet rect struct contains the X and Y position of the sprite within the texture, along with w and h, the width and height of the 'sprite'. I've tried various other formulas, such as:
glTranslatef(x, y, 0);
and the three below, also with the negative signs switched to positive (x - y to x + y):
glTranslatef(sr->x/2 - sr->w/2, sr->y/2 - sr->h/2, 0);
glTranslatef(sr->x - sr->w/2, sr->y - sr->h/2, 0 );
glTranslatef(sr->x - sr->w, sr->y - sr->w, 0 );
glTranslatef(.5,.5,0);
It might also be helpful to say that:
glOrtho(0,screen_width,screen_height,0,-2,10);
is in use.
I've tried reading various tutorials, going through forums, and asking people, but there doesn't seem to be a solution that works, nor can I find any useful resource that explains how to find the center of the image in order to translate it to (0,0). I'm pretty new to OpenGL, so a lot of this stuff takes a while for me to digest.
Here's the entire function:
void Apply_Surface(float x, float y, Sheet_Container* source, Sheet_Rect* sr, float ang = 0, bool flipx = 0, bool flipy = 0, int e_x = -1, int e_y = -1)
{
    float imgwi, imghi;

    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, source->rt());

    // rotation
    imghi = source->rh();
    imgwi = source->rw();

    Sheet_Rect t_shtrct(0, 0, imgwi, imghi);
    if (sr == NULL) // in case a sheet rect is not provided, assume the
                    // width and height of the texture, with x/y of 0/0
        sr = &t_shtrct;

    glPushMatrix();

    int wid, hei;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &wid);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &hei);

    glTranslatef(-sr->x + -sr->w,
                 -sr->y + -sr->h, 0);
    glRotatef(ang, 0, 0, 1.f);
    glTranslatef(sr->x + sr->w,
                 sr->y + sr->h, 0);

    // Yeah, an outdated way of drawing to the screen, but it works for now.
    GLfloat tex[] = {
        (sr->x + sr->w *  flipx) / imgwi, 1 - (sr->y + sr->h * !flipy) / imghi,
        (sr->x + sr->w *  flipx) / imgwi, 1 - (sr->y + sr->h *  flipy) / imghi,
        (sr->x + sr->w * !flipx) / imgwi, 1 - (sr->y + sr->h *  flipy) / imghi,
        (sr->x + sr->w * !flipx) / imgwi, 1 - (sr->y + sr->h * !flipy) / imghi
    };
    GLfloat vertices[] = { // vertices to put on screen
        x,           (y + sr->h),
        x,            y,
        (x + sr->w),  y,
        (x + sr->w), (y + sr->h)
    };
    // index array
    GLubyte index[6] = { 0, 1, 2, 2, 3, 0 };

    float fx = (x / (float)screen_width)  - (float)sr->w / 2 / (float)imgwi;
    float fy = (y / (float)screen_height) - (float)sr->h / 2 / (float)imghi;

    // activate arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // pass vertex and texture information
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, tex);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, index);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    glPopMatrix();
    glDisable(GL_TEXTURE_2D);
}
Sheet container class:
class Sheet_Container {
    GLuint texture;
    int width, height;
public:
    Sheet_Container();
    Sheet_Container(GLuint, int = -1, int = -1);
    void Load(GLuint, int = -1, int = -1);
    float rw();
    float rh();
    GLuint rt();
};
Sheet rect class:
struct Sheet_Rect {
    float x, y, w, h;
    Sheet_Rect();
    Sheet_Rect(int xx, int yy, int ww, int hh);
};
Image loading function:
Sheet_Container Game_Info::Load_Image(const char* fil)
{
    ILuint t_id;
    ilGenImages(1, &t_id);
    ilBindImage(t_id);
    ilLoadImage(const_cast<char*>(fil));
    int width = ilGetInteger(IL_IMAGE_WIDTH), height = ilGetInteger(IL_IMAGE_HEIGHT);
    return Sheet_Container(ilutGLLoadImage(const_cast<char*>(fil)), width, height);
}
Your quad (two triangles) is centered at:
( x + sr->w / 2, y + sr->h / 2 )
You need to move that point to the origin, rotate, and then move it back:
glTranslatef( (x + sr->w / 2.0f),  (y + sr->h / 2.0f), 0.0f); // 3rd
glRotatef   (0, 0, 0, 1.f);                                   // 2nd
glTranslatef(-(x + sr->w / 2.0f), -(y + sr->h / 2.0f), 0.0f); // 1st
Here is where I think you are getting tripped up. People naturally assume that OpenGL applies transformations in the order they appear (top to bottom); that is not the case. OpenGL effectively swaps the operands every time it multiplies two matrices:
M1 x M2 x M3
~~~~~~~
   (1)
~~~~~~~~~~~~
     (2)

(1) M2 * M1
(2) M3 * (M2 * M1) --> M3 * M2 * M1   (row-major / textbook math notation)
The technical term for this is post-multiplication; it has to do with the way matrices are implemented in OpenGL (column-major). Suffice it to say, you should generally read glTranslatef, glRotatef, glScalef, etc. calls from bottom to top.
With that out of the way, your current rotation does not make any sense.
You are telling GL to rotate 0 degrees around an axis: <0,0,1> (the z-axis in other words). The axis is correct, but a 0 degree rotation is not going to do anything ;)
I need to do some CPU-side operations on framebuffer data previously drawn by OpenGL. Sometimes the resolution I need to draw at is higher than the texture resolution, so I thought I would pick a SIZE for the viewport and the target FBO, draw, read back into a CPU buffer, then move the viewport elsewhere in the space and repeat; in CPU memory I will then have all the color data I need. Unfortunately, for my purposes I need to keep a 1-pixel overlap across the vertical and horizontal borders of my tiles. So, imagining a situation with four tiles of size SIZE x SIZE:
0 1
2 3
I need the last column of tile 0 to hold the same data as the first column of tile 1, and the last row of tile 0 to hold the same data as the first row of tile 2, for example. Hence, the total resolution I will draw at is
(SIZEW * ntilesHor - (ntilesHor - 1)) x (SIZEH * ntilesVer - (ntilesVer - 1))
For simplicity, SIZEW and SIZEH will be the same, as will ntilesHor and ntilesVer. My code now looks like:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glViewport(0, 0, tilesize, tilesize);
glPolygonMode(GL_FRONT, GL_FILL);

for (int i = 0; i < ntiles; ++i)
{
    for (int j = 0; j < ntiles; ++j)
    {
        tileid = i * ntiles + j;

        int left   = max(0, (j * tilesize) - j);
        int right  = left + tilesize;
        int bottom = max(0, (i * tilesize) - i);
        int top    = bottom + tilesize;

        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(left, right, bottom, top, -1, 0);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Draw display list
        glCallList(DList);

        // Texture target of the fbo
        glReadBuffer(tex_render_target);

        // Read back to a preallocated CPU buffer
        glReadPixels(0, 0, tilesize, tilesize, GL_BGRA, GL_UNSIGNED_BYTE, colorbuffers[tileid]);
    }
}
The code runs, and the various "colorbuffers" seem to hold what looks like color data, similar to what my draw should produce; only, the overlap I need is not there: the last column of tile 0 and the first column of tile 1 hold different values.
Any idea?
int left   = max(0, (j * tilesize) - j);
int right  = left + tilesize;
int bottom = max(0, (i * tilesize) - i);
int top    = bottom + tilesize;
I'm not sure about those margins. If your intention is a pixel-based mapping, as suggested by your viewport, with some constant overlap, then the -j and -i terms make no sense, as they're nonuniform. I think you want some constant value there, and you don't need the max either. Since you want a 1-pixel overlap, your constant will be 0, because then you have
right_j == left_(j+1)
and the same for bottom and top, which is exactly what you intend.
I am trying to do just that. I have an image with the various tiles of an explosion in my game. I want to preprocess the explosion tiles, compose the image, and then blit it onto the screen.
Here is the tile sheet with the alpha mask: Explosion
Now, I want to blit these and have them maintain their alpha transparency onto a surface which I can then render.
Here is my code:
SDL_Surface* SpriteManager::buildExplosion(int id, SDL_Surface* image, int size)
{
    // Create the surface that will hold the explosion image
    SDL_Surface* explosion = SDL_CreateRGBSurface(SDL_HWSURFACE, size * 32, size * 32, 32, 0, 0, 0, 255);

    // Our source and destination rectangles
    SDL_Rect srcrect;
    SDL_Rect dstrect;

    int parentX = sprites[id].x;
    int parentY = sprites[id].y;
    int middle = size / 2;

    // Create the first image
    srcrect.x = sprites[id].imgBlockX * 32; // default for now
    srcrect.y = sprites[id].imgBlockY * 32; // default for now
    srcrect.w = 32;
    srcrect.h = 32;

    // Get the location it should be applied to
    dstrect.x = middle * 32;
    dstrect.y = middle * 32;
    dstrect.w = 32;
    dstrect.h = 32;

    // Apply the texture
    SDL_BlitSurface(image, &srcrect, explosion, &dstrect);
    errorLog.writeError("Applying surface from x: %i y: %i to x: %i y: %i", srcrect.x, srcrect.y, dstrect.x, dstrect.y);

    // Iterate through each explosion
    for (int i = 0; i < sprites[id].children.size(); i++)
    {
        // Get the texture source
        srcrect.x = 0; // default for now
        srcrect.y = 0; // default for now
        srcrect.w = 32;
        srcrect.h = 32;

        // Get the location it should be applied to
        dstrect.x = sprites[id].children[i].x - parentX * 32;
        dstrect.y = sprites[id].children[i].y - parentY * 32;
        dstrect.w = 32;
        dstrect.h = 32;

        // Apply the texture
        SDL_BlitSurface(image, &srcrect, explosion, &dstrect);
    }

    //return img;
    return explosion;
}
I suspect it has to do with this line, but I am really at a loss:
SDL_Surface* explosion = SDL_CreateRGBSurface(SDL_HWSURFACE, size * 32 , size * 32, 32, 0, 0, 0, 255 );
The SDL_Surface called image is the image I linked above just to make that clear. If anyone sees the error of my ways, many thanks!
My problem: the code above blits either a completely invisible surface or a black surface with the images on it.
I guess I am curious whether it is possible to do what I described above, and whether I can modify this code to make it work.