How can I render RGBA4444 image in OpenGL? - c++

I am trying to render my image in RGBA4444 without converting it to RGBA8888, but...
// Define vertices
float vertices[] = {-1.f,  1.f, 0.f, 0.f, 1.f,  // left top
                     1.f,  1.f, 0.f, 1.f, 1.f,  // right top
                    -1.f, -1.f, 0.f, 0.f, 0.f,  // left bottom
                     1.f, -1.f, 0.f, 1.f, 0.f}; // right bottom
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, image.width, image.height, 0, GL_BGRA,
             GL_UNSIGNED_SHORT_4_4_4_4_REV, image.pixels.data());

By default OpenGL assumes that the start of each row of an image is aligned to 4 bytes, because the GL_UNPACK_ALIGNMENT parameter defaults to 4. Since each pixel of an image with the type GL_UNSIGNED_SHORT_4_4_4_4_REV occupies only 2 bytes, the size of a row of the image may not be aligned to 4 bytes.
When the image is loaded into a texture object and 2*image.width is not divisible by 4, GL_UNPACK_ALIGNMENT has to be set to 2 before specifying the texture image with glTexImage2D:
glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, image.width, image.height, 0, GL_BGRA,
             GL_UNSIGNED_SHORT_4_4_4_4_REV, image.pixels.data());
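As a small sketch (assuming the image object from the question, with 2 bytes per pixel), the alignment can also be chosen from the actual row size:
// A row of GL_UNSIGNED_SHORT_4_4_4_4_REV pixels is image.width * 2 bytes.
// An alignment of 2 is always safe here; 4 only works when the row size is divisible by 4.
const GLint rowBytes = image.width * 2;
glPixelStorei(GL_UNPACK_ALIGNMENT, (rowBytes % 4 == 0) ? 4 : 2);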

Related

OpenGL texture gets mirrored

So I am trying to render my texture, and I know it works; however, when I run my program it renders like this:
Mirrored texture
My code to set the texture coordinates is here:
Vertex vertices[] =
{
    // POSITION                  // COLOR                    // Texture Co-ordinates
    glm::vec3(-.5f,  .5f, 0.f),  glm::vec3(1.f, 0.f, 0.f),   glm::vec2(0.f, 1.f), // [R]
    glm::vec3(-.5f, -.5f, 0.f),  glm::vec3(0.f, 1.f, 0.f),   glm::vec2(0.f, 0.f), // [G]
    glm::vec3( .5f, -.5f, 0.f),  glm::vec3(0.f, 0.f, 1.f),   glm::vec2(1.f, 0.f), // [B]
    glm::vec3( .5f,  .5f, 0.f),  glm::vec3(1.f, 1.f, 0.f),   glm::vec2(1.f, 1.f)  // [Y]
};
These are my indices:
GLuint indices[] =
{
    0, 1, 2, // Triangle 1
    0, 2, 3  // Triangle 2
};
unsigned noOfIndices = sizeof(indices) / sizeof(GLuint); // Calculate the number of indices
I load my image via my header file:
// TEXTURE VALUES //
int imgWidth = 0, imgHeight = 0;
unsigned char* image = stbi_load("images/letterCube.png", &imgWidth, &imgHeight, NULL, STBI_rgb_alpha);
I generate my texture here:
glGenTextures(1, &texture0); // Generate Texture
glBindTexture(GL_TEXTURE_2D, texture0); // Bind Texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // Set Texture Params (Wrap S = X)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); // Set Texture Params (Wrap T = Y)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR); // A type of anti-aliasing (magnification)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // A type of anti-aliasing (minification)
if (image)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imgWidth, imgHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
    glGenerateMipmap(GL_TEXTURE_2D); // Clones image for different resolutions
}
else
{
    GE_CORE_ERROR("Texture loading failed");
}
glActiveTexture(0);
glBindTexture(GL_TEXTURE_2D, 0);
stbi_image_free(image);
In my update function I activate my textures:
glUseProgram(core_program);
glUniform1i(glGetUniformLocation(core_program, "texture0"), 0);
// ACTIVATE TEXTURE
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture0);
In my fragment_core.glsl file I have this:
#version 440
in vec3 vs_position;
in vec3 vs_color;
in vec2 vs_texturePoints;
out vec4 fs_color;
uniform sampler2D texture0;
void main()
{
    //fs_color = vec4(vs_color, 1.f);
    fs_color = texture(texture0, vs_texturePoints) * vec4(vs_color, 1.f);
}
I am trying to render the texture correctly; however, it seems like it mirrors from the other triangle. Is it a coordinate problem, or am I missing the obvious?

Why does this code draw a white rectangle instead of red in OpenGL?

I am doing some tests with OpenGL and trying to draw a red rectangle. This is the code:
qDebug(__FUNCTION__);
float *rgbImage = (float *)malloc(width * height * 3 * sizeof(float));
float *rgbImagePtr = rgbImage;
qDebug("Initializing");
int y, x;
for(y = 0; y < height; y++)
{
    for(x = 0; x < width; x++)
    {
        *rgbImagePtr++ = 255; // R
        *rgbImagePtr++ = 0;   // G
        *rgbImagePtr++ = 0;   // B
    }
}
// Generate false image
qDebug("Creating texture");
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
qDebug("drawing image");
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_FLOAT, rgbImage);
qDebug("drawing image finished");
When I run my example, I get a white rectangle instead of the red one I expected. What's the issue?
EDIT 1
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f( 0.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex3f(256.0f, 0.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex3f(256.0f, 256.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f( 0.0f, 256.0f, 0.0f);
glEnd();
Draw code added
Most likely the problem is that your texture is considered incomplete due to the lack of mipmap levels. You either have to turn off mipmapping for the texture or supply a full stack of mipmap levels.
Additionally, it seems you didn't enable the GL_TEXTURE_2D target on the active texture unit (required if using the fixed-function pipeline).
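A minimal sketch of the first option, assuming the texture object from the question (use a non-mipmap minification filter and enable fixed-function texturing):
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmap levels required
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEnable(GL_TEXTURE_2D); // required for the fixed-function pipeline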
The call to glTexImage2D has an incorrect internalFormat parameter:
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_FLOAT, rgbImage);
                               ^-- here
The documentation says that it specifies the number of color components in the texture, but it also says that the value should be one of the enums given at the end of the page.
In your case, instead of 3, it should be GL_RGB.
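A minimal correction of that call, using the variables from the question (note that GL_FLOAT pixel data is expected in the 0.0 to 1.0 range, so the 255 values are clamped to 1.0 and the result is still red):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_FLOAT, rgbImage);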

Texture mapping does not work correctly using OpenGL

The goal of my program is to mix the video stream from my Kinect with a simple triangle using OpenGL. To display my video stream I load a simple quad and I put my video frame buffer on it like classical texture mapping. Up to here everything is fine. But if I add a simple colored triangle to my scene (red, green and blue for my three vertices), the triangle is displayed correctly but my texture is tinted blue. In fact the API seems to keep the last color loaded for the last vertex of my triangle, so the blue color here. But I don't understand why it keeps it.
Here's a screenshot of the first frame (everything is correct):
And the appearance of the second and following frames:
And my C++ rendering code:
getKinectVideoData(_videoData); //Method that fills the video frame buffer for each frame
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClearDepth(1.0f);
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0f, (float)width/(float)height, 1.0f, 100.0f);
gluLookAt(0.0f, 0.0f, 5.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
glMatrixMode(GL_MODELVIEW);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, (GLvoid*)_videoData);
glBindTexture(GL_TEXTURE_2D, 0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, (GLvoid*)_videoBuffer->GetBuffer());
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(-1, -1, 0.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(1, -1, 0.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(1, 1, 0.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(-1, 1, 0.0f);
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
glDisable(GL_TEXTURE_2D);
glBegin(GL_TRIANGLES);
glColor3ub(255, 0, 0);
glVertex3f(0.0f, 0.75f, 0.0f);
glColor3ub(0, 255, 0);
glVertex3f(0.75f, 0.0f, 0.0f);
glColor3ub(0, 0, 255);
glVertex3f(-0.75f, 0.0f, 0.0f);
glEnd();
I clear the buffer at the beginning of each frame using the glClear call, so it's very strange.
Can anyone help me?
You have to reset the color back to (255, 255, 255) (using glColor), because the current color is modulated with the texture color (the default texture environment is GL_MODULATE).
On the first frame, your color is full white, and thus the image is displayed correctly. However, the last call to glColor is (0,0,255), and then the loop goes back to the beginning.
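A small sketch of the fix, placed in the render loop from the question right before the textured quad is drawn:
glColor4ub(255, 255, 255, 255); // reset the current color to opaque white so the texture is not tinted
glBegin(GL_QUADS);
// ... textured quad exactly as in the question ...
glEnd();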

Rotate Texture2d using rotation matrix

I want to rotate a 2D texture in cocos2d-x (OpenGL ES). From what I have found, I should use a rotation matrix
like this:
x = cos(deg) * x - sin(deg) * y
y = sin(deg) * x + cos(deg) * y
but when I try to implement this formula I fail. My code is like this (I want the original x and y to stay the same):
I updated my code to this, but it is still not working!
GLfloat coordinates[] = {
    0.0f,            text->getMaxS(),
    text->getMaxS(), text->getMaxT(),
    0.0f,            0.0f,
    text->getMaxS(), 0.0f };
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
GLfloat vertices[] = { rect.origin.x, rect.origin.y, 1.0f,
rect.origin.x + rect.size.width, rect.origin.y, 1.0f,
rect.origin.x, rect.origin.y + rect.size.height, 1.0f,
rect.origin.x + rect.size.width, rect.origin.y + rect.size.height, 1.0f };
glMultMatrixf(vertices);
glTranslatef(p1.x, p1.y, 0.0f);
glRotatef(a,0,0,1);
glBindTexture(GL_TEXTURE_2D, text->getName());
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
glColor4f( 0, 0, 250, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glRotatef(-a, 0.0f, 0.0f, -1.0f);
glPopMatrix();
If you're using OpenGL in two dimensions, you can still use the various transformation functions. For glRotate(), you have to pass it an axis of (0,0,1):
glRotatef(angle, 0, 0, 1);
You seem to be confused about a few aspects of OpenGL. At the minute you have this:
glMultMatrixf(vertices);
glTranslatef(p1.x, p1.y, 0.0f);
glRotatef(a,0,0,1);
glBindTexture(GL_TEXTURE_2D, text->getName());
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
glColor4f( 0, 0, 250, 1);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glRotatef(-a, 0.0f, 0.0f, -1.0f);
glPopMatrix();
That would ostensibly do the following:
// reinterpret your vertices as a matrix; multiply the
// current matrix by the one formed from your vertices
glMultMatrixf(vertices);
// rotate by a degrees, then translate to p1
glTranslatef(p1.x, p1.y, 0.0f);
glRotatef(a,0,0,1);
// bind the relevant texture, supply the vertices
// as vertices this time, supply texture coordinates
glBindTexture(GL_TEXTURE_2D, text->getName());
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
// set a colour of (0, 0, 1, 1) — probably you
// wanted glColor4ub to set a colour of (0, 0, 250/255, 1)?
glColor4f( 0, 0, 250, 1);
// draw some geometry
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// perform the same rotation as before a second time;
// giving either a and -1, or -a and 1 would have been
// the opposite rotation
glRotatef(-a, 0.0f, 0.0f, -1.0f);
// remove the current matrix from the stack
glPopMatrix();
What you probably want is:
// push the current matrix, so that the following
// transformations can be undone with a pop
glPushMatrix();
// rotate by a degrees, then translate to p1
glTranslatef(p1.x, p1.y, 0.0f);
glRotatef(a,0,0,1);
// bind the relevant texture, supply the vertices
// as vertices this time, supply texture coordinates
glBindTexture(GL_TEXTURE_2D, text->getName());
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, coordinates);
// set a colour of (0, 0, 250/255, 1)
glColor4ub( 0, 0, 250, 255);
// draw some geometry
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// remove the current matrix from the stack
glPopMatrix();
The current transform matrices are applied to all geometry; you don't flag up which geometry they apply to and which they don't. Pushes and pops should be paired, and they are the way you keep geometry from being affected by prior transformations when that's what you want. So you don't manually perform the inverse rotation, and you need to push before you do all the stuff that changes the matrix. I've also switched you to glColor4ub for the reason given above: you appear to be using unsigned bytes to push colours in, but OpenGL uses the range from 0.0 to 1.0 to represent a colour. glColor4ub will automatically map from the former to the latter.
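To illustrate that mapping (a standalone example, not part of the corrected code above):
glColor4ub(0, 0, 250, 255);              // unsigned bytes in the range 0..255 per channel
glColor4f(0.f, 0.f, 250.f / 255.f, 1.f); // the equivalent call with floats in the range 0.0..1.0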

drawing interleaved VBO with glDrawArrays

I'm currently using glDrawElements to render using multiple VBOs (vertex, color, texture, and index). I've found that very few vertices are shared, so I'd like to switch to glDrawArrays, and a single interleaved VBO.
I've been unable to find a clear example of 1) creating an interleaved VBO and adding a quad or tri to it (vert, color, texture), and 2) doing whatever is needed to draw it using glDrawArrays. What is the code for these two steps?
Off the top of my head:
//init
glBindBuffer(GL_ARRAY_BUFFER, new_array);
GLfloat data[] = {
    0.f,   0.f,   0.f,   0.f, 0.f,
    0.f,   0.f, 100.f,   0.f, 1.f,
    0.f, 100.f, 100.f,   1.f, 1.f,
    0.f, 100.f, 100.f,   1.f, 1.f,
    0.f, 100.f,   0.f,   1.f, 0.f,
    0.f,   0.f,   0.f,   0.f, 0.f,
};
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
// draw
glBindBuffer(GL_ARRAY_BUFFER, new_array);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), NULL);
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), ((char*)NULL)+3*sizeof(GLfloat) );
glDrawArrays(GL_TRIANGLES, 0, 6);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
There is nothing particularly magic in this code. Just note how:
- The data from the data array gets loaded as is, all contiguous.
- The stride for the various attributes is set to 5*sizeof(GLfloat), as this is what is in data: 3 floats for position and 2 for texcoord. Side note: you usually want this to be a power of 2, unlike here.
- The offset is computed from the start of the array. Since we store the vertex position first, the offset for the position is 0. The texcoord is stored after the 3 floats of position data, so its offset is 3*sizeof(GLfloat).
I did not include color in there for a reason: colors are typically stored as UNORMs, which makes for messier initialization code, because you need to store both GLfloat and GLubyte values in the same memory chunk. At that point, structs can help if you want to do it in code, but it largely depends on where your data is ultimately coming from.
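As a rough sketch of that struct approach (the Vertex layout and field names here are assumptions for illustration, not part of the original answer; offsetof requires <cstddef>):
// Interleaved vertex: float position and texcoord, color stored as normalized unsigned bytes.
struct Vertex {
    GLfloat x, y, z;    // position
    GLfloat s, t;       // texture coordinates
    GLubyte r, g, b, a; // color as UNORM bytes
};

Vertex quad[6] = { /* fill the six vertices, matching the float array above */ };

// Upload and draw, reusing the new_array buffer object from above.
glBindBuffer(GL_ARRAY_BUFFER, new_array);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

// The stride is now sizeof(Vertex); offsetof gives each attribute's byte offset.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), ((char*)NULL) + offsetof(Vertex, x));
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), ((char*)NULL) + offsetof(Vertex, s));
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), ((char*)NULL) + offsetof(Vertex, r));
glDrawArrays(GL_TRIANGLES, 0, 6);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);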