Rotating Pixel Art with 2x Pixels in OpenGL - C++

I have a game using a pixel art style, upscaled so that 1 art pixel covers a 2x2 area on screen. However, when I rotate a sprite, OpenGL draws it at screen-pixel resolution, which breaks the illusion of a low-res game:
How can I rotate the sprite and draw it using the larger pixels? Right now sprites are drawn using a Sprite class with a method called Draw, and the code looks like this:
void Sprite::Draw(int x, int y, int w, int h, int tx = 0, int ty = 0, int tw = 1, int th = 1, int rotation = 0, int rx = 0, int ry = 0, int sr = 0, float r = 1, float g = 1, float b = 1) {
glEnable(GL_TEXTURE_2D);
glTranslatef(x+(w/2), y+(h/2), 0);
glRotatef((float) sr, 0.0f, 0.0f, 1.0f);
glTranslatef(-x-(w/2), -y-(h/2), 0);
glTranslatef(x+(w/2)+rx, y+(h/2)+ry, 0);
glRotatef((float) rotation, 0.0f, 0.0f, 1.0f);
glTranslatef(-(w/2)-rx, -(h/2)-ry, 0);
glColor3f(r, g, b);
glBindTexture(GL_TEXTURE_2D, texture);
const float verts[] = {
0, (float) h,
(float) w, (float) h,
0, 0,
(float) w, 0
};
const float tVerts[] = {
(float)tx/(float)width, ((float)ty+(float)th)/(float)height,
((float)tx+(float)tw)/(float)width, ((float)ty+(float)th)/(float)height,
(float)tx/(float)width, (float)ty/(float)height,
((float)tx+(float)tw)/(float)width, (float)ty/(float)height
};
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, tVerts);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glLoadIdentity();
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
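As an aside on the texture-coordinate math in Draw: the atlas-to-UV conversion can be pulled out into a small pure helper, which makes it easy to unit-test away from any GL context. A sketch (the UVRect/normTexCoord names are hypothetical; atlasW/atlasH stand in for the class's width/height members):

```cpp
#include <cassert>

// Normalized UV rectangle for a texel sub-rectangle of a texture atlas.
struct UVRect { float u0, v0, u1, v1; };

// Map the texel rect (tx, ty, tw, th) inside an atlasW x atlasH atlas
// to [0, 1] texture coordinates. Casting to float before dividing
// avoids integer truncation.
UVRect normTexCoord(int tx, int ty, int tw, int th, int atlasW, int atlasH) {
    return {
        (float)tx / (float)atlasW,
        (float)ty / (float)atlasH,
        (float)(tx + tw) / (float)atlasW,
        (float)(ty + th) / (float)atlasH
    };
}
```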
Thanks!
EDIT: I guess I should mention I'm using SDL2 for window management.

You can render the whole scene at half the resolution and then scale it up. To do this, use an FBO (Framebuffer Object) for your primary rendering, then blit it to the default framebuffer.
Once, during setup, create your FBO and render target, and attach the render target to the FBO:
GLuint fboId = 0;
glGenFramebuffers(1, &fboId);
GLuint rbId = 0;
glGenRenderbuffers(1, &rbId);
// Need to bind this once so object is created.
glBindRenderbuffer(GL_RENDERBUFFER, rbId);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboId);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbId);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
On every window resize, with the new window size width/height, allocate the render target at half the window size. If your window is not resizable, you can combine this with the setup code:
glBindRenderbuffer(GL_RENDERBUFFER, rbId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width / 2, height / 2);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
On every redraw, render to the FBO, and blit the result to the default framebuffer:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboId);
glViewport(0, 0, width / 2, height / 2);
// draw content
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
glBlitFramebuffer(0, 0, width / 2, height / 2,
0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
The above assumes that width and height are even numbers. You may have to tweak it slightly if they can be odd.
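One option for odd dimensions is to round the half-size up, so the 2x blit still covers the whole window; a sketch of the arithmetic (function name hypothetical):

```cpp
#include <cassert>

// Half of `full`, rounded up, so that 2 * halfSizeRoundUp(full) >= full
// even when `full` is odd. Usable for both the renderbuffer size and
// the source rectangle of the blit.
int halfSizeRoundUp(int full) {
    return (full + 1) / 2;
}
```

The last screen column/row then maps to a half-covered source pixel, which is usually acceptable for a pixel-art look.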

Related

How to fill the depth buffer in OpenGL 1 with a depth texture?

I separate movable 3D objects from non-movable ones. For the non-movable objects I can generate two textures: a background color texture and a background depth texture. That part works; the values are correct for both (I checked the depth with a gray-gradient texture).
Now I would like to fill the color buffer and the depth buffer in OpenGL from those textures, but the depth doesn't work (the color buffer is fine on screen).
My goal is to never recompute the non-movable 3D objects while the camera doesn't move.
(Note: the depth buffer size is always 16 bits.)
This is my code for generating the depth texture:
U16 m_backgroundDepthBuffer[4096 * 4096]; // it's a member of my class
glReadPixels(0,0, width(), height(), GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, (void*)m_backgroundDepthBuffer);
// Allocate GPU-memory for the depth-texture.
glDeleteTextures(1, &m_backgroundDepthTextureId);
glGenTextures(1, &m_backgroundDepthTextureId);
glBindTexture(GL_TEXTURE_2D, m_backgroundDepthTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16,
width(), height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, m_backgroundDepthBuffer);
This is my code for filling the color buffer :
void Application::Render()
{
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glClear(GL_DEPTH_BUFFER_BIT|GL_COLOR_BUFFER_BIT);
// render background
Render::RenderImage(m_backgroundTextureId, 0,0, width(), height());
// for debug
// Render::RenderImage(m_backgroundDepthGrayTextureId, 0,0, width(), height());
// This is how I fill the depth buffer: it works, but it's slow
glDrawPixels(width(), height(), GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, (GLvoid*) m_backgroundDepthBuffer);
// doesn't work
// Render::RenderDepth(m_backgroundDepthTextureId, 0,0, width(), height());
}
This is my Render::RenderImage(), which works:
void Render::RenderImage(U32 tex, int x, int y, int w, int h, float anchorX, float anchorY)
{
glClear(GL_DEPTH_BUFFER_BIT);
GLboolean depth = glIsEnabled(GL_DEPTH_TEST);
if (depth)
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, s_width, s_height, 0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor3ub(255, 255, 255);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
x -= (int)(anchorX * w);
y -= (int)(anchorY * h);
gVertices[0] = VECTOR3F(x, y, 0);
gVertices[1] = VECTOR3F(x + w - 1, y, 0);
gVertices[2] = VECTOR3F(x + w - 1, y + h - 1, 0);
gVertices[3] = VECTOR3F(x, y + h - 1, 0);
gTexCoords[0] = VECTOR2F(0, 1);
gTexCoords[1] = VECTOR2F(1, 1);
gTexCoords[2] = VECTOR2F(1, 0);
gTexCoords[3] = VECTOR2F(0, 0);
gIndexes[0] = 2;
gIndexes[1] = 1;
gIndexes[2] = 0;
gIndexes[3] = 0;
gIndexes[4] = 3;
gIndexes[5] = 2;
glVertexPointer(3, GL_FLOAT, 0, gVertices);
glTexCoordPointer(2, GL_FLOAT, 0, gTexCoords);
glDrawElements(GL_TRIANGLES, 3 * 2, GL_UNSIGNED_SHORT, gIndexes);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
if (depth)
glEnable(GL_DEPTH_TEST);
}
This is my Render::RenderDepth(): it doesn't work. I get a depth of 0.0f everywhere instead of the depth of each texel.
I want to change only the depth buffer and leave the color buffer untouched.
Render::RenderDepth() is similar to Render::RenderImage(): I render 2 triangles.
void Render::RenderDepth(U32 tex, int x, int y, int w, int h, float anchorX, float anchorY)
{
glClear(GL_DEPTH_BUFFER_BIT);
/*glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_DEPTH_TO_TEXTURE_EXT);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_COMPARE_FUNC, GL_ALWAYS);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_DEPTH_TEXTURE_MODE, GL_ALPHA);
*/
GLboolean depth = glIsEnabled(GL_DEPTH_TEST);
//if (depth)
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, s_width, s_height, 0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor3ub(255, 255, 255);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
x -= (int)(anchorX * w);
y -= (int)(anchorY * h);
gVertices[0] = VECTOR3F(x, y, 0.0);
gVertices[1] = VECTOR3F(x + w - 1, y, 0.0);
gVertices[2] = VECTOR3F(x + w - 1, y + h - 1, 0.0);
gVertices[3] = VECTOR3F(x, y + h - 1, 0.0);
gTexCoords[0] = VECTOR2F(0, 1);
gTexCoords[1] = VECTOR2F(1, 1);
gTexCoords[2] = VECTOR2F(1, 0);
gTexCoords[3] = VECTOR2F(0, 0);
gIndexes[0] = 2;
gIndexes[1] = 1;
gIndexes[2] = 0;
gIndexes[3] = 0;
gIndexes[4] = 3;
gIndexes[5] = 2;
glVertexPointer(3, GL_FLOAT, 0, gVertices);
glTexCoordPointer(2, GL_FLOAT, 0, gTexCoords);
glDrawElements(GL_TRIANGLES, 3 * 2, GL_UNSIGNED_SHORT, gIndexes);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
}

Framebuffer Texture rendered to screen is stretched at certain points

I'm currently trying to test rendering to a framebuffer for various uses, but whenever I have an object (say, a square) at a certain y-value, it appears "stretched"; then, past a certain y-value or x-value, it seems to "thin out" and disappears. I have determined the x- and y-values where it disappears, but the coordinates seem to have no rhyme or reason.
When I remove the framebuffer binding and render directly to the screen it draws the square perfectly fine, no matter the x or y-value.
Drawing a basic square (using immediate mode to rule out other sources of error) with a wide x-value looks like this:
Code here:
Window window("Framebuffer Testing", 1600, 900); //1600x900 right now
int fbowidth = 800, fboheight = 600;
mat4 ortho = mat4::orthographic(0, width, 0, height, -1.0f, 1.0f);
//trimmed out some code from shader creation that is bug-free and unnecessary to include
Shader shader("basic"); shader.setUniform("pr_matrix", ortho);
Shader drawFromFBO("fbotest"); shader.setUniform("pr_matrix", ortho);
GLfloat screenVertices[] = {
0, 0, 0, 0, height, 0,
width, height, 0, width, 0, 0};
GLushort indices[] = {
0, 1, 2,
2, 3, 0 };
GLfloat texcoords[] = { //texcoords sent to the drawFromFBO shader
0, 0, 0, 1, 1, 1,
1, 1, 1, 0, 0, 0 };
IndexBuffer ibo(indices, 6);
VertexArray vao;
vao.addBuffer(new Buffer(screenVertices, 4 * 3, 3), 0);
vao.addBuffer(new Buffer(texcoords, 2 * 6, 2), 1);
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, fbowidth, fboheight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture, 0);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
std::cout << "false" << std::endl;
glEnable(GL_TEXTURE_2D);
//the x-values mess up at ~783 thru 800 and the y-values at 0 thru ~313
while(!window.closed()) {
glClearColor(0.2f, 0.2f, 0.2f, 1.0f); //grey
window.clear(); //calls glclear for depth and color buffer
//bind framebuffer and shader
shader.enable(); //literally just calls glUseProgram(id) with the compiled shader id
glViewport(0, 0, fbowidth, fboheight);
glBindFramebuffer(GL_FRAMEBUFFER, fbo); //bind the fbo
glClearColor(1.0f, 0.0f, 1.0f, 1.0f); //set clear color to pink
glClear(GL_COLOR_BUFFER_BIT);
//render a red square to the framebuffer texture
glBegin(GL_QUADS); {
glColor3f(1.0f, 0.0f, 0.0f); //set the color to red
glVertex3f(700, 400, 0);
glVertex3f(700, 450, 0);
glVertex3f(750, 450, 0);
glVertex3f(750, 400, 0);
} glEnd();
shader.disable();
glBindFramebuffer(GL_FRAMEBUFFER, 0); //set framebuffer to the default
//render from framebuffer to screen
glViewport(0, 0, width, height);
drawFromFBO.enable();
glActiveTexture(GL_TEXTURE0);
drawFromFBO.setUniform1i("texfbo0", 0);
glBindTexture(GL_TEXTURE_2D, texture);
vao.bind();
ibo.bind();
glDrawElements(GL_TRIANGLES, ibo.getCount(), GL_UNSIGNED_SHORT, NULL);
ibo.unbind();
vao.unbind();
drawFromFBO.disable();
window.update();
}
If you want to see anything extra, the file is located on my GitHub: here

How to multisample FBOS

Notice: I am using LWJGL.
Here is my code for creating a new FBO:
/**
* Creates a new FBO.
* @param width The width of the FBO to create.
* @param height The height of the FBO to create.
* @return an int[] array containing the buffer IDs in the
* following order: {frameBufferID, colorBufferID (texture), depthBufferID}.
*/
public static int[] newFBO(int width, int height) {
int[] out = new int[3];
out[0] = glGenFramebuffersEXT();
out[1] = glGenTextures();
out[2] = glGenRenderbuffersEXT();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, out[0]);
glBindTexture(GL_TEXTURE_2D, out[1]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, org.lwjgl.opengl.GL12.GL_TEXTURE_MAX_LEVEL,20);
glTexParameteri(GL_TEXTURE_2D, GL14.GL_GENERATE_MIPMAP,GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,GL_RGBA, GL_INT, (java.nio.ByteBuffer) null);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT0_EXT,GL_TEXTURE_2D, out[1], 0);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, out[2]);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL14.GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,GL_DEPTH_ATTACHMENT_EXT,GL_RENDERBUFFER_EXT, out[2]);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
return out;
}
Here is how I draw the FBO to the screen:
public static void rectOnScreen(int tex) {
glBindTexture(GL_TEXTURE_2D, tex);
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2f(0, 0);
glVertex2f(-1, -1);
glTexCoord2f(0, 1);
glVertex2f(-1, 1);
glTexCoord2f(1, 1);
glVertex2f(1, 1);
glTexCoord2f(1, 0);
glVertex2f(1, -1);
glEnd();
glDisable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);
}
Basically, I use out[2] as the argument for that function.
Now, how do I apply multisampling here? I don't really like the jagged look of the results I'm getting. I want to take multiple samples of the FBO when I draw it. I would be very happy to get the code written out, but a link to a tutorial or something is fine too.

Correctly use stencil_texturing in OpenGL

I am trying to implement the stencil_texturing extension of OpenGL as a proof of concept. My video card supports up to GL 4.3 so stencil_texturing is available to me. If more clarification is necessary here is the spec provided: http://www.opengl.org/registry/specs/ARB/stencil_texturing.txt.
So the goal of my test is to render my color buffer to a texture in frame 0, the depth buffer in frame 1, and finally the stencil buffer in frame 2. The easy part is done, and my color and depth buffer textures render fine. My issue lies with the stencil buffer, and I believe it comes from either my lack of understanding of stencil buffers (which could very well be the case) or my misuse of stencil_texturing. I tried to find information online, but very little is available.
To give you an idea of what I am rendering here are my current frame captures:
Color buffer, Depth buffer, Stencil buffer
So my vision for the stencil buffer is to stencil out the middle triangle, so that everything inside the middle triangle has a stencil value of 1 and everything outside it has a value of 0. I am not sure how this will look when rendered, but I imagine the areas with a stencil value of 1 will differ from those with 0.
Here is my code below. It is just a test class that I throw into a framework I made for it. I believe the only thing not defined is GLERR(), which basically calls glGetError() to make sure everything is correct.
typedef struct
{
GLuint program;
GLuint vshader;
GLuint fshader;
} StencilTexturingState;
class TestStencilTexturing : public TestInfo
{
public:
TestStencilTexturing(TestConfig& config, int argc, char** argv)
:width(config.windowWidth), height(config.windowHeight)
{
state = (StencilTexturingState*) malloc(sizeof(StencilTexturingState));
}
~TestStencilTexturing()
{
destroyTestStencilTexturing();
}
void loadFBOShaders()
{
const char* vshader = "assets/stencil_texturing/fbo_vert.vs";
const char* fshader = "assets/stencil_texturing/fbo_frag.fs";
state->vshader = LoadShader(vshader, GL_VERTEX_SHADER);
GLERR();
state->fshader = LoadShader(fshader, GL_FRAGMENT_SHADER);
GLERR();
state->program = Link(state->vshader, state->fshader, 1, "inPosition");
GLERR();
glUseProgram(state->program);
}
void loadTextureShaders()
{
const char* vshader = "assets/stencil_texturing/tex_vert.vs";
const char* fshader = "assets/stencil_texturing/tex_frag.fs";
state->vshader = LoadShader(vshader, GL_VERTEX_SHADER);
GLERR();
state->fshader = LoadShader(fshader, GL_FRAGMENT_SHADER);
GLERR();
state->program = Link(state->vshader, state->fshader, 1, "inPosition");
GLERR();
glUseProgram(state->program);
}
void destroyTestStencilTexturing()
{
glUseProgram(0);
glDeleteShader(state->vshader);
glDeleteShader(state->fshader);
glDeleteProgram(state->program);
free(state);
}
void RenderToTexture(GLuint renderedTexture, int frame)
{
GLint posId, colId;
GLuint fboId, depth_stencil_rb;
const float vertexFBOPositions[] =
{
-0.7f, -0.7f, 0.5f, 1.0f,
0.7f, -0.7f, 0.5f, 1.0f,
0.6f, 0.7f, 0.5f, 1.0f,
};
const float vertexFBOColors[] =
{
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
};
// Load shaders for the FBO
loadFBOShaders();
// Setup the FBO
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glViewport(0, 0, width, height);
// Set up renderbuffer for depth_stencil formats.
glGenRenderbuffers(1, &depth_stencil_rb);
glBindRenderbuffer(GL_RENDERBUFFER, depth_stencil_rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
GL_RENDERBUFFER, depth_stencil_rb);
// Depending on the frame bind the 2D texture differently.
// Frame 0 - Color, Frame 1 - Depth, Frame 2 - Stencil
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Create our RGBA texture to render our color buffer into.
if (frame == 0)
{
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, renderedTexture, 0);
}
// Create our Depth24_Stencil8 texture to render our depth buffer into.
if (frame == 1)
{
glEnable(GL_DEPTH_TEST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, renderedTexture, 0);
}
// Create our Depth24_Stencil8 texture and change depth_stencil_texture mode
// to render our stencil buffer into.
if (frame == 2)
{
glEnable(GL_DEPTH_TEST | GL_STENCIL_TEST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, renderedTexture, 0);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
}
GLERR();
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
printf("There is an error with the Framebuffer, fix it!\n");
}
GLERR();
// Give the values of the position and color of our triangle to the shaders.
posId = glGetAttribLocation(state->program, "position");
colId = glGetAttribLocation(state->program, "color");
GLERR();
glVertexAttribPointer(posId, 4, GL_FLOAT, 0, 0, vertexFBOPositions);
glEnableVertexAttribArray(posId);
glVertexAttribPointer(colId, 4, GL_FLOAT, 0, 0, vertexFBOColors);
glEnableVertexAttribArray(colId);
// Clear the depth buffer back to 1.0f to draw our RGB stripes far back.
glClearDepth(1.0f);
glClear(GL_DEPTH_BUFFER_BIT);
if (frame == 2)
{
glStencilFunc(GL_NEVER, 1, 0xFF); // never pass stencil test
glStencilOp(GL_REPLACE, GL_KEEP, GL_KEEP); // replace stencil buffer values to ref=1
glStencilMask(0xFF); // stencil buffer free to write
glClear(GL_STENCIL_BUFFER_BIT); // first clear stencil buffer by writing default stencil value (0) to all of stencil buffer.
glDrawArrays(GL_TRIANGLES, 0, 3); // at stencil shape pixel locations in stencil buffer replace stencil buffer values to ref = 1
// no more modifying of stencil buffer on stencil and depth pass.
glStencilMask(0x00);
// can also be achieved by glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
// stencil test: only pass stencil test at stencilValue == 1 (Assuming depth test would pass.) and write actual content to depth and color buffer only at stencil shape locations.
glStencilFunc(GL_EQUAL, 1, 0xFF);
}
// Use the Scissors to clear the FBO with a RGB stripped pattern.
glEnable(GL_SCISSOR_TEST);
glScissor(width * 0/3, 0, width * 1/3, height);
glClearColor(0.54321f, 0.0f, 0.0f, 0.54321f); // Red
glClear(GL_COLOR_BUFFER_BIT);
glScissor(width * 1/3, 0, width * 2/3, height);
glClearColor(0.0f, 0.65432f, 0.0f, 0.65432f); // Green
glClear(GL_COLOR_BUFFER_BIT);
glScissor(width * 2/3, 0, width * 3/3, height);
glClearColor(0.0f, 0.0f, 0.98765f, 0.98765f); // Blue
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_SCISSOR_TEST);
GLERR();
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisable(GL_DEPTH_TEST);
GLERR();
// Remove FBO and shaders and return to original viewport.
glUseProgram(0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteShader(state->vshader);
glDeleteShader(state->fshader);
glDeleteProgram(state->program);
glDeleteFramebuffers(1, &fboId);
glViewport(0, 0, width, height);
GLERR();
}
void drawFrameTestStencilTexturing(int frame)
{
GLint posLoc, texLoc;
GLuint renderedTexture;
const GLubyte indxBuf[] = {0, 1, 2, 1, 3, 2};
const float positions[] =
{
-0.8f, -0.8f,
-0.8f, 0.8f,
0.8f, -0.8f,
0.8f, 0.8f,
};
const float texCoords[] =
{
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
1.0f, 1.0f
};
// Allocate and initialize the texture that will be rendered to, and then
// textured onto a quad on the default framebuffer.
glGenTextures(1, &renderedTexture);
// Render to the texture using FBO.
RenderToTexture(renderedTexture, frame);
// Create and load shaders to draw the texture.
loadTextureShaders();
// Draw texture to the window.
glClearColor(0.25f, 0.25f, 0.25f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
posLoc = glGetAttribLocation(state->program, "position");
texLoc = glGetAttribLocation(state->program, "a_texCoords");
glVertexAttribPointer(posLoc, 2, GL_FLOAT, 0, 0, positions);
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(texLoc, 2, GL_FLOAT, 0, 0, texCoords);
glEnableVertexAttribArray(texLoc);
// Draw our generated texture onto a quad.
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indxBuf);
glFlush();
glDeleteTextures(1, &renderedTexture);
GLERR();
}
void renderTest(int frame)
{
drawFrameTestStencilTexturing(frame);
}
private:
StencilTexturingState* state;
const int height, width;
};
RUN_TEST(StencilTexturing, "stencil_texturing", 2);
The line
glEnable(GL_DEPTH_TEST | GL_STENCIL_TEST);
is not going to work. The GL enable enums are not bit flags, just plain enum values, so OR-ing them may enable something else entirely, or just produce a GL_INVALID_ENUM error; either way, this does not enable the stencil test.
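To illustrate why: the enable enums are ordinary values, so OR-ing two of them simply produces some other value. With the constants from GL.h, GL_DEPTH_TEST OR'd with GL_STENCIL_TEST happens to equal GL_INDEX_LOGIC_OP, so the broken call silently enables the index logic op instead:

```cpp
#include <cassert>

// Values from GL.h -- plain enums, not bit masks.
const unsigned DEPTH_TEST_VAL     = 0x0B71; // GL_DEPTH_TEST
const unsigned STENCIL_TEST_VAL   = 0x0B90; // GL_STENCIL_TEST
const unsigned INDEX_LOGIC_OP_VAL = 0x0BF1; // GL_INDEX_LOGIC_OP

// OR-ing the two enable enums yields an unrelated third enum.
// The fix is two separate calls:
//   glEnable(GL_DEPTH_TEST);
//   glEnable(GL_STENCIL_TEST);
unsigned combinedEnum() {
    return DEPTH_TEST_VAL | STENCIL_TEST_VAL;
}
```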

OpenGL ES render bitmap glyph as texture positioning issues

I'm just testing this stuff out, so I don't need an alternate approach (no GL extensions). Just hoping someone sees an obvious mistake in my usage of GLES.
I want to take a bitmap of a glyph that is smaller than 32x32 (width and height are not necessarily powers of 2) and put it into a texture so I can render it. I first create an empty 32x32 texture, then copy the pixels into the larger texture.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 32, 32, 0,
GL_ALPHA, GL_UNSIGNED_BYTE, NULL);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, bitmap.width(), bitmap.height(),
GL_ALPHA, GL_UNSIGNED_BYTE, bitmap.pixels());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Then I try to draw only the bitmap portion of the texture using the texture coordinates:
const GLfloat vertices[] = {
x + bitmap.width(), y + bitmap.height(),
x, y + bitmap.height(),
x, y,
x + bitmap.width(), y
};
const GLfloat texCoords[] = {
0, bitmap.height() / 32,
bitmap.width() / 32, bitmap.height() / 32,
0, 0,
bitmap.width() / 32, 0
};
const GLushort indices[] = { 0, 1, 2, 0, 2, 3 };
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
Now if all were well in the world, the size of the square created by the vertices would be the same size as the bitmap portion of the texture and it would draw my bitmap exactly.
Let's say, for example, that my glyph is 16x16; then it should take up the bottom-left quadrant of the 32x32 texture, and the texCoords would seem to be correct with (0, 0.5), (0.5, 0.5), (0, 0) and (0.5, 0).
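One thing worth double-checking in the texCoords above: if bitmap.width() and bitmap.height() return integer types, then an expression like bitmap.width() / 32 is integer division and truncates to 0 for any glyph smaller than 32 texels. Dividing by 32.0f (or casting first) keeps the fraction; a minimal illustration:

```cpp
#include <cassert>

// Integer division truncates before the conversion to float.
float uvTruncated() { return 16 / 32; }     // int / int -> 0, then 0.0f
float uvCorrect()   { return 16 / 32.0f; }  // promoted to float -> 0.5f
```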
However my 12x12 'T' glyph looks like this:
Anyone know why?
BTW. I start by setting up the projection matrix for 2D work as such:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, 480, 800, 0.0f, 0.0f, 1.0f);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.375f, 0.375f, 0.0f); // for exact pixelization
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
The mapping between vertex coordinates and texture coordinates seems to be mixed up. Try changing your vertex coordinates to:
const GLfloat vertices[] = {
x, y + bitmap.height(),
x + bitmap.width(), y + bitmap.height(),
x, y,
x + bitmap.width(), y
};
As an aside:
I don't think you need to go through vertex indices in your case. A call to glDrawArrays would be easier:
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
(as you have already set up your glVertexPointer and glTexCoordPointer).