Imagine the following scenario: you have a set of RPG character spritesheets in PNG format and you want to use them in an OpenGL application.
The individual characters are (usually) 16 by 24 pixels in size (that is, 24 pixels tall), and the sheet itself may be of any width and height, with no padding between frames. Kind of like this:
[example spritesheet image; source: kafuka.org]
I already have the code to determine an integer-based clipping rectangle given a frame index and size:
int framesPerRow = sheet.Width / cellWidth;
int framesPerColumn = sheet.Height / cellHeight;
framesTotal = framesPerRow * framesPerColumn;
int left = (frameIndex % framesPerRow) * cellWidth;
int top = (frameIndex / framesPerRow) * cellHeight;
// Clipping rect's width and height are obviously cellWidth and cellHeight.
Running this code with frameIndex = 11, cellWidth = 16, cellHeight = 24 would return a cliprect of (32, 24)-(48, 48), assuming the rect stores Right/Bottom as opposed to Width/Height.
The actual question
Now, given a clipping rectangle and an X/Y coordinate to place the sprite on, how do I draw this in OpenGL? Having the zero coordinate in the top left is preferred.
You have to start thinking in "texture space" where the coordinates are in the range [0, 1].
So if you have a sprite sheet:
class SpriteSheet {
    int spriteWidth, spriteHeight;
    int texWidth, texHeight;
    int tex;
public:
    SpriteSheet(int t, int tW, int tH, int sW, int sH)
        : spriteWidth(sW), spriteHeight(sH), texWidth(tW), texHeight(tH), tex(t)
    {}

    void drawSprite(float posX, float posY, int frameIndex);
};
All you have to do is submit both the vertices and the texture coordinates to OpenGL:
void SpriteSheet::drawSprite(float posX, float posY, int frameIndex) {
    const float verts[] = {
        posX, posY,
        posX + spriteWidth, posY,
        posX + spriteWidth, posY + spriteHeight,
        posX, posY + spriteHeight
    };

    const float tw = float(spriteWidth) / texWidth;
    const float th = float(spriteHeight) / texHeight;
    const int numPerRow = texWidth / spriteWidth;
    const float tx = (frameIndex % numPerRow) * tw;
    const float ty = (frameIndex / numPerRow) * th;
    // Assumes the sheet was uploaded top row first, so t = 0 is the top of the image.
    const float texVerts[] = {
        tx, ty,
        tx + tw, ty,
        tx + tw, ty + th,
        tx, ty + th
    };

    // ... Bind the texture, enable the proper arrays

    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, texVerts);
    // The vertices run around the perimeter, so draw them as a fan rather than a strip.
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
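For reference, the elided "bind the texture, enable the proper arrays" step could look like this with the fixed-function pipeline (just a sketch; tex is the member from the class above):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);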
Frank's solution is already very good.
Just a (very important) sidenote, since some of the comments suggested otherwise.
Please don't ever use glBegin/glEnd.
Don't ever tell someone to use it.
The only time it is OK to use glBegin/glEnd is in your very first OpenGL program.
Arrays are not much harder to handle, but...
... they are faster.
... they will still work with newer OpenGL versions.
... they will work with GLES.
... loading them from files is much easier.
I'm assuming you're learning OpenGL and only need to get this working somehow. If you need raw speed, there are shaders, vertex buffers, and all sorts of both neat and complicated things.
The simplest way is to load the PNG into a texture (assuming you have the ability to load images into memory; you do need that), then draw it as a quad with appropriate texture coordinates (they go from 0 to 1 as floating-point coordinates, so you need to divide by the texture width or height accordingly).
Use glBegin(GL_QUADS), glTexCoord2f(), glVertex2f(), glEnd() for the simplest (but not fastest) way to draw this.
For making zero the top left, either use glOrtho() (or gluOrtho2D()) to set up the projection matrix differently from the GL default (look up the docs for those functions; set top to 0 and bottom to 1, or to screen_height if you want integer coordinates), or just change your drawing loop and do glVertex2f(x/screen_width, 1 - y/screen_height).
There are better and faster ways to do this, but this is probably one of the easiest if you're learning raw OpenGL from scratch.
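To make that concrete, here is a minimal immediate-mode sketch along those lines; screenWidth/screenHeight, the sheet dimensions, the pixel cliprect (left, top, cellWidth, cellHeight), the target position (x, y) and the texture id tex are all assumed to exist, and the sheet is assumed to have been uploaded top row first:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, screenWidth, screenHeight, 0, -1, 1); // y = 0 at the top, y grows downward
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);

// Convert the pixel cliprect to [0, 1] texture space.
float u0 = (float)left / sheetWidth;
float u1 = (float)(left + cellWidth) / sheetWidth;
float v0 = (float)top / sheetHeight;
float v1 = (float)(top + cellHeight) / sheetHeight;

glBegin(GL_QUADS);
glTexCoord2f(u0, v0); glVertex2f(x, y);
glTexCoord2f(u1, v0); glVertex2f(x + cellWidth, y);
glTexCoord2f(u1, v1); glVertex2f(x + cellWidth, y + cellHeight);
glTexCoord2f(u0, v1); glVertex2f(x, y + cellHeight);
glEnd();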
A suggestion, if I may. I use SDL to load my textures, so here is what I did:
1. I loaded the texture
2. I determined how to separate the spritesheet into separate sprites.
3. I split them into separate surfaces
4. I made a texture for each one (I have a sprite class to manage them).
5. I freed the surfaces.
This takes more time (obviously) on loading, but it pays off later.
This way it's a lot easier (and faster), as you only have to calculate the index of the texture you want to display, and then display it. Then, you can scale/translate it as you like and call a display list to render it to whatever you want. Or, you could do it in immediate mode, either works :)
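As a rough sketch of steps 3 to 5 (SDL 1.2-style API assumed; sheet, cellWidth, cellHeight, col, row and a previously glGenTextures'd textures[i] are taken as given, and the masks may need adjusting for your pixel format):

// Cut one frame out of the sheet and upload it as its own texture.
SDL_Surface *frame = SDL_CreateRGBSurface(SDL_SWSURFACE, cellWidth, cellHeight, 32,
                                          0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);
SDL_Rect src = { (Sint16)(col * cellWidth), (Sint16)(row * cellHeight),
                 (Uint16)cellWidth, (Uint16)cellHeight };
// You may need SDL_SetAlpha(sheet, 0, 0) first so the blit copies alpha instead of blending it.
SDL_BlitSurface(sheet, &src, frame, NULL);

glBindTexture(GL_TEXTURE_2D, textures[i]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, cellWidth, cellHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, frame->pixels);

SDL_FreeSurface(frame); // step 5: free the intermediate surface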
A quick summary:
I have a simple quadtree-based terrain rendering system that builds terrain patches, which then sample a heightmap in the vertex shader to determine the height of each vertex.
The exact same calculation is done on the CPU for object placement and the like.
Super straightforward, but after adding some systems to procedurally place objects, I've discovered that they seem to be misplaced by just a small amount. To debug this I render a few crosses as single models over the terrain. The crosses (red, green, blue lines) represent the height read from the CPU, while the terrain mesh uses the shader to translate the vertices.
(I've also added a simple odd/even gap over each height value to rule out a simple offset issue. So those ugly cliffs are expected; the submerged crosses are the issue.)
I'm explicitly using GL_NEAREST to be able to display the "raw" height value:
As you can see the crosses are sometimes submerged under the terrain instead of representing its exact height.
The heightmap is just a simple array of floats on the CPU and on the GPU.
How the data is stored
A simple vector<float> which is uploaded into a GL_RGB32F GL_FLOAT buffer. The floats are not normalized and my terrain usually contains values between -100 and 500.
How is the data accessed in the shader
I've tried a few things to rule out errors; the initial version:
vec2 terrain_heightmap_uv(vec2 position, Heightmap heightmap)
{
    return (position + heightmap.world_offset) / heightmap.size;
}

float terrain_read_height(vec2 position, Heightmap heightmap)
{
    return textureLod(heightmap.heightmap, terrain_heightmap_uv(position, heightmap), 0).r;
}
Basics of the vertex shader (the full shader code is very long, so I've extracted the part that actually reads the height):
void main()
{
    vec4 world_position = a_model * vec4(a_position, 1.0);
    vec4 final_position = world_position;

    // snap vertex to grid
    final_position.x = floor(world_position.x / a_quad_grid) * a_quad_grid;
    final_position.z = floor(world_position.z / a_quad_grid) * a_quad_grid;
    final_position.y = terrain_read_height(final_position.xz, heightmap);

    gl_Position = projection * view * final_position;
}
To account for the slightly different way the position is determined, I also tested it using hardcoded values that are identical to how the C++ side reads the height:
return texelFetch(heightmap.heightmap, ivec2((position / 8) + vec2(1024, 1024)), 0).r;
Which gives the exact same result...
How is the data accessed in the application
In C++ the height is read like this:
inline float get_local_height_safe(uint32_t x, uint32_t y)
{
    // this macro simply clips x and y to the heightmap bounds
    // it does not interfere with the result
    BB_TERRAIN_HEIGHTMAP_BOUND_XY_TO_SAFE;

    uint32_t i = (y * _size1d) + x;
    return buffer->data[i];
}

inline float get_height_raw(glm::vec2 position)
{
    position = position + world_offset;
    uint32_t x = static_cast<int>(position.x);
    uint32_t y = static_cast<int>(position.y);
    return get_local_height_safe(x, y);
}

float BB::Terrain::get_height(const glm::vec3 position)
{
    return heightmap->get_height_raw({position.x / heightmap_unit_scale, position.z / heightmap_unit_scale});
}
What I have tried:
Comparing the Buffers
I've dumped the first few hundred values from the vector and compared them with the floating-point buffer uploaded to the GPU using Nvidia Nsight; they are equal, so no rounding/precision errors there.
Sampling method
I've tried texture, textureLod and texelFetch to rule out some issue there, they all give me the same result.
Rounding
The super strange thing: when I round all the height values, they are perfectly aligned, which just screams floating-point precision issues.
Position snapping
I've tried rounding, flooring and ceiling the position, to ensure the position always maps to the same texel. I also tried adding an epsilon offset to rule out a positional precision error (probably stupid because the terrain is stable...)
Heightmap sizes
I've tried various heightmaps, also of different sizes.
Heightmap patterns
I've created a heightmap containing a pattern to ensure the position is not just offset.
I am working on a project where I implement a FreeType rendering object to draw text, whose rendering environment is specified with an orthographic projection matrix:
glm::ortho(0, Width, Height, 0);
This makes sure the coordinates are similar to standard GUI systems with (0,0) being the top-left part of the window instead of the bottom-left.
However, when rendering using FreeType, things get difficult, because FreeType operates with its origin at the bottom-left of a glyph (minus the descender). My issue is similar to https://stackoverflow.com/questions/25353472/render-freetype-gl-text-with-flipped-projection, but no answer was provided there and the poster's solution was not to my liking (the library used is also slightly different; I assume he is using a wrapper).
So I render my text as follows:
renderText("Testing 123 if text performs sufficiently", 0.0f, 0.0f, 1.0f, 1.0f);
The renderText function contains:
renderText(const GLchar *text, GLfloat x, GLfloat y, GLfloat sx, GLfloat sy)
{
    [...]
    GLfloat xpos = x + glyph->bitmap_left * sx;
    GLfloat ypos = y - glyph->bitmap_top * sy;

    GLfloat w = glyph->bitmap.width * sx;
    GLfloat h = glyph->bitmap.rows * sy;

    // Update VBO
    GLfloat vertices[4][4] = {
        { xpos,     ypos,     0, 0 },
        { xpos + w, ypos,     1, 0 },
        { xpos,     ypos + h, 0, 1 },
        { xpos + w, ypos + h, 1, 1 }
    };
    [...]
}
If I render it like this, it will render the text below the y coordinate of 0 so it won't be visible unless I add an offset to the y coordinate. So looking at FreeType's glyph metrics:
I want to offset the y position by a positive amount equal to the difference between the origin and the top of the glyph image so it always neatly renders the text at my given position. Looking at the image I believe this to be the yMax value so I added the following statement to the code before updating the VBO:
ypos += (glyph->face->bbox.yMax >> 6) * sy;
Which seemed to fix the issue when I loaded the FreeType glyphs with font size 24, but as soon as I tried to use different font sizes it failed to work as this image shows:
As you can see, it clearly doesn't work as I thought it would. I've been thoroughly searching through FreeType's documentation in case I was missing something, but I could not find it. Am I using the wrong metrics (using Ascender didn't work either)?
I want to offset the y position by a positive amount equal to the difference between the origin and the top of the glyph image so it always neatly renders the text at my given position. Looking at the image I believe this to be the yMax value so I added the following statement to the code before updating the VBO:
ypos += (glyph->face->bbox.yMax >> 6) * sy;
In actuality, yMax is not what you are interested in. You could use yMax - yMin to find the height of your glyph, but that is really all it is good for.
From the FreeType 2 API documentation, FT_GlyphSlotRec::bitmap_top is:
The bitmap's top bearing expressed in integer pixels. Remember that this is the distance from the baseline to the top-most glyph scanline, upwards y coordinates being positive.
Look at the image you included in your question again, that is effectively bearingY. Your problem is that you are subtracting this value from your ypos when you should not be. You do need the value as I will explain below, but you definitely do not want to subtract it.
If you eliminate bitmap_top from your calculation of ypos you get the following:
Which is obviously incorrect because it ignores differences in ascent between each character in your string.
Now, take a close look at the following correctly aligned diagram:
In the diagram above, I have illustrated your string's top-most line in red, bottom-most in green and the baseline for all of your glyphs in grey.
As you can see, the capital letter 'T' has the greatest ascent and this generalization holds true for most fonts. Directly below the red line, I have illustrated the difference in ascent between capital 'T' and the current letter as the yellow area. This is the important quantity that you must calculate to properly align your string.
The yellow region in the correctly aligned figure above can be calculated thus:
Chars['T'].bitmap_top - glyph->bitmap_top
If you stop subtracting glyph->bitmap_top from ypos and add the value above, you should get the same results as the correctly aligned diagram.
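In code, that change is small; a sketch (Chars['T'] stands for whatever cached glyph data you keep for the reference character with the greatest ascent):

GLfloat xpos = x + glyph->bitmap_left * sx;
// add the ascent difference instead of subtracting bitmap_top
GLfloat ypos = y + (Chars['T'].bitmap_top - glyph->bitmap_top) * sy;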
For bonus points, if you want to align your text to the bottom of the screen, the concept is very similar only you are interested in the difference between the character with the greatest descent (often lowercase 'g') and the current character. That is the distance between the grey baseline and the green line and can be expressed as height - bearingY in your glyph metrics figure.
You should be able to compute descent using this:
(glyph->metrics.height >> 6) - glyph->bitmap_top // bitmap_top is in integer coords
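And a rough sketch of the bottom-alignment idea, assuming window_height is the ortho height and max_descent is the largest value of the expression above across the characters you draw:

// The baseline sits max_descent pixels above the bottom edge (y grows downward here).
GLfloat baseline = window_height - max_descent * sy;
GLfloat ypos = baseline - glyph->bitmap_top * sy; // top of this glyph's bitmap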
What I have now
#define QUAD_VERT_COUNT 4
#define QUAD_POS_COMP 3

typedef struct quad_pos
{
    GLfloat x, y, z;
} quad_pos;

#define SIZE_QUAD_POS (sizeof(quad_pos) * QUAD_VERT_COUNT)

static GLuint QUAD_BUFFER = 0;

void init_quad_buffer()
{
    quad_pos* pos_data = malloc(SIZE_QUAD_POS);

    pos_data[0].x = -1.0f;
    pos_data[0].y = -1.0f;
    pos_data[0].z = 0.0f;

    pos_data[1].x = 1.0f;
    pos_data[1].y = -1.0f;
    pos_data[1].z = 0.0f;

    pos_data[2].x = -1.0f;
    pos_data[2].y = 1.0f;
    pos_data[2].z = 0.0f;

    pos_data[3].x = 1.0f;
    pos_data[3].y = 1.0f;
    pos_data[3].z = 0.0f;

    QUAD_BUFFER = create_buffer(GL_ARRAY_BUFFER, GL_STATIC_DRAW, pos_data, SIZE_QUAD_POS);
    free(pos_data);
}

GLuint get_quad_buffer(void)
{
    return QUAD_BUFFER;
}
And drawing (part of it):
glBindBuffer(GL_ARRAY_BUFFER, get_quad_buffer());
glEnableVertexAttribArray(ss->attrib[0]); // attrib[0] is the vertex position
glVertexAttribPointer(ss->attrib[0], QUAD_POS_COMP, GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, QUAD_VERT_COUNT);
Scaling, translating and rotating are achieved with matrices and shaders, so yes, this buffer never changes for any sprite.
But why do we need to use GLfloat for just -1.0 and 1.0? GLbyte should be enough.
typedef struct quad_pos
{
    GLbyte x, y, z;
} quad_pos;

void init_quad_buffer()
{
    quad_pos* pos_data = malloc(SIZE_QUAD_POS);

    pos_data[0].x = -1;
    pos_data[0].y = -1;
    pos_data[0].z = 0;
    ....
}
Drawing:
...
glVertexAttribPointer(ss->attrib[0], QUAD_POS_COMP, GL_BYTE, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, QUAD_VERT_COUNT);
Question 1: Do I need normalize set to GL_TRUE?
Question 2: GLclampf and GLfloat are both 4-byte floats, but color values range from 0.0 to 1.0, so if I put them in GLbyte too (val/256, so 255 for 1.0, 128 for 0.5, 0 for 0), do I need GL_TRUE for normalize in glVertexAttribPointer?
Question 3: Do I really need padding in vertex data/other data? Adding a fictitious pos_data.g just so that sizeof(quad_pos) == 16; is that good for the GPU?
In general, you could always aim for the half-float (16-bit float) extensions to save memory.
Your implementation looks like it causes some draw-call overhead. Normalizing (on the fly!) will cause additional overhead. For drawing multiple instances of this constant quad, I recommend the following to speed things up:
Implementation of a geometry-shader; let it generate, transform and emit the 4 vertices of the quad for you.
Instanced drawing with a transform buffer: use a texture buffer object (TBO) containing the transform matrices for each quad instance (each matrix column is fetched in the vertex shader using the built-in variable 'gl_InstanceID').
OR:
Supply the matrices via vertex attribute arrays (probably faster).
These two approaches can be implemented on top of the same buffer data layout (just an array of matrices).
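A minimal sketch of the vertex-attribute-array variant (GL 3.3+; the buffer name instance_vbo, the attribute locations 1 to 4 and the matrices/count variables are assumptions for illustration):

/* One mat4 per instance, fed through four vec4 attributes at locations 1..4.
   In the vertex shader this would be declared e.g. layout(location = 1) in mat4 a_model; */
glBindBuffer(GL_ARRAY_BUFFER, instance_vbo);
glBufferData(GL_ARRAY_BUFFER, count * 16 * sizeof(GLfloat), matrices, GL_DYNAMIC_DRAW);
for (int col = 0; col < 4; ++col)
{
    glEnableVertexAttribArray(1 + col);
    glVertexAttribPointer(1 + col, 4, GL_FLOAT, GL_FALSE,
                          16 * sizeof(GLfloat),
                          (const void*)(sizeof(GLfloat) * 4 * col));
    glVertexAttribDivisor(1 + col, 1); /* advance once per instance, not per vertex */
}
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, QUAD_VERT_COUNT, count);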
But why do we need to use GLfloat for just -1.0 and 1.0? GLbyte should be enough.
Please note this is not true in general; in most cases you will need a float for precision. And if you only have so few values and such simple geometry, the odds are quite high that there isn't a reason at all to optimize it to GLbyte in the first place. You likely have very few vertices, so why would you want to save storage on them? This sounds like a very good example of premature optimization (I know, it's an overused term).
Now, for your actual questions:
1. No, not if you want the same functionality: if normalize is false, the -1 will convert to -1.0f; if it is true, it will be something more like -0.0078125f (i.e. -1/128.0f). So if you want to keep the same scale, you don't want it normalized.
2. GLclampf and GLfloat are indeed usually 4-byte floats. If you want to pass RGB colors through vertex attributes as bytes, then yes, you should normalize them, as OpenGL expects color components to be in the range [0.0f, 1.0f] (see the short sketch after point 3). But again: why don't you simply pass them as floats? What do you think you will gain? In a simple game you probably don't have enough colors to notice the difference, and in a non-simple game you're more likely to be using textures.
3. Of this I am not sure. I know it was true for old GPUs (and I mean almost 10 years back), but I don't know of any recent claims that this actually improves anything. And in any case, the best-known alignment advice was to pack all vertex attributes for one vertex together into (a multiple of) 32 bytes, and that was for ATI cards. Byte alignment might be necessary for some trickier things/extensions, but I do not think you need to worry about it just yet.
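For completeness, a tiny sketch of what point 2 describes; the attribute slot ss->attrib[1] and the color_buffer object are hypothetical names, not code from the question:

glBindBuffer(GL_ARRAY_BUFFER, color_buffer); /* assumed: 4 GLubytes per vertex */
glEnableVertexAttribArray(ss->attrib[1]);
/* normalized: 255 -> 1.0f, 128 -> ~0.5f, 0 -> 0.0f in the shader */
glVertexAttribPointer(ss->attrib[1], 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, 0);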
I'm attempting to ray cast an octree on the CPU (I know the GPU is better, but I'm unable to get that working at this time; I believe my octree texture is created incorrectly).
I understand what needs to be done, and so far I cast a ray for each pixel and check if that ray intersects any nodes within the octree. If it does and the node is not a leaf node, I check if the ray intersects its child nodes. I keep doing this until a leaf node is hit. Once a leaf node is hit, I get the colour for that node.
My question is, what is the best way to draw this to the screen? Currently I'm storing the colours in an array and drawing them with glDrawPixels, but this does not produce correct results; there are gaps in the rendering, and the projection is wrong (I am using glRasterPos3fv).
Edit: Here is some code so far; it needs cleaning up, sorry. I have omitted the octree ray casting code as I'm not sure it's needed, but I will post it if it'll help :)
void Draw(Vector cameraPosition, Vector cameraLookAt)
{
    // Calculate the right vector
    Vector rightVector = Cross(cameraLookAt, Vector(0, 1, 0));

    // Set up the screen plane starting X & Y positions
    float screenPlaneX, screenPlaneY;
    screenPlaneX = cameraPosition.x() - ((WINDOWWIDTH / 2) * rightVector.x());
    screenPlaneY = cameraPosition.y() + ((float)WINDOWHEIGHT / 2);

    float deltaX, deltaY;
    deltaX = 1;
    deltaY = 1;

    int currentX, currentY, index = 0;
    Vector origin, direction;
    origin = cameraPosition;

    vector<Vector4<int>> colours(WINDOWWIDTH * WINDOWHEIGHT);

    currentY = screenPlaneY;
    Vector4<int> colour;

    for (int y = 0; y < WINDOWHEIGHT; y++)
    {
        // Set the current pixel along x to be the left-most pixel
        // on the image plane
        currentX = screenPlaneX;

        for (int x = 0; x < WINDOWWIDTH; x++)
        {
            // default colour is black
            colour = Vector4<int>(0, 0, 0, 0);

            // Cast the ray into the current pixel. Set the length of the ray to be 200
            direction = Vector(currentX, currentY, cameraPosition.z() + (cameraLookAt.z() * 200)) - origin;
            direction.normalize();

            // Cast the ray against the octree and store the resultant colour in the array
            colours[index] = RayCast(origin, direction, rootNode, colour);

            // Move to the next pixel in the plane
            currentX += deltaX;

            // Increase the colour array index position
            index++;
        }

        // Move to the next row in the image plane
        currentY -= deltaY;
    }

    // Set the colours for the array
    SetFinalImage(colours);

    // Set the raster position to (0, 0, 0) and pass the array of colours to glDrawPixels
    GLfloat v[3] = { 0.0f, 0.0f, 0.0f };
    glRasterPos3fv(v);
    glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);
}
void SetFinalImage(vector<Vector4<int>> colours)
{
    // The array is a 2D array, with the first dimension
    // set to the size of the window (WINDOW_WIDTH * WINDOW_HEIGHT).
    // The second dimension stores the rgba values for each pixel
    for (int i = 0; i < colours.size(); i++)
    {
        finalImage[i][0] = (float)colours[i].r;
        finalImage[i][1] = (float)colours[i].g;
        finalImage[i][2] = (float)colours[i].b;
        finalImage[i][3] = (float)colours[i].a;
    }
}
Your pixel drawing code looks okay, but I'm not sure that your RayCasting routines are correct. When I wrote my raytracer, I had a bug that caused horizontal artifacts on the screen, but it was related to rounding errors in the render code.
I would try this: create a result set of vector<Vector4<int>> where the colors are all red, then render that to the screen. If it looks correct, then the OpenGL routines are correct. Divide and conquer is always a good debugging method.
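A quick sketch of that sanity check, reusing the types and globals from the question (Vector4<int>, SetFinalImage, finalImage):

// Fill every pixel with opaque red and push it through the existing drawing path.
vector<Vector4<int>> testColours(WINDOWWIDTH * WINDOWHEIGHT, Vector4<int>(1, 0, 0, 1));
SetFinalImage(testColours);
glRasterPos2i(0, 0);
glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);
// If the whole window is solid red, the glDrawPixels path is fine and the bug is in RayCast.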
Here's a question though... why are you using Vector4<int> when later on you write the image as GL_FLOAT? I'm not seeing any int-to-float conversion here...
Your problem may be in your 3DDDA (octree raycaster), and specifically with adaptive termination. It results from the quantisation of rays into gridcell form, which causes certain octree nodes that lie slightly behind foreground nodes (i.e. of a higher z depth), and which thus should be partly visible and partly occluded, not to be rendered at all. The smaller your voxels are, the less noticeable this will be.
There is a very easy way to test whether this is the problem -- comment out the adaptive termination line(s) in your 3DDDA and see if you still get the same gap artifacts.
I have recently begun using Box2D version 2.1 in combination with Allegro 5. Currently, I have built a test with a ground and 4 boxes: 3 boxes are stacked, and the other one smashes into them, causing them to flip. During this demonstration, I noticed two glitches.
One is that creating a box in Box2D with "SetAsBox( width, height )" only gives half the size of a normal box drawn to the screen using Allegro. Example: in Box2D, I create a box of size (15, 15); when I come to draw the shape using Allegro, I must apply an offset of -15 on the y axis and scale the shape to twice its size.
The other issue occurs during collision detection while my boxes rotate due to impact. Most squares hit the ground, but some of them end up offset from the ground by their own height, leaving them floating.
Here is the code for making my boxes:
cBox2D::cBox2D( int width, int height ) {
    // Note: In Box2D, 30 pixels = 1 meter

    velocityIterations = 10;
    positionIterations = 10;
    worldGravity = 9.81f;
    timeStep = ( 1.0f / 60.0f );
    isBodySleep = false;

    gravity.Set( 0.0f, worldGravity );
    world = new b2World( gravity, isBodySleep );

    groundBodyDef.position.Set( 0.0f, height ); // ground location
    groundBody = world->CreateBody( &groundBodyDef );
    groundBox.SetAsBox( width, 0.0f ); // Ground size
    groundBody->CreateFixture( &groundBox, 0.0f );
}

cBox2D::~cBox2D( void ) {}

void cBox2D::makeSquare( int width, int height, int locX, int locY, float xVelocity, float yVelocity, float angle, float angleVelocity ) {
    sSquare square;

    square.bodyDef.type = b2_dynamicBody;
    square.bodyDef.position.Set( locX, locY ); // Box location
    square.bodyDef.angle = angle; // Box angle
    square.bodyDef.angularVelocity = angleVelocity;
    square.bodyDef.linearVelocity.Set( xVelocity, yVelocity ); // Box velocity

    square.body = world->CreateBody( &square.bodyDef );
    square.dynamicBox.SetAsBox( width, height ); // Box size

    square.fixtureDef.shape = &square.dynamicBox;
    square.fixtureDef.density = 1.0f;
    square.fixtureDef.friction = 0.3f;
    square.fixtureDef.restitution = 0.0f; // Bounciness

    square.body->CreateFixture( &square.fixtureDef );

    squareVec.push_back( square );
}

int cBox2D::getVecSize( void ) {
    return squareVec.size();
}

b2Body* cBox2D::getSquareAt( int loc ) {
    return squareVec.at( loc ).body;
}

void cBox2D::update( void ) {
    world->Step(timeStep, velocityIterations, positionIterations);
    world->ClearForces();
}
Edit:
Thank you Chris Burt-Brown for explaining the first issue to me. As for the second issue, it was a good idea, but it did not solve it. I tried both rounding methods you showed me.
Edit:
I think I found the answer to my second issue. It turns out that Allegro has a different coordinate system than OpenGL. As a result, I had to use +gravity instead of -gravity, which caused Box2D to become unstable and behave weirdly.
Edit:
My bad, I thought that was the issue, but it turns out it did not change a thing.
It's actually SetAsBox(halfwidth, halfheight). I know it sounds weird, but take a look inside SetAsBox. Passing in the parameters 15 and 15 will give a box with corners (-15, -15) and (15, 15), i.e. a box of size 30x30.
I think it's intended as an optimisation, but it's a pretty silly one.
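A sketch of the two equivalent ways to deal with that, reusing the names from the question's code (the drawing variables are made up for illustration):

// Option 1: treat the arguments as half-extents when creating the shape
square.dynamicBox.SetAsBox( width / 2.0f, height / 2.0f );

// Option 2: keep SetAsBox( width, height ) and draw a box of 2*width by 2*height,
// centred on the body's position
b2Vec2 pos = square.body->GetPosition();
float drawLeft   = pos.x - width;
float drawTop    = pos.y - height;
float drawWidth  = 2.0f * width;
float drawHeight = 2.0f * height;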
I don't know what's causing your other problem, but when you draw the boxes with Allegro, try seeing if it's fixed when you round the coordinates. (If that doesn't work, try ceil.)