Non-uniform pixel painting at low frame rate - C++

I am making an image editing program, and while building the brush tool I ran into a problem: when the frame rate is very low, the stroke breaks into separated dots, because the program only reads the mouse position once per frame and paints the single pixel under it. What solution could I use to fix this? I am using ImGui and OpenGL.
Comparison:
Also, I'm using this code to update the image on the screen:
void UpdateImage() {
    glBindTexture(GL_TEXTURE_2D, this->CurrentImage->texture);
    // Re-upload the whole image each frame; the format depends on the channel count.
    if (this->CurrentImage->channels == 4) {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, this->CurrentImage->width, this->CurrentImage->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, this->CurrentImage->data);
    }
    else {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, this->CurrentImage->width, this->CurrentImage->height, 0, GL_RGB, GL_UNSIGNED_BYTE, this->CurrentImage->data);
    }
}
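Since the image dimensions don't change between brush strokes, updating with glTexSubImage2D instead of reallocating with glTexImage2D would be cheaper per frame. A minimal sketch under that assumption (UpdateImageInPlace is a hypothetical name, not from the original code):

void UpdateImageInPlace() {
    glBindTexture(GL_TEXTURE_2D, this->CurrentImage->texture);
    // Pick the format from the channel count, as in UpdateImage() above.
    GLenum fmt = (this->CurrentImage->channels == 4) ? GL_RGBA : GL_RGB;
    // Tightly packed RGB rows may not be 4-byte aligned.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Overwrite the existing storage instead of reallocating it.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    this->CurrentImage->width, this->CurrentImage->height,
                    fmt, GL_UNSIGNED_BYTE, this->CurrentImage->data);
}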
And the code for the pencil/brush:
void toolPencil(int _MouseImagePositionX, int _MouseImagePositionY) {
    // Convert the 1-based mouse position to a 0-based byte index into the image.
    int index = ((_MouseImagePositionY - 1) * this->CurrentImage->width * this->CurrentImage->channels) + ((_MouseImagePositionX - 1) * this->CurrentImage->channels);
    // Paint the pixel black.
    this->CurrentImage->data[index] = 0;
    this->CurrentImage->data[index + 1] = 0;
    this->CurrentImage->data[index + 2] = 0;
    if (this->CurrentImage->channels == 4) {
        this->CurrentImage->data[index + 3] = 255;
    }
}

Sample your mouse without redrawing in its event handler...
Redraw on mouse change when you can (depends on the fps or the architecture of your app).
Instead of using the sampled mouse points directly, use them as piecewise cubic curve control points.
see:
How can i produce multi point linear interpolation?
Catmull-Rom interpolation on SVG Paths
So you simply use the sampled points as cubic curve control points and interpolate/rasterize the missing pixels, as in the sketch below. Either sample each segment with 10 lines (increment the parameter by 1.0/10.0), or sample it with a step small enough that each step is smaller than a pixel (based on the distance between control points).
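For illustration, a minimal sketch of the rasterization step, assuming toolPencil() from the question paints a single pixel. It linearly interpolates between two successive mouse samples with a step smaller than one pixel; the same sampled points could instead serve as Catmull-Rom control points for smoother strokes:

#include <algorithm>
#include <cmath>
#include <cstdlib>

void paintSegment(int x0, int y0, int x1, int y1)
{
    // Enough steps that each increment moves by less than one pixel.
    int steps = std::max(std::abs(x1 - x0), std::abs(y1 - y0));
    if (steps == 0) { toolPencil(x0, y0); return; }
    for (int i = 0; i <= steps; ++i)
    {
        float t = float(i) / float(steps);
        toolPencil(int(std::lround(x0 + t * (x1 - x0))),
                   int(std::lround(y0 + t * (y1 - y0))));
    }
}

Call paintSegment(lastX, lastY, mouseX, mouseY) every frame while the button is held, then store the current sample as the last one.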


Why doesn't the drawString method always start at the given coordinates?

In my code I cannot draw a String at precise coordinates. Its upper-left corner does not start at the given coordinates but somewhere else. However, if I draw a rectangle from the same given coordinates it is placed correctly. How on earth is this behaviour possible?
Here is the code I call in the beforeShow() method:
Image photoBase = fetchResourceFile().getImage("Voiture_4_3.jpg");
Image watermark = fetchResourceFile().getImage("Watermark.png");
f.setLayout(new LayeredLayout());
final Label drawing = new Label();
f.addComponent(drawing);
// Mutable image we will draw into (white background by default)
Image mutableImage = Image.createImage(photoBase.getWidth(), photoBase.getHeight());
// Paint all the stuff
paintAll(mutableImage.getGraphics(), photoBase, watermark, photoBase.getWidth(), photoBase.getHeight());
drawing.getUnselectedStyle().setBgImage(mutableImage);
drawing.getUnselectedStyle().setBackgroundType(Style.BACKGROUND_IMAGE_SCALED_FIT);
// Save the image with the ImageIO class
long time = new Date().getTime();
OutputStream os;
try {
    os = Storage.getInstance().createOutputStream("screenshot_" + Long.toString(time) + ".png");
    ImageIO.getImageIO().save(mutableImage, os, ImageIO.FORMAT_PNG, 1.0f);
} catch (IOException e) {
    e.printStackTrace();
}
And the paintAll method:
public void paintAll(Graphics g, Image background, Image watermark, int width, int height) {
    // Full quality
    float saveQuality = 1.0f;
    // Create image as buffer
    Image imageBuffer = Image.createImage(width, height, 0xffffff);
    // Create graphics out of image object
    Graphics imageGraphics = imageBuffer.getGraphics();
    // Do your drawing operations on the graphics from the image
    imageGraphics.drawImage(background, 0, 0);
    imageGraphics.drawImage(watermark, 0, 0);
    imageGraphics.setColor(0xFF0000);
    // Upper left corner
    imageGraphics.fillRect(0, 0, 10, 10);
    // Lower right corner
    imageGraphics.setColor(0x00FF00);
    imageGraphics.fillRect(width - 10, height - 10, 10, 10);
    imageGraphics.setColor(0xFF0000);
    Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(220, Font.STYLE_BOLD);
    imageGraphics.setFont(f);
    // Draw a string right below the M from Mercedes on the car windscreen (measured in Gimp)
    int w = 0, h = 0;
    imageGraphics.drawString("HelloWorld", w, h);
    // Upper-left corner of the string
    imageGraphics.setColor(0x0000FF);
    imageGraphics.fillRect(w, h, 20, 20);
    // Draw the complete image on your Graphics object g (the screen I guess)
    g.drawImage(imageBuffer, 0, 0);
}
Result for w = 0, h = 0 (no apparent offset):
Result for w = 841, h = 610 (an offset appears on both axes: there is an offset between the blue point near the Mercedes M on the windscreen and the HelloWorld string):
EDIT1:
I also read this SO question for Android where it is advised to convert dpi into pixels. Does it also apply in Codename One? If so, how can I do that? I tried
Display.getInstance().convertToPixel(measureInMillimeterFromGimp)
without success (I used mm because the javadoc says that dpi is roughly 1 mm).
Any help would be appreciated,
Cheers
Both g and imageGraphics are the same graphics created twice, which might have some implications (not really sure)...
You also set the mutable image as the background of a style before you finished drawing it. I don't know if this is the reason for the oddities you are seeing, but I would suspect that code.
Inspired by Gabriel Hass' answer, I finally made it work using another intermediate Image to write the String at (0; 0) only, and then drawing this image onto the imageBuffer Image at the right coordinates. It works, but to my mind drawString(Image, Coordinates) should draw directly at the given coordinates, shouldn't it @Shai?
Here is the paintAll method I used to solve my problem (the beforeShow code hasn't changed):
// Full quality
float saveQuality = 1.0f;
String mess = "HelloWorld";
// Create image as buffer
Image imageBuffer = Image.createImage(width, height, 0xffffff);
// Create graphics out of image object
Graphics imageGraphics = imageBuffer.getGraphics();
// Do your drawing operations on the graphics from the image
imageGraphics.drawImage(background, 0, 0);
imageGraphics.drawImage(watermark, 0, 0);
imageGraphics.setColor(0xFF0000);
// Upper left corner
imageGraphics.fillRect(0, 0, 10, 10);
// Lower right corner
imageGraphics.setColor(0x00FF00);
imageGraphics.fillRect(width - 10, height - 10, 10, 10);
// Create an intermediate image just with the message string (will be moved to the right coordinates later)
Font f = Font.createTrueTypeFont("Geometos", "Geometos.ttf").derive(150, Font.STYLE_BOLD);
// Get the message dimensions
int messWidth = f.stringWidth(mess);
int messHeight = f.getHeight();
Image messageImageBuffer = Image.createImage(messWidth, messHeight, 0xffffff);
Graphics messageImageGraphics = messageImageBuffer.getGraphics();
messageImageGraphics.setColor(0xFF0000);
messageImageGraphics.setFont(f);
// Write the string at (0; 0)
messageImageGraphics.drawString(mess, 0, 0);
// Move the string to its final location right below the M from Mercedes on the car windscreen (measured in Gimp)
int w = 841, h = 610;
imageGraphics.drawImage(messageImageBuffer, w, h);
// This "point" is expected to be on the lower left corner of the M letter from Mercedes and on the upper left corner of the message string
imageGraphics.setColor(0x0000FF);
imageGraphics.fillRect(w, h, 20, 20);
// Draw the complete image on your Graphics object g
g.drawImage(imageBuffer, 0, 0);

How do you upload texture data to a Sparse Texture using TexSubImage in OpenGL?

I am following apitest on GitHub, and am seeing some very strange behavior in my renderer.
It seems like the virtual pages are not receiving the correct image data.
The original image is 500x311:
When I render this image using a sparse texture, I must resize the backing store to 512x384 (to be a multiple of the page size), and my result is:
As you can see, it looks like a portion of the subimage (a sub-sub-image) was loaded into each individual virtual page.
To test this, I cropped the image to the size of just one virtual page (256x128); here is the result:
As expected, the single virtual page was filled with the exact, correct, cropped image.
Lastly, I increased the crop size to two virtual pages' worth, 256x256, one on top of the other. Here is the result:
This proves that calling glTexSubImage2D with an amount of texel data larger than the virtual page size causes errors.
Does care need to be taken when passing data to glTexSubImage2D that is larger than the virtual page size? I see no logic for this in apitest, so I think this could be a driver issue. Or I am missing something major.
Here is some code:
I stored the texture in a texture array, and to simplify, turned the array into just a GL_TEXTURE_2D; both produce the exact same result. Here is the texture memory allocation:
_check_gl_error();
glGenTextures(1, &mTexId);
glBindTexture(GL_TEXTURE_2D, mTexId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);

// TODO: This could be done once per internal format. For now, just do it every time.
GLint indexCount = 0,
      xSize = 0,
      ySize = 0,
      zSize = 0;
GLint bestIndex = -1,
      bestXSize = 0,
      bestYSize = 0;

glGetInternalformativ(GL_TEXTURE_2D, internalformat, GL_NUM_VIRTUAL_PAGE_SIZES_ARB, 1, &indexCount);
if (indexCount == 0) {
    fprintf(stdout, "No virtual page sizes for given format\n");
    fflush(stdout);
}
_check_gl_error();
for (GLint i = 0; i < indexCount; ++i) {
    glTexParameteri(GL_TEXTURE_2D, GL_VIRTUAL_PAGE_SIZE_INDEX_ARB, i);
    glGetInternalformativ(GL_TEXTURE_2D, internalformat, GL_VIRTUAL_PAGE_SIZE_X_ARB, 1, &xSize);
    glGetInternalformativ(GL_TEXTURE_2D, internalformat, GL_VIRTUAL_PAGE_SIZE_Y_ARB, 1, &ySize);
    glGetInternalformativ(GL_TEXTURE_2D, internalformat, GL_VIRTUAL_PAGE_SIZE_Z_ARB, 1, &zSize);
    // For our purposes, the "best" format is the one that winds up with Z=1 and the largest x and y sizes.
    if (zSize == 1) {
        if (xSize >= bestXSize && ySize >= bestYSize) {
            bestIndex = i;
            bestXSize = xSize;
            bestYSize = ySize;
        }
    }
}
_check_gl_error();
mXTileSize = bestXSize;
glTexParameteri(GL_TEXTURE_2D, GL_VIRTUAL_PAGE_SIZE_INDEX_ARB, bestIndex);
_check_gl_error();
// Need to ensure that the texture is a multiple of the tile size.
physicalWidth = roundUpToMultiple(width, bestXSize);
physicalHeight = roundUpToMultiple(height, bestYSize);
// We've set all the necessary parameters, now it's time to create the sparse texture.
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, physicalWidth, physicalHeight);
_check_gl_error();
for (GLsizei i = 0; i < slices; ++i) {
    mFreeList.push(i);
}
_check_gl_error();
mHandle = glGetTextureHandleARB(mTexId);
_check_gl_error();
glMakeTextureHandleResidentARB(mHandle);
_check_gl_error();
mWidth = physicalWidth;
mHeight = physicalHeight;
mLevels = levels;
Here is what happens after the allocation:
glTextureSubImage2DEXT(mTexId, GL_TEXTURE_2D, level, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);
I have tried making width and height the physical width/height of the backing store, and also the width/height of the incoming image content; neither produces the desired result. I am excluding mip levels for now. When I was using mip levels and the texture array I got different results, but similar behavior.
Also, the image is loaded with SOIL, and before I implemented sparse textures that worked very well (before sparse I implemented bindless).
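Two things seem worth double-checking that the allocation snippet doesn't show; both are assumptions on my part rather than confirmed fixes. First, sparse pages have no physical backing until they are committed with glTexPageCommitmentARB. Second, a 500-pixel-wide GL_RGB upload has rows of 1500 bytes, which is not a multiple of the default GL_UNPACK_ALIGNMENT of 4. A sketch of the upload path with both handled:

// Hypothetical upload path; the commitment call and the unpack state are my
// assumptions about what might be missing, not verified against apitest.
glBindTexture(GL_TEXTURE_2D, mTexId);

// Tightly packed RGB rows (500 * 3 = 1500 bytes) are not 4-byte aligned.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

// Commit physical memory for every page the upload will touch.
glTexPageCommitmentARB(GL_TEXTURE_2D, 0, 0, 0, 0,
                       physicalWidth, physicalHeight, 1, GL_TRUE);

glTextureSubImage2DEXT(mTexId, GL_TEXTURE_2D, 0, 0, 0,
                       width, height, GL_RGB, GL_UNSIGNED_BYTE, data);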

Compressed texture batching in OpenGL

I'm trying to create an atlas of compressed textures but I can't seem to get it working. Here is a code snippet:
void Texture::addImageToAtlas(ImageProperties* imageProperties)
{
    generateTexture(); // delete and regenerate an empty texture
    bindTexture();     // bind it
    atlasProperties.push_back(imageProperties);
    width = height = 0;
    for (int i = 0; i < atlasProperties.size(); i++)
    {
        width += atlasProperties[i]->width;
        height = atlasProperties[i]->height;
    }
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // glCompressedTexImage2D MUST be called with valid data for the 'pixels'
    // parameter. Won't work if you use zero/null.
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA8_ETC2_EAC,
                           width,
                           height,
                           0,
                           (GLsizei)(ceilf(width / 4.f) * ceilf(height / 4.f) * 16.f),
                           atlasProperties[0]->pixels);
    // Recreate the whole atlas by adding all the textures we have appended
    // to our vector so far
    int x = 0, y = 0;
    for (int i = 0; i < atlasProperties.size(); i++)
    {
        glCompressedTexSubImage2D(GL_TEXTURE_2D,
                                  0,
                                  x,
                                  y,
                                  atlasProperties[i]->width,
                                  atlasProperties[i]->height,
                                  GL_RGBA,
                                  (GLsizei)(ceilf(atlasProperties[i]->width / 4.f) * ceilf(atlasProperties[i]->height / 4.f) * 16.f),
                                  atlasProperties[i]->pixels);
        x += atlasProperties[i]->width;
    }
    unbindTexture(); // unbind the texture
}
I'm testing this with just two small KTX textures that have the same size, and as you can see from the code, I'm trying to append the second one next to the first one on the x axis.
My KTX parsing works fine, as I can render individual textures, but as soon as I try to batch (that is, as soon as I use glCompressedTexSubImage2D) I get nothing on the screen.
It might be useful to know that all of this works fine if I replace the compressed textures with PNGs and swap glCompressedTexImage2D and glCompressedTexSubImage2D for their non-compressed versions...
One of the things I cannot find any information on is the x and y position of the textures in the atlas. How do I offset them? If the first texture has a width of 60 pixels, for example, do I just position the second one at 61?
I've seen some code online where people calculate the x and y position as follows:
x &= ~3;
y &= ~3;
Is this what I need to do, and why? I've tried it but it doesn't seem to work.
Also, I'm trying the above code on an ARM i.MX6 Quad with a Vivante GPU, and from what I read online I suspect that glCompressedTexSubImage2D might not be working on this board.
Can anyone please help me out?
The format you pass to glCompressedTexSubImage2D() must be the same as the one used for the corresponding glCompressedTexImage2D(). From the ES 2.0 spec:
This command does not provide for image format conversion, so an INVALID_OPERATION error results if format does not match the internal format of the texture image being modified.
Therefore, to match the glCompressedTexImage2D() call, the glCompressedTexSubImage2D() call needs to be:
glCompressedTexSubImage2D(GL_TEXTURE_2D,
                          0, x, y, atlasProperties[i]->width, atlasProperties[i]->height,
                          GL_COMPRESSED_RGBA8_ETC2_EAC,
                          (GLsizei)(ceilf(atlasProperties[i]->width / 4.f) *
                                    ceilf(atlasProperties[i]->height / 4.f) * 16.f),
                          atlasProperties[i]->pixels);
As for the sizes and offsets:
Your logic for determining the overall size only works if the height of all sub-images is the same. Or more precisely, since the height is set to the height of the last sub-image, if no other height is larger than the last one. To make it more robust, you would probably want to use the maximum height of all sub-images.
I was surprised that you can't pass null as the last argument of glCompressedTexImage2D(), but it seems to be true. At least I couldn't find anything allowing it in the spec. But this being the case, I don't think it would be ok to simply pass the pointer to the data of the first sub-image. That would not be enough data, and it would read beyond the end of the memory. You may have to allocate and pass "data" that is large enough to cover the entire atlas texture. You could probably set it to anything (e.g. zero it out), since you're going to replace it anyway.
The way I read the ETC2 definition (as included in the ES 3.0 spec), the width/height of the texture do not strictly have to be multiples of 4. However, the positions for glCompressedTexSubImage2D() do have to be multiples of 4, as well as the width/height, unless they extend to the edge of the texture. This means that you have to make the width of each sub-image except the last a multiple of 4. At that point, you might as well use a multiple of 4 for everything.
Based on this, I think the size determination should look like this:
width = height = 0;
for (int i = 0; i < atlasProperties.size(); i++)
{
    width += (atlasProperties[i]->width + 3) & ~3;
    if (atlasProperties[i]->height > height)
    {
        height = atlasProperties[i]->height;
    }
}
height = (height + 3) & ~3;

uint8_t* dummyData = new uint8_t[width * height];
memset(dummyData, 0, width * height);
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGBA8_ETC2_EAC,
                       width, height, 0,
                       width * height,
                       dummyData);
delete[] dummyData;
Then to set the sub-images:
int xPos = 0;
for (int i = 0; i < atlasProperties.size(); i++)
{
    int w = (atlasProperties[i]->width + 3) & ~3;
    int h = (atlasProperties[i]->height + 3) & ~3;
    glCompressedTexSubImage2D(GL_TEXTURE_2D,
                              0, xPos, 0, w, h,
                              GL_COMPRESSED_RGBA8_ETC2_EAC,
                              w * h,
                              atlasProperties[i]->pixels);
    xPos += w;
}
The whole thing would get slightly simpler if you could ensure that the original texture images already had sizes that are multiples of 4. Then you can skip rounding up the sizes/positions to multiples of 4.
In the end, this was one of those mistakes that make you want to hit your head on a wall: GL_COMPRESSED_RGBA8_ETC2_EAC was simply not supported on the board.
I had copied it from the headers, but never queried the device for its supported formats. I can use a DXT5 format just fine with this code.
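For anyone hitting the same wall, the supported compressed formats can be queried at runtime before committing to one; a small sketch (not specific to any particular board):

#include <algorithm>
#include <vector>

// The headers only tell you the enum exists, not that the driver supports it.
GLint numFormats = 0;
glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &numFormats);
std::vector<GLint> formats(numFormats);
glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats.data());
bool etc2Supported = std::find(formats.begin(), formats.end(),
                               (GLint)GL_COMPRESSED_RGBA8_ETC2_EAC) != formats.end();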

Drawing points of handwritten stroke using DrawEllipse (GDI+)

I'm working on an application that draws handwritten strokes. Strokes are internally stored as vectors of points and can be transformed into std::vector<Gdiplus::Point>. The points are so close to each other that simply drawing each point should result in an image of a continuous stroke.
I'm using the Graphics.DrawEllipse (GDI+) method to draw these points. Here's the code:
// prepare bitmap:
Bitmap *bitmap = new Gdiplus::Bitmap(w, h, PixelFormat32bppRGB);
Graphics graphics(bitmap);
// draw the white background:
SolidBrush myBrush(Color::White);
graphics.FillRectangle(&myBrush, 0, 0, w, h);
Pen blackPen(Color::Black);
blackPen.SetWidth(1.4f);
// draw stroke:
std::vector<Gdiplus::Point> stroke = getStroke();
for (UINT i = 0; i < stroke.size(); ++i)
{
    // draw point:
    graphics.DrawEllipse(&blackPen, stroke[i].X, stroke[i].Y, 2, 2);
}
In the end I just save this bitmap as a PNG image, and sometimes the following problem occurs:
When I saw this "hole" in my stroke, I decided to draw my points again, this time using an ellipse with width and height set to 1 and a redPen with width set to 0.1f. So right after the code above I added the following:
Pen redPen(Color::Red);
redPen.SetWidth(0.1f);
for (UINT i = 0; i < stroke.size(); ++i)
{
    // draw point:
    graphics.DrawEllipse(&redPen, stroke[i].X, stroke[i].Y, 1, 1);
}
And the new stroke I got looked like this:
When I use Graphics.DrawRectangle instead of DrawEllipse while drawing this new red stroke, it never happens that the stroke (drawn with rectangles) has a different width or holes in it:
I can't think of any possible reason why drawing circles would result in this weird behaviour. How come the stroke is always continuous and never deformed in any way when I use Graphics.DrawRectangle?
Could anyone explain what's going on here? Am I missing something?
By the way, I'm using Windows XP (in case it's a known bug). Any help will be appreciated.
I had made the wrong assumption that if I use Graphics.DrawEllipse to draw a circle with a radius of 2 px using a pen about 2 px wide, it would result in a filled circle with a diameter of about 4-5 px.
But I've found out that I actually can't rely on the pen width when drawing a circle this way. This method is meant only for drawing the border of the shape, so for drawing a filled ellipse it's much better to use Graphics.FillEllipse.
Another quite important fact to consider is that both of the mentioned functions take as parameters coordinates that specify the "upper-left corner of the rectangle that specifies the boundaries of the ellipse", so I should subtract the radius (half the width/height) from both coordinates to make sure the original coordinates specify the middle of the circle.
Here's the new code:
// draw the white background:
SolidBrush whiteBrush(Color::White);
graphics.FillRectangle(&whiteBrush, 0, 0, w, h);
// draw stroke (FillEllipse takes a brush, not a pen):
SolidBrush blackBrush(Color::Black);
std::vector<Gdiplus::Point> stroke = getStroke();
for (UINT i = 0; i < stroke.size(); ++i)
    graphics.FillEllipse(&blackBrush, stroke[i].X - 2, stroke[i].Y - 2, 4, 4);
// draw original points:
SolidBrush redBrush(Color::Red);
std::vector<Gdiplus::Point> origStroke = getOriginalStroke();
for (UINT i = 0; i < origStroke.size(); ++i)
    graphics.FillRectangle(&redBrush, origStroke[i].X, origStroke[i].Y, 1, 1);
which yields the following result:
So in case someone faces the same problem I did, the solution is: use Graphics.FillEllipse instead of Graphics.DrawEllipse for filled points, and offset the coordinates by the radius so that the original point lies at the center of the drawn circle.

Rendering sprites from spritesheet with OpenGL?

Imagine the following scenario: you have a set of RPG character spritesheets in PNG format and you want to use them in an OpenGL application.
The separate characters are (usually) 16 by 24 pixels in size (that is, 24 pixels tall) and may be at any width and height without leaving padding. Kinda like this:
(source: kafuka.org)
I already have the code to determine an integer-based clipping rectangle given a frame index and size:
int framesPerRow = sheet.Width / cellWidth;
int framesPerColumn = sheet.Height / cellHeight;
framesTotal = framesPerRow * framesPerColumn;
int left = (frameIndex % framesPerRow) * cellWidth;
int top = (frameIndex / framesPerRow) * cellHeight;
// Clipping rect's width and height are obviously cellWidth and cellHeight.
Running this code with frameIndex = 11, cellWidth = 16, cellHeight = 24 would return the cliprect (32, 24)-(48, 48), assuming it's Right/Bottom as opposed to Width/Height.
The actual question
Now, given a clipping rectangle and an X/Y coordinate to place the sprite on, how do I draw this in OpenGL? Having the zero coordinate in the top left is preferred.
You have to start thinking in "texture space" where the coordinates are in the range [0, 1].
So if you have a sprite sheet:
class SpriteSheet {
    int spriteWidth, spriteHeight;
    int texWidth, texHeight;
    int tex;
public:
    SpriteSheet(int t, int tW, int tH, int sW, int sH)
        : tex(t), texWidth(tW), texHeight(tH), spriteWidth(sW), spriteHeight(sH)
    {}
    void drawSprite(float posX, float posY, int frameIndex);
};
All you have to do is submit both vertices and texture vertices to OpenGL:
void SpriteSheet::drawSprite(float posX, float posY, int frameIndex) {
    const float verts[] = {
        posX, posY,
        posX + spriteWidth, posY,
        posX + spriteWidth, posY + spriteHeight,
        posX, posY + spriteHeight
    };
    const float tw = float(spriteWidth) / texWidth;
    const float th = float(spriteHeight) / texHeight;
    const int numPerRow = texWidth / spriteWidth;
    const float tx = (frameIndex % numPerRow) * tw;
    const float ty = (frameIndex / numPerRow + 1) * th;
    const float texVerts[] = {
        tx, ty,
        tx + tw, ty,
        tx + tw, ty + th,
        tx, ty + th
    };
    // ... Bind the texture, enable the proper arrays
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, texVerts);
    // The corners above are listed in fan order, so draw a triangle fan.
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
Frank's solution is already very good.
Just a (very important) side note, since some of the comments suggested otherwise.
Please don't ever use glBegin/glEnd.
Don't ever tell someone to use it.
The only time it is OK to use glBegin/glEnd is in your very first OpenGL program.
Arrays are not much harder to handle, but...
... they are faster.
... they will still work with newer OpenGL versions.
... they will work with GLES.
... loading them from files is much easier.
I'm assuming you're learning OpenGL and only need to get this to work somehow. If you need raw speed, there are shaders and vertex buffers and all sorts of both neat and complicated things.
The simplest way is to load the PNG into a texture (assuming you have the ability to load images into memory; you do need that), then draw it with a quad, setting appropriate texture coordinates (they go from 0 to 1 with floating-point coordinates, so you need to divide by the texture width or height accordingly).
Use glBegin(GL_QUADS), glTexCoord2f(), glVertex2f(), glEnd() for the simplest (but not fastest) way to draw this.
For making zero the top left, either use glOrtho() to set up the view matrix differently from normal GL (look up the docs for that function; set top to 0 and bottom to 1, or to screen_height if you want integer coords), or just change your drawing loop to do glVertex2f(x/screen_width, 1-y/screen_height).
There are better and faster ways to do this, but this is probably one of the easiest if you're learning raw OpenGL from scratch.
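For instance, the top-left-origin projection mentioned above could be set up like this (a fixed-function sketch, assuming screen_width and screen_height hold the viewport size in pixels):

// Map x rightward and y downward, origin at the top-left corner, in pixel units.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, screen_width, screen_height, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();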
A suggestion, if I may. I use SDL to load my textures, so what I did is:
1. I loaded the texture.
2. I determined how to separate the spritesheet into separate sprites.
3. I split them into separate surfaces.
4. I made a texture for each one (I have a sprite class to manage them).
5. I freed the surfaces.
This takes more time (obviously) on loading, but pays off later.
This way it's a lot easier (and faster), as you only have to calculate the index of the texture you want to display and then display it. Then you can scale/translate it as you like and call a display list to render it to whatever you want. Or you could do it in immediate mode; either works :) A sketch of the loading steps follows.
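A sketch of that loading approach, assuming SDL2 with SDL_image; the file path, cell sizes, and function name are placeholders, not the actual code:

#include <SDL.h>
#include <SDL_image.h>
#include <SDL_opengl.h>
#include <vector>

std::vector<GLuint> loadSprites(const char* path, int cellW, int cellH)
{
    std::vector<GLuint> sprites;
    SDL_Surface* sheet = IMG_Load(path); // hypothetical spritesheet path
    // Copy pixels verbatim instead of alpha-blending them into the cell.
    SDL_SetSurfaceBlendMode(sheet, SDL_BLENDMODE_NONE);
    for (int y = 0; y + cellH <= sheet->h; y += cellH) {
        for (int x = 0; x + cellW <= sheet->w; x += cellW) {
            // Blit one cell into its own RGBA surface (converts format too).
            SDL_Surface* cell = SDL_CreateRGBSurface(0, cellW, cellH, 32,
                0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);
            SDL_Rect src = { x, y, cellW, cellH };
            SDL_BlitSurface(sheet, &src, cell, nullptr);
            // Upload the cell as its own GL texture.
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, cellW, cellH, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, cell->pixels);
            sprites.push_back(tex);
            SDL_FreeSurface(cell);
        }
    }
    SDL_FreeSurface(sheet);
    return sprites;
}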