Hi, I'm trying to produce output with WriteConsoleOutputA.
I have this code:
CHAR_INFO letterA;
letterA.Char.AsciiChar = 'A';
letterA.Attributes =
    FOREGROUND_RED | FOREGROUND_INTENSITY |
    BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_INTENSITY;
//Set up the positions:
COORD charBufSize = { 1, 1};
COORD characterPos = { 0, 0 };
SMALL_RECT writeArea = { 0,0,0,0 };
//Write the character
WriteConsoleOutputA(wHnd, &letterA, charBufSize, characterPos, &writeArea);
At this point it writes a red A on a yellow background, but if, for example, I want the A to appear at coordinates (5,5), it doesn't print it, even if I change the SMALL_RECT to {0, 0, 10, 10}.
Or if I want to write another A to the right of the first one with this:
WriteConsoleOutputA(wHnd, &letterA, charBufSize, characterPos, &writeArea);
WriteConsoleOutputA(wHnd, &letterA, charBufSize, { 0, 1 }, &writeArea);
I'm just beginning with this graphical console mode; it would be very helpful if someone could tell me how to print that character at the coordinate I want.
I have tried changing the coordinates to something like this:
COORD charBufSize = { 5, 10};
COORD characterPos = { 3, 2 };
SMALL_RECT writeArea = { 0,0,5,10 };
but it prints weird characters and other colours across the whole 5*10 buffer.
Thanks
César.
WriteConsoleOutput(..) is a complex function which needs to be handled carefully.
The dwBufferSize parameter (= your charBufSize) is nothing more than a size specification of the lpBuffer parameter (= your letterA). The only difference from simply saying that letterA has a size of 1 is that by splitting the size into two axes you can specify the width and height of a text block of letterA characters. But remember that the number of elements in letterA has to be charBufSize.X * charBufSize.Y; otherwise WriteConsoleOutput will do weird stuff, since it reads uninitialized memory.
The dwBufferCoord parameter (= your characterPos) defines the location within letterA from where to read the characters to be written to the console. So it simply defines an index offset. In your example this should always be { 0, 0 } (which is equal to letterA[0]) since letterA is only a single character.
The lpWriteRegion parameter (= your writeArea) does all the magic. It specifies the position, width and height of the area to be written by the call. The data to be written is defined by the previous parameters.
So to write a character to a specific location x, y do the following:
COORD charBufSize = {1, 1};
COORD characterPos = {0, 0};
SMALL_RECT writeArea = {x, y, x, y};
WriteConsoleOutputA(wHnd, &letterA, charBufSize, characterPos, &writeArea);
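For reference, here is a minimal, self-contained sketch of the above (assuming wHnd is simply the standard output handle; error checking omitted):

#include <windows.h>

int main(void)
{
    HANDLE wHnd = GetStdHandle(STD_OUTPUT_HANDLE);

    CHAR_INFO letterA;
    letterA.Char.AsciiChar = 'A';
    letterA.Attributes = FOREGROUND_RED | FOREGROUND_INTENSITY |
                         BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_INTENSITY;

    COORD charBufSize = { 1, 1 };           /* letterA is a 1x1 source buffer */
    COORD characterPos = { 0, 0 };          /* read from its first element    */
    SMALL_RECT writeArea = { 5, 5, 5, 5 };  /* write to console cell (5, 5)   */

    WriteConsoleOutputA(wHnd, &letterA, charBufSize, characterPos, &writeArea);
    return 0;
}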
For a better understanding, use the following example and play a little with the values of charBufSize, characterPos and writeArea:
int i;
CHAR_INFO charInfo[10 * 10];

/* play with these values */
COORD charBufSize = {10, 10};  /* do not exceed x*y=100 !!! */
COORD characterPos = {5, 0};   /* must lie within the 10x10 buffer */
SMALL_RECT writeArea = {2, 2, 12, 12};

for (i = 0; i < (10 * 10); i++)
{
    charInfo[i].Char.AsciiChar = 'A' + (i % 26);
    charInfo[i].Attributes = FOREGROUND_RED | FOREGROUND_INTENSITY |
                             BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_INTENSITY;
}

WriteConsoleOutputA(wHnd, charInfo, charBufSize, characterPos, &writeArea);
Here is a screenshot of the example above, showing the console and the variables. I hope this makes it a bit clearer.
My question: how are we supposed to colorize the console background with only the COLORREF datatype as a parameter?
The most common way of colorizing the background is calling system("color --").
However, that approach is not allowed here, and I am tasked with finding out whether we can colorize the console background using only the COLORREF datatype.
I did some research, and what I came across was SetConsoleTextAttribute() and, again, system("color --").
This is what I expect my code to be:
COLORREF data = RGB(255, 0, 0); // red, basically
SetConsoleBackground(HDC *console, data);
Any way of doing this? Thanks in advance.
[NEW ANSWER (edit)]
So @IInspectable pointed out that the console now supports full 24-bit RGB colors, so I did some research and managed to make it work.
This is how I solved it:
#include <Windows.h>
#include <cstdio>
#include <cstdlib>
#include <string>

struct Color
{
    int r;
    int g;
    int b;
};

void SetBackgroundColor(const Color& aColor)
{
    std::string modifier = "\x1b[48;2;" + std::to_string(aColor.r) + ";"
                         + std::to_string(aColor.g) + ";" + std::to_string(aColor.b) + "m";
    printf("%s", modifier.c_str());
}

void SetForegroundColor(const Color& aColor)
{
    std::string modifier = "\x1b[38;2;" + std::to_string(aColor.r) + ";"
                         + std::to_string(aColor.g) + ";" + std::to_string(aColor.b) + "m";
    printf("%s", modifier.c_str());
}

int main()
{
    // Set output mode to handle virtual terminal sequences
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD dwMode = 0;
    GetConsoleMode(hOut, &dwMode);
    dwMode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING;
    SetConsoleMode(hOut, dwMode);

    SetForegroundColor({ 100, 100, 20 });
    SetBackgroundColor({ 50, 100, 10 });
    printf("Hello World\n");

    system("pause");
}
[OLD ANSWER]
The classic console API only supports 256 different color combinations, defined by the low 8 bits of the attribute WORD: the foreground color in the 4 lower bits and the background color in the 4 bits above it. This means the console only has support for 16 different colors:
enum class Color : int
{
    Black = 0,
    DarkBlue = 1,
    DarkGreen = 2,
    DarkCyan = 3,
    DarkRed = 4,
    DarkPurple = 5,
    DarkYellow = 6,
    DarkWhite = 7,
    Gray = 8,
    Blue = 9,
    Green = 10,
    Cyan = 11,
    Red = 12,
    Purple = 13,
    Yellow = 14,
    White = 15,
};
To set the background color of the typed characters, you could do:
void SetWriteColor(const Color& aColor)
{
    HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
    SetConsoleTextAttribute(hConsole, static_cast<WORD>(aColor) << 4);
}
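To also keep a foreground color, combine both halves of the attribute; a small sketch based on the layout described above:

// Bright white text on a dark blue background: the foreground color occupies
// the low 4 bits of the attribute, the background color the 4 bits above it.
HANDLE hConsole = GetStdHandle(STD_OUTPUT_HANDLE);
SetConsoleTextAttribute(hConsole,
    (static_cast<WORD>(Color::DarkBlue) << 4) | static_cast<WORD>(Color::White));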
I'm currently using Python 2.7 to pull pixel information from a series of bitmaps and write 32 bits of information per pixel to a file (with an arbitrary extension, ".bfs", to make it easy to find down the pipeline): 8 bits for the x position, 8 bits for the y position, and 16 bits for the color.
from PIL import Image
import struct

filename = raw_input('Please choose destination filename: ')

file_in = [0]*27
im = [0]*27

for i in range(1,27):
    file_in[i] = str(i)+".bmp"
    im[i] = Image.open(file_in[i])

file_out = open(filename+".bfs", 'w')
readable_out = open(filename+".txt", 'w')

for q in range(1,27):
    pix = im[q].load()
    width, height = im[q].size
    for y in range(height):
        for x in range(width):
            rgb = pix[x,y]
            red = rgb[0]
            green = rgb[1]
            blue = rgb[2]
            Uint16_val = (((31*(red+4))/255)<<11) | (((63*(green+2))/255)<<5) | ((31*(blue+4))/255)
            hex_16 = int('%.4x'%Uint16_val, 16)
            print(str(x)+", "+str(y)+", "+str(hex_16)+"\n")
            readable_out.write(str(x)+", "+str(y)+", "+str(hex_16)+"\n")
            file_out.write(struct.pack('<1B', x))
            file_out.write(struct.pack('<1B', y))
            file_out.write(struct.pack('<1H', hex_16))
On the PC side everything comes out clean, as I expect (this is copied from a .txt file that I also output, formatted to make it easier to read):
0, 0, 40208
1, 0, 33544
2, 0, 33544
3, 0, 39952
4, 0, 39944
5, 0, 33544
6, 0, 39688
7, 0, 39952
8, 0, 39944
9, 0, 33544
10, 0, 33800
11, 0, 39952
12, 0, 39952
13, 0, 33544
14, 0, 33800
15, 0, 48400
From here I'm taking the .bfs file and loading it onto an SD card for an Arduino Uno to read from. The Arduino code is supposed to read from the SD card, and output the x, y, and color values to a TFT LCD. Here is the Arduino Code:
#include <Adafruit_GFX.h>    // Core graphics library
#include <Adafruit_ST7735.h> // Hardware-specific library
#include <SPI.h>
#include <SD.h>

#define TFT_CS  10  // Chip select line for TFT display
#define TFT_RST  9  // Reset line for TFT (or see below...)
#define TFT_DC   8  // Data/command line for TFT
#define SD_CS    4  // Chip select line for SD card

Adafruit_ST7735 tft = Adafruit_ST7735(TFT_CS, TFT_DC, TFT_RST);

void setup(void) {
  Serial.begin(9600);
  tft.initR(INITR_144GREENTAB);

  Serial.print("Initializing SD card...");
  if (!SD.begin(SD_CS)) {
    Serial.println("failed!");
    return;
  }
  Serial.println("OK!");

  tft.fillScreen(0x0000);
}

uint32_t pos = 0;
uint8_t x, y;
uint8_t buffpix[3];
uint16_t c;

void loop() {
  bfsDraw("image.bfs");
}

#define BUFFPIXEL 20

void bfsDraw(char *filename) {
  File bfsFile;
  int w, h, row, col;
  uint8_t x, y;
  uint16_t c;
  uint32_t pos = 0, startTime = millis();

  if ((0 >= tft.width()) || (0 >= tft.height())) return;

  if ((bfsFile = SD.open(filename)) == NULL) {
    Serial.print("File not found");
    return;
  }

  w = 128;
  h = 128;
  tft.setAddrWindow(0, 0, 0+w-1, 0+h-1);

  for (row = 0; row < h; row++) {
    for (col = 0; col < w; col++) {
      x = bfsFile.read();
      Serial.print(x);
      Serial.print(", ");
      y = bfsFile.read();
      Serial.print(y);
      Serial.print(", ");
      c = read16(bfsFile);
      Serial.print(c);
      Serial.print(" ");
      Serial.println(" ");
      tft.drawPixel(x, y, c);
    }
  }
}

uint8_t read8(File f) {
  uint16_t result;
  ((uint8_t *)&result)[0] = f.read();
  return result;
}

uint16_t read16(File f) {
  uint16_t result;
  ((uint8_t *)&result)[0] = f.read();
  ((uint8_t *)&result)[1] = f.read();
  return result;
}
I have some print statements around the code that reads from the card before sending to the TFT, and instead of matching the file that (I think) I wrote, it outputs this:
0, 0, 40208
1, 0, 33544
2, 0, 33544
3, 0, 39952
4, 0, 39944
5, 0, 33544
6, 0, 39688
7, 0, 39952
8, 0, 39944
9, 0, 33544
13, 10, 2048
132, 11, 4096
156, 12, 4096
As you can see, the reading on the Arduino starts out matching the writing of the Python script, but after 9 the "X" byte has shifted into the middle instead of the leading position. My question is: what is causing this shift after x = 9? Is this a little-endian versus big-endian issue?
Thanks for your help!
You opened your file in text mode, not binary mode. On Windows, that means that every newline character (byte value 10) that you write gets converted into carriage return + linefeed (byte values 13, 10). Use 'wb' for the mode when opening the .bfs file.
Note that writing the coordinates of each pixel into the file is insane - you're doubling the size of the file for absolutely no benefit. You can easily recreate the coordinates as you're reading the file - in fact you're ALREADY DOING SO, in the form of the row and col variables!
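For instance, if the .bfs file were rewritten to hold just one 16-bit color per pixel, row by row (a sketch, not your current format), the inner loop of bfsDraw would shrink to:

// row/col already are the coordinates; only the color is read from the file.
for (row = 0; row < h; row++) {
  for (col = 0; col < w; col++) {
    c = read16(bfsFile);  // 2 bytes per pixel instead of 4
    tft.drawPixel(col, row, c);
  }
}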
I'm trying to use the following library here (the templated version), but in the example shown in the library the user defines the bounding boxes. In my problem I have data of unknown dimensionality each time, so I don't know how to use it. Apart from this, shouldn't the R-Tree be able to calculate the bounding boxes itself each time there is an insertion?
This is the sample code of the library, as you can see the user defines the bounding boxes each time:
#include <stdio.h>
#include "RTree.h"

struct Rect
{
    Rect() {}
    Rect(int a_minX, int a_minY, int a_maxX, int a_maxY)
    {
        min[0] = a_minX;
        min[1] = a_minY;
        max[0] = a_maxX;
        max[1] = a_maxY;
    }

    int min[2];
    int max[2];
};

struct Rect rects[] =
{
    Rect(0, 0, 2, 2),  // xmin, ymin, xmax, ymax (for 2 dimensional RTree)
    Rect(5, 5, 7, 7),
    Rect(8, 5, 9, 6),
    Rect(7, 1, 9, 2),
};

int nrects = sizeof(rects) / sizeof(rects[0]);

Rect search_rect(6, 4, 10, 6);  // search will find above rects that this one overlaps

bool MySearchCallback(int id, void* arg)
{
    printf("Hit data rect %d\n", id);
    return true; // keep going
}

int main()
{
    RTree<int, int, 2, float> tree;
    int i, nhits;
    printf("nrects = %d\n", nrects);

    for (i = 0; i < nrects; i++)
    {
        tree.Insert(rects[i].min, rects[i].max, i); // Note, all values including zero are fine in this version
    }

    nhits = tree.Search(search_rect.min, search_rect.max, MySearchCallback, NULL);
    printf("Search resulted in %d hits\n", nhits);

    // Iterator test
    int itIndex = 0;
    RTree<int, int, 2, float>::Iterator it;
    for (tree.GetFirst(it); !tree.IsNull(it); tree.GetNext(it))
    {
        int value = tree.GetAt(it);
        int boundsMin[2] = {0, 0};
        int boundsMax[2] = {0, 0};
        it.GetBounds(boundsMin, boundsMax);
        printf("it[%d] %d = (%d,%d,%d,%d)\n", itIndex++, value,
               boundsMin[0], boundsMin[1], boundsMax[0], boundsMax[1]);
    }

    // Iterator test, alternate syntax
    itIndex = 0;
    tree.GetFirst(it);
    while (!it.IsNull())
    {
        int value = *it;
        ++it;
        printf("it[%d] %d\n", itIndex++, value);
    }

    getchar(); // Wait for keypress on exit so we can read console output
    return 0;
}
An example of what I want to save in an R-Tree is:
-------------------------------
| ID | dimension1 | dimension2|
-------------------------------
| 1 | 8 | 9 |
| 2 | 3 | 5 |
| 3 | 2 | 1 |
| 4 | 6 | 7 |
-------------------------------
Dimensionality
There will be some limit in your requirements on the dimensionality. This is because computers only have finite storage, so they cannot store an infinite number of dimensions. Really it is a decision for you how many dimensions you wish to support. The most common numbers, of course, are two and three. Do you actually need to support eleven? When would you ever use it?
You can do this either by always using an R-tree with the maximum number of dimensions you support, passing zero for the unused coordinates, or preferably by creating several code paths, one for each supported number of dimensions, i.e. one set of routines for two-dimensional data, another for three-dimensional, and so on.
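For illustration, a sketch of the zero-padding variant, using the same RTree template as in the sample above (the unused third coordinate is simply left at zero):

// Hypothetical: a tree compiled for 3 dimensions storing a 2-D point.
RTree<int, int, 3, float> tree;
int min3[3] = { 8, 9, 0 };
int max3[3] = { 8, 9, 0 };
tree.Insert(min3, max3, 1);  // ID 1 from the question's table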
Calculating the bounding box
The bounding box is the rectangle or cuboid which is aligned to the axes, and completely surrounds the object you wish to add.
So if you are inserting axis-aligned rectangles/cuboids etc, then the shape is the bounding box.
If you are inserting points, the min and max of each dimension are just the point value of that dimension.
Any other shape, you have to calculate the bounding box. E.g. if you are inserting a triangle, you need to calculate the rectangle which completely surrounds the triangle as the bounding box.
The library can't do this for you because it doesn't know what you are inserting. You might be inserting spheres stored as centre + radius, or complex triangle mesh shapes. The R-Tree can provide the spatial index but needs you to provide that little bit of information to fill in the gaps.
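For the point data in the question's table, that means min and max coincide; a sketch (the array layout is assumed):

RTree<int, int, 2, float> tree;
int rows[4][3] = { {1, 8, 9}, {2, 3, 5}, {3, 2, 1}, {4, 6, 7} };  // ID, dimension1, dimension2
for (int i = 0; i < 4; i++)
{
    int point[2] = { rows[i][1], rows[i][2] };
    tree.Insert(point, point, rows[i][0]);  // a point's bounding box has min == max
}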
I have an enumeration of just under 32 absolute rectangle sizes, and, given some dimensions, I need to find the best approximation among my enumeration.
Is there any better (ie more readable and maintainable) way than the spaghetti code I am formulating out of lots of nested if's and else's?
At the moment I have just:
enum imgOptsScale {
    //Some relative scales
    w005h005 = 0x8,
    w010h010 = 0x9,
    w020h020 = 0xA,
    w040h040 = 0xB,
    w070h070 = 0xC,
    w100h100 = 0xD,
    w150h150 = 0xE,
    w200h200 = 0xF,
    w320h320 = 0x10,
    w450h450 = 0x11,
    w200h010 = 0x12,
    w200h020 = 0x13,
    w200h070 = 0x14,
    w010h200 = 0x15,
    w020h200 = 0x16,
    w070h200 = 0x17
};
imgOptsScale getClosestSizeTo(int width, int height);
and I thought I'd ask for help before I got much further into coding it up. I should emphasise a bias away from overly elaborate libraries: I am more interested in algorithms than containers, as this is supposed to run on a resource-constrained system.
I think I'd approach this with a few arrays of structs, one for horizontal measures and one for vertical measures.
Read through the arrays to find the next larger size, and return the corresponding key. Build the final box measure from the two keys. (Since 32 values only allow 5 bits, this is probably not ideal: you'd really want 2.5 bits for the horizontal and 2.5 bits for the vertical, but my simple approach here requires 6 bits, 3 for each axis. You can remove half the elements from one of the lists (and maybe adjust the << 3 as well) if you're fine with one of the dimensions having fewer degrees of freedom. If you want both dimensions to be better represented, this will probably require enough re-working that this approach might not be suitable.)
Untested pseudo-code:
struct dimen {
    int x;
    int key;
};

int find_just_larger(struct dimen *d, int size);

struct dimen horizontal[] = {{ .x =    10, .key = 0 },
                             { .x =    20, .key = 1 },
                             { .x =    50, .key = 2 },
                             { .x =    90, .key = 3 },
                             { .x =   120, .key = 4 },
                             { .x =   200, .key = 5 },
                             { .x =   300, .key = 6 },
                             { .x = 10000, .key = 7 }};

struct dimen vertical[]   = {{ .x =    10, .key = 0 },
                             { .x =    20, .key = 1 },
                             { .x =    50, .key = 2 },
                             { .x =    90, .key = 3 },
                             { .x =   120, .key = 4 },
                             { .x =   200, .key = 5 },
                             { .x =   300, .key = 6 },
                             { .x = 10000, .key = 7 }};

/* returns 0-63 as written */
int getClosestSizeTo(int width, int height) {
    int horizontal_key = find_just_larger(horizontal, width);
    int vertical_key = find_just_larger(vertical, height);
    return (horizontal_key << 3) | vertical_key;
}

int find_just_larger(struct dimen *d, int size) {
    int ret = d->key;
    while (d->x < size) {
        d++;
        ret = d->key;
    }
    return ret;
}
Yes ... place your 32 different sizes in a pre-built binary search tree, and then recursively search through the tree for the "best" size. Basically you would stop your search if the left child pre-built rectangle of the current node's rectangle is smaller than your input rectangle, and the current node's rectangle is larger than the input rectangle. You would then return the pre-defined rectangle that is "closest" to your input rectangle between the two.
One nice bonus, besides the clean code a recursive search produces, is that the search would be logarithmic rather than linear in time.
BTW, you will want to randomize the order that you insert the initial pre-defined rectangle values into the binary search tree, otherwise you will end up with a degenerate tree that looks like a linked list, and you won't get logarithmic search time since the height of the tree will be the number of nodes, rather than logarithmic to the number of nodes.
So for instance, if you've sorted the tree by the area of your rectangles (provided there are no two rectangles with the same area), then you could do something like the following:
//for brevity, find the rectangle that is the
//greatest rectangle smaller than the input
const rec_bstree* find_best_fit(const rec_bstree* node, const rec& input_rec)
{
    if (node == NULL)
        return NULL;

    const rec_bstree* return_node;
    if (input_rec.area < node->area)
        return_node = find_best_fit(node->left_child, input_rec);
    else if (input_rec.area > node->area)
        return_node = find_best_fit(node->right_child, input_rec);
    else
        return node;  // exact match

    if (return_node == NULL)
        return node;
    return return_node;
}
BTW, if a tree is too complex, you could also simply put instances of your rectangles into an array or std::vector, sort them by some criterion using std::sort, and then do a binary search on the array.
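A sketch of that variant, assuming the rectangles are kept in a std::vector sorted by ascending area (std::lower_bound does the binary search; the struct and names are illustrative):

#include <algorithm>
#include <vector>

struct rec { int width, height, area; imgOptsScale id; };

// rects must already be sorted by ascending area, e.g. via std::sort.
imgOptsScale find_closest_by_area(const std::vector<rec>& rects, int area)
{
    auto it = std::lower_bound(rects.begin(), rects.end(), area,
        [](const rec& r, int a) { return r.area < a; });
    if (it == rects.begin()) return it->id;        // smaller than all entries
    if (it == rects.end())   return (it - 1)->id;  // larger than all entries
    // otherwise pick whichever neighbour is closer in area
    return (it->area - area) < (area - (it - 1)->area) ? it->id : (it - 1)->id;
}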
Here is my proposed solution:
enum imgOptsScale {
    notScaled = 0x0,
    //7 relative scales up to 0x7
    w010h010 = 0x8,
              w010h025, w010h060, w010h120, w010h200, w010h310, w010h450,
    w025h010, w025h025, w025h060, w025h120, w025h200, w025h310, w025h450,
    w060h010, w060h025, w060h060, w060h120, w060h200, w060h310, w060h450,
    w120h010, w120h025, w120h060, w120h120, w120h200, w120h310, w120h450,
    w200h010, w200h025, w200h060, w200h120, w200h200, w200h310, w200h450,
    w310h010, w310h025, w310h060, w310h120, w310h200, w310h310, w310h450,
    w450h010, w450h025, w450h060, w450h120, w450h200, w450h310, w450h450,
    w730h010, w730h025, w730h060, w730h120, w730h200, w730h310, w730h450
};

//Only call if width and height are actually specified, else 0 => 10px
imgOptsScale getClosestSizeTo(int width, int height) {
    static const int possSizes[] = {10, 25, 60, 120, 200, 310, 450, 730};
    static const int sizesHalfways[] = {17, 42, 90, 160, 255, 380, 590};

    int widthI = 7;   // 8 width steps, so start at the last index
    while (widthI > 0 && sizesHalfways[widthI - 1] > width) --widthI;
    int heightI = 6;  // only 7 height steps (no 730-pixel height)
    while (heightI > 0 && sizesHalfways[heightI - 1] > height) --heightI;

    return (imgOptsScale)(8 + 7 * widthI + heightI);
}
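Hypothetical usage, as a sanity check of the midpoint logic:

imgOptsScale s = getClosestSizeTo(130, 300);
// 130 lies between the 90 and 160 midpoints -> w120; 300 is above the
// 255 midpoint but below 380 -> h310; so s should be w120h310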
I am trying to draw text in OpenGL. I used a small C# program to draw all characters from 0x21 to 0x7E into a bitmap, which I load as a texture and try to use to draw my text. It appears to work, but I get some weird problems: http://i54.tinypic.com/mmd7k8.png Notice the extra pixel next to 'a' and the fact that 'c' is cut off slightly. The whole thing appears to be shifted slightly. Does anybody know why I get this problem? I don't want to use a library such as gltext, because I don't want to depend on libfreetype as I am only drawing a few strings. Here is the code that I am using:
static void textDrawChar(char c, int x, int y)
{
    GLfloat vert[8];
    GLfloat texcoord[8];

    vert[0] = x;                            vert[1] = y;
    vert[2] = x;                            vert[3] = y + TEXT_HEIGHT;
    vert[4] = x + font_char_width[c-0x21];  vert[5] = y + TEXT_HEIGHT;
    vert[6] = x + font_char_width[c-0x21];  vert[7] = y;

    texcoord[0] = (float)(font_char_pos[c-0x21])/TOTAL_WIDTH;
    texcoord[1] = 1.0f;
    texcoord[2] = (float)(font_char_pos[c-0x21])/TOTAL_WIDTH;
    texcoord[3] = 0.0f;
    texcoord[4] = (float)(font_char_pos[c-0x21] + font_char_width[c-0x21])/TOTAL_WIDTH;
    texcoord[5] = 0.0f;
    texcoord[6] = (float)(font_char_pos[c-0x21] + font_char_width[c-0x21])/TOTAL_WIDTH;
    texcoord[7] = 1.0f;

    glBindTexture(GL_TEXTURE_2D, texture);
    glVertexPointer(2, GL_FLOAT, 0, vert);
    glTexCoordPointer(2, GL_FLOAT, 0, texcoord);
    glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, indices);
}

void textDraw(void)
{
    textDrawChar('a', 300, 300);
    textDrawChar('b', 300 + font_char_width['a'-0x21], 300);
    textDrawChar('c', 300 + font_char_width['a'-0x21] + font_char_width['b'-0x21], 300);
}
EDIT: I added 4 to the texture coordinates (0, 2, 4, 6) and that seems to have fixed the problem. However, now I need to know whether that will continue to work, or may break for unknown reasons. Here is the bitmap creation code, in case the problem might be in there.
using System;
using System.Drawing;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Collections;

namespace font_to_bmp
{
    class Program
    {
        static void Main(string[] args)
        {
            Bitmap b = new Bitmap(1715, 36);
            Graphics g = Graphics.FromImage(b);
            g.FillRectangle(new SolidBrush(Color.Transparent), 0, 0, 1715, 36);

            Font f = new Font("Liberation Sans", 24, GraphicsUnit.Point);
            g.PageUnit = GraphicsUnit.Pixel;

            int curx = 0;

            StreamWriter w = new StreamWriter(new FileStream("font_width.h", FileMode.Create, FileAccess.Write, FileShare.Read));
            w.WriteLine("static const int font_char_width[] = {");

            ArrayList hax = new ArrayList();

            for (int i = 0x21; i <= 0x7E; i++)
            {
                char c = (char)i;
                string s = c.ToString();
                SizeF sz = g.MeasureString(s, f);
                StringFormat sf = new StringFormat();
                sf.SetMeasurableCharacterRanges(new CharacterRange[] { new CharacterRange(0, 1) });
                Region r = g.MeasureCharacterRanges(s, f, new RectangleF(0, 0, 1000, 1000), sf)[0];
                RectangleF r2 = r.GetBounds(g);
                Console.WriteLine("{0} is {1}x{2}", s, r2.Width, r2.Height);
                int w_int = (int)(Math.Ceiling(r2.Width));
                g.DrawString(s, f, new SolidBrush(Color.Black), curx, 0);
                hax.Add(curx);
                curx += w_int;
                w.WriteLine("\t{0},", w_int);
            }

            Console.WriteLine("Total width {0}", curx);

            w.WriteLine("};");
            w.WriteLine();
            w.WriteLine("static const int font_char_pos[] = {");
            foreach (int z in hax)
            {
                w.WriteLine("\t{0},", z);
            }
            w.WriteLine("};");

            w.Close();
            b.Save("font.png");
        }
    }
}
Addressing the pixels exactly in a texture is a bit tricky. See my answer given in OpenGL Texture Coordinates in Pixel Space
This has been asked a few times, but I don't have the links at hand, so a quick and rough explanation. Let's say the texture is 8 pixels wide:
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
^   ^   ^   ^   ^   ^   ^   ^   ^
0.0 |   |   |   |   |   |   |   1.0
|   |   |   |   |   |   |   |   |
0/8 1/8 2/8 3/8 4/8 5/8 6/8 7/8 8/8
The digits denote the texture's pixels, and the bars the edges of the texture, which in the case of nearest filtering are also the borders between pixels. You, however, want to hit the pixels' centers, so you're interested in the texture coordinates
(0/8 + 1/8)/2 = 1 / (2 * 8)
(1/8 + 2/8)/2 = 3 / (2 * 8)
...
(7/8 + 8/8)/2 = 15 / (2 * 8)
Or more generally, for pixel i (counting from 0) in an N pixel wide texture, the proper texture coordinate is

(2i + 1)/(2N)
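As a tiny helper this could be written as (names assumed):

/* Texture coordinate of the center of pixel i in an n-pixel-wide texture. */
float texel_center(int i, int n)
{
    return (2.0f * i + 1.0f) / (2.0f * n);
}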
However, if you want to align your texture perfectly with the screen's pixels, remember that what you specify as coordinates are not a quad's pixels but its edges, which, depending on the projection, may align with screen pixel edges rather than centers, and thus may require other texture coordinates.