How can I add line breaks in a wchar_t array? - c++

I'm working on a console game. It uses a screen buffer to refresh the console window after updating the map.
Here's the main while loop.
while (true) {
    //player.doStuff(_kbhit());
    //map.update();
    WriteConsoleOutputCharacter(
        console.getScreenBuffer(),
        (LPWSTR)map.getScreen(),
        map.getWidth() * map.getHeight(),
        { 0, 0 }, console.getBytesWritten()
    );
    Sleep(1000 / 30);
}
Before this loop, I'm getting the layout of the map from a .txt file:
class Map {
    int width, height;
    wchar_t* screen;
public:
    wchar_t* getScreen() {
        return screen;
    }
    void setScreen(std::string layoutFile, std::string levelDataFile) {
        std::ifstream levelData(levelDataFile);
        levelData >> width >> height;
        screen = new wchar_t[(width + 1) * height];
        levelData.close();
        std::wifstream layout(layoutFile);
        std::wstring line;
        for (int j = 0; j < height; j++) {
            std::getline<wchar_t>(layout, line);
            for (int i = 0; i < width; i++) {
                screen[j * width + i] = line.at(i);
            }
            screen[width * (j + 1)] = L'\n';
        }
        layout.close();
    }
};
map.setScreen("demo.txt", "demo_data.txt");
The problem is that the printed map displays as one string without any line breaks, like this:
00000__00000
When I expected it to look like this instead:
0000
0__0
0000
I tried adding L'\n', L'\r\n' after every line written, but it doesn't work.

In short
There are two independent problems here:
The first one is that your newline characters get overwritten.
The second, which only shows up once you've corrected the first, is that the Windows console API does not handle newlines.
More details
The problem with the overwriting
I assume that the width does not include the trailing newline at the end of each line, since your allocation for the screen is:
new wchar_t[(width + 1) * height]; // real width is width + 1 for '\n'
Now looking at your logic, the '\n' that you append to each line is written at:
screen[ width * (j + 1) ] // same as screen[ j * width + width ]
This seems fine according to your indexing scheme, since you copy the layout characters to:
screen[ j * width + i ] // where i is between 0 and width-1
so the newline would be at screen[ j * width + width ].
Unfortunately, with your indexing formula, the first character of the next line overwrites the same place: replacing j with j+1 and i with 0 gives:
screen[ (j + 1) * width + 0 ]
which is
screen[ (j + 1) * width ] // same location as screen [ width * (j+1)]
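For example, with width = 4: the newline of row 0 goes to screen[4 * (0 + 1)] = screen[4], and the very next iteration writes the first character of row 1 to screen[1 * 4 + 0] = screen[4], wiping the newline out.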
The solution for having trailing newlines on every line
Correct your indexing scheme, taking into account that the real width of the line is width+1.
So the indexing formula becomes:
for (int i = 0; i < width; i++) {
    screen[j * (width + 1) + i] = line.at(i);
}
and of course the trailing newline:
screen[j * (width+1) + width] = L'\n'; // newline in the extra slot at the end of the row
screen[(j+1) * (width+1) - 1] = L'\n'; // equivalent alternative
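Putting it together, the copy loop becomes the following (a minimal sketch reusing the question's names; note that the character count passed to WriteConsoleOutputCharacter would then be (width + 1) * height):
for (int j = 0; j < height; j++) {
    std::getline(layout, line);
    for (int i = 0; i < width; i++) {
        screen[j * (width + 1) + i] = line.at(i); // row stride is width + 1
    }
    screen[j * (width + 1) + width] = L'\n';      // no longer overwritten
}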
The problem with the console API
WriteConsoleOutputCharacter() provides no real support for newlines or other control characters: they are displayed as a question mark.
The documentation suggests that control characters may be processed depending on the console mode, but I've tried it on Windows 10, even with the variant WriteConsoleOutputCharacterA() (to rule out wide-character issues), and it simply does not work.
You have two ways to make this work, but both require some rework:
display the output line by line and control the cursor position accordingly (a sketch follows right after this list);
use WriteConsoleOutput(), which lets you specify the target rectangle (height and width) and write the characters into that rectangle with no need for newlines. Unfortunately, the array must then contain CHAR_INFO structures instead of plain characters.
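A minimal sketch of the first way, assuming the question's Map and console accessors and the corrected row stride of width + 1:
// Write each map row at console row j; the '\n' slots are simply skipped.
for (int j = 0; j < map.getHeight(); j++) {
    WriteConsoleOutputCharacterW(
        console.getScreenBuffer(),
        map.getScreen() + j * (map.getWidth() + 1), // start of row j
        map.getWidth(),                             // row characters only, no '\n'
        { 0, (SHORT)j },                            // cursor position for this row
        console.getBytesWritten());
}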
Example for the second way:
std::string w = "SATORAREPOTENETOPERAROTAS";
SMALL_RECT sr{ 2, 2, 6, 6 }; // target rectangle on the screen (inclusive bounds)
CHAR_INFO t[25];
for (int i = 0; i < 25; i++) { // fill a 5x5 buffer of CHAR_INFO cells
    t[i].Char.AsciiChar = w[i];
    t[i].Attributes = BACKGROUND_GREEN;
}
WriteConsoleOutputA(
    hOut,     // console screen buffer handle
    t,        // source buffer
    { 5, 5 }, // dimensions of the source buffer
    { 0, 0 }, // upper-left cell of the source to start from
    &sr       // screen rectangle to write to
);
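Note that the { 5, 5 } buffer size together with the sr target rectangle is what makes the API lay the 25 characters out as a 5x5 block; no newline characters are involved at all.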

Related

Why does my bitmap image have another color overlay after converting 32-bit to 8-bit

I'm working on resizing a bitmap image and converting it to 8-bit (grayscale). The problem is that when I convert a 32-bit image to 8-bit, the result has a color overlay, while it works perfectly on 24-bit images. I guess the cause is the alpha channel, but I don't know where exactly the problem is.
This is my code to generate 8-bit palette color and write it after DIB part:
char* palette = new char[1024];
for (int i = 0; i < 256; i++) {
    palette[i * 4] = palette[i * 4 + 1] = palette[i * 4 + 2] = (char)i;
    palette[i * 4 + 3] = 255;
}
fout.write(palette, 1024);
delete[] palette;
As I said, my code works perfectly on 24-bit. On 32-bit images the color is still kept after resizing, but when converting to 8-bit it looks like this:
expected image (when converted from 24-bit)
unexpected image (when converted from 32-bit)
This is how I get the colors and save them to srcPixel[]:
int i = 0;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int index = getIndex(width, x, y);
        srcPixel[index].A = srcBMP.pImageData[i];
        i += alpha;
        srcPixel[index].B = srcBMP.pImageData[i++];
        srcPixel[index].G = srcBMP.pImageData[i++];
        srcPixel[index].R = srcBMP.pImageData[i++];
    }
    i += padding;
}
And this is the code where I convert it by taking the average of the four channels A, B, G and R from srcPixel[]:
int i = 0;
for (int y = 0; y < dstHeight; y++) {
    for (int x = 0; x < dstWidth; x++) {
        int index = getIndex(dstWidth, x, y);
        dstBMP.pImageData[i++] = (srcPixel[index].A + srcPixel[index].B + srcPixel[index].G + srcPixel[index].R) / 4;
    }
    i += dstPadding;
}
If I remove and skip all alpha bytes in my code, the converted image still looks like that, and I get another problem: when resizing, the image gets a color overlay similar to the one from the 8-bit conversion (resizing without alpha channel).
If I skip the alpha channel while taking the average (changing the line to dstBMP.pImageData[i++] = (srcPixel[index].B + srcPixel[index].G + srcPixel[index].R) / 3), there is almost no difference; the overlay still exists.
If I remove palette[i * 4 + 3] = 255; or change it in any way, the result is not affected either.
Thank you very much.
You add the alpha channel to the color average, and that's why it becomes brighter. From here I found that opaque is 255 and transparent is 0, so you are adding another channel that is effectively set to 'white' to your result.
Remove the alpha channel from your equation and see if I'm right.
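For example, a fully opaque black pixel (A = 255, B = G = R = 0) averages to (255 + 0 + 0 + 0) / 4 = 63, a dark gray, so every color is pushed towards white. A minimal sketch of the suggested change, using the question's variable names:
// Grayscale from the color channels only; alpha no longer brightens the result.
dstBMP.pImageData[i++] =
    (srcPixel[index].B + srcPixel[index].G + srcPixel[index].R) / 3;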

Reading BMP file into an array

I am writing a longer program and I found myself needing to read a .bmp file into an array in a specific way so that the rest of the program can use it without extensive rewrites. I failed to find older answers that would resolve my problem, and I am pretty much at the beginner stages.
The image I am trying to read is used to create a text font, so I want to read it character by character into an array, where the pixels belonging to one character are added, in order, to a 2D bool array [character_id][pixel_n] (true if the pixel is not black). The dimensions of the characters are predetermined and known, and the file is cropped so that they all appear in a single row with no unaccounted margins.
This is the specific file I am trying to read, though here it might not show up as .bmp
As an example, shown here, I want to read the pixels in the order of the yellow line, then jump to another character. For clarity each character is 5px wide and 11px high, with 1px of margin on both sides horizontally.
Based on what I was able to find, I have written a function to do it, but I fail to make it work as intended; as far as I can tell, even the pixel values are not being read correctly:
void readBMP(char* filename)
{
    int i;
    FILE* f = fopen(filename, "rb");
    unsigned char info[54];
    // read the 54-byte header
    fread(info, sizeof(unsigned char), 54, f);
    // extract image height and width from header
    int width = *(int*)&info[18];
    int height = *(int*)&info[22];
    // number of pixels in total
    int size = 3 * width * height;
    unsigned char* data = new unsigned char[size];
    // number of characters to read
    int counter1 = size / ((font_width + 2) * font_height) / 3;
    // read the rest of the data at once
    fread(data, sizeof(unsigned char), size, f);
    fclose(f);
    // loop that goes from character to character
    for (int i = 0; i < counter1; i++)
    {
        int tmp = 0;
        // loop that reads one character into font_ref array
        for (int j = 0; j < font_height; j++)
        {
            // loop for each row of a character
            for (int k = 0; k < font_width; k++)
            {
                int w = static_cast<int>(data[3*(j*(font_width+2)*(counter1) + i*(font_width + 2) + 1 + k + j*font_width + j)-1]);
                if (w != 0)
                    font_ref[i][tmp] = 1;
                else
                    font_ref[i][tmp] = 0;
                tmp++;
            }
        }
    }
}
(bool font_ref [150][font_width*font_height]; is the array where the font is being loaded and stored)
This code reads something, but the result is a seemingly random mess that I am unable to resolve. Here is an example of the lowercase alphabet printed using another function in the program, where white pixels represent true bools. I am aware that some libraries exist for working with graphical files, but in this program I wanted to avoid them to learn more lower-level things, and the goal is rather limited and specific.
Thank you in advance for any help with the issue.
The main errors are in the offset computation for a pixel in the bitmap data:
int w = static_cast<int>(data[3*(j*(font_width+2)*(counter1) + i*(font_width + 2) + 1 + k + j*font_width + j)-1]);
j*(font_width+2)*(counter1) - This doesn't take into account that
although you say the file is cropped, there is extra black space to the right of the last character cell, so the true width must be used;
(as drescherjm and user3386109 mentioned) padding bytes are appended to the rows so that their length is a multiple of four bytes.
+ j*font_width + j)-1 - This part makes no sense; perhaps you tried to compensate for the above errors.
This would be correct:
int w = data[j * ((3*width + 3) & ~3) + 3 * (i * (font_width + 2) + 1 + k)];
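A hedged sketch of how that formula plugs into the original loops (same variable names as the question; note that the fread would then also need to read the padded size, row stride times height, instead of 3 * width * height):
int rowStride = (3 * width + 3) & ~3; // 3 bytes per pixel, rows padded to 4 bytes
for (int i = 0; i < counter1; i++) {
    int tmp = 0;
    for (int j = 0; j < font_height; j++) {
        for (int k = 0; k < font_width; k++) {
            // blue byte of the pixel in column i*(font_width+2)+1+k of row j
            int w = data[j * rowStride + 3 * (i * (font_width + 2) + 1 + k)];
            font_ref[i][tmp++] = (w != 0);
        }
    }
}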

My code is printing strange characters instead of what it is meant to print

I'm trying to make a simple console game, and I am making a sort of console graphics engine that draws a screen with a map and some text. For some reason the engine writes strange characters to the console, rather than what is meant to be written.
This engine takes two 2d vectors that describe the characters and colors to be used in the console. I am using WriteConsole() to write to the console, and using SetConsoleTextAttribute() to change the color of the text as I draw. For some reason, when I try to print text, it writes some really weird characters that have no relation to the characters that are meant to be printed. Colors work just fine though. My characters are stored as TCHARs and my colors as ints.
My function to actually draw the screen:
void update()
{
    for (int y = 0; y < SCREEN_HEIGHT; y++) // loop through all of the tiles
    {
        for (int x = 0; x < SCREEN_WIDTH; x++)
        {
            if (screen.at(y).at(x) != buffer.at(y).at(x) && screenColors.at(y).at(x) != bufferColors.at(y).at(x)) // only draw the tile if it has changed
            {
                pos.X = x; // set coords of cursor to tile to be drawn
                pos.Y = y;
                SetConsoleTextAttribute(hStdOut, bufferColors.at(y).at(x)); // set the text color
                SetConsoleCursorPosition(hStdOut, pos); // move the cursor to the tile to be drawn
                WriteConsole(hStdOut, &(buffer.at(y).at(x)), 1, dw, NULL); // actually draw the tile
                screen.at(y).at(x) = buffer.at(y).at(x); // update 2d screen vector (used to read what is on the screen for other reasons)
            }
        }
    }
    SetConsoleCursorPosition(hStdOut, restPos); // move the cursor away, so it doesn't look ugly
}
My function to write to the buffer vector:
void draw2dVector(int x, int y, vector<vector<TCHAR>> draw, vector<vector<int>> colors)
{
    for (unsigned int drawY = 0; drawY < draw.size(); drawY++)
    {
        for (unsigned int drawX = 0; drawX < draw.front().size(); drawX++)
        {
            buffer.at(y + drawY).at(x + drawX) = colors.at(drawY).at(drawX); // <- I found the problem. I am writing color ints to the buffer.
            bufferColors.at(y + drawY).at(x + drawX) = colors.at(drawY).at(drawX);
        }
    }
}
My function to convert strings to vector<TCHAR>s.
vector<TCHAR> stringToTCHARvector(string str, int strLen) // convert a string to a TCHAR vector
{
    vector<TCHAR> result;
    for (int i = 0; i < strLen; i++) // loop through the characters in the string
    {
        result.push_back(str[i]); // add the character to the TCHAR vector
    }
    return result;
}
I expect the output to look something like this:
###....### Inventory:
####....## Gold Piece
#..##...##
#......###
###.#..###
##########
But instead it gives me this:
êêêêêêêêêê *insert strange characters here, because stack overflow doesn't
êêêêêwwêêê show them*
êêêêwwwwww
êêêêwwwwww
êêêêêwwwww
êêêêêwwwww
UPDATE:
It turns out that the problem was that I was writing the ints from my color vector to my TCHAR buffer. I have resolved the issue. Thank you for your help.
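For reference, the corrected assignment in draw2dVector would read from the character vector instead (a sketch with the same names):
buffer.at(y + drawY).at(x + drawX) = draw.at(drawY).at(drawX); // characters, not color ints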

How to clear an area from text with WriteConsoleOutputCharacter()?

So, I have a C++ class that represents rectangles. I use the WriteConsoleOutputCharacter function, which outputs faster than cout or printf(). I've already made a program that prints the rectangle, but I'm having trouble clearing the rectangle out.
From my understanding of MSDN, this function can print Unicode characters or 8-bit characters from the console's current code page. Anyway, when I tried to print a backspace in order to clear the rectangle, it didn't work and printed something else (◘). When I tried to print backspace through its hex code (0x008), it printed the symbol again.
The code is quite simple:
const char clr[] = "\b"; // That's the array I'm printing
void rect_m::clr_ar()
{
    Ex = Vx + Lx; // These variables are the rectangle's sizes
    Ey = Vy + Ly;
    HANDLE mout = GetStdHandle(STD_OUTPUT_HANDLE);
    // The loops cover the rectangle area
    for (SHORT i = Vy; i < Ey; i++)
    {
        for (SHORT j = Vx; j < Ex; j++)
        {
            WriteConsoleOutputCharacter(mout, clr, strlen(clr), { j, i }, &dwWritten);
        }
    }
}
Well, all I want is a way to print backspace with the WriteConsoleOutputCharacter function to clear the text out (and not by printing spaces over it). I know that's a very basic mistake and that there is a better way. So, can someone tell me please what's wrong with my code?
To clear a rectangular area we can use ScrollConsoleScreenBufferW to fill the selected rectangle with blank characters. Note that a blank character is equal to an empty space, which we can verify by calling ReadConsoleOutputCharacter at the start, on a still-empty console:
COORD xy{};
ULONG n;
WCHAR c;
ReadConsoleOutputCharacterW(hConsoleOutput, &c, 1, xy, &n);
// c == ' '
So the full code can look like this:
BOOL cls(const SMALL_RECT* lpScrollRectangle = 0)
{
    HANDLE hConsoleOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_SCREEN_BUFFER_INFO csbi;
    if (GetConsoleScreenBufferInfo(hConsoleOutput, &csbi))
    {
        CHAR_INFO fi = { ' ', csbi.wAttributes };
        if (!lpScrollRectangle)
        {
            csbi.srWindow.Left = 0;
            csbi.srWindow.Top = 0;
            csbi.srWindow.Right = csbi.dwSize.X - 1;
            csbi.srWindow.Bottom = csbi.dwSize.Y - 1;
            lpScrollRectangle = &csbi.srWindow;
        }
        return ScrollConsoleScreenBufferW(hConsoleOutput, lpScrollRectangle, 0, csbi.dwSize, &fi);
    }
    return FALSE;
}
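A possible usage from the question's clr_ar(), assuming the same Vx/Vy/Ex/Ey members (SMALL_RECT bounds are inclusive, hence the -1):
void rect_m::clr_ar()
{
    Ex = Vx + Lx;
    Ey = Vy + Ly;
    SMALL_RECT area{ Vx, Vy, (SHORT)(Ex - 1), (SHORT)(Ey - 1) };
    cls(&area); // blank exactly the rectangle's cells
}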

Can RGB values be negative?

I'm trying to draw a picture using glVertex. Here is my code:
struct Pixel {
    GLbyte R, G, B;
};
GLbyte* originalData = NULL;
. . .
originalData = (GLbyte*)malloc(IMAGE_SIZE);
fread(originalData, IMAGE_SIZE, 1, file);
for (int n = 0; n < 256 * 256; n++) {
    pixels[n].R = data[n * 3 + 0];
    pixels[n].G = data[n * 3 + 1];
    pixels[n].B = data[n * 3 + 2];
    if (pixels[n].R < (GLbyte)0) std::cerr << "??" << std::endl;
}
And the Display Function:
glBegin(GL_POINTS);
unsigned int i = 0;
for (unsigned row = 0; row < iWidth; row++) {
    for (unsigned col = 0; col < iHeight; col++) {
        glColor3b(pixels[i].R, pixels[i].G, pixels[i].B);
        glVertex3f(row, col, 0.0f);
        i++;
    }
}
glEnd();
When I use glDrawPixels(256, 256, GL_RGB, GL_UNSIGNED_BYTE, originalData); everything is OK, but colors get mixed up when I use my method.
Can RGB values be negative? When I use
glColor3b(abs(pixels[i].R), abs(pixels[i].G), abs(pixels[i].B));
my output looks better (but again some colors get mixed up).
NOTE1: I'm trying to rasterize a .raw file that I created with Photoshop
NOTE2: I know my method is naive, but I'm experimenting with things
You are using glColor3b, which interprets the arguments as signed bytes. So any color value >= 128 will be interpreted as negative and clamped to 0 later in the pipeline (assuming reasonable defaults).
Since you want the full range 0-255, just use glColor3ub and the type GLubyte, which is for unsigned bytes.
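A minimal sketch of that change against the question's code:
struct Pixel {
    GLubyte R, G, B; // unsigned: the full 0-255 range survives
};
// ...
glColor3ub(pixels[i].R, pixels[i].G, pixels[i].B); // unsigned-byte variant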