What's wrong with the definition of this struct? - c++

I have defined the screen with a struct as below:
struct
{
private:
    // data and attributes
    char character : 8;
    unsigned short int foreground : 3;
    unsigned short int intensity : 1;
    unsigned short int background : 3;
    unsigned short int blink : 1;
public:
    unsigned short int row;
    unsigned short int col;
    // takes row, column, data and attributes, then sets that cell of the
    // screen in text-mode view with the given data
    void setData(unsigned short int arg_row,
                 unsigned short int arg_col,
                 char arg_character,
                 unsigned short int arg_foreground,
                 unsigned short int arg_intensity,
                 unsigned short int arg_background,
                 unsigned short int arg_blink)
    {
        // a far pointer to the first cell of the screen in text-mode view
        int far *SCREEN = (int far *) 0xB8000000;
        row = arg_row;
        col = arg_col;
        character = arg_character;
        foreground = arg_foreground;
        intensity = arg_intensity;
        background = arg_background;
        blink = arg_blink;
        *(SCREEN + row * 80 + col) = (blink * 32768) + (background * 4096) + (intensity * 2048) + (foreground * 256) + character;
    }
} SCREEN;
But when I use characters with an ASCII code above 128 with this struct, the data gets corrupted. I defined the character field as 8 bits, so what's wrong with this definition?

In the C++ compiler you use, char is apparently signed, so an 8-bit char holds values from -128 to 127 (assuming two's complement representation for negative values). If you want to be guaranteed to fit values greater than or equal to 128, use unsigned char.
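For example, a minimal sketch of the fix (the struct name Cell and the demo are illustrative, not from the question; only the bit-field's type changes):

#include <iostream>

struct Cell
{
    unsigned char character : 8; // unsigned: holds 0..255, so codes >= 128 fit
};

int main()
{
    Cell c;
    c.character = 200;                     // with a signed char bit-field this would typically read back as -56
    std::cout << int(c.character) << '\n'; // prints 200
    return 0;
}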

Related

MSVC: shift count negative or too big

I am trying to wrap my head around a warning I get from MSVC. From what I can tell, the warning seems to be bogus, but I would like to be sure.
I am trying to convert an off_t to an offset in an OVERLAPPED. Given an off_t named offset and an OVERLAPPED named overlapped, I am trying the following:
overlapped.Offset = static_cast<DWORD>(offset);
if constexpr (sizeof(offset) > 4) {
    overlapped.OffsetHigh = static_cast<DWORD>(offset >> 32);
}
MSVC complains about the bit-shift, claiming the shift count is either negative or too big. Since it's clearly not negative, and even MSVC should be able to tell that, it must think it's too big.
How could it be too big? The code in question is only compiled if the size of an off_t is greater than 4, so it must be at least 5 bytes (but probably 8). At 8 bits per byte, that means a minimum of 40 bits, which is more than 32.
What is going on here?
Could it be the assignment into overlapped.OffsetHigh, and not your explicit shift, that is causing the warning? The following code generates the same warning on VS2015 compiling for x86 32-bit:
struct Clump
{
    unsigned int a : 32;
    unsigned int b : 32;
    unsigned int c : 32;
};

unsigned int x = 0;
unsigned int y = 0;
unsigned int z = 0;
Clump clump = { x, y, z }; // This is line 1121.
1>E.cpp(1121): warning C4293: '<<': shift count negative or too big, undefined behavior
But removing the bit-fields, there is no warning:
struct Clump
{
    unsigned int a;
    unsigned int b;
    unsigned int c;
};

unsigned int x = 0;
unsigned int y = 0;
unsigned int z = 0;
Clump clump = { x, y, z }; // No warning.

Random data in image-size field in the bitmap file header created with c++

This is my first question on Stack Overflow. I'm new to image processing and to C++, and I'm working with bitmap files now. When I create a bitmap file using C++, the file cannot be opened with any viewer. I used a hex editor to view the file, and there was random data in the image-size field of the info header. After editing that field in the hex editor, the bitmap is viewable. I don't know what is wrong with the code.
The header (bitmap.h) I created is as follows
#include <iostream>
#include <fstream>
using namespace std;

struct BmpSignature
{
    unsigned char data[2];
    BmpSignature() { data[0] = data[1] = 0; }
};

struct BmpHeader
{
    unsigned int fileSize;    // size of the full image including the headers; 4 bytes wide
    unsigned short reserved1; // reserved; 2 bytes wide
    unsigned short reserved2; // also reserved; 2 bytes wide
    unsigned int dataOffset;  // starting location of the image data array
};

struct BmpInfoHeader
{
    unsigned int size;          // size of the bitmap info header; 4 bytes wide
    unsigned int width;         // width of the image
    unsigned int height;        // height of the image
    unsigned short planes;      // number of planes in the image
    unsigned short bitCount;    // number of bits per pixel, e.g. 24 bits or 8 bits
    unsigned short compression; // whether the image is compressed or not
    unsigned int ImageSize;     // actual size of the image data
    unsigned int XPixelsPerM;   // pixels per metre in the X direction, usually 2834
    unsigned int YPixelsPerM;   // pixels per metre in the Y direction, usually 2834
    unsigned int ColoursUsed;   // number of colours used in the image
    unsigned int ColoursImp;    // number of important colours; usually 0 if all colours are important
};
The .cpp file I created is as follows (Create_Bitmap.cpp):
#include"bitmap.h"
#include<cmath>
#include<fstream>
using namespace std;
int main()
{
ofstream fout;
fout.open("D:/My Library/test1.bmp", ios::out |ios::binary);
BmpHeader header;
BmpInfoHeader infoheader;
BmpSignature sign;
infoheader.size = 40;
infoheader.height = 15;
infoheader.width = 15;
infoheader.planes = 1;
infoheader.bitCount = 8;
infoheader.compression = 0;
infoheader.ImageSize = 0;
infoheader.XPixelsPerM = 0;
infoheader.YPixelsPerM = 0;
infoheader.ColoursUsed = 0;
infoheader.ColoursImp = 0;
unsigned char* pixelData;
int pad=0;
for (int i = 0; i < infoheader.height * infoheader.width; i++)
{
if ((i) % 16 == 0) pad++;
}
int arrsz = infoheader.height * infoheader.width + pad;
pixelData = new unsigned char[arrsz];
unsigned char* offsetData;
offsetData = new unsigned char[4 * 256];
int xn = 0;
int yn = 4 * 256;
for (int i = 0; i < yn; i+=4)
{
offsetData[i] = xn;
offsetData[i+1] = xn;
offsetData[i+2] = xn;
offsetData[i+3] = 0;
xn++;
}
int num = 0;
for (int i = 0; i < arrsz; i++)
{
pixelData[i] = i;
}
sign.data[0] = 'B'; sign.data[1] = 'M';
header.fileSize = 0;
header.reserved1 = header.reserved2 = 0;
header.dataOffset = 0;
fout.seekp(0, ios::beg);
fout.write((char*)&sign, sizeof(sign));
fout.seekp(2, ios::beg);
fout.write((char*)&header, sizeof(header));
fout.seekp(14, ios::beg);
fout.write((char*)&infoheader, sizeof(infoheader));
fout.seekp(54, ios::beg);
fout.write((char*)offsetData, yn);
fout.write((char*)pixelData, arrsz);
fout.close();
delete[] pixelData;
delete[] offsetData;
return 0;
}
I have attached a screenshot of the created .bmp file in a hex editor, with the image-size field selected:
Bitmap image opened in hex editor
After replacing the contents of that field in the hex editor, the bitmap file can be viewed with an image viewer. I don't know what is wrong in this code.
So you want to write in BMP format? Remember that the compiler may insert padding in C++ POD structs. You may need to use a compiler pragma to pack the structs. Also make sure you use little-endian for all integers, but that should be OK since you are on Windows, presumably on x86.
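In this particular layout, BmpInfoHeader is the likely culprit: compression is declared as a 2-byte unsigned short followed by a 4-byte unsigned int, so the compiler may insert two padding bytes before ImageSize, and those uninitialized bytes are written straight into the file by fout.write((char*)&infoheader, sizeof(infoheader)). A sketch of packing the struct with MSVC-style pragmas (also accepted by GCC and Clang):

#pragma pack(push, 1) // lay the fields out with no padding bytes
struct BmpInfoHeader
{
    unsigned int size;
    unsigned int width;
    unsigned int height;
    unsigned short planes;
    unsigned short bitCount;
    unsigned short compression;
    unsigned int ImageSize;
    unsigned int XPixelsPerM;
    unsigned int YPixelsPerM;
    unsigned int ColoursUsed;
    unsigned int ColoursImp;
};
#pragma pack(pop)

Note also that in the on-disk BITMAPINFOHEADER the compression field is 4 bytes wide, so declaring it as unsigned int would both remove the padding and match the file format.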

Creating a generic payload class

My task is to place specified bits into an array of 8 bytes (not all 64 bits are used).
This can be done by using a struct:
struct Date {
    unsigned int nWeekDay  : 3;
    unsigned int nMonthDay : 6;
    unsigned int reserved1 : 10;
    unsigned int nMonth    : 5;
    unsigned int nYear     : 8;
};
I would like to write a generic class that takes a value, a start bit, and a length, and places the value at the correct position.
Could someone point me to an implementation of such a class/function?
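For reference, a minimal sketch of such a function (set_bits and its parameters are illustrative, not from an existing library; it assumes a little-endian machine, treating the 8-byte array as a 64-bit value with bit 0 being the least significant bit of the first byte):

#include <cstdint>
#include <cstring>

// Place `value` into the 8-byte buffer at bit positions [start, start + length).
void set_bits(uint8_t buf[8], unsigned start, unsigned length, uint64_t value)
{
    uint64_t word;
    std::memcpy(&word, buf, 8); // load the buffer as one 64-bit word
    uint64_t ones = (length < 64) ? ((uint64_t(1) << length) - 1) : ~uint64_t(0);
    uint64_t mask = ones << start;
    word = (word & ~mask) | ((value << start) & mask); // clear the field, then set it
    std::memcpy(buf, &word, 8); // store the word back
}

int main()
{
    uint8_t payload[8] = {0};
    set_bits(payload, 3, 6, 21); // e.g. the nMonthDay field above: 6 bits starting at bit 3
    return 0;
}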

Efficient Add of Values with Overflow Protection in C/C++

What is the most efficient way to add two scalars in C/C++ with overflow protection? For example, the sum of two unsigned chars is 255 if a + b >= 255.
I have:
unsigned char inline add_o(unsigned char x, unsigned char y)
{
    const short int maxVal = 255;
    unsigned short int s_short = (unsigned short int) x + (unsigned short int) y;
    unsigned char s_char = (s_short <= maxVal) ? (unsigned char) s_short : maxVal;
    return s_char;
}
that can be driven by:
unsigned char x = 200;
unsigned char y = 129;
unsigned char mySum = add_o(x,y);
I see some ideas here, but I am interested in the fastest way to perform this operation, or at least one that is highly palatable to an optimizing compiler.
Most modern compilers will generate branch-free code for your current solution, which is already fairly good. A few optimisations that are very hardware dependent (x86 in particular) are:
replace the comparison with a masked AND
turn the overflow protection into a conditional move.
This is how I would have done it:
unsigned char inline add_o(unsigned char x, unsigned char y) {
    unsigned short int s_short = (unsigned short int) x + (unsigned short int) y;
    if (s_short & 0xFF00)
        s_short = 0xFF;
    return s_short;
}
You mean unsigned saturating arithmetic?
unsigned char inline add_o(unsigned char x, unsigned char y) {
    unsigned char s = x + y;
    s |= (unsigned)(-(s < x)); // if the add wrapped, s < x is 1, so this ORs in all-ones and saturates at 255
    return s;
}
The most efficient way is to pre-fill a table with all possible results, then use the sum of x and y to index into that table.
#include <iostream>
#include <algorithm>

unsigned char add_o_results[255 + 255 + 1]; // indices 0..510 cover every possible x + y

void pre_fill() {
    for (int i = 0; i <= 255 + 255; ++i) {
        add_o_results[i] = std::min(i, 255);
    }
}

unsigned char inline add_o(unsigned char x, unsigned char y)
{
    return add_o_results[x + y];
}

using namespace std;

int main()
{
    pre_fill();
    cout << int(add_o(150, 151)) << endl;
    cout << int(add_o(10, 150)) << endl;
    return 0;
}
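On GCC and Clang there is also __builtin_add_overflow, which lets the compiler use the hardware carry flag directly; a sketch of a saturating add built on it (compiler-specific, not standard C++):

unsigned char inline add_o(unsigned char x, unsigned char y) {
    unsigned char s;
    if (__builtin_add_overflow(x, y, &s)) // true if x + y does not fit in an unsigned char
        s = 255;                          // saturate on overflow
    return s;
}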

How to read char array representing a pixel as unsigned int

I am writing a C++ command-line application that will apply the Haar transform to the pixels of a bmp image. I have successfully extracted the header information and determined the byte-array size for the pixels. After filling a char[pixelHeight][rowSizeInBytes] with the pixel data from the file, I read each pixel (24 bits for the bmp I'm using) into a vector. It works on my machine, but I would like to know whether my way of converting the char array representing a pixel into an unsigned int is safe and/or the idiomatic C++ way. I am assuming a little-endian architecture.
unsigned char pixelData[infoHeader->pixelHeight][rowSize];
fseek(pFile, basicHeader->pixelDataOffset, SEEK_SET);
fread(&pixelData, pixelArraySize, 1, pFile);
for (int row = 0; row < infoHeader->pixelHeight; row++)
{
    for (int i = 0; i < rowSize; i = i + 3)
    {
        unsigned char blue  = pixelData[row][i];
        unsigned char green = pixelData[row][i + 1];
        unsigned char red   = pixelData[row][i + 2];
        char vals[4];
        vals[0] = blue;
        vals[1] = green;
        vals[2] = red;
        vals[3] = '\0';
        unsigned int pixelVal = *((unsigned int *) vals);
        pixelVec.push_back(pixelVal);
    }
}
No, this is unidiomatic. You should code what you mean rather than relying on the endianness of the system. For example:
unsigned int pixelVal = static_cast<unsigned int>(blue) |
                        (static_cast<unsigned int>(green) << 8) |
                        (static_cast<unsigned int>(red) << 16);
This assumes your intention was to get a vector with specific values for unsigned integers. If your intention was to get a vector with specific bytes, you should use a vector of byte-sized structures, not unsigned integers.
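For the second case, a minimal sketch of such a byte-sized structure (the name Pixel is illustrative, not from the question):

#include <vector>

struct Pixel
{
    unsigned char blue;
    unsigned char green;
    unsigned char red;
};

std::vector<Pixel> pixelVec;
// inside the loop, instead of assembling an unsigned int:
// pixelVec.push_back(Pixel{blue, green, red}); // no casts and no endianness assumptions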