Following code crashes in x64 but not x86 build - C++

I've always used an XOR encryption class in my 32-bit applications, but recently I started working on a 64-bit one and encountered the following crash: https://i.stack.imgur.com/jCBlJ.png
Here's the xor class I'm using:
// xor.h
#pragma once

template <int XORSTART, int BUFLEN, int XREFKILLER>
class XorStr
{
private:
    XorStr();
public:
    char s[BUFLEN];
    XorStr(const char* xs);
    ~XorStr()
    {
        for (int i = 0; i < BUFLEN; i++) s[i] = 0;
    }
};

template <int XORSTART, int BUFLEN, int XREFKILLER>
XorStr<XORSTART, BUFLEN, XREFKILLER>::XorStr(const char* xs)
{
    int xvalue = XORSTART;
    int i = 0;
    for (; i < (BUFLEN - 1); i++)
    {
        s[i] = xs[i - XREFKILLER] ^ xvalue;
        xvalue += 1;
        xvalue %= 256;
    }
    s[BUFLEN - 1] = (2 * 2 - 3) - 1;
}
The crash occurs when I try to use the obfuscated string, but it doesn't necessarily happen 100% of the time (it never happens on 32-bit, however). Here's a small example of a 64-bit app that will crash on the second obfuscated string:
#include <iostream>
#include "xor.h"

int main()
{
    // no crash
    printf(/*123456789*/XorStr<0xDE, 10, 0x017A5298>("\xEF\xED\xD3\xD5\xD7\xD5\xD3\xDD\xDF" + 0x017A5298).s);
    // crash
    printf(/*123456*/XorStr<0xE3, 7, 0x87E64A05>("\xD2\xD6\xD6\xD2\xD2\xDE" + 0x87E64A05).s);
    return 0;
}
The same app runs perfectly fine when built as 32-bit.
Here's the HTML script to generate the obfuscated strings: https://pastebin.com/QsZxRYSH
I need to tweak this class to work on 64-bit, because I have a lot of already-encrypted strings that I need to import from a 32-bit project into the 64-bit one I'm working on at the moment. Any help is appreciated!

The access violation is because 0x87E64A05 is larger than the largest value a signed 32-bit integer can hold (which is 0x7FFFFFFF).
Because int is likely 32-bit, XREFKILLER cannot hold 0x87E64A05, so its value after the narrowing conversion is implementation-defined.
That value is later used to subtract from xs again, after the pointer you passed was artificially advanced by the literal 0x87E64A05. In that pointer arithmetic the literal is interpreted as long or long long (whichever is wide enough to fit the value), so there it is not narrowed to the implementation-defined value, and the two operations no longer cancel.
You are therefore left with effectively a random pointer in xs[i - XREFKILLER], and reading through it is undefined behavior, e.g. an access violation.
If compiled for 32-bit x86, it probably so happens that int and pointers have the same bit-size, and that the implementation-defined over-/underflow and narrowing behaviors are such that the addition and subtraction cancel as expected. If the pointer type is wider than 32 bits, this cannot work.
There is no point to XREFKILLER at all: it performs one calculation that is immediately reverted (when there is no over-/underflow).
Note that the fact that the compiler accepts the narrowing in the template argument at all is a bug. Your program is ill-formed and the compiler should give you an error message.
In GCC for example this bug persists until version 8.2, but has been fixed on current trunk (i.e. version 9).
You will have similar problems with XORSTART if char happens to be signed on your platform, because then your provided values won't fit into it. In that case you will have to enable warnings to see the problem, because that conversion does not make the program ill-formed. The behavior of ^ may also not be what you expect if char is signed on your system.
It is not clear what the point of
s[BUFLEN - 1] = (2 * 2 - 3) - 1;
is. It should be:
s[BUFLEN - 1] = '\0';
Passing the resulting string to printf as the first argument will lead to spurious undefined behavior if the resulting string happens to contain a %, which would be interpreted as introducing a format specifier. Use std::cout.
If you want to use printf, you need to write std::printf and #include <cstdio> to guarantee that it will be available. However, since this is C++, you should be using std::cout instead anyway.
More fundamentally your output string may happen to contain a 0 other than the terminating one after your transformation. This would be interpreted as end of the C-style string. This seems like a major design flaw and you probably want to use std::string instead for that reason (and because it is better style).
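Putting those fixes together, here is a minimal sketch of what the decoder could look like: no XREFKILLER, unsigned char arithmetic, a plain '\0' handling, and std::string for the result. The free-function form and names are illustrative choices of mine, not the original interface, and it assumes the strings keep the same XORSTART key stepping as before:

#include <cstddef>
#include <iostream>
#include <string>

template <unsigned char XORSTART, std::size_t BUFLEN>
std::string xorDecode(const char* xs)
{
    std::string out;
    out.reserve(BUFLEN - 1);
    unsigned char key = XORSTART;
    for (std::size_t i = 0; i < BUFLEN - 1; ++i)
    {
        // XOR on unsigned char avoids the signed-char pitfalls noted above.
        out.push_back(static_cast<char>(static_cast<unsigned char>(xs[i]) ^ key));
        ++key; // unsigned char wraps modulo 256 on its own
    }
    return out; // std::string also tolerates embedded zero bytes
}

int main()
{
    // Same payload as the crashing call above, minus the pointer games:
    std::cout << xorDecode<0xE3, 7>("\xD2\xD6\xD6\xD2\xD2\xDE") << '\n'; // prints 123456
    return 0;
}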

Related

const static char assignment with PROGMEM causes error in avr-g++ 5.4.0

I have a piece of code that was shipped as part of the XLR8 development platform, which formerly used a bundled version (4.8.1) of the avr-gcc/g++ compiler. I tried to use the latest version of avr-g++ included with my Linux distribution (Ubuntu 22.04), which is 5.4.0.
When running that compiler, I get the following error, which seems to make sense to me. Here is the error and the chunk of related code below. In the bundled version of avr-g++ that was provided with the XLR8 board, this was not an error. I'm not sure why, because the code below appears to be placing 16-bit words into an array of chars.
A couple of questions:
Can anyone explain the reason this worked with previous avr-gcc releases and was not considered an error?
Because of the use of sizeof in the snippet below to control the for loop's terminal count, I think the 16-bit size was supposed to be the data type per element of the array. Is that accurate?
If the size of the element was 16 bits, then is the correct fix simply to make that array of type unsigned int rather than char?
/home/rich/.arduino15/packages/alorium/hardware/avr/2.3.0/libraries/XLR8Info/src/XLR8Info.cpp:157:12: error: narrowing conversion of ‘51343u’ from ‘unsigned int’ to ‘char’ inside { } [-Wnarrowing]
0x38BF};
bool XLR8Info::hasICSPVccGndSwap(void) {
    // List of chip IDs from boards that have Vcc and Gnd swapped on the ICSP header
    // Chip ID of affected parts are 0x????6E00. Store the ???? part
    const static char cidTable[] PROGMEM =
        {0xC88F, 0x08B7, 0xA877, 0xF437,
         0x94BF, 0x88D8, 0xB437, 0x94D7, 0x38BF, 0x145F, 0x288F, 0x28CF,
         0x543F, 0x0837, 0xA8B7, 0x748F, 0x8477, 0xACAF, 0x14A4, 0x0C50,
         0x084F, 0x0810, 0x0CC0, 0x540F, 0x1897, 0x48BF, 0x285F, 0x8C77,
         0xE877, 0xE49F, 0x2837, 0xA82F, 0x043F, 0x88BF, 0xF48F, 0x88F7,
         0x1410, 0xCC8F, 0xA84F, 0xB808, 0x8437, 0xF4C0, 0xD48F, 0x5478,
         0x080F, 0x54D7, 0x1490, 0x88AF, 0x2877, 0xA8CF, 0xB83F, 0x1860,
         0x38BF};
    uint32_t chipId = getChipId();
    for (int i = 0; i < sizeof(cidTable)/sizeof(cidTable[0]); i++) {
        uint32_t cidtoTest = (cidTable[i] << 16) + 0x6E00;
        if (chipId == cidtoTest) {return true;}
    }
    return false;
}
As you already pointed out, the array type char definitely looks wrong. My guess is that this is a bug that may have never surfaced in the field: hasICSPVccGndSwap would always return false, so maybe they never used a chip type that had its pins swapped and got away with it.
Can anyone explain the reason this worked with previous avr-gcc releases and was not considered an error?
Yes, the error/warning behavior was changed with version 5.
As of G++ 5, the behavior is the following: When a later standard is in effect, e.g. when using -std=c++11, narrowing conversions are diagnosed by default, as required by the standard. A narrowing conversion from a constant produces an error, and a narrowing conversion from a non-constant produces a warning, but -Wno-narrowing suppresses the diagnostic.
I would've expected v4.8.1 to throw a warning at least, but maybe that has been ignored.
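For reference, the diagnostic is easy to reproduce in isolation. A minimal file of my own (not from the XLR8 sources) that triggers the same message when compiled with -std=c++11, while older standards merely allowed the implementation-defined conversion (possibly with a warning):

// narrow.cpp -- compile with: avr-g++ -std=c++11 -c narrow.cpp
char table[] = { 51343u };  // error: narrowing conversion of '51343u'
                            // from 'unsigned int' to 'char' inside { }

(51343 is 0xC88F, the first entry of cidTable.)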
Because of the use of sizeof in the snippet below to control the for loop terminal count, I think the 16 bit size was supposed to be the data type per element of the array. Is that accurate?
Yes, and this further supports that the array type should have been uint16_t in the first place.
If the size of the element was 16 bits, then is the correct fix simply to make that array of type unsigned int rather than char?
Yes.
Several bugs here. I am not familiar with that software, but there are at least the following obvious bugs:
The element type of cidTable should be a 16-bit, integral type like uint16_t. This follows from the code and also from the comments.
You cannot read from PROGMEM like that. The code will read from RAM, using a flash address to access RAM. Currently, there is only one way to read from flash in avr-g++, which is inline assembly; to make life easier, you can use macros from avr/pgmspace.h like pgm_read_word.
cidTable[i] << 16 is undefined behaviour because a 16-bit type is shifted left by 16 bits: the element is an 8-bit type, which is promoted to int, and on AVR int is only 16 bits. The same problem remains if the element type is made 16 bits wide.
Taking it all together, in order to make sense in avr-g++, the code would be something like:
#include <avr/pgmspace.h>

bool XLR8Info::hasICSPVccGndSwap()
{
    // List of chip IDs from boards that have Vcc and Gnd swapped on
    // the ICSP header. Chip ID of affected parts are 0x????6E00.
    // Store the ???? part.
    static const uint16_t cidTable[] PROGMEM =
    {
        0xC88F, 0x08B7, 0xA877, 0xF437, ...
    };

    uint32_t chipId = getChipId();
    for (size_t i = 0; i < sizeof(cidTable) / sizeof(*cidTable); ++i)
    {
        uint16_t cid = pgm_read_word(&cidTable[i]);
        uint32_t cidtoTest = ((uint32_t) cid << 16) + 0x6E00;
        if (chipId == cidtoTest)
            return true;
    }
    return false;
}

How can I convert this code from C to C++?

I am working on an embedded project using an mbed. The chip's manufacturer specifies a cyclic redundancy check using this lookup-table generator, but it's written in C.
Lookup Generator Code
///////////////////////configures CRC check lookup table////////////////////////
short pec15Table[256];
short CRC15_POLY = 0x4599; //CRC code
void configCRC(void)
{
    for (int i = 0; i < 256; i++)
    {
        remainder = i << 7;
        for (int bit = 8; bit > 0; --bit)
        {
            if (remainder & 0x4000)
            {
                remainder = ((remainder << 1));
                remainder = (remainder ^ CRC15_POLY);
            }
            else
            {
                remainder = ((remainder << 1));
            }
        }
        pec15Table[i] = remainder & 0xFFFF;
    }
}
I am not really good with C++ yet, so I just copied and pasted it and checked for obvious syntax errors. For example, I replaced the int16 declarations with short and unsigned short. But when I compile, it gives me the following error, which doesn't make sense to me. I am sure I'm doing something wrong.
Error: Cannot determine which instance of overloaded function "remainder" is intended in "config.cpp", Line: 20, Col: 10
Obviously you have a name collision with std::remainder. This is one of many reasons to avoid global variables. C and C++ should otherwise behave identically here.
Notably, though, this code is very naively written. Not only must the function be rewritten to properly take parameters, but the type usage is all over the place.
You should never do bit-wise arithmetic on signed types, because that opens up a lot of poorly-defined behavior. All "sloppy typing" types like short and int should be replaced with types from stdint.h, and you should only use unsigned types. You also need to be aware of implicit integer promotion.
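A sketch of what that rewrite could look like, going by the advice above: fixed-width unsigned types from <cstdint>, the table passed as a parameter, and a local variable instead of the colliding global. The signature and names are my own, not from the manufacturer's code:

#include <cstdint>

const std::uint16_t CRC15_POLY = 0x4599; // CRC-15 polynomial from the question

void configCRC(std::uint16_t (&pec15Table)[256])
{
    for (std::uint16_t i = 0; i < 256; i++)
    {
        // Local variable, so no collision with std::remainder is possible.
        std::uint16_t remainder = static_cast<std::uint16_t>(i << 7);
        for (int bit = 8; bit > 0; --bit)
        {
            if (remainder & 0x4000)
                remainder = static_cast<std::uint16_t>((remainder << 1) ^ CRC15_POLY);
            else
                remainder = static_cast<std::uint16_t>(remainder << 1);
        }
        pec15Table[i] = remainder; // already 16 bits, so masking is a no-op
    }
}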
Just rename the variable remainder to fremainder (or some other name, as you wish) and see the magic in compilation.
These kinds of issues come up because of not following any standard convention when naming variables.
Check this link to see why renaming the variable is required.

Pointer addition and integer overflow with Clang 5.0 and UBsan?

I'm trying to understand a problem we cleared recently when using Clang 5.0 and Undefined Behavior Sanitizer (UBsan). We have code that processes a buffer in the forward or backward direction. The reduced case is similar to the code shown below.
The 0-inc may look a little unusual, but it is needed for early Microsoft .Net compilers. Clang 5.0 and UBsan produced integer overflow findings:
adv-simd.h:1138:26: runtime error: addition of unsigned offset to 0x000003f78cf0 overflowed to 0x000003f78ce0
adv-simd.h:1140:26: runtime error: addition of unsigned offset to 0x000003f78ce0 overflowed to 0x000003f78cd0
adv-simd.h:1142:26: runtime error: addition of unsigned offset to 0x000003f78cd0 overflowed to 0x000003f78cc0
...
Lines 1138, 1140, 1142 (and friends) are the increment, which may stride backwards due to the 0-inc:
ptr += inc;
According to Pointer comparisons in C. Are they signed or unsigned? (which also discusses C++), pointers are neither signed nor unsigned. Our offsets were unsigned and we relied on unsigned integer wrap to achieve the reverse stride.
The code was fine under GCC UBsan, and under Clang 4 and earlier UBsan. We eventually cleared it for Clang 5.0 with help from the LLVM devs: instead of size_t we needed to use ptrdiff_t.
My question is, where was the integer overflow/undefined behavior in the construction? How did ptr + <unsigned> result in signed integer overflow and lead to undefined behavior?
Here is an MCVE that mirrors the real code.
#include <cstddef>
#include <cstdint>
using namespace std;

uint8_t buffer[64];

int main(int argc, char* argv[])
{
    uint8_t * ptr = buffer;
    size_t len = sizeof(buffer);
    size_t inc = 16;

    // This sets up processing the buffer in reverse.
    // A flag controls it in the real code.
    if (argc%2 == 1)
    {
        ptr += len - inc;
        inc = 0-inc;
    }

    while (len > 16)
    {
        // process blocks
        ptr += inc;
        len -= 16;
    }
    return 0;
}
The definition of adding an integer to a pointer is (N4659 expr.add/4):
    When an expression that has integral type is added to or subtracted from a pointer, the result has the type of the pointer operand. If the expression P points to element x[i] of an array object x with n elements, the expressions P + J and J + P (where J has the value j) point to the (possibly-hypothetical) element x[i + j] if 0 ≤ i + j ≤ n; otherwise, the behavior is undefined.

(The original answer reproduced this passage as an image to preserve the standard's formatting; the font distinction matters and is discussed below.)
Note that this is a new wording which replaces a less-clear description from previous standards.
In your code (when argc is odd) we end up with code equivalent to:
uint8_t buffer[64];
uint8_t *ptr = buffer + 48;
ptr = ptr + (SIZE_MAX - 15);
For the variables in the standard quote applied to your code, i is 48 and j is (SIZE_MAX - 15) and n is 64.
The question now is whether it is true that 0 ≤ i + j ≤ n. If we interpret "i + j" to mean the result of the expression i + j, then that equals 32 which is less than n. But if it means the mathematical result then it is much greater than n.
The standard uses the font for mathematical equations in this condition, not the font for source code, and ≤ is not a valid C++ operator, so I think they intend the condition to describe the mathematical values, i.e. this is undefined behaviour.
The C standard defines the type ptrdiff_t as the type yielded by the pointer-difference operator. It would be possible for a system to have a 32-bit size_t and a 64-bit ptrdiff_t; such definitions would be a natural fit for a system which used 64-bit linear or quasi-linear pointers but required individual objects to be less than 4GiB each.
If objects are known to be less than 2GiB each, storing values of type ptrdiff_t rather than size_t might make the program needlessly inefficient. In such a scenario, however, code should not use size_t to hold pointer differences that might be negative, but instead use int32_t [which will be large enough if objects are less than 2GiB each]. Even if ptrdiff_t is 64 bits, a value of type int32_t will be properly sign-extended before it is added to or subtracted from any pointer.
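To make the fix the question describes (size_t → ptrdiff_t) concrete, here is the MCVE reworked with a signed stride; a sketch that mirrors the question's loop rather than the real adv-simd.h code:

#include <cstddef>
#include <cstdint>

uint8_t buffer[64];

int main(int argc, char* argv[])
{
    uint8_t* ptr = buffer;
    size_t len = sizeof(buffer);
    ptrdiff_t inc = 16;   // signed stride instead of size_t

    if (argc % 2 == 1)
    {
        ptr += len - 16;
        inc = -inc;       // a genuine negative stride; no unsigned wrap needed
    }

    while (len > 16)
    {
        // ptr plus a signed negative offset stays inside the array,
        // so the addition is well-defined under expr.add/4.
        ptr += inc;
        len -= 16;
    }
    return 0;
}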

comparison between signed and unsigned integer expressions and 0x80000000

I have the following code:
#include <iostream>
using namespace std;

int main()
{
    int a = 0x80000000;
    if(a == 0x80000000)
        a = 42;
    cout << "Hello World! :: " << a << endl;
    return 0;
}
The output is
Hello World! :: 42
so the comparison works. But the compiler tells me
g++ -c -pipe -g -Wall -W -fPIE -I../untitled -I. -I../bin/Qt/5.4/gcc_64/mkspecs/linux-g++ -o main.o ../untitled/main.cpp
../untitled/main.cpp: In function 'int main()':
../untitled/main.cpp:8:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if(a == 0x80000000)
^
So the question is: Why is 0x80000000 an unsigned int? Can I make it signed somehow to get rid of the warning?
As far as I understand, 0x80000000 would be INT_MIN, as it's out of range for a positive int. But why is the compiler assuming that I want a positive number?
I'm compiling with gcc version 4.8.1 20130909 on linux.
0x80000000 is an unsigned int because the value is too big to fit in an int, and you did not add any L suffix to specify it was a long.
The warning is issued because unsigned types in C/C++ have quite weird semantics, and it is therefore very easy to make mistakes by mixing up signed and unsigned integers. This mixing is often a source of bugs, especially because the standard library, by historical accident, chose an unsigned type for the size of containers (size_t).
An example I often use to show how subtle the problem is: consider
// Draw connecting lines between the dots
for (int i=0; i<pts.size()-1; i++) {
    draw_line(pts[i], pts[i+1]);
}
This code seems fine but has a bug. If the pts vector is empty, pts.size() is 0 but, and here comes the surprising part, pts.size()-1 is a huge nonsense number (often 4294967295 today, but it depends on the platform) and the loop will use invalid indexes (with undefined behavior).
Here, changing the variable to size_t i will remove the warning, but it is not going to help, as the very same bug remains...
The core of the problem is that with unsigned values a < b-1 and a+1 < b are not the same thing even for very commonly used values like zero; this is why using unsigned types for non-negative values like container size is a bad idea and a source of bugs.
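One safe rewrite (my own illustration, not from the answer) moves the arithmetic to the side where it cannot wrap:

// i + 1 < pts.size() never underflows, so an empty vector is handled
// correctly; compare with the buggy i < pts.size() - 1 above.
for (size_t i = 0; i + 1 < pts.size(); i++) {
    draw_line(pts[i], pts[i+1]);
}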
Also note that your code is not correct, portable C++ on platforms where that value doesn't fit in an int, because the behavior around overflow is defined for unsigned types but not for regular integers: C++ code that relies on what happens when an integer goes past its limits has undefined behavior.
Even if you know what happens on a specific hardware platform note that the compiler/optimizer is allowed to assume that signed integer overflow never happens: for example a test like a < a+1 where a is a regular int can be considered always true by a C++ compiler.
It seems you are confusing two different issues: the encoding of something and the meaning of something. Here is an example: you see the number 97. This is a decimal encoding. But the meaning of this number is something completely different: it can denote the ASCII character 'a', a very hot temperature, a geometric angle in a triangle, etc. You cannot deduce meaning from encoding; someone must supply a context to you (like the ASCII map, temperature, etc.).
Back to your question: 0x80000000 is encoding, while INT_MIN is meaning. They are not interchangeable and not comparable. On specific hardware, in some contexts, they might be equal, just like 97 and 'a' are equal in the ASCII context.
The compiler warns you about ambiguity in the meaning, not in the encoding. One way to give meaning to a specific encoding is the casting operator, like (unsigned short)-17 or (student*)ptr.
On a 32-bit system, or a 64-bit one where int stays 32 bits for backward compatibility, int and unsigned int have 32-bit encodings like 0x80000000; but on a system where int is 64 bits, INT_MIN would not be equal to this number.
Anyway - the answer to your question: in order to remove the warning you must give identical context to both left and right expressions of the comparison.
You can do it in many ways. For example:
(unsigned int)a == (unsigned int)0x80000000 or (__int64)a == (__int64)0x80000000 or even a crazy (char *)a == (char *)0x80000000 or any other way as long as you maintain the following rules:
You don't demote the encoding (do not reduce the number of bits it requires). (char)a == (char)0x80000000, for example, is incorrect because it demotes 32 bits into 8 bits.
You must give both the left side and the right side of the == operator the same context. (char *)a == (unsigned short)0x80000000, for example, is incorrect and will yield an error/warning.
I want to give you another example of how crucial the difference between encoding and meaning is. Look at this code:
char a = -7;
bool b = (a==-7) ? true : false;
What is the result of b? The answer may shock you: it is implementation-defined.
Some compilers (typically Microsoft Visual Studio) will compile a program in which b gets true, while with the Android NDK compilers b will get false.
The reason is that the Android NDK treats the plain char type as unsigned char, while Visual Studio treats char as signed char. So on Android phones the encoding of -7 actually has the meaning 249 and is not equal to the meaning of (int)-7.
The correct way to fix this problem is to specifically define 'a' as signed char:
signed char a = -7;
bool b = (a==-7) ? true : false;
0x80000000 is considered unsigned by default.
You can avoid the warning like this:
if (a == (int)0x80000000)
    a = 42;
Edit after a comment:
Another (perhaps better) way would be
if ((unsigned)a == 0x80000000)
    a = 42;

A warning - comparison between signed and unsigned integer expressions

I am currently working through Accelerated C++ and have come across an issue in exercise 2-3.
A quick overview of the program: it basically takes a name, then displays a greeting within a frame of asterisks, i.e. Hello ! framed by *'s.
The exercise - In the example program, the authors use const int to determine the padding (blank spaces) between the greeting and the asterisks. They then ask the reader, as part of the exercise, to ask the user for input as to how big they want the padding to be.
All this seems easy enough: I go ahead and ask the user for two integers (int), store them, and change the program to use these integers, removing the ones used by the author. When compiling, though, I get the following warning:
Exercise2-3.cpp:46: warning: comparison between signed and unsigned integer expressions
After some research it appears to be because the code attempts to compare one of the above integers (int) to a string::size_type, which is fine. But I was wondering - does this mean I should change one of the integers to unsigned int? Is it important to explicitly state whether my integers are signed or unsigned?
cout << "Please enter the size of the frame between top and bottom you would like ";
int padtopbottom;
cin >> padtopbottom;
cout << "Please enter size of the frame from each side you would like: ";
unsigned int padsides;
cin >> padsides;
string::size_type c = 0; // definition of c in the program
if (r == padtopbottom + 1 && c == padsides + 1) { // where the error occurs
Above are the relevant bits of code. The c is of type string::size_type because we do not know how long the greeting might be. But why do I get this problem now, when the author's code didn't have the problem when using const int? In addition, to anyone who may have completed Accelerated C++: will this be explained later in the book?
I am on Linux Mint using g++ via Geany, if that helps or makes a difference (as I read that it could when determining what string::size_type is).
It is usually a good idea to declare variables as unsigned or size_t if they will be compared to sizes, to avoid this issue. Whenever possible, use the exact type you will be comparing against (for example, use std::string::size_type when comparing with a std::string's length).
Compilers give warnings about comparing signed and unsigned types because the ranges of signed and unsigned ints are different, and when they are compared to one another, the results can be surprising. If you have to make such a comparison, you should explicitly convert one of the values to a type compatible with the other, perhaps after checking to ensure that the conversion is valid. For example:
unsigned u = GetSomeUnsignedValue();
int i = GetSomeSignedValue();

if (i >= 0)
{
    // i is nonnegative, so it is safe to cast to unsigned value
    if ((unsigned)i >= u)
        iIsGreaterThanOrEqualToU();
    else
        iIsLessThanU();
}
else
{
    iIsNegative();
}
I had the exact same problem yesterday working through problem 2-3 in Accelerated C++. The key is to change all variables you will be comparing (using Boolean operators) to compatible types. In this case, that means string::size_type (or unsigned int, but since this example is using the former, I will just stick with that even though the two are technically compatible).
Notice that in their original code they did exactly this for the c counter (page 30 in Section 2.5 of the book), as you rightly pointed out.
What makes this example more complicated is that the different padding variables (padsides and padtopbottom), as well as all counters, must also be changed to string::size_type.
Getting to your example, the code that you posted would end up looking like this:
cout << "Please enter the size of the frame between top and bottom";
string::size_type padtopbottom;
cin >> padtopbottom;
cout << "Please enter size of the frame from each side you would like: ";
string::size_type padsides;
cin >> padsides;
string::size_type c = 0; // definition of c in the program
if (r == padtopbottom + 1 && c == padsides + 1) { // where the error no longer occurs
Notice that in the previous conditional, you would get the error if you didn't initialize variable r as a string::size_type in the for loop. So you need to initialize the for loop using something like:
for (string::size_type r=0; r!=rows; ++r) //If r and rows are string::size_type, no error!
So, basically, once you introduce a string::size_type variable into the mix, any time you want to perform a boolean operation on that item, all operands must have a compatible type for it to compile without warnings.
The important difference between signed and unsigned ints is the interpretation of the most significant bit. In signed types it represents the sign of the number, meaning, e.g., that 0001 is 1 both signed and unsigned, while 1001 is -1 signed and 9 unsigned. (I avoided the whole two's-complement issue for clarity of explanation! This is not exactly how ints are represented in memory!)
You can imagine that it makes a difference to know whether you are comparing with -1 or with +9. In many cases, programmers are just too lazy to declare counting ints as unsigned (it bloats the for loop head, for instance). It is usually not an issue, because with ints you have to count up to 2^31 before your sign bit bites you. That's why it is only a warning: because we are too lazy to write 'unsigned' instead of 'int'.
At the extreme ranges, an unsigned int can hold values larger than any int.
Therefore, the compiler generates a warning. If you are sure that this is not a problem, feel free to cast the types to the same type so the warning disappears (use a C++ cast so that it is easy to spot).
Alternatively, make the variables the same type to stop the compiler from complaining.
I mean, is it possible to have negative padding? If so, then keep it as an int. Otherwise you should probably use unsigned int and let the stream catch the situations where the user types in a negative number.
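One way to implement that (a sketch of my own; the names and messages are illustrative): read into a signed int, reject negatives explicitly, and only then convert to the unsigned type used in the comparisons:

#include <iostream>
#include <string>

int main()
{
    int padsides = -1;
    std::cout << "Please enter size of the frame from each side you would like: ";
    // Reading into a signed int lets us catch "-3" with our own check
    // instead of letting it wrap to a huge unsigned value.
    while (!(std::cin >> padsides) || padsides < 0)
    {
        std::cin.clear();            // reset the failed-extraction state
        std::cin.ignore(1024, '\n'); // discard the rest of the bad line
        std::cout << "Please enter a non-negative number: ";
    }
    // Safe now: the value is non-negative, so converting it to the
    // unsigned string::size_type cannot change its value.
    std::string::size_type pad = static_cast<std::string::size_type>(padsides);
    std::cout << "pad = " << pad << '\n';
    return 0;
}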
The primary issue is that the underlying hardware, the CPU, only has instructions to compare two signed values or to compare two unsigned values. If you pass the unsigned comparison instruction a signed, negative value, it will be treated as a large positive number. So -1, the bit pattern with all bits on (two's complement), becomes the maximum unsigned value for the same number of bits.
8-bits: -1 signed is the same bits as 255 unsigned
16-bits: -1 signed is the same bits as 65535 unsigned
etc.
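A quick demonstration of those bit patterns (my own snippet, not from the answer):

#include <cstdio>
#include <cstdint>

int main()
{
    std::int8_t  s8  = -1;
    std::int16_t s16 = -1;
    // Casting to the same-width unsigned type reinterprets the bit pattern.
    std::printf("%u\n", (unsigned)(std::uint8_t)s8);   // prints 255
    std::printf("%u\n", (unsigned)(std::uint16_t)s16); // prints 65535
    return 0;
}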
So, if you have the following code:
int fd;
fd = open( .... );
int cnt;
SomeType buf;
cnt = read( fd, &buf, sizeof(buf) );
if( cnt < sizeof(buf) ) {
    perror("read error");
}
you will find that if the read(2) call fails due to the file descriptor becoming invalid (or some other error), cnt will be set to -1. When compared to sizeof(buf), an unsigned value, the if() statement will be false, because 0xffffffff is not less than the sizeof() of any reasonable (not concocted to be max size) data structure.
Thus, to remove the signed/unsigned warning, you have to write the above if as:
if( cnt < 0 || (size_t)cnt < sizeof(buf) ) {
    perror("read error");
}
This just speaks loudly to the problems:
1. The introduction of size_t and other such datatypes was crafted to mostly work, rather than engineered, with language changes, to be explicitly robust and foolproof.
2. Overall, C/C++ data types should just be signed, as Java correctly implemented.
If you have values so large that you cannot find a signed value type that works, you are using too small a processor or too large a magnitude of values in your language of choice. If, like with money, every digit counts, there are systems in most languages which provide you infinite digits of precision. C/C++ just doesn't do this well, and you have to be very explicit about everything around types, as mentioned in many of the other answers here.
Or use this header library and write:
// |notEqual|less|lessEqual|greater|greaterEqual
if(sweet::equal(valueA,valueB))
and don't care about signed/unsigned or different sizes.