Wrong reinterpretation of nullptr in a Qt 5.15-based project

Both the GCC (version 12.2) and Clang (version 14.0) compilers interpret nullptr as a 32-bit integer (int) in some places, and this causes errors.
For example, in the qhashfunctions.h file there is a variant of the qHash function that takes nullptr as its main argument.
Q_DECL_CONST_FUNCTION inline uint qHash(std::nullptr_t, uint seed = 0) Q_DECL_NOTHROW
{
    return qHash(reinterpret_cast<quintptr>(nullptr), seed);
}
Both compilers report that int cannot be cast to quintptr (unsigned long long).
The second example has the same problem. In the std_thread.h file there is the following code:
_M_start_thread(_State_ptr(new _State_impl<_Wrapper>(
std::forward<_Callable>(__f), std::forward<_Args>(__args)...)),
__depend);
}
Earlier in this file, __depend is declared as nullptr (auto __depend = nullptr;). This means the function pointer (second argument) is nullptr. Both compilers report that a parameter of type void (*)() cannot be initialised with a value of type int.
What is the solution to the problem?
I use Arch Linux (x86-64) with the latest updates and Qt version 5.15.8. Building is done through Qt Creator.
I also tried to create another Qt-based project and write the following code in the main function:
unsigned long long i = reinterpret_cast<unsigned long long>(nullptr);
Compilation succeeded...

Related

const static char assignment with PROGMEM causes error in avr-g++ 5.4.0

I have a piece of code that was shipped as part of the XLR8 development platform, which formerly used a bundled version (4.8.1) of the avr-gcc/g++ compiler. I tried to use the latest version of avr-g++ included with my Linux distribution (Ubuntu 22.04), which is 5.4.0.
When running that compiler, I get the following error, which seems to make sense to me. Here are the error and the chunk of related code below. In the bundled version of avr-g++ that was provided with the XLR8 board, this was not an error. I'm not sure why, because it appears the code below is attempting to place 16-bit words into an array of chars.
A couple of questions:
Can anyone explain the reason this worked with previous avr-gcc releases and was not considered an error?
Because of the use of sizeof in the snippet below to control the for loop's terminal count, I think a 16-bit type was supposed to be the element type of the array. Is that accurate?
If the size of the element was 16 bits, then is the correct fix simply to make that array of type unsigned int rather than char?
/home/rich/.arduino15/packages/alorium/hardware/avr/2.3.0/libraries/XLR8Info/src/XLR8Info.cpp:157:12: error: narrowing conversion of ‘51343u’ from ‘unsigned int’ to ‘char’ inside { } [-Wnarrowing]
0x38BF};
bool XLR8Info::hasICSPVccGndSwap(void) {
// List of chip IDs from boards that have Vcc and Gnd swapped on the ICSP header
// Chip ID of affected parts are 0x????6E00. Store the ???? part
const static char cidTable[] PROGMEM =
{0xC88F, 0x08B7, 0xA877, 0xF437,
0x94BF, 0x88D8, 0xB437, 0x94D7, 0x38BF, 0x145F, 0x288F, 0x28CF,
0x543F, 0x0837, 0xA8B7, 0x748F, 0x8477, 0xACAF, 0x14A4, 0x0C50,
0x084F, 0x0810, 0x0CC0, 0x540F, 0x1897, 0x48BF, 0x285F, 0x8C77,
0xE877, 0xE49F, 0x2837, 0xA82F, 0x043F, 0x88BF, 0xF48F, 0x88F7,
0x1410, 0xCC8F, 0xA84F, 0xB808, 0x8437, 0xF4C0, 0xD48F, 0x5478,
0x080F, 0x54D7, 0x1490, 0x88AF, 0x2877, 0xA8CF, 0xB83F, 0x1860,
0x38BF};
uint32_t chipId = getChipId();
for (int i=0;i< sizeof(cidTable)/sizeof(cidTable[0]);i++) {
uint32_t cidtoTest = (cidTable[i] << 16) + 0x6E00;
if (chipId == cidtoTest) {return true;}
}
return false;
}
As you already pointed out, the array type char definitely looks wrong. My guess is that this is a bug that may have never surfaced in the field. hasICSPVccGndSwap should always return false, so maybe they never used a chip type that had its pins swapped and got away with it.
Can anyone explain the reason this worked with previous avr-gcc releases and was not considered an error?
Yes, the error/warning behavior was changed with version 5.
As of G++ 5, the behavior is the following: When a later standard is in effect, e.g. when using -std=c++11, narrowing conversions are diagnosed by default, as required by the standard. A narrowing conversion from a constant produces an error, and a narrowing conversion from a non-constant produces a warning, but -Wno-narrowing suppresses the diagnostic.
I would've expected v4.8.1 to throw a warning at least, but maybe that has been ignored.
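To illustrate the quoted behaviour, here is a minimal snippet (not from the XLR8 code) that could be compiled with g++ -std=c++11:
int main()
{
    unsigned int v = 0xC88F;
    // char a[] = { 0xC88F };      // narrowing from a constant out of range: error since GCC 5
    char b[] = { v };              // narrowing from a non-constant: warning (-Wnarrowing) by default
    char c[] = { (char)0xC88F };   // explicit cast: no narrowing diagnostic
    (void)b; (void)c;
    return 0;
}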
Because of the use of sizeof in the snippet below to control the for loop terminal count, I think the 16 bit size was supposed to be the data type per element of the array. Is that accurate?
Yes, this further supports that the array type should have been uint16_t in the first place.
If the size of the element was 16 bits, then is the correct fix simply to make that array of type int rather than char?
Yes.
Several bugs here. I am not familiar with that software, but there are at least the following obvious bugs:
The element type of cidTable should be a 16-bit, integral type like uint16_t. This follows from the code and also from the comments.
You cannot read from PROGMEM like that. The code will read from RAM, using a flash address to access RAM. Currently, the only way to read from flash in avr-g++ is inline assembly; to make life easier, you can use macros from avr/pgmspace.h like pgm_read_word.
cidTable[i] << 16 is undefined behaviour because a 16-bit type is shifted left by 16 bits: the element is an 8-bit type, which is promoted to int, and int is only 16 bits wide on AVR. The same problem remains if the element type is already 16 bits wide.
Taking it all together, in order to make sense in avr-g++, the code would be something like:
#include <avr/pgmspace.h>
bool XLR8Info::hasICSPVccGndSwap()
{
// List of chip IDs from boards that have Vcc and Gnd swapped on
// the ICSP header. Chip ID of affected parts are 0x????6E00.
// Store the ???? part.
static const uint16_t cidTable[] PROGMEM =
{
0xC88F, 0x08B7, 0xA877, 0xF437, ...
};
uint32_t chipId = getChipId();
for (size_t i = 0; i < sizeof(cidTable) / sizeof (*cidTable); ++i)
{
uint16_t cid = pgm_read_word (&cidTable[i]);
uint32_t cidtoTest = ((uint32_t) cid << 16) + 0x6E00;
if (chipId == cidtoTest)
return true;
}
return false;
}

Why is the result of std::size not compile time OR not size_t?

On Visual C++ 2019:
The following code produces the warning:
warning C4267: 'argument': conversion from 'size_t' to 'DWORD', possible loss of data
HANDLE events[2];
WaitForMultipleObjects(std::size(events), events, FALSE, INFINITE);
But using _countof(events) doesn't give any warning. Note that std::size's template overload for arrays is the one that gets called.
This one:
template<class _Ty,
size_t _Size> inline
constexpr size_t size(const _Ty(&)[_Size]) _NOEXCEPT
{ // get dimension for array
return (_Size);
}
Which essentially returns a size_t, and the function is constexpr. That's why the array declaration works:
HANDLE Events[2];
int arr[std::size(Events)];
But the following code won't compile without a warning:
DWORD sz1 = std::size(Events);
This is okay:
DWORD sz2= _countof(Events);
Any specific reason, or a compiler bug?
Relevant:
What is the return type of sizeof operator?
EDIT: Interestingly, these also work fine:
HANDLE Events[2];
constexpr size_t s1 = sizeof(Events) / sizeof(Events[0]);
constexpr size_t s2 = std::size(Events);
The variables s1 and s2 are taken as true compile-time values, but not std::size()'s result itself!
If you read the warning message, it's a complaint about converting from the type size_t (the result of std::size(Events)) to DWORD (the type of sz1).
The problem is that on a 64-bit system size_t is typically a 64-bit unsigned integer type, but Windows defines DWORD as a 32-bit unsigned integer type.
That the use of _countof doesn't generate a warning might be because of implementation-specific behavior of the MSVC compiler.
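One way to avoid the warning, sketched below, is to make the conversion explicit at the call site. This assumes the handle count is known to fit in 32 bits (it is trivially 2 here), uses C++17's std::size from <iterator>, and the helper name waitAll is only illustrative:
#include <Windows.h>
#include <iterator>

void waitAll(HANDLE (&events)[2])
{
    // Explicit, intentional narrowing: std::size returns size_t, the API wants DWORD.
    WaitForMultipleObjects(static_cast<DWORD>(std::size(events)),
                           events, FALSE, INFINITE);
}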
DWORD is always 32-bit unsigned on Windows.
size_t is typically a 64-bit unsigned long long with a 64-bit compiler. Switch your build to 32-bit and it's a 32-bit unsigned int.
Assigning a 64-bit integer to a 32-bit one - yep, that's a warning condition.
What's weird is this:
WaitForMultipleObjects(sizeof(events) / sizeof(events[0]), events, FALSE, INFINITE);
Compiles without issue. I'm guessing it's because the compiler can see that this constant expression reduces to a value that fits in an unsigned int or smaller.
But this:
auto count = sizeof(events) / sizeof(events[0]);
WaitForMultipleObjects(count, events, FALSE, INFINITE);
Generates a nearly identical warning, since count is deduced as a 64-bit unsigned long long.
But this will also compile without a warning, presumably because the const lets the compiler treat count as the constant 2, which fits in a DWORD:
const auto count = sizeof(events) / sizeof(events[0]);
WaitForMultipleObjects(count, events, FALSE, INFINITE);

How to force const propagation through an inline function?

I'm trying to coerce the pre-processor to perform some math for me so a constant gets propagated into inline assembly. Here's the reduced case:
inline
unsigned int RotateRight(unsigned char value, unsigned int amount)
{
    COMPILE_ASSERT(((unsigned char)(amount%32)) < 32);
    __asm__ ("rorb %1, %0" : "+mq" (value) : "I" ((unsigned char)(amount%32)));
    return value;
}
The code above relies upon CPU-specific functionality, and I'm OK with that (it's actually a template specialization on x86/x64 Linux when GCC is available). The "I" constraint says the integral value must be in [0,31] inclusive.
Callers of the code would look like:
byte b1 = RotateRight(1, 1);
byte b2 = RotateRight(1, 31);
A RotateRight(1, 31) comes from the cryptographers (it's undefined behavior in C/C++ because a byte can only be shifted in the range [0,7]). I can break free from the C/C++ constraints using ASM. And since the shift amount is known at compile time, I'd like it to be reduced at compile time, and I'd like the rorb version that uses the immediate-8 operand to be generated.
Without the COMPILE_ASSERT, the code compiles but I'm not sure if the constant is being propagated. That is, it might be generated with an unexpected reduction (% 32). With the COMPILE_ASSERT, the code fails to compile.
$ make validat1.o
g++ -DNDEBUG -g2 -O3 -march=native -pipe -c validat1.cpp
In file included from simple.h:10:0,
from filters.h:6,
from files.h:5,
from validat1.cpp:6:
misc.h: In function ‘T CryptoPP::rotlFixed(T, unsigned int) [with T = unsigned char]’:
misc.h:940:43: error: ‘y’ cannot appear in a constant-expression
CRYPTOPP_COMPILE_ASSERT(((unsigned char)(y%32)) < 32);
^
misc.h:72:85: note: in definition of macro ‘CRYPTOPP_COMPILE_ASSERT_INSTANCE’
_COMPILE_ASSERT_INSTANCE(assertion, instance) static CompileAssert<(assertion)>
I know I'm not supposed to use a #define, and C++ inline functions are the answer. But I feel like I'm suffering a disconnect.
How do I force the compiler to propagate the value that involves const values?
Or, if the COMPILE_ASSERT is the wrong tool (const is being propagated), how do I set up a test so that I can verify the immediate-8 version of the rorb is used?
Related: this is a C++03 project. It does not use Boost, CMake, Autotools, etc.
When you pass amount as a function argument, you lose its compile-time constness.
Why don't you declare amount as a template argument? In that case the caller is also forced to pass a compile-time constant, which is good too.
To ensure that the shift is used as a compile-time constant, you can create a static const local variable.
template<unsigned int amount> inline
unsigned int RotateRight(unsigned char value)
{
    static const unsigned char shift = (unsigned char)(amount%32);
    __asm__ ("rorb %1, %0" : "+mq" (value) : "I" (shift));
    return value;
}
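With that change, the call sites from the question would look something like this (a sketch, reusing the names from the question):
unsigned int b1 = RotateRight<1>(1);    // rotate amount is now a template argument
unsigned int b2 = RotateRight<31>(1);   // so it is a compile-time constant by construction
To check that the immediate-8 form is actually emitted, you can inspect the generated assembly, for example by compiling with -O2 -S and looking for a rorb instruction with an immediate operand such as $31.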

Passing pointer of unsigned int to pointer of long int

I have sample code that works properly on a 32-bit system, but when I cross-compile it for a 64-bit system and run it on a 64-bit machine, it behaves differently.
Can anyone tell me why this is happening?
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

void func(time_t * inputArg)
{
    printf("%ld\n", *inputArg);
}

int main()
{
    unsigned int input = 123456;
    func((time_t *)&input);
}
Here "time_t" is a type defined in linux system library header file which is of type "long int".
This code is working fine with a 32-bit system but it isn't with 64-bit.
For 64-bit I have tried this:
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

void func(time_t * inputArg)
{
    printf("%ld\n", *inputArg);
}

int main()
{
    unsigned int input = 123456;
    time_t tempVar = (time_t)input;
    func(&tempVar);
}
This works fine, but I have used the first method throughout my application a number of times. Any alternative solutions would be appreciated.
can anyone tell me why this is happening?
Dereferencing a pointer to an integer type whose size differs from that of the pointed-to object has undefined behaviour.
If the pointed-to integer is smaller than the pointer's pointed-to type, you will read unrelated bytes as part of the dereferenced number.
Your fix works because you pass a pointer to an object of the proper type, but consider that your input cannot represent all of the values that a time_t can.
The best fix is to use the proper type from the start: make the input a time_t.
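A minimal sketch of that fix, with the variable declared as time_t from the start (printing through long long only so the format specifier works on both 32- and 64-bit builds):
#include <stdio.h>
#include <time.h>

void func(time_t *inputArg)
{
    /* cast for printing so the format specifier matches on any width of time_t */
    printf("%lld\n", (long long)*inputArg);
}

int main(void)
{
    time_t input = 123456;   /* proper type from the start, no pointer cast needed */
    func(&input);
    return 0;
}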
Your "fixed" code has a cast that lets the compiler convert your unsigned int value to a time_t. Your original code assumes that they're identical.
On your 32-bit system they are identical, so you get lucky. On your 64-bit system you find out what happens when you invoke Undefined Behavior.
In other words, both C and C++ allow you to cast pointers to whatever you want, but it's up to you to make sure such casts are safe.
Thank you for the response.
Actually, I found my mistake when I printed sizeof(long int) on the 32-bit and 64-bit machines.
32-bit machine: sizeof(int) = 32 bits, sizeof(long int) = 32 bits, sizeof(long long int) = 64 bits
64-bit machine: sizeof(int) = 32 bits, sizeof(long int) = 64 bits, sizeof(long long int) = 64 bits

Inline float to uint "representation" not working?

This is in C, but I tagged it C++ in case it's the same. This is being built with:
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 14.00.50727.220 for 80x86
if that makes any difference.
Why does this work?
(inVal is 0x80)
float flt = (float) inVal;
outVal = *((unsigned long*)&flt);
(results in outVal being 0x43000000 -- correct)
But this doesn't?
outVal = *((unsigned long*)&((float)inVal));
(results in outVal being 0x00000080 -- NOT CORRECT :( )
Before asking this question I googled around a bit and found a function in Java that basically does what I want. If you're a bit confused about what I'm trying to do, this program might help explain it:
class hello
{
public static void main(String[] args)
{
int inside = Float.floatToIntBits(128.0f);
System.out.printf("0x%08X", inside);
}
}
You're trying to take the address of a non-const temporary (the result of your (float) conversion) – this is illegal in C++ (and probably also in C). Hence, your code results in garbage.
In your first, working snippet you're not using a temporary, so the code works. Notice that from a standards point of view this is still ill-defined, since the size and internal representation of the involved types aren't specified and may differ depending on platform and compiler. You're probably safe, though.
In C99, you may use compound literals to make this work inline:
unsigned long outVal = *((unsigned long *)&((float){ inVal }));
The literal (float){ inVal } will create a variable with automatic storage duration (i.e. stack-allocated) with a well-defined address.
Type punning may also be done using unions instead of pointer casts. Using compound literals and the non-standard __typeof__ operator, you can even do some macro magic:
#define wicked_cast(TYPE, VALUE) \
(((union { __typeof__(VALUE) src; TYPE dest; }){ .src = VALUE }).dest)
unsigned long outVal = wicked_cast(unsigned long, (float)inVal);
GCC prefers unions over pointer casts with regard to optimization. This might not work at all with the MS compiler, as its C99 support is rumored to be non-existent.
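For completeness, a memcpy-based sketch also avoids the temporary-address problem without relying on C99; it assumes float and unsigned long are both 32 bits wide, as on the 32-bit MSVC target in the question, and the helper name float_bits is only illustrative:
#include <string.h>

unsigned long float_bits(float f)
{
    unsigned long out;
    memcpy(&out, &f, sizeof out);   /* copy the float's object representation */
    return out;
}
/* usage: outVal = float_bits((float)inVal); */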
Assuming inVal and outVal are parameters:
void func(int inVal, unsigned long* outVal)
{
    float flt = (float) inVal;          // named lvalue, so taking its address is valid
    *outVal = *((unsigned long*)&flt);  // reinterpret the float's bits, as in the
                                        // questioner's first (working) snippet
}