Due to some libraries, I have to compile my application in 32 bit, but I need to use integer variables that exceed the maximum value of the 32-bit types. For example, if I try to use uint64_t I get an overflow at 2147483647.
I thought it was possible to use 64-bit integer variables in a 32-bit application, so what am I missing here? Do I have to include a specific header or set some compiler option for this? Using VS2017.
EDIT:
I did some testing, and in this example program, I can reproduce my overflow problem.
#include <iostream>

int main()
{
    uint64_t i = 0;
    while (true)
    {
        std::printf("%d\n", i);
        i += (uint64_t)10000;
    }
    return 0;
}
The bug is here:
std::printf("%d\n",i);
^^
You've used the wrong format specifier, and therefore the behaviour of the program is undefined. %d is for signed int. You need to use
std::printf("%" PRIu64 "\n",i);
PRIu64 is declared in <cinttypes>.
P.S. You also haven't included the header which declares std::printf.
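For completeness, here is a corrected version of the test program (just a sketch; the loop is bounded so the output stays short and you can see the value pass the 32-bit limit):

#include <cinttypes>   // PRIu64
#include <cstdint>     // uint64_t
#include <cstdio>      // std::printf

int main()
{
    uint64_t i = 0;
    // Step by 10^9 so the value passes the 32-bit limit within a few iterations.
    for (int n = 0; n < 6; ++n)
    {
        std::printf("%" PRIu64 "\n", i);
        i += UINT64_C(1000000000);
    }
    return 0;
}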
I have a piece of code that was shipped as part of the XLR8 development platform, which formerly used a bundled version (4.8.1) of the avr-gcc/g++ compiler. I tried to use the latest version of avr-g++ included with my Linux distribution (Ubuntu 22.04), which is 5.4.0.
When running that compiler, I get the following error, which seems to make sense to me. Here are the error and the chunk of related code below. In the bundled version of avr-g++ that was provided with the XLR8 board, this was not an error. I'm not sure why, because it appears that the code below is attempting to place 16-bit words into an array of chars.
A couple of questions:
Can anyone explain the reason this worked with previous avr-gcc releases and was not considered an error?
Because sizeof is used in the snippet below to control the for loop's terminal count, I think each element of the array was supposed to be a 16-bit type. Is that accurate?
If the size of the element was 16 bits, then is the correct fix simply to make that array of type unsigned int rather than char?
/home/rich/.arduino15/packages/alorium/hardware/avr/2.3.0/libraries/XLR8Info/src/XLR8Info.cpp:157:12: error: narrowing conversion of ‘51343u’ from ‘unsigned int’ to ‘char’ inside { } [-Wnarrowing]
0x38BF};
bool XLR8Info::hasICSPVccGndSwap(void) {
// List of chip IDs from boards that have Vcc and Gnd swapped on the ICSP header
// Chip ID of affected parts are 0x????6E00. Store the ???? part
const static char cidTable[] PROGMEM =
{0xC88F, 0x08B7, 0xA877, 0xF437,
0x94BF, 0x88D8, 0xB437, 0x94D7, 0x38BF, 0x145F, 0x288F, 0x28CF,
0x543F, 0x0837, 0xA8B7, 0x748F, 0x8477, 0xACAF, 0x14A4, 0x0C50,
0x084F, 0x0810, 0x0CC0, 0x540F, 0x1897, 0x48BF, 0x285F, 0x8C77,
0xE877, 0xE49F, 0x2837, 0xA82F, 0x043F, 0x88BF, 0xF48F, 0x88F7,
0x1410, 0xCC8F, 0xA84F, 0xB808, 0x8437, 0xF4C0, 0xD48F, 0x5478,
0x080F, 0x54D7, 0x1490, 0x88AF, 0x2877, 0xA8CF, 0xB83F, 0x1860,
0x38BF};
uint32_t chipId = getChipId();
for (int i=0;i< sizeof(cidTable)/sizeof(cidTable[0]);i++) {
uint32_t cidtoTest = (cidTable[i] << 16) + 0x6E00;
if (chipId == cidtoTest) {return true;}
}
return false;
}
As you already pointed out, the array type char definitely looks wrong. My guess is that this is a bug that may never have surfaced in the field. hasICSPVccGndSwap will then always return false, so maybe they never used a chip type that had its pins swapped and got away with it.
Can anyone explain the reason this worked with previous avr-gcc releases and was not considered an error?
Yes, the error/warning behavior was changed with version 5.
As of G++ 5, the behavior is the following: When a later standard is in effect, e.g. when using -std=c++11, narrowing conversions are diagnosed by default, as required by the standard. A narrowing conversion from a constant produces an error, and a narrowing conversion from a non-constant produces a warning, but -Wno-narrowing suppresses the diagnostic.
I would've expected v4.8.1 to throw a warning at least, but maybe that has been ignored.
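As a minimal illustration (not taken from the original code), the change in behaviour can be reproduced with a single declaration:

// With g++/avr-g++ 5 or later in -std=c++11 mode, this line is an error by default;
// -Wno-narrowing demotes it back to a warning. 0xC88F (51343) does not fit in a char.
const char cidTable[] = { 0xC88F };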
Because of the use of sizeof in the snippet below to control the for loop terminal count, I think the 16 bit size was supposed to be the data type per element of the array. Is that accurate?
Yes, this further supports that the array type should've been uint16_t in the first place.
If the size of the element was 16 bits, then is the correct fix simply to make that array of type unsigned int rather than char?
Yes.
Several bugs here. I am not familiar with that software, but there are at least the following obvious bugs:
The element type of cidTable should be a 16-bit, integral type like uint16_t. This follows from the code and also from the comments.
You cannot read from PROGMEM like that. The code will read from RAM, using a flash address to access RAM. Currently, the only way to read from flash in avr-g++ is inline assembly; to make life easier, you can use macros from avr/pgmspace.h like pgm_read_word.
cidTable[i] << 16 is undefined behaviour because the value is shifted left by the full width of its promoted type: the 8-bit element is promoted to int, which is only 16 bits wide on AVR, and then shifted by 16. The same problem remains even if the element type is made 16 bits wide.
Taking it all together, in order to make sense in avr-g++, the code would be something like:
#include <avr/pgmspace.h>
bool XLR8Info::hasICSPVccGndSwap()
{
// List of chip IDs from boards that have Vcc and Gnd swapped on
// the ICSP header. Chip ID of affected parts are 0x????6E00.
// Store the ???? part.
static const uint16_t cidTable[] PROGMEM =
{
0xC88F, 0x08B7, 0xA877, 0xF437, ...
};
uint32_t chipId = getChipId();
for (size_t i = 0; i < sizeof(cidTable) / sizeof (*cidTable); ++i)
{
uint16_t cid = pgm_read_word (&cidTable[i]);
uint32_t cidtoTest = ((uint32_t) cid << 16) + 0x6E00;
if (chipId == cidtoTest)
return true;
}
return false;
}
I'm quite new to programming; I have recently learnt a little C++ and I am using Visual Studio 2017 Community edition.
I need to use a 64-bit integer to store a value and carry out some arithmetic operations; however, my compiler only seems to let me use 32 bits of the "int64" variable I have created.
Here is an example of some code and the behaviour:
unsigned __int64 testInt = 0x0123456789ABCDEF;
printf("int value = %016X\n", testInt); // only 32 bits are being stored here? (4 bytes)
printf("size of integer in bytes = %i\n\n", sizeof(testInt)); // size of int is correct (8 bytes)
The value stored in the variable seems to be 0x0000000089ABCDEF.
Why can I not use all 64 bits of this integer? It seems to act as a 32-bit int.
Probably I'm missing something basic, but I can't find anything relating to this from searching :(
It would be nice if it were just something basic, but it turns out that 64-bit ints do not have a consistent printf format specifier across platforms, so we have to lean on macros.
This answer describes the use of PRIu64, PRIx64, and related macros included in <inttypes.h>. It looks funny like this, but I think the portable solution would look like:
#include <inttypes.h>
#include <stdio.h>
unsigned __int64 testInt = 0x0123456789ABCDEF;
printf("int value = %016" PRIX64 "\n", testInt);
The PRIX64 expands to the appropriate format specifier depending on your platform (probably llX for Visual Studio).
Format specifier %X takes an unsigned int (probably 32 bit on your system), whereas __int64 corresponds to long long.
Use printf("int value = %016llX\n", testInt) instead. Documentation can be found, for example, at cppreference.com.
There is a structure in a Linux (64-bit OS) program, and I did the following to output this structure as hex code. After the code below runs, "strBuff" is written to a file, much as printf would print it.
That output then needs to be read on Windows and stored in the same structure "example".
However, there is a problem here.
On my current Windows, unsigned long is 4 bytes.
On my current Linux, unsigned long is 8 bytes.
So there are too many zero bytes in the output text.
This also seems to be related to padding: I expected only 2 bytes of padding, but 4 bytes of padding are added.
It is not possible to change the structure "example", because the Linux output code was written assuming the field is 4 bytes and the code is already in its completion stage.
I have two things to ask.
How can I get rid of the unnecessary zero hex in the output?
Currently, we are hard-coding the skipping of all unsigned long and signed long variables.
Compatibility between Windows and Linux should be solved.
The code can be changed both on the reading side and on the output side. Is there a library that can solve this padding and compatibility problem?
struct example
{
    unsigned long Ul;
    int a;
    signed long Sl;
};

struct example eg;
// data input at eg
char *tempDataPtr = (char*)(&eg);
for (int i = 0; i < sizeof(eg); i++)
{
    sprintf(&strBuff[i*3], "%02X ", tempDataPtr[i]);
}
Use types that have an explicit size:
(And order them from largest to smallest for good measure, to protect against padding discrepancies between fields)
struct example
{
    uint32_t Ul;
    int32_t  Sl;
    int16_t  a;
};
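If the output side can still be touched, another way to keep padding out of the picture entirely (a sketch, not part of the original answer; the struct is repeated so the snippet is self-contained) is to format each member explicitly instead of dumping the raw struct bytes:

#include <cinttypes>
#include <cstdint>
#include <cstdio>

struct example
{
    uint32_t Ul;
    int32_t  Sl;
    int16_t  a;
};

// Hypothetical helper: each member is written separately, so any padding the
// compiler inserts between members never reaches the output file.
int dump_example(char *buf, const example &eg)
{
    return std::sprintf(buf, "%08" PRIX32 " %08" PRIX32 " %04" PRIX16 " ",
                        eg.Ul, (uint32_t)eg.Sl, (uint16_t)eg.a);
}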
I need to accurately convert a long representing bits to a double, and my solution must be portable to different architectures (being standard across compilers such as g++ and clang++ would be great too).
I'm writing a fast approximation of the exp function, as suggested in the answers to this question.
double fast_exp(double val)
{
double result = 0;
unsigned long temp = (unsigned long)(1512775 * val + 1072632447);
/* to convert from long bits to double,
but must check if they have the same size... */
temp = temp << 32;
memcpy(&result, &temp, sizeof(temp));
return result;
}
and I'm using the suggestion found here to convert the long into a double. The issue I'm facing is that whereas I got the following results for int values in [-5, 5] under OS X with clang++ and libc++:
0.00675211846828461
0.0183005779981613
0.0504353642463684
0.132078289985657
0.37483024597168
0.971007823944092
2.7694206237793
7.30961990356445
20.3215942382812
54.8094177246094
147.902587890625
I always get 0 under Ubuntu with clang++ (3.4, the same version) and libstdc++. The compiler there even tells me (through a warning) that the shift operation can be problematic, since long has a size equal to or less than the shift amount (probably indicating that long and double do not have the same size there).
Am I doing something wrong, and/or is there a better way to solve the problem while staying as portable as possible?
First off, using "long" isn't portable. Use the fixed length integer types found in stdint.h. This will alleviate the need to check for the same size, since you'll know what size the integer will be.
The reason you are getting a warning is that left shifting 32 bits on the 32 bit intger is undefined behavior. What's bad about shifting a 32-bit variable 32 bits?
Also see this answer: Is it safe to assume sizeof(double) >= sizeof(void*)? It should be safe to assume that a double is 64bits, and then you can use a uint64_t to store the raw hex. No need to check for sizes, and everything is portable.
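Putting those two suggestions together, a sketch of the same approximation with a fixed-width type (assuming a 64-bit IEEE-754 double) might look like:

#include <cstdint>
#include <cstring>

double fast_exp(double val)
{
    double result;
    // Same constants as before; the computed value becomes the upper 32 bits
    // of the double's bit pattern.
    uint64_t bits = (uint64_t)(1512775 * val + 1072632447);
    bits <<= 32;                                  // well-defined: bits is 64 bits wide
    std::memcpy(&result, &bits, sizeof(result));  // both are 8 bytes by construction
    return result;
}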
I included stdint.h in my solution and used uint64_t, but the result was not what I wanted. Here is the code that I used.
#include <stdio.h>
#include "./stdint.h"

void main (void)
{
    //-- to get the maximum value that the 32-bit integer can take
    unsigned int test = 0;
    test--;

    //-- to see if the 64-bit integer can take the value that the 32-bit integer can't take
    uint64_t test2 = test;
    test2++;

    printf("%u\n", test);
    printf("%u\n", test2);

    while (1) { }
}
And here is the result.
4294967295
0
I want to use the full range that the 64-bit integer can take. How can I do it in x86 Visual Studio 2008? For your information, I'm using 32-bit Windows 7.
The %u format specifier for printf is for unsigned int. Use %llu for an unsigned 64-bit value.
Since this is C++ though, you might as well use the type-safe std::cout to avoid programmer error:
std::cout << test2;
Use:
#include <inttypes.h>
/* ... */
printf("%" PRIu64 "\n", test2);
to print an uint64_t value.
The u conversion specifier is used to print an unsigned int value.
Note that PRIu64 comes from C's <inttypes.h>. In older C++ implementations it may not be available unless you define __STDC_FORMAT_MACROS before including the header (C++11's <cinttypes> provides it directly), and you may have to fall back to the %llu conversion specification.
Since you've also tagged this as C++, I'll add the obvious way to avoid type mismatches like you've run into with printf: use an ostream instead:
std::cout << test2;
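For a quick side-by-side (a sketch that assumes C++11 or later, where <cinttypes> provides the macro):

#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <iostream>

int main()
{
    uint64_t test2 = 4294967296ULL;           // one past the 32-bit maximum
    std::printf("%" PRIu64 "\n", test2);      // printf with the width-correct macro
    std::cout << test2 << '\n';               // iostream deduces the type
    return 0;
}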
In a book that I have, it says that a program always converts variables to integers when it calculates. Is it possible to make the default integer 64-bit?
This is 1) not correct, and 2) no, you can't tell (most) compilers what size int you want. You could of course use something like:
typedef int64_t Int;
typedef uint64_t Uint;
But you can't rename int to something other than what the compiler thinks it should be - which may be 16, 32, 64, 36, or 72 bits - or some other number >= 16.