Problem with sprintf function, last parameters are wrong when written - c++

So I use sprintf
sprintf(buffer, "%f|%f|%f|%f|%f|%f|%d|%f|%d", x, y, z, u, v, w, nID, dDistance, nConfig);
But when I print the buffer, the last two parameters come out wrong. They are supposed to be 35.0000 and 0, but in the string they are 0.00000 and 10332430. My buffer is long enough, and all the other parameters come out fine in the string.
Any idea? Is there a length limit to sprintf or something?
I checked the types of all the numbers and they are right, but the problem seems to be dDistance. When I remove it from the sprintf, nConfig gets the right value in the string, but when I remove nConfig, dDistance still doesn't get the right value. I checked and dDistance is a double. Any idea?
Since people don't seem to believe me, I did this:
char test[255] = {0};
int test1 = 2;
double test2 = 35.00;
int test3 = 0;
sprintf(test, "%d|%f|%d", test1, test2, test3);
and I get this in my string:
2|0.000000|1078034432

I'd check to make sure your argument types match up with your format string elements. Trying to display a double as an integer type (%d) or vice versa can give you strange output.

Are you sure your data types match your format spec?
Try printing out your data; that is, create a printf with the same format and see what happens:
printf("%f|%f|%f|%f|%f|%f|%d|%f|%d", x, y, z, u, v, w, nID,dDistance, nConfig);
Try using streams (std::stringstream from <sstream>), for example
stringstream ss;
ss << x << "|" << y << "|" << z << "|" << u << "|" << v << "|" << w << "|"
   << nID << "|" << dDistance << "|" << nConfig;
string s = ss.str();
and then do something with s;

What size are long and int?
If nID is a long being printed with an 'int' format and sizeof(int) != sizeof(long), then the arguments after it are read from the wrong offsets (the data access is misaligned from there on).
Which compiler are you using? GCC should diagnose the problem without difficulty if that is the trouble. Similarly, if nID is a 'long long', you could have problems.
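As an illustration (hedged: the declarations here are hypothetical, since the question doesn't show them), the length modifier has to match the declared type exactly:
#include <stdio.h>

int main(void)
{
    long      nID_long  = 42;    /* hypothetical: nID declared as long      */
    long long nID_ll    = 42;    /* hypothetical: nID declared as long long */
    double    dDistance = 35.0;

    char buffer[128];
    /* %ld for long, %lld for long long, %f for double */
    sprintf(buffer, "%ld|%lld|%f", nID_long, nID_ll, dDistance);
    puts(buffer);
    return 0;
}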
To answer your question - yes, there is a lower bound on the upper limit of the length of string that sprintf() must be able to handle. The number is 509 for C89 systems, and 4095 for C99 systems.
To be precise, C99 says (fprintf(), section §7.19.6.1, para 15):
The number of characters that can be produced by any single conversion shall be at least 4095.
There is no further qualification on sprintf().
With the amended example converted into compilable code:
#include <stdio.h>

int main(void)
{
    char test[255] = {0};
    int test1 = 2;
    double test2 = 35.00;
    int test3 = 0;
    sprintf(test, "%d|%f|%d", test1, test2, test3);
    puts(test);
    return(0);
}
I get the output I would expect:
2|35.000000|0
To be getting the output you show, you have to be working with a very weird setup.
What platform are you on?
The behaviour you are showing indicates that the sprintf() function you are using is confused about the alignment or size of something. Do you have a prototype in scope - that is, have you #included <stdio.h>? With GCC, I get all sorts of warnings when I omit the header:
x.c: In function ‘main’:
x.c:8: warning: implicit declaration of function ‘sprintf’
x.c:8: warning: incompatible implicit declaration of built-in function ‘sprintf’
x.c:9: warning: implicit declaration of function ‘puts’
But even with test2 redefined as a float I get the correct result.
If I change test2 into a long double, then I get a different set of warnings and a different result:
x.c: In function ‘main’:
x.c:8: warning: implicit declaration of function ‘sprintf’
x.c:8: warning: incompatible implicit declaration of built-in function ‘sprintf’
x.c:8: warning: format ‘%f’ expects type ‘double’, but argument 4 has type ‘long double’
x.c:9: warning: implicit declaration of function ‘puts’
2|0.000000|0
This is closer to what you are seeing, though far from identical.
Since we don't have the platform information, I'm suspicious that you are working with a truly ancient (or do I mean buggy?) version of C. But I'm still at something of a loss to see how you get what you show - unless you're on a big-endian machine (I'm testing on Intel Mac with MacOS X 10.6.2) ...pause... on a SPARC machine running Solaris 10, without #include <stdio.h> and with long double test2 = 35.0;, I get:
gcc -O x.c -o x && ./x
x.c: In function 'main':
x.c:8: warning: incompatible implicit declaration of built-in function 'sprintf'
2|-22446048024026502740613283801712842727785152907992454451420278635613183447049251888877319095301502091725503415850736723945766334416305449970423980980099172087880564051243997662107950451281495566975073444407658980167407319222477077473080454782593800009947058951029590025152409694784570786541673131117073399808.000000|0
That's different; it also generates 321 characters of output, so there's a buffer overflow in there - it is better to use 'snprintf()' to prevent that from occurring. When things are declared properly, of course, I get the expected result.
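For reference, a minimal sketch of the snprintf() form of the same example, which truncates instead of overflowing the buffer:
#include <stdio.h>

int main(void)
{
    char test[255] = {0};
    int test1 = 2;
    double test2 = 35.00;
    int test3 = 0;

    /* snprintf never writes more than sizeof(test) bytes, including the
       terminating NUL, so an over-long result is truncated rather than
       running past the end of the buffer. */
    snprintf(test, sizeof(test), "%d|%f|%d", test1, test2, test3);
    puts(test);
    return 0;
}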
So, can you post compilable code that shows your problem, instead of a snippet that does not? And can you identify your platform - machine type, operating system version, compiler version (and maybe C library version)?

Is dDistance of type double? It looks like your %f is only grabbing four bytes of a double and then the next four bytes are treated as an integer, the real integer value is then ignored.
This question is tagged C++; are you able to use std::ostringstream which would eliminate any possible problems with the conversion string?


What is a 16 byte signed integer data type?

I made this program to test what data types arbitrary integer literals get evaluated to. This program was inspired from reading some other questions on StackOverflow.
How do I define a constant equal to -2147483648?
Why do we define INT_MIN as -INT_MAX - 1?
(-2147483648 > 0) returns true in C++?
In these questions, we have an issue: the programmer wants to write INT_MIN as -2^31, but 2^31 is actually a literal and - is the unary negation operator. Since INT_MAX is usually 2^31 - 1 with a 32-bit int, the literal 2^31 cannot be represented as an int, and so it gets promoted to a larger data type. The second answer in the third question has a chart according to which the data type of an integer literal is determined. The compiler goes down the list from the top until it finds a data type which can fit the literal.
Suffix    Decimal constant
none      int
          long int
          long long int
In my little program, I define a macro that will return the "name" of a variable, literal, or expression, as a C-string. Basically, it returns the text that is passed inside of the macro, exactly as you see it in the code editor. I use this for printing the literal expression.
I want to determine the data type of the expression, what it evaluates to. I have to be a little clever about how I do this. How can we determine the data type of a variable or an expression in C? I've concluded that only two "bits" of information are necessary: the width of the data type in bytes, and the signedness of the data type.
I use the sizeof() operator to determine the width of the data type in bytes. I also use another macro to determine whether the data type is signed or not. typeof() is a GNU compiler extension that yields the data type of a variable or expression, but I cannot print that type directly, so I typecast -1 to whatever that data type is. If it's a signed data type, it will still be -1; if it's an unsigned data type, it will become the maximum value of that data type.
#include <stdio.h>  /* C standard input/output - for printf() */
#include <stdlib.h> /* C standard library - for EXIT_SUCCESS */

/**
 * Returns the name of the variable or expression passed in as a string.
 */
#define NAME(x) #x

/**
 * Returns 1 if the passed in expression is of a signed type.
 * -1 is cast to the type of the expression.
 * If it is signed, -1 < 0 == 1 (TRUE)
 * If it is unsigned, UMax < 0 == 0 (FALSE)
 */
#define IS_SIGNED_TYPE(x) ((typeof(x))-1 < 0)

int main(void)
{
    /* What data type is the literal -9223372036854775808? */
    printf("The literal is %s\n", NAME(-9223372036854775808));
    printf("The literal takes up %zu bytes\n", sizeof(-9223372036854775808));
    if (IS_SIGNED_TYPE(-9223372036854775808))
        printf("The literal is of a signed type.\n");
    else
        printf("The literal is of an unsigned type.\n");
    return EXIT_SUCCESS;
}
As you can see, I'm testing -2^63 to see what data type it is. The problem is that in ISO C99, the "largest" data type for integer literals appears to be long long int, if we can believe the chart. As we all know, long long int has a numerical range of -2^63 to 2^63 - 1 on a modern 64-bit system. However, the - above is the unary negation operator, not really part of the integer literal. I'm attempting to determine the data type of 2^63, which is too big for a long long int. I'm attempting to cause a bug in C's type system. That is intentional, and only for educational purposes.
I am compiling and running the program. I use -std=gnu99 instead of -std=c99 because I am using typeof(), a GNU compiler extension, not actually part of the ISO C99 standard. I get the following output:
$ gcc -m64 -std=gnu99 -pedantic experiment.c
$
$ ./a.out
The literal is -9223372036854775808
The literal takes up 16 bytes
The literal is of a signed type.
I see that the integer literal equivalent to 2^63 evaluates to a 16-byte signed integer type! As far as I know, there is no such data type in the C programming language. I also don't know of any Intel x86_64 processor that has a 16-byte register to store such an rvalue. Please correct me if I'm wrong. Can someone explain what's going on here? Why is there no overflow? Also, is it possible to define a 16-byte data type in C? How would you do it?
Your platform likely has __int128 and 9223372036854775808 is acquiring that type.
A simple way to get a C compiler to print a typename is with something like:
int main(void)
{
#define LITERAL (-9223372036854775808)
    _Generic(LITERAL, struct {char x;} /* can't ever match */ : "");
}
On my x86_64 Linux, the above generates the error message "error: ‘_Generic’ selector of type ‘__int128’ is not compatible with any association", implying __int128 is indeed the type of the literal.
(Given this, the warning "integer constant is so large that it is unsigned" is wrong. Well, gcc isn't perfect.)
After some digging, this is what I've found. I converted the code to C++, assuming that C and C++ behave similarly in this case. I wanted to create a template function that can accept any data type. I use __PRETTY_FUNCTION__, a GNU compiler extension that yields a C-string containing the "prototype" of the function, that is, the return type, the name, and the formal parameters. I am interested in the formal parameters. Using this technique, I am able to determine the data type of the passed-in expression exactly, without guessing!
#include <iostream>

/**
 * This is a templated function.
 * It accepts a value "object" of any data type, which is labeled as "T".
 *
 * __PRETTY_FUNCTION__ is a GNU compiler extension: a C-string that
 * evaluates to the "pretty" name of a function, including the function's
 * return type and the types of its formal parameters.
 *
 * I'm using __PRETTY_FUNCTION__ to determine the data type of the
 * expression passed to the function, at runtime!
 */
template<typename T>
void foo(T value)
{
    std::cout << __PRETTY_FUNCTION__ << std::endl;
}

int main()
{
    foo(5);
    foo(-9223372036854775808);
    return 0;
}
Compiling and running, I get this output:
$ g++ -m64 -std=c++11 experiment2.cpp
$
$ ./a.out
void foo(T) [with T = int]
void foo(T) [with T = __int128]
I see that the passed in expression is of type __int128. Apparently, this is a GNU compiler specific extension, not part of the C standard.
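Since the question also asks how to define a 16-byte integer type yourself, here is a minimal sketch using the same GCC/Clang __int128 extension (this assumes a 64-bit target; printf has no standard conversion for __int128, so the value is split into two 64-bit halves for display):
#include <stdio.h>

int main(void)
{
    /* __int128 / unsigned __int128 are GCC/Clang extensions on 64-bit targets. */
    unsigned __int128 big = (unsigned __int128)1 << 100;

    printf("sizeof(__int128) = %zu bytes\n", sizeof(__int128));   /* 16 on x86_64 */

    /* Split into two 64-bit halves, since printf cannot print __int128 directly. */
    printf("high 64 bits: %llu, low 64 bits: %llu\n",
           (unsigned long long)(big >> 64),
           (unsigned long long)big);
    return 0;
}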
Why isn't there int128_t?
https://gcc.gnu.org/onlinedocs/gcc-4.6.4/gcc/_005f_005fint128.html
https://gcc.gnu.org/onlinedocs/gcc-4.6.4/gcc/C-Extensions.html#C-Extensions
How is a 16 byte data type stored on a 64 bit machine
With all warnings enabled (-Wall), gcc will issue the warning "integer constant is so large that it is unsigned". Gcc assigns this integer constant the type __int128, and sizeof(__int128) = 16.
You can check that with a _Generic macro:
#include <stdio.h>

#define typestring(v) _Generic((v), \
    long long: "long long", \
    unsigned long long: "unsigned long long", \
    __int128: "__int128" \
)

int main()
{
    printf("Type is %s\n", typestring(-9223372036854775808));
    return 0;
}
Type is __int128
Or with warnings from printf:
#include <stdio.h>

int main() {
    printf("%s", -9223372036854775808);
    return 0;
}
will compile with warning:
warning: format '%s' expects argument of type 'char *', but argument 2 has type '__int128' [-Wformat=]

comparison between signed and unsigned integer expressions and 0x80000000

I have the following code:
#include <iostream>
using namespace std;
int main()
{
    int a = 0x80000000;
    if (a == 0x80000000)
        a = 42;
    cout << "Hello World! :: " << a << endl;
    return 0;
}
The output is
Hello World! :: 42
so the comparison works. But the compiler tells me
g++ -c -pipe -g -Wall -W -fPIE -I../untitled -I. -I../bin/Qt/5.4/gcc_64/mkspecs/linux-g++ -o main.o ../untitled/main.cpp
../untitled/main.cpp: In function 'int main()':
../untitled/main.cpp:8:13: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if(a == 0x80000000)
^
So the question is: Why is 0x80000000 an unsigned int? Can I make it signed somehow to get rid of the warning?
As far as I understand, 0x80000000 would be INT_MIN, as it's out of range for a positive int. But why is the compiler assuming that I want a positive number?
I'm compiling with gcc version 4.8.1 20130909 on linux.
0x80000000 is an unsigned int because the value is too big to fit in an int and you did not add any L to specify it was a long.
The warning is issued because unsigned in C/C++ has quite weird semantics, and it is therefore very easy to make mistakes in code by mixing up signed and unsigned integers. This mixing is often a source of bugs, especially because the standard library, by historical accident, chose to use an unsigned value for the size of containers (size_t).
An example I often use to show how subtle the problem is:
// Draw connecting lines between the dots
for (int i = 0; i < pts.size() - 1; i++) {
    draw_line(pts[i], pts[i+1]);
}
This code seems fine but has a bug. In case the pts vector is empty, pts.size() is 0 but, and here comes the surprising part, pts.size()-1 is a huge nonsense number (today often 4294967295, but it depends on the platform) and the loop will use invalid indexes (with undefined behavior).
Here changing the variable to size_t i will remove the warning but is not going to help as the very same bug remains...
The core of the problem is that with unsigned values a < b-1 and a+1 < b are not the same thing even for very commonly used values like zero; this is why using unsigned types for non-negative values like container size is a bad idea and a source of bugs.
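As a sketch of one way to write the loop safely (Point and draw_line are hypothetical stand-ins for whatever the real code uses):
#include <cstddef>
#include <iostream>
#include <vector>

struct Point { double x, y; };                 // hypothetical point type

void draw_line(const Point& a, const Point& b) // hypothetical drawing routine
{
    std::cout << "line (" << a.x << ',' << a.y << ") -> ("
              << b.x << ',' << b.y << ")\n";
}

void draw_polyline(const std::vector<Point>& pts)
{
    // i + 1 < pts.size() never underflows, so an empty vector simply
    // skips the loop instead of indexing with a huge wrapped-around value.
    for (std::size_t i = 0; i + 1 < pts.size(); i++) {
        draw_line(pts[i], pts[i + 1]);
    }
}

int main()
{
    draw_polyline({});                         // empty: draws nothing, no crash
    draw_polyline({{0, 0}, {1, 1}, {2, 0}});   // draws two segments
    return 0;
}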
Also note that your code is not correct, portable C++ on platforms where that value doesn't fit in an int, as the behavior around overflow is defined for unsigned types but not for regular integers. C++ code that relies on what happens when an integer goes past its limits has undefined behavior.
Even if you know what happens on a specific hardware platform note that the compiler/optimizer is allowed to assume that signed integer overflow never happens: for example a test like a < a+1 where a is a regular int can be considered always true by a C++ compiler.
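A small sketch of that last point (hedged: whether the compiler actually folds the test depends on the compiler and optimization level):
#include <climits>
#include <iostream>

// Because signed overflow is undefined behavior, an optimizer is allowed
// to assume a + 1 never wraps and fold this test to "true".
bool always_smaller(int a) { return a < a + 1; }

int main()
{
    std::cout << std::boolalpha << always_smaller(INT_MAX) << '\n';
    // May print "true" even though INT_MAX + 1 would wrap at the machine level.
    return 0;
}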
It seems you are confusing 2 different issues: The encoding of something and the meaning of something. Here is an example: You see a number 97. This is a decimal encoding. But the meaning of this number is something completely different. It can denote the ASCII 'a' character, a very hot temperature, a geometrical angle in a triangle, etc. You cannot deduce meaning from encoding. Someone must supply a context to you (like the ASCII map, temperature etc).
Back to your question: 0x80000000 is encoding, while INT_MIN is meaning. They are not interchangeable and not comparable. On specific hardware, in some contexts, they might be equal, just like 97 and 'a' are equal in the ASCII context.
Compiler warns you about ambiguity in the meaning, not in the encoding. One way to give meaning to a specific encoding is the casting operator. Like (unsigned short)-17 or (student*)ptr;
On a 32-bit system, or a 64-bit system that keeps a 32-bit int for backward compatibility, int and unsigned int are 32 bits wide, like 0x80000000; but on a platform with a 64-bit int, INT_MIN would not be equal to this number.
Anyway - the answer to your question: in order to remove the warning you must give identical context to both left and right expressions of the comparison.
You can do it in many ways. For example:
(unsigned int)a == (unsigned int)0x80000000 or (__int64)a == (__int64)0x80000000 or even a crazy (char *)a == (char *)0x80000000 or any other way as long as you maintain the following rules:
You don't demote the encoding (do not reduce the amount of bits it requires). Like (char)a == (char)0x80000000 is incorrect because you demote 32 bits into 8 bits
You must give both the left side and the right side of the == operator the same context. Like (char *)a == (unsigned short)0x80000000 is incorrect and will yield an error/warning.
I want to give you another example of how crucial is the difference between encoding and meaning. Look at the code
char a = -7;
bool b = (a==-7) ? true : false;
What is the result of 'b'? The answer may shock you: it is implementation-defined.
Some compilers (typically Microsoft Visual Studio) will compile a program in which b gets true, while with the Android NDK compilers b gets false.
The reason is that the Android NDK treats 'char' as 'unsigned char', while Visual Studio treats 'char' as 'signed char'. So on Android phones the encoding of -7 actually has the meaning 249 and is not equal to the meaning of (int)-7.
The correct way to fix this problem is to specifically define 'a' as signed char:
signed char a = -7;
bool b = (a==-7) ? true : false;
0x80000000 is considered unsigned by default.
You can avoid the warning like this:
if (a == (int)0x80000000)
    a = 42;
Edit after a comment:
Another (perhaps better) way would be
if ((unsigned)a == 0x80000000)
    a = 42;

Error when compiling C++ program: "error: cast from ‘double*’ to ‘int’ loses precision"

I get 2 errors when trying to compile this code:
#include <iostream>
using namespace std;
int main() {
    int i;
    char myCharArray[51] = "This string right here contains exactly 50 chars.";
    double myDoubleArray[4] = {100, 101, 102, 103};
    char *cp, *cbp;
    double *dp, *dbp;
    dp = &myDoubleArray[0];
    dbp = myDoubleArray;
    cp = &myCharArray[0];
    cbp = myCharArray;
    while ((cp - cbp) < sizeof(myCharArray)) { cp++; dp++; }
    cout << "Without cast: " << (dp - dbp) << endl;
    cout << "      Cast 1: " << ((int *) dp - (int *) dbp) << endl;
    cout << "      Cast 2: " << ((int) dp - (int) dbp) << endl;
}
The errors I get are:
error: cast from ‘double*’ to ‘int’ loses precision [-fpermissive]
error: cast from ‘double*’ to ‘int’ loses precision [-fpermissive]
g++ won't let me compile the program. I'm asking what I could change to make it compile.
cast from ‘double*’ to ‘int’ loses precision
is as simple as it reads: the number of bits an int can store is less than the number of bits a pointer occupies on your platform. It sometimes helps to change the int to an unsigned int, because on some platforms a pointer fits in an unsigned int. An unsigned int has one more bit for the value, because there is no need to distinguish between positive and negative, and addresses are always non-negative.
Even better is to use a type intended for exactly this, to make your code more portable. Have a look at uintptr_t.
Your "Without cast" line performs pointer subtraction, which yields the difference (in units of the size of the pointed-to type) between two pointers. If the two pointers point to elements of the same array, or just past the end of it, then the difference is the number of array elements between them. The result is of the signed integer type ptrdiff_t.
That's a perfectly sensible thing to do.
Your second line ("Cast 1:") converts the pointers (which are of type double*) to int* before the subtraction. That in effect pretends that the pointers are pointing to elements of an array of int, and determines the number of elements between the int objects to which they point. It's not at all clear why you'd want to do that.
Your third line ("Cast 2:") converts both pointer values to int before subtracting them. If int is not big enough to hold the converted pointer value, then the result may be nonsense. If it is big enough, then on most systems it will probably yield the distance between the two pointed-to objects in bytes. But I've worked on systems (Cray T90) where the byte offset of a pointer is stored in the high-order 3 bits of the pointer value. On such a system your code would probably yield the distance between the pointed-to objects in words. Or it might yield complete nonsense. In any case, the behavior is undefined.
The problem with the conversion from double* to int isn't just that it loses precision (which is what your compiler happened to complain about). It's that the result of the conversion doesn't necessarily mean anything.
The easiest, and probably the best, way to get your code to compile is to delete the second and third lines.
If you want a solution other than that, you'll have to explain what you're trying to do. Converting the pointer values to uintptr_t will probably avoid the error message, but it won't cause what you're doing to make sense.
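If the byte distance really is what you're after, here is a minimal sketch that compiles without the precision error by converting through uintptr_t (the array mirrors the one in the question):
#include <cstdint>
#include <iostream>

int main()
{
    double myDoubleArray[4] = {100, 101, 102, 103};
    double *dbp = myDoubleArray;
    double *dp  = dbp + 3;

    // Element distance: well-defined pointer subtraction, result type ptrdiff_t.
    std::cout << "elements: " << (dp - dbp) << '\n';

    // Byte distance: convert to an integer type wide enough to hold a pointer.
    std::cout << "bytes:    "
              << (reinterpret_cast<std::uintptr_t>(dp) -
                  reinterpret_cast<std::uintptr_t>(dbp)) << '\n';  // 24 here (3 * sizeof(double))
    return 0;
}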

Locating numerical errors due to Integer division

Is there a g++ warning or other tool that can identify integer division (truncation toward zero)? I have thousands of lines of code with calculations that inevitably will have numerical errors typically due to "float = int/int" that need to be located. I need a reasonable method for finding these.
Try -Wconversion.
From gcc's man page:
Warn for implicit conversions that may alter a value. This includes conversions between real and integer, like "abs (x)" when "x" is "double"; conversions between signed and unsigned, like "unsigned ui = -1"; and conversions to smaller types, like "sqrtf (M_PI)". Do not warn for explicit casts like "abs ((int) x)" and "ui = (unsigned) -1", or if the value is not changed by the conversion like in "abs (2.0)". Warnings about conversions between signed and unsigned integers can be disabled by using -Wno-sign-conversion.
For C++, also warn for conversions between "NULL" and non-pointer types; confusing overload resolution for user-defined conversions; and conversions that will never use a type conversion operator: conversions to "void", the same type, a base class or a reference to them. Warnings about conversions between signed and unsigned integers are disabled by default in C++ unless -Wsign-conversion is explicitly enabled.
For the following sample program (test.cpp), I get the warning test.cpp: In function ‘int main()’:
test.cpp:7: warning: conversion to ‘float’ from ‘int’ may alter its value.
#include <iostream>

int main()
{
    int a = 2;
    int b = 3;
    float f = a / b;
    std::cout << f;
    return 0;
}
I have a hard time calling these numerical errors. You asked for integer calculations, and got the correct numbers for integer calculations. If those numbers aren't acceptable, then ask for floating point calculations:
int x = 3;
int y = 10;
int z = x / y;
// "1." is the same thing as "1.0", you may want to read up on
// "the usual arithmetic conversions." You could add some
// parentheses here, but they aren't needed for this specific
// statement.
double zz = 1. * x / y;
This page contains info about g++ warnings. If you've already tried -Wall then the only thing left could be the warnings in this link. On second look -Wconversion might do the trick.
Remark on -Wconversion of gcc:
Changing the type of the floating point variable from float to double makes the warning vanish:
$ cat 'file.cpp'
#include <iostream>

int main()
{
    int a = 2;
    int b = 3;
    double f = a / b;
    std::cout << f;
}
Compiling with $ g++-4.7 -Wconversion 'file.cpp' returns no warnings (as does $ clang++ -Weverything 'file.cpp').
Explanation:
When the type is float, the warning is emitted not because of the perfectly valid integer arithmetic, but because float cannot store all possible values of int (the larger ones can be captured by double but not by float). So there might be a change of value when assigning the RHS to f in the case of float, but not in the case of double. To make it clear: the warning is issued not because of int/int, but because of the assignment float = int.
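A small sketch of why the float case warns while the double case does not (assuming a 32-bit int and IEEE-754 float/double):
#include <iostream>

int main()
{
    // 2^24 + 1 is the smallest positive int that a 32-bit float cannot
    // represent exactly; a 64-bit double can hold every 32-bit int.
    int    big = 16777217;
    float  f   = big;   // -Wconversion warns: the conversion may alter the value
    double d   = big;   // no warning

    std::cout << std::fixed << f << " vs " << d << '\n';
    // prints 16777216.000000 vs 16777217.000000
    return 0;
}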
For this, see the following questions: "What is the difference between the float and integer data type when the size is the same in Java", "Storing ints as floats", and "Rounding to use for int -> float -> int round trip conversion".
However, when using float, -Wconversion can still be useful for identifying lines that might be affected, but it is not comprehensive and is actually not intended for that. For the purpose of -Wconversion see docs/gcc/Warning-Options.html and gcc.gnu.org/wiki/NewWconversion.
Possibly also of interest is the following discussion: 'Implicit casting Integer calculation to float in C++'.
The best way to find such errors is to have really good unit tests. None of the alternatives are as good.
Have a look at this clang-tidy detection.
It catches cases like this:
d = 32 * 8 / (2 + i);
d = 8 * floatFunc(1 + 7 / 2);
d = i / (1 << 4);

What format specifier to use for printing "unsigned long long" in C without getting truncated values on the console?

typedef unsigned long long IMSI;
IMSI imsi;
When I try to print this using %llu as the format specifier, I get a rather unrelated value.
What can I do to fix this problem?
I am also using gcc 4.3.3.
I thought there might be a problem with the tracing mechanism that I have been using, but I get the same problem even when using printf.
imsiAsInt = 9379666465 ;
brrm_trace(ubm_TRACE_BRRM_UECTRL,ubm_TRACE_INFO,
UEC_IUH_ACCACHE_ENTRY_FOUND,imsiAsInt, status.ueRegCause,
mCacheEntries.size());
printf("printf:UEC_IUH_ACCACHE_ENTRY_FOUND=%llu, sizeof(IMSI)=%d\n",
imsiAsInt,sizeof(IMSI));
This gives following output
UEC_IUH_ACCACHE_ENTRY_FOUND Imsi=789731873,UeRegCause=1,CurSize=5 -->The trace
printf:UEC_IUH_ACCACHE_ENTRY_FOUND=789731873, sizeof(IMSI)=8 ---> when using printf
Also, for smaller values (7 digits) I am not getting any issue.
Which compiler are you using? The following program
#include <stdio.h>

int main()
{
    unsigned long long x;
    x = 12345;
    printf("Value: %llu\n", x);
    x = -1;
    printf("Value: %llu\n", x);
    return 0;
}
does give the expected output:
Value: 12345
Value: 18446744073709551615
on Linux with gcc 4.4.3
This could be a problem:
imsiAsInt = 9379666465 ;
[Warning] integer constant is too large for 'long' type
Try 9379666465ll
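A minimal sketch of the suffixed constant together with %llu (assuming a C99-capable compiler such as the gcc 4.3.3 mentioned in the question):
#include <stdio.h>

typedef unsigned long long IMSI;

int main(void)
{
    /* The ULL suffix keeps the constant from being truncated to a
       narrower type before the assignment. */
    IMSI imsiAsInt = 9379666465ULL;

    printf("IMSI=%llu, sizeof(IMSI)=%zu\n", imsiAsInt, sizeof(IMSI));
    return 0;
}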
You didn't say what OS or compiler you're using, and you haven't posted the code, so giving a correct answer is not easy. I'll have a stab at it though and guess that you're using an old version of MSVC that doesn't support the standard printf format specifiers for long long, and you may therefore have to use the non-standard Microsoft alternative %I64u to get the desired result.
For future reference you should post your code, and give enough detail for people to answer e.g. what OS and compiler you are using. As others have noted you should also do something about your accept rate.