Analyze float value in gdb

Sample code:
#include <stdio.h>

int main()
{
    float x = 456.876;
    printf("\nx = %f\n", x);
    return 0;
}
In gdb, I executed this code like this:
Breakpoint 1, main () at sample_float.c:5
5 float x = 456.876;
(gdb) n
7 printf ("\nx = %f\n", x);
(gdb) p &x
$1 = (float *) 0x7fffffffd9dc
(gdb) x/4fb &x
0x7fffffffd9dc: 33 112 -28 67
Is it possible to see the value at the address of x as 456.876, using an x/fb-style command?
Thanks.

Perhaps I am misreading your question, but you can simply do
p/f x
or
x/f &x
Is that what you were looking for?
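Continuing the session from the question, the output looks something like this (the printed value is the nearest representable float to 456.876). Note that spelling out the unit as w (a 4-byte word) matters here: gdb remembers the last unit, so after the earlier x/4fb a bare x/f &x would again format a single byte:

(gdb) p/f x
$2 = 456.876007
(gdb) x/fw &x
0x7fffffffd9dc: 456.876007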

I agree with the above answer, but it helps to understand why you got the results that you did.
(gdb) x/4fb &x
0x7fffffffd9dc: 33 112 -28 67
From the gdb manual:
x/3uh 0x54320 is a request to display three halfwords (h) of memory, formatted as unsigned decimal integers (u), starting at address 0x54320.
Thus, x/4fb &x formats a byte as a float, four times; it does not format 4 bytes as one float.
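To see the same bytes from C, you can copy the float's object representation into an unsigned char buffer. This is a minimal sketch (not from the original answers); on a little-endian machine it prints 0x21 0x70 0xe4 0x43, i.e. 33, 112, -28, 67 when the bytes are read as signed, matching the gdb output above:

#include <stdio.h>
#include <string.h>

int main(void)
{
    float x = 456.876f;
    unsigned char bytes[sizeof x];
    memcpy(bytes, &x, sizeof x); /* copy the float's object representation */
    for (size_t i = 0; i < sizeof x; ++i)
        printf("byte %zu: 0x%02x (%d as a signed byte)\n",
               i, bytes[i], (int)(signed char)bytes[i]);
    return 0;
}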

Here is a link to examining memory using gdb:
You can use the command x (for "examine") to examine memory in any of several formats, independently of your program's data types.
x/nfu addr
x addr
x
n, f, and u are all optional parameters that specify how much memory to display and how to format it; addr is an expression giving the address where you want to start displaying memory. If you use defaults for nfu, you need not type the slash /. Several commands set convenient defaults for addr.
n, the repeat count
The repeat count is a decimal integer; the default is 1. It specifies how much memory (counting by units u) to display.
f, the display format
The display format is one of the formats used by print, s (null-terminated string), or i (machine instruction). The default is x (hexadecimal) initially.
The default changes each time you use either x or print.
u, the unit size
The unit size is any of
b: Bytes.
h: Halfwords (two bytes).
w: Words (four bytes). This is the initial default.
g: Giant words (eight bytes).

Related

Why "%I64d" is giving strange output when it is used multiple times in same format string?

While I was solving a programming problem on Codeforces, I found that when the format specifier "%I64d" was used multiple times in the same format string, like:
long long int a, b, c;
a = 1, b = 3, c = 5;
printf("%I64d %I64d %I64d\n", a, b, c);
the output was
1 0 3
However when I separated each specifier, like:
long long int a, b, c;
a = 1, b = 3, c = 5;
printf("%I64d ", a);
printf("%I64d ", b);
printf("%I64d ", c);
puts("");
the output was, as expected:
1 3 5
Here is the ideone link to see above code snippets in action:
http://ideone.com/f2udRB
Please help me understand why this is happening. If this is undefined behaviour, why is this particular output shown? How can I reason about the causes when such unexpected outputs appear?
With the format string %I64d, printf() expects a 4-byte number on the stack, since with GNU printf the I means "use alternative output digits" and the 64 means "pad to 64 characters". The remaining d stands for a signed 32-bit integer.
But you are pushing 8-byte numbers, since a, b, c are of type long long.
The numbers are below 2^32, so on the stack you see (in 4-byte steps)
1 0 3 0 5 0
Only the first 3 numbers are interpreted by printf, the rest is discarded. When you use %lld, printf() correctly interprets the stack data as 8-byte numbers.
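As an aside (not part of the original answer), the portable spellings are %lld for long long and the PRId64 macro from <inttypes.h> for int64_t:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    long long a = 1, b = 3, c = 5;
    int64_t d = 7;                       /* arbitrary example value */
    printf("%lld %lld %lld\n", a, b, c); /* correct length modifier for long long */
    printf("%" PRId64 "\n", d);          /* expands to the right specifier per platform */
    return 0;
}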
Please help me to understand why this is happening? If this is
undefined behaviour, how such an output is shown?
Yes, the behavior is undefined, but the particular output follows from the calling convention. The ideone compiler runs as a 32-bit process, which means that parameters are passed on the stack (according to the System V ABI) instead of in registers. A disassembly of the code shows something like:
push 0
push 5
push 0
push 3
push 0
push 1
push OFFSET FLAT:.LC0
call printf
The second code snippet is different because you are passing just one argument each time, so printf reads the correct (that is, the first) value.

Defining integer in C resolves to unsigned char, or produces unexpected behavior in case of math operations

I am working on a microcontroller and I want to implement a simple averaging filter on the resulting values to filter out noise (or, to be honest, to keep the values from dancing on the LCD!).
The ADC result is inserted into memory by DMA. I have (just for the sake of easier debugging) an array of size 8. To make life even easier I have written some defines to make my code editable with minimum effort:
#define FP_ID_POT_0 0 //identifier for POT_0
#define FP_ID_POT_1 1 //identifier for POT_1
#define FP_ANALOGS_BUFFER_SIZE 8 //buffer size for filtering ADC vals
#define FP_ANALOGS_COUNT 2 // we have now 2 analog axis
#define FP_FILTER_ELEMENT_COUNT FP_ANALOGS_BUFFER_SIZE / FP_ANALOGS_COUNT
//means that the DMA buffer will have 4 results for each ADC
So, the buffer has a size of 8 and its type is uint32_t, and I am reading 2 ADC channels. In the buffer I will have 4 results for Channel A and 4 results for Channel B (in a circular manner). A simple dump of this array looks like:
INDEX   0     1    2     3    4     5    6     7
CHNL    A     B    A     B    A     B    A     B
VALUE   4017  62   4032  67   4035  64   4029  63
It means that the DMA puts results for ChA and ChB always in a fixed place.
Now to calculate the average for each channel I have the function below:
uint32_t filter_pots(uint8_t which) {
    uint32_t sum = 0;
    uint8_t i = which;
    for ( ; i < FP_ANALOGS_BUFFER_SIZE; i += FP_ANALOGS_COUNT) {
        sum += adc_vals[i];
    }
    return sum / (uint32_t)FP_FILTER_ELEMENT_COUNT;
}
If I want to use the function for chA, I pass 0 as the argument; for chB I pass 1; and if I happen to have a chC I will pass 2, and so on. This way the for-loop starts at the element that I need.
The problem is that at the last step, when I return the result, I do not get the correct value. The function returns 1007 when used for chA and 16 when used for chB. I am pretty sure that the sum is calculated correctly (I can see it in the debugger). The problem, I believe, is in the division by a value defined using #define; even casting it to uint32_t does not help. I cannot see what type or value the compiler has assigned to FP_FILTER_ELEMENT_COUNT. Maybe it's an overflow problem when dividing uint32 by uint8?
#define FP_FILTER_ELEMENT_COUNT FP_ANALOGS_BUFFER_SIZE / FP_ANALOGS_COUNT
//means FP_FILTER_ELEMENT_COUNT will be 8 / 2 which results in 4
What causes this behaviour, and if there is no way #define would work in my case, what other options do I have?
The compiler is IAR Embedded Workbench; the platform is an STM32F103.
For fewer surprises, always put parentheses around your macro definitions:
#define FP_FILTER_ELEMENT_COUNT (FP_ANALOGS_BUFFER_SIZE / FP_ANALOGS_COUNT)
This prevents oddball operator-precedence issues and other unexpected syntax and logic errors from cropping up. In this case, you're returning sum / 8 / 2 (i.e. sum / 16) when you want to return sum / 4.
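A minimal, self-contained demonstration of the expansion, reusing the numbers from the question (the macro names BAD_COUNT and GOOD_COUNT are invented for this sketch):

#include <stdio.h>

#define BAD_COUNT  8 / 2   /* unparenthesized, like the original macro */
#define GOOD_COUNT (8 / 2)

int main(void)
{
    unsigned int sum = 16113;         /* the four chA samples added up */
    printf("%u\n", sum / BAD_COUNT);  /* sum / 8 / 2 -> 1007, the buggy result     */
    printf("%u\n", sum / GOOD_COUNT); /* sum / (8/2) -> 4028, the intended average */
    return 0;
}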
Parentheses will help, as #Russ said, but an even better solution is to use constants:
static const int FP_ID_POT_0 = 0; //identifier for POT_0
static const int FP_ID_POT_1 = 1; //identifier for POT_1
static const int FP_ANALOGS_BUFFER_SIZE = 8; //buffer size for filtering ADC vals
static const int FP_ANALOGS_COUNT = 2; // we have now 2 analog axis
static const int FP_FILTER_ELEMENT_COUNT = FP_ANALOGS_BUFFER_SIZE / FP_ANALOGS_COUNT;
In C++, all of these are compile-time integral constant expressions, and can be used as array bounds, case labels, template arguments, etc. But unlike macros, they respect namespaces, are type-safe, and act like real values, not text substitution.

How does int to char array conversion occur?

#include <iostream>
using namespace std;

int main()
{
    int a;
    char *x;
    x = (char *)&a;
    a = 500;
    x[0] = 2;
    x[1] = 2;
    x[2] = 0;
    cout << "final op : " << a;
    return 0;
}
I know the answer is 514, but how does it work?
First, note that exactly how this works is not dictated by the C or C++ standards, and you really shouldn't rely on any specific behavior, unless you really know what you're doing and the ramifications thereof. (since you had to ask, you don't fall into that category, even after reading this answer!)
The two most common representations both amount to storing the integer in base 256, one digit per byte (bytes are usually 8 bits, and 2^8 = 256). So when you access it via a char*, each place represents one digit.
In little endian format, you access the digits in order of magnitude; e.g. x[0] has the one's place, x[1] has the 256's place, x[2] has the 256^2's place, and so forth. That is, you access the digits from right to left.
In big endian format, the digits are in the opposite order.
Which format is used is usually dictated by the computer hardware.
How many base-256 digits make up an int can also vary. 4 digits is common, but 2 and 8 happen, and there are even some exotic environments where int doesn't fit exactly into this picture at all.
On your particular computer and in your programming environment, what probably happened was that int had at least 4 digits and was stored in little-endian form; so assigning 500 means the digits were
0, 0, 1, 244
(because 500 = 244 * 1 + 1 * 256; remember how to compute a change of base!) and then the assignments changed the digits to
0, 0, 2, 2
which represents 2 * 1 + 2 * 256 = 514.
(Also note that if char is signed in your programming environment, you would probably have read the rightmost digit of 500 as -12 rather than 244: look up two's complement notation.)
The way this works is that x points to the same location that holds the data for a. When you write to or read from x[0], you are accessing the first byte of the value held in a.
This isn't converting between char array and int, this is "accessing an int through a char pointer".
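For concreteness, here is a runnable version of the snippet with the bytes printed at each step (rendered as plain C; the original is C++, but the mechanics are identical). It assumes a little-endian machine with a 4-byte int, and uses unsigned char so the bytes print as 0..255 rather than as negative values:

#include <stdio.h>

int main(void)
{
    int a = 500;                         /* little-endian bytes: f4 01 00 00 */
    unsigned char *x = (unsigned char *)&a;
    for (unsigned i = 0; i < sizeof a; ++i)
        printf("x[%u] = %d\n", i, x[i]); /* prints 244 1 0 0 */
    x[0] = 2;
    x[1] = 2;
    x[2] = 0;                            /* bytes are now: 02 02 00 00 */
    printf("a = %d\n", a);               /* 2*1 + 2*256 = 514 */
    return 0;
}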

C++ : storing a 13 digit number always fails

I'm programming in C++ and I have to store big numbers in one of my exercises.
The biggest number I have to store is: 9 780 321 563 842.
Each time I try to print the number (contained in a variable) it gives me a wrong result (not that number).
A 32-bit type isn't enough, since 2^32 is a 10-digit number and I have to store a 13-digit number; but with 64 bits you can represent a number that has 20 digits. So I tried the type uint64_t, but that didn't work for me and I really don't understand why.
So I searched on the internet to find which type would be big enough for my variable to fit in. I saw people on this forum with the same problem who solved it using long long int or long double as the type, but none of those worked for me (neither did long float).
I really don't know which other type could store that number; everything I tried failed.
Thanks for your help! :)
--
EDIT: The code is a bit long and complex and doesn't matter for the question, so here is what I actually do with the variable containing that number:
string barcode_s = "9780321563842";
uint64_t barcode = atoi(barcode_s.c_str());
cout << "Barcode is : " << barcode << endl;
Of course I don't simply put that number into a string variable barcode_s in order to convert it straight back to a number; that's just what happens in my program. I read text from an input file and put it in barcode_s (the text I read into that variable is always a number), and then I convert that string to a number using atoi.
So I presume the problem comes from the atoi function?
Thanks for your help!
The problem is indeed atoi: it returns an int, which on most platforms is a 32-bit integer. Converting the result to uint64_t will not magically restore the information that has already been lost.
There are several solutions, though. In C++03, you could use stringstream to handle the conversion:
std::istringstream stream(barcode_s);
unsigned long long barcode = 0; // unsigned long may be only 32 bits on some platforms
if (not (stream >> barcode)) { std::abort(); }
In C++11, you can simply use stoul or stoull:
unsigned long long const barcode = std::stoull(barcode_s);
Your number 9 780 321 563 842 is hex 8E52897B4C2, which fits into 44 bits (4 bits per hex digit), so any 64-bit integer, signed or unsigned, will have space to spare. uint64_t will work, and the value will even fit into a double with no loss of precision.
It follows that the remaining issue is a mistake in your code: usually either an accidental conversion of the 64-bit number to another type somewhere, or a call to the wrong function to print a 64-bit integer.
Edit: just saw your code. atoi returns int, as in int32_t. Converting that to uint64_t will not reconstruct the 64-bit number. Have a look at this: http://msdn.microsoft.com/en-us/library/czcad93k.aspx
The atoll() function converts a char* to a long long.
If you don't have that function available, write your own in the meantime:
uint64_t result = 0;
for (unsigned int ii = 0; str.c_str()[ii] != '\0'; ++ii)
{
    result *= 10;
    result += str.c_str()[ii] - '0';
}
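To make the failure mode concrete, here is a small contrast of the truncating conversion with a correct one (written in C with the standard strtoull; the C++ equivalent is the std::stoull call shown earlier). Note that atoi on an out-of-range string is undefined behaviour, so its result may vary:

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

int main(void)
{
    const char *barcode_s = "9780321563842";
    int bad = atoi(barcode_s);                     /* overflows int: result undefined */
    uint64_t good = strtoull(barcode_s, NULL, 10); /* parses the full 44-bit value */
    printf("atoi:     %d\n", bad);
    printf("strtoull: %" PRIu64 "\n", good);       /* prints 9780321563842 */
    return 0;
}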

C/C++ Bit Array or Bit Vector

I am learning C/C++ programming and have encountered the usage of "bit arrays" or "bit vectors". I am not able to understand their purpose. Here are my doubts:
Are they used as boolean flags?
Can one use int arrays instead? (more memory of course, but..)
What's this concept of Bit-Masking?
If bit-masking consists of simple bit operations to get an appropriate flag, how does one program it? Isn't it difficult to do this operation in your head to see what the flag would be, as opposed to working with decimal numbers?
I am looking for applications, so that I can understand better. For example:
Q. You are given a file containing integers in the range 1 to 1 million. There are some duplicates, and hence some numbers are missing. Find the fastest way of finding the missing numbers.
For the above question, I have read solutions telling me to use bit arrays. How would one store each integer in a bit?
I think you've got yourself confused between arrays and numbers, specifically what it means to manipulate binary numbers.
I'll go about this by example. Say you have a number of error conditions and you want to report them in the return value of a function. Now, you might label your errors 1, 2, 3, 4..., which makes sense to your mind, but then how do you, given just one number, work out which errors have occurred?
Now, try labelling the errors 1, 2, 4, 8, 16...: increasing powers of two, basically. Why does this work? Well, when you work in base 2 you are manipulating a number like 00000000, where each digit corresponds to a power of 2 determined by its position from the right. So let's say errors 1, 4 and 8 occur. That can be represented as 00001101. In reverse, the first digit = 1*2^0, the third digit = 1*2^2 and the fourth digit = 1*2^3. Adding them all up gives you 13.
Now, we are able to test whether such an error has occurred by applying a bitmask. For example, to work out whether error 8 has occurred, use the bit representation of 8 = 00001000. To extract whether or not that error has occurred, use a binary AND, like so:
  00001101
& 00001000
= 00001000
I'm sure you know how an AND works, or can deduce it from the above: working digit-wise, if any two digits are both 1 the result is 1, else it is 0.
Now, in C (the error tests are placeholders, just as in the original sketch):

#include <stdint.h>

int func(void)
{
    int retval = 0;
    if (some_test_that_means_an_error)
    {
        retval += 1;
    }
    if (some_other_test_that_means_an_error)
    {
        retval += 2;
    }
    return retval;
}

void anotherfunc(void)
{
    uint8_t x = func();
    /* binary AND with 8 and shift 3 places to the right
     * so that the resulting expression is either 1 or 0 */
    if (((x & 0x08) >> 3) == 1)
    {
        /* that error occurred */
    }
}
Now, to practicalities. When memory was sparse and protocols didn't have the luxury of verbose XML and the like, it was common to define a field as being so many bits wide. In such a field, you assign various bits (flags, powers of 2) a certain meaning, apply binary operations to deduce whether they are set, and then act on them.
I should also add that binary operations are close in idea to the underlying electronics of a computer. Imagine if the bit fields corresponded to the output of various circuits (carrying current or not). By using enough combinations of said circuits, you make... a computer.
Regarding the usage of the bit array:
If you know there are "only" 1 million numbers, you use an array of 1 million bits. In the beginning all bits are zero, and every time you read a number you use that number as an index and set the bit at that index to one (if it is not one already).
After reading all the numbers, the missing numbers are the indices of the zeros in the array.
For example, if we had only the numbers 0 to 4, the array would look like this in the beginning: 0 0 0 0 0.
If we read the numbers 3, 2, 2,
the array would change like this: read 3 --> 0 0 0 1 0. read 2 --> 0 0 1 1 0. read 2 (again) --> 0 0 1 1 0. Check the indices of the zeroes: 0, 1, 4. Those are the missing numbers.
BTW, of course you can use integers instead of bits, but that may take (depending on the system) 32 times the memory.
Sivan
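A minimal sketch of that walkthrough in C (the helper names bit_set and bit_test are invented here; a real run over 1 million numbers would just use a proportionally larger array):

#include <stdio.h>
#include <stdint.h>

#define WORD_BITS 32

static void bit_set(uint32_t *arr, unsigned n)
{
    arr[n / WORD_BITS] |= (uint32_t)1 << (n % WORD_BITS);
}

static int bit_test(const uint32_t *arr, unsigned n)
{
    return (arr[n / WORD_BITS] >> (n % WORD_BITS)) & 1u;
}

int main(void)
{
    uint32_t bits[1] = {0};       /* one 32-bit word covers values 0..31 */
    unsigned input[] = {3, 2, 2}; /* the numbers "read", duplicates included */
    for (unsigned i = 0; i < sizeof input / sizeof input[0]; ++i)
        bit_set(bits, input[i]);
    for (unsigned n = 0; n <= 4; ++n)
        if (!bit_test(bits, n))
            printf("missing: %u\n", n); /* prints 0, 1 and 4 */
    return 0;
}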
Bit arrays or bit vectors can be thought of as arrays of boolean values. Normally a boolean variable needs at least one byte of storage, but in a bit array/vector only one bit is needed per value.
This comes in handy if you have lots of such data, since you save memory at large.
Another usage is for numbers which do not exactly fit in standard variables of 8, 16, 32 or 64 bits. You could, for example, store into a 16-bit vector one number that is 4 bits wide, one that is 2 bits and one that is 10 bits. Normally you would have to use 3 variables of 8, 8 and 16 bits, so 50% of the storage would be wasted.
But all these uses are rare in business applications; they come up most often when interfacing with drivers through pinvoke/interop functions and when doing low-level programming.
Bit arrays or bit vectors are used as a mapping from position to some bit value. Yes, it's basically the same thing as an array of bool, but a typical bool implementation is one to four bytes long, which uses too much space.
We can store the same amount of data much more efficiently by using arrays of words together with binary masking operations and shifts to store and retrieve it (less overall memory used, fewer accesses to memory, fewer cache misses, fewer memory page swaps). The code to access individual bits is still quite straightforward.
There is also some bit-field support built into the C language (you write things like int i:1; to say "only consume one bit"), but it is not available for arrays, and you have less control of the overall result (the details of the implementation depend on the compiler and on alignment issues).
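As an illustration of that built-in support, here is a short sketch; field layout and padding are implementation-defined, so the struct is not guaranteed to occupy exactly 2 bytes:

#include <stdio.h>

struct packed_fields {
    unsigned a : 4;  /* 4-bit number,  0..15   */
    unsigned b : 2;  /* 2-bit number,  0..3    */
    unsigned c : 10; /* 10-bit number, 0..1023 */
};

int main(void)
{
    struct packed_fields p = {9, 2, 700};
    printf("a=%u b=%u c=%u, sizeof=%zu\n",
           p.a, p.b, p.c, sizeof p);
    return 0;
}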
Below is a possible way to answer your "search missing numbers" question. I fixed the int size to 32 bits to keep things simple, but it could be written using sizeof(int) to make it portable. And (depending on the compiler and target processor) the code could be made faster by using >> 5 instead of / 32 and & 31 instead of % 32, but that is just to give the idea.
#include <stdio.h>
#include <errno.h>
#include <stdint.h>
int main(){
    /* put all numbers from 0 to 999999 in a file, except 765, 777760 and 777791 */
    {
        printf("writing test file\n");
        int x = 0;
        FILE * f = fopen("testfile.txt", "w");
        for (x = 0; x < 1000000; ++x){
            if (x == 765 || x == 777760 || x == 777791){
                continue;
            }
            fprintf(f, "%d\n", x);
        }
        fprintf(f, "%d\n", 57768); /* this one is a duplicate */
        fclose(f);
    }
    uint32_t bitarray[1000000 / 32] = {0}; /* must be zero-initialized */
    /* read file containing integers in the range [0,999999] */
    /* any non-number is considered a separator */
    /* the goal is to find missing numbers */
    printf("Reading test file\n");
    {
        unsigned int x = 0;
        FILE * f = fopen("testfile.txt", "r");
        while (1 == fscanf(f, " %u", &x)){
            bitarray[x / 32] |= (uint32_t)1 << (x % 32);
        }
        fclose(f);
    }
    /* find missing numbers in bitarray */
    {
        int x = 0;
        for (x = 0; x < (1000000 / 32); ++x){
            uint32_t n = bitarray[x];
            if (n != (uint32_t)-1){
                printf("Missing number(s) between %d and %d [%x]\n",
                       x * 32, (x + 1) * 32, bitarray[x]);
                int b;
                for (b = 0; b < 32; ++b){
                    if (0 == (n & ((uint32_t)1 << b))){
                        printf("missing number is %d\n", x * 32 + b);
                    }
                }
            }
        }
    }
    return 0;
}
Bit arrays are used for bit-flag storage, as well as for parsing the bit-fields of various binary protocols, where a single byte may be divided into several fields. This is widely used, in protocols from TCP/IP up to ASN.1 encodings, OpenPGP packets, and so on.