fprintf(obFile,"\t%s\t%s\n",intToBase32((unsigned int)address),intToBase32((unsigned int)(atoi(infoArray[i].operand2)<<=2)));
fprintf(obFile,"\t%s\t%s\n",intToBase32((unsigned int)address),intToBase32((unsigned int)(getRegister(infoArray[i].operand2)<<=2)));
fprintf(obFile,"\t%s\t%s\n",intToBase32((unsigned int)address),intToBase32((unsigned int)(atoi(infoArray[i].operand1)<<=2)));
I get the error: "lvalue required as left operand of assignment"
on these lines (and more like them) in C, and I cannot understand what the problem is.
Needed information:
FILE *obFile;
int address;
infoArray is an array of structs. infoArray[i].operand1 and infoArray[i].operand2 are strings of numbers, for example "5" or "-10".
/*Gets unsigned int and returns base 32 string by dividing for groups of 5 bits */
char * intToBase32(unsigned int num)
Finally, I need to get a line with the address in base 32 and a number in base 32, where the number should be shifted 2 bits left before converting to base 32.
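A likely fix, for reference: <<= is a compound assignment whose left operand must be an lvalue, and the result of a function call such as atoi(...) is not one. A plain << computes the shifted value without storing it anywhere, which is all these lines need. A minimal sketch of one corrected call, reusing the names from the question:

/* use << (not <<=): shift the value left by 2 without assigning back */
fprintf(obFile, "\t%s\t%s\n",
        intToBase32((unsigned int)address),
        intToBase32((unsigned int)(atoi(infoArray[i].operand2) << 2)));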
#include <stdio.h>

int main() {
    int i;
    int a = 123456789;
    void *v = &a;
    unsigned char *c = (unsigned char *)v;
    for (i = 0; i < sizeof a; i++) {
        printf("%u ", *(c + i));
    }
    char *cc = (char *)v;
    printf("\n %d", *(cc + 1));
    char *ccc = (char *)v;
    printf("\n %u \n", *(ccc + 1));
}
This program generates the following output on my 32 bit Ubuntu machine.
21 205 91 7
-51
4294967245
The first two lines of output I can understand:
1st line: the order in which the bytes are stored in memory.
2nd line: the signed value of the second byte (two's complement).
3rd line: why such a large value?
Please explain the last line of output. Why are three bytes of 1s added,
so that (11111111111111111111111111001101) in binary = 4294967245?
Apparently your compiler uses signed characters, and it is a little-endian, two's complement system.
123456789d = 075BCD15h
Little endian: 15 CD 5B 07
Thus the byte at v+1 holds the value 0xCD. When this is stored in a signed char, you get -51 in signed decimal format.
When passed to printf, the character *(ccc+1) containing the value -51 first gets implicitly promoted to int, because variadic functions like printf have a rule stating that all small integer arguments get promoted to int (the default argument promotions). During this promotion, the sign is preserved. You still have the value -51, but for a 32-bit signed integer this is the bit pattern 0xFFFFFFCD.
And finally the %u specifier tells printf to treat this as an unsigned integer, so you end up with 4.29 bil something.
The important part to understand here is that %u has nothing to do with the actual type promotion, it just tells printf how to interpret the data after the promotion.
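A minimal sketch of that promotion in isolation, assuming a 32-bit int and two's complement (the variable names are mine):

#include <stdio.h>

int main(void) {
    signed char c = -51;                    /* bit pattern 0xCD */
    int promoted = c;                       /* default promotion preserves the sign: still -51 */
    printf("%d\n", promoted);               /* -51 */
    printf("%u\n", (unsigned int)promoted); /* 4294967245, i.e. 0xFFFFFFCD */
    return 0;
}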
-51 stored as an 8-bit value is 0xCD in hex (assuming a two's complement binary system).
When you pass it to a variadic function like printf, default argument promotion takes place and the char is promoted to int with representation 0xFFFFFFCD (for a 4-byte int).
0xFFFFFFCD interpreted as int is -51 and interpreted as unsigned int is 4294967245.
Further reading: Default argument promotions in C function calls
Please explain the last line of output. Why are three bytes of 1s added?
This is called sign extension. When a smaller signed number is assigned (converted) to a larger type, its sign bit gets replicated to ensure it represents the same number (for example, in one's and two's complement).
Bad printf format specifier
You are attempting to print a char with the specifier "%u", which specifies unsigned [int]. An argument that does not match the conversion specifier in printf is undefined behavior, per C99 7.19.6.1 paragraph 9:
If a conversion specification is invalid, the behavior is undefined. If
any argument is not the correct type for the corresponding conversion
specification, the behavior is undefined.
Use of char to store a signed value
Also, to ensure char contains a signed value, explicitly use signed char, as char may behave as either signed char or unsigned char. (In the latter case, the output of your snippet would be 205 and 205.) In gcc you can force char to behave as unsigned char with the -funsigned-char option.
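A sketch of the corrected printing, going through unsigned char so no sign extension happens (assuming the same little-endian layout as in the question):

#include <stdio.h>

int main(void) {
    int a = 123456789;
    unsigned char *p = (unsigned char *)&a;
    /* unsigned char promotes to a non-negative int,
       so %u prints 205, not 4294967245 */
    printf("%u\n", (unsigned int)p[1]);
    return 0;
}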
Possible Duplicate:
A riddle (in C)
See this code:

#include <stdio.h>

#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {23, 34, 12, 17, 204, 99, 16};

int main()
{
    int d;
    for (d = -1; d <= TOTAL_ELEMENTS - 2; d++)
        printf("%d\n", array[d + 1]);
    return 0;
}
Now, this loop won't run.
sizeof() returns an unsigned value, so TOTAL_ELEMENTS has an unsigned value.
Now, coming to the for loop: please tell me whether the unary operator '-' works on the signed int 2, or whether an implicit conversion to unsigned takes place first and then the '-' operator is applied.
In your example d is converted to an unsigned int in the comparison. But -1 cannot be represented as an unsigned int value, so it is converted to UINT_MAX. To avoid this behaviour you can convert the right side of the comparison to a signed int by prepending (int).
See Understand integer conversion rules for details on integer conversion in C.
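A sketch of the repaired loop with that cast in place:

#include <stdio.h>

#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))
int array[] = {23, 34, 12, 17, 204, 99, 16};

int main()
{
    int d;
    /* the cast keeps the comparison signed, so d = -1 is no longer
       converted to UINT_MAX */
    for (d = -1; d <= (int)TOTAL_ELEMENTS - 2; d++)
        printf("%d\n", array[d + 1]);
    return 0;
}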
There's no unary operator in d <= TOTAL_ELEMENTS-2.
TOTAL_ELEMENTS-2 is an expression with the binary operator -. This expression becomes unsigned because one of its operands is unsigned.
In the comparison d <= TOTAL_ELEMENTS-2, d is also converted to unsigned int for the same reason.
The relevant portion of the standard is section 6.3.1.8#1 (ISO/IEC 9899:1999) which says:
"Otherwise, if the operand that has unsigned integer type has rank greater or
equal to the rank of the type of the other operand, then the operand with
signed integer type is converted to the type of the operand with unsigned
integer type."
Yes, d also has an unsigned type in that expression, because of promotion, which is why the loop fails.
However, the question is whether the C compiler "thinks":
(unsigned) ((unsigned) 5 - (unsigned) 2)
i.e. promoting 2 to unsigned, or:
(unsigned) ((unsigned) 5 - (signed) 2)
i.e. subtraction taking operands of both types. Of course, it doesn't matter, as it would be the same operation for both. However, the whole point is that subtraction returns a value of one type, so theoretically it can only take arguments of that type. So it's the first (the unsigned int 2).
P.S. (-2) is unary, while (5 - 2) is binary.
I suspect the unsigned type of sizeof() propagates to the expression TOTAL_ELEMENTS-2 and then to both operands of d <= TOTAL_ELEMENTS-2. Inserting (int) just before TOTAL_ELEMENTS fixes the issue.
Look, that '-' operator being unary was a stupid thing; forget it. It was the binary '-', I realise.
When 2 is converted to unsigned int it becomes unsigned 2, so TOTAL_ELEMENTS-2 has a value equal to unsigned 5, and then when d is converted to an unsigned int it gets a large positive value, and so the loop fails.
Is that what is happening here?
And yes, I didn't write this code; it's some C puzzle I found on the web.
Thank ya all.
Is there a way in C/C++ to compute the maximum power of two that is representable by a certain data type using the sizeof operator?
For example, say I have an unsigned short int. Its values can range between 0 and 65535.
Therefore the maximum power of two that an unsigned short int can contain is 32768.
I pass this unsigned short int to a function and I have (at the moment) an algorithm that looks like this:
if (ushortParam > 32768) {
    ushortParam = 32768; // Bad hardcoded literals
}
However, in the future, I may want to change the variable type to incorporate larger powers of two. Is there a type-independent formula using sizeof() that can achieve the following:
if (param > /*Some function...*/ sizeof(param))
{
    param = /*Some function...*/ sizeof(param);
}
Note the parameter will never require floating-point precision - integers only.
Setting the most significant bit of a variable of that parameter's size will give you the highest power of 2:
1 << (8*sizeof(param)-1)
What about:
const T max_power_of_two = (std::numeric_limits<T>::max() >> 1) + 1;
To get the highest power of 2 representable by a certain integer type you may use limits.h instead of the sizeof operator. For instance:
#include <stdlib.h>
#include <stdio.h>
#include <limits.h>

int main() {
    int max = INT_MAX;
    int hmax = max >> 1;
    int mpow2 = max ^ hmax;
    printf("The maximum representable integer is %d\n", max);
    printf("The maximum representable power of 2 is %d\n", mpow2);
    return 0;
}
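(On a typical implementation with a 32-bit int, this prints 2147483647 and 1073741824, i.e. 2^30.)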
This should always work, as the right shift of a positive integer is always defined. Quoting from the C standard, section 6.5.7 paragraph 5 (Bitwise shift operators):
The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1
has an unsigned type or if E1 has a signed type and a nonnegative
value, the value of the result is the integral part of the quotient of
E1 divided by the quantity, 2 raised to the power E2.
If the use of sizeof is mandatory you can use:
1u << (CHAR_BIT*sizeof(param)-1)
for unsigned integer types and:
1 << (CHAR_BIT*sizeof(param)-2)
for signed integer types. The lines above will work only for integer types without padding bits. The part of the C standard ensuring these lines work is section 6.2.6.2. In particular:
For unsigned integer types other than unsigned char, the bits of the
object representation shall be divided into two groups: value bits and
padding bits (there need not be any of the latter). If there are N
value bits, each bit shall represent a different power of 2 between 1
and 2^(N-1), so that objects of that type shall be capable of
representing values from 0 to 2^N - 1 using a pure binary
representation; this shall be known as the value representation.
guarantees that the first method works, while:
For signed integer types, the bits of the object representation shall
be divided into three groups: value bits, padding bits, and the sign
bit. There need not be any padding bits; there shall be exactly one
sign bit.
...
A valid (non-trap) object representation of a signed integer type
where the sign bit is zero is a valid object representation of the
corresponding unsigned type, and shall represent the same value.
explains why the second line gives the right answer.
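A minimal sketch of the two sizeof-based formulas in use, assuming a 32-bit int with no padding bits (the variable names are mine):

#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned int uparam = 0;
    int sparam = 0;
    /* unsigned type: set the top value bit */
    unsigned int upow2 = 1u << (CHAR_BIT * sizeof uparam - 1);
    /* signed type: set the bit just below the sign bit */
    int spow2 = 1 << (CHAR_BIT * sizeof sparam - 2);
    printf("%u\n%d\n", upow2, spow2); /* 2147483648 and 1073741824 */
    return 0;
}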
The accepted answer will probably work on POSIX platforms, but is not general C/C++. It assumes that CHAR_BIT is 8, doesn't specify the type of the shifted 1, and assumes that the type has no padding bits.
Here are more general versions for any/all unsigned integer types and don't require including any headers, dependencies, etc.:
#define MAX_VAL(UNSIGNED_TYPE) ((UNSIGNED_TYPE) -1)
#define MAX_POW2(UNSIGNED_TYPE) (~(MAX_VAL(UNSIGNED_TYPE) >> 1))
#define MAX_POW2_VER2(UNSIGNED_TYPE) (MAX_VAL(UNSIGNED_TYPE) ^ (MAX_VAL(UNSIGNED_TYPE) >> 1))
#define MAX_POW2_VER3(UNSIGNED_TYPE) ((MAX_VAL(UNSIGNED_TYPE) >> 1) + 1)
The standards, even C90, guarantee that casting -1 to an unsigned type always yields the maximum value that type can represent. From there, all of the bitwise operators above are well defined.
http://c0x.coding-guidelines.com/6.3.1.3.html
6.3.1.3 Signed and unsigned integers
682 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
683 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
684 Otherwise, the new type is signed and the value cannot be represented in it;
685 either the result is implementation-defined or an implementation-defined signal is raised.
The maximum value of an unsigned type is one less than a power of 2 and has all value bits set. The above expressions result in the highest bit alone being set, which is the maximum power of 2 that the type can represent.
http://c0x.coding-guidelines.com/6.2.6.2.html
6.2.6.2 Integer types
593 For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter).
594 If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N - 1), so that objects of that type shall be capable of representing values from 0 to 2^N − 1 using a pure binary representation;
595 this shall be known as the value representation.
596 The values of any padding bits are unspecified.
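A usage sketch tying the macros above back to the original clamping question, assuming a 32-bit unsigned int (the test value 3000000000u is arbitrary):

#include <stdio.h>

#define MAX_VAL(UNSIGNED_TYPE) ((UNSIGNED_TYPE) -1)
#define MAX_POW2(UNSIGNED_TYPE) (~(MAX_VAL(UNSIGNED_TYPE) >> 1))

int main(void) {
    unsigned int param = 3000000000u;
    /* clamp to the largest representable power of 2, no hardcoded literals */
    if (param > MAX_POW2(unsigned int)) {
        param = MAX_POW2(unsigned int);
    }
    printf("%u\n", param); /* 2147483648, i.e. 0x80000000 */
    return 0;
}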
I know that when we assign a negative value to an unsigned datatype, its two's complement gets stored; that is, one more than the maximum value the datatype can store, minus the absolute value of the negative number we assigned.
To test that, I wrote a program which illustrates it; however, I am not able to understand the behaviour of the char datatype.
#include <iostream>
using namespace std;

template<class T>
void compare(T a, T b)
{
    cout << dec << "a:" << (int)a << "\tb:" << (int)b << endl; // first line
    cout << hex << "a:" << (int)a << "\tb:" << (int)b << endl; // second line
    if (a > b)
        cout << "a is greater than b" << endl;
    else
        cout << "b is greater than a" << endl;
}

int main()
{
    unsigned short as = 2;
    unsigned short bs = -4;
    compare(as, bs);
    unsigned int al = 2;
    unsigned int bl = -4;
    compare(al, bl);
    char ac = 2;
    char bc = -4;
    compare(ac, bc);
    int ai = 2;
    int bi = -4;
    compare(ai, bi);
}
Output is
a:2 b:65532
a:2 b:fffc
b is greater than a
a:2 b:-4
a:2 b:fffffffc
b is greater than a
a:2 b:-4
a:2 b:fffffffc
a is greater than b
a:2 b:-4
a:2 b:fffffffc
a is greater than b
The compare(...) function is called four times with arguments of different datatypes:
unsigned short - 2 bytes; therefore -4 gets stored as 65532.
unsigned int - 4 bytes; however, since we typecast it to int while outputting, it is shown as -4, tricking the compiler; the hex output and the result of the logical comparison show that the internal representation is two's complement.
char - 1 byte; this is where I am getting confused.
int - 4 bytes, a signed datatype; nothing unexpected, normal result.
The question I have to ask is: why is char behaving like a signed int?
Even though we typecast to int before outputting the first line of the result, why is char showing values similar to int, even when char is 1 byte and int is 4 bytes? unsigned short showed a different value because its memory requirement is 2 bytes.
unsigned int and int show the same result in the first line because both are 4 bytes, so the compiler gets tricked successfully, which is acceptable.
But why is char also showing the same value, as if its memory layout were the same as that of int?
The logical comparison also shows that char does not behave as an unsigned datatype, but as a signed one. The unsigned datatypes show b as greater than a, while char shows a as greater than b, in line with the signed datatypes. Why?
Isn't char a 1-byte unsigned datatype?
That is what I learnt when I did a course on C and C++ in my B.Tech degree.
Any explanation would be helpful.
The compiler used is mingw 2.19.1.
Isn't char a 1-byte unsigned datatype?
Maybe, maybe not. The signedness of char is implementation-defined.
In your current implementation, it is obviously signed.
And in the output from the compare method, you get four bytes shown, because you cast to int for the output, so the char value -4 gets converted to the int value -4.
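If in doubt, a quick sketch for checking what a given implementation does, using CHAR_MIN from limits.h (C++ also offers std::numeric_limits<char>::is_signed):

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* CHAR_MIN is negative when plain char is signed, 0 when it is unsigned */
    if (CHAR_MIN < 0)
        printf("plain char is signed here\n");
    else
        printf("plain char is unsigned here\n");
    return 0;
}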
Can anyone explain the following behaviour to a relative newbie...
const char cInputFilenameAndPath[] = "W:\\testerfile.bin";
int filesize = 4584;
char * fileinrampointer;
fileinrampointer = (char*) malloc(filesize);
ifstream fsInputFileStream;
fsInputFileStream.open(cInputFilenameAndPath, fstream::in | fstream::binary);
fsInputFileStream.read((char *)(fileinrampointer), filesize);
for (int f = 0; f < 4; f++)
{
    printf("%x\n", *fileinrampointer);
    fileinrampointer++;
}
I was expecting the above code to read the first 4 bytes of the file I just loaded into memory. In the loop I am just displaying the current byte pointed to by the pointer, then incrementing the pointer ready to display the next byte.
When I run the code I get:
37
ffffff94
42
ffffffd2
The values are correct but every other value seems to be padded up to a 64 bit number.
Because I'm asking it to display the value indicated by a 'char sized' pointer, I was expecting char-sized results, but every other result comes out as a long long.
If I assign *fileinrampointer to an unsigned __int8 it leaves me with the value I want (without the leading 1s), which solves the problem, but I'm just wondering if anyone can explain what is happening above?
The expression *fileinrampointer is of type signed char, and it is promoted to a signed int while being passed to printf. Thus, the sign bit propagates. Later on, you print it with %x, which means unsigned int in hex, which causes you to print all the 1s (as opposed to correctly interpreting them as part of a two's complement signed integer). Also, ffffffd2 is 8 hex digits, which means it's a 32-bit signed integer.
If you declare fileinrampointer as unsigned char or unsigned __int8, the sign bit doesn't propagate during promotion. You may as well leave it signed and cast it:
printf("%x\n", static_cast<unsigned char>(*fileinrampointer) );
ISO/IEC 9899:1999 6.5.2.2:
6. If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions. [...]
[...]
7. If the expression that denotes the called function has a type that does include a prototype, the arguments are implicitly converted, as if by assignment, to the types of the corresponding parameters, taking the type of each parameter to be the unqualified version of its declared type. The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
This clearly backs up my statement that this is integer promotion, and not printf interpretation.
Also see
ISO/IEC 9899:1999 7.15.1.1
glibc manual A.2.2.4
glibc manual 12.12.4
securecoding.cert.org
You are not asking it to display the value indicated by a char-sized pointer; you are asking it to display a hexadecimal integer (%x) using the contents of a char pointer. I've not tried it, but you could try casting it:
printf("%x\n", (unsigned int)(*fileinrampointer));