How to convert a multi-character constant in C to an integer?
I tried, for example, '13' as ('3' + '1' << 3), but it doesn't work properly.
I don't mean "0123", but '0123'. It compiles, but I don't understand how the compiler gets the octal result 6014231063 when printing it. I am not looking for atoi, which just converts a string of digits to the number it spells. For example, int x = '1' would print 49 in the decimal number system. Now I am interested in what int x = '0123' would print. This task is from a programming competition, so the answer shouldn't rely on unexpected behavior.
#include <stdio.h>

int main(void) {
    int x = '0123';
    printf("%o\n", x);
    printf("%d\n", x >> 24);
    printf("%d\n", x << 8 >> 24);
    printf("%d\n", x & 0xff);
    return 0;
}
How to convert a multi-character constant to an integer in C?
'0123' in an int:
int x = '0123';
'0123' is a character constant. In C, this is one of the forms of a constant, and it has type int. It is rarely used, as its value is implementation-defined. It's usually one of the following, depending on endianness and character coding (e.g. ASCII):
(('0'*256 + '1')*256 + '2')*256 + '3' = 808530483 = 0x30313233
(('3'*256 + '2')*256 + '1')*256 + '0' = 858927408 = 0x33323130
Further: It is a challenge to write useful portable code with it. Many coding styles bar it when used with more than 1 character.
'0123' is a multi-character constant/literal (C calls it a constant, C++ calls it a literal). In both languages, it is of type int and has an implementation-defined value.
It's probably typical for '0123' to have the value
('0' << 24) + ('1' << 16) + ('2' << 8) + '3'
(assuming CHAR_BIT==8, and keeping in mind that the values of '0' et al are themselves implementation-defined).
Because the value is implementation-defined, multi-character constants are rarely useful, and nearly useless in portable code. The standard doesn't even guarantee that '0123' and '1234' have distinct values.
But to answer your question, '0123' is already of type int, so no conversion is necessary. You can store, manipulate, or print that value in any way you like.
For example, on my system this program:
#include <stdio.h>
int main(void) {
printf("0x%x\n", (unsigned int)'0123');
}
prints (after a compile-time warning):
0x30313233
which is consistent with the formula above -- but the result might differ under another implementation.
The "implementation-defined" value means that an implementation is required to document it. gcc's behavior (for version 5.3) is documented here:
The preprocessor and compiler interpret character constants in the
same way; i.e. escape sequences such as ‘\a’ are given the values they
would have on the target machine.
The compiler evaluates a multi-character character constant a
character at a time, shifting the previous value left by the number of
bits per target character, and then or-ing in the bit-pattern of the
new character truncated to the width of a target character. The final
bit-pattern is given type int, and is therefore signed, regardless of
whether single characters are signed or not (a slight change from
versions 3.1 and earlier of GCC). If there are more characters in the
constant than would fit in the target int the compiler issues a
warning, and the excess leading characters are ignored.
For example, 'ab' for a target with an 8-bit char would be
interpreted as
(int) ((unsigned char) 'a' * 256 + (unsigned char)'b'), and
'\234a' as (int) ((unsigned char) '\234' * 256 + (unsigned char) 'a').
You could try something along the lines of creating a function like this:
int StringLiteralToInt(const char* string, int numberOfCharacters)
{
    int result = 0;
    int power = 1;  /* 10^0 for the rightmost digit */
    for (int ch = numberOfCharacters - 1; ch >= 0; ch--)
    {
        /* Convert the digit character to its numeric value,
           then scale it by the power of ten for its position. */
        result += (string[ch] - '0') * power;
        power *= 10;
    }
    return result;
}
I just wrote that inline, so it might not be 100% right, but it should be the right idea: convert each character to its digit value (subtract '0') and multiply it by a power of ten (rightmost: 10^0, leftmost: 10^(stringSize-1)).
Hope that helps :)
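A quick sanity check of the fixed sketch above (the test string is mine):
#include <stdio.h>

int StringLiteralToInt(const char* string, int numberOfCharacters);  /* defined above */

int main(void) {
    printf("%d\n", StringLiteralToInt("123", 3));  /* prints 123 */
    return 0;
}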
Well, you could try this:
#include <stdio.h>

int main(void)
{
    int x = '0123';
    printf("%x\n", x);
}
For me this prints 30313233, as I expect.
Here it is broken apart, as it looks like you were trying to do:
printf("%o ", (x >> 24) & 0xff);
printf("%o ", (x >> 16) & 0xff);
printf("%o ", (x >> 8) & 0xff);
printf("%o\n", x & 0xff);
These printouts show that the multi-character character constant is, in some sense, made up of the characters '0', '1', '2', and '3' all jammed together. But there is really no sense in which this multi-character character constant has any meaningful relationship to the integer 123. (We could write some code to shift and mask by 8 bits, then subtract '0' to convert from character to digit, then multiply by 10 and add, just like atoi, but it wouldn't really mean anything.)
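For what it's worth, here is a minimal sketch of that shift-and-mask conversion, assuming the typical packing shown above (high byte first) and a character set where the digits are contiguous:
#include <stdio.h>

int main(void) {
    int x = '0123';              /* implementation-defined value */
    int result = 0;
    /* Pull out each byte, high to low, convert the digit
       character to its value, and accumulate in base 10. */
    for (int shift = 24; shift >= 0; shift -= 8) {
        result = result * 10 + (((x >> shift) & 0xff) - '0');
    }
    printf("%d\n", result);      /* prints 123 on such an implementation */
    return 0;
}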
Related
I (think I) understand how the maths with different variable types works. For example, if I go over the max limit of an unsigned int variable, it will loop back to 0.
I don't understand the behavior of this code with unsigned char:
#include<iostream>
int main() {
unsigned char var{ 0 };
for(int i = 0; i < 501; ++i) {
var += 1;
std::cout << var << '\n';
}
}
This just outputs 1...9, then some symbols and capital letters, and then it just doesn't print anything. It doesn't loop back to the values 1...9 etc.
On the other hand, if I cast to int before printing:
#include<iostream>
int main() {
unsigned char var{ 0 };
for(int i = 0; i < 501; ++i) {
var += 1;
std::cout << (int)var << '\n';
}
}
It does print from 1...255 and then loops back from 0...255.
Why is that? It seems that the unsigned char variable does wrap around (as we can see from the int cast).
Is it safe to do maths with unsigned char variables? What is the behavior that I see here?
Why doesn't it print the expected integer value?
The issue is not with the wrapping of the char. The issue is with the insertion operation for std::ostream objects and 8-bit integer types. The non-member operator<< overloads for these types treat all 8-bit integers (char, signed char, and unsigned char) as characters, not as numbers.
operator<<(std::basic_ostream)
The canonical way to handle outputting 8-bit integer types as numbers is the way you're doing it. I personally prefer this instead:
char foo;
std::cout << +foo;
The unary + operator promotes the char type to an integer type, which then causes the integer printing function to be called.
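For example, a minimal sketch of the difference (what glyph the raw char prints depends on your terminal):
#include <iostream>

int main() {
    unsigned char var = 65;
    std::cout << var << '\n';   // prints the character 'A' (on ASCII systems)
    std::cout << +var << '\n';  // unary + promotes to int: prints 65
}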
Note that wraparound on overflow is only guaranteed for unsigned integer types. If you repeat this with char or signed char, the result is not portable: the conversion of an out-of-range value back to the signed type was implementation-defined before C++20 (it wraps in C++20 and later). SOMETHING will happen, for sure, because we live in reality, but it may differ from compiler to compiler.
Why doesn't it repeat the 0..9 characters?
I tested this using g++ to compile, and bash on Ubuntu 20.04. My non-printable characters are handled as explicit symbols in some cases, or nothing printed in other cases. The non-repeating behavior must be due to how your shell handles these non-printable characters. We can't answer that without more information.
Unsigned chars aren't treated as numbers in this case. This data type is literally a byte:
1 byte = 8 bits = 0000 0000 which means 0.
What cout is printing is the character that represents that byte you changed by adding +1 to it.
For example:
0 = 0000 0000
1 = 0000 0001
2 = 0000 0010
.
.
.
9 = 0000 1001
After that come other characters that aren't related to the digits.
So, if you cast it to int, it will give you the numeric representation of that byte, giving you a 0-255 output.
Hope this clarifies!
Edit: Made the explanation more clear.
This code produces Medium-level warnings at the lines with return:
// Checks if the symbol defines two-symbols Unicode sequence
bool doubleSymbol(const char c) {
static const char TWO_SYMBOLS_MASK = 0b110;
return (c >> 5) == TWO_SYMBOLS_MASK;
}
// Checks if the symbol defines three-symbols Unicode sequence
bool tripleSymbol(const char c) {
static const char THREE_SYMBOLS_MASK = 0b1110;
return (c >> 4) == THREE_SYMBOLS_MASK;
}
// Checks if the symbol defines four-symbols Unicode sequence
bool quadrupleSymbol(const char c) {
static const char FOUR_SYMBOLS_MASK = 0b11110;
return (c >> 3) == FOUR_SYMBOLS_MASK;
}
PVS says that the expressions are always false (V547), but they actually aren't: a char may be part of a multi-byte UTF-8 symbol that is read into a std::string!
Here is the UTF-8 representation of symbols:
1 byte - 0xxx'xxxx - 7 bits
2 bytes - 110x'xxxx 10xx'xxxx - 11 bits
3 bytes - 1110'xxxx 10xx'xxxx 10xx'xxxx - 16 bits
4 bytes - 1111'0xxx 10xx'xxxx 10xx'xxxx 10xx'xxxx - 21 bits
The following code counts number of symbols in a Unicode text:
size_t symbolCount = 0;
std::string s;
while (getline(std::cin, s)) {
for (size_t i = 0; i < s.size(); ++i) {
const char c = s[i];
++symbolCount;
if (doubleSymbol(c)) {
i += 1;
} else if (tripleSymbol(c)) {
i += 2;
} else if (quadrupleSymbol(c)) {
i += 3;
}
}
}
std::cout << symbolCount << "\n";
For the Hello! input the output is 6 and for Привет, мир! is 12 — this is right!
Am I wrong or doesn't PVS know something? ;)
The PVS-Studio analyzer knows that there are signed and unsigned char types. Whether char is signed or unsigned depends on the compilation switches, and the PVS-Studio analyzer takes these switches into account.
I think this code was compiled with char being a signed char type. Let's see what consequences that brings.
Let’s look only at the first case:
bool doubleSymbol(const char c) {
static const char TWO_SYMBOLS_MASK = 0b110;
return (c >> 5) == TWO_SYMBOLS_MASK;
}
If the value of the variable 'c' is less than or equal to 01111111, the condition will always be false, because during the shift the maximum value you can get is 011.
It means we are interested only in the cases where the highest bit in the variable 'c' is equal to 1. As this variable is of the signed char type, a set highest bit means that the variable stores a negative value. Before the shift, the signed char is promoted to a signed int, and the value continues to be negative.
Now let's see what the standard says about the right-shift of negative numbers:
The value of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type or if E1 has a signed type and a non-negative value, the value of the result is the integral part of the quotient of E1/2^E2. If E1 has a signed type and a negative value, the resulting value is implementation-defined.
Thus, the right shift of a negative number is implementation-defined. This means that the highest bits are filled with either zeros or ones. Both will be correct.
PVS-Studio thinks that the highest bits are filled with ones. It has every right to think so, because it has to choose some implementation. So it turns out that the expression ((c) >> 5) will have a negative value if the highest bit in the variable 'c' is originally equal to 1. A negative number cannot be equal to TWO_SYMBOLS_MASK.
It turns out that from the viewpoint of PVS-Studio, the condition will always be false, and it correctly issues a warning V547.
In practice, the compiler may behave differently: the highest bits will be filled with 0 and then everything will work correctly.
In any case, it is necessary to fix the code, as it relies on implementation-defined compiler behavior.
Code might be fixed as follows:
bool doubleSymbol(const unsigned char c) {
static const char TWO_SYMBOLS_MASK = 0b110;
return (c >> 5) == TWO_SYMBOLS_MASK;
}
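Alternatively, if you'd rather keep the char parameter, a cast inside the function achieves the same thing; a minimal sketch:
// The cast makes the shift operate on an unsigned value,
// regardless of whether plain char is signed on this platform.
bool doubleSymbol(const char c) {
    return (static_cast<unsigned char>(c) >> 5) == 0b110;
}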
I have this code which handles strings like "19485" or "10011010" or "AF294EC"...
long long toDecimalFromString(string value, Format format){
long long dec = 0;
for (int i = value.size() - 1; i >= 0; i--) {
char ch = value.at(i);
int val = int(ch);
if (ch >= '0' && ch <= '9') {
val = val - 48;
} else {
val = val - 55;
}
dec = dec + val * (long long)(pow((int) format, (value.size() - 1) - i));
}
return dec;
}
This code works for all values which are not in 2's complement.
If I pass a hex string which is supposed to be a negative number in decimal, I don't get the right result.
If you don't handle the minus sign, it won't handle itself. Check for it, and memorize the fact that you've seen it. Then, at the end, if you'd seen a '-' as the first character, negate the result.
Other points:
You don't need (nor want) to use pow: it's just results = format * results + digit each time through; see the sketch after these points.
You do need to validate your input, making sure that the digit you obtain is legal in the base (and that you don't have any other odd characters).
You also need to check for overflow.
You should use isdigit and isalpha (or islower and isupper) for your character checking.
You should use e.g. val -= '0' (and not 48) for your conversion from character code to digit value.
You should use [i], and not at(i), to read the individual characters. Compile with the usual development options, and you'll get a crash, rather than an exception, in case of error. But you should probably use iterators, and not an index, to go through the string. It's far more idiomatic.
You should almost certainly accept both upper and lower case for the alphas, and probably skip leading white space as well.
Technically, there's also no guarantee that the alphabetic characters are in order and adjacent. In practice, I think you can count on it for characters in the range 'A'-'F' (or 'a'-'f'), but the surest way of converting a character to a digit is to use a table lookup.
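Putting those points together, a minimal sketch (the signature mirrors the question's, but the body and names here are mine; overflow checking is omitted for brevity):
#include <cctype>
#include <cstddef>
#include <stdexcept>
#include <string>

long long toDecimalFromString(const std::string& value, int base) {
    long long result = 0;
    bool negative = false;
    std::size_t i = 0;
    if (i < value.size() && value[i] == '-') {  // remember the sign
        negative = true;
        ++i;
    }
    for (; i < value.size(); ++i) {
        const unsigned char ch = value[i];
        int digit;
        if (std::isdigit(ch)) {
            digit = ch - '0';
        } else if (std::isalpha(ch)) {
            digit = std::toupper(ch) - 'A' + 10;  // accept both cases
        } else {
            throw std::invalid_argument("bad character");
        }
        if (digit >= base) {
            throw std::invalid_argument("digit not legal in this base");
        }
        result = base * result + digit;  // no pow needed
    }
    return negative ? -result : result;
}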
You need to know whether the specified number is to be interpreted as signed or unsigned (in other words, is "ffffffff" -1 or 4294967295?).
If signed, then to detect a negative number test the most-significant bit. If the ms bit is set, then after converting the number as you do (generating an unsigned value), take the 2's complement (bitwise negate it, then add 1) and treat that as the magnitude of a negative number.
Note: to test the ms bit you can't just test the leading character. If the number is signed, is "ff" supposed to be -1 or 255? You need to know the size of the expected result (if 32 bits and signed, then "ffffffff" is negative, or -1. But if 64 bits and signed, "ffffffff" is positive, or 4294967295). Thus there is more than one right answer for the example "ffffffff".
Instead of testing the ms bit you could just test whether the unsigned result is greater than the "midway point" of the result range (for example 2^31 - 1 for 32-bit numbers).
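As a concrete illustration of the 32-bit signed case, a minimal sketch (the function name and the fixed 32-bit width are assumptions for the example):
#include <iostream>

// Reinterprets an unsigned 32-bit parse result as signed:
// if the most-significant bit is set, subtract 2^32.
long long asSigned32(unsigned long long raw) {
    if (raw > 0x7FFFFFFFULL) {  // past the midway point
        return static_cast<long long>(raw) - 0x100000000LL;
    }
    return static_cast<long long>(raw);
}

int main() {
    std::cout << asSigned32(0xFFFFFFFFULL) << '\n';  // -1
    std::cout << asSigned32(0x7FFFFFFFULL) << '\n';  // 2147483647
}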
I have a char a[] of hexadecimal characters like this:
"315c4eeaa8b5f8aaf9174145bf43e1784b8fa00dc71d885a804e5ee9fa40b16349c146fb778cdf2d3aff021dfff5b403b510d0d0455468aeb98622b137dae857553ccd8883a7bc37520e06e515d22c954eba5025b8cc57ee59418ce7dc6bc41556bdb36bbca3e8774301fbcaa3b83b220809560987815f65286764703de0f3d524400a19b159610b11ef3e"
I want to convert it to letters corresponding to each hexadecimal number like this:
68656c6c6f = hello
and store it in char b[] and then do the reverse
I don't want a block of code, please; I want an explanation of which libraries are used and how to use them.
Thanks
Assuming you are talking about ASCII codes: well, the first step is to find the size of b. Assuming every character is represented by 2 hexadecimal digits (for example, a tab would be 09), the size of b is simply strlen(a) / 2 + 1.
That done, you need to go through the letters of a, 2 by 2, convert them to their integer value and store them as a string. Written as a formula you have:
b[i] = (to_digit(a[2*i]) << 4) + to_digit(a[2*i+1])
where to_digit(x) converts '0'-'9' to 0-9 and 'a'-'f' or 'A'-'F' to 10-15.
Note that if characters below 0x10 are shown with only one digit (the only one I can think of is tab), then instead of using 2*i as the index into a, you should keep a next_index in your loop, which is incremented by 2 if a[next_index] < '8', or by 1 otherwise. In the latter case, b[i] = to_digit(a[next_index]).
The reverse of this operation is very similar. Each character b[i] is written as:
a[2*i] = to_char(b[i] >> 4)
a[2*i+1] = to_char(b[i] & 0xf)
where to_char is the opposite of to_digit.
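A minimal sketch of those two helpers (the names to_digit and to_char come from the formulas above; valid hex input is assumed):
#include <stdio.h>

/* Converts one hex digit character to its value 0-15. */
int to_digit(char x) {
    if (x >= '0' && x <= '9') return x - '0';
    if (x >= 'a' && x <= 'f') return x - 'a' + 10;
    return x - 'A' + 10;  /* assumes valid input */
}

/* Converts a value 0-15 back to a lowercase hex digit. */
char to_char(int v) {
    return (v < 10) ? ('0' + v) : ('a' + v - 10);
}

int main(void) {
    char b = (char)((to_digit('6') << 4) + to_digit('8'));
    printf("%c\n", b);  /* 0x68 is 'h' in ASCII */
    printf("%c%c\n", to_char((unsigned char)b >> 4), to_char(b & 0xf));  /* back to 68 */
    return 0;
}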
Converting the hexadecimal string to a character string can be done by using std::substr to get the next two characters of the hex string, then using std::stoi to convert the substring to an integer. This can be cast to a character that is added to a std::string. The std::stoi function is C++11 only; if you don't have it you can use e.g. std::strtol.
To do the opposite you loop over each character in the input string, cast it to an integer and put it in an std::ostringstream preceded by manipulators to have it presented as a two-digit, zero-prefixed hexadecimal number. Append to the output string.
Use std::string::c_str to get an old-style C char pointer if needed.
No external library, only using the C++ standard library.
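A minimal sketch of both directions along those lines (error handling omitted):
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string hex = "68656c6c6f";

    // Hex -> text: take two characters at a time, convert with
    // std::stoi in base 16, and append the resulting char.
    std::string text;
    for (std::size_t i = 0; i + 1 < hex.size(); i += 2) {
        text += static_cast<char>(std::stoi(hex.substr(i, 2), nullptr, 16));
    }
    std::cout << text << '\n';  // hello

    // Text -> hex: print each byte as a two-digit, zero-filled
    // hexadecimal number into an ostringstream.
    std::ostringstream oss;
    for (unsigned char c : text) {
        oss << std::hex << std::setw(2) << std::setfill('0')
            << static_cast<int>(c);
    }
    std::cout << oss.str() << '\n';  // 68656c6c6f
}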
Forward:
Read two hex chars from input.
Convert to int (0..255). (hint: sscanf is one way)
Append int to output char array
Repeat 1-3 until out of chars.
Null terminate the array
Reverse:
Read single char from array
Convert to 2 hexadecimal chars (hint: sprintf is one way).
Concat buffer from (2) to final output string buffer.
Repeat 1-3 until out of chars.
Almost forgot to mention: only stdio.h and the regular C runtime are required, assuming you're using sscanf and sprintf. You could alternatively create a pair of conversion tables that would radically speed up the conversions.
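A minimal sketch of that recipe with sscanf and sprintf (the buffer sizes are assumptions for the example):
#include <stdio.h>
#include <string.h>

int main(void) {
    const char a[] = "68656c6c6f";
    char b[64] = {0};

    /* Forward: read two hex chars, convert to int, append. */
    size_t n = strlen(a) / 2;
    for (size_t i = 0; i < n; i++) {
        unsigned int byte;
        sscanf(a + 2 * i, "%2x", &byte);  /* two hex digits -> 0..255 */
        b[i] = (char)byte;
    }
    b[n] = '\0';  /* null terminate */
    printf("%s\n", b);  /* hello */

    /* Reverse: print each char back as two hex digits. */
    char hex[128] = {0};
    for (size_t i = 0; i < n; i++) {
        sprintf(hex + 2 * i, "%02x", (unsigned char)b[i]);
    }
    printf("%s\n", hex);  /* 68656c6c6f */
}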
Here's a simple piece of code to do the trick:
#include <cstddef>
#include <string>

int hex_digit_value(char c)
{
    if ('0' <= c && c <= '9') { return c - '0'; }
    if ('a' <= c && c <= 'f') { return c + 10 - 'a'; }
    if ('A' <= c && c <= 'F') { return c + 10 - 'A'; }
    return -1;  // not a hex digit
}

std::string dehexify(std::string const & s)
{
    std::string result(s.size() / 2, '\0');
    for (std::size_t i = 0; i != s.size() / 2; ++i)
    {
        result[i] = hex_digit_value(s[2 * i]) * 16
                  + hex_digit_value(s[2 * i + 1]);
    }
    return result;
}
Usage:
char const a[] = "12AB";
std::string s = dehexify(a);
Notes:
A proper implementation would add checks that the input string length is even and that each digit is in fact a valid hex numeral.
Dehexifying has nothing to do with ASCII. It just turns any hexified sequence of nibbles into a sequence of bytes. I just use std::string as a convenient "container of bytes", which is exactly what it is.
There are dozens of answers on SO showing you how to go the other way; just search for "hexify".
Each hexadecimal digit corresponds to 4 bits, because 4 bits has 16 possible bit patterns (and there are 16 possible hex digits, each standing for a unique 4-bit pattern).
So, two hexadecimal digits correspond to 8 bits.
And on most computers nowadays (some Texas Instruments digital signal processors are an exception) a C++ char is 8 bits.
This means that each C++ char is represented by 2 hex digits.
So, simply read two hex digits at a time, convert to int using e.g. an istringstream, convert that to char, and append each char value to a std::string.
The other direction is just opposite, but with a twist.
Because char is signed on most systems, you need to convert to unsigned char before converting that value again to hex digits.
Conversion to and from hexadecimal can be done using hex, like e.g.
cout << hex << x;
cin >> hex >> x;
for a suitable definition of x, e.g. int x
This should work for string streams as well.
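For example, a minimal sketch with string streams (reading each pair of hex digits through std::hex; note that the reverse direction would need zero-padding via std::setw/std::setfill for bytes below 0x10):
#include <iostream>
#include <sstream>
#include <string>

int main() {
    // Hex -> char: read two digits at a time and parse them with hex.
    std::istringstream in("68656c6c6f");
    std::string text;
    char pair[3] = {0};
    while (in.read(pair, 2)) {
        int x;
        std::istringstream digits(pair);
        digits >> std::hex >> x;
        text += static_cast<char>(x);
    }
    std::cout << text << '\n';  // hello

    // Char -> hex: convert to unsigned char first, in case char is
    // signed, then write the value back out through hex.
    std::ostringstream out;
    for (char c : text) {
        out << std::hex << static_cast<int>(static_cast<unsigned char>(c));
    }
    std::cout << out.str() << '\n';  // 68656c6c6f
}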
I am trying to learn C programming, and I was studying some source code, and there are some things I didn't understand, especially regarding bitwise operators. I read some sites on this, and I kind of got an idea of what they do, but when I went back to look at this code, I could not understand why and how they were used.
My first question is not related to bitwise operators but rather some ascii magic:
Can somebody explain to me how the following code works?
char a = 3;
int x = a - '0';
I understand this is done to convert a char into an int, however I don't understand the logic behind it. Why/How does it work?
Now, Regarding Bitwise operators, I feel really lost here.
What does this code do?
if (~pointer->intX & (1 << i)) { c++; n = i; }
I read somewhere that ~ inverts bits, but I fail to see what this statement is doing and why it is doing that.
Same with this line:
row.data = ~(1 << i);
Other question:
if (x != a)
{
ret |= ROW;
}
What exactly is the |= operator doing? From what I read, |= is OR, but I don't quite understand what this statement is doing.
Is there any way of rewriting this code to make it easier to understand, so that it doesn't use these bitwise operators? I find them very confusing, so hopefully somebody will point me in the right direction on understanding how they work better!
I have a much better understanding of bitwise operators now, and the whole code makes much more sense.
One last thing: apparently nobody responded on whether there would be a "cleaner" way of rewriting this code so that it's easier to understand and maybe not at the "bit level". Any ideas?
This will produce junk:
char a = 3;
int x = a - '0';
This is different - note the quotes:
char a = '3';
int x = a - '0';
The char datatype stores a number that identifies a character. The characters for the digits 0 through 9 are all next to each other in the character code list, so if you subtract the code for '0' from the code for '9', you get the answer 9. So this will turn a digit's character code into the integer value of the digit.
(~pointer->intX & (1 << i))
That will be interpreted by the if statement as true if it's non-zero. There are three different bitwise operators being used.
The ~ operator flips all the bits in the number, so if pointer->intX was 01101010, then ~pointer->intX will be 10010101. (Note that throughout, I'm illustrating the contents of a byte. If it was a 32-bit integer, I'd have to write 32 digits of 1s and 0s).
The & operator combines two numbers into one number, by dealing with each bit separately. The resulting bit is only 1 if both the input bits are 1. So if the left side is 00101001 and the right side is 00001011, the result will be 00001001.
Finally, << means left shift. If you start with 00000001 and left shift it by three places, you'll have 00001000. So the expression (1 << i) produces a value where bit i is switched on, and the others are all switch off.
Putting it all together, it tests if bit i is switched off (zero) in pointer->intX.
So you may be able to figure out what ~(1 << i) does. If i is 4, the thing in brackets will be 00010000, and so the whole thing will be 11101111.
ret |= ROW;
That one is equivalent to:
ret = ret | ROW;
The | operator is like & except that the resulting bit is 1 if either of the input bits is 1. So if ret is 00100000 and ROW is 00000010, the result will be 00100010.
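To see all three of these in action, a small sketch (the values are made up for the demo):
#include <stdio.h>

int main(void) {
    unsigned int intX = 0x6a;  /* 0110 1010 */
    int i = 4;

    /* Test whether bit i is off: non-zero means bit 4 of intX is 0. */
    if (~intX & (1u << i)) {
        printf("bit %d is off\n", i);  /* prints: bit 4 is off */
    }

    /* Build a mask with every bit set except bit i. */
    unsigned int mask = ~(1u << i);
    printf("mask = 0x%x\n", mask);  /* 0xffffffef on 32-bit unsigned */

    /* Set a flag bit with |= . */
    unsigned int ret = 0x20;  /* 0010 0000 */
    ret |= 0x02;              /* 0010 0010 */
    printf("ret = 0x%x\n", ret);  /* 0x22 */
    return 0;
}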
ret |= ROW;
is equivalent to
ret = ret | ROW;
For char a = 3; int x = a - '0'; I think you meant char a = '3'; int x = a - '0';. It's easy enough to understand if you realize that in ASCII the numbers come in order, like '0', '1', '2', ... So if '0' is 48 and '1' is 49, then '1' - '0' is 1.
For bitwise operations, they are hard to grasp until you start looking at bits. When you view these operations on binary numbers then you can see exactly how they work...
010 & 111 = 010
010 | 111 = 111
010 ^ 111 = 101
~010 = 101
I think you probably have a typo, and meant:
char a = '3';
The reason this works is that all the numbers come in order, and '0' is the first. Obviously, '0' - '0' = 0. '1' - '0' = 1, since the character value for '1' is one greater than the character value for '0'. Etc.
1) A char is really just an 8-bit integer. '0' == 48, and all that that implies.
2) (~(pointer->intX) & (1 << i)) evaluates whether the ith bit (from the right) in the intX member of whatever pointer points to is not set. The ~ inverts the bits, so all the 0s become 1s and vice versa; then 1 << i puts a single 1 in the desired location; & combines the two values so that only the desired bit is kept; and the whole thing evaluates to true if that bit was 0 to begin with.
3) | is bitwise or. It takes each bit in both operands and performs a logical OR, producing a result where each bit is set if either operand had that bit set. 0b11000000 | 0b00000011 == 0b11000011. |= is an assignment operator, in the same way that a+=b means a=a+b, a|=b means a=a|b.
Not using bitwise operators CAN make things easier to read in some cases, but it will usually also make your code significantly slower without strong compiler optimization.
The subtraction trick you reference works because ASCII numbers are arranged in ascending order, starting with zero. So if ASCII '0' is a value of 48 (and it is), then '1' is a value of 49, '2' is 50, etc. Therefore ASCII('1') - ASCII('0') = 49 - 48 = 1.
As far as bitwise operators go, they allow you to perform bit-level operations on variables.
Let's break down your example:
(1 << i) -- this is left-shifting the constant 1 by i bits. So if i=0, the result is decimal 1. If i = 1, it shifts the bit one to the left, backfilling with zeros, yielding binary 0010, or decimal 2. If i = 2, you shift the bit two to the left, backfilling with zeros, yielding binary 0100 or decimal 4, etc.
~pointer->intX -- this is taking the value of the intX member of pointer and inverting its bits, setting all zeros to ones and vice versa.
& -- the ampersand operator does a bitwise AND comparison. The results of this will be 1 wherever both the left and right side of the expression are 1, and 0 otherwise.
So the test will succeed if pointer->intX has a 0 bit at the ith position from the right.
Also, |= means to do a bitwise OR comparison and assign the result to the left side of the expression. The result of a bitwise OR is 1 for every bit where the corresponding left or right side bit is 1, and 0 otherwise.
Single quotes are used to indicate that a single char is meant. '0' therefore is the char '0', which has the ASCII code 48.
So with the code as written, 3 - '0' is 3 - 48 (with '3' instead, it would be '3' - '0' = 51 - 48 = 3).
1 << i shifts 1 i places to the left, so only the ith bit from the right is 1.
~pointer->intX negates the field intX, so the bitwise AND returns a true (non-zero) value when the ith bit from the right of intX is not set.
char a = '3';
int x = a - '0';
You had a typo here (notice the 's around the 3): this assigns the ASCII value of the character 3 to the char variable. The next line then takes '3' - '0' and assigns it to x; because of the way ASCII values work, x will then be equal to 3 (the integer value).
In the first comparison, note that the ~ is not applied to the pointer: -> binds more tightly than ~, so it inverts the bits of the value pointer->intX. If I were to read out the following code:
(~pointer->intX & (1 << i))
I would say "(the bitwise NOT of intX read through pointer) AND (1 left shifted i times)"
1 << i is a quick way of multiplying 1 by a power of 2, i.e. if i is 3, then 1 << 3 == 8.
In this case, inverting the bits and then ANDing with a one-bit mask tests whether that bit of intX was 0.
In the 2nd comparison, x |= y is the same as x = x | y
I'm assuming you mean char a = '3'; for the first line of code (otherwise you get a rather strange answer). The basic principle is that ASCII codes for digits are sequential, i.e. the code for '0' = 48, the code for '1' = 49, and so on. Subtracting '0' simply converts from the ASCII code to the actual digit, so e.g. '3' - '0' = 3, and so on. Note that this will only work if the character you're subtracting '0' from is an actual digit - otherwise the result will have little meaning.
a. Without context the "why" of this code is impossible to say. As for what it's doing, the if statement evaluates as true when bit i of pointer->intX is not set, i.e. that particular bit is a 0. Note that the ~ is applied before the &, since unary ~ has higher precedence than binary &. The code could still make better use of parentheses to make the intended order of operations clearer.
b. This is simply creating a number with all bits EXCEPT bit i set to 1. A convenient way of creating a mask for bit i is to use the expression (1<<i).
The bitwise OR operation in this case is used to set the bits specified by the ROW constant to 1. If these bits are not set, it sets them; if they're already set it has no effect.
1) Can somebody explain to me how the following code works? char a = 3; int x = a - '0';
I understand this is done to convert a char into an int; however, I don't understand the logic behind it. Why/How does it work?
Sure. Variable a is of type char, and putting single quotes around the 0 causes C to view it as a char as well. The subtraction is then automagically carried out on the characters' integer codes, and the result is an integer, which is what x is defined as.
2) Now, Regarding Bitwise operators, I feel really lost here.
--- What does this code do? if (~pointer->intX & (1 << i)) { c++; n = i; } I read somewhere that ~ inverts bits, but I fail to see what this statement is doing and why is it doing that.
(~pointer->intX & (1 << i)) is saying:
negate intX, and AND it with a 1 shifted left by i bits
so, what you're getting, if intX = 1011 and i = 2, equates to
(0100 & 0100)
- negate 1011 = 0100
- (1 << 2) = 0100
0100 & 0100 = 0100, which is non-zero, i.e. true :)
then, if the AND operation returns a non-zero value (which, in my example, it does)
{ c++; n = i; }
so, increment c by 1, and set variable n to be i
Same with this line: row.data = ~(1 << i);
Same principle here.
Shift a 1 to the left by i places, and negate.
So, if i = 2 again
(1 << 2) = 0100
~(0100) = 1011
Other question:
if (x != a) { ret |= ROW; }
What exactly is the |= operator doing? From what I read, |= is OR but I don't quite understand what this statement is doing.
if (x != a) (hopefully this is apparent to you....if variable x does not equal variable a)
ret |= ROW;
equates to
ret = ret | ROW;
which means, binary OR ret with ROW
For examples of exactly what AND and OR operations accomplish, you should have a decent understanding of binary logic. Check Wikipedia for truth tables, e.g. under "Bitwise operations".