This is what I tried:
int i = -1, size = 1;
while (i >> 1)
    size++;
printf("%d", size);
The goal is to determine the size of int without using the sizeof operator.
The above loop turns out to be infinite. Is there a way to fix it so it does what it is intended to do?
Just use unsigned for i, rather than int. The two are guaranteed to have the same size, and right shift of a signed integer is implementation-defined (it will usually shift in the sign bit, which is why your loop never terminates). You will also need to actually update i in the loop: i >>= 1, not i >> 1, which computes a value and discards it. And don't forget to divide the result by CHAR_BIT (which is not guaranteed to be 8) to get the size in bytes.
You have chosen a negative number for right-shifting.
When a negative number is right-shifted, the vacated bits are typically filled with the sign bit, 1 (or not, depending on the implementation), so the value can never become 0 (i.e. false), which is exactly the infinite loop you are complaining about.
Your loop is indeed infinite.
Start from i = 1 and shift it left until i reaches 0, counting the shifts as you go; the count is the number of bits.
edit: Use an unsigned type for i here; left-shifting a 1 into the sign bit of a signed integer is undefined behavior.
In a C++ program I am writing for fun there are frequently numeric values being used that will absolutely never be greater than 255, and I am therefore storing them as unsigned chars to save on memory used (as an unsigned char only uses a byte, as opposed to the 4 bytes of an int). Is this a good idea? Are there any downsides to doing this?
I appreciate tips and insight anyone can give.
It's a trade-off.
Given that you are using unsigned char to represent non-negative (presumably) values that don't exceed 255, you will save on memory usage for storing the particular values.
If your code does arithmetic operations on those unsigned char, then the values may be implicitly promoted to int, the operation done using ints, and then the result converted back. This is consistent with the fact that quite a few real-world machines do not have machine registers that work directly with char types, but do have registers and instructions that are optimised for a larger "native" integral type, i.e. int. Such to-and-fro conversions can mean that code which does a sequence of operations on unsigned chars can have measurably lower speed than coding to use variables of type int. (Notionally, an implementation might "optimise out" such to-and-fro conversions, if analysis shows there is no change of observable result from a sequence of operations, but it is not required to.)
Generally speaking, for representing numeric values, I would suggest not using unsigned char and to default to using int (or another suitable integral type if the range of values you need to represent goes beyond the range that an int is guaranteed to be able to represent). Get the code working first and, if you decide to optimise your code to save on memory, do testing/profiling on representative target systems to determine the extent of any performance impact of using unsigned char. If using C++11 or later, you might also consider using uint8_t (on implementations that support it) but bear in mind there may be similar trade-offs with that as well.
There isn't really any downside, but you might not actually use less memory, depending on the order of your member variable definitions, because of padding bytes (this applies to class and struct members).
My WIN32 (C++) code has a UINT; let's call it number.
The value of this UINT (or INT, it doesn't matter) starts with a 0 and is recognized as an octal value. It's possible to use the standard operators and the value will keep the octal system. The same is possible with hex (with a leading 0x).
The problem is I have to use the value of number in a buffer to calculate with it without changing the value of number. I can assign a value like 07777 to buffer on the declaration line, but if I use an operation like buffer = number, the value in buffer is recognized in decimal.
Anybody has a solution for me?
There's no such thing in C as an "octal value". Integers are stored in binary.
For example, these three constants:
10
012
0xA
all have exactly the same type and value. They're just different notations -- and the difference exists only in your source code, not at run time. Assigning an octal constant to a variable doesn't make the variable octal.
For example, this:
int n = 012;
stores the value ten in n. You can print that value in any of several formats:
printf("%d\n", n);
printf("0%o\n", n);
printf("0x%x\n", n);
In all three cases, the stored value is converted to a human-readable sequence of characters, in decimal, octal, or hexadecimal.
Anybody has a solution for me?
No, because there is no actual problem.
(Credit goes to juanchopanza for mentioning this in a comment.)
if(number == 0) {
//do something
}
Since zero evaluates to false in C++, if number is zero it will be false in the if condition.
How can I check whether it is really the number zero?
There is a misconception here: the if statement does not test the 0 constant or the value of number directly, but the value returned by operator==.
The point is not whether number or 0 are themselves equivalent to "false", but whether == returns true or not. And it returns true if the operands have the same value.
As far as the integral promotions are concerned, the above holds for any pair of integral types.
If number is something else, then everything depends on the convertibility of number and int to a common type (since 0 is an int), on the "precision" the value of number may have, and on the way operator== is implemented between the two types.
You can run into trouble if number is a floating-point value. If that is your case, just compare it against a reasonably small neighborhood of zero. Have a look at this article: http://floating-point-gui.de/errors/comparison/ to better understand the problem.
Trivial example:
#define epsilon 0.0001  /* note: no trailing semicolon in a #define */

if (fabs(number) < epsilon) {
    ....
}

Choosing the correct epsilon can be tricky, depending on the application.
static CBigNum bnProofOfWorkLimit(~uint256(0) >> 32);
That statement is filled with all sorts of magic. What exactly is it doing?
What does >> mean in C++ code?
For integer types, it's the binary right-shift operator: it takes the binary representation of its first operand and moves the bits a number of places to the right. For non-negative values, a >> b is the same as a divided by 2^b, using integer division.
That statement is filled with all sorts of magic. What exactly is it doing?
uint256 isn't a standard type or function; I'll assume it's a big-number type with 256 bits, with suitable operator overloads so that it acts like a standard numeric type. So uint256(0) is a 256-bit number with value zero.
~ is the bitwise complement operator; it flips every bit: set bits become zero, and zero bits become set. So ~uint256(0) will contain 256 bits, all set.
Finally, the shift moves those bits 32 bits to the right. So the top 32 bits will all be zero, and the remaining 224 bits will be set.
Assuming uint256 is a 256 bit unsigned integer type and the operators are defined as for the built-in types, this will:
initialize a 256 bit unsigned integer with 0
bitwise invert it (operator ~)
right-shift it by 32 bits (operator >>)
See Wikipedia on C / C++ operators
My guess is a shift. It's shifting the bits to the right, presumably by 32 bits. We can't say for sure without seeing the uint256 class, due to C++ operator overloading.
How can one calculate the nth Fibonacci number in C/C++? The Fibonacci number can contain up to 1000 digits. I can generate numbers up to 2^64 (unsigned long long). Since that is the limit, I am guessing there has to be some other way of doing it, which I do not know of.
EDIT:
Also, it has to be done without using any external libraries.
I'll give a few hints since you haven't indicated that you've started yet.
A thousand digits is a lot. More than any built-in numeric type in C or C++ can hold.
One way to get around it is to use an arbitrary-precision math library. This will have constructs that will give you basically as many digits as you want in your numbers.
Another way is to roll your own cache-and-carry:
unsigned short int term1[1024]; // 1024 digits from 0-9
unsigned short int term2[1024]; // and another
unsigned short int sum[1024];   // the sum
addBigNumbers(term1, term2, sum); // exercise for the reader (arrays decay to pointers)
I'd expect the algorithm for addBigNumbers to go something like this:
Start at the ones digit (index 0)
Add term1[0] and term2[0]
Replace sum[0] with the right digit of term1[0] + term2[0] (which is ... ?)
Keep track of the carry (how?) and use it in the next iteration (how?)
Repeat for each digit
Now, since you're calculating a Fibonacci sequence, you'll be able to re-use these big numbers to get to the next term in the sequence. You might find that it's faster not to copy them around but to just change which ones are the terms and which one is the sum on your repeated calls to addBigNumbers.
You could try using GMP for arbitrarily large integers.