Having some trouble finding the sum of a 2D vector. Does this look ok?
int sumOfElements(vector<iniMatrix> &theBlocks)
{
    int theSum = 0;
    for (unsigned i = 0; i < theBlocks.size(); i++)
    {
        for (unsigned j = 0; j < theBlocks[i].size(); j++)
        {
            theSum += theBlocks[i][j];
        }
    }
    return theSum;
}
It returns a negative number, but it should return a positive number.
Hope someone can help :)
The code looks correct in the abstract, but you may be overflowing theSum. As a diagnostic, try making theSum a double to see what value you actually get; that will help you sort out the proper integral type to use for it.
double sumOfElements(vector<iniMatrix> &theBlocks)
{
    double theSum = 0;
    /* ... */
    return theSum;
}
When you observe the returned value, you can see whether it would fit in an int or whether you need a wider long or long long type.
If all the values in the matrix are positive, you should consider using one of the unsigned integral types, which would roughly double your range of representable values.
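To see those ranges concretely, you can print the limits of the candidate types. A minimal sketch using std::numeric_limits:

#include <iostream>
#include <limits>

int main()
{
    // upper bounds of the candidate accumulator types on this platform
    std::cout << "int max:                " << std::numeric_limits<int>::max() << '\n';
    std::cout << "unsigned int max:       " << std::numeric_limits<unsigned int>::max() << '\n';
    std::cout << "long long max:          " << std::numeric_limits<long long>::max() << '\n';
    std::cout << "unsigned long long max: " << std::numeric_limits<unsigned long long>::max() << '\n';
    return 0;
}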
The problem is that the int exceeds its range (as others said). When a signed type overflows, the result typically appears negative (formally, signed overflow is undefined behaviour); an unsigned type wraps around and starts from zero again.
If you want to detect the overflow programmatically, you can paste these lines in place of the plain addition:

// caveat: this test relies on wraparound of the signed sum, which is
// formally undefined behaviour; a portable pre-check is sketched below
if (theSum > int(theSum + theBlocks[i][j]))
    // print an error message, throw an exception, break, ...
    break;
else
    theSum += theBlocks[i][j];
For a more generic solution that works with more data types and with operations other than addition, see: How to detect integer overflow?
A solution would be to use unsigned long long, and if that exceeds its range too, you will need a third-party big-integer library.
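For addition specifically, a portable pre-check against the limits avoids relying on wraparound entirely. A minimal hand-rolled sketch, covering both positive and negative addends:

#include <limits>

// true if a + b would overflow an int; check before adding, not after
bool add_would_overflow(int a, int b)
{
    if (b > 0 && a > std::numeric_limits<int>::max() - b) return true;
    if (b < 0 && a < std::numeric_limits<int>::min() - b) return true;
    return false;
}

In the loop you would call add_would_overflow(theSum, theBlocks[i][j]) before the += and bail out if it returns true.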
Like Mokhtar Ashour says, it may be that the variable theSum overflows. Try making it either unsigned (if no values are negative), or change the type from int (commonly 32 bits) to long long (at least 64 bits).
I think it may be an int overflow problem. To make sure, accumulate the result in a wider type and compare it against the int range after the inner loop finishes:

long long result = 0; // accumulate into this instead of an int
/* ... */
if (result > std::numeric_limits<int>::max())
    cout << "hitting int boundaries";

A simpler way to test whether you exceed the int boundaries is to print the result after the inner loop ends and inspect it. If it is too big, just use a bigger data type.
I just came across an extremely strange problem. The function I have is simply:
int strStr(string haystack, string needle) {
    for (int i = 0; i <= (haystack.length() - needle.length()); i++) {
        cout << "i " << i << endl;
    }
    return 0;
}
Then if I call strStr("", "a"), although haystack.length()-needle.length() should be -1, this does not simply return 0; you can try it yourself...
This is because .length() (and .size()) return size_t, which is an unsigned integer type. You think you are getting a negative number, when in fact the subtraction wraps around to the maximum value of size_t (on my machine, 18446744073709551615). This means your for loop will run through all the possible values of size_t, instead of exiting immediately as you expect.
To get the result you want, you can explicitly convert the sizes to ints rather than unsigned ints (see aslg's answer), although this may fail for strings long enough to overflow a standard int.
Edit:
Two solutions from the comments below:
(Nir Friedman) Instead of using int as in aslg's answer, include the <cstdint> header and use an int64_t, which avoids the problem mentioned above.
(rici) Turn your for loop into for (int i = 0; needle.length() + i <= haystack.length(); i++) {, which avoids the problem altogether by rearranging the condition so there is no subtraction at all.
(haystack.length()-needle.length())
length returns a size_t, in other words an unsigned integer type. Given the sizes of your strings, 0 and 1 respectively, the difference underflows and becomes the maximum possible value of that type (roughly 4.2 billion for a 4-byte size_t, but typically far larger on 64-bit platforms).
i<=(haystack.length()-needle.length())
The index i is converted by the compiler to the unsigned type to match, so the loop will not stop until i exceeds the maximum possible value of that type. In practice, it is not going to stop.
Solution:
You have to convert the result of each call to int, like so:
i <= ( (int)haystack.length() - (int)needle.length() )
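Putting rici's rearrangement together, here is a sketch of the function with the loop condition kept entirely in unsigned arithmetic (no subtraction, so nothing can wrap):

#include <iostream>
#include <string>
using namespace std;

int strStr(string haystack, string needle) {
    // needle.length() + i <= haystack.length() never subtracts,
    // so it is false immediately when the needle is longer
    for (size_t i = 0; needle.length() + i <= haystack.length(); i++) {
        cout << "i " << i << endl;
    }
    return 0;
}

Calling strStr("", "a") now skips the loop body and returns 0 as expected.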
I have come to know that one can store values from 0 to 255 in a char data type. So, I tried the following code:
#include <stdio.h>

int main()
{
    unsigned char num[4];
    int sum = 0;
    int i = 0;
    printf("Enter Four Digit Number\n");
    while (i < 4)
    {
        scanf("%1d", &num[i]);
        i++;
    }
    sum = (int)num[0] + (int)num[1] + (int)num[2] + (int)num[3];
    printf("Sum of digits: %d", sum);
    return 0;
}
which seems to run correctly, but as soon as I put the following line in the while loop, the value of i changes to zero every time the loop reiterates and the code breaks:

sum = sum + (int)(num[i]); i++;
I'm using code::blocks with MinGW compiler.
Be VERY VERY careful when you use character types to store numbers. Whether a plain char is signed or unsigned varies with the compiler and platform: on most, the range is -128 to 127; on others it can differ. This becomes problematic when you expect the range to be 0-255. If you want to use a char to store numbers, use unsigned char, as its range is guaranteed to cover 0-255.
Because the numeric representation of a character (signed or unsigned) is not guaranteed, I usually approach the problem by writing a function that uses an array to assign and look up the numeric values. (But I almost never do this; it's easier to use an integer...)
If you want to store an integer value at a particular index:

#include <boost/lexical_cast.hpp>

char a[10];
int i = 9;
a[2] = boost::lexical_cast<char>(i);

I found this to be the best way to convert a char into an int and vice versa.
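For single digits specifically, a dependency-free alternative relies on the standard's guarantee that the characters '0' through '9' are contiguous; a small sketch:

// assumes 0 <= d <= 9
char digit_to_char(int d)
{
    return static_cast<char>('0' + d);
}

// assumes '0' <= c <= '9'
int char_to_digit(char c)
{
    return c - '0';
}

So digit_to_char(9) yields '9', and char_to_digit('9') yields 9 again.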
This works fine for me.
while (i < 4) {
    scanf("%1d", &num[i]);
    sum += (int)(num[i++]);
}
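For what it's worth, the behaviour described in the question (i being reset to zero) is consistent with a format-specifier mismatch: %d makes scanf write a whole int through the unsigned char pointer, which can clobber neighbouring variables. A sketch of the loop with the matching %hhu length modifier (C99/C++11), kept in the style of the question's code:

while (i < 4)
{
    // %hhu tells scanf the target is an unsigned char,
    // so it writes exactly one byte instead of a full int
    if (scanf("%1hhu", &num[i]) != 1)
        break;
    sum += num[i];
    i++;
}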
I want to calculate the maximum value (int) of i for which (i*(i+1)*(2*i+1))/3 < 4,294,967,295 (the unsigned int limit).
#include <iostream>
#include <limits>
#include <cstdio>
using namespace std;

int main()
{
    unsigned int i = 1;
    unsigned int l = std::numeric_limits<unsigned int>::max();
    while (l > ((i * (i + 1) * (2 * i + 1)) / 3))
    {
        i++;
    }
    cout << (i - 1);
    getchar();
    return 0;
}
Your problem is caused by comparing the unsigned int l to an expression cast to int; this gives misleading results. In the second case, the inner expression is evaluated entirely as unsigned int and cast to int only after evaluation (with a loss of range that can cut off the positive value). In your first case, the numerator of the division is cast to int before the division is applied.

You would do better to write your condition like this, or better yet omit the cast altogether (there is not a single float or double operation in your expression; you are dealing solely with unsigned int):

while (l > (unsigned int)(i * (i + 1) * (2 * i + 1)) / 3) { // ...
//         ^^^^^^^^^^^^^^

Even if you do so, you will find your loop running endlessly or for a very long time. IMHO it makes no sense to check whether the result of the condition expression might be bigger than std::numeric_limits<unsigned int>::max(); it cannot be bigger.
This code will not give you the correct answer. The calculation can be rewritten as i*(i+1)*(2*i+1) < 3 * 4,294,967,295; now consider what that means for the left-hand side: it would have to exceed the range of unsigned int before the loop could stop, so it wraps around instead and the comparison never behaves as intended.
The inequality appearing in the while loop is of order 3. A cubic curve has a very steep slope: a small change in i produces a huge change in the value of the polynomial. The while loop therefore soon runs into overflow in the unsigned int comparison, which yields a never-ending loop (yes, never-ending; I tried).
The solution is simple: take the logarithm of both sides of the inequality. The third-order polynomial becomes linear in the logarithm, the numbers stay small, and eventually it worked.
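An alternative sketch that avoids the overflow without logarithms: do the comparison in double. The products involved stay far below 2^53, so double represents them exactly (the bound 4294967295.0 is the unsigned int limit from the question):

#include <iostream>

int main()
{
    const double limit = 4294967295.0; // unsigned int max as a double
    unsigned int i = 1;
    // mixing in 1.0 promotes the whole product to double,
    // so it cannot wrap around like the unsigned int version
    while (i * (i + 1.0) * (2.0 * i + 1.0) / 3.0 < limit)
        i++;
    std::cout << (i - 1) << '\n'; // prints 1860
    return 0;
}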
In a lot of code examples, source code, libraries, etc. I see the use of int when, as far as I can see, an unsigned int would make much more sense.
One place I see this a lot is in for loops. See below example:
for (int i = 0; i < length; i++)
{
    // Do Stuff
}
Why on earth would you use an int rather than an unsigned int? Is it just laziness - people can't be bothered with typing unsigned?
Using unsigned can introduce programming errors that are hard to spot, and it's usually better to use signed int just to avoid them. One example would be when you decide to iterate backwards rather than forwards and write this:
// i >= 0 is always true for an unsigned type, so this loop never ends
for (unsigned i = 5; i >= 0; i--) {
    printf("%u\n", i);   // %u matches the unsigned argument
}
Another would be if you do some math inside the loop:
for (unsigned i = 0; i < 10; i++) {
    for (unsigned j = 0; j < 10; j++) {
        // when j > i, i - j wraps around to a huge unsigned value,
        // so the test is true for pairs the author never intended
        if (i - j >= 4) printf("%u %u\n", i, j);
    }
}
Using unsigned introduces the potential for these sorts of bugs, and there's not really any upside.
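For completeness, the reverse loop can be written correctly with unsigned types using the "test before decrement" idiom, though the fact that it is this easy to get wrong rather supports the point above. A sketch:

#include <cstdio>

int main()
{
    // i-- > 0 tests first, then decrements: the body sees 5, 4, ..., 0,
    // and the loop stops cleanly instead of wrapping around
    for (unsigned i = 6; i-- > 0; ) {
        printf("%u\n", i);
    }
    return 0;
}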
It's generally laziness or lack of understanding.
I always use unsigned int when the value should not be negative. That also serves the documentation purpose of specifying what the correct values should be.
IMHO, the assertion that it is safer to use "int" than "unsigned int" is simply wrong and a bad programming practice.
If you have used Ada or Pascal, you'd be accustomed to the even safer practice of specifying explicit ranges for values (e.g., an integer that can only be 1, 2, 3, 4, or 5).
If length is also an int, then you should use the same integer type; otherwise weird things happen when you mix signed and unsigned types in a comparison. Most compilers will give you a warning.
You could go on to ask, why should length be signed? Well, that's probably historical.
Also, if you decide to reverse the loop, i.e.

for (int i = length - 1; i >= 0; i--)
{
    // do stuff
}
the logic breaks if you use unsigned ints.
I choose to be as explicit as possible while programming: if I intend a variable's value to always be positive, then I use unsigned. Many here mention "hard-to-spot bugs" but few give examples, so, unlike most posts here, consider the following example that advocates using unsigned:
enum num_things {
    THINGA = 0,
    THINGB,
    THINGC,
    NUM_THINGS
};

int unsafe_function(int thing_ID) {
    if (thing_ID >= NUM_THINGS)
        return -1;
    ...
}

int safe_function(unsigned int thing_ID) {
    if (thing_ID >= NUM_THINGS)
        return -1;
    ...
}

int other_safe_function(int thing_ID) {
    if ((thing_ID < 0) || (thing_ID >= NUM_THINGS))
        return -1;
    ...
}
/* Error not caught */
unsafe_function(-1);
/* Error is caught */
safe_function((unsigned int)-1);
In the above example, what happens if a negative value is passed in as thing_ID? In the first case, you'll find that the negative value is not greater than or equal to NUM_THINGS, and so the function will continue executing.
In the second case, you'll actually catch this at run time, because thing_ID is unsigned: the conditional executes an unsigned comparison, so the -1 converts to a huge positive value and fails the range check.
Of course, you could do something like other_safe_function, but that seems more of a kludge than simply being explicit and using unsigned to begin with.
I think the most important reason is that if you choose unsigned int, you can get some logic errors that are hard to spot. In fact, you often do not need the extra range of unsigned int, and using int is safer.
This tiny code question is use-case related. If you index into a std::vector, the parameter of operator[] is actually the unsigned size_type, and there are more modern ways to iterate in C++ anyway, e.g. for (const auto &v : vec) {} or iterators. In a calculation where you never subtract or reach a negative number, you can and should use unsigned; it documents the range of expected values better. Sometimes, as many posted examples here show, you actually need int. The truth is that it all depends on the use case and the situation; no one strict rule applies everywhere, and it would be rather dumb to force one over all of them.
Is it better to cast the iterator condition right operand from size_t to int, or iterate potentially past the maximum value of int? Is the answer implementation specific?
int a;
for (size_t i = 0; i < vect.size(); i++)
{
    if (some_func((int)i))
    {
        a = (int)i;
    }
}
int a;
for (int i = 0; i < (int)vect.size(); i++)
{
    if (some_func(i))
    {
        a = i;
    }
}
I almost always use the first variation, because I find that about 80% of the time, I discover that some_func should probably also take a size_t.
If in fact some_func takes a signed int, you need to be aware of what happens when vect gets bigger than INT_MAX. If the solution isn't obvious in your situation (it usually isn't), you can at least replace some_func((int)i) with some_func(numeric_cast<int>(i)) (see Boost.org for one implementation of numeric_cast). This has the virtue of throwing an exception when vect grows bigger than you've planned on, rather than silently wrapping around to negative values.
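If pulling in Boost is not an option, a hand-rolled equivalent of that checked conversion is short. A sketch (the helper name checked_int is made up here):

#include <cstddef>
#include <limits>
#include <stdexcept>

// convert size_t to int, throwing instead of silently wrapping around
int checked_int(std::size_t value)
{
    if (value > static_cast<std::size_t>(std::numeric_limits<int>::max()))
        throw std::overflow_error("value does not fit in an int");
    return static_cast<int>(value);
}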
I'd just leave it as a size_t, since there's no good reason not to. What do you mean by "iterate potentially past the maximum value of int"? You're only iterating up to the value of vect.size().
For most compilers, it won't make any difference. On 32-bit systems it's obvious, but even on 64-bit systems, both variables will probably be stored in a 64-bit register and pushed on the stack as a 64-bit value.
If the compiler stores int values as 32-bit values on the stack, the first function should be more efficient in terms of CPU cycles.
But the difference is negligible (although the second function "looks" cleaner).