Understanding CAtlArray::SetCount - c++

I don't quite understand how to call CAtlArray::SetCount.
This is the signature:
bool SetCount(size_t nNewSize, int nGrowBy = -1);
And here's an explanation of the parameters, from the docs:
nNewSize
The required size of the array.
nGrowBy
A value used to determine how large to make the buffer. A
value of -1 causes an internally calculated value to be used.
So, let's say you have an array of some current length, and you'd like to add another 7 units to it. How would you call SetCount? Do you get the current count, add 7, and then pass that number as the first argument? Or do you just pass 7 as the second argument -- and if so, what's the first argument?

The lack of detail on the nGrowBy parameter is a hint that Microsoft does not expect ordinary programmers to use it. For normal operations you can ignore it and trust the default value of -1 to do the right thing.
So the correct way to add a number of items is what you guessed:
find the current size (GetCount()) if you do not have it at hand
add the number of units you want to add
and pass that value as the first argument of SetCount (leaving the default value of -1 for the second argument)

nGrowBy sets the storage growing strategy:

  nGrowBy   Strategy
  -1        Use the last set strategy. If none was set, the default strategy is 0.
  0         Set the default strategy, which grows storage by at least 50%.
  >0        Set a new strategy which grows storage by at least nGrowBy items.
Whenever nNewSize is larger than the size calculated by the growing strategy, nNewSize is used instead of the calculated value (this explains the "at least" in the table above).
In most cases you should just use -1 (i.e., not specify it at all). Providing an explicit value for nGrowBy makes sense only when you know something specific about future array resizing, but even then it is usually better to just call SetCount(final_size).

Related

Which value is the current maximum at std::max_element with lambda?

I'm using std::max_element with a lambda, and I have a special value in the row which is always considered to be lower than anything else.
So when the parameters are given to the comparison lambda:
which of the two parameters is the current maximum, i.e. the maximum of the values evaluated before?
This would help me in a way that I won't have to check both values for my magic value.
The Compare functor must implement a strict weak ordering, acting like < (less than). It is not specified which parameter is the current maximum. It must be one of them, because exactly N-1 comparisons are made, but it can be either of them depending on the if statement used in the loop.
The logic then dictates that the larger of those parameters will be the new maximum.

How to delete specific elements from an array in c++

I don't know the numbers which are stored in the [multidimensional] array, as I get them from a sensor. I just know that if the same number is repeated more than 5 times, that number should be deleted.
Please help.
How to delete specific elements from an array
It depends on what you mean by "delete". An array of x numbers always has exactly x numbers. An integer can't have a state that represents a "deleted" number, unless you decide that a specific value signifies such a state. A typical choice would be -1 if only positive values are used otherwise. A floating point number could be set to NaN, but considering the "repeated 5 times" requirement, remember that equality comparison of floating point numbers is not trivial.
Or, you could maintain a parallel array of bools which signifies whether the number at the corresponding index has been deleted.
Another approach would be to augment your array with a pointer to the last "used" number (or rather, to the one after the last used number). This allows you to represent a smaller, dynamic array that fits inside the whole array. The size of such a dynamic array would be the distance between the address of the first number and the pointer, and the size may change up to x. The numbers beyond the pointer would then be considered deleted. You must take care not to access the deleted numbers thinking they contain valid data. If you want to delete a number in the middle of the array, simply copy all numbers after it one index to the left and decrement the pointer. If you don't want to implement this yourself (and you shouldn't want to), use std::vector instead, since this is pretty much what vector does under the hood.

Determining Array index upto which Array is filled?

I want to know up to what index my array is filled. I know one method: maintain a temporary variable inside the loop and keep updating it, which at the end determines the size.
I want to know whether, besides this method, there is any other way to do this task. Ideally O(1) (if possible), or anything better than O(n).
There is no generic way to do that as all elements of an array always contain a value.
A couple of common ways to handle that:
keep track of "valid" elements yourself, as you suggested in the post.
have a sentinel element that marks a "missing" value and check each element for it; the first element with such a value marks the "end of the filled array". For reference types you can use null; for other types there is sometimes a specific value that is rarely used and can be treated as "missing", e.g. the max value of integer types.
The second approach is the way C-style strings are implemented: an array of characters up to a 0 character, so you can always compute the length of the string even if it is stored in a longer array of chars.
Will this do?
size_t size_of_array = sizeof(array)/sizeof(array[0]);
Something like that; do correct the syntax. :)

Set all values of a row and/or column in c++ to 1 or 0

I have a problem which requires resetting all values in a row or column to 0 or 1. The code I am using is the normal naive approach of setting the values by iterating each time. Is there any faster implementation?
//Size of board: N*N
i=0;
cin>>x>>y;x--;
if(query=="SetRow")
{
    while(i!=N){ board[x][i]=y;i++;}  // row x: vary the column index
}
else // "SetColumn"
{
    while(i!=N){ board[i][x]=y;i++;}  // column x: vary the row index
}
y can be 0 or 1
Well, other than iterating the columns, there are a few optimizations you might want to make:
Iterating columns is less efficient than iterating rows (about 4x slower) due to cache performance. In column iteration you get a cache miss for every element, while in row iteration you get a cache miss for only 1 out of 4 elements (usually; it depends on the architecture and the size of the data, but a cache line typically fits 4 integers).
Thus, if you iterate columns more often than rows, redesign so that you iterate rows more often. This thread discusses a similar issue.
Also, after you do it - you can use memset() which I believe is better optimized for this task.
(Note: Compilers might do that for you automatically in some cases)
You can use lazy initialization; there is actually an O(1) algorithm to initialize an array with a constant value, described in more detail here: initialize an array in constant time. This comes at the cost of roughly triple the amount of space, and more expensive accesses later on.
The idea behind it is to maintain an additional stack (logically; implemented as an array plus a pointer to its top) and an additional array. The additional array indicates when each element was first initialized (a number from 0 to n), and the stack records which elements have already been modified.
When you access array[i]: if additionalArray[i] < top && stack[additionalArray[i]] == i, the value is array[i]; otherwise it is the "initialized" default value.
When doing array[i] = x, if it was not initialized yet (checked as above), you set additionalArray[i] = top and stack[top] = i, then increase top.
This results in O(1) initialization, but as said, it requires additional memory and makes each access more expensive.
The same principles described by the article regarding initializing an array in O(1) can also be applied here.
The problem is taken from a running CodeChef long contest... hail cheaters... close this thread.

Can I check in C(++) if an array is all 0 (or false)?

Can I check in C(++) if an array is all 0 (or false) without iterating/looping over every single value and without allocating a new array of the same size (to use memcmp)?
I'm abusing an array of bools to have arbitrarily large bitsets at runtime and do some bit-flipping on it.
You can use the following condition:
(myvector.end() == std::find(myvector.begin(), myvector.end(), true))
Obviously, internally, this loops over all values.
The alternative (which really should avoid looping) is to override all write-access functions, and keep track of whether true has ever been written to your vector.
UPDATE
Lie Ryan's comments below describe a more robust method of doing this, based on the same principle.
If it's not sorted, no. How would you plan on accomplishing that? You would need to inspect every element to see whether it's 0 or not! memcmp, of course, would also check every element; it would just be much more expensive, since it reads another array as well.
Of course, you can early-out as soon as you hit a non-0 element.
Your only option would be to use SIMD (which technically still checks every element, but using fewer instructions), but you generally don't do that in a generic array.
(Btw, my answer assumes that you have a simple static C/C++ array. If you can specify what kind of array you have, we could be more specific.)
If you know that this is going to be a requirement, you could build a data structure consisting of an array (possibly dynamic) and a count of currently non-zero cells. Obviously the setting of cells must be abstracted through this structure, but that is natural in C++ with overloading, and you can use an opaque type in C.
Assuming you have an array of N elements, you can do a bit check against a set of base vectors.
For example, say you have a 15-element array you want to test.
You can test it against an 8-element zero array, a 4-element zero array, a 2-element zero array and a 1-element zero array.
You only have to allocate these arrays once, given that you know the maximum size of the arrays you want to test. Furthermore, the tests can be done in parallel (and with assembly intrinsics if necessary).
A further improvement in terms of memory allocation is to use only the 8-element array, since a 4-element zero array is simply the first half of the 8-element zero array.
Consider using boost::dynamic_bitset instead. It has a none() member and several other std::bitset-like operations, but its length can be set at runtime.
No: you can compare two arrays with memcmp, but you can't compare one value against a block of memory.
What you can do is use the C++ standard algorithms, but that still involves a loop internally.
You don't have to iterate over the entire thing, though; just stop looping at the first non-zero value.
I can't think of any way to check a set of values other than inspecting each in turn. You could play games with checking the underlying memory as something larger than bool (__int64, say), but alignment then becomes an issue.
EDIT:
You could keep a separate count of set bits, and check that it is non-zero. You'd have to be careful about maintaining it, so that setting an already-set bit does not increment the count, and so on.
knittl,
I don't suppose you have access to some fancy DMA hardware on the target computer? Sometimes DMA hardware supports exactly the operation you require, i.e. "Is this region of memory all-zero?" This sort of hardware-accelerated comparison is a common solution when dealing with large bit-buffers. For example, some RAID controllers use this mechanism for parity checking.