In the MSDN documentation for CComSafeArray::MultiDimSetAt, alIndex is documented as follows:
Pointer to a vector of indexes for each dimension in the array. The rightmost (least significant) dimension is alIndex[0].
In the documentation for CComSafeArray::MultiDimGetAt, alIndex is documented differently:
Pointer to a vector of indexes for each dimension in the array. The leftmost (most significant) dimension is alIndex[0].
This made me think that, to get to the same element, one would need to reverse the order of the indices in a multidimensional array. However, I have not found this to be the case in practice.
Am I misusing this interface and getting lucky, misunderstanding the documentation, or is this possibly an error in the docs?
It seems to be a docs error - I'd suggest referring to the SafeArrayGetElement/SafeArrayPutElement documentation instead, as it seems to be more accurate.
To set and get the same element you should use the same array of indices (without reversing).
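For what it's worth, a minimal sketch of that (the 3x3 bounds, the index values and 42 are just placeholders), passing one and the same index vector to both calls:

    #include <atlsafe.h>

    void Demo()
    {
        // A two-dimensional 3x3 CComSafeArray<long>; each bound is {cElements, lLbound}.
        SAFEARRAYBOUND bounds[2] = { { 3, 0 }, { 3, 0 } };
        CComSafeArray<long> arr(bounds, 2);

        LONG idx[2] = { 1, 2 };                  // one index per dimension
        HRESULT hr = arr.MultiDimSetAt(idx, 42L);

        long value = 0;
        if (SUCCEEDED(hr))
            hr = arr.MultiDimGetAt(idx, value);  // value is now 42
    }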
By the way, nice catch!
I have two sorted arrays, one containing factors (array a) that when multiplied with values from another array (array b), yields the desired value:
a(idx1) * b(idx2) = value
With idx2 known, I would like to find the idx1 of a that provides the factor necessary to get as close to value as possible.
I have looked at some different algorithms (like this one, for example), but I feel like they would all be subject to potential problems with floating point arithmetic in my particular case.
Could anyone suggest a method that would avoid this?
If I understand correctly, this expression
minloc(abs(a-value/b(idx2)))
will return the index into a of the first occurrence of the value in a which minimises the difference. I expect that the compiler will write code to scan all the elements in a, so this may not be faster in execution than a search which takes advantage of the knowledge that a and b are both sorted. In compensation, it is much quicker to write and, I expect, to debug.
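For comparison, here is the same linear scan sketched in C++, assuming a and b are std::vector<double> (the function name is mine):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Returns the index i that minimises |a[i] - value / b[idx2]|,
    // i.e. the factor that brings a[i] * b[idx2] closest to value.
    std::size_t closest_factor(const std::vector<double>& a,
                               const std::vector<double>& b,
                               std::size_t idx2, double value)
    {
        const double target = value / b[idx2];
        auto it = std::min_element(a.begin(), a.end(),
            [target](double x, double y) {
                return std::fabs(x - target) < std::fabs(y - target);
            });
        return static_cast<std::size_t>(it - a.begin());
    }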
A rather quick question concerning pointers in C++
My problem is: let's say I have a function isWon(char * sign, int i, int j). I call this method by giving it
the address of an element in a 2D array
its coordinates in a locally declared array
Is there any way of e.g. knowing the element's neighbors and getting to them?
Thanks for the help :)
If the array is a true 2D array and not an array of pointers or something like that, then you can add/subtract to/from sign to get other elements' addresses.
For example, memory-wise the previous element in the array is at sign - 1. But if you think of your 2D array as a grid, sign - 1 is not necessarily the element in the previous "column": at the start of a row it wraps around to the last element of the previous row.
You have to be careful how far you step in your array, and ask yourself why you resort to such low-level, dangerous mechanisms that feel out of place in C++.
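A rough sketch of what that pointer arithmetic looks like, assuming the board really is one contiguous ROWS x COLS array of char and the function knows the row width (a bare char* alone does not carry that information):

    const int ROWS = 3;
    const int COLS = 3;

    bool isWon(char* sign, int i, int j)
    {
        // Memory-wise the previous element is sign - 1, but as a grid
        // neighbour that is only the previous column while j > 0.
        char left  = (j > 0)        ? *(sign - 1)    : 0;
        char right = (j + 1 < COLS) ? *(sign + 1)    : 0;
        char up    = (i > 0)        ? *(sign - COLS) : 0;
        char down  = (i + 1 < ROWS) ? *(sign + COLS) : 0;

        // The real win test would go here; this only shows the addressing.
        return left == *sign || right == *sign || up == *sign || down == *sign;
    }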
I'm trying to make a 3 dimensional array of booleans that tells me if I previously visited a location in 3d space for a simple navigation algorithm. The array could be quite large (something along the lines of 1,000,000 x 1,000,000 x 1,000,000 or maybe larger), so I'm wondering if it would be faster to declare an array of that size and set each boolean value to false, or to make a map with a key of coordinate (x, y, z) and a value of type bool.
From what I figure, the array would take O(1) to find or modify a coordinate, and the map would take O(log n) to find or insert a value. Obviously, for accessing values, the array is faster. However, does this offset the time it takes to declare such an array?
Thanks
Even at 1 bit per bool, an array like that will take on the order of 2**57 bytes (10**18 bits, roughly 125 petabytes). I'd suggest a set if there aren't too many elements that will be true.
You can use a class to hide the implementation details, and use a 1D set.
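Something along these lines (the class name and interface are made up for illustration); the set stores only the coordinates actually visited instead of 10**18 flags:

    #include <cstdint>
    #include <set>
    #include <tuple>

    class VisitedSet {
    public:
        void markVisited(std::int64_t x, std::int64_t y, std::int64_t z) {
            visited_.insert(std::make_tuple(x, y, z));
        }
        bool wasVisited(std::int64_t x, std::int64_t y, std::int64_t z) const {
            return visited_.count(std::make_tuple(x, y, z)) != 0;
        }
    private:
        // O(log n) per operation; memory proportional to points actually visited.
        std::set<std::tuple<std::int64_t, std::int64_t, std::int64_t> > visited_;
    };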
Have you tried calculating how much memory would be needed for an array like this? A lot!
Use std::map if ordering of the points is important, or std::unordered_map if not. The unordered map also gives you average constant-time insertion and lookup.
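A rough sketch of the unordered_map variant; packing (x, y, z) into a single 64-bit key is my shortcut to avoid writing a hash for a tuple, and it assumes coordinates stay below 2**21 (about 2 million), which covers the 1,000,000 range above:

    #include <cstdint>
    #include <unordered_map>

    inline std::uint64_t key(std::uint64_t x, std::uint64_t y, std::uint64_t z)
    {
        return (x << 42) | (y << 21) | z;   // 21 bits per axis
    }

    int main()
    {
        std::unordered_map<std::uint64_t, bool> visited;  // average O(1) insert/lookup
        visited[key(10, 20, 30)] = true;
        return visited.count(key(10, 20, 30)) != 0 ? 0 : 1;
    }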
I guess that some kind of search tree is probably what you're looking for (k-d tree for example).
You're going to make an array that is one exabyte, assuming that you use 8 bits per point? Wow, you have a lot of RAM!
I think you should re-think your approach.
Can I check in C(++) if an array is all 0 (or false) without iterating/looping over every single value and without allocating a new array of the same size (to use memcmp)?
I'm abusing an array of bools to have arbitrarily large bitsets at runtime and do some bit-flipping on them
You can use the following condition:
(myvector.end() == std::find(myvector.begin(), myvector.end(), true))
Obviously, internally, this loops over all values.
The alternative (which really should avoid looping) is to override all write-access functions, and keep track of whether true has ever been written to your vector.
UPDATE
Lie Ryan's comments below describe a more robust method of doing this, based on the same principle.
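For illustration, a minimal sketch of that principle (the class and member names are mine, not Lie Ryan's exact proposal): every write goes through one method which maintains a running count of true entries, so the all-false test becomes O(1).

    #include <cstddef>
    #include <vector>

    class TrackedBits {
    public:
        explicit TrackedBits(std::size_t n) : bits_(n, false), trueCount_(0) {}

        void set(std::size_t i, bool value) {
            if (bits_[i] != value) {             // only count real transitions
                trueCount_ += value ? 1 : -1;
                bits_[i] = value;
            }
        }
        bool get(std::size_t i) const { return bits_[i]; }
        bool allFalse() const { return trueCount_ == 0; }

    private:
        std::vector<bool> bits_;
        std::ptrdiff_t trueCount_;
    };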
If it's not sorted, no. How would you plan on accomplishing that? You would need to inspect every element to see if it's 0 or not! memcmp, of course, would also check every element. It would just be much more expensive since it reads another array as well.
Of course, you can early-out as soon as you hit a non-0 element.
Your only option would be to use SIMD (which technically still checks every element, but using fewer instructions), but you generally don't do that in a generic array.
(Btw, my answer assumes that you have a simple static C/C++ array. If you can specify what kind of array you have, we could be more specific.)
If you know that this is going to be a requirement, you could build a data structure consisting of an array (possibly dynamic) and a count of currently non-zero cells. Obviously the setting of cells must go through this abstraction, but that is natural in C++ with overloading, and you can use an opaque type in C.
Assume that you have an array of N elements; you can do a bit check against a set of base vectors.
For example, you have a 15-element array you want to test.
You can test it against an 8-element zero array, a 4-element zero array, a 2-element zero array and a 1-element zero array.
You only have to allocate these elements once, given that you know the maximum size of the arrays you want to test. Furthermore, the test can be done in parallel (and with assembly intrinsics if necessary).
Further improvement in terms of memory allocation can be made by using only the 8-element array, since a 4-element zero array is simply the first half of the 8-element zero array.
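A rough sketch of that chunking, assuming plain bytes and a single preallocated 8-element zero block (sizes and names are illustrative):

    #include <cstddef>
    #include <cstring>

    static const unsigned char kZeros[8] = { 0 };   // reused for every chunk size

    bool isAllZero(const unsigned char* data, std::size_t n)
    {
        std::size_t offset = 0;
        // Compare in decreasing power-of-two chunks: 8 + 4 + 2 + 1 covers 15.
        for (std::size_t chunk = 8; chunk > 0; chunk /= 2) {
            while (n - offset >= chunk) {
                if (std::memcmp(data + offset, kZeros, chunk) != 0)
                    return false;
                offset += chunk;
            }
        }
        return true;
    }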
Consider using boost::dynamic_bitset instead. It has a none member and several other std::bitset-like operations, but its length can be set at runtime.
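For example (the size 1000 is arbitrary):

    #include <boost/dynamic_bitset.hpp>

    bool demo()
    {
        boost::dynamic_bitset<> bits(1000);  // 1000 bits, all zero
        bool allZero = bits.none();          // true
        bits.set(42);                        // flip one bit
        return allZero && !bits.none();
    }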
No, you can compare arrays with memcmp, but you can't compare one value against a block of memory.
What you can do is use algorithms in C++ but that still involves a loop internally.
You don't have to iterate over the entire thing, just stop looping on the first non-zero value.
I can't think of any way to check a set of values other than inspecting them each in turn - you could play games with checking the underlying memory as something larger than bool (__int64 say) but alignment is then an issue.
EDIT:
You could keep a separate count of set bits, and check whether it is non-zero. You'd have to be careful about maintaining this count, so that setting an already-set bit does not increment it, and so on.
knittl,
I don't suppose you have access to some fancy DMA hardware on the target computer? Sometimes DMA hardware supports exactly the operation you require, i.e. "Is this region of memory all-zero?" This sort of hardware-accelerated comparison is a common solution when dealing with large bit-buffers. For example, some RAID controllers use this mechanism for parity checking.
I want to store some kind of distance matrix (2D), where each entry has some alternatives (different coordinates). So I want to access the distance for, say, x=1 with x_alt=3 and y=3 with y_alt=1 by looking in a 4-dimensional multi-array at array[1][3][3][1].
The important thing to notice is the following: the two innermost arrays/vectors don't have the same size for different values of the outer ones.
After a first init step, where I calculate the values, no more modification is needed!
This should be easily possible with the use of STL vectors:
vector<vector<vector<vector<double> > > > extended_distance_matrix;
where I can dynamically iterate over the outer two dimensions and fill only as many alternatives into the inner two dimensions as I need (e.g. with push_back()).
Questions:
Is this kind of data-structure definition possible with Boost.MultiArray? How?
Is it a good idea to use Boost.MultiArray instead of the nested vectors? Performance (especially lookups and memory layout)? Ease of use?
Thanks for any input!
sascha
PS: The boost documentation didn't help me. Maybe one can use multi_array_ref to get already sized arrays into the whole 4D-structure?
Edit:
At the moment I'm thinking of another approach: flattening the alternatives into one bigger matrix with all the distances between the alternatives. Then I only need to calculate the number of alternatives per node, build up the prefix sum (which describes the matrix position/offset), and can then access the information in a two-step way.
But my questions are still open.
It sounds like you need:
multi_array<ublas::matrix<type>,2>
Boost.MultiArray deals with contiguous memory (arranged logically in many dimensions), so it is difficult to add elements in the inner dimensions. MultiArrays can be dynamically resized, e.g. to add elements in any dimension, but this is a costly operation that almost certainly needs (internal) reallocation and copying.
Because of that requirement, MultiArray is not the best option. But from what you say, it looks like a combination of the two would be appropriate for you.
boost::multi_array<std::vector<std::vector<type>>, 2> data
The very nice thing is that the indexing interface doesn't change with respect to boost::multi_array<type, 4>. For example data[1][2][3][4] still makes sense.
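A small sketch of that combination (the 4x4 outer extent, the 4x5 inner sizes and the 0.5 value are placeholders):

    #include <boost/multi_array.hpp>
    #include <vector>

    int main()
    {
        typedef std::vector<std::vector<double> > Alternatives;
        boost::multi_array<Alternatives, 2> data(boost::extents[4][4]);

        // Node (1, 2) happens to have 4 x 5 alternatives; other nodes may differ.
        data[1][2].assign(4, std::vector<double>(5, 0.5));

        double d = data[1][2][3][4];  // same indexing as boost::multi_array<double, 4>
        return d > 0.0 ? 0 : 1;
    }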
I don't know from your post how you handle the inner dimensions, but it could even make sense to use this:
boost::multi_array<boost::multi_array<type, 2>, 2> data
In any case, unless you really need to do linear algebra I would stay away from boost::ublas::matrix, or at most use it for the internal matrix if type is numeric, i.e. boost::multi_array<boost::ublas::matrix<type>, 2> data, which is what the other answer suggests.