"Tricks" to filling a "rolling" time-series array absent brute-force pushback of all values each iteration - c++

My application is financial, in C++, using Visual Studio 2003.
I'm trying to maintain an array of the last (x) values for an observation, and as each new value arrives I currently loop to push every existing value back one slot and then add the new value at the front.
It's computationally intensive, and I've been trying to be clever and come up with some way around this. I've probably either stated an oxymoronic problem or reached the limit of my intellect, or both.
Here's an idea I have:
Suppose it's 60 seconds of data, and new data arrive each second. Suppose we keep an integer between 0 and 59 that serves to index an element of the array. Each second, when the data arrives, we first increment the integer (wrapping from 59 back to 0) and then overwrite the element of the array at that index with the new data. Then, in our calculations, we treat that same integer as the base index and work backwards from it to zero, wrap around to 59, and continue back down again. The formulas in the math would be a bit more tedious to write. But my application does a lot of these push-back/fill operations on arrays, each second, for several data points, and each array holds 3600 elements per data series (one hour of seconds).
Does the board think this is a good idea? Or am I being silly here?

What you're describing is nothing more than a circular buffer. There's an implementation in Boost, and probably in other libraries as well, and a good description of the algorithm on Wikipedia (http://en.wikipedia.org/wiki/Circular_buffer).
And yes, it's a good solution for the problem you describe.
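For illustration, a minimal sketch using boost::circular_buffer (assuming Boost is available and builds in your environment; window and onNewValue are placeholder names):

#include <boost/circular_buffer.hpp>

boost::circular_buffer<double> window(60);   // keeps only the last 60 observations

void onNewValue(double value)
{
    window.push_back(value);   // once the buffer is full, this silently overwrites the oldest entry
}

// window.front() is the oldest value, window.back() the newest, and window[i]
// walks from oldest to newest with no manual index arithmetic.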

You could use modulo as you suggested (hint: x % y is the syntax for x modulo y), or you could maintain two buffers and swap which one holds the current data to be read and which holds the stale data to be overwritten. For copying large quantities of plain-old-data (POD), you should take a look at memcpy. Of course, in quantitative trading, people will do all sorts of things to get a speed edge, including custom hardware that allows one to bypass several layers of copying.
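A minimal sketch of the hand-rolled modulo version described in the question (illustrative names; the "tedious formula" for looking back k seconds reduces to a single modulo expression):

const int N = 3600;                 // one hour of one-second observations
double series[N] = { 0.0 };
int head = N - 1;                   // index of the most recent value

void addValue(double value)
{
    head = (head + 1) % N;          // advance and wrap instead of shifting 3599 elements
    series[head] = value;
}

double valueSecondsAgo(int k)       // k = 0 is the current second, 0 <= k < N
{
    return series[(head + N - k) % N];
}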

Are you sure you are talking about arrays? They don't have a "push" operation - it sounds more like an std::vector.
Anyway, here is a solution for what I think you want:
If I understood it right, you want a collection of 3600 elements where, each second, the oldest element drops off and a new element is added.
So you should use a linked list queue for that task. Operations are performed in O(1).
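If you do take the queue route, a minimal sketch with std::list (any container with O(1) push at the back and pop at the front behaves the same way; the 3600-element window size comes from the question):

#include <list>

std::list<double> window;          // newest value at the back, oldest at the front

void addValue(double value)
{
    window.push_back(value);       // O(1) append
    if (window.size() > 3600)
        window.pop_front();        // O(1) removal of the oldest value
}

Note that with this approach you give up cheap random access into the middle of the window.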

Related

How to efficiently add and remove vector values C++

I am trying to efficiently calculate the averages from a vector.
I have a matrix (vector of vectors) where:
row: the days I am going back (250)
column: the types of things I am calculating the average of (10,000 different things)
Currently I am using .push_back() to add the newest value and then erase() to remove the oldest one, which essentially shifts every remaining value in the column. As this method goes through all the values, my code is very slow.
I am thinking of a method based on substitution (overwriting in place), however I have a hard time implementing the idea, as all the values have an order (i.e. I need to remove the oldest value and the value I add/substitute will be the newest).
Below is my code so far.
Any ideas for a solution or guides for the right direction will be much appreciated.
//declaration and initialization
vector<vector<float> > vectorOne(250, vector<float>(10000, 0.0f));

//This is the slow method
vectorOne[column].push_back(1.0f);                  //add newest value at the back
vectorOne[column].erase(vectorOne[column].begin()); //remove oldest value from the front
You probably need a different data structure.
The problem sounds like a queue. You add to the end and take from the front. With real queues, everyone then shuffles up a step. With computer queues, we can use a circular buffer (you do need to be able to get a reasonable bound on maximum queue length).
I suggest implementing your own on top of a plain C array first, then switching to a library implementation (e.g. std::deque via std::queue, or Boost's circular_buffer) once you've understood the principle.
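Applied to the 250-day window above, a minimal sketch of that idea (names are illustrative): the push_back/erase pair becomes a single overwrite plus a wrapping row index.

#include <vector>
#include <cstddef>

const std::size_t days = 250, things = 10000;
std::vector<std::vector<float> > data(days, std::vector<float>(things, 0.0f));
std::size_t head = 0;                        // row currently holding the oldest day

void addNewDay(const std::vector<float>& newValues)  // newValues.size() == things
{
    data[head] = newValues;                  // overwrite the oldest row in place
    head = (head + 1) % days;                // the next-oldest row becomes the next victim
}

// The value of thing j, d days back (d = 0 is the newest day):
//   data[(head + days - 1 - d) % days][j]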

What's the most efficient way to store a subset of column indices of big matrix and in C++?

I am working with a very big matrix X (say, 1,000-by-1,000,000). My algorithm goes like following:
Scan the columns of X one by one and, based on some filtering rules, identify only a subset of columns that are needed. Let S denote this subset of column indices. Its size depends on the filter, so it is unknown before the computation and will change if the filtering rules are different.
Loop over S and do some computation with a column x_i if i is in S. This step needs to be parallelized with OpenMP.
Repeat 1 and 2 for 100 times with changed filtering rules, defined by a parameter.
I am wondering what the best way is to implement this procedure in C++. Here are two ways I can think of:
(a) Use a 0-1 array (with length 1,000,000) to indicate needed columns for Step 1 above; then in Step 2 loop over 1 to 1,000,000, use if-else to check indicator and do computation if indicator is 1 for that column;
(b) Use a std::vector for S and push_back each column index identified as needed; then loop only over S, each time extracting a column index from S and doing the computation. (I thought about this approach, but I've heard that push_back is expensive even when just storing integers.)
Since my algorithm is very time-consuming, I assume a little time saving in the basic step would mean a lot overall. So my question is, should I try (a) or (b) or other even better way for better performance (and for working with openMP)?
Any suggestions/comments for achieving better speedup are very appreciated. Thank you very much!
To me, it seems that "step #1 really does not matter much." (At the end of the day, you're going to wind up with: "a set of columns, however represented.")
To me, what's really going to matter is: "just what's gonna happen when you unleash ('parallelized ...') step #2."
"An array of 'ones and zeros,'" however large, should be fairly simple for parallelization, while a more-'advanced' data structure might well, in this case, "just get in the way."
"One thousand mega-bits, these days?" Sure. Done. No problem. ("And if not, a simple array of bit-sets.") However-many simultaneously executing entities should be able to navigate such a data structure, in parallel, with a minimum of conflict . . . Therefore, to my gut, "big bit-sets win."
I think you will find std::vector easier to use. Regarding push_back, the cost is incurred when the vector reallocates (and maybe copies) its data. To avoid that (if it matters), you could reserve() capacity for 1,000,000 elements up front. Your vector is then 8 MB, insignificant compared to your problem size, and less than two orders of magnitude bigger than a bitmap would be, while being a lot simpler to deal with: if we call your vector S, then S[n] is the index of the nth interesting column and your column access is just x[S[n]].
(Based on my gut feeling) I'd probably go for pushing back into a vector, but the answer is quite simple: Measure both methods (they are both trivial to implement). Most likely you won't see a noticeable difference.
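For reference, a minimal sketch of option (b) along those lines; passes_filter and process_column are placeholders standing in for your filtering rule and per-column computation:

#include <vector>
#include <cstddef>

bool passes_filter(std::size_t j) { return j % 3 == 0; }        // placeholder rule
void process_column(std::size_t j) { (void)j; /* heavy work on column j of X */ }

int main()
{
    const std::size_t n_cols = 1000000;

    std::vector<std::size_t> S;
    S.reserve(n_cols);                       // reserve once, so push_back never reallocates

    for (std::size_t j = 0; j < n_cols; ++j)
        if (passes_filter(j))
            S.push_back(j);                  // step 1: collect the needed column indices

    #pragma omp parallel for
    for (long k = 0; k < static_cast<long>(S.size()); ++k)
        process_column(S[k]);                // step 2: touch only the selected columns
    return 0;
}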

3D-Grid of bins: nested std::vector vs std::unordered_map

Pros, I need some performance opinions on the following:
1st Question:
I want to store objects in a 3D-Grid-Structure, overall it will be ~33% filled, i.e. 2 out of 3 gridpoints will be empty.
Maybe Option A)
vector<vector<vector<deque<Obj> > > > grid; // (SizeX, SizeY, SizeZ)
grid[x][y][z].push_back(someObj);
This way I'd have a lot of empty deques, but accessing one of them would be fast, wouldn't it?
The Other Option B) would be
std::unordered_map<Pos3D, deque<Obj>, Pos3DHash, Pos3DEqual> Pos3DMap;
where I add & delete deques as data is added/deleted. Probably less memory used, but maybe slower? What do you think?
2nd Question (follow up)
What if I had multiple containers at each position? Say 3 buckets for 3 different entities, say object types ObjA, ObjB, ObjC per grid point, then my data essentially becomes 4D?
Using Option 1B I could just extend Pos3D to include the bucket number to account for even more sparse data.
Possible queries I want to optimize for:
Give me all Objects out of ObjA-buckets from the entire structure
Give me all Objects out of ObjB-buckets for a set of grid-positions
Which is the nearest non-empty ObjC-bucket to position x,y,z?
PS:
I had also thought about a tree-based data structure before, while reading about nearest-neighbour approaches. Since my data is so regular, I thought I'd skip all the tree-building subdivision of cells into smaller pieces and just make a static 3D grid of the final leaves. That's how I came to ask about the best way to store this grid here.
A question associated with this: if I have a map<int, Obj>, is there a fast way to ask for "all objects with keys between 780 and 790"? Or is the fastest way to build the above-mentioned tree?
EDIT
I ended up going with a 3D boost::multi_array with Fortran ordering. It's a little bit like the chunks that games like Minecraft use, which in turn is a little like a kd-tree with a fixed leaf size and a fixed number of leaves. It works pretty fast now, so I'm happy with this approach.
Answer to 1st question
As @Joachim pointed out, this depends on whether you prefer fast access or small data. Roughly, this corresponds to your options A and B.
A) If you want fast access, go with a multidimensional std::vector, or an array if you will. std::vector brings easier maintenance at a minimal overhead, so I'd prefer that. It consumes O(N^3) space, where N is the number of grid points along one dimension. To get the best performance when iterating over the data, order the loops so that the innermost (last) index varies fastest and the outermost index varies slowest.
B) If you instead wish to keep things as small as possible, use a hash map, and use one that is optimized for space. That results in O(N) space, with N being the number of stored elements. There are benchmarks comparing several hash maps; I've had good experiences with google::sparse_hash_map, which has the smallest constant overhead I have seen so far. Plus, it is easy to add to your build system.
If you need a mixture of speed and small data or don't know the size of each dimension in advance, use a hash map as well.
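A minimal sketch of option A under those assumptions (Obj and the grid sizes are placeholders), with the loop nest ordered as described:

#include <vector>
#include <deque>
#include <cstddef>

struct Obj { int payload; };                 // stand-in for your object type

int main()
{
    const std::size_t SizeX = 64, SizeY = 64, SizeZ = 64;   // illustrative sizes

    std::vector<std::vector<std::vector<std::deque<Obj> > > > grid(
        SizeX, std::vector<std::vector<std::deque<Obj> > >(
                   SizeY, std::vector<std::deque<Obj> >(SizeZ)));

    // Iterate so that the innermost (last) index varies fastest; this walks
    // each innermost vector's contiguous storage in order.
    std::size_t nonEmpty = 0;
    for (std::size_t x = 0; x < SizeX; ++x)
        for (std::size_t y = 0; y < SizeY; ++y)
            for (std::size_t z = 0; z < SizeZ; ++z)
                if (!grid[x][y][z].empty())
                    ++nonEmpty;
    return static_cast<int>(nonEmpty);
}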
Answer to 2nd question
I'd say your data is 4D if you have a variable number of elements along the 4th dimension, or a fixed, large number of elements. With option 1B) you'd indeed add the bucket index; for 1A) you'd add another vector.
Which is the nearest non-empty ObjC-bucket to position x,y,z?
This operation is commonly called nearest neighbor search. You want a KDTree for that. There is libkdtree++, if you prefer small libraries. Otherwise, FLANN might be an option. It is a part of the Point Cloud Library which accomplishes a lot of tasks on multidimensional data and could be worth a look as well.

Why is it so slow to add or remove elements in the middle of a vector?

According to Accelerated C++:
To use this strategy, we need a way to remove an element from a vector. The good news is that such a facility exists; the bad news is that removing elements from vectors is slow enough to argue against using this approach for large amounts of input data. If the data we process get really big, performance degrades to an astonishing extent.
For example, if all of our students were to fail, the execution time of the function that we are about to see would grow proportionally to the square of the number of students. That means that for a class of 100 students, the program would take 10,000 times as long to run as it would for one student. The problem is that our input records are stored in a vector, which is optimized for fast random access. One price of that optimization is that it can be expensive to insert or delete elements other than at the end of the vector.
The authors do not explain why the vector would be so slow for 10,000+ students, and why in general it is slow to add or remove elements to the middle of a vector. Could somebody on Stack Overflow come up with a beautiful answer for me?
Take a row of houses: if you build them in a straight line, then finding No. 32 is really easy: just walk along the road about 32 houses' worth, and you're there. But it's not quite so fun to add house No. 31½ in the middle - that's a big construction project with a lot of disruption to husband's/wife's and kids' lives. In the worst case, there is not enough space on the road for another house anyway, so you have to move all the houses to a different street before you even start.
Similarly, vectors store their data contiguously, i.e. in a continuous, sequential block in memory.
This is very good for quickly finding the nth element (as you simply have to trundle along n positions and dereference), but very bad for inserting into the middle as you have to move all the later elements along by one, one at a time.
Other containers are designed to be easy to insert elements, but the trade-off is that they are consequently not quite as easy to find things in. There is no container which is optimal for all operations.
When inserting elements into or removing elements from the middle of a std::vector<T>, all elements after the modification point need to be moved: when inserting, they need to be moved further towards the back; when removing, they need to be moved forward to close the gap. The background is that std::vector<T> is basically just a contiguous sequence of elements.
Although this operation isn't too bad for certain types, it can become comparatively slow. Note, however, that the container needs to be of some sensible size before the cost of moving becomes significant: for small vectors, inserting into/removing from the middle is probably faster than using other data structures, e.g. lists. Eventually, the cost of maintaining a more complex structure does pay off, however.
std::vector allocates its memory as one extent. If you need to insert an element in the middle of that extent, you have to shift right all elements of the vector after the insertion point to make a free slot where you will insert the new element. Moreover, if the extent is already full of elements, the vector needs to allocate a new, larger extent and copy all elements from the original extent to the new one.
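A minimal sketch that makes the shifting cost visible (exact numbers vary by machine and compiler; reserve() rules out reallocation, so the quadratic element shifting of the front-insert loop is what dominates):

#include <vector>
#include <chrono>
#include <iostream>

int main()
{
    const int N = 100000;

    std::vector<int> v;
    v.reserve(N);
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        v.insert(v.begin(), i);              // every insert shifts all i existing elements
    auto t1 = std::chrono::steady_clock::now();

    std::vector<int> w;
    w.reserve(N);
    auto t2 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        w.push_back(i);                      // appending at the end shifts nothing
    auto t3 = std::chrono::steady_clock::now();

    std::cout << "front inserts: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
              << " ms, push_back: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t3 - t2).count()
              << " ms\n";
    return 0;
}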

Concatenate 2 STL vectors in constant O(1) time

I'll give some context as to why I'm trying to do this, but ultimately the context can be ignored as it is largely a classic Computer Science and C++ problem (which must surely have been asked before, but a couple of cursory searches didn't turn up anything...)
I'm working with (large) real time streaming point clouds, and have a case where I need to take 2/3/4 point clouds from multiple sensors and stick them together to create one big point cloud. I am in a situation where I do actually need all the data in one structure, whereas normally when people are just visualising point clouds they can get away with feeding them into the viewer separately.
I'm using Point Cloud Library 1.6, and on closer inspection its PointCloud class (under <pcl/point_cloud.h> if you're interested) stores all data points in an STL vector.
Now we're back in vanilla CS land...
PointCloud has a += operator for adding the contents of one point cloud to another. So far so good. But this method is pretty inefficient - if I understand it correctly, it 1) resizes the target vector, then 2) runs through all Points in the other vector, and copies them over.
This looks to me like a case of O(n) time complexity, which normally might not be too bad, but is bad news when dealing with at least 300K points per cloud in real time.
The vectors don't need to be sorted or analysed, they just need to be 'stuck together' at the memory level, so the program knows that once it hits the end of the first vector it just has to jump to the start location of the second one. In other words, I'm looking for an O(1) vector merging method. Is there any way to do this in the STL? Or is it more the domain of something like std::list::splice?
Note: This class is a pretty fundamental part of PCL, so 'non-invasive surgery' is preferable. If changes need to be made to the class itself (e.g. changing from vector to list, or reserving memory), they have to be considered in terms of the knock on effects on the rest of PCL, which could be far reaching.
Update: I have filed an issue over at PCL's GitHub repo to get a discussion going with the library authors about the suggestions below. Once there's some kind of resolution on which approach to go with, I'll accept the relevant suggestion(s) as answers.
A vector is not a list, it represents a sequence, but with the additional requirement that elements must be stored in contiguous memory. You cannot just bundle two vectors (whose buffers won't be contiguous) into a single vector without moving objects around.
This problem has been solved many times before such as with String Rope classes.
The basic approach is to make a new container type that stores pointers to point clouds. This is like a std::deque except that yours will have chunks of variable size. Unless your clouds chunk into standard sizes?
With this new container your iterators start in the first chunk, proceed to the end then move into the next chunk. Doing random access in such a container with variable sized chunks requires a binary search. In fact, such a data structure could be written as a distorted form of B+ tree.
There is no vector equivalent of splice - there can't be, specifically because of the memory layout requirements, which are probably the reason it was selected in the first place.
There's also no constant-time way to concatenate vectors.
I can think of one (fragile) way to concatenate raw arrays in constant time, but it depends on them being aligned on page boundaries at both the beginning and the end, and then re-mapping them to be adjacent. This is going to be pretty hard to generalise.
There's another way to make something that looks like a concatenated vector, and that's with a wrapper container which works like a deque, and provides a unified iterator and operator[] over them. I don't know if the point cloud library is flexible enough to work with this, though. (Jamin's suggestion is essentially to use something like this instead of the vector, and Zan's is roughly what I had in mind).
No, you can't concatenate two vectors by a simple link, you actually have to copy them.
However! If you implement move-semantics in your element type, you'd probably get significant speed gains, depending on what your element contains. This won't help if your elements don't contain any non-trivial types.
Further, if you have your vector reserve the needed memory well in advance, that would also help speed things up by avoiding a resize (which would cause an undesirably huge new allocation, possibly having to defragment at that memory size, and then a huge memcpy).
Barring that, you might want to create some kind of mix between linked lists and vectors, with each 'element' of the list being a vector with 10k elements, so you only need to follow a list link once every 10k elements, but it allows you to grow dynamically much more easily and makes your concatenation a breeze.
std::list<std::vector<element> > forIllustrationOnly; // just roll your own custom type (std::list has no operator[]; pseudocode for illustration only)
index = 52403;
listIndex = index / 10000;   // which 10k-element chunk holds the item
vectorIndex = index % 10000; // position inside that chunk
forIllustrationOnly[listIndex][vectorIndex]     // still fairly fast lookups
forIllustrationOnly.push_back(block_of_points); // much faster appending and removing of blocks of points
You will not get this scaling behaviour with a vector, because with a vector, you do not get around the copying. And you can not copy an arbitrary amount of data in fixed time.
I do not know PointCloud, but if you can use other list types, e.g. a linked list, this behaviour is entirely possible. You might find a linked-list implementation that works in your environment and can simply stick the second list onto the end of the first list, as you imagined.
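For example, std::list offers exactly that via splice; a minimal sketch:

#include <list>

int main()
{
    std::list<int> a(300000, 1);
    std::list<int> b(300000, 2);

    // splice relinks b's nodes onto the end of a in O(1);
    // no elements are copied and b is left empty afterwards.
    a.splice(a.end(), b);
    return 0;
}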
Take a look at Boost's range join at http://www.boost.org/doc/libs/1_54_0/libs/range/doc/html/range/reference/utilities/join.html
This will take 2 ranges and join them. Say you have vector1 and vector2.
You should be able to write
auto combined = boost::join(vector1, vector2);
Then you can use combined with algorithms, etc. as needed.
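A minimal self-contained sketch of that approach (assuming a reasonably recent Boost; the joined object is a lazy view over both vectors, so building it is O(1) and nothing is copied):

#include <boost/range/join.hpp>
#include <vector>
#include <iostream>

int main()
{
    std::vector<int> vector1(3, 1);          // {1, 1, 1}
    std::vector<int> vector2(3, 2);          // {2, 2, 2}

    // The view iterates through vector1, then seamlessly continues into vector2.
    for (int x : boost::join(vector1, vector2))
        std::cout << x << ' ';
    std::cout << '\n';
    return 0;
}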
There is no O(1) copy for a vector, ever, but you should check:
Is the element type trivially copyable? (aka memcpy)
If so, is my vector implementation leveraging this fact, or is it naively looping over all 300k elements, executing a trivial assignment (or worse, a copy-constructor call) for each element?
What I have seen is that, while both memcpy and an assignment for-loop have O(n) complexity, a solution leveraging memcpy can be much, much faster.
So, the problem might be that the vector implementation is suboptimal for trivial types.
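A minimal sketch of how to give the implementation that chance (whether the range insert actually lowers to a single memmove depends on your standard library; Point is just an illustrative trivially copyable element type):

#include <vector>

struct Point { float x, y, z; };

void append(std::vector<Point>& dst, const std::vector<Point>& src)
{
    // A range insert at the end reallocates at most once, and the library is
    // free to implement the element copies as a single bulk memmove when the
    // element type is trivially copyable.
    dst.insert(dst.end(), src.begin(), src.end());
}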