I read an article about MapReduce but I am still confused about how the job is split into tasks (in detail) to take advantage of parallel processing, especially in cases like this:
Assume that after the map phase we have 100 million records (key/value pairs) with 5 keys, namely 'key1', 'key2', 'key3', 'key4', 'key5'. The first key has 99 million records; the remaining keys have 0.25 million each.
If we have 3 workers to do the reduce tasks, how does the Master split the job?
I have read that each key is processed by only one reducer, so if one reducer has to process 'key1', wouldn't it do far more work than the others, so that the parallel processing of the reducers doesn't help much in this case?
The map-reduce technique relies on several assumptions by default:
That the jobs are not inter-dependent, i.e. you don't have to run task1 first to get its output, then run task2 on task1's output, and so on.
That the jobs can be divided into tasks that are "similar" in the execution power needed and the time taken. Your example is an extreme violation of this assumption, so map-reduce doesn't work well for it.
That a sensible divide strategy exists, and that it won't take more time than running the tasks themselves.
That the work which can be parallelised is the major effort in the job and doesn't depend on some serial/single resource, e.g. disk I/O.
In reality there are a lot of problems that satisfy the four points above (and of course a lot that don't, which is why map-reduce isn't a universal solution). Common examples include problems that are large in input data count, need separate processing, and are expensive in computational time but small in total input data size. E.g.
Determine whether a line intersects a 3D structure, where you could have a lot of triangle faces and you run an intersection test for each of the triangles
Price a large number of financial products
Hope the above helps.
The input data with the same key doesn't have to be assigned to one reducer. Many reducers can share the input data with the same key.
Imagine merge sort, for example. Map jobs divide an array into several sub-arrays. Multiple layers of reduce jobs sort and merge those sub-arrays back into one array. No matter how the data is arranged in the array, the complexity is still O(n log n). In fact, the best-case and worst-case complexity of merge sort is the same as the average case. The way the merge sort algorithm divides and merges the array isn't affected by the data arrangement.
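A minimal sketch of that multi-layer idea, assuming the reduce operation is associative (a sum here); partial_reduce and final_reduce are made-up names, not part of any framework:

#include <numeric>
#include <vector>

// One reducer's share of the hot key's values (a sum, as an example of an
// associative reduce operation).
long partial_reduce(const std::vector<long>& chunk) {
    return std::accumulate(chunk.begin(), chunk.end(), 0L);
}

// A second reduce layer combines the partial results into the final answer.
long final_reduce(const std::vector<long>& partials) {
    return std::accumulate(partials.begin(), partials.end(), 0L);
}

Several workers can each run partial_reduce on a slice of 'key1', and a single cheap final_reduce combines the handful of partial results.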
For designing an algorithm, I need to simulate a map-reduce environment. I assume that I have a couple of jobs, and each of them consists of a set of map and reduce tasks. I have to make assumptions about the processing time of the map and reduce tasks.
For example, job "j1" has 3 map tasks and 2 reduce tasks. Is there any typical relationship between the processing time of map tasks and reduce tasks? What is it usually like?
It would be difficult to make any assumptions without knowing what your map and reduce tasks do. The processing time of the map or reduce tasks depends entirely on what you want them to do; you can't really make a blanket assumption.
For example, your individual map function could be processing an individual file as input, or an individual line, or an individual word, all of which directly affect the processing time.
The reducer is the same way; it could do a lot of processing, a little processing, or even no processing at all. (With Hadoop's implementation of MapReduce, you don't even have to have a reducer for your MapReduce job, which shows how much the amount of processing can vary.) It just depends on what the individual task calls for.
If you have an idea of what the simulated MapReduce jobs would actually be doing, you can use that to determine what the general processing times of the different tasks would be in comparison to each other.
I'd like to ask fellow SO'ers for their opinions regarding best-of-breed data structures to be used for indexing time-series (a.k.a. column-wise, flat linear data).
Two basic types of time-series exist based on the sampling/discretisation characteristic:
Regular discretisation (Every sample is taken with a common frequency)
Irregular discretisation (samples are taken at arbitrary time points)
Queries that will be required:
All values in the time range [t0,t1]
All values in the time range [t0,t1] that are greater/less than v0
All values in the time range [t0,t1] that are in the value range [v0,v1]
The data sets consist of summarized time-series (which sort of gets over the Irregular discretisation), and multivariate time-series. The data set(s) in question are about 15-20TB in size, hence processing is performed in a distributed manner - because some of the queries described above will result in datasets larger than the physical amount of memory available on any one system.
Distributed processing in this context also means dispatching the required data-specific computation along with the time-series query, so that the computation can occur as close to the data as possible and node-to-node communication is reduced (somewhat similar to the map/reduce paradigm). In short, proximity of computation and data is critical.
Another issue that the index should be able to cope with is that the overwhelming majority of the data is static/historic (99.999...%), but new data is added on a daily basis; think of in-the-field sensors or market data. The idea/requirement is to be able to update any running calculations (averages, GARCH models, etc.) with as low a latency as possible; some of these running calculations require historical data, some of which will be more than what can reasonably be cached.
I've already considered HDF5; it works well/efficiently for smaller datasets but starts to drag as the datasets become larger, and there are no native parallel-processing capabilities from the front end.
Looking for suggestions, links, further reading etc. (C or C++ solutions, libraries)
You would probably want to use some type of large, balanced tree. As Tobias mentioned, B-trees would be the standard choice for solving the first problem. If you also care about getting fast insertions and updates, there is a lot of new work being done at places like MIT and CMU on "cache-oblivious B-trees". For some discussion of the implementation of these, look up Tokutek DB; they've got a number of good presentations, like the following:
http://tokutek.com/downloads/mysqluc-2010-fractal-trees.pdf
Questions 2 and 3 are in general a lot harder, since they involve higher dimensional range searching. The standard data structure for doing this would be the range tree (which gives O(log^{d-1}(n)) query time, at the cost of O(n log^d(n)) storage). You generally would not want to use a k-d tree for something like this. While it is true that kd trees have optimal, O(n), storage costs, it is a fact that you can't evaluate range queries any faster than O(n^{(d-1)/d}) if you only use O(n) storage. For d=2, this would be O(sqrt(n)) time complexity; and frankly that isn't going to cut it if you have 10^10 data points (who wants to wait for O(10^5) disk reads to complete on a simple range query?)
Fortunately, it sounds like in your situation you really don't need to worry too much about the general case. Because all of your data comes from a time series, you only ever have at most one value per time coordinate. What you could do is just use a range query to pull some interval of points, then go through them as a post-process and apply the v constraints pointwise. This would be the first thing I would try (after getting a good database implementation), and if it works then you are done! It really only makes sense to try optimizing the latter two queries if you keep running into situations where the number of points in [t0,t1] x [-infty,+infty] is orders of magnitude larger than the number of points in [t0,t1] x [v0,v1].
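For illustration only, a minimal sketch of that "time range query, then filter on value" idea, assuming the series is kept sorted by timestamp (Sample and query are made-up names):

#include <algorithm>
#include <cstdint>
#include <vector>

struct Sample { std::int64_t t; double v; };

// Pull every sample with t in [t0,t1], then apply the value constraint pointwise.
std::vector<Sample> query(const std::vector<Sample>& series,   // sorted by t
                          std::int64_t t0, std::int64_t t1, double v0, double v1) {
    auto lo = std::lower_bound(series.begin(), series.end(), t0,
                               [](const Sample& s, std::int64_t t) { return s.t < t; });
    auto hi = std::upper_bound(series.begin(), series.end(), t1,
                               [](std::int64_t t, const Sample& s) { return t < s.t; });
    std::vector<Sample> out;
    for (auto it = lo; it != hi; ++it)
        if (it->v >= v0 && it->v <= v1)      // post-process value filter
            out.push_back(*it);
    return out;
}

The binary searches find the [t0,t1] window in O(log n); the value constraint is then applied in a linear pass over just that window.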
General ideas:
Problem 1 is fairly common: Create an index that fits into your RAM and has links to the data on the secondary storage (datastructure: B-Tree family).
Problems 2 and 3 are quite complicated since your data is so large. You could partition your data into time ranges and calculate the min/max for each time range. Using that information, you can filter out whole time ranges (e.g. if the max value for a range is 50 and you search for v0 > 60, the interval is out). The rest has to be searched by going through the data. The effectiveness greatly depends on how fast the data changes.
You can also build multiple index levels by combining the time ranges of lower levels, to do the filtering faster.
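A rough sketch of that per-range min/max pruning, with made-up names (Block, candidate_blocks) and only the "greater than v0" case shown:

#include <cstdint>
#include <vector>

struct Block {                       // one fixed time range of samples on secondary storage
    std::int64_t t_begin, t_end;
    double v_min, v_max;             // precomputed summary used for pruning
    // ... offset/length of the raw samples on disk ...
};

// Keep only the blocks that could contain samples in [t0,t1] with value > v0;
// everything else is filtered out without touching the raw data.
std::vector<const Block*> candidate_blocks(const std::vector<Block>& index,
                                           std::int64_t t0, std::int64_t t1, double v0) {
    std::vector<const Block*> out;
    for (const Block& b : index)
        if (b.t_end >= t0 && b.t_begin <= t1 && b.v_max > v0)
            out.push_back(&b);       // survivors still need their raw data scanned
    return out;
}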
It is going to be really time-consuming and complicated to implement this yourself. I recommend you use Cassandra.
Cassandra can give you horizontal scalability and redundancy, and it allows you to run complicated map-reduce functions in the future.
To learn how to store time series in Cassandra, please take a look at:
http://www.datastax.com/dev/blog/advanced-time-series-with-cassandra
and http://www.youtube.com/watch?v=OzBJrQZjge0.
I have a computational algebra task I need to code up. The problem is broken into well-defined individual tasks that naturally form a tree - the task is combinatorial in nature, so there's a main task which requires a small number of sub-calculations to get its result. Those sub-calculations have sub-sub-calculations and so on. Each calculation only depends on the calculations below it in the tree (assuming the root node is at the top). No data sharing needs to happen between branches. At lower levels the number of subtasks may be extremely large.
I had previously coded this up in a functional fashion, calling the functions as needed and storing everything in RAM. This was a terrible approach, but I was more concerned about the theory then.
I'm planning to rewrite the code in C++ for a variety of reasons. I have a few requirements:
Checkpointing: The calculation takes a long time, so I need to be able to stop at any point and resume later.
Separate individual tasks as objects: This helps me keep a good handle on where I am in the computations, and offers a clean way to do checkpointing via serialization.
Multi-threading: The task is clearly embarrassingly parallel, so it'd be neat to exploit that. I'd probably want to use Boost threads for this.
I would like suggestions on how to actually implement such a system. Ways I've thought of doing it:
Implement the tasks as a simple stack. When you hit a task that needs subcalculations done, it checks whether it has all the subcalculations it requires. If not, it creates the subtasks and pushes them onto the stack. If it does, it calculates its result and pops itself from the stack. (A rough sketch of this follows below.)
Store the tasks as a tree and do something like a depth-first visitor pattern. This would create all the tasks at the start and then computation would just traverse the tree.
These don't seem quite right because of the problem of the lower levels requiring a vast number of subtasks. I could approach it in an iterator fashion at this level, I guess.
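To make option 1 concrete, here's a rough sketch of the stack idea; Task, expand, and compute are just placeholders for the real algebra, not actual code I have:

#include <memory>
#include <optional>
#include <stack>
#include <vector>

struct Task {
    std::optional<long> result;                   // a real task stores tens of bytes
    std::vector<std::shared_ptr<Task>> subtasks;  // created lazily
    bool expanded = false;

    void expand() {            // create whatever sub-calculations this task needs
        // ... push sub-tasks into `subtasks` ...
        expanded = true;
    }
    void compute() {           // called only once every subtask has a result
        long acc = 0;
        for (auto& s : subtasks) acc += *s->result;   // combine however the algebra requires
        result = acc;
    }
};

long evaluate(const std::shared_ptr<Task>& root) {
    std::stack<std::shared_ptr<Task>> work;
    work.push(root);
    while (!work.empty()) {
        auto t = work.top();
        if (!t->expanded) {
            t->expand();                               // first visit: create subtasks
            for (auto& s : t->subtasks) work.push(s);  // they must finish before us
        } else if (!t->result) {
            t->compute();                              // second visit: subtasks are done
            work.pop();
        } else {
            work.pop();                                // shared task already computed
        }
    }
    return *root->result;
}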
I feel like I'm over-thinking it and there's already a simple, well-established way to do something like this. Is there one?
Technical details in case they matter:
The task tree has 5 levels.
Branching factor of the tree is really small (say, between 2 and 5) for all levels except the lowest which is on the order of a few million.
Each individual task would only need to store a result tens of bytes large. I don't mind using the disk as much as possible, so long as it doesn't kill performance.
For debugging, I'd have to be able to recall/recalculate any individual task.
All the calculations are discrete mathematics: calculations with integers, polynomials, and groups. No floating point at all.
there's a main task which requires a small number of sub-calculations to get its results. Those sub-calculations have sub-sub-calculations and so on. Each calculation only depends on the calculations below it in the tree (assuming the root node is the top). No data sharing needs to happen between branches. At lower levels the number of subtasks may be extremely large... blah blah resuming, multi-threading, etc.
Correct me if I'm wrong, but it seems to me that you are exactly describing a map-reduce algorithm.
Just read what Wikipedia says about map-reduce:
"Map" step: The master node takes the input, partitions it up into smaller sub-problems, and distributes those to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes that smaller problem, and passes the answer back to its master node.
"Reduce" step: The master node then takes the answers to all the sub-problems and combines them in some way to get the output – the answer to the problem it was originally trying to solve.
Using an existing mapreduce framework could save you a huge amount of time.
I just googled "map reduce C++" and started getting results, notably one based on Boost: http://www.craighenderson.co.uk/mapreduce/
These don't seem quite right because of the problem of the lower levels requiring a vast number of subtasks. I could approach it in an iterator fashion at this level, I guess.
You definitely do not want millions of CPU-bound threads. You want at most N CPU-bound threads, where N is the product of the number of CPUs and the number of cores per CPU on your machine. Exceed N by a little bit and you are slowing things down a bit. Exceed N by a lot and you are slowing things down a whole lot. The machine will spend almost all its time swapping threads in and out of context, spending very little time executing the threads themselves. Exceed N by a whole lot and you will most likely crash your machine (or hit some limit on threads). If you want to farm lots and lots (and lots and lots) of parallel tasks out at once, you either need to use multiple machines or use your graphics card.
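As a sketch only (using std::thread rather than the Boost threads mentioned in the question; run_all and the task queue contents are placeholders), here is a minimal way to cap the number of CPU-bound workers at N:

#include <algorithm>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Run every queued task using at most N = hardware_concurrency() worker threads.
void run_all(std::queue<std::function<void()>> tasks) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::mutex m;
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back([&] {
            for (;;) {
                std::function<void()> job;
                {
                    std::lock_guard<std::mutex> lock(m);
                    if (tasks.empty()) return;          // no more work for this thread
                    job = std::move(tasks.front());
                    tasks.pop();
                }
                job();                                   // millions of tasks, only n threads
            }
        });
    for (auto& w : workers) w.join();
}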
Today in an interview I was asked which sort you would use for a multi-threaded application: merge sort or quick sort?
You use merge sort for multi-threaded applications.
The reason:
Merge sort divides the problem into separate smaller problems (smaller arrays) and then merges them. That can be done in separate threads.
Quick sort does a pivot sort on a single array, so it's harder to divide the problem efficiently between threads.
Every divide and conquer algorithm can be quite easily parallelised. Merge sort and quicksort both follow the same basic schema which can be run in parallel:
procedure DivideAndConquer(X)
    if X is a base case then
        Process base case X
        return
    Divide X into [Y0 … Yn[
    for Y ∈ [Y0 … Yn[ in parallel do
        DivideAndConquer(Y)
    Merge [Y0 … Yn[ back into X
Where they differ is that in quicksort, the division is difficult and merging is trivial (no operation). In merge sort, it’s the other way round: dividing is trivial and merging is difficult.
If you implement the above schema, quicksort is actually easier to parallelise because you can just forget about the merge step. For merge sort, you need to keep track of finished parallel tasks. This screws up the load balancing.
On the other hand, if you follow the above schema, you’ve got a problem: the very first division, and the very last merging, will only use a single processor and all other processors will be idle. Thus it makes sense to parallelise these operations as well. And here we see that parallelising the partitioning step in quicksort is much harder than parallelising the merge step in merge sort.
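As a sketch only (not from the answer above), here is roughly what the schema looks like for merge sort with std::async, where depth caps how many parallel tasks get spawned:

#include <algorithm>
#include <future>
#include <vector>

// depth limits how many levels spawn an extra task (so at most ~2^depth threads).
template <typename It>
void parallel_merge_sort(It first, It last, int depth = 2) {
    auto n = last - first;
    if (n < 2) return;                            // base case
    It mid = first + n / 2;                       // dividing is trivial
    if (depth > 0) {
        auto left = std::async(std::launch::async,
                               [=] { parallel_merge_sort(first, mid, depth - 1); });
        parallel_merge_sort(mid, last, depth - 1);
        left.get();                               // wait for the other half
    } else {
        std::sort(first, mid);                    // stop spawning below a cutoff
        std::sort(mid, last);
    }
    std::inplace_merge(first, mid, last);         // merging is the hard/serial part
}

Note that the final std::inplace_merge is still sequential, which is exactly the bottleneck discussed above.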
A merge sort seems like it would be easier to parallelize and distribute... think about it: you're breaking it up into clean sub-problems that can easily be divided and distributed. But then again, the same is true of quicksort. However, I would probably prefer doing it with merge sort, as it would likely be easier.
Assuming a decent pivot selection, it's not all that different.
Subproblems are trivial to parallelize; they use (mostly) disjoint memory and need no synchronization, so the actual difference lies in the bottlenecks: the initial partition of quick-sort vs. the final merge in merge-sort. Neglecting to parallelize these will result in bad speedups for many cores or few elements (This gets noticeable a lot faster than you might think!).
Both algorithms can be parallelized efficiently. See this MCSTL paper for some experimental results and implementation details. The MCSTL was the base for what is now the GNU C++ std-lib parallel mode.
It's not at all clear which algorithm will perform better in all circumstances, as it depends on the data distribution and on whether swaps or comparisons are slower.
I think they are looking for merge sort as the answer, since it is easy to see how to split it between threads. Though another comment indicates that quicksort can also be split into smaller problems; likely many sorts can.
There is one critical aspect that cannot be ignored: communicating with the other threads takes a lot of time. The data set you are sorting has to be huge, or very expensive to compare, before creating the threads and doing the communication between them is better than just using a single thread.
Further to this, with any sort, you have a serious problem of false sharing. Having multiple threads work with the same data can (communication time notwithstanding) be slower as the CPU is forced to share and update data between multiple cores. Unless your algorithm can properly align the data, passing it off to various threads will slow it down.
How would you implement radix sort on multiple GPUs – the same way as on a single GPU, i.e. by splitting the data, building histograms on separate GPUs, and then merging the data back (like merging a bunch of card decks)?
That method would work, but I don't think it would be the fastest approach. Specifically, merging histograms for every K bits (K=4 is currently best) would require the keys to be exchanged between GPUs 32/K = 8 times to sort 32-bit integers. Since the memory bandwidth between GPUs (~5GB/s) is much lower than the memory bandwidth on a GPU (~150GB/s) this will kill performance.
A better strategy would be to split the data into multiple parts, sort each part in parallel on a different GPU, and then merge the parts once at the end. This approach requires only one inter-GPU transfer (vs. 8 above) so it will be considerably faster.
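For illustration, a CPU-side sketch of that split / sort-in-parallel / single-merge strategy (the per-part std::sort stands in for whatever per-GPU sort you actually use; names are made up):

#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

// Split into `parts` chunks, sort each chunk concurrently (a stand-in for one GPU
// per chunk), then do a single merge pass at the end. Assumes parts >= 1.
void sort_split_then_merge(std::vector<int>& data, int parts) {
    std::vector<std::future<void>> jobs;
    std::size_t chunk = data.size() / parts;
    for (int p = 0; p < parts; ++p) {
        auto first = data.begin() + p * chunk;
        auto last  = (p == parts - 1) ? data.end() : first + chunk;
        jobs.push_back(std::async(std::launch::async,
                                  [first, last] { std::sort(first, last); }));
    }
    for (auto& j : jobs) j.get();                 // all chunks sorted

    for (int p = 1; p < parts; ++p) {             // one merge pass: fold chunks together
        auto mid  = data.begin() + p * chunk;
        auto last = (p == parts - 1) ? data.end() : mid + chunk;
        std::inplace_merge(data.begin(), mid, last);
    }
}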
Unfortunately this question is not adequately posed. It depends on element size, where the elements begin life in memory, and where you want the sorted elements to end up residing.
Sometimes it's possible to compress the sorted list by storing elements in groups sharing the same common prefix, or you can unique elements on the fly, storing each element once in the sorted list with an associated count. For example, you might sort a huge list of 32-bit integers into 64K distinct lists of 16-bit values, cutting your memory requirement in half.
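A small sketch of the 64K-lists idea from the paragraph above: bucket 32-bit keys by their high 16 bits and keep only the low 16 bits per bucket (bucket_by_prefix is a made-up name):

#include <cstdint>
#include <vector>

// Each bucket stores only the low 16 bits, halving the memory needed; each bucket
// can then be sorted independently (e.g. by radix on the low 16 bits).
std::vector<std::vector<std::uint16_t>> bucket_by_prefix(const std::vector<std::uint32_t>& keys) {
    std::vector<std::vector<std::uint16_t>> buckets(1u << 16);
    for (std::uint32_t k : keys)
        buckets[k >> 16].push_back(static_cast<std::uint16_t>(k));
    return buckets;
}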
The general principle is that you want to make as few passes over the data as possible, and that your throughput will almost always correspond to the bandwidth constraints associated with your storage policy.
If your data set exceeds the size of fast memory, you probably want to finish with a merge pass rather than continue to radix sort, as another person has already answered.
I'm just getting into GPU architecture and I don't understand the K=4 comment above. I've never seen an architecture yet where such a small K would prove optimal.
I suspect merging histograms is also the wrong approach. I'd probably let the elements fragment in memory rather than merge histograms. Is it that hard to manage meso-scale scatter/gather lists in the GPU fabric? I sure hope not.
Finally, it's hard to conceive of a reason why you would want to involve multiple GPUs for this task. Say your card has 2GB of memory and 60GB/s of write bandwidth (that's what my mid-range card is showing). A three-pass radix sort (11-bit histograms) requires 6GB of writes (likely your rate-limiting factor), or about 100ms to sort a 2GB list of 32-bit integers. Great, they're sorted, now what? If you need to ship them anywhere else without some kind of preprocessing or compression, the sorting time will be small fish.
In any case, I just compiled my first example programs today. There's still a lot to learn. My target application is permutation-intensive, which is closely related to sorting. I'm sure I'll weigh in on this subject again in the future.