I have a theoretical question about RavenDB - is it possible to build a MapReduce pipeline?
So for instance, let's say I have MapReduce output Y from Index 1 and I use that output for some queries, but I need to run another MapReduce operation on Y in order to get some lower-level statistics.
Is it possible for me to write Index 2 which can further MapReduce the output from Index 1?
Aaronontheweb,
Not at the moment, no.
It is something you could handle using the Indexed Properties bundle in 1.2.
Related
I am using Python SDK for Apache Beam to run a feature extraction pipeline on Google DataFlow. I need to run multiple transformations all of which expect items to be grouped by key.
Based on the answer to this question, DataFlow is unable to automatically spot and reuse repeated transformations like GroupBy, so I hoped to run GroupBy first and then feed the result PCollection to other transformations (see sample code below).
I wonder if this is supposed to work efficiently in DataFlow. If not, what is the recommended workaround in the Python SDK? Is there an efficient way to have multiple Map or Write transformations take the results of the same GroupBy? In my case, I observe DataFlow scale to the maximum number of workers at 5% utilization and make no progress at the steps following the GroupBy, as described in this question.
Sample code. For simplicity, only 2 transformations are shown.
import apache_beam as beam
from apache_beam.io import WriteToText

# Group by key once.
items_by_key = raw_items | beam.GroupByKey()

# Write grouped items to a file.
(items_by_key | beam.FlatMap(format_item) | WriteToText(path))

# Run another transformation over the same grouping.
features = (items_by_key | beam.Map(extract_features))
Feeding the output of a single GroupByKey step into multiple transforms should work fine. But the amount of parallelization you can get depends on the total number of keys available in the original GroupByKey step. If any of the downstream steps is high fanout, consider adding a Reshuffle step after those steps, which will allow Dataflow to further parallelize execution.
For example,
pipeline | Create([<list of globs>]) | ParDo(ExpandGlobDoFn()) | Reshuffle() | ParDo(MyReadDoFn()) | Reshuffle() | ParDo(MyProcessDoFn())
Here,
ExpandGlobDoFn: expands the input globs and emits the matching file names
MyReadDoFn: reads a given file
MyProcessDoFn: processes an element read from a file
I used two Reshuffles here (note that Reshuffle has a GroupByKey in it) to allow (1) parallelizing the reading of files from a given glob and (2) parallelizing the processing of elements from a given file.
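Below is a minimal Python sketch of what such a pipeline could look like, assuming DoFn bodies along these lines (the glob, the file-reading logic, and the process_element helper are illustrative placeholders, not part of the original answer):

import apache_beam as beam
from apache_beam.io.filesystems import FileSystems


class ExpandGlobDoFn(beam.DoFn):
    def process(self, glob):
        # Expand one glob into the individual file paths it matches.
        for metadata in FileSystems.match([glob])[0].metadata_list:
            yield metadata.path


class MyReadDoFn(beam.DoFn):
    def process(self, path):
        # Read a single file and emit its lines.
        with FileSystems.open(path) as f:
            for line in f:
                yield line


class MyProcessDoFn(beam.DoFn):
    def process(self, element):
        # Process one element read from a file (placeholder logic).
        yield process_element(element)


with beam.Pipeline() as p:
    (p
     | beam.Create(['gs://my-bucket/input-*'])  # hypothetical glob
     | 'ExpandGlobs' >> beam.ParDo(ExpandGlobDoFn())
     | 'ReshuffleFiles' >> beam.Reshuffle()     # spread file names across workers
     | 'ReadFiles' >> beam.ParDo(MyReadDoFn())
     | 'ReshuffleElements' >> beam.Reshuffle()  # spread elements across workers
     | 'Process' >> beam.ParDo(MyProcessDoFn()))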
Based on my experience in troubleshooting this SO question, reusing GroupBy output in more than one transformation can make your pipeline extremely slow. At least this was my experience with Apache Beam SDK 2.11.0 for Python.
Common sense told me that branching out from a single GroupBy in the execution graph should make my pipeline run faster. After 23 hours of running on 120+ workers, the pipeline was not able to make any significant progress. I tried adding reshuffles, using a combiner where possible and disabling the experimental shuffling service.
Nothing helped until I split the pipeline in two. The first pipeline computes the GroupBy and stores it in a file (I need to ingest it "as is" into the DB). The second reads the file with the GroupBy output, reads additional inputs, and runs further transformations. The result: all transformations finished successfully in under 2 hours. I think that if I had just duplicated the GroupBy in my original pipeline, I would probably have achieved the same result.
I wonder if this is a bug in the DataFlow execution engine or the Python SDK, or whether it works as intended. If it is by design, then at the very least it should be documented, and a pipeline like this should either be rejected at submission time or produce a warning.
You can spot this issue by looking at the 2 branches coming out of the "Group keywords" step. It looks like the solution is to rerun GroupBy for each branch separately.
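For reference, here is a hedged sketch (reusing the names from the sample code above) of the "duplicate the GroupBy per branch" workaround; whether this really outperforms sharing a single GroupByKey is exactly the behaviour being questioned here:

import apache_beam as beam
from apache_beam.io import WriteToText

# Branch 1: group and write the grouped items to a file.
(raw_items
 | 'GroupForWrite' >> beam.GroupByKey()
 | beam.FlatMap(format_item)
 | WriteToText(path))

# Branch 2: run its own, duplicated GroupByKey and extract features.
features = (raw_items
            | 'GroupForFeatures' >> beam.GroupByKey()
            | beam.Map(extract_features))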
Hi
I'm new to PySpark and I'm trying to implement DBSCAN using the MapReduce technique explained in https://github.com/mraad/dbscan-spark, but I don't understand something.
Obviously, if we have multiple computers, we assign each cell to a MAP and, as explained in the link, after calling REDUCE we find out the contents of each cell's epsilon neighborhood. But on a single computer, how can we run and assign MAPs to cells?
How do we define multiple maps on a single computer (PySpark) and assign them to cells?
I wrote fishnet(cell, eps), which returns the point locations belonging to the cell's epsilon neighborhood.
I want to pass it to each MAP, but I don't know how to do that in PySpark.
Something like (if we have 4 cells):
map1(fishnet) map2(fishnet) map3(fishnet) map4(fishnet)
I would appreciate any solution.
It's the job of Spark / MapReduce to distribute the mappers to different workers. Don't mess with that part, let Spark decide where to invoke the actual mappers.
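As a rough illustration (a sketch only; load_points and cell_of are hypothetical helpers, and fishnet is the asker's own function), in PySpark you simply express the per-cell work as transformations and let the framework schedule the map tasks, even on a single machine:

from pyspark import SparkContext

# local[*] uses all cores of one machine; Spark still splits the data into
# partitions and runs one map task per partition.
sc = SparkContext("local[*]", "dbscan-cells")

points = sc.parallelize(load_points())       # load_points: hypothetical loader

# Assign each point to its cell, then group the points per cell; Spark
# decides which core/worker handles which cell.
cells = (points
         .map(lambda p: (cell_of(p), p))     # cell_of: hypothetical cell index
         .groupByKey())

# Apply the asker's fishnet(cell, eps) to every cell in parallel.
neighbors = cells.map(lambda kv: (kv[0], fishnet(kv[0], eps)))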
Beware that Spark is not very well suited for clustering. Its clustering capabilities are very limited, and the performance is pretty bad. See e.g.:
Neukirchen, Helmut. "Performance of Big Data versus High-Performance Computing: Some Observations."
It needed 900 cores with Spark to outperform a good single-core application like ELKI! And other Spark DBSCAN implementations would either not work reliably (i.e. fail) or produce wrong results.
What is the difference between the Tez engine and the MapReduce engine in Hive, and for which operations is each engine better to use (e.g. joins, aggregation)?
Tez is a DAG (directed acyclic graph) architecture. A typical MapReduce job has the following steps:
Read data from file --> first disk access
Run mappers
Write map output --> second disk access
Run shuffle and sort --> read map output, third disk access
Write shuffle and sort output --> write sorted data for the reducers --> fourth disk access
Run reducers, which read the sorted data --> fifth disk access
Write reducer output --> sixth disk access
Tez works very similarly to Spark (Tez was created by Hortonworks):
Execute the plan, but with no need to read data from disk between steps.
Once it is ready to do some calculations (similar to actions in Spark), it gets the data from disk, performs all steps, and produces the output.
Only one read and one write.
The efficiency is increased by not going to disk multiple times. Intermediate results are stored in memory (not written to disk).
Tez is a DAG-based system; it is aware of all the operations in the job, which lets it optimize them before execution starts.
The MapReduce model simply states that any computation can be performed by two kinds of computation steps – a map step and a reduce step. One pair of map and reduce does one level of aggregation over the data. Complex computations typically require multiple such steps.
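As a toy illustration of one such map/reduce pair, here is a word count written in plain Python just to make the model concrete; the in-memory grouping stands in for the shuffle that a real engine spills to disk or ships over the network:

from collections import defaultdict

def map_step(record):
    # Map: emit (key, value) pairs from each input record.
    for word in record.split():
        yield (word, 1)

def reduce_step(key, values):
    # Reduce: aggregate all values that share a key.
    return (key, sum(values))

def map_reduce(records):
    # "Shuffle": group the mapped pairs by key (done in memory here;
    # this is the part that costs disk and network I/O in MapReduce).
    groups = defaultdict(list)
    for record in records:
        for key, value in map_step(record):
            groups[key].append(value)
    return [reduce_step(k, vs) for k, vs in groups.items()]

print(map_reduce(["tez and mapreduce", "tez is a dag"]))
# [('tez', 2), ('and', 1), ('mapreduce', 1), ('is', 1), ('a', 1), ('dag', 1)]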
Tez builds on the same MapReduce model, so it is essentially MapReduce optimized into fewer, more compact steps.
Apache Tez is plug-in compatible with MapReduce but reduces the amount of disk access. Tez is always better than MapReduce.
However, there are also systems better than Hive + Tez, such as Spark SQL.
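For the joins and aggregations mentioned in the question, here is a small hedged PySpark SQL sketch (the table names and data are made up; in practice the DataFrames could come from spark.table(...) over existing Hive tables):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("join-agg-example").getOrCreate()

# Hypothetical data standing in for Hive tables.
orders = spark.createDataFrame(
    [(1, "c1", 10.0), (2, "c1", 5.0), (3, "c2", 7.5)],
    ["order_id", "customer_id", "amount"])
customers = spark.createDataFrame(
    [("c1", "Alice"), ("c2", "Bob")],
    ["customer_id", "name"])

# A join followed by an aggregation, the two workloads asked about.
totals = (orders.join(customers, "customer_id")
                .groupBy("name")
                .agg(F.sum("amount").alias("total_amount")))
totals.show()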
I am optimizing parameters in Spark, and would like to know exactly how Spark is shuffling data.
Precisely, I have a simple word count program, and would like to know how spark.shuffle.file.buffer.kb affects the run time. Right now, I only see a slowdown when I make this parameter very high (I am guessing this prevents every task's buffer from fitting in memory simultaneously).
Could someone explain how Spark is performing reductions? For example, the data is read and partitioned in an RDD, and when an "action" function is called, Spark sends out tasks to the worker nodes. If the action is a reduction, how does Spark handle this, and how are shuffle files / buffers related to this process?
Question: When is shuffling triggered in Spark?
Answer: Any join, cogroup, or ByKey operation involves holding objects in hashmaps or in-memory buffers to group or sort. join, cogroup, and groupByKey use these data structures in the tasks for the stages that are on the fetching side of the shuffles they trigger. reduceByKey and aggregateByKey use data structures in the tasks for the stages on both sides of the shuffles they trigger.
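To tie this back to the word-count example in the question, here is a hedged PySpark sketch of where the shuffle boundary sits (the input and output paths are hypothetical):

from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-shuffle")

lines = sc.textFile("input.txt")                  # hypothetical input path

counts = (lines
          .flatMap(lambda line: line.split())     # narrow transformation, no shuffle
          .map(lambda word: (word, 1))            # narrow transformation, no shuffle
          .reduceByKey(lambda a, b: a + b))       # ByKey operation: the shuffle happens here

# The shuffle files and buffers discussed below are written when the
# map-side tasks of the reduceByKey stage produce their per-reducer output.
counts.saveAsTextFile("output")                   # hypothetical output path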
Explanation: How does the shuffle operation work in Spark?
The shuffle operation is implemented differently in Spark compared to Hadoop. I don't know if you are familiar with how it works in Hadoop, but let's focus on Spark for now.
On the map side, each map task in Spark writes out a shuffle file (OS disk buffer) for every reducer – which corresponds to a logical block in Spark. These files are not intermediary in the sense that Spark does not merge them into larger partitioned ones. Since the scheduling overhead in Spark is lower, the number of mappers (M) and reducers (R) is far higher than in Hadoop. Thus, shipping M*R files to the respective reducers could result in significant overhead.
Similar to Hadoop, Spark also provides a parameter, spark.shuffle.compress, to specify the compression library used to compress map outputs. In this case, it could be Snappy (the default) or LZF. Snappy uses only 33 KB of buffer for each opened file and significantly reduces the risk of encountering out-of-memory errors.
On the reduce side, Spark requires all shuffled data to fit into the memory of the corresponding reducer task, unlike Hadoop, which has an option to spill it to disk. This would of course only happen in cases where the reducer task demands all shuffled data, for a GroupByKey or a ReduceByKey operation, for instance. Spark throws an out-of-memory exception in this case, which has proved quite a challenge for developers so far.
Also, in Spark there is no overlapping copy phase, unlike Hadoop, which has an overlapping copy phase where mappers push data to the reducers even before the map phase is complete. This means that the shuffle is a pull operation in Spark, compared to a push operation in Hadoop. Each reducer also has to maintain a network buffer to fetch map outputs. The size of this buffer is specified through the parameter spark.reducer.maxMbInFlight (by default, it is 48 MB).
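For reference, here is a hedged sketch of how the shuffle-related parameters mentioned in this thread could be set from the Python API (the values are illustrative only, and these parameter names follow the older Spark releases this answer describes):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("shuffle-tuning")
        .set("spark.shuffle.compress", "true")         # compress map outputs
        .set("spark.shuffle.file.buffer.kb", "100")    # per-file write buffer, in KB
        .set("spark.reducer.maxMbInFlight", "48"))     # reduce-side fetch buffer, in MB

sc = SparkContext(conf=conf)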
For more information about shuffling in Apache Spark, I suggest the following readings:
Optimizing Shuffle Performance in Spark by Aaron Davidson and Andrew Or.
SPARK-751 JIRA issue and Consolidating Shuffle files by Jason Dai.
It occurs whenever data needs to be moved between executors (worker nodes).
I want to understand HBase MapReduce from an application point of view. I need some real use cases to better understand when writing these jobs is the efficient choice.
If there is any link to documentation or examples that explain real use cases, please share.
I can give some examples based on my own use cases. If you already store your data in HBase, you can write a Java program which scans a table, does something, and then writes the output to HBase or somewhere else. Or you can use MapReduce to do the same. The difference is that MapReduce will run where the data is, and network traffic is used only for the result data. We have hourly jobs to calculate the sum and average of KPIs, where the input data is huge but the output is tiny. If I did not use MapReduce, I would need to move one hour of data over the network, which is 18 GB; the MapReduce output is only about 1 MB, and I can write it to HBase, a file, or somewhere else.
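To make the contrast concrete, here is a hedged Python sketch of the client-side alternative (the happybase library, host, table, and column names are all assumptions; the original answer talks about a Java program), which is exactly the approach that drags the full hour of data over the network:

import happybase  # third-party HBase client, used here only for illustration

# Client-side scan: every matching cell travels over the network to this
# single process (the "18 GB per hour" situation described above).
connection = happybase.Connection("hbase-host")           # hypothetical host
table = connection.table("kpi_events")                    # hypothetical table

total, count = 0.0, 0
for row_key, data in table.scan(columns=[b"m:value"]):    # hypothetical column
    total += float(data[b"m:value"].decode())
    count += 1

print("sum=%s avg=%s" % (total, total / count if count else 0.0))

# A TableMapper-based MapReduce job computes the same sum and average next
# to the region servers and only ships the tiny aggregated result back.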
MapReduce also gives you parallel task execution, which you could implement yourself in plain Java, but why would you :)
Keep in mind that YARN creates map tasks according to your HBase table's split (region) count. So if you need more map tasks, split your table.
If you already store your data in HDFS, you are lucky: a MapReduce job reading from HDFS is much faster than one reading from HBase. And you can still write the MapReduce output to HBase if you want.
Please look into the use cases given:
1. here.
2. And a small reference here - 30. Joins.
3. Maybe an end-to-end example here.
In the end, it all depends on your understanding of each concept (MapReduce, HBase) and on using them as your project requires. The same task can be done with or without MapReduce. Happy coding!