Why can MapReduce's intermediate results only be saved to disk?

Why can MapReduce's intermediate results only be saved to disk? Why can't it save to memory like Spark?

Related

Does dask S3 reading cache the data on disk/RAM?

I've been reading about dask and how it can read data from S3 and do processing from that in a way that does not need the data to completely reside in RAM.
I want to understand what dask would do if I have a very large S3 file that I am trying to read. Would it:
Load that S3 file into RAM?
Load that S3 file and cache it in /tmp or something?
Make multiple calls to the S3 file in parts?
I am assuming here that I am doing a lot of different complicated computations on the dataframe and that it may need multiple passes over the data - i.e. let's say a join, group by, etc.
Also, a side question: if I am doing a select from S3 > join > groupby > filter > join - would the temporary dataframes which I am joining with be on S3? Or on disk? Or in RAM?
I know Spark uses RAM and overflows to HDFS for such cases.
I'm mainly thinking of single machine dask at the moment.
For many file-types, e.g., CSV, parquet, the original large files on S3 can be safely split into chunks for processing. In that case, each Dask task will work on one chunk of the data at a time by making separate calls to S3. Each chunk will be in the memory of a worker while it is processing it.
When doing a computation that involves joining data from many file-chunks, preprocessing of the chunks still happens as above, but now Dask keeps temporary structures around to accumulate partial results. How much memory that takes depends on the chunk size of the data, which you may or may not control depending on the data format, and on exactly what computation you want to apply to it.
Yes, Dask is able to spill to disk when memory usage gets large. This is better handled by the distributed scheduler (which is now the recommended default, even on a single machine). Use the --memory-limit and --local-directory CLI arguments, or their equivalents when using Client()/LocalCluster(), to control how much memory each worker can use and where temporary files get put.
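To make that concrete, here is a minimal single-machine sketch. The bucket path, the column name, the limits, and the spill directory are placeholders rather than anything from the question; LocalCluster caps each worker's memory and sets where spilled data goes, and read_csv's blocksize controls how big each chunk fetched from S3 is.

    # Minimal sketch: single-machine Dask with a memory cap and a spill directory.
    # Paths, limits and column names are placeholders; s3fs must be installed for s3:// URLs.
    from dask.distributed import Client, LocalCluster
    import dask.dataframe as dd

    cluster = LocalCluster(
        n_workers=2,
        threads_per_worker=2,
        memory_limit="4GB",           # per-worker cap; workers spill/pause near it
        local_directory="/tmp/dask",  # where spilled partitions and temp files go
    )                                 # (older Dask versions may name this kwarg differently;
                                      #  the dask-worker CLI flag is --local-directory)
    client = Client(cluster)

    # Each partition ("chunk") is fetched from S3 by its own ranged request and
    # only sits in a worker's memory while that worker is processing it.
    df = dd.read_csv(
        "s3://my-bucket/very-large-*.csv",  # hypothetical path
        blocksize="64MB",                   # chunk size handled per task
    )

    # A join/groupby accumulates partial results on the workers; anything over
    # memory_limit is spilled to local_directory on the local disk, not to S3.
    result = df.groupby("key").size().compute()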

How does GemFireXD store table data larger than its in-memory capacity?

I have 4GB of in-memory capacity, and the data file I am going to load into GemFireXD is 8GB. How does GemFireXD handle the remaining 4GB of data? I read about the EVICTION clause but didn't get any clarification.
Is the data copied to disk while it is being loaded, or does copying to disk start only after the 4GB is filled?
Help on this, thank you.
If you use the EVICTION clause without using the PERSISTENT clause, the data will start being written to disk once you reach the eviction threshold. The least recently used rows will be written to disk and dropped from memory.
If you have a PERSISTENT table, the data is already on disk when you reach your eviction threshold. At that point, the least recently used rows are dropped from memory.
Note that there is still a per row overhead in memory even if the row is evicted.
Doc references for details:
- http://gemfirexd.docs.pivotal.io/latest/userguide/index.html#overflow/configuring_data_eviction.html
- http://gemfirexd.docs.pivotal.io/latest/userguide/index.html#caching_database/eviction_limitations.html
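For illustration only, here is a toy Python model of the two behaviours described above (it is not GemFireXD code): without PERSISTENT, a row only touches disk when it is evicted; with PERSISTENT, every row is written to disk as it is loaded, and eviction merely drops it from memory.

    # Toy model of the two eviction behaviours; not GemFireXD internals.
    from collections import OrderedDict

    class ToyTable:
        def __init__(self, max_rows_in_memory, persistent):
            self.max_rows = max_rows_in_memory
            self.persistent = persistent
            self.memory = OrderedDict()   # LRU order, oldest entry first
            self.disk = {}                # stands in for overflow/persistence files

        def insert(self, key, row):
            if self.persistent:
                self.disk[key] = row      # PERSISTENT: row hits disk while loading
            self.memory[key] = row
            self.memory.move_to_end(key)
            if len(self.memory) > self.max_rows:                    # eviction threshold hit
                lru_key, lru_row = self.memory.popitem(last=False)  # least recently used
                if not self.persistent:
                    self.disk[lru_key] = lru_row                    # written to disk only now

        def read(self, key):
            if key in self.memory:
                self.memory.move_to_end(key)   # refresh LRU position
                return self.memory[key]
            return self.disk[key]              # faulted in from disk

    # "4GB" of memory and "8GB" of data: half the rows end up on disk either way,
    # but the persistent table wrote everything to disk while it was being loaded.
    table = ToyTable(max_rows_in_memory=4, persistent=False)
    for i in range(8):
        table.insert(i, "row-%d" % i)
    print(sorted(table.memory), sorted(table.disk))  # keys 4-7 in memory, 0-3 on disk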

How in-memory databases store data larger than RAM memory in GemfireXD?

If I am using a cluster of 4 nodes, each having 4GB of RAM, the total RAM is 16GB, and I have to store 20GB of data in a table.
Then how will an in-memory database accommodate this data? I read somewhere that the data is swapped between RAM and disk, but wouldn't that make data access slow? Please explain.
GemFire or GemFireXD evicts data to disk if it comes under memory pressure while accommodating more data.
This may have some performance implications; however, the user can control how and when eviction takes place. Eviction uses a Least Recently Used (LRU) algorithm to decide which data to evict.
Also, when a row is evicted, the primary key value remains in memory while the remaining column data is evicted. This makes fetching the row from disk faster.
You can go through the following links to understand about evictions in GemFireXD:
http://gemfirexd.docs.pivotal.io/1.3.0/userguide/developers_guide/topics/cache/cache.html
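As a small illustration of the last point (again just a sketch in Python, not GemFire code): if the primary key and a file offset stay in RAM, reading an evicted row costs one dictionary lookup plus one seek and read, rather than a search on disk.

    # Illustration only: keys (plus a file offset) stay in RAM, row data lives on disk.
    import os
    import tempfile

    overflow = tempfile.NamedTemporaryFile(delete=False)  # stands in for an overflow file
    index = {}  # primary key -> (offset, length) of the evicted row in the overflow file

    def evict(key, row_bytes):
        offset = overflow.tell()
        overflow.write(row_bytes)
        overflow.flush()
        index[key] = (offset, len(row_bytes))  # only this small entry stays in memory

    def read_evicted(key):
        offset, length = index[key]            # O(1) in-memory lookup of the key...
        with open(overflow.name, "rb") as f:
            f.seek(offset)                     # ...then a single seek and read on disk
            return f.read(length)

    evict("row42", b"wide column data for row 42")
    print(read_evicted("row42"))
    overflow.close()
    os.unlink(overflow.name)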
HANA offers the possibility to unload data from main memory. Since the data is then stored on the hard disk, queries accessing this data will of course run more slowly. Have a look at the hot/warm/cold data concept if you haven't heard about it.
This article gives you additional information about this topic: http://scn.sap.com/community/bw-hana/blog/2014/02/14/sap-bw-on-hana-data-classification-hotwarmcold
Though the question only targeted SQLite and HANA, I wanted to share some insights on Oracle's Database In-Memory. It manages to load huge tables into the in-memory area by using various compression algorithms. Data populated into the IM column store is compressed using a new set of compression algorithms that not only help save space but also improve query performance. For example, a table 10GB in size compressed with the FOR CAPACITY HIGH setting shrinks to around 3GB. This allows a table whose size is greater than RAM to be stored in a compressed format in the in-memory area.
The OP specifically asked about a cluster, so that rules out SQLite (at least out of the box). You need a DBMS that can:
treat the 4 x 4GB of memory as 16GB of "storage" (in other words, distribute the data across the nodes of the cluster, but treat it as a whole)
compress the data to squeeze the 20GB of raw data into the available 16GB
eXtremeDB is one such solution. So is Oracle's Database In-Memory (with RAC). I'm sure there are others.
If you configure your tables to do so, GemFireXD can use off-heap memory to store a larger amount of data in memory, consequently pushing off the need to evict data to disk a bit farther (and reads of evicted data remain optimized for faster lookup because the lookup keys are kept in memory).
http://gemfirexd.docs.pivotal.io/1.3.1/userguide/data_management/off-heap-guidelines.html

Keeping the physical address of data which is on the hard drive

Is it possible to keep the address of data that resides on a hard disk or solid-state disk in a container in RAM?
For my application (C++ / Visual Studio 2008) I'm going to create a repository (directory) on an SSD drive, and in that repository there will be thousands of binary files (let's say 100,000 files of about 3 MB each); the names of the files are unique IDs.
Some applications have to perform search operations on this directory by those names (IDs).
So I thought: if I create a container like a map in RAM, set the key to the ID (the name of the file) and the value to the physical address of the file (which is on the SSD), let the applications perform the search in this map (in RAM), and, if the entry is found, retrieve the data with that address (since we have the physical address), wouldn't that be much faster?
So is it possible to do something like that?
There are a few easy options: use a database, or memory map the bunch of files. In the latter case, you'll have to be aware that apparent memory operations are in fact disk I/O's, and much slower. But the result of memory mapping a file is still a fairly ordinary pointer to its contents. This is even easier than the "physical addresses" you're proposing.
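Here is a short sketch of that second option. The question is C++, but the idea is the same in any language, so it is shown in Python for brevity; the repository path and the example ID are hypothetical.

    # Sketch: an in-RAM map from unique ID to path, plus memory-mapping on demand.
    # The repository path and the example ID are hypothetical.
    import mmap
    from pathlib import Path

    REPO = Path("/data/repository")

    # Scan the directory once at start-up instead of searching it on every request.
    index = {p.stem: p for p in REPO.iterdir() if p.is_file()}  # ID -> path

    def read_blob(file_id):
        """Look the ID up in RAM, then memory-map the file's contents."""
        path = index[file_id]              # O(1) dictionary lookup, no directory search
        with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            return bytes(m)                # reads from m are really (page-cached) disk I/O

    # data = read_blob("some-unique-id")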

Mysql database writes and file writes

I have a program which writes data to a MySQL database and also writes huge amounts of logs to a file. I have noticed that if I give huge amounts of data as input to the program, i.e. data that creates logs as big as 70GB and pushes the count(*) of the table I use to more than 1,000,000 entries, the whole program slows down after some time.
Initially the reports were collected at a rate of around 1000/min, but this drops to under 400/min once the data reaches the sizes described above. Is it the database writes or the file writes that make the program slower?
The logs are just cout output from my program redirected to a file. No buffering is done there.
There's an easy way to test for this.
If you create a blackhole table, MySQL will pretend to do everything but never actually write any data to file.
- Create table(s) just like your normal table(s), but using the BLACKHOLE engine.
- Make a copy of the logs.
- Now write to the blackhole database just like you would to the real database.
If it's much faster, it's MySQL that's giving you grief.
See: http://dev.mysql.com/doc/refman/5.5/en/blackhole-storage-engine.html
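A rough sketch of that test from Python (connection details, table layout, and row counts are placeholders, not from the question): create a structurally identical table on the BLACKHOLE engine, replay the same kind of inserts against it, and compare the rate with the real table.

    # Sketch of the blackhole test; credentials, columns and counts are placeholders.
    import time
    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="test")
    cur = conn.cursor()

    # Same columns as the real table, but ENGINE=BLACKHOLE: accepts writes, stores nothing.
    cur.execute("""
        CREATE TABLE reports_blackhole (
            id INT NOT NULL,
            payload TEXT
        ) ENGINE=BLACKHOLE
    """)

    # Replay the same kind of workload against the blackhole table and time it.
    start = time.time()
    for i in range(100000):
        cur.execute("INSERT INTO reports_blackhole (id, payload) VALUES (%s, %s)",
                    (i, "x" * 200))
    conn.commit()
    print("writes/min without real storage:", 100000 / (time.time() - start) * 60)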