What is the HDFS master (NameNode) responsible for? - hdfs

Hi, could anyone explain what the HDFS master (NameNode) is responsible for? Also, what exactly is NameNode and DataNode metadata in HDFS? I recently started studying Spark, but our lecture did not go deep enough into HDFS. Many thanks

HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks and these blocks are stored in a set of DataNodes. The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system’s clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.
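To make that split of responsibilities concrete, here is a minimal sketch using pyarrow's HDFS client (an assumption; the host, port, and paths are placeholders). Listing and stat calls are answered by the NameNode from its metadata, while reading a file pulls the actual blocks from the DataNodes.

    # Sketch only: assumes a reachable HDFS cluster and the pyarrow package.
    from pyarrow import fs

    # Connecting talks to the NameNode; host/port are placeholders.
    hdfs = fs.HadoopFileSystem(host="namenode-host", port=8020)

    # Namespace operations (listing, file sizes) are served by the NameNode
    # from its metadata; no DataNode is contacted yet.
    for info in hdfs.get_file_info(fs.FileSelector("/user/data")):
        print(info.path, info.size)

    # Reading a file: the NameNode returns the block-to-DataNode mapping,
    # and the client then fetches the bytes directly from the DataNodes.
    with hdfs.open_input_stream("/user/data/part-00000.csv") as f:
        first_kb = f.read(1024)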
https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#NameNode+and+DataNodes
https://www.edureka.co/blog/apache-hadoop-hdfs-architecture/

Related

Does Dask communicate with HDFS to optimize for data locality?

In Dask distributed documentation, they have the following information:
For example Dask developers use this ability to build in data locality when we communicate to data-local storage systems like the Hadoop File System. When users use high-level functions like dask.dataframe.read_csv('hdfs:///path/to/files.*.csv') Dask talks to the HDFS name node, finds the locations of all of the blocks of data, and sends that information to the scheduler so that it can make smarter decisions and improve load times for users.
However, it seems that get_block_locations() was removed from the HDFS filesystem backend, so my question is: what is the current state of Dask with regard to HDFS? Is it sending computation to nodes where the data is local? Is the scheduler optimized to take data locality on HDFS into account?
Quite right: with the arrival of Arrow's HDFS interface, which is now preferred over hdfs3, block locations are no longer considered by workloads accessing HDFS, since Arrow's implementation doesn't include the get_block_locations() method.
However, we already wanted to remove the somewhat convoluted code which made this work, because we found that the inter-node bandwidth on test HDFS deployments was good enough that locality made little practical difference in most workloads. The extra constraints on the size of the blocks versus the size of the partitions you would like in memory added a further layer of complexity.
By removing the specialised code, we could avoid the very special case that was being made for HDFS as opposed to external cloud storage (s3, gcs, azure) where it didn't matter which worker accessed which part of the data.
In short, yes the docs should be updated.
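From the user's point of view nothing changes at the API level; here is a minimal sketch of the high-level call quoted above (the scheduler address and HDFS path are placeholders, and pyarrow must be installed):

    # Sketch only: assumes a running Dask cluster and an HDFS path (placeholders).
    import dask.dataframe as dd
    from dask.distributed import Client

    client = Client("scheduler-address:8786")  # hypothetical scheduler address

    # Dask asks the filesystem layer (pyarrow nowadays) for the matching files,
    # builds partitions from them, and the scheduler assigns the resulting tasks
    # to workers without any HDFS-specific locality logic.
    df = dd.read_csv("hdfs:///path/to/files.*.csv")
    print(df.head())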

Optimal File Size of S3 Files for Hadoop Job on EMR?

I am trying to determine the ideal size for a file stored in S3 that will be used in Hadoop jobs on EMR.
Currently I have large text files of around 5–10 GB. I am worried about the delay in copying these large files to HDFS to run MapReduce jobs. I have the option of making these files smaller.
I know S3 files are copied in parallel to HDFS when using S3 as an input directory in MapReduce jobs. But will a single large file be copied to HDFS using single thread, or will this file be copied as multiple parts in parallel? Also, does Gzip compression affect copying a single file in multiple parts?
There are two factors to consider:
Compressed files cannot be split between tasks. For example, if you have a single, large, compressed input file, only one Mapper can read it.
Using more, smaller files makes parallel processing easier but there is more overhead when starting the Map/Reduce jobs for each file. So, fewer files are faster.
Thus, there is a trade-off between the size and quantity of files. The recommended size is listed in a few places:
The Amazon EMR FAQ recommends:
If you are using GZIP, keep your file size to 1–2 GB because GZIP files cannot be split.
The Best Practices for Amazon EMR whitepaper recommends:
That means that a single mapper (a single thread) is responsible for fetching the data from Amazon S3. Since a single thread is limited to how much data it can pull from Amazon S3 at any given time (throughput), the process of reading the entire file from Amazon S3 into the mapper becomes the bottleneck in your data processing workflow. On the other hand, if your data files can be split, more than a single mapper can process your file. The suitable size for such data files is between 2 GB and 4 GB.
The main goal is to keep all of your nodes busy by processing as many files in parallel as possible, without introducing too much overhead.
Oh, and keep using compression. The savings in disk space and data transfer time outweigh the benefit of enabling splitting.
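If you do want gzip plus parallelism, one option is to pre-split the input into parts of roughly the recommended size before compressing and uploading, so each part gets its own mapper. A rough sketch (the file names and the 1 GB target are illustrative, not a recommendation from the sources above):

    # Sketch only: split a large text file into ~1 GB gzip parts so each part
    # can be handled by its own mapper, even though gzip itself is not splittable.
    import gzip

    PART_SIZE = 1 * 1024**3  # ~1 GB of uncompressed text per part

    def split_and_compress(src_path, dst_prefix):
        part, written = 0, 0
        out = gzip.open(f"{dst_prefix}-{part:05d}.gz", "wt")
        with open(src_path, "rt") as src:
            for line in src:
                if written >= PART_SIZE:            # roll over to the next part
                    out.close()
                    part, written = part + 1, 0
                    out = gzip.open(f"{dst_prefix}-{part:05d}.gz", "wt")
                out.write(line)
                written += len(line)
        out.close()

    split_and_compress("big-input.txt", "part")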

Share data across Amazon Elastic Beanstalk nodes

I have a Spring Boot application which downloads around 300 MB of data at start-up and saves it to the path /app/local/mydata. Currently I have just one dev environment with a single node, so this is not a problem. However, once I create a prod instance with (say) 10 nodes, it would be a waste of bandwidth for each node to individually download the same 300 MB of data. It would put a lot of stress on the service the data is downloaded from, and there is a cost associated with data flowing in/out of EC2.
I can build logic using a touchfile to make sure that only one box downloads the data and the others just wait until the download is complete. However, I don't know where to download this data so that the other nodes can read it too.
Any suggestions?
Download it to S3 if you want to keep it in a file, but it sounds like you might need to put the data in a database (RDS) or maybe cache it in Redis (ElastiCache).
I'm not sure what a "touchfile" is but I assume you mean some sort of file lock mechanism. I don't see that as the best option for coordinating this across multiple servers. I would probably use a DynamoDB table with consistent reads and conditional writes as a distributed locking mechanism.
How often does the data you are downloading change? Perhaps you could just schedule a Lambda function to refresh the data periodically and update a database or something?
In general, you need to stop thinking about using the web server's local file system for this sort of thing.
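A rough sketch of the DynamoDB locking idea suggested above (the table name, key schema, and lock id are hypothetical; the table is assumed to already exist with lock_id as its partition key):

    # Sketch only: a conditional write acts as a distributed "who downloads" lock.
    import boto3
    from botocore.exceptions import ClientError

    table = boto3.resource("dynamodb").Table("download-locks")

    def try_acquire_lock(lock_id="dataset-v1"):
        """Return True if this node won the right to download the data."""
        try:
            table.put_item(
                Item={"lock_id": lock_id},
                # Succeeds only if no other node has written this item first.
                ConditionExpression="attribute_not_exists(lock_id)",
            )
            return True
        except ClientError as e:
            if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return False
            raise

    if try_acquire_lock():
        print("This node downloads the data and publishes it (e.g. to S3).")
    else:
        print("Another node is downloading; wait for the shared copy.")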

EC2 spark master instance size

I intend to set up a Spark cluster on EC2. How many resources does the Spark master instance actually need? Since the master is not involved in processing any of the tasks, can it be the smallest EC2 instance?
This obviously depends on what kinds of jobs you're planning to run, how big the cluster is, etc., so in that sense the advice to simply try different configurations is good. However, in my purely personal experience the driver instance should be at least at the level of the slave instances. This is mainly for two reasons.
First of all, there are times when you need the result of the job in a single place. Maybe you just don't want to spend time combining files, maybe you need the results in some specific order which would be hard to achieve in a distributed way, etc., but this means the driver should be able to hold all the data (since rdd.collect gathers the results to the driver instance).
Second of all, many of the shuffle-based operations seem to require a lot of memory from the driver. I'm not exactly sure about the details of why this happens (if anyone knows, please do share), but I can't count the number of times I've seen reduceByKey causing an out-of-memory error on the driver.
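To illustrate both points, here is a small PySpark sketch (the data and sizes are made up): the reduction runs on the executors, but collect() materialises every result in the driver's memory, so the driver has to be sized for whatever you bring back.

    # Sketch only: illustrative data; run with enough driver memory.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("driver-memory-demo").getOrCreate()
    sc = spark.sparkContext

    pairs = sc.parallelize([(i % 1000, 1) for i in range(1_000_000)])

    # The shuffle and reduction happen on the executors...
    counts = pairs.reduceByKey(lambda a, b: a + b)

    # ...but collect() pulls all results back to the driver at once.
    result = counts.collect()
    print(len(result))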
Edit: I have assumed you were using Spark's spark-ec2 script, which I believe does install the NameNode on the master instance. If the NameNode is not installed on the master instance, however, my answer has no validity, as correctly pointed out by @DemetriKots in the comments.
Although the master instance is not involved in data processing, it plays a major role in workload management and resource allocation, e.g. (all info is taken from the sources):
NameNode
The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files in the file system, and tracks where across the cluster the file data is kept. It does not store the data of these files itself.
Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.
This Hortonworks document (look for "Hardware Recommendations for Hadoop" in the left index) specifies some recommendations for the master instance in a Hadoop cluster. While it might not be adequate for the slave instances (due to Spark's memory usage), I would say it can be useful in the case of the master instance in a Spark cluster.

A distributed file storage for blob data

I wanted to use HDFS to store images/files, but I read online about the drawbacks of HDFS due to the single NameNode. I also came across Cassandra, a NoSQL distributed database, but once again it does not perform well for blob storage. Any suggestions on what to do for this problem, i.e. a distributed file store for blob data?
There is a new version of HDFS coming (in beta) which solves the single-point-of-failure problem of the NameNode. Look at HDFS Federation and NameNode High Availability in CDH 4. You can find more information about them on Cloudera's website.