Hadoop Distributed File System - hdfs

I would like to modify the way an input file is split into blocks and stored in the Hadoop Distributed File System (for example, it splits the file based on block size, but my application requires the file to be split based on its content).
So I would like to know exactly which class splits the file into blocks based on Hadoop's block size property.

Blocks are the abstraction for HDFS and InputSplits are the abstraction for MapReduce. By default, one HDFS block corresponds to one InputSplit, but this can be changed.
By default, HDFS divides a file into blocks of exactly 64 MB, and a block boundary may fall in the middle of a record. It is up to the InputFormat to create InputSplits from those blocks of data, in the case of file-based input formats. Each InputSplit is then processed by a separate mapper.
"for example, it splits the file based on block size, but my application requires the file to be split based on its content"
Think in terms of InputSplits and create a new InputFormat as per the application requirement. Here are some tutorials (1, 2 and 3) on creating a new InputFormat.
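
For reference, a minimal sketch (new mapreduce API) of a custom InputFormat that keeps each file in a single split, so that a custom RecordReader can then decide record boundaries purely from the file content; the class name is illustrative and is not taken from the linked tutorials:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Illustrative sketch: never split a file at block boundaries, so record
// boundaries are decided entirely by the RecordReader, not by block size.
public class NonSplittableTextInputFormat extends TextInputFormat {
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // one InputSplit per file, regardless of HDFS blocks
    }
}

A job would opt into it with job.setInputFormatClass(NonSplittableTextInputFormat.class); a full content-based solution would additionally override createRecordReader() (or getSplits()) to produce splits and records according to the application's own rules.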

Related

How can I explicitly specify the size of the files to be split or the number of files?

Situation:
If I only specify the partition clause, the data is divided into multiple files, and the size of each file is less than 1 MB (~40 files).
What I am thinking of:
I want to explicitly specify the size of the files to be split or the number of files when registering data with CTAS or INSERT INTO.
I have read this article: https://aws.amazon.com/premiumsupport/knowledge-center/set-file-number-size-ctas-athena/
Problem:
Using the bucketing method (as described in the article above) can help me specify the number of files or the file size. However, the article also says: "Note: The INSERT INTO statement isn't supported on bucketed tables". I would like to register data daily into the data mart with Athena's INSERT INTO.
What is the best way to build a partitioned data mart without compromising search efficiency? Is it best to register the data with Glue and save it as one file?

AWS Athena - how to process huge results file

Looking for a way to process a ~4 GB file which is the result of an Athena query, and I am trying to find out:
Is there some way to split Athena's query result file into smaller pieces? As I understand, it is not possible from the Athena side. It also looks like it is not possible to split it with Lambda: the file is too large, and it seems s3.open(input_file, 'r') does not work in Lambda :(
Are there some other AWS services that can solve this issue? I want to split this CSV file into small pieces (about 3-4 MB) to send them to an external source (POST requests).
You can use CTAS with Athena and its built-in partitioning capabilities.
A common way to use Athena is to ETL raw data into a more optimized and enriched format. You can turn every SELECT query that you run into a CREATE TABLE ... AS SELECT (CTAS) statement that will transform the original data into a new set of files in S3 based on your desired transformation logic and output format.
It is usually advised to store the newly created table in a compressed format such as Parquet; however, you can also define it to be CSV ('TEXTFILE').
Lastly, it is advised to partition a large table into meaningful partitions to reduce the cost of querying the data, especially in Athena, which charges by data scanned. Meaningful partitioning depends on your use case and the way you want to split your data. The most common way is to use time partitions, such as yearly, monthly, weekly, or daily. Use the logic by which you would like to split your files as the partition key of the newly created table.
CREATE TABLE random_table_name
WITH (
format = 'TEXTFILE',
external_location = 's3://bucket/folder/',
partitioned_by = ARRAY['year','month'])
AS SELECT ...
When you go to s3://bucket/folder/ you will have a long list of folders and files based on the selected partition.
Note that you might have different sizes of files based on the amount of data in each partition. If this is a problem or you don't have any meaningful partition logic, you can add a random column to the data and partition with it:
substr(to_base64(sha256(to_utf8(some_column_in_your_data))), 1, 1) as partition_char
Or you can use bucketing and provide how many buckets you want:
CREATE TABLE random_table_name
WITH (
format = 'TEXTFILE',
external_location = 's3://bucket/folder/',
bucketed_by = ARRAY['column_with_high_cardinality'],
bucket_count = 100)
AS SELECT ...
You won't be able to do this with Lambda as your memory is maxed out around 3GB and your file system storage is maxed out at 512 MB.
Have you tried just running the split command on the filesystem (if you are using a Unix based OS)?
If this job is reoccurring and needs to be automated and you wanted to still be "serverless", you could create a Docker image that contains a script to perform this task and then run it via a Fargate task.
As for the specifics of how to use split, this other Stack Overflow question may help:
How to split CSV files as per number of rows specified?
You can ask S3 for a range of the file with the Range option. This is a byte range (inclusive); for example, bytes=0-999 gets the first 1000 bytes.
If you want to process the whole file in the same Lambda invocation, you can request a range that is about what you think you can fit in memory, process it, and then request the next. Process up to the last line break you see, and prepend the remaining partial line to the next chunk. As long as you make sure the previous chunk gets garbage collected and you don't aggregate a huge data structure, you should be fine.
You can also run multiple invocations in parallel, each processing its own chunk. You could have one invocation check the file size and then invoke the processing function as many times as necessary to ensure each gets a chunk it can handle.
Just splitting the file into equal parts won't work, though: you have no way of knowing where lines end, so a chunk may split a line in half. If you know the maximum byte size of a line, you can pad each chunk with that amount (both at the beginning and the end). When you read a chunk, you skip ahead until you see the last line break in the start padding, and you skip everything after the first line break inside the end padding, with special handling of the first and last chunk, obviously.
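
To make the ranged-GET approach above concrete, here is a rough sketch in Java with the AWS SDK (v1); the same idea works from any SDK. The bucket, key, chunk size and process() body are placeholders, and it assumes the CSV is UTF-8 text where cutting only at line breaks is acceptable:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;

import java.io.IOException;

// Rough sketch: read an S3 object in fixed-size byte ranges and emit whole
// lines, carrying the trailing partial line over to the next chunk.
public class RangedCsvReader {
    private static final long CHUNK_SIZE = 8L * 1024 * 1024; // ~8 MB per ranged GET

    public static void main(String[] args) throws IOException {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "my-bucket";                 // placeholder
        String key = "athena-results/result.csv";    // placeholder

        long objectSize = s3.getObjectMetadata(bucket, key).getContentLength();
        String carry = "";                           // partial line carried to the next chunk

        for (long start = 0; start < objectSize; start += CHUNK_SIZE) {
            long end = Math.min(start + CHUNK_SIZE, objectSize) - 1;   // Range is inclusive
            S3Object chunk = s3.getObject(new GetObjectRequest(bucket, key).withRange(start, end));
            String text = carry + IOUtils.toString(chunk.getObjectContent());
            chunk.close();

            int lastNewline = text.lastIndexOf('\n');
            if (lastNewline < 0) {                   // no complete line yet, keep accumulating
                carry = text;
                continue;
            }
            process(text.substring(0, lastNewline + 1));   // only complete lines
            carry = text.substring(lastNewline + 1);       // prepend leftover to the next chunk
        }
        if (!carry.isEmpty()) {
            process(carry);                          // last line without a trailing newline
        }
    }

    private static void process(String lines) {
        // placeholder: e.g. batch these lines into 3-4 MB POST requests
        System.out.println(lines.length() + " characters of complete lines");
    }
}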

Read Large CSV from S3 using Lambda

I have multiple compressed (.gzip) CSV files in S3 which I wish to parse, preferably using Lambda. The largest compressed file seen so far is 80 MB. On decompressing, the file size becomes 1.6 GB. It is estimated that a single uncompressed file can be approximately 2 GB (the files are stored compressed in S3).
After parsing, I am interested in selected rows from the CSV file. I do not expect the memory used by the filtered rows to be more than 200 MB.
However, given Lambda's limits on time (15 min) and memory (3 GB), is using Lambda for such a use case a feasible option in the longer run? Any alternatives to consider?
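
For illustration only, a rough Java sketch of the streaming parse described above: the gzipped object is decompressed on the fly and only the filtered rows are kept in memory. The bucket, key and filter predicate are placeholders, and whether this fits within Lambda's limits still depends on the actual data and the row filter:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.GZIPInputStream;

// Illustrative sketch: stream a gzipped CSV out of S3 and keep only the
// interesting rows, so the ~1.6 GB uncompressed file is never fully in memory.
public class GzipCsvFilter {
    public static List<String> filterRows(String bucket, String key) throws IOException {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        List<String> selected = new ArrayList<>();

        try (S3Object object = s3.getObject(bucket, key);
             BufferedReader reader = new BufferedReader(new InputStreamReader(
                     new GZIPInputStream(object.getObjectContent()), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (isInteresting(line)) {   // only filtered rows are retained (~200 MB expected)
                    selected.add(line);
                }
            }
        }
        return selected;
    }

    private static boolean isInteresting(String csvLine) {
        // placeholder predicate; a real CSV parser would handle quoted fields
        return csvLine.contains("some-value");
    }
}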

Apache Hadoop: Insert compressed data into HDFS

I need to upload 100 text files into HDFS to do some data transformation with Apache Pig.
In your opinion, what is the best option:
a) Compress all the text files and upload only one file,
b) Load all the text files individually?
It depends on your file sizes, cluster parameters, and processing methods.
If your text files are comparable in size with HDFS block size (i.e. block size = 256 MB, file size = 200 MB), it makes sense to load them as is.
If your text files are very small, you would hit the typical HDFS small-files problem: each file will occupy one HDFS block (not physically), so the NameNode (which handles the metadata) will suffer some overhead from managing a lot of blocks. To solve this you could merge your files into a single one, use Hadoop Archives (HAR), or use some custom file format (SequenceFiles, for example).
If a custom format is used, you will have to do extra work for processing: custom input formats will be required.
In my opinion, 100 files is not enough to significantly affect NameNode performance, so both options seem viable.
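If the SequenceFile route is chosen, a minimal sketch of packing many small files into one SequenceFile (file name as key, raw bytes as value) might look like this; the paths are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Illustrative sketch: pack many small text files into a single SequenceFile
// (key = original file name, value = raw file bytes) to avoid the small-files problem.
public class SmallFilePacker {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path inputDir = new Path("/data/small-text-files");   // placeholder
        Path packed = new Path("/data/packed.seq");           // placeholder

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(packed),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            for (FileStatus status : fs.listStatus(inputDir)) {
                if (status.isFile()) {
                    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                    try (FSDataInputStream in = fs.open(status.getPath())) {
                        IOUtils.copyBytes(in, buffer, conf, false);
                    }
                    writer.append(new Text(status.getPath().getName()),
                                  new BytesWritable(buffer.toByteArray()));
                }
            }
        }
    }
}

A MapReduce job could then read the packed file with SequenceFileInputFormat instead of listing and opening each small file separately.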

Cloudera Impala: How does it read data from HDFS blocks?

I have a basic question about Impala. We know that Impala allows you to query data that is stored in HDFS. Now, suppose a file is split into multiple blocks, and a line of text is spread across two blocks. In Hive/MapReduce, the RecordReader takes care of this.
How does Impala read the record in such a scenario?
Referencing my answer on the Impala user list:
When Impala finds an incomplete record (which can happen when scanning certain file formats such as text or RCFile), it will continue to read incrementally from the next block(s) until it has read the entire record. Note that this may require a small amount of 'remote reads' (reading from a remote DataNode), but this is usually very small compared to the entire block, which should have been read locally (and ideally via a short-circuit read).