Is it possible to run a binary executable on HDFS? I have to process some files on HDFS. The way I've been doing it so far is to hdfs dfs -get the files to my local server, process them, and then hdfs dfs -put them back to HDFS. But this is kind of a hassle, and I'd rather run my processing binary against the data on HDFS directly. Is this possible?
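For concreteness, a minimal sketch of the workflow described above, with hypothetical file names and a hypothetical ./process_files binary:

hdfs dfs -get /data/input/myfile.txt .      # copy the file from HDFS to the local server
./process_files myfile.txt > myfile.out     # run the processing binary locally
hdfs dfs -put myfile.out /data/output/      # copy the result back to HDFS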
Related
I am trying to copy a text file on my Mac Desktop to hdfs, for that purpose I am using this code
hadoop fs -copyFromLocal Users/Vishnu/Desktop/deckofcards.txt /user/gsaikiran/cards1
But it is throwing an Error
copyFromLocal: `deckofcards.txt': No such file or directory
The file definitely exists on the Desktop.
Your command is missing the leading slash / in the source file path. It should be:
hadoop fs -copyFromLocal /Users/Vishnu/Desktop/deckofcards.txt /user/gsaikiran/cards1
Or, using the preferred form:
hdfs dfs -put /Users/Vishnu/Desktop/deckofcards.txt /user/gsaikiran/cards1
Also, if you are dealing with HDFS specifically, it is better to use the hdfs dfs syntax instead of hadoop fs [1]. (It doesn't change the output in your case, but hdfs dfs is designed specifically for interacting with HDFS, hadoop fs works with any file system Hadoop supports, and the older hadoop dfs form is the deprecated one.)
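To confirm the file landed, a quick check using the same destination path as above might look like the following (whether cards1 ends up as the file itself or as a file inside a cards1 directory depends on whether that path already existed as a directory):

hdfs dfs -ls /user/gsaikiran/
hdfs dfs -cat /user/gsaikiran/cards1 | head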
I want to see the contents of an HDFS file into which I imported MySQL data using Sqoop.
I ran the command hadoop dfs -cat /user/cloudera/products/part-m-00000.
I am getting this error:
cat: Zero blocklocations for /user/cloudera/products/part-m-00000. Name node is in safe mode.
It is not possible to read HDFS data while the NameNode is in safe mode. To leave safe mode, run the command below:
hadoop dfsadmin -safemode leave
Then cat the file in HDFS.
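Putting it together, a minimal sketch of the sequence, using the path from the question (hdfs dfsadmin is the non-deprecated form of the admin command on current releases):

hdfs dfsadmin -safemode get     # check whether the NameNode is still in safe mode
hdfs dfsadmin -safemode leave   # force it out of safe mode
hdfs dfs -cat /user/cloudera/products/part-m-00000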
I'm trying to move my daily Apache access log files into a Hive external table by copying each day's log file to the relevant HDFS folder for its month.
I tried to use a wildcard, but it seems that hdfs dfs doesn't support it (the documentation seems to say that it should).
Copying individual files works:
$ sudo HADOOP_USER_NAME=myuser hdfs dfs -put "/mnt/prod-old/apache/log/access_log-20150102.bz2" /user/myuser/prod/apache_log/2015/01/
But all of the following ones throw "No such file or directory":
$ sudo HADOOP_USER_NAME=myuser hdfs dfs -put "/mnt/prod-old/apache/log/access_log-201501*.bz2" /user/myuser/prod/apache_log/2015/01/
put: `/mnt/prod-old/apache/log/access_log-201501*.bz2': No such file or directory
$ sudo HADOOP_USER_NAME=myuser hdfs dfs -put /mnt/prod-old/apache/log/access_log-201501* /user/myuser/prod/apache_log/2015/01/
put: `/mnt/prod-old/apache/log/access_log-201501*': No such file or directory
The environment is Hadoop 2.3.0-cdh5.1.3.
I'm going to answer my own question.
So hdfs dfs -put does work with wildcards; the problem is that the input directory is not a local directory but a mounted SSHFS (FUSE) drive.
It seems that it is the SSHFS mount that cannot handle the wildcard expansion.
Below is proof that hdfs dfs -put works just fine with wildcards when the source is on the local filesystem rather than the mounted drive:
$ sudo HADOOP_USER_NAME=myuser hdfs dfs -put /tmp/access_log-201501* /user/myuser/prod/apache_log/2015/01/
put: '/user/myuser/prod/apache_log/2015/01/access_log-20150101.bz2': File exists
put: '/user/myuser/prod/apache_log/2015/01/access_log-20150102.bz2': File exists
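One way to narrow this down: the wildcard in an unquoted path is expanded by the invoking shell, so you can ask the shell directly what the pattern matches. If the literal pattern is printed back, nothing could be matched in that directory, which is consistent with the errors above. A hedged check, reusing the paths from the question:

echo /mnt/prod-old/apache/log/access_log-201501*.bz2   # if the literal pattern prints, the glob found no matches on the SSHFS mount
echo /tmp/access_log-201501*                           # on the local filesystem the same pattern expands to the matching file names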
My question is similar to "Spark writing to hdfs not working with the saveAsNewAPIHadoopFile method". I am using Spark 1.1.0 on CDH 5.2.1.
I am trying to save a file to HDFS through Spark's saveAsTextFile method. The job completes successfully, but when I look at the output path I see a _temporary folder with data files inside it, under the various task and attempt folders. This tells me Spark is marking the job as succeeded before the files have been completely moved into the right output folder on HDFS. The same issue occurs with the saveAsParquetFile method. Please let me know if you have any ideas about this.
Thanks
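For comparison, a save that has been fully committed normally leaves a _SUCCESS marker and part-* files directly under the output path, with no lingering _temporary directory. A hedged check, with a hypothetical output path:

hdfs dfs -ls -R /user/myuser/spark_output
# expected layout on a clean commit (roughly):
# /user/myuser/spark_output/_SUCCESS
# /user/myuser/spark_output/part-00000
# /user/myuser/spark_output/part-00001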
When I run Spark locally (non-HDFS), RDD.saveAsObjectFile writes the file to the local file system (e.g. path /data/temp.txt).
When I run Spark on a YARN cluster, RDD.saveAsObjectFile writes the file to HDFS (e.g. path /data/temp.txt).
Is there a way to explicitly target the local file system instead of HDFS when running Spark on a YARN cluster?
You can explicitly specify the file:// scheme in the path argument:
yourRDD.saveAsObjectFile("file:///path/to/local/filesystem")
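One caveat worth adding (a general observation about this setup, not something stated in the answer): on a YARN cluster the tasks run on the worker nodes, so a file:/// path is resolved against each executor's local filesystem, and the part files end up on whichever nodes ran the tasks rather than on the machine that submitted the job. A quick way to see this is to list the path on one of the workers:

ls /path/to/local/filesystem    # run on a worker node; shows the part files written by that node's tasks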