I am trying to open the following path in HDFS:
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/flume/tweets
I opened a browser and went to http://localhost:50070/dfshealth.html#tab-overview
I get the following error:
There are 2 missing blocks.
The following files may be corrupted:
blk_1073742237
/hbase/data/hbase/meta/1588230740/info/c5da7e591d294ae58968f4d0f2e8ffd9
blk_1073742231
/hbase/WALs/quickstart.cloudera,60020,1482726320014-splitting/quickstart.cloudera%2C60020%2C1482726320014..meta.1482726370496.meta
The page explains how to find a possible solution, but is there a simpler way of solving this problem?
This might be helpful:
Check the corrupted blocks using the command:
hdfs fsck <path> -list-corruptfileblocks
e.g. hdfs fsck /hbase -list-corruptfileblocks
Move the corrupted files to /lost+found using:
hdfs fsck <path> -move
e.g. hdfs fsck /hbase -move
Or delete the corrupted files using:
hdfs fsck <path> -delete
e.g. hdfs fsck /hbase -delete
Sometimes you'll need superuser privileges; in that case, prefix your command with sudo -u hdfs, e.g. sudo -u hdfs hdfs fsck /hbase -list-corruptfileblocks
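Put together, a typical pass might look like the sketch below, using the /hbase path from the example; whether you pick -move or -delete depends on whether you want to keep whatever can be salvaged.
sudo -u hdfs hdfs fsck /hbase -list-corruptfileblocks     # see which files are affected
sudo -u hdfs hdfs fsck /hbase -files -blocks -locations   # optional: per-file block detail
sudo -u hdfs hdfs fsck /hbase -move                       # move the corrupted files to /lost+found
# or: sudo -u hdfs hdfs fsck /hbase -delete                # drop the corrupted files outright
Deleting is irreversible, so it is worth listing the affected files first.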
Related
I am not able to run any hadoop command; even -ls is not working (I get the same error), and I am not able to create a directory using
hadoop fs -mkdir users.
You can create a directory in HDFS using the command:
$ hdfs dfs -mkdir <path>
and then use the command below to copy data from a local file to HDFS:
$ hdfs dfs -put /root/Hadoop/sample.txt <your_hdfs_dir_path>
Alternatively, you can use:
$ hdfs dfs -copyFromLocal /root/Hadoop/sample.txt <your_hdfs_dir_path>
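For example, a complete sequence with placeholder paths (adjust the names to your own setup) might be:
$ hdfs dfs -mkdir -p /user/cloudera/input       # -p also creates missing parent directories
$ hdfs dfs -put /root/Hadoop/sample.txt /user/cloudera/input/
$ hdfs dfs -ls /user/cloudera/input             # confirm the file is now in HDFS
That said, if even hadoop fs -ls fails with an error, the problem is usually that HDFS itself is not reachable, so check that the namenode is running before retrying these commands.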
I am using PuTTY SSH to import my CSV file to HUE AWS HDFS.
So far I have made a directory using the command
hadoop fs -mkdir /data
After creating the directory, I am trying to import my CSV file using the command:
hadoop fs -cp s3://cis4567-fall19/Hadoop/SalesJan2009.csv
However, I am getting an error that states:
-cp: Not enough arguments: expected 2 but got 1
In your command, the destination HDFS directory to which the file needs to be copied is missing.
The syntax is hadoop fs -cp <source_HDFS_dir/file> <destination_HDFS_dir/>
The correct command should be
hadoop fs -cp s3://cis4567-fall19/Hadoop/SalesJan2009.csv /tmp
Please note that I've mentioned the destination as /tmp just as an example. You can replace it with the required destination directory name.
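As a quick sanity check after the copy (still using /tmp as the example destination), you can list the directory and peek at the first few lines:
hadoop fs -ls /tmp                                 # the CSV should now show up here
hadoop fs -cat /tmp/SalesJan2009.csv | head -5     # print the first few lines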
In order to access HDFS, I unknowingly ran the following commands as the root user (I had been trying to resolve the error below):
sudo su - hdfs
hdfs dfs -mkdir /user/root
hdfs dfs -chown root:hdfs /user/root
exit
Now when I try to access HDFS, it says:
Call From headnode.name.com/192.168.21.110 to headnode.name.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
What can I do to resolve this issue? It would be great if you could also explain what the command 'hdfs dfs -chown root:hdfs /user/root' does.
I am using HDP 3.0.1.0 (Ambari)
It seems like your HDFS is down. Check whether your namenode is up.
The command hdfs dfs -chown root:hdfs /user/root changes the ownership of the HDFS directory /user/root (if it exists) to user root and group hdfs. User hdfs should be able to perform this command (or any other HDFS command, for that matter). The "root" user of HDFS is hdfs.
If you want to make user root an HDFS superuser, you can change the group of the root user to hdfs by running (as user root) usermod -g hdfs root, and then running (as user hdfs) hdfs dfsadmin -refreshUserToGroupsMappings. This syncs the user-to-groups mappings on the server with HDFS, making user root a superuser.
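For the ConnectException itself, a few quick checks on the namenode host can narrow things down; this is just a sketch, and in Ambari the HDFS service status page will tell you the same thing:
jps | grep -i namenode      # is the NameNode JVM actually running?
hdfs dfsadmin -report       # fails with the same ConnectException while HDFS is down
netstat -tlnp | grep 8020   # is anything listening on the NameNode RPC port?
If the NameNode is stopped, start the HDFS service from Ambari and then retry your command.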
I want to see the contents of the HDFS file into which I imported MySQL data using Sqoop.
I ran the command hadoop dfs -cat /user/cloudera/products/part-m-00000.
I am getting the error:
cat: Zero blocklocations for /user/cloudera/products/part-m-00000. Name node is in safe mode.
It is not possible to read HDFS data while the namenode is in safe mode. To leave safe mode, run the command below:
hadoop dfsadmin -safemode leave
Then cat the file in hdfs.
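For reference, the full sequence might look like this (hdfs dfsadmin is the newer form of the same command, and the file path is the one from your question):
hdfs dfsadmin -safemode get       # check the current safe mode status
hdfs dfsadmin -safemode leave     # take the namenode out of safe mode
hadoop fs -cat /user/cloudera/products/part-m-00000
If the namenode put itself into safe mode because blocks are missing or under-replicated, it is worth running hdfs fsck / first rather than just forcing it out.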
I'm trying to move my daily Apache access log files to a Hive external table by copying the daily log files to the relevant HDFS folder for each month.
I tried to use a wildcard, but it seems that hdfs dfs doesn't support it? (The documentation seems to say that it should.)
Copying individual files works:
$ sudo HADOOP_USER_NAME=myuser hdfs dfs -put "/mnt/prod-old/apache/log/access_log-20150102.bz2" /user/myuser/prod/apache_log/2015/01/
But all of the following ones throw "No such file or directory":
$ sudo HADOOP_USER_NAME=myuser hdfs dfs -put "/mnt/prod-old/apache/log/access_log-201501*.bz2" /user/myuser/prod/apache_log/2015/01/
put: `/mnt/prod-old/apache/log/access_log-201501*.bz2': No such file or directory
$ sudo HADOOP_USER_NAME=myuser hdfs dfs -put /mnt/prod-old/apache/log/access_log-201501* /user/myuser/prod/apache_log/2015/01/
put: `/mnt/prod-old/apache/log/access_log-201501*': No such file or directory
The environment is on Hadoop 2.3.0-cdh5.1.3
I'm going to answer my own question.
So hdfs dfs -put does work with wildcards; the problem is that the input directory is not a local directory but a mounted SSHFS (FUSE) drive.
It seems that SSHFS is the one that cannot handle the wildcard characters.
Below is proof that hdfs dfs -put works just fine with wildcards when using the local filesystem rather than the mounted drive:
$ sudo HADOOP_USER_NAME=myuser hdfs dfs -put /tmp/access_log-201501* /user/myuser/prod/apache_log/2015/01/
put: '/user/myuser/prod/apache_log/2015/01/access_log-20150101.bz2': File exists
put: '/user/myuser/prod/apache_log/2015/01/access_log-20150102.bz2': File exists
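If anyone hits the same thing and cannot first move the files off the SSHFS mount, one possible workaround (a sketch, untested over SSHFS) is to let find enumerate the matching files on the mount instead of relying on shell glob expansion, and put them one at a time:
$ find /mnt/prod-old/apache/log -name 'access_log-201501*.bz2' -print0 |
      while IFS= read -r -d '' f; do
          sudo HADOOP_USER_NAME=myuser hdfs dfs -put "$f" /user/myuser/prod/apache_log/2015/01/
      done
This sidesteps wildcard expansion on the mount entirely, since find matches the names itself while reading the directory.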