This is the path after I deleted the file from the existing folder:
Moved: 'hdfs://nameservice1/user/edureka_978336/Assignment24/abc.txt' to trash at: hdfs://nameservice1/user/edureka_978336/.Trash/Current/user/edureka_978336/Assignment24/abc.txt
Now I'm trying to restore it with the mv command, but it's not working:
hdfs dfs -mv /user/edureka_978336/.Trash/Current/user/edureka_978336/Assignment24/abc.txt /user/edureka_978336/Assignment24
Can you paste the error you get when you say it's not working?
hdfs dfs -mv sourcePath targetPath
This command should work for moving the file back from the trash. Make sure you have permission to pull the data from the trash. You can try running it with sudo:
sudo -u <user.name> hdfs dfs -mv sourcePath targetPath
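For example, with the paths from the question, the restore could look like this (a minimal sketch, assuming the destination directory /user/edureka_978336/Assignment24 still exists; recreate it if it was deleted along with the file):
# recreate the destination directory if it no longer exists
hdfs dfs -mkdir -p /user/edureka_978336/Assignment24
# move the file back out of the trash
hdfs dfs -mv /user/edureka_978336/.Trash/Current/user/edureka_978336/Assignment24/abc.txt /user/edureka_978336/Assignment24/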
Actually, the move should work if all your paths are correct.
But the important thing is how long the files are retained in the trash.
This value is configured in core-site.xml as shown below.
<property>
<name>fs.trash.interval</name>
<value>30</value>
</property>
The value is in minutes; the files are permanently deleted after the specified time.
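To check which value is actually in effect on your cluster, you can read the key with hdfs getconf (assuming the HDFS client on your machine sees the same core-site.xml):
# print the effective trash retention interval, in minutes (0 means trash is disabled)
hdfs getconf -confKey fs.trash.interval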
More details about restoring the file are here; take a look:
https://www.linkedin.com/pulse/recovering-deleted-hdfs-files-cloudera-certified-developer-hadoop-/
Related
I have a folder that contains a large number of subfolders whose names are dates from 2018. In my HDFS I have created a folder of just December dates (formatted 2018-12-), and I need to delete specifically days 21-25. I copied this folder from my HDFS to my Docker container and used the command
rm -r *[21-25]
in the folder, and it worked as expected. But when I run the same command adapted to HDFS,
hdfs dfs -rm -r /home/cloudera/logs/2018-Dec/*[21-25]
it gives me the error
rm: `/home/cloudera/logs/2018-Dec/*[21-25]': No such file or directory
If you need something explained in more detail, leave a comment. I'm brand new to all of this and don't 100% understand how to describe some of these things.
I figured it out with the help of @Barmer. I was referring to my local system's base directory, and I also had to change the pattern to 2[1-5]. So the command ended up being hdfs dfs -rm -r /user/cloudera/logs/2018-Dec/*2[1-5].
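If you want to double-check what a pattern matches before deleting, you can list it first; the pattern is quoted so the local shell passes it through and HDFS does the globbing (paths as in the answer above):
# see what the glob matches before removing anything
hdfs dfs -ls '/user/cloudera/logs/2018-Dec/*2[1-5]'
# then delete the matching days
hdfs dfs -rm -r '/user/cloudera/logs/2018-Dec/*2[1-5]'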
I was looking for a way to extract an iso file without root access.
I succeeded using xorriso.
I used this command:
xorriso -osirrox on -indev image.iso -extract / extracted_path
Now when I want to delete the extracted files I get a permission denied error.
lsattr lists -------------e-- for all files.
ls -l lists -r-xr-xr-x for all files.
I tried chmod go+w on a test file but still can't delete it.
Can anyone help me out?
Obviously your files were marked read-only in the ISO. xorriso preserves the permissions when extracting files.
The reason why you cannot remove the test file after chmod +w is that the directory which holds that file is still read-only. (Anyway, your chmod command did not give w-permission to the owner of the file.)
Try this tree changing command:
chmod -R u+w extracted_path
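For example, the whole cleanup could look like this (a sketch, reusing image.iso and extracted_path from the commands above):
# extract without root, then make the whole tree writable for the owner and remove it
xorriso -osirrox on -indev image.iso -extract / extracted_path
chmod -R u+w extracted_path
rm -r extracted_path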
Have a nice day :)
Thomas
I deleted a folder from HDFS, I found it under
/user/hdfs/.Trash/Current/
but I can't restore it. I looked in the forum but didn't find a good solution.
Does someone have a solution? Can you help me restore my folder to the right directory?
Thank you very much
Did you try cp or mv? E.g.,
hdfs dfs -cp /user/hdfs/.Trash/Current/ /hdfs/Current
(cp copies directories recursively, so no -r flag is needed.)
Before moving your directory back, you should locate where your file is:
hadoop fs -lsr /user/<user-name>/.Trash | less
E.g., you may find:
-rw-r--r-- 3 <user-name> supergroup 111792 2020-06-28 13:17 /user/<user-name>/.Trash/200630163000/user/<user-name>/dir1/dir2/file
If dir1 is your deleted dir, move it back:
hadoop fs -mv /user/<user-name>/.Trash/200630163000/user/<user-name>/dir1 <destination>
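If the destination directory does not exist yet, create it first (a sketch using the same placeholder paths as above):
# create the destination, then move the deleted dir back out of the trash
hadoop fs -mkdir -p <destination>
hadoop fs -mv /user/<user-name>/.Trash/200630163000/user/<user-name>/dir1 <destination>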
To restore your file from
/user/hdfs/.Trash/Current/<your file>
use the -cp command, like this:
hdfs dfs -cp /user/hdfs/.Trash/Current/<your file> <destination>
Also, you may find that your dir/file name has changed; you can change it back to whatever you want by using -mv, like this:
hdfs dfs -mv <Your deleted filename with its path> <Your new filename with its path>
Example:
hdfs dfs -mv /hdfs/weirdName1613730289428 /hdfs/normalName
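To confirm the restore and the rename, a quick listing of the new path (same example path as above):
# verify the renamed file/dir is where you expect it
hdfs dfs -ls /hdfs/normalName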
Following the standard Spark Streaming example using ssc.textFileStream for reading a file from an HDFS directory, I noticed that trying to read files placed there with mv did not work, whereas cp did. That surprised me.
I am surprised, as cp does not seem like a good idea to me because the file is a copy in progress.
What could be going on here, and why do I read that mv should be used, which seems like the obvious choice?
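For context, this is the comparison described in the question, with placeholder paths (/streaming/input standing in for the directory that ssc.textFileStream watches):
# copying creates a new file with a fresh modification time in the watched directory (this worked)
hdfs dfs -cp /staging/data.txt /streaming/input/
# moving is a rename that keeps the original modification time (this was the case not picked up)
hdfs dfs -mv /staging/data.txt /streaming/input/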
I'm seeing strange behavior from s3cmd.
When running the mv command on multiple files in a folder (one by one), some of the files are only copied to the destination dir but not deleted from the source dir.
Has anyone experienced anything like that?
Thanks in advance,
Oren
S3cmd first copies the object from source to dest, and then deletes it from the source. Apparently it is doing it right (https://github.com/s3tools/s3cmd/blob/master/S3/S3.py) and I've never had this kind of problem.
Are you running on the latest version of s3cmd?
Have you tried to run another version?
Is there some pattern related to the files you are trying to delete (e.g., larger than 1 GB)?
I guess there are some characters that should be escaped in your file names, like ( or ), which s3cmd can't handle well.
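One way to narrow it down: check your version and rerun one of the affected objects with debug output, quoting the key if it contains characters like parentheses or spaces (bucket and key names below are placeholders):
# show the s3cmd version in use
s3cmd --version
# retry a single problem object with debug output
s3cmd -d mv 's3://my-bucket/source-dir/file (1).txt' 's3://my-bucket/dest-dir/'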