I know I can use ls -ltr to list files in time order, but is there a way to execute a command to only list the files created after the creation date of another file (let's call it acknowledge.tmp)?
I assume that the command will look something like this:
ls -1 /path/to/directory | ???
Use the -newer option to the find command:
-newer file    True if the current file has been modified more recently
               than the argument file.
So your command would be
find /path/to/directory -newer acknowledge.tmp
However, this will descend into, and return results from, sub-directories, and it will also list any matching directories themselves, not just regular files.
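If you only want the files sitting directly in that directory, GNU find's -maxdepth option (or a -prune expression on other finds) stops the descent; a minimal sketch, assuming acknowledge.tmp lives inside that same directory and adding -type f so only regular files are listed:

# GNU find: do not descend into sub-directories
find /path/to/directory -maxdepth 1 -type f -newer /path/to/directory/acknowledge.tmp

# Portable variant: prune every sub-directory instead of descending into it
find /path/to/directory/. ! -name . -prune -type f -newer /path/to/directory/acknowledge.tmp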
I have copied a few files to a path in HDFS, but when I run hdfs dfs -ls path/filename it says no files were found.
hdfs dfs -ls works up to the directory level, but as soon as I add the file name it reports no files found. For one of the files, I copied and pasted the name from Ambari, and after that hdfs dfs -ls path/filename started returning the file.
What is causing this issue?
When you execute hdfs dfs -ls path/filename, you are asking HDFS to show you the contents of that path as a directory; if the end of the path is a file, of course nothing is listed. You must point the command at a directory, not a file.
@saravanan It looks like a permission issue if the file only shows up after using Ambari. Make sure the files have the correct ownership and permissions, then re-run the command. The ls command lists files and folders as described in the documentation.
Here is the full documentation for the ls command:
[root@hdp ~]# hdfs dfs -help ls
-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...] :
List the contents that match the specified file pattern. If path is not
specified, the contents of /user/<currentUser> will be listed. For a directory a
list of its direct children is returned (unless -d option is specified).
Directory entries are of the form:
permissions - userId groupId sizeOfDirectory(in bytes)
modificationDate(yyyy-MM-dd HH:mm) directoryName
and file entries are of the form:
permissions numberOfReplicas userId groupId sizeOfFile(in bytes)
modificationDate(yyyy-MM-dd HH:mm) fileName
-C Display the paths of files and directories only.
-d Directories are listed as plain files.
-h Formats the sizes of files in a human-readable fashion rather than a number of bytes.
-q Print ? instead of non-printable characters.
-R Recursively list the contents of directories.
-t Sort files by modification time (most recent first).
-S Sort files by size.
-r Reverse the order of the sort.
-u Use time of last access instead of modification for display and sorting.
-e Display the erasure coding policy of files and directories.
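If the file only appears after re-pasting its name from Ambari, a hidden character or trailing space in the original name is a likely cause. One way to check (the paths below are placeholders, and the first command assumes names without embedded spaces) is to list the parent directory and inspect the raw names, and to quote or glob the name so the local shell does not alter it:

# Print just the name column and mark invisible characters (cat -A is GNU coreutils)
hdfs dfs -ls /path/to/directory | awk '{print $NF}' | cat -A

# A quoted glob lets HDFS do the matching instead of requiring the exact name
hdfs dfs -ls '/path/to/directory/filename*'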
First of all, the server runs Solaris.
The context of my question is Informatica PowerCenter.
I need to list the files located in the Inbox directory. Basically, the outcome should be one file list per file type, where the different file types are distinguished by the file name. I don't want to update the script every time a new file type appears, so I was thinking of a parameterized shell script that takes the regex, the inbox directory and the file list as parameters.
An example:
/Inbox/ABC.DEFGHI.PAC.AE.1236547.49566
/Inbox/ABC.DEFGHI.PAC.AE.9876543.21036
/Inbox/DEF.JKLMNO.PAC.AI.1236547.49566
... has to result in 2 list files containing the path and file name of the listed files:
/Inbox/PAC.AE.FILELIST
-->/Inbox/ABC.DEFGHI.PAC.AE.1236547.49566
-->/Inbox/ABC.DEFGHI.PAC.AE.9876543.21036
/Inbox/PAC.AI.FILELIST
-->/Inbox/DEF.JKLMNO.PAC.AI.1236547.49566
Assuming all input files follow the naming convention you indicate (when splitting on dots, the 3rd and 4th fields determine the type), this script might do the trick:
#! /usr/bin/env bash
# First parameter, or the current directory if none is given
INPUTDIR=${1:-.}
# Second parameter, or the input directory if none is given
OUTPUTDIR=${2:-$INPUTDIR}
# Filter out directories (ls -p marks them with a trailing "/")
INPUTFILES=$(ls -p "$INPUTDIR" | grep -v "/")
echo "Input: $INPUTDIR, output: $OUTPUTDIR"
for FILE in $INPUTFILES; do
    # Fields 3 and 4 of the dot-separated name give the file type
    FILETYPE=$(echo "$FILE" | cut -d. -f3,4)
    COLLECTION_FILENAME="$OUTPUTDIR/${FILETYPE:-UNKNOWN}.FILELIST"
    echo "$FILE" >> "$COLLECTION_FILENAME"
done
Usage:
./script.sh Inbox Inbox/collections
This will read all files (not directories) from Inbox and write the collection files to Inbox/collections. The file names inside each collection will be sorted alphabetically, since ls sorts its output.
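One thing to keep in mind: the output directory must already exist, and because the loop appends with >>, running the script twice will duplicate entries. Clearing the old lists first, for example with rm -f "$OUTPUTDIR"/*.FILELIST before the loop (using the variable names from the script above), avoids that.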
I am trying to take a text file that contains a list of files and copy them all to a directory. Within this directory, they will have unique directory names. An example of text file the structure can be seen below:
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000003/s01_2011_11_01/a_.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000003/s01_2011_11_01/a_1.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000003/s02_2011_11_11/a_.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000003/s02_2011_11_11/a_1.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000005/s01_2009_02_13/a_.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000005/s02_2010_10_02/a_.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000005/s03_2010_10_02/a_.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000005/s04_2010_10_03/a_.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000005/s04_2010_10_03/a_1.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000005/s04_2010_10_03/a_2.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000005/s04_2010_10_03/a_3.edf
/data/isip/data/tuh_eeg/v0.6.0/edf/001/00000005/s04_2010_10_03/a_4.edf
I need a shell command or an Emacs macro to go through this list and copy each file into a unique directory within the current working directory. The unique directory depends on the file; for example, for the first two files, the directory would be
/001/00000003/s01_2011_11_01/
I have tried doing this with an Emacs macro, but I was not able to get it to work. Either a shell command or an Emacs macro would be fine.
Something as simple as:
cat list | sed "s/^.*edf\/\(.*\)\/\(.*\)$/mkdir -p root_dir\/\1 \&\& cp \0 root_dir\/\1\/\2/" | sh
If you are on OS X, install gnu-sed and use gsed instead of sed. Run the command without | sh first to see what it will do, and adjust root_dir to your actual destination, of course.
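If you prefer something easier to read and dry-run, the same logic can be written as a plain loop; a sketch assuming the paths contain no spaces and that root_dir is the destination directory:

# For each listed path, take everything after ".../edf/" as the relative
# location, recreate that directory tree under root_dir and copy the file
while IFS= read -r src; do
    rel=${src#*edf/}                      # e.g. 001/00000003/s01_2011_11_01/a_.edf
    dest_dir="root_dir/$(dirname "$rel")"
    mkdir -p "$dest_dir" && cp "$src" "$dest_dir/"
done < list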
I would like to attach all .csv files from /home/john recursively. Something like the following would be fine, but it only attaches the .csv files directly under /home/john/; it doesn't include the sub-folders of /home/john/:
mutt -s "all csv files" me@mail.com -a /home/john/*.csv < /home/john/message.txt
Also, if I can do this and there happens to be a .csv file with the same name in one of the sub-folders, e.g. /home/john/1.csv and /home/john/tom/1.csv, what will happen? Will it still be attached?
Thanks.
No, that command won't find *.csv in subdirectories. You may want to use find and xargs, something like this:
find /home/john -name \*.csv | xargs mutt -s "all csv files" me@mail.com -i /home/john/message.txt -a
I think -i should have the same effect as reading the body from the file with <.
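Newer mutt releases expect -- between the attachment list and the recipient, so a variant along these lines may be safer (assuming the file names contain no spaces and the body text lives in /home/john/message.txt):

# Hand the whole list of found files to one mutt invocation;
# xargs appends them after the "_" placeholder, and "$@" expands to that list
find /home/john -name '*.csv' -print \
  | xargs sh -c 'mutt -s "all csv files" -a "$@" -- me@mail.com < /home/john/message.txt' _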
I'm (still) not a shell-wizard, but I'm trying to find a way to create directories and files from a list of file names.
Let's take this source file (source.txt) as an example:
README.md
foo/index.html
foo/bar/README.md
foo/bar/index.html
foo/baz/README.md
I'll use this command to remove empty lines and trim leading and trailing spaces:
$ more source.txt | sed '/^$/d;s/^ *//;s/ *$//'
It will give me this list:
README.md
foo/index.html
foo/bar/README.md
foo/bar/index.html
foo/baz/README.md
Now I'm trying to loop over every line and create the related file (if it doesn't already exist), together with its parent directories.
How could I do this?
Ideally, I would put this script in an alias to quickly use it.
As always, posting a question brings me to the end of the problem...
I came to a satisfying solution, using dirname and basename in a for .. in loop:
for i in `cat source.txt | sed '/^$/d;s/^ *//;s/ *$//'`; do
    mkdir -p "$(dirname "$i")"
    touch "$(dirname "$i")/$(basename "$i")"
done
This one-line command will:
read the file names list
create directories
create empty files in their own directory
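If some of the paths may contain spaces, reading the list line by line is a bit more robust than word-splitting a command substitution; a minimal sketch of the same idea:

# Clean the list as above, then create each parent directory and file
sed '/^$/d;s/^ *//;s/ *$//' source.txt | while IFS= read -r path; do
    mkdir -p "$(dirname "$path")"
    touch "$path"
done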