Using xargs, eval, and mv ensemble - regex

I've been using the command line more frequently lately to increase my proficiency. I've created a .txt file containing URLs for libraries that I'd like to download. I batch-downloaded these files using
$ cat downloads.txt | xargs wget
When using the wget command I didn't specify a destination directory. I'd like to move each of the files that I've just downloaded into a directory called "vendor".
For the record, it has occurred to me that if I ran...
$ open .
...I could drag-and-drop these files into the desired directory. But in my opinion that would defeat the purpose of this exercise.
Now that I have the files in my cwd, I'd like to be able to target them and move them into the "vendor" directory.
As a side-question: Is there a useful way to print the most recently created files to STDOUT? Currently, I can grab the filenames from the URLs within downloads.txt pretty simply using the following pipeline and Perl script...
$ cat downloads.txt | perl -n -e 'if (/(?<=\/)([-.a-z]+)$/) { print $1 . "\n" }'
This will produce...
react.js
redux.js
react-dom.js
expect.js
...which is great, as these are the files I intended to target. I'd like to transform each of these lines into a command within a pipeline that resembles this...
$ mv {./,./vendor/}<filename>
... where <filename> is "react.js" then "redux.js", and so forth.
I figure that I may be able to accomplish this using some combination of xargs, eval, and mv. This is where my bash skills drop off.
Just to reiterate, I'm aware that the method in which I am approaching this problem is neither simple nor ideal. This is intentionally a convoluted exercise intended to stretch my bash knowledge.
Is there anyone who knows how I can use xargs, eval, and mv to accomplish this goal?
Thank you!

xargs -l -a downloads.txt basename | xargs -i mv {} ./vendor
How this works: The first instance of xargs reads the file names from downloads.txt and calls basename for each of these file names individually (alternatively, you could use basename -a). These basenames are then piped to another instance of xargs, which uses the arguments to call mv, replacing the string {} with the current argument.
mv $(basename -a $(<downloads.txt)) ./vendor
How this works: Since you want to move all the files into the same directory, you can use a single call to mv. The inner substitution $(<downloads.txt) expands to the contents of the file, which become the arguments to basename -a; the outer command substitution then inserts the resulting basenames as arguments to mv.
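To tie this back to the requested xargs/eval/mv combination: brace expansion such as {./,./vendor/} happens in the shell before xargs ever runs, so the source and destination paths have to be spelled out for xargs instead. eval turns out to be unnecessary, since xargs -I substitutes each incoming line straight into the command. A sketch reusing the question's own Perl pipeline:
$ cat downloads.txt | perl -n -e 'if (/(?<=\/)([-.a-z]+)$/) { print $1 . "\n" }' | xargs -I % mv ./% ./vendor/%
As for the side question: ls -t | head -n 4 prints the four most recently modified entries in the current directory, which is usually a good stand-in for the most recently created files.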


Use sed/regex to rename a file - bash with macOS

I have a list of files with a date appended to the end of the name,
ex: Chorus Left Octave (consolidated) (2020_10_14 20_27_18 UTC). The files end with .wav or .mp3.
I want to keep the (consolidated) but take out the date. I have come up with a regex and tested it with regexr.com, where it matches the text correctly.
The regex is: /(\([0-9]+(.*)(?=.wav|.mp3))+/g
Now, I am trying to actually rename the files. In my terminal I have cd'ed into the folder with the files. Based on other answers here I have tried:
rename -n '/(\([0-9]+(.*)(?=.wav|.mp3))+/g' *.wav|*.mp3 - using rename installed with homebrew
sed '/(\([0-9]+(.*))+/g' *.wav|*.mp3
for f in *.wav|*.mp3; do mv "$f" "${f/(\([0-9]+(.*)(?=.wav|.mp3))+/g}” done
The first two do not throw any errors, but they do not rename anything. (I know that the -n after rename just prints out the files that would be changed; it doesn't actually change them.)
The last one starts a bash session.
I'd rather use rename or sed, as they seem simpler to me. But what am I doing wrong?
In plain bash:
#!/bin/bash
pat='([0-9][0-9][0-9][0-9]_[0-9][0-9]_[0-9][0-9] [0-9][0-9]_[0-9][0-9]_[0-9][0-9] UTC)'
for f in *.mp3 *.wav; do echo mv "$f" "${f/$pat}"; done
Remove the echo preceding the mv after making sure it will work as intended. You may also consider adding the -i option to the mv in order to avoid clobbering an existing file unintentionally.
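If you would rather use the Homebrew rename the question mentions (assuming it is the Perl-based rename), a sketch with a corrected expression; note that the original attempt's *.wav|*.mp3 is parsed by the shell as a pipe, so the two globs must be passed separately:
rename -n 's/ \(\d{4}_\d{2}_\d{2} \d{2}_\d{2}_\d{2} UTC\)//' *.wav *.mp3
As above, -n only previews the renames; drop it to apply them.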

use regex to specify output filename

I have a folder with many files where I only need some columns so I tried this to extract what I need:
mkdir ./raw_data/selection
doit() {
csvfix read_dsv -f 1,3,7 -s \; $1 > $1 | sed 's/raw_data/raw_data\/selection/'
}
export -f doit
Files_To_Parse=`ls ./raw_data/*csv`
parallel doit ::: $Files_To_Parse
This doesn't work.
But if I do this:
cd ./raw_data
doit() {
csvfix read_dsv -f 1,3,7 -s \; $1 > selection/$1
}
export -f doit
Files_To_Parse=`ls -1 *csv`
parallel doit ::: $Files_To_Parse
it works, but I'd like to be able to run this from the top folder in this project (i.e. to put this in a file named brief_csv.sh and call it from IDEs).
If you used Bash, you could:
for f in raw_data/*.csv
do
csvfix ... "$f" > raw_data/selection/"${f##*/}"
done
Also, instead of csvfix for extracting columns you could use cut:
$ cut -d \; -f 1,3,7 $f ...
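Putting the two ideas together, a sketch of a doit() that can run from the project root (paths taken from the question; this assumes raw_data/selection already exists, as in the OP's mkdir):
doit() {
  # write the selected columns to raw_data/selection/, keeping only the base file name
  csvfix read_dsv -f 1,3,7 -s \; "$1" > "raw_data/selection/$(basename "$1")"
}
export -f doit
parallel doit ::: raw_data/*.csv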
I don't know the commands you are using, but this line:
csvfix read_dsv -f 1,3,7 -s \; $1 > $1 | sed ...
redirects the output to the same file you are reading; this cannot work. In fact, you say that your modified code works instead. You could use temporary files to store intermediate results; don't be afraid to use many of them: debugging becomes easier (you can inspect the intermediate steps) and the system doesn't suffer. /tmp is a good place to put those intermediate files.
Use csvfix for the first step, redirecting to /tmp/my-csvfix-intermediate; then use sed to read /tmp/my-csvfix-intermediate and write to /tmp/my-sed-intermediate. After the last step, take the last intermediate result and overwrite the original file, perhaps after backing it up. You can move files anywhere you need; I don't see any problem in running your script from an IDE - just use as many passes as you need.
Avoid parallelizing while debugging; once the script works, you can add the parallelization back.
When two or more parallel processes try to write to the same file (/tmp/my-...-intermediate), you will have another problem. To overcome this, use different file names for every process. The bash variable $$ comes to the rescue: use file names like /tmp/my-$$-blablabla; $$ is substituted with the PID of the process, and concurrently running processes cannot share a PID.
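A minimal sketch of that pipeline with per-process temporary files (the sed expression is a placeholder for whatever transformation you need; the /tmp/my-$$-... names are just examples):
csvfix read_dsv -f 1,3,7 -s \; "$1" > "/tmp/my-$$-csvfix"
sed 's/foo/bar/' "/tmp/my-$$-csvfix" > "/tmp/my-$$-sed"
cp "$1" "$1.bak"            # back up the original first
mv "/tmp/my-$$-sed" "$1"    # then overwrite it with the final result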
Hope it helps, regards.

Use [msys] bash to remove all files whose name matches a pattern, regardless of file-name letter-case

I need a way to clean up a directory, which is populated with C/C++ built-files (.o, .a, .EXE, .OBJ, .LIB, etc.) produced by (1) some tools which always create files having UPPER-CASE names, and (2) other tools which always create lower-case file names. (I have no control over the tools.)
I need to do this from a MinGW 'msys' bash.exe shell script (or bash command prompt). I understand piping (|), but haven't come up with the right combination of exec's yet. I have successfully filtered the file names, using commands like this example:
ls | grep '.\.[eE][xX][eE]'
to list all files having any case-combination of letters in the file-extension--this example gets all the executable (e.g. ".EXE") files.
(I'll be doing similar for .o, .a, .OBJ, .LIB, .lib, .MAP, etc., which all share the same directory as the C/C++ source files. I don't want to delete the source files, only the built-files. And yes, I probably should rework the directory structure, to use a separate directory for the built-files [only], but that will take time, and I need a quick solution now.)
How can I merge the above command with "something" else (e.g., like the 'rm -f' command???), to carry this the one step further, to actually delete [only] those filtered-out files from the current directory? (I'm hopeful for a solution which does not require a temporary file to hold the filtered file names.)
Adding this answer because the accepted answer suggests practices which are not recommended in actual scripts. (Please don't feel bad; I was also on that track once.)
Parsing ls output is a NO-NO! See http://mywiki.wooledge.org/ParsingLs for more detailed explanation on why.
In short, ls separates filenames with newlines, but a newline can itself be present in a filename. (Plus, ls does not handle other special characters properly; it prints its output in human-readable form.) In unix/linux, it's perfectly valid to have a newline in a filename.
A unix filename cannot contain a NUL character, though. Hence the command below should work.
find /path/to/some/directory -iname '*.exe' -print0 | xargs -0 rm -f
find: is a tool used to, well, find files matching the required pattern/criterion.
-iname: search using particular names, case-insensitively. Note that the argument to -iname is a wildcard (glob), not a regex.
-print0: print the file names separated by NUL characters.
xargs: takes the input from stdin and runs the supplied command (rm -f in this case) on it. The input is separated by whitespace by default.
-0: specifies that the input is separated by NUL characters instead.
Or, an even better approach:
find /path/to/some/directory -iname '*.exe' -delete
-delete is a built-in feature of find, which deletes the files found with the pattern.
Note that if you want to do some other operation, like moving the files to a particular directory, you'd need to use the first option with xargs.
Finally, the command find /path/to/some/directory -iname '*.exe' -delete finds the matching *.exe files/directories recursively. You can restrict the search to the current directory with -maxdepth 1, and the file type to regular files (not directories, pipes, etc.) with -type f. Check the find manual for more details.
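Combining those restrictions, a sketch that deletes only regular files located directly in the current directory:
find . -maxdepth 1 -type f -iname '*.exe' -delete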
Is this what you mean?
rm -f `ls | grep '.\.[eE][xX][eE]'`
but if you use a long listing (ls -l), the output will have other fields that you have to strip out, such as the date, so you might want to output just the file name itself.
Try something like:
rm -f `ls -l | grep '.\.[eE][xX][eE]' | awk '{print $9}'`
where your file name is in the 9th field, like:
-rwxr-xr-x 1 Administrators None 283 Jul 2 2014 search.exe
You can use the following command:
ls | grep '.\.[eE][xX][eE]' | xargs rm -f
Use of "xargs" would turn standard input ( in this case output of the previous command) as arguments for "rm -f" command.

Remove duplicate filename extensions

I have thousands of files named something like filename.gz.gz.gz.gz.gz.gz.gz.gz.gz.gz.gz
I am using the find command like this find . -name "*.gz*" to locate these files and either use -exec or pipe to xargs and have some magic command to clean this mess, so that I end up with filename.gz
Someone please help me come up with this magic command that would remove the unneeded instances of .gz. I had tried experimenting with sed 's/\.gz//' and sed 's/(\.gz)//' but they do not seem to work (or to be more honest, I am not very familiar with sed). I do not have to use sed by the way, any solution that would help solve this problem would be welcome :-)
One way with find and awk:
find $(pwd) -name '*.gz'|awk '{n=$0;sub(/(\.gz)+$/,".gz",n);print "mv",$0,n}'|sh
Note:
I assume there are no special chars (like spaces) in your filenames. If there were, you would need to quote the filename in the mv command.
I added $(pwd) to get the absolute path of each found name.
You can remove the trailing |sh to check the generated mv commands first.
If everything looks good, add the |sh back to execute them.
You may use
ls a.gz.gz.gz |sed -r 's/(\.gz)+/.gz/'
or, without the extended-regex flag:
ls a.gz.gz.gz |sed 's/\(\.gz\)\+/.gz/'
ls *.gz | perl -ne '/((.*?\.gz).*)/; print "mv $1 $2\n"'
It prints the shell commands to rename your files; it won't execute them, so it is safe to review first. To execute, save the output to a file and run it, or simply pipe it to the shell:
ls *.gz | ... | sh
sed is great for replacing text inside files, but for renaming you can use bash string substitution instead:
for file in *.gz.gz; do
mv "${file}" "${file%%.*}.gz"
done
This might work for you (GNU sed):
echo *.gz | sed -r 's/^([^.]*)(\.gz){2,}$/mv -v & \1\2/e'
find . -name "*.gz.gz" |
while read f; do echo mv "$f" "$(sed -r 's/(\.gz)+$/.gz/' <<<"$f")"; done
This only previews the renaming (mv) command; remove the echo to perform actual renaming.
Processes matching files in the current directory tree, as in the OP (and not just files located directly in the current directory).
Limits matching to files that end in at least 2 .gz extensions (so as not to needlessly process files that end in just one).
When determining the new name with sed, makes sure that substring .gz doesn't just match anywhere in the filename, but only as part of a contiguous sequence of .gz extensions at the end of the filename.
Handles filenames with special characters such as embedded spaces correctly (with the exception of filenames with embedded newlines).
Using bash string substitution:
for f in *.gz.gz; do
mv "$f" "${f%%.gz.gz*}.gz"
done
This is a slight modification of jaypal's nice answer (which would fail if any of your files had a period as part of its name, such as foo.c.gz.gz); mine is not perfect either. Note the use of double quotes, which protect against filenames with "bad" characters, such as spaces or stars.
If you wish to use find to process an entire directory tree, the variant is:
find . -name \*.gz.gz | \
while read f; do
mv "$f" "${f%%.gz.gz*}.gz"
done
And if you are fussy and need to handle filenames with embedded newlines, change the while read to while IFS= read -r -d $'\0', and add a -print0 to find; see How do I use a for-each loop to iterate over file paths output by the find utility in the shell / Bash?.
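Concretely, the newline-safe variant would look like this (a sketch; -print0 is available in both GNU and BSD find):
find . -name '*.gz.gz' -print0 |
while IFS= read -r -d $'\0' f; do
mv "$f" "${f%%.gz.gz*}.gz"
done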
But is this renaming a good idea? How was your filename.gz.gz created in the first place? gzip has guards against accidentally compressing an already-compressed file. If you circumvent these via something like gzip -c $1 > $1.gz buried in some script, then renaming these files will give you grief.
Another way with rename:
find . -iname '*.gz.gz' -exec rename -n 's/(\.\w+)\1+$/$1/' {} +
When happy with the results, remove the -n (dry-run) option.

Copying html files to create erb versions with bash script

I'm trying to write a bash (OSX) script that finds all html files in a directory and copies them to create erb files with underscores at the beginning of the file name. So test1.html would become _test1.html.erb for instance.
I was trying to do it a bit like this, but there's probably a better way (and this way isn't finished):
find . -regex '.*/[^_].*\.html$' | while read file; do [need to do the copy X.html file to create new _X.html.erb file in here]; done
Any ideas?
Thanks!
Here is a for loop version:
for file in *.html ; do
cp "${file}" "_${file}.erb"
done
and here is a find version:
find . -name "*.html" -exec sh -c 'cp "$1" "_$(basename "$1").erb"' _ {} \;
find *.html | while read -r files
do
newname="_${files}.erb"
cp -v "${files}" "${newname}"
done
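If you want to keep the question's original -regex filter (which also skips files already prefixed with an underscore), a sketch that completes the loop and writes each copy next to its source file:
find . -regex '.*/[^_].*\.html$' | while read -r file; do
cp "$file" "$(dirname "$file")/_$(basename "$file").erb"
done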