Regex to add file extension

I'm looking for a regular expression to find all the files in a folder (and its subfolders) that do not have an extension, and to add the extension .mp3 to those files only (i.e. files which already have an extension should not get an additional one).
For example:
test is made into test.mp3
test1.mp3 remains as it is
An additional problem I have is that some of my file names have spaces.
So far I use the following expression for the first part (with -maxdepth specifying how deep into the folder structure I want to go):
find . -maxdepth 1 ! -name "*.*" -o -name ".*[^.]*"
I cannot work out how to do the adding of the .mp3 extension.

The find command below recursively renames the files which don't have any extension, adding .mp3 to them:
find . ! -name '*.*' -type f -exec mv {} {}.mp3 \;
Because -exec passes each path to mv as a single argument, file names with spaces are handled correctly.
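To see what would happen before touching anything, a dry run can help. This is a sketch assuming GNU find (which also expands {} when it is embedded in a larger argument such as {}.mp3), with -maxdepth 1 added as in your original command and echo printing the mv commands instead of running them:
find . -maxdepth 1 -type f ! -name '*.*' -exec echo mv {} {}.mp3 \;
Drop the echo once the output looks right.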

The following command won't modify any files as long as the -n (--nono, dry-run) argument is present; remove it to actually rename. The substitution only touches names that contain no dot at all, so files that already have an extension are left alone:
rename -v -n 's{^([^.]+)$}{$1.mp3}' *
The rename(1) utility is provided by the "rename" package on Debian.


Can git rm take a regex or can I pipe the contents of a file to git rm?

I'm trying to remove all of the folder meta files from a Unity project in the git repo my team is using. Other members don't delete the meta file associated with a folder they deleted/emptied, and it's propagating to everyone else. It's a minor annoyance that shouldn't need to be seen, so I've added this to the .gitignore:
*.meta
!*.*.meta
and now I need to remove only the folder metas. I'd rather remove them now than wait for them to appear and have git remove them later. I'm using Git Bash on Windows and have tried the following commands to find just the folder metas:
find . -name '*.meta' > test.txt #returns folders and files
find . -regex '.*\.meta' > test.txt #again folders and files
find . -regex '\.[^\.]{0,}\.meta' > test.txt #nothing
find . -regex '\.[^.]{0,}\.meta' > test.txt #nothing
find . -regex '\.{2}' > test.txt #nothing
find . -regex '(\..*){2}' > test.txt #nothing
I know regex is interpreted differently per program/language, but the following produces the results I want in Notepad++ and I'm not sure how to translate it for git or Git Bash:
^.*/[^.]{0,}\.meta$
It captures the lines (file paths from the root of the repo) that end with /<foldername>.meta, since I realized some folders contain a '.' in their name.
Once this is figured out I need to go line by line and git rm the files.
NOTE
I can also run:
^.*/.*?\..*?\.meta$\n
and replace with nothing to delete all of the file metas from the folders and files result, and use that result to get all of the folder metas, but I'd also like to know how to avoid needing Notepad++ as an extra step.
To confine the results to indexed files, use git ls-files, the Swiss-army knife of index-aware file listing; git update-index is the core-command index munger:
git ls-files -c -i -x '*.meta' -x '!*.*.meta' | git update-index --force-remove --stdin
which will remove the files from your index but leave them in the work tree.
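To preview what would be removed before piping anything into git update-index, the ls-files half can be run on its own first:
git ls-files -c -i -x '*.meta' -x '!*.*.meta'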
It's easier to express this with two conditions, just like in .gitignore: match *.meta but exclude *.*.meta:
find . -name '*.meta' ! -name '*.*.meta'
Use -exec to run the command of your choice on the matched files. {} is a placeholder for the matched file name and ';' marks the end of the -exec command (the quotes just stop the shell from interpreting the semicolon; the syntax looks odd, but it lets you append further expressions after the -exec ... ';').
find . -name '*.meta' ! -name '*.*.meta' -exec git rm {} ';'
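If there are many matches, the same find can batch the file names into fewer git rm invocations by terminating -exec with + instead of ';' (the -- guards against names that look like options):
find . -name '*.meta' ! -name '*.*.meta' -exec git rm -- {} +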

How to ignore files with .<numeric>.ext in git?

I have a list of files in my project:
For example:
1. src/index.1.js
2. src/screens/index.1.js
3. src/screens/index.2.js
I want to ignore all the files that have a numeric segment in their name.
I have tried using **/*.1.* and **/*.2.*. Is there a way to ignore all such files with a numeric value?
You can use a range. For your example:
**/*.[0-9].js
This would match a .js file in any directory whose name ends with .<number>.js.
Git uses glob patterns to match ignored files. Use the following to ignore all the above-mentioned files (including multi-digit numbers):
**/*.[0-9]*.js
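To verify that a pattern actually catches the intended paths once it is in .gitignore, git check-ignore is a quick sanity test; with -v it also reports which pattern matched (shown here against one of the example paths):
git check-ignore -v src/screens/index.2.js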
You could also run the following find command, adapting the \.js part if you do not want to restrict it to .js files only:
find . -type f -regextype sed -regex '.*\/.*\.[0-9]\+\.js'
./src/screens/index.2.js
./src/screens/index.123.js
./src/index.1.js
Once you have found all the files you are interested in, change your find command into:
find . -type f -regextype sed -regex '.*\/.*\.[0-9]\+\.js' -exec git checkout {} \;
to check out those files.
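Note that .gitignore only affects untracked files; if some of these files are already tracked, you would also need to untrack them. A possible sketch, reusing the same find expression, is:
find . -type f -regextype sed -regex '.*\/.*\.[0-9]\+\.js' -exec git rm --cached {} \;
git rm --cached removes them from the index but leaves them on disk.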

Script for deleting files whose names do not contain certain phrases?

If I had a folder of files, what script could I write to remove the files whose names don't have certain phrases?
My folder contains
oneApple.zip
twoApples.zip
threeApples.zip
fourApples.zip
I would want to remove the files whose names don't contain "one" or "three" anywhere within their filename.
After executing the script, the folder would only contain:
oneApple.zip
threeApples.zip
Using bash
With a modern bash with extglob enabled, we can delete files whose names do not contain one or three with:
rm !(*one*|*three*)
To experiment with how extglobs work, just use echo:
$ echo !(*one*|*three*)
fourApples.zip twoApples.zip
If the above doesn't work properly, then either your bash is out of date or extglob is turned off. To turn it on:
shopt -s extglob
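If you want to limit the deletion to .zip files, as in the example listing, the negated group can be combined with a fixed suffix (the same extglob idea, just narrower):
rm !(*one*|*three*).zip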
Using find
find . -maxdepth 1 -type f ! -name '*one*' ! -name '*three*' -delete
Before running that command, you probably want to test it. Just remove the -delete and it will show you the files that it found:
$ find . -maxdepth 1 -type f ! -name '*one*' ! -name '*three*'
./twoApples.zip
./fourApples.zip
How it works:
.
This tells find to look in the current directory.
-maxdepth 1
This tells find not to recurse into subdirectories
-type f
This tells find that we only want regular files.
! -name '*one*'
This tells find to exclude files with one in their name.
! -name '*three*'
This tells find to exclude files with three in their name.
-delete
This tells find to delete the files that it found.
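If your find lacks -delete (it is a GNU extension), the same selection can be handed to rm via -exec instead; the -- guards against file names that start with a dash:
find . -maxdepth 1 -type f ! -name '*one*' ! -name '*three*' -exec rm -- {} +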

How to use shell magic to create a recursive etags using GNU etags?

The standard GNU etags does not support a recursive walk of directories the way exuberant ctags -R does. If I only have access to GNU etags, how can I use bash shell magic to get etags to produce a TAGS table for all the C++ *.cpp and *.h files in the current directory and all directories below it, recursively, creating the TAGS table in the current directory with path names that Emacs can resolve when looking up its entries?
The Emacs Wiki is often a good source for answers to common problems or best practices. For your specific problem there is a solution for both Windows and Unixen:
http://www.emacswiki.org/emacs/RecursiveTags#toc2
Basically you run a command to find all .cpp and .h files (change the file selectors if you use different file endings, e.g. .C) and pipe the result into etags. Since Windows does not seem to have xargs, you need a more recent version of etags that can read from stdin (note the dash at the end of the line, which stands for stdin). Of course, if you use a recent version of etags, you can use the dash instead of xargs on Unix, too.
Windows:
cd c:\source-root
dir /b /s *.cpp *.h *.hpp | etags --your_options -
Unix:
cd /path/to/source-root
find . -name "*.cpp" -print -or -name "*.h" -print | xargs etags --append
This command creates an etags file with the default name "TAGS" for .c, .C, .cpp, .Cpp, .h, .H, .hpp and .Hpp files, recursively:
find . -regex ".*\.[cChH]\(pp\)?" -print | etags -
Most of the answers posted here pipe the find output to xargs. This breaks if there are spaces in filenames inside the directory tree.
A more general solution that works if there are spaces in filenames (for .c and .h files) could be:
find . -name "*.[cChH]" -exec etags --append {} \;
Use find. man find if you need to.

Move all images in folder to subfolder, and update all references in text files to those images to their new location?

I have a folder which contains ~50 text files (PHP) and hundreds of images. I would like to move all the images to a subfolder, and update the PHP files so any reference to those images points to the new subfolder.
I know I can move all the images quite easily (mv *.jpg /image, mv *.gif /image, etc...), but I don't know how to go about updating all the text files. I assume a regex has to be created to match all the images in a file, and then somehow the new directory has to be added to the image file name? Is this best done with a shell script? Any help is appreciated (the server is Linux/CentOS 5).
Thanks!
sed with the -i switch is probably what you're looking for. -i tells sed to edit the file in-place.
Something like this should work:
find /my/php/location -name '*.php' | xargs sed -i -e 's,/old/location/,/new/location/,g'
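File names or paths with spaces would break that plain pipe into xargs; with GNU find and xargs, the null-terminated variant is safer (same substitution, just a safer hand-off of the file list):
find /my/php/location -name '*.php' -print0 | xargs -0 sed -i -e 's,/old/location/,/new/location/,g'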
You could do it like this:
#!/bin/sh
# Move each image into the gfx/ subfolder and rewrite references to it.
for f in *.jpg *.png *.gif; do
    mv "$f" gfx/
    for p in *.txt; do
        # GNU sed: -i.bak edits in place, keeping a backup with a .bak suffix.
        sed -i.bak "s,$f,gfx/$f,g" "$p"
    done
done
It moves all jpg/png/gif files to the "gfx" subfolder, then for each txt file (or whatever kind of file you want edited) it uses sed in-place to alter the path.
Btw. it will create backup copies of the edited files with the extra extension ".bak". This can be avoided by using plain -i instead of -i.bak.
This will move all images to a subdir called 'images' and then change only links to image files by adding 'images/' just before the basename.
mkdir images
mv -f *.{jpg,gif,png,jpeg} images/
sed -i 's%[^/"'\'']\+\.\(gif\|jpg\|jpeg\|png\)%images/\0%g' *.php
If you have thousands of files, you may need to use find and xargs instead. It is a bit slower:
find ./ ! -path './images/*' -regex '.*\.\(gif\|jpg\|png\|jpeg\)' -exec mv {} images/ \;
find ./ -name \*.php -print0 | \
xargs -0 sed -i 's%[^/"'\'']\+\.\(gif\|jpg\|jpeg\|png\)%images/\0%g'
Caution: it will also rewrite paths to images referenced by remote URLs. Also, make sure you have a full backup of your directory; PHP syntax and variable names might cause problems.
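After the substitution, a rough sanity check can be to grep for image references that still do not point at the new subfolder (a sketch using GNU grep; adjust the extensions to whatever your files actually use):
grep -rEn '\.(gif|jpe?g|png)' --include='*.php' . | grep -v 'images/'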