Finding soft links with grep - regex

I'm writing a quick program that lists all of the soft/symbolic links in the working directory to a file which is given in argument 1. I'm aware that I need to use grep in order to do so, but in general I have difficulty figuring out how to write the regular expression. In this case, it is especially difficult due to the fact that a variable ($argv[1]) is involved.
The (poorly-written) line of code in question is as follows:
ls -l | xargs grep '-> $argv[1]'
My intention with this was to catch all of the lines that contained the -> and the specified file, such as
link1 -> file
link2 -> file
and so on. Is there any way that I can use grep to accomplish this?

What kind of language is $argv[1]? The (POSIX) Bourne Shell doesn't support arrays. Arguments to scripts and functions are referenced by $1, $2 and so on.
In order for grep not to treat the first hyphen in the pattern as an option, use -- to signal the end of options. Next, there is no parameter substitution in single quotes, only in double quotes. Putting it all together, this might work:
set somename # Sets $1 to somename
ls -l | xargs grep -- "-> $1"
If your grep doesn't understand --, try
ls -l | xargs grep ".*-> $1"
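As an aside, if your find supports the -lname test (GNU find does, and so do recent BSD finds), you can match the link target without grep at all. A minimal sketch, assuming $1 holds the exact target name:
# List symbolic links in the current directory whose target is exactly "$1".
# -maxdepth 1 keeps find from descending into subdirectories.
find . -maxdepth 1 -type l -lname "$1"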

The script below finds only the soft/symbolic link files and lists each one only if the argument is found in its contents:
# cat sygrep.sh
#!/bin/bash
if [ $# -eq 0 ]
then
echo "No arguments supplied"
else
for a in $(find . -type l) ; do grep -irl "$1" "$a" ; done
fi
Output:
# ./sygrep.sh
No arguments supplied
# ./sygrep.sh root
./mytest.sh
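If the link names may contain spaces, a sketch that avoids the word-splitting in the for loop is to let find run grep directly:
# Search for the first argument only inside symbolic links, without a shell loop.
find . -type l -exec grep -irl "$1" {} +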

Hiding all directories that begin with a capitalized letter using `ls` in zsh

I'm trying to use ls in zsh (running on macOS) to show all files and directories except directories that begin with a capital letter.
For example, my directory contains
Archive/ data/ README.md test.txt
and I would like to run an ls command that returns only
data/ README.md test.txt
I can use ls -d [A-Z]*/ (note the trailing slash to indicate directories) to show the directories I want to hide (e.g. it only returns Archive/),
and referencing this helpful answer on using the inverted expansion in zsh with the ls *~ syntax,
I tried (what I think is) the negation of the above using ls -d *~[A-Z]*/ but this doesn't work (it hides nothing).
Moreover, using ls -d *~[A-Z]* (without the terminating backslash) returns data/ test.txt but this is not my desired result since I also want to show the file README.md which begins with a capital letter.
Note that I have enabled the extended glob option in zsh, using setopt extendedglob.
Any help on the correct regex/glob syntax for ls in zsh to obtain my desired output would be very much appreciated. Thank you! :)
Edit: There are two very useful answers that work, but any concise answers using ls in zsh (using the extended glob option) would still be awesome!
Maybe a bit verbose, but you could use ls -l, filter out the total line with grep -v "^total", and then pipe the output to awk.
In awk, print the last field followed by / if the first field starts with d and the last field does not start with an uppercase character A-Z.
Otherwise, print the last field if the first field does not start with d:
ls -l | grep -v "^total" | awk '{
if ($1 ~ /^d/ && $NF ~ /^[^A-Z]/){
print $NF"/"
} else if ($1 ~ /^[^d]/){
print $NF
}
}'
In a single line:
ls -l | grep -v "^total" | awk '{if($1~/^d/ && $NF~/^[^A-Z]/){print $NF"/"} else if($1~/^[^d]/){print $NF}}'
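For comparison, a pure-glob sketch in zsh itself (assuming setopt extendedglob as in the question; treat it as a starting point rather than a tested answer):
# ^[A-Z]*     every name that does not start with a capital letter
# [A-Z]*(^/)  names starting with a capital letter that are not directories
ls -d ^[A-Z]* [A-Z]*(^/)
Add -F if you want directories marked with a trailing slash, and an (N) qualifier on each pattern if either might match nothing.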
My version of ls does not directly support regular expressions; I think you are depending upon shell globbing. Perhaps try piping to grep. Something like:
ls | grep -v '^[A-Z].*'
Notice that the regular expression is in quotes. The ^ specifies the beginning of the string. The switch -v is the negation of the match.
Sorry, now I understand your requirements better. You could string the find command to do this. Try the following:
find -E . -regex "\./[^A-Z].*" -type d -exec echo Directory: {} ';' -exec find {} -type f -maxdepth 1 \;
I tested this on a Mac with the following hierarchy:
find .
.
./Test
./Test/Plain File
./aatest
./aatest/BigTest
./aatest/BigTest/Big File
./aatest/BigTest/small File
./aatest/bb File
./aatest/aaFile
And got the following output:
Directory: ./aatest
./aatest/bb File
./aatest/aaFile
Directory: ./aatest/BigTest
./aatest/BigTest/Big File
./aatest/BigTest/small File
Please note that I added the text "Directory" to the echo command to differentiate the two types of output.

How to grep only the matching regex?

I write in a file NotEmpty.txt all my non-empty text files in a directory dir with the following command:
find dir/ -not -empty -ls | grep -E "*.txt" > NotEmpty.txt
I'd like to print only the matching regex and not all the information on the line. How is it possible?
The problem is that you are executing ls against every match, so the output contains a lot of stuff. Instead, use this find command to print the name.
Note, in fact, that you can do everything in one shot, including selecting just .txt files:
find your_dir/ -not -empty -name "*.txt" -print > NotEmpty.txt
#                          ^^^^^^^^^^^^^ ^^^^^^
#                                |          |
#                  just .txt files          |
#                                           |
#                                           print its name instead of `ls`ing it
You can also add -type f to check only regular files, although I guess that is in fact implied by the -not -empty test.
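For example, a sketch of the same command with that test added:
find your_dir/ -type f -not -empty -name "*.txt" -print > NotEmpty.txt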
Use the -o parameter of grep to specify that you only want the matching portion.
Example:
$ echo foo bar baz | grep -o "foo"
foo
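Applied to the command from the question, a sketch (assuming the paths contain no spaces, since the -ls output is space-separated) could be:
# Keep only the trailing path ending in .txt from each "find ... -ls" line.
find dir/ -not -empty -ls | grep -oE '[^ ]*\.txt$' > NotEmpty.txt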

UNIX: Finding lines

I need to write a small script which finds lines matching a regular expression (for example "^folder$") and writes the numbers of the lines where it matches.
My idea is that I will use "find", then delete all the slashes, and then use grep with a regular expression.
I don't know why it doesn't work. Could you give some advice on how to improve it, or how I should find those lines another way?
In
./example
./folder/.DS_Store
./folder/file.png
Out
2: ./folder/.DS_Store
3: ./folder/file.png
IGN="^folder$"
find . -type f | sed -e "s/\// /g" | grep -n "${IGN}"
You say you want to use ^folder$ pattern but you want to get output like:
2: ./folder/.DS_Store
3: ./folder/file.png
These two requests contradict each other. A line like ./folder/.DS_Store cannot match pattern ^folder$ because the line doesn't start with "folder" and doesn't end with "folder".
To get the output you describe you need to change the pattern used with grep to ^\./folder/
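For example, against the pipeline from the question (dropping the sed step so the paths keep their slashes):
IGN="^\./folder/"
find . -type f | grep -n "${IGN}"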
You tried
IGN="^folder$"
find . -type f | sed -e "s/\// /g" | grep -n "${IGN}"
This script isn't working since IGN looks for start-of-line, not start-of-word.
You can make lines from the parts of your paths with
IGN="^folder$"
find . -type f | tr -s "/" "\n" | grep -n "${IGN}"

Find directories with names matching pattern and move them

I have a bunch of directories like 001/ 002/ 003/ mixed in with others that have letters in their names. I just want to grab all the directories with numeric names and move them into another directory.
I try this:
file */ | grep ^[0-9]*/ | xargs -I{} mv {} newdir
The matching part works, but it ends up moving everything to the newdir...
I am not sure I understood correctly, but here is at least something to help.
Use a combination of find and xargs to manipulate lists of files.
find -maxdepth 1 -regex './[0-9]*' -print0 | xargs -0 -I'{}' mv "{}" "newdir/{}"
Using -print0 and -0 and quoting the replacement symbol {} makes your script more robust. It will handle most situations where non-printable chars are present. This basically tells the pipeline to pass the names using a \0 delimiter instead of \n.
mv is not powerful enough by itself. It cannot work on patterns.
Try this approach: Rename multiple files by replacing a particular pattern in the filenames using a shell script
Either use a loop or a rename command.
With a loop and an array, your script would be something like this:
#!/bin/bash
DIR=( $(file */ | grep '^[0-9]*/' | awk -F/ '{print $1}') )
for dir in "${DIR[@]}"; do
mv "$dir" /path/to/DIRECTORY
done
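Alternatively, a glob-based sketch (assuming bash, and that the numeric directories live in the current directory) avoids parsing the output of file entirely:
#!/bin/bash
# Move every directory whose name consists only of digits into newdir/.
for d in [0-9]*/; do
  [[ ${d%/} =~ ^[0-9]+$ ]] && mv -- "$d" newdir/
done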

Using grep to find dynamic text

Need help with a bash script. We are modifying our database structure, the problem is we have many live sites that have pre-written queries referencing the current database structure. I need to find all of our scripts with references to MySQL tables. Here is what I started:
grep -ir 'from' /var/www/sites/inspection.certifymyshop.com/ > resultsList.txt
I am trying to grep through our scripts recursively and export ALL table names found to a text file; we can use the "->from" and "->join" prefixes to help us:
->from('databaseName.table_name dtn') // dtn = table alias
OR
->join('databaseName.table_name dtn') // dtn = table alias
I need to find the database and table name within the single quotes (i.e. databaseName.table_name). I also need to list the filename this was found in underneath or next to the match like so:
someDatabaseName.someTableName | /var/www/sites/blah.com/index.php | line 36
Try doing this:
grep -oPriHn -- "->(?:from|join)\('\K[^']+" . |
awk -F'[ :]' '{print $3, "|", $1, "| line " $2}'
If this fits your needs, I can explain the snippet more as well.
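To illustrate with a hypothetical file (not output from a real run): if ./index.php contained ->from('shop.orders o') on line 36, the grep would print ./index.php:36:shop.orders o, and the awk step would reformat that as:
shop.orders | ./index.php | line 36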
The one problem you have with only using grep is removing the from, join, or whatever identifying prefix from the result. To fix this, we can also use sed:
grep -EHroi -- '->(from|join)\('\''[^'\'' ]*' /path/to/files | sed -re 's/:.*(from|join)\('\''/:/g'
You could also use sed alone in a for loop
for i in `find /path/to/files -type f -print`
do
echo $i
sed -nre 's/^.*->(from|join)\('\''([^'\'' ]*)['\'' ].*$/\2/gp' $i
done
Edit: The above for loop breaks on filenames that contain spaces, so here's the previous sed statement driven by find instead:
find ./ -type f -exec sh -c "echo {} ; sed -nre 's/^.*->(from|join)\('\''([^'\'' ]*)['\'' ].*$/\2/gp' \"{}\" ;" \;