Trying to print all entries that start with /Volumes/, to list the mounted volumes on a Mac. See the updates below.
IFS=$'\n' read -r -d '' -a volumes < <(
df | egrep -o '/Volumes/.*'
)
echo "${volumes}"
Update 1: This worked, but prints a space before each new line.
#!/usr/bin/env bash
IFS=$'\n' read -r -d '' -a volumes < <(
df | egrep -oi '(\s+/Volumes/\S+)'
)
printf "%s\n" "${volumes[@]}"
Update 2: Worked, but doesn't print volume names with spaces in them
IFS=$'\n' read -d '' -ra volumes < <(
df | awk 'index($NF, "/Volumes/")==1 { print $NF }'
)
printf '%s\n' ${volumes[@]}
Update 3: Prints the second part of the volume name with spaces in it on a new line
IFS=$'\n' read -d '' -ra volumes < <(
df | awk -F ' {2,}' 'index($NF, "/Volumes/")==1 { print $NF }'
)
printf '%s\n' ${volumes[@]}
Solution:
Tested Platform: macOS Catalina
IFS=$'\n' read -d '' -ra volumes < <(
df | sed -En 's~.* (/Volumes/.+)$~\1~p'
)
printf '%s\n' "${volumes[@]}"
DF Output
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk1s5 976490576 21517232 529729936 4% 484332 4881968548 0% /
devfs 781 781 0 100% 1352 0 100% /dev
/dev/disk1s1 976490576 413251888 529729936 44% 576448 4881876432 0% /System/Volumes/Data
/dev/disk1s4 976490576 10487872 529729936 2% 6 4882452874 0% /private/var/vm
map auto_home 0 0 0 100% 0 0 100% /System/Volumes/Data/home
/dev/disk7s1 40880 5760 35120 15% 186 4294967093 0% /private/tmp/tnt12079/mount
/dev/disk8s1 21448 1560 19888 8% 7 4294967272 0% /Volumes/usb drive
/dev/disk6s1 9766926680 8646662552 1119135456 89% 18530 48834614870 0% /Volumes/root
/dev/disk2s1 60425344 26823168 33602176 45% 419112 525034 44% /Volumes/KINGS TON
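You can sanity-check the sed form by hand against one of the df lines above, including the one whose volume name contains a space:

```shell
# One line taken from the df output above. The greedy `.* ` consumes
# everything up to the last space that is still followed by /Volumes/,
# so a space embedded in the volume name survives inside the capture group.
echo '/dev/disk2s1  60425344 26823168  33602176    45%  419112  525034   44%   /Volumes/KINGS TON' |
sed -En 's~.* (/Volumes/.+)$~\1~p'
# → /Volumes/KINGS TON
```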
You may use this pipeline in OSX:
IFS=$'\n' read -d '' -ra volumes < <(
df | sed -En 's~.* (/Volumes/.+)$~\1~p'
)
Check array content:
printf '%s\n' "${volumes[@]}"
or
declare -p volumes
declare -a volumes=([0]="/Volumes/Recovery" [1]="/Volumes/Preboot")
You may use
IFS=$'\n' read -r -d '' -a volumes < <(
df -h | awk 'NR>1 && $6 ~ /^\/Volumes\//{print $6}'
)
printf "%s\n" "${volumes[@]}"
The awk command takes every line other than the first one (NR>1) where Field 6 ("Mounted on") starts with /Volumes/ (the $6 ~ /^\/Volumes\// part), and prints the Field 6 value.
The printf "%s\n" "${volumes[@]}" command prints each item of the volumes array on its own line.
If the volume paths happen to contain spaces, you may check for a digit followed by % followed by whitespace and /Volumes/, and then join the fields starting with Field 6 with spaces:
df -h | awk 'NR>1 && $0 ~ /[0-9]%[ \t]+\/Volumes\//{a=$6; for (i=7;i<=NF;i++) a=a" "$i; print a}'
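As a quick sanity check, here is the field-joining idea run against a hardcoded Linux-style df -h line (the sample line is invented for illustration; the accumulator is seeded from $6 on every line so nothing carries over between lines):

```shell
# Simulate two lines of `df -h` output (invented sample data).
printf '%s\n' \
  'Filesystem      Size  Used Avail Use% Mounted on' \
  '/dev/disk8s1     21G  1.5G   19G   8% /Volumes/usb drive' |
awk 'NR>1 && $0 ~ /[0-9]%[ \t]+\/Volumes\// {
    a = $6                        # seed with the first path field
    for (i = 7; i <= NF; i++)     # re-join any fields split by spaces
        a = a " " $i
    print a
}'
# → /Volumes/usb drive
```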
It's a little unclear just what output you want, but you can always use awk to parse the information. For example, if you want the "Filesystem" and "Mounted on" information, you can use the following with df:
df | awk '{
for (i=1; i<=NF; i++)
if ($i ~ /^\/Volumes/) {
print $1, substr($0, match($0,/\/Volumes/))
break
}
}'
Or using the input you provided in the file dfout, you could read the file as:
awk '{
for (i=1; i<=NF; i++)
if ($i ~ /^\/Volumes/) {
print $1, substr($0, match($0,/\/Volumes/))
break
}
}' dfout
Example Output
Using the file dfout with your data you would receive:
/dev/disk8s1 /Volumes/usb drive
/dev/disk6s1 /Volumes/root
/dev/disk2s1 /Volumes/KINGS TON
If you need more or less of each record, you can just output whatever other fields you like in the print statement.
Let me know if you want the format or output different and I'm happy to help further. I don't have a Mac to test on, but the functions used are standard awk and not GNU-awk specific.
Related
I have the following string as output
Config(1) = ( value1:4000 value2:2000 value3:500 value4:1000)
I want to capture all 4 values into 4 different variables in bash and I think the cleanest way to do that is with regex. And I think the best way to use regex for this is with sed.
I have tested the regex and can capture the value1 with
value1:(\d+)
With sed I am trying this based on other answers:
echo "Config(1) = ( value1:4000 value2:2000 value3:500 value4:1000)" | sed -n 's/^\s*value1\:\(\d\+\)\s\?.*/\1/p'
This returns nothing
BASH supports regular expressions natively:
#!/bin/bash
s='Config(1) = ( value1:4000 value2:2000 value3:500 value4:1000)'
pattern='value1:([0-9]+) value2:([0-9]+) value3:([0-9]+) value4:([0-9]+)'
if [[ "$s" =~ $pattern ]]
then
echo "${BASH_REMATCH[1]}"
echo "${BASH_REMATCH[2]}"
echo "${BASH_REMATCH[3]}"
echo "${BASH_REMATCH[4]}"
fi
4000
2000
500
1000
You could grep for the value with the -o flag to only output the match.
This outputs 4000
echo "Config(1) = ( value1:4000 value2:2000 value3:500 value4:1000)" | grep -Po '(?<=value1:)\d+'
Though it's tough to advise on whether this is the cleanest way to achieve your goal without more context, a program (in awk maybe?) that parses that output format might be interesting here.
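For instance, a small awk sketch that strips the parentheses and prints every key:value pair generically, instead of one regex per value (the input line is just the sample from the question):

```shell
# Generic parse: drop the parentheses, then print every field that
# splits into exactly two parts around a colon.
echo 'Config(1) = ( value1:4000 value2:2000 value3:500 value4:1000)' |
awk '{
    gsub(/[()]/, "")                   # remove ( and )
    for (i = 1; i <= NF; i++)
        if (split($i, kv, ":") == 2)   # keep key:value fields only
            print kv[1] "=" kv[2]
}'
# → value1=4000
#   value2=2000
#   value3=500
#   value4=1000
```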
This will work. Create a simple two-statement script:
var=`echo "Config(1) = ( value1:4000 value2:2000 value3:500 value4:1000)" | grep -Eo "\( .*\)"|sed 's/^.\(.*\).$/\1/'`
for v in $var; do
echo $v| awk -F: '{print $2}'
done
Run as
root@114855-T480:/home/yadav22ji# ./tpr
4000
2000
500
1000
You can assign these values to variables as you said.
Parsing and capturing every value into its own variable:
result=`echo "Config(1) = ( value1:4000 value2:2000 value3:500 value4:1000)"`
declare -A variables=( ["variableone"]="1" ["variabletwo"]="2" ["variablethree"]="3" ["variablefour"]="4" )
for index in ${!variables[*]}
do
export $index=$(echo $result | tr ' ' '\n' | sed "s/[()]//g" | grep value | awk -F ":" '{print $2}' | head -"${variables[$index]}" | tail -1)
done
Array key - the name of the variable to create
Array value - the line counter used in the head command
[root@centos ~]# env | grep variable
variablefour=1000
variableone=4000
variabletwo=2000
variablethree=500
I think the following regex could be a shorter form of the one in @hmm's answer:
value[0-9]{1}:([0-9]+)
I have a bash script that SSHes to a list of servers (given a .txt file), runs another script inside each server, and shows the results. But I need to parse the verbose data from the output, and eventually save some meaningful results as a .CSV file.
Here is my main script:
set +e
while read line
do
ssh myUser@"$line" -t 'sudo su /path/to/script.sh' < /dev/null
done < "/home/listOfServers.txt"
where the listOfServers.txt is like
server1
server2
server3
The output of running my script looks like this, showing the results for each server one after another.
SNAME:WORKFLOW_APS_001 |10891 | Alive:2018-06-18:06:54 |TCP
SNAME:WORKFLOW_APSWEB_001 |11343 | Alive:2018-06-18:06:54 |TCP
Processes in Instance: WORKFLOW_OHS_002
WORKFLOW_OHS_002 | OHS | 8925 | Alive | 852960621 | 1367120 | 510:11:51 | http:9881
Processes in Instance: WORKFLOW_OHS_003
WORKFLOW_OHS_003 | OHS | 9187 | Alive | 2041606684 | 1367120 | 510:11:51 | http:9883
SNAME:WORKFLOW_RPSF_001 |10431 | Alive:2018-06-18:06:55 |TCP
SNAME:WORKFLOW_SCPTL_001 |9788 | Alive:2018-06-18:06:55 |TCP
...
From this output, I only need the OHS names and their status, saved along with the originating server's name as a CSV. The pattern looks like this to me: look at each line, and if it doesn't contain "Processes in Instance" or "SNAME", split it on whitespace and grab the 1st field (OHS name) and the 4th field (status). So my CSV will look like:
server1, WORKFLOW_OHS_002, Alive
server1, WORKFLOW_OHS_003, Alive
server2, .....
...
How can I modify my bash to do this?
You can use awk:
while read -r line; do
ssh myUser@"$line" -t 'sudo su /path/to/script.sh' < /dev/null |
awk -v s="$line" -F '|' -v OFS=', ' '!/^[[:blank:]]*SNAME:/ && NF>2 {
gsub(/^[[:blank:]]+|[[:blank:]]+$/, "");
gsub(/[[:blank:]]*\|[[:blank:]]*/, "|");
print s, $1, $4
}'
done < "/home/listOfServers.txt"
EDIT: As per your comment below, you can do this to handle error conditions:
while read -r line; do
out=$(ssh myUser@"$line" -t 'sudo su /path/to/script.sh' < /dev/null 2>&1)
if [[ -z $out ]]; then
echo "$line, NULL, NULL"
elif [[ $out == *"timed out"* ]]; then
echo "$line, FAIL, FAIL"
else
awk -v s="$line" -F '|' -v OFS=', ' '!/^[[:blank:]]*SNAME:/ && NF>2 {
gsub(/^[[:blank:]]+|[[:blank:]]+$/, "");
gsub(/[[:blank:]]*\|[[:blank:]]*/, "|");
print s, $1, $4
}' <<< "$out"
fi
done < "/home/listOfServers.txt"
Something to try - and good luck.
while read line
do ssh myUser@"$line" -t 'sudo su /path/to/script.sh' < /dev/null |
sed -E "/Processes in Instance/d; /SNAME/d;
s/^ *([^| ]*) *[|][^|]*[|][^|]*[|] *([^| ]*).*/$line,\1,\2/;"
done < "/home/listOfServers.txt"
You ought to be able to improve on that. :)
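Without SSH access, you can still verify the sed against a few canned lines from the sample output (the server name is hardcoded to server1 here, standing in for the loop variable):

```shell
line=server1   # stands in for "$line" from the while loop
printf '%s\n' \
  'SNAME:WORKFLOW_APS_001 |10891 | Alive:2018-06-18:06:54 |TCP' \
  'Processes in Instance: WORKFLOW_OHS_002' \
  'WORKFLOW_OHS_002 | OHS | 8925 | Alive | 852960621 | 1367120 | 510:11:51 | http:9881' |
sed -E "/Processes in Instance/d; /SNAME/d;
  s/^ *([^| ]*) *[|][^|]*[|][^|]*[|] *([^| ]*).*/$line,\1,\2/;"
# → server1,WORKFLOW_OHS_002,Alive
```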
What is the best way to place a dot before the last three digits in a terminal output?
I want to get from this: 10902MB/88%
to this: 10.902MB/88%
Currently I have this command for that:
df -m /dev/sda8 | grep -Eo '[0-9]* [0-9]*%'| sed 's/ */MB\//g'
Edit
The output of df -m /dev/sda8 is
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/sda8 90349 74910 10828 88% /
I need only the Available and Use% values.
echo 10902MB/88% | awk '{gsub(/[0-9][0-9][0-9]MB\//, ".&")} 1'
or, applied to your existing pipeline:
df -m /dev/sda8 | grep -Eo '[0-9]* [0-9]*%' | sed 's/ */MB\//g' | awk '{gsub(/[0-9][0-9][0-9]MB\//, ".&")} 1'
You are trying to get values of "Available" and "Use%" sections from df command output for a certain mounted file system.
Use the following sed approach:
df -m /dev/sda8 | sed -En 's~.* ([0-9]+)([0-9]{3}) +([0-9]+%).*~\1.\2MB/\3~gp'
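Checked against the sample df -m line from the question (Available 10828, Use% 88%), the capture groups split the final three digits off so the dot lands in the right place:

```shell
# \1 grabs the leading digits, \2 the final three, \3 the Use% column.
echo '/dev/sda8      90349 74910     10828  88% /' |
sed -En 's~.* ([0-9]+)([0-9]{3}) +([0-9]+%).*~\1.\2MB/\3~gp'
# → 10.828MB/88%
```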
I have an ugly looking command that takes output from sensors and strips it of unneeded characters and decimal places, then prints it with awk with a trailing fahrenheit symbol. The command looks like this:
sensors -f | awk '/temp1/ { print $2 }' | sed 's/+//g' | awk '{ sub (/\..*/, ""); print $1 "°F" }'
Output from this command looks like this.
101°F
The command is a bit of a mess, but it gets the job done, and considering my limited knowledge of awk and sed, that is something. I was hoping someone with a bit more knowledge could show me how this command could be condensed, as I am unclear how to run awk with a search pattern, followed by a replace and print. Any help would be appreciated. Thanks!
EDIT: Output of sensors -f as requested.
acpitz-virtual-0
Adapter: Virtual device
temp1: +112.2°F (crit = +203.0°F)
temp2: +118.6°F (crit = +221.0°F)
coretemp-isa-0000
Adapter: ISA adapter
Core 0: +119.0°F (high = +212.0°F, crit = +212.0°F)
Core 1: +121.0°F (high = +212.0°F, crit = +212.0°F)
# Posix
sensors -f | sed -e '/^temp1:[[:blank:]]\{1,\}[+]/ !d' -e 's///;s/F.*/F/'
# GNU
sensors -f | sed '/^temp1:[[:blank:]]+[+]/!d;s///;s/F.*/F/'
Another sed approach.
You can use this single awk command instead:
sensors -f | awk '/temp1/ { gsub(/^\+|\.[[:digit:]]+/, "", $2); print $2 }'
Output:
112°F
gsub(/^\+|\.[[:digit:]]+/, "", $2); will replace any leading + or decimal values from the reported temperature.
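If sensors isn't available, the gsub logic can be checked on a hardcoded line matching the sample output above:

```shell
# $2 is "+112.2°F"; gsub strips the leading + and the ".2" fraction.
echo 'temp1:        +112.2°F  (crit = +203.0°F)' |
awk '/temp1/ { gsub(/^\+|\.[[:digit:]]+/, "", $2); print $2 }'
# → 112°F
```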
With GNU sed:
sensors -f | sed -nE 's/^temp1: +\+([^.]+).[0-9]*(°F).*/\1\2/p'
Output:
112°F
This is less messy:
sensors -f | awk '/temp1/{ printf "%d°F\n", $2 }'
It works because awk begins parsing the string $2 at the beginning and stops as soon as it no longer looks like a number. So, as an example, "+234.76abcd" * 1 is parsed to the number 234.76. The same thing happens here when awk looks for a number to fill the %d in the format string.
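The coercion is easy to see in isolation, using the sample string from the explanation above:

```shell
# awk converts the string to a number by reading the longest numeric
# prefix ("+234.76"); %d then truncates toward zero.
echo '+234.76abcd' | awk '{ printf "%d\n", $1 }'
# → 234
```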
I'd prefer:
sensors -f | awk '/temp1/{ print int($2) "°F" }'
I've got a bash variable that contains an IP address (no CIDR or anything, just the four octets).
I need to break that variable into four separate octets, like this:
$ip = 1.2.3.4;
$ip1 = 1
$ip2 = 2
# etc
so I can escape the period in sed. Is there a better way to do this? Is awk what I'm looking for?
You could use bash. Here's a one-liner that assumes your address is in $ip:
IFS=. read ip1 ip2 ip3 ip4 <<< "$ip"
It works by setting the "internal field separator" for one command only, changing it from the usual white space delimiter to a period. The read command will honor it.
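A quick demonstration with a made-up address:

```shell
ip=10.20.30.40
# IFS=. applies only to this one read; each octet lands in its own variable.
IFS=. read -r ip1 ip2 ip3 ip4 <<< "$ip"
echo "$ip1 $ip2 $ip3 $ip4"
# → 10 20 30 40
```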
If you want to assign each octet to its own variable without using an array or a single variable with newline breaks (so you can easily run it through a for loop), you could use # and % modifiers to ${x} like so:
[ 20:08 jon@MacBookPro ~ ]$ x=192.160.1.1 && echo $x
192.160.1.1
[ 20:08 jon@MacBookPro ~ ]$ oc1=${x%%.*} && echo $oc1
192
[ 20:08 jon@MacBookPro ~ ]$ x=${x#*.*} && echo $x
160.1.1
[ 20:08 jon@MacBookPro ~ ]$ oc2=${x%%.*} && echo $oc2
160
[ 20:08 jon@MacBookPro ~ ]$ x=${x#*.*} && echo $x
1.1
[ 20:08 jon@MacBookPro ~ ]$ oc3=${x%%.*} && echo $oc3
1
[ 20:08 jon@MacBookPro ~ ]$ x=${x#*.*} && echo $x
1
[ 20:08 jon@MacBookPro ~ ]$ oc4=${x%%.*} && echo $oc4
1
[ 20:09 jon@MacBookPro ~ ]$ echo "$oc1\.$oc2\.$oc3\.$oc4"
192\.160\.1\.1
See this wiki page for more: /wiki/Bash:_Append_to_array_using_while-loop
You can split strings using the set built-in, with IFS as separator (normally space and tab).
splitip () {
local IFS
IFS=.
set -- $*
echo "$#"
}
splitip 12.34.56.78
# Now $1 contains 12, $2 contains 34, etc
If you just need to backslash-escape the dots, use string substitution - bash has ${ip//./\\.}
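For example, with a throwaway address:

```shell
ip=1.2.3.4
escaped=${ip//./\\.}   # replace every "." with "\."
echo "$escaped"
# → 1\.2\.3\.4
```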
This code is something that I found on another site when I was looking to do the same thing. It works perfectly for my application.
read ICINGAIPADDRESS
# The following lines will break the ICINGAIPADDRESS variable into the four octets
# and assign each octet to a variable.
ipoct1=$(echo ${ICINGAIPADDRESS} | tr "." " " | awk '{ print $1 }')
ipoct2=$(echo ${ICINGAIPADDRESS} | tr "." " " | awk '{ print $2 }')
ipoct3=$(echo ${ICINGAIPADDRESS} | tr "." " " | awk '{ print $3 }')
ipoct4=$(echo ${ICINGAIPADDRESS} | tr "." " " | awk '{ print $4 }')
An easier way is using awk:
echo 192.168.0.12 | awk -F. '{print $1 $2 $3 $4}'
-F sets the field separator; in this case we use the dot "." as the separator and print each column individually.
mortiz@florida:~/Documents/projects$ echo 76.220.156.100 | awk -F. '{print $1 $2 $3 $4}'
76220156100
mortiz@florida:~/Documents/projects$ echo 76.220.156.100 | awk -F. '{print $1}'
76
mortiz@florida:~/Documents/projects$ echo 76.220.156.100 | awk -F. '{print $2}'
220
mortiz@florida:~/Documents/projects$ echo 76.220.156.100 | awk -F. '{print $3}'
156
mortiz@florida:~/Documents/projects$ echo 76.220.156.100 | awk -F. '{print $4}'
100