How to grep value from a php array - regex

I have a simple PHP array in a PHP file. Here is the content:
<?php
$arr = array(
'fookey' => 'foovalue',
'barkey' => 'barvalue'
);
How can I fetch the value foovalue using the grep command?
I have tried :
cat file.php | grep 'fookey=>'
Or
cat file.php | grep 'fookey=>*'
but it always returns the full line.

Your grep command shouldn't have matched anything if you ran it exactly the way you posted it here.
But if you are getting that line from grep, however you are running it,
pass the output you got from grep through a pipe to
awk -F"'" '{print $4}'
I tested it this way on my pc:
echo "'fookey' => 'foovalue'" | awk -F"'" '{print $4}'

grep 'fookey=>' shouldn't return any matches, because that pattern doesn't occur in the file: your example shows single quotes around fookey and a space before the =>.
Also, you want to lose the useless use of cat.
Because your regex contains literal single quotes, we instead use double quotes to protect the regex from the shell.
grep "'fookey' =>" file.php
If your goal is to extract the value inside single quotes after the =>, the simple standard solution is to use sed instead of grep. On a matching line, replace the surrounding text with nothing before printing the line.
sed "/.*'fookey' => '/!d;s///;s/'.*//" file.php
In some more detail,
/.*'fookey' => '/!d skips any lines which do not match this regex;
s/// replaces the matched regex (which is implied when you pass in an empty regex) with nothing;
s/'.*// replaces everything after the remaining single quote with nothing;
and then sed prints the resulting line (because that's what it does by default).
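For the sample file above, this should print just the value:
$ sed "/.*'fookey' => '/!d;s///;s/'.*//" file.php
foovalue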
If you get "event not found" errors, you want to set +H or (in the very unlikely event that you really want to use Csh history expansion) figure out how to escape the !; see also echo "#!" fails -- "event not found"
Other than that, we are lucky that the script doesn't contain any characters which are special within double quotes; generally speaking, single quotes are much safer because they really preserve the text between them verbatim, whereas double quotes in the shell are weaker (you have to separately escape any dollar signs, backquotes, or backslashes).

This should do:
awk -F "'" '$2~/fookey/ {print $4}' file
or in your case
awk -F "'" '$2~/secret/ {print $4}' file
It searches for all lines where the second field contains fookey/secret and then prints the fourth field with your value.
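With the example file from the question, this should print just the value:
$ awk -F "'" '$2~/fookey/ {print $4}' file.php
foovalue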

To fetch a value from an array, why can't you use the array_search method instead of grep?
<?php
$arr = array(
'fookey' => 'foovalue',
'barkey' => 'barvalue'
);
echo array_search("foovalue",$arr);
?>

You can use cut in combination with grep to get what you need.
cat file.php | grep 'fookey' | cut -c18-25
cut is used to extract a substring. In -cN-M, N and M are the starting and ending character positions of the substring; the exact numbers depend on the file's indentation.

Related

How can I get "grep -zoP" to display every match separately?

I have a file of this form:
X/this is the first match/blabla
X-this is
the second match-
and here we have some fluff.
And I want to extract everything that appears after "X" and between the same markers. So if I have "X+match+", I want to get "match", because it appears after "X" and between the marker "+".
So for the given sample file I would like to have this output:
this is the first match
and then
this is
the second match
I managed to get all the content between X followed by a marker by using:
grep -zPo '(?<=X(.))(.|\n)+(?=\1)' file
That is:
grep -Po '(?<=X(.))(.|\n)+(?=\1)' to match X followed by (something) that gets captured and matched at the end with (?=\1) (I based the code on my answer here).
Note I use (.|\n) to match anything, including a new line, and that I also use -z in grep to match new lines as well.
So this works well, the only problem comes from the display of the output:
$ grep -zPo '(?<=X(.))(.|\n)+(?=\1)' file
this is the first matchthis is
the second match
As you can see, all the matches appear together, with "this is the first match" being followed by "this is the second match" with no separator at all. I know this comes from the usage of "-z", which treats the whole file as a set of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline (quoting "man grep").
So: is there a way to get all these results separately?
I tried also in GNU Awk:
awk 'match($0, /X(.)(\n|.*)\1/, a) {print a[1]}' file
but not even the (\n|.*) worked.
awk doesn't support backreferences within a regexp definition.
Workarounds:
$ grep -zPo '(?s)(?<=X(.)).+(?=\1)' ip.txt | tr '\0' '\n'
this is the first match
this is
the second match
# with ripgrep, which supports multiline matching
$ rg -NoUP '(?s)(?<=X(.)).+(?=\1)' ip.txt
this is the first match
this is
the second match
Can also use (?s)X(.)\K.+(?=\1) instead of (?s)(?<=X(.)).+(?=\1). Also, you might want to use a non-greedy quantifier here to avoid matching match+xyz+foobaz for an input like X+match+xyz+foobaz+.
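For instance, assuming GNU grep, the non-greedy variant stops at the first repeated marker:
$ printf 'X+match+xyz+foobaz+\n' | grep -zPo '(?s)(?<=X(.)).+?(?=\1)' | tr '\0' '\n'
match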
With perl
$ perl -0777 -nE 'say $& while(/X(.)\K.+(?=\1)/sg)' ip.txt
this is the first match
this is
the second match
Here is another gnu-awk solution making use of RS and RT:
awk -v RS='X.' 'ch != "" && n=index($0, ch) {
    print substr($0, 1, n-1)
}
RT {
    ch = substr(RT, 2, 1)
}' file
this is the first match
this is
the second match
With GNU awk for multi-char RS, RT, and gensub() and without having to read the whole file into memory:
$ awk -v RS='X.' 'NR>1{print "<" gensub(end".*","",1) ">"} {end=substr(RT,2,1)}' file
<this is the first match>
<this is
the second match>
Obviously I added the "<" and ">" so you could see where each output record starts/ends.
The above assumes that the character after X isn't a non-repetition regexp metachar (e.g. ., ^, [, etc.) so YMMV
The use case is kind of problematic, because as soon as you print the matches, you lose the information about where exactly the separator was. But if that's acceptable, try piping to xargs -r0.
grep -zPo '(?<=X(.))(.|\n)+(?=\1)' file | xargs -r0
These options are GNU extensions, but then so is grep -z and (mostly) grep -P, so perhaps that's acceptable.
GNU grep -z terminates input/output records with null characters (useful in conjunction with other tools such as sort -z). pcregrep will not do that:
pcregrep -Mo2 '(?s)X(.)(.+?)\1' file
-o<number> is used instead of lookarounds. The lazy quantifier ? is added (in case \1 occurs later).

Remove hostnames from a single line that follow a pattern in bash script

I need to cat a file and edit a single line with multiple domain names, removing any domain name that contains a certain 4-letter pattern, e.g. ozar.
This will be used in a bash script, so the number of domain names can vary. I will save this to a CSV later on, but right now returning a string is fine.
I tried multiple commands, loops, and if statements, but sending the output to a variable I can use further in the script proved to be another difficult task.
Example file
$ cat file.txt
ozarkzshared.com win.ad.win.edu win_fl.ozarkzsp.com ap.allk.org allk.org ozarkz.com website.com
What I attempted (that was close)
domains_1=$(cat /tmp/file.txt | sed 's/ozar*//g')
domains_2=$( cat /tmp/file.txt | printf '%s' "${string##*ozar}")
Goal
$ echo "$domain_x"
win.ad.win.edu ap.allk.org allk.org website.com
If all the domains are on a single line separated by spaces, this might work:
awk '/ozar/ {next} 1' RS=" " file.txt
This sets RS, your record separator, then skips any record that matches the keyword. If you wanted to be able to skip a substring provided in a shell variable, you could do something like this:
$ s=ozar
$ awk -v re="$s" '$0 ~ re {next} 1' RS=" " file.txt
Note that the ~ operator is comparing a regular expression, not precisely a substring. You could leverage the index() function if you really want to check a substring:
$ awk -v s="$s" 'index($0,s) {next} 1' RS=" " file.txt
Note that all of the above is awk, which isn't what you asked for. If you'd like to do this with bash alone, the following might be for you:
while read -r -a a; do
for i in "${a[@]}"; do
[[ "$i" = *"$s"* ]] || echo "$i"
done
done < file.txt
This assigns each line of input to the array $a[], then steps through that array testing for a substring match and printing if there is none. Text processing in bash is MUCH less efficient than in a more specialized tool like awk or sed. YMMV.
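With the sample file from the question and s=ozar, the loop should print each surviving domain on its own line:
$ s=ozar
$ while read -r -a a; do for i in "${a[@]}"; do [[ "$i" = *"$s"* ]] || echo "$i"; done; done < file.txt
win.ad.win.edu
ap.allk.org
allk.org
website.com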
You want to delete the words up to a space delimiter:
$ sed 's/ozar[^ ]*//g' file
win.ad.win.edu win_fl. ap.allk.org allk.org website.com

unix sed not backtracking to finish the job

I'm trying to make a script to convert postgres CSV dumps into Oracle csv dumps. Aka, I'm trying to replace "true" with "Y" and "false" with "N".
So I want a script called to_oracle like this:
echo "false,false,false,true" | to_oracle
N,N,N,Y
So here is my attempt:
sed -E -e 's:(,|^)true(,|$):\1Y\2:g' -e 's:(,|^)false(,|$):\1N\2:g' "$#"
The logic is that a field in a CSV file either starts with beginning of line or a comma "," and it ends with either the end of line or a comma ","
The problem with this script is that it greedily absorbs the comma and thus every second field doesn't work:
echo "false,false,false,true" | to_oracle
N,false,N,Y
Now I suppose I could pipe it to the script twice, and that would do the job, but I'm wondering is there a more elegant solution?
An awk version:
echo "false,false,false,true" | awk -F, -v OFS=, '{for(i=1;i<=NF;i++) $i=$i=="true"?"Y":"N"}1'
N,N,N,Y
It tests the fields one by one: if a field is true it becomes Y, otherwise N.
If you would like to test for false explicitly as well:
echo "false,false,false,true" | awk -F, -v OFS=, '{for(i=1;i<=NF;i++) $i=($i=="true"?"Y":($i=="false"?"N":"other"))}1'
N,N,N,Y
With GNU sed, you may use
sed -E ':a;s/(,|^)false(,|$)/\1N\2/;ta; :b;s/(,|^)true(,|$)/\1Y\2/;tb'
Details
-E will enable POSIX ERE syntax
:a;s/(,|^)false(,|$)/\1N\2/;ta keeps replacing false (between commas or at the start/end of the string) with N, looping back to label a until no replacement is made;
:b;s/(,|^)true(,|$)/\1Y\2/;tb does the same for true, replacing it with Y.
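Running it against the sample input from the question:
$ echo "false,false,false,true" | sed -E ':a;s/(,|^)false(,|$)/\1N\2/;ta; :b;s/(,|^)true(,|$)/\1Y\2/;tb'
N,N,N,Y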

sed substitution with user-specified replacement string

The general form of the substitution command in sed is:
s/regexp/replacement/flags
where the '/' characters may be uniformly replaced by any other single character. But how do you choose this separator character when the replacement string is being fed in by an environment variable and might contain any printable character? Is there a straightforward way to escape the separator character in the variable using bash?
The values are coming from trusted administrators so security is not my main concern. (In other words, please don't answer with: "Never do this!") Nevertheless, I can't predict what characters will need to appear in the replacement string.
You can also use a control character as the regex delimiter, like this:
s^Aregexp^Areplacement^Ag
Where ^A is entered by typing Ctrl-V followed by Ctrl-A.
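If typing a literal control character is awkward, bash's $'...' quoting can supply the same byte; a small sketch (the / to - replacement is just an illustration):
$ echo 'a/b/c' | sed $'s\001/\001-\001g'
a-b-c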
Or else use awk and don't worry about delimiters:
awk -v s="search" -v r="replacement" '{gsub(s, r)} 1' file
There isn't an (easy) solution for the following using sed.
while read -r string from to wanted
do
echo "in [$string] want replace [$from] to [$to] wanted result: [$wanted]"
final=$(echo "$string" | sed "s/$from/$to/")
[[ "$final" == "$wanted" ]] && echo OK || echo WRONG
echo
done <<EOF
=xxx= xxx === =====
=abc= abc /// =///=
=///= /// abc =abc=
EOF
what prints
in [=xxx=] want replace [xxx] to [===] wanted result: [=====]
OK
in [=abc=] want replace [abc] to [///] wanted result: [=///=]
sed: 1: "s/abc/////": bad flag in substitute command: '/'
WRONG
in [=///=] want replace [///] to [abc] wanted result: [=abc=]
sed: 1: "s/////abc/": bad flag in substitute command: '/'
WRONG
Can't resist: Never do this! (with sed). :)
Is there a straightforward way to escape the separator character in
the variable using bash?
No. Because you are passing the strings from variables, you can't easily escape the separator character: in "s/$from/$to/" the separator can appear not only in the $to part but in the $from part too. E.g. if you escape the separator in the $from part, the replacement won't happen at all, because the escaped $from will no longer be found.
Solution: use something other than sed.
1.) Using pure bash. In the above script, instead of the sed command, use
final=${string//$from/$to}
2.) If bash's substitutions are not enough, use something to which you can pass $from and $to as variables.
As @anubhava already said, you can use: awk -v f="$from" -v t="$to" '{gsub(f, t)} 1' file
or you can use perl and passing values as environment variables
final=$(echo "$string" | perl_from="$from" perl_to="$to" perl -pe 's/$ENV{perl_from}/$ENV{perl_to}/')
or passing the variables to perl via the command line arguments
final=$(echo "$string" | perl -spe 's/$f/$t/' -- -f="$from" -t="$to")
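For example, the case that broke sed in the test script above works with both the pure-bash replacement and the perl variant:
$ string='=///=' from='///' to='abc'
$ echo "${string//$from/$to}"
=abc=
$ echo "$string" | perl -spe 's/$f/$t/' -- -f="$from" -t="$to"
=abc=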
2 options:
1) take a character that does not occur in the strings (needs a pre-processing check of the content, and there is no guarantee that such a character is available)
# Quick and dirty sample using `'/_##|!%=:;,-` arbitrary sequence
Separator="$( printf "%sa%s%s" '/_##|!%=:;,-' "${regexp}" "${replacement}" \
| sed -n ':cycle
s/\(.\)\(.*a.*\1.*\)\1/\1\2/g;t cycle
s/\(.\)\(.*a.*\)\1/\2/g;t cycle
s/^\(.\).*a.*/\1/p
' )"
echo "Separator: [ ${Separator} ]"
sed "s${Separator}${regexp}${Separator}${replacement}${Separator}flag" YourFile
2) escape the chosen delimiter character in the pattern strings (needs a pre-processing step to escape that character).
# Quick and dirty sample using # arbitrary with few escape security check
regexpEsc="$( printf "%s" "${regexp}" | sed 's/#/\\#/g' )"
replacementEsc"$( printf "%s" "${replacement}" | sed 's/#/\\#/g' )"
sed 's#regexpEsc#replacementEsc#flags' YourFile
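A quick sanity check of the escaping approach, using hypothetical values that contain the # delimiter:
$ regexp='a#b' replacement='x#y'
$ regexpEsc="$( printf "%s" "${regexp}" | sed 's/#/\\#/g' )"
$ replacementEsc="$( printf "%s" "${replacement}" | sed 's/#/\\#/g' )"
$ echo 'foo a#b bar' | sed "s#${regexpEsc}#${replacementEsc}#"
foo x#y bar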
From man sed
\cregexpc
Match lines matching the regular expression regexp. The c may be any
character.
When working with paths I often use # as the separator:
sed 's#find/path#replace/path#'
No need to escape / with an ugly \/.
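For example, rewriting a (hypothetical) path prefix:
$ echo '/usr/local/bin/tool' | sed 's#/usr/local#/opt#'
/opt/bin/tool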

pipe sed command to create multiple files

I need to get everything from X to Y in a file with multiple occurrences; each time it matches an occurrence it should save it to a separate file.
Here is an example file (demo.txt):
\x00START how are you? END\x00
\x00START good thanks END\x00
sometimes random things\x00\x00 inbetween it (ignore this text)
\x00START thats nice END\x00
And now after running a command each file (/folder/demo1.txt, /folder/demo2.txt, etc) should have the contents between \x00START and END\x00 (\x00 is null) in addition to 'START' but not 'END'.
/folder/demo1.txt should say "START how are you? ", /folder/demo2.txt should say "START good thanks".
So basically it should pipe out "how are you? ", and using 'echo' I can prepend the 'START'.
It's worth keeping in mind that I am dealing with a very large binary file.
I am currently using
sed -n -e '/\x00START/,/END\x00/ p' demo.txt > demo1.txt
but that's not working as expected (it's getting lines before the '\x00START' and doesn't stop at the first 'END\x00').
If you have GNU awk, try:
awk -v RS='\0START|END\0' '
length($0) {printf "START%s\n", $0 > ("folder/demo"++i".txt")}
' demo.txt
RS='\0START|END\0' defines a regular expression acting as the [input] Record Separator which breaks the input file into records by strings (byte sequences) between \0START and END\0 (\0 represents NUL (null char.) here).
Using a multi-character, regex-based record separator is NOT POSIX-compliant; GNU awk supports it (as does mawk in general, but seemingly not with NUL chars.).
Pattern length($0) ensures that the associated action ({...}) is only executed if the record is nonempty.
{printf "START%s\n", $0 > ("folder/demo"++i".txt")} outputs each nonempty record preceded by "START", into file folder/demo{n}.txt, where {n} represents a sequence number starting with 1.
You can use grep for that:
grep -Po "START\s+\K.*?(?=END)" file
how are you?
good thanks
thats nice
Explanation:
-P To allow Perl regex
-o To extract only matched pattern
\K Drops everything matched so far from the reported match (similar in effect to a lookbehind)
(?=something) Positive lookahead
EDIT: To match \00 as START and END may appear in between:
echo -e '\00START hi how are you END\00' | grep -aPo '\00START\K.*?(?=END\00)'
hi how are you
EDIT2: The solution using grep would only match a single line; for multi-line input it's better to use perl instead. The syntax is very similar:
echo -e '\00START hi \n how\n are\n you END\00' | perl -ne 'BEGIN{undef $/ } /\A.*?\00START\K((.|\n)*?)(?=END)/gm; print $1'
hi
how
are
you
What's new here:
undef $/ Undefine INPUT separator $/ which defaults to '\n'
(.|\n)* Dot matches almost any character, but it does not match
\n so we need to add it here.
/gm Modifiers, g for global m for multi-line
I would translate the nulls into newlines so that grep can find your wanted text on a clean line by itself:
tr '\000' '\n' < yourfile.bin | grep "^START"
from there you can take it into sed as before.
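Assuming the \x00 sequences in the sample are actual NUL bytes, this should leave each wanted chunk on a line of its own (still carrying the trailing END, which a sed step can strip as noted):
$ tr '\000' '\n' < demo.txt | grep '^START'
START how are you? END
START good thanks END
START thats nice END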