Following the syntax given in the documentation here.
# Makefile
S=' '
spam:
ifneq ($(strip $(S)),)
	@echo nonempty
else
	@echo empty
endif
But when executing make spam, it still goes into the nonempty branch; I expected the empty branch.
What am I doing wrong?
make variable assignments aren't like shell assignments. You don't need the quotes.
You are setting the value of your variable to the three literal characters ' ' (quote, space, quote), not to a single space as you are expecting.
strip then leaves it as ' ' (the quotes are not whitespace), which is not equal to the empty string.
Remove the quotes on the assignment line.
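For reference, a minimal corrected sketch: with the quotes removed, S is empty (or only whitespace, which strip discards), so the else branch runs.
# Makefile
S =
spam:
ifneq ($(strip $(S)),)
	@echo nonempty
else
	@echo empty
endif
Running make spam should now print empty.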
I'm very new to the sed command in bash, so I'm trying to learn.
I'm currently faced with a few thousand markdown files I need to clean up, and I'm trying to create a command that deletes part of the following:
# null 864: Headline
body text
I need anything that comes before the headline deleted, which is '# null 864: '.
It's always '# null ', then some digits, then ': '.
I'm using GNU sed (gsed) because I'm on a Mac.
The best I've come up with so far is
gsed -i '/#\snull\s([1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]):\s/d' *.md
The above does not seem to work.
However, if I do
gsed -i '/#\snull/d' *.md
it does what I want, but it also does some unintended stuff in the body text.
How do I control it so only the headline and the body text remain?
Considering that you want to print the values before the headline and don't want to print any other lines, try the following:
sed -E -n 's/^(#\s+null\s+[0-9]+:\s+)Headline/\1/p' Input_file
In case you want to print the value before Headline, and print the complete line when no match is found, then try the following:
sed -E 's/^(#\s+null\s+[0-9]+:\s+)Headline/\1/' Input_file
Explanation: this simply uses the -E option of sed to enable ERE (extended regular expressions), then the s command to perform a substitution. It matches # followed by space(s), null followed by space(s), digits, a colon and space(s), keeping all of that in the 1st capturing group, and substitutes the whole match with the 1st capturing group.
NOTE: The above commands print the values to the terminal; if you want to save the changes in place, add the -i option once you are satisfied with the output.
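For example, on the two sample lines from the question, the first command prints only the captured prefix with its trailing space (the body text line produces no output because of -n):
$ sed -E -n 's/^(#\s+null\s+[0-9]+:\s+)Headline/\1/p' Input_file
# null 864: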
If I'm understanding correctly, you have files like this:
This should get deleted
This should too.
# null 864: Headline
body text
this should get kept
You want to keep the headline, and everything after, right? You can do this in awk:
awk '/# null [0-9]+:/,eof {print}' foo.md
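Since eof is a variable that is never set, the range runs from the first matching line through the end of the file, so on the sample above this should print:
# null 864: Headline
body text
this should get kept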
You might use awk, and replace the # null 864: part with an empty string using sub.
See this page to either create a new file, or to overwrite the same file.
awk '{sub(/^# null [0-9]+:[[:blank:]]+/,"")}1' file
The }1 prints the whole line, as 1 evaluates to true.
The pattern matches:
^# null  - match literally from the start of the string
[0-9]+:[[:blank:]]+ - match 1+ digits, then : and 1+ spaces
Output
Headline
body text
On a Mac, ed should be installed by default.
The content of script.ed:
g/^# null [[:digit:]]\{1,\}: Headline$/s/^.\{1,\}: //
,p
Q
for file in *.md; do ed -s "$file" < ./script.ed; done
If the output is OK, remove the ,p and change the Q to w so it edits the files in place:
g/^# null [[:digit:]]\{1,\}: Headline$/s/^.\{1,\}: //
w
Run the loop again.
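On the sample from the question, each file should then be left with just:
Headline
body text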
I'd use a range in sed, the same as Andy Lester's awk solution.
Borrowing his infile:
$: cat tst.md
This should get deleted
This should too.
# null 864: Headline
body text
this should get kept
$: sed -Ein '/^# null [0-9]+:/,${p;d};d;' tst.md
$: cat tst.md
# null 864: Headline
body text
this should get kept
sample_file:
this is a test line!
this is a test line!
this is a test line!
this is a test line!
I'm using a Perl one-liner to replace text like this:
perl -pi -e 's{this(.*)\s}{\n\1\n}i && s{.*\n}{}' sample_file
The this(.*)\s part and the \1 part are variable and I cannot change them (they come from user input).
My problem is that I need to adjust the {\n\1\n} part depending on whether the first regex matches the newline character.
For example, if the first regex is {this(.*)\s} I need {\n\1\n}, but if the first is something like {(.*)a test} I need {\n\1}.
How can I check whether the newline character is lost and put it back if necessary?
Generally speaking, you want to chomp input lines, and add a newline to output lines. -l (in conjunction with -n or -p) will do both.
For example, the following doesn't replace the newline with ! because it was removed by -l (and subsequently re-added by the print).
perl -i -ple's/\s/!/g' file
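Run against the sample_file above, every space is replaced but the line endings survive, so each line should come out as:
this!is!a!test!line!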
By the way, \1 ("match what the first capture captured") makes no sense in a substitution. You want $1 (as -w would tell you).
I am having trouble splitting a line which contains double-quoted strings separated by commas. The string looks something like:
"DevLoc","/Root/Docs/srvr/temp test","171.118.108.22","/Results/data/Procesos Batch","C:\DataExport\ExportTool\Winsock Folder DB","C:\Export\ExportTool\Temp Folder","22"
Some string values contain spaces. I want to store each double-quoted string in its own variable. Can anyone please help?
Below is my batch script. The variable 'EnvDetails' contains the above line, which needs to be parsed.
FOR /F "tokens=1,2,3,4,5,6,7 delims=," %%i in ("%EnvDetails%") do (
SET TEMPS=%%i
SET Path=%%j
SET host=%%k
SET scriptPath=%%l
SET WINSP_HOME=%%m
SET PUTTY_HOME=%%n
SET portNum=%%o
@echo %TEMPS% > temp.txt
@echo %MPath% >> temp.txt
@echo %host% >> temp.txt
@echo %scriptPath% >> temp.txt
@echo %WINSCP_HOME% >> temp.txt
@echo %PUTTY_HOME% >> temp.txt
@echo %portNum% >> temp.txt
)
Part of the problem is that you're attempting to retrieve your variable values within the same parenthetical code block as they're set. Because the cmd interpreter replaces variables with their values before the commands are executed, you're basically echoing empty values to temp.txt. To wait until the variables have been defined before expanding them, you'd need delayed expansion.
But you're really making this more complicated than it needs to be. What else are you doing with the variables, besides echoing them out to a text file?
What you should do instead is use a basic for loop rather than for /f. for without any switches evaluates lines similarly to CSV parsers anyway, splitting on commas, semicolons, unquoted spaces and tabs, and so forth.
Given that you're basically splitting a line on commas and echoing each token, in order, to a text file, one token per line, you can simplify your code quite a bit like this:
@echo off
>temp.txt (
for %%I in (%EnvDetails%) do echo %%~I
)
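With the EnvDetails value from the question, temp.txt should end up with one unquoted value per line, along these lines:
DevLoc
/Root/Docs/srvr/temp test
171.118.108.22
/Results/data/Procesos Batch
C:\DataExport\ExportTool\Winsock Folder DB
C:\Export\ExportTool\Temp Folder
22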
If I'm mistaken and you do indeed intend to perform further processing on the data, and you actually do need the variables, then this example demonstrates delayed expansion:
@echo off
setlocal
set EnvDetails="DevLoc","/Root/Docs/srvr/temp test","171.118.108.22","/Results/data/Procesos Batch","C:\DataExport\ExportTool\Winsock Folder DB","C:\Export\ExportTool\Temp Folder","22"
>temp.txt (
FOR /F "tokens=1-7 delims=," %%i in ("%EnvDetails%") do (
SET "TEMPS=%%~i"
SET "MPath=%%~j"
SET "host=%%~k"
SET "scriptPath=%%~l"
SET "WINSCP_HOME=%%~m"
SET "PUTTY_HOME=%%~n"
SET "portNum=%%~o"
setlocal enabledelayedexpansion
echo !TEMPS!
echo !MPath!
echo !host!
echo !scriptPath!
echo !WINSCP_HOME!
echo !PUTTY_HOME!
echo !portNum!
endlocal
)
)
Final note: The tilde notation of %%~i, %%~j, etc, strips surrounding quotation marks from each token. If you intentionally wish to preserve the quotation marks as part of the variable values, remove the tildes.
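A quick, self-contained way to see the difference (a hypothetical one-value loop, not part of the answer above):
@echo off
for %%i in ("DevLoc") do (
    echo %%i
    echo %%~i
)
rem prints "DevLoc" on the first line and DevLoc on the second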
Let's say I have some text in a variable called $1. I want to check whether $1 contains a certain string, and if it does, print a message. The printing is not the problem; the problem is the check. Any ideas how to do that?
The easiest way, in my opinion, is this:
set YourString=This is a test
If NOT "%YourString%"=="%YourString:test=%" (
echo Yes
) else (
echo No
)
Basically, the string after ':' (and before '=') is the string you are looking for. You use NOT in front of the IF because %YourString:test=% removes every occurrence of test from the string, so if test is present the two sides are no longer equal.
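If the search string itself lives in a variable, the same trick can be written with delayed expansion. A minimal sketch (the names haystack and needle are just placeholders, not part of the answer above):
@echo off
setlocal enabledelayedexpansion
set "haystack=This is a test"
set "needle=test"
rem remove every occurrence of needle from haystack; if anything changed, needle was present
if not "!haystack!"=="!haystack:%needle%=!" (
    echo Yes
) else (
    echo No
)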
The SET search and replace trick works in many cases, but it does not support case sensitive or regular expression searches.
If you need a case sensitive search or limited regular expression support, you can use FINDSTR.
To avoid complications of escaping special characters, it is best if the search string is in a variable and both search and target are accessed via delayed expansion.
You can pipe $1 into the FINDSTR command with the ECHO command. Use ECHO( in case $1 is undefined, and be careful not to add extra spaces. ECHO !$1! will echo ECHO is off. (or on) if $1 is undefined, whereas ECHO(!$1! will echo a blank line if undefined.
FINDSTR will echo $1 if it finds the search string. You don't want that, so you redirect output to nul. FINDSTR sets ERRORLEVEL to 0 if the search string is found, and 1 if it is not found; that is what is used to check whether the string was found. The && and || operators are a convenient syntax for testing for a match (ERRORLEVEL 0) or no match (ERRORLEVEL not 0).
The regular expression support is rudimentary, but still useful.
See FINDSTR /? for more info.
This regular expression example will search $1 for "BEGIN" at start of string, "MID" anywhere in middle, and "END" at end. The search is case sensitive by default.
set "search=^BEGIN.*MID.*END$"
setlocal enableDelayedExpansion
echo(!$1!|findstr /r /c:"!search!" >nul && (
echo FOUND
rem any commands can go here
) || (
echo NOT FOUND
rem any commands can go here
)
As far as I know, cmd.exe has no built-in function that answers your question directly, but it does support a replace operation. So the trick is: in your $1, replace the substring whose presence you need to test with an empty string, then check whether $1 has changed. If it has, then it did contain the substring (otherwise the replace operation would have had nothing to replace in the first place!). See the code below:
set longString=the variable containing (or not containing) some text
@rem replace xxxxxx with the string you are looking for
set tempStr=%longString:xxxxxx=%
if "%longString%"=="%tempStr%" goto notFound
echo Substring found!
goto end
:notFound
echo Substring not found
:end
I'm trying to do some fairly simple string parsing in bash script.
Basically, I have a file that is comprised of multiple multi-line fields. Each field is surrounded by a known header and footer.
I want to extract each field separately into an array or similar, like this
FILE=`cat file`
REGEX="######[\s\S]+?#####"

if [[ $FILE =~ $REGEX ]]; then
    echo $BASH_REMATCH
fi
FILE:
######################################
this is field one
######
######################################
this is field two
they can be any number of lines
######
Now I'm pretty sure the problem is that bash doesn't match newlines with the "."
I can match this with "pcregrep -M", but of course the whole file is going to match. Can I get one match at a time from pcregrep?
I'm not opposed to using some inline perl or similar.
If you have gawk:
awk 'BEGIN{ RS="##*#" }
NF{
gsub("\n"," ") #remove this is you want to retain new lines
print "-->"$0
# put to array
arr[++d]=$0
} ' file
output
$ ./shell.sh
--> this is field one
--> this is field two they can be any number of lines
The TXR language performs whole-document multi-line matching, binds variables, and (with the -B "dump bindings" option) emits properly escaped shell variable assignments that can be eval-ed. Arrays are supported.
The @ character is special to TXR, so a literal @ would have to be doubled up to match; the # characters in the data don't need any escaping.
$ cat fields.txr
@(collect)
#########################################
@ (collect)
@field
@ (until)
#########
@ (end)
@ (cat field)
@(end)
$ txr -B fields.txr data
field[0]="this is field one"
field[1]="this is field two they can be any number of lines"
$ eval $(txr -B fields.txr data)
$ echo ${field[0]}
this is field one
$ echo ${field[1]}
this is field two they can be any number of lines
The @field syntax matches an entire line. These matches are collected into a list since the variable is inside a @(collect), and the lists are collected into lists-of-lists because that is nested inside another @(collect). The inner @(cat field), however, reduces each inner list to a single string (joining the pieces with a space separator by default), so we end up with a list of strings.
This is "classic TXR": how it was originally designed and used, sparked by the idea:
Why don't we make here-documents work backwards and do parsing from reams of text into variables?
This implicit emission of matched variables, in shell syntax by default, continues to be a supported behavior even though the language has grown much more powerful, so there is less of a need to integrate with shell scripts.
I would build something around awk. Here is a first proof of concept:
awk '
BEGIN{ f=0; fi="" }
/^######################################$/{ f=1 }
/^######$/{ f=0; print"Field:"fi; fi="" }
{ if(f==2)fi=fi"-"$0; if(f==1)f++ }
' file
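On the FILE shown in the question, this should print something like:
Field:-this is field one
Field:-this is field two-they can be any number of lines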
begin="######################################"
end="######"
i=0
flag=0
while read -r line
do
case $line in
$begin)
flag=1;;
$end)
((i++))
flag=0;;
*)
if [[ $flag == 1 ]]
then
array[i]+="$line"$'\n' # retain the newline
fi;;
esac
done < datafile
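To check the result, something like the following (not part of the answer itself) prints each collected field:
for idx in "${!array[@]}"; do
    printf 'Field %s:\n%s' "$idx" "${array[idx]}"
done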
If you want to keep the marker lines in the array elements, move the assignment statement (with its flag test) to the top of the while loop before the case.