How to use target-specific variables in GNU make

I have a makefile like:
file1 = "path/to/some/file"
header = "col1;col2;col3"
$(file1):
    some steps to create the file
call_perl_script: $(file1)
    ${perl} script.pl in=$(header)
The header is currently hardcoded, and it is also present in the generated file1. I need to fetch the header from file1 instead. I have changed it to something like:
file1 = "path/to/some/file"
$(file1):
    some steps to create the file
    $(eval header="$(shell $(sed) -n "/^col1;col2;col3/Ip" $(file1))")
call_perl_script: $(file1)
    ${perl} script.pl in=$(header)
It works fine, but I want to know whether this is the correct way to work with target-specific variables. The header does not get its value until it is set with eval.
Also, if I print $(header) in the call_perl_script target it prints correctly, but if I use an "if" condition to check whether the variable is empty and set a default value, it does not work: it sets the value of header in the "if" block regardless of the value header received from the "sed" output.
call_perl_script: $(file1)
    ${echo} $(header)
ifeq "$(header)" ""
    $(eval header="col1;col2;col3")
endif
    ${perl} script.pl in=$(header)

I don’t think target-specific variables will help you here, because they’re usually static things. For example, if you need to silence one type of warning for one specific C file, you can add a rule like foo.o: CFLAGS+=-Whatever.
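For illustration, a minimal sketch of such a target-specific assignment (the file name and flag are just examples):
foo.o: CFLAGS += -Wno-unused-parameter   # extra flag applies only when building foo.o
foo.o: foo.c
    $(CC) $(CFLAGS) -c -o $@ $<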
The problem you’re running into is that $(eval header=...) is only executed when $(file1) is made. If it already exists, then the target won’t get rebuilt, and header won’t get set.
A more natural way of doing this in a Makefile would be to save the header to a separate file. That way, it will automatically get regenerated whenever $(file) changes:
.DELETE_ON_ERROR:
file = foo.txt
call_perl_script: $(file) $(file).header
    echo perl script.pl in="$(shell cat $(file).header)"
$(file):
    echo "col1;col2;col3;$$(head -c1 /dev/random)" > $(file)
%.header: %
    sed -n '/^col1;col2;col3/p' $< > $@
clean::
    rm -f $(file)
    rm -f *.header
which results in:
echo "col1;col2;col3;$(head -c1 /dev/random)" > foo.txt
sed -n '/^col1;col2;col3/p' foo.txt > foo.txt.header
perl script.pl in="col1;col2;col3;?"
However this is still a bit of a kludge, so for long-term maintainability, you may want to consider updating script.pl to parse out the header itself.
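If you go that route, a rough sketch of what script.pl might do (purely illustrative; this is not your actual script) is to accept the data file and read the header line itself:
# Hypothetical sketch for script.pl: take in=<data file> and locate its own header line
my %opt = map { split /=/, $_, 2 } @ARGV;              # e.g. in=path/to/some/file
open my $fh, '<', $opt{in} or die "cannot open $opt{in}: $!";
my $header;
while (<$fh>) {
    if (/^col1;col2;col3/i) { chomp($header = $_); last; }
}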

Why can't I debug a C++ file which contains spaces in its path (VS Code)? [duplicate]

I have a directory containing several files, some of which have spaces in their names:
Test workspace/
Another directory/
file1.ext
file2.ext
demo 2012-03-23.odp
I use GNU make's $(wildcard) function on this directory, and then iterate over the result using $(foreach), printing everything out. Here's the code:
FOO := $(wildcard *)
$(info FOO = $(FOO))
$(foreach PLACE,$(FOO),$(info PLACE = $(PLACE)))
Here's what I would expect to see printed out:
Test workspace
Another directory
file1.ext
file2.ext
demo 2012-03-23.odp
Here's what I would actually get:
Test
workspace
Another
directory
file1.ext
file2.ext
demo
2012-03-23.odp
The latter is obviously of no use to me. The documentation for $(wildcard) flat-out states that it returns a "space-separated list of names" but completely fails to acknowledge the huge problems this raises. Nor does the documentation for $(foreach).
Is it possible to work around this? If so, how? Renaming every file and directory to remove the spaces is not an option.
Bug #712 indicates that make does not handle names with spaces at all, anywhere.
I found a blog post saying it is partially supported by escaping the spaces with \ (the \\ there seems to be a typo or formatting artefact), but:
It does not work in any functions except $(wildcard).
It does not work when expanding lists of names from variables, which includes the special variables $?, $^ and $+ as well as any user-defined variable. Which in turn means that while $(wildcard) will match correct files, you won't be able to interpret the result anyway.
So with explicit or very simple pattern rules you can get it to work, but beyond that you are out of luck. You'll have to look for some other build system that does support spaces. I am not sure whether jam/bjam does; scons, waf, ant, nant, and msbuild should all work.
GNU Make does very poorly with space-separated filenames.
Spaces are used as delimiters in word lists all over the place.
This blog post summarizes the situation well, but WARNING: it incorrectly uses \\ rather than \
target: some\ file some\ other\ file
some\ file some\ other\ file:
    echo done
You can also use variables, so this would also work
VAR := some\ file some\ other\ file
target: $(VAR)
$(VAR):
    echo done
Only the wildcard function recognizes the escaping, so you can't do anything fancy without lots of pain.
But don't forget that your shell uses spaces as delimiters too.
If I wanted to change the echo done to touch $@, I'd have to add a backslash to escape it for my shell.
VAR := some\ file
target: $(VAR)
$(VAR):
    touch $(subst \,\\,$@)
or, more likely, use quotes
VAR := some\ file some\ other\ file
target: $(VAR)
$(VAR):
    touch '$@'
In the end, if you want to avoid a lot of pain, both in GNU make, and in your shell, don't put spaces in your filenames. If you do, hopefully the limited capabilities of Make will be sufficient.
The best way to deal with spaces in Make is to substitute spaces for other characters. This method also allows the use of listed file names such as $? and user-defined variables that hold lists of files.
s+ = $(subst \ ,+,$1)
+s = $(subst +,\ ,$1)
$(call s+,foo\ bar): $(call s+,bar\ baz) $(call s+,bar\ baz2)
# Also shows the list of dependencies with spaces.
    @echo Making $(call +s,$@) from $(call +s,$?)
$(call s+,bar\ baz):
    @echo Making $(call +s,$@)
$(call s+,bar\ baz2):
    @echo Making $(call +s,$@)
Outputs
Making bar baz
Making bar baz2
Making foo bar from bar baz bar baz2
You can then safely manipulate lists of file names using all the GNU Make
functions. Just be sure to remove the +'s before using these names in a rule.
SRCS := a\ b.c c\ d.c e\ f.c
SRCS := $(call s+,$(SRCS))
# Can manipulate the list with substituted spaces
OBJS := $(SRCS:.c=.o)
# Rule that has object files as dependencies.
exampleRule: $(call +s,$(OBJS))
# You can now use the list of OBJS (spaces are converted back).
    @echo Object files: $(call +s,$(OBJS))
a\ b.o:
# a b.o rule commands go here...
    @echo in rule: a b.o
c\ d.o:
e\ f.o:
Outputs
in rule: a b.o
Object files: a b.o c d.o e f.o
This info is all from the blog that everyone else was posting.
Most people seem to be recommending using no spaces in paths or using Windows 8.3 paths, but if you must use spaces, escaping spaces and substitution works.
If you are willing to rely on your shell a bit more, this gives a list which can hold names with spaces just fine:
$(shell find | sed 's: :\\ :g')
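For example, a minimal sketch (the find pattern and target name are illustrative) that feeds the escaped list into a recipe:
FILES := $(shell find . -type f -name '*.cpp' | sed 's: :\\ :g')
list:
    @echo $(FILES)   # the shell strips the backslashes again, printing the real names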
The original question said that "renaming is not an option", yet many commenters have pointed out that renaming is pretty much the only way Make can handle spaces. I suggest a middle way: Use Make to temporarily rename the files and then rename them back. This gives you all the power of Make with implicit rules and other goodness, but doesn't mess up your file naming scheme.
# Make cannot handle spaces in filenames, so temporarily rename them
nospaces:
    rename -v 's/ /%20/g' *\ *
# After Make is done, rename files back to having spaces
yesspaces:
    rename -v 's/%20/ /g' *%20*
You could call these targets by hand with make nospaces and make yesspaces, or you can have other targets depend on them. For example, you might want to have a "push" target which makes sure to put the spaces back in filenames before syncing files back with a server:
# Put spaces back in filenames before uploading
push: yesspaces
    git push
[Sidenote: I tried the answer which suggested using +s and s+, but it made my Makefile harder to read and debug. I gave up on it when it gave me guff over implicit rules like: %.wav : %.ogg ; oggdec "$<".]

Expanding path variable in makefile using SED on Windows

On a Windows machine, a makefile takes a path option and creates another file by appending this path value.
My problem is that the path variable does not expand correctly in the resulting file.
For example:
$ make var=c:\test\kernel
Using the makefile code below, this $(var) value is appended to the output file:
all:
    @sed -i '1 i\export PATH := $(var)' output.txt
Expected result
export PATH := c:\test\kernel
But instead I'm getting
export PATH := c: estkernel
So, how can I fix this problem in the makefile?
First, I strongly urge you to always use forward slashes in paths even on Windows, especially when working with make. There are very few programs on Windows that won't work with forward-slashes (mainly old-school CMD commands etc.) and using backslashes in tools which have their provenance in UNIX will always be an uncomfortable fit.
For your situation you can do something like this:
all:
    @sed -i '1 i\export PATH := $(subst \,\\,$(var))' output.txt
to convert your backslashes to escaped backslashes.
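If you would rather follow the forward-slash advice above, an alternative sketch (same hypothetical output.txt) converts the separators instead:
all:
    @sed -i '1 i\export PATH := $(subst \,/,$(var))' output.txt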

use regex to specify output filename

I have a folder with many files where I only need some columns so I tried this to extract what I need:
mkdir ./raw_data/selection
doit() {
csvfix read_dsv -f 1,3,7 -s \; $1 > $1 | sed 's/raw_data/raw_data\/selection/'
}
export -f doit
Files_To_Parse=`ls ./raw_data/*csv`
parallel doit ::: $Files_To_Parse
This doesn't work.
But if I to this:
cd ./raw_data
doit() {
csvfix read_dsv -f 1,3,7 -s \; $1 > selection/$1
}
export -f doit
Files_To_Parse=`ls -1 *csv`
parallel doit ::: $Files_To_Parse
it works, but I'd like to be able to run this from the top folder of the project (i.e. to put it in a file named brief_csv.sh and call it from IDEs).
If you used Bash, you could:
for f in raw_data/*.csv
do
csvfix ... "$f" > raw_data/selection/"${f##*/}"
done
Also, instead of csvfix for extracting columns you could use cut:
$ cut -d \; -f 1,3,7 $f ...
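For instance, assuming the input really is ;-separated, the whole loop with cut could look like:
for f in raw_data/*.csv
do
    cut -d \; -f 1,3,7 "$f" > raw_data/selection/"${f##*/}"
done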
I don't know the commands you are using, but this line:
csvfix read_dsv -f 1,3,7 -s \; $1 > $1 | sed ...
redirects the output to the same file you are reading, so it cannot work; in fact, you say that your modified code does work. You could use temporary files to store intermediate results. Don't be afraid to use many of them: debugging is easier (you can inspect the intermediate steps) and the system doesn't suffer. /tmp is a good place to put those intermediate files.
Use csvfix for the first step and redirect to /tmp/my-csvfix-intermediate; then have sed read /tmp/my-csvfix-intermediate and write to /tmp/my-sed-intermediate. After the last step, take the final intermediate result and overwrite the original file, perhaps after backing it up. You can move files wherever you need; I don't see any problem in running your script from an IDE - just use as many steps as you need.
Avoid parallelizing while debugging; once the script works, you can add the parallelization back.
When two or more parallel processes try to write to the same file (/tmp/my-...-intermediate), you will have another problem. To overcome it, use a different file for every process. The bash variable "$$" helps here: just use file names like "/tmp/my-$$-blablabla"; $$ is replaced with the PID of the process, and parallel processes cannot have the same PID.
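As a rough sketch of that advice (reusing the csvfix options and paths from your question; the temp-file name is arbitrary):
doit() {
    tmp="/tmp/my-$$-intermediate"                        # unique per process
    csvfix read_dsv -f 1,3,7 -s \; "$1" > "$tmp"         # first step goes to a temp file
    mv "$tmp" "./raw_data/selection/$(basename "$1")"    # then move the result into place
}
export -f doit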
Hope it helps, regards.

Why does `perl -i -p0e <expression>` work, but not `perl -0 -pie <expression>`?

If I try perl -pie 's/foo/bar/' file.txt it works as expected: the find-replace expression is executed, and the result is saved to the original file.
However, if I want to use the -0 to run an expression that includes newlines, simply prepending the option doesn't work:
$ perl -0 -pie 's/foo\nbar/qux/' file.txt
Can't open perl script "s/foo\nbar/qux/": No such file or directory
After several attempts, the following combination worked:
$ perl -i -p0e 's/foo\nbar/qux/' file.txt
My question is: why does the first order of options produce an error (especially when plain -pie works as expected), while the second ordering is correctly handled?
-i means work in-place without backup.
-ie means work in-place, with backup. The backup has the same name as the original file, but with e appended.
That means that perl -pie 's/foo/bar/' file.txt didn't work either (unless you have a Perl file named s/foo/bar/).
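In other words, if you want an in-place edit, spell the options out; for example (the .bak suffix is just an example):
$ perl -p -i -e 's/foo/bar/' file.txt        # in-place, no backup
$ perl -p -i.bak -e 's/foo/bar/' file.txt    # in-place, backup kept as file.txt.bak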
If you simply arrange the options logically, you avoid the problem. -i has nothing to do with the program (it will still work if added or removed), so it makes more sense to place it first anyway. -p and -0777, on the other hand, are part of the program, so it makes sense to place them next to -e. So writing the command sensibly results in one of the following:
perl -i -0777pe'...' ...
perl -i~ -0777pe'...' ...
perl -0777pe'...' ...
Note that I used -0777, since -0 treats the input as NUL-terminated lines rather than activating slurp mode.
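So a working slurp-mode version of your original command might be (with a ~ backup for safety):
$ perl -i~ -0777 -pe 's/foo\nbar/qux/' file.txt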

How do you loop through your current directory and store files in an array? Bash shell script

I am rather new to programming, and completely new to BASH. As described in the title, I am trying to loop through the current directory and store the files ending with .cpp into an array. I also am trying to create a second array which replaces the ".cpp" suffix with ".o" Whenever I try to compile I get "syntax error in conditional statement"
x=0
cwd=$(pwd)
for i in $cwd; do
if [[ $i == *.cpp]]
then
cppfield[$x] = $i
ofield[$x] = field[$x] | sed s/.cpp/.o/
x=$((count+1))
fi
done
Use:
shopt -s nullglob # In case there are no matches
for i in *.cpp; do
...
done
In your code, you're just setting i to $cwd, not the files in the directory.
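A corrected sketch of your loop, keeping your array names (cppfield, ofield):
shopt -s nullglob                  # avoid a literal "*.cpp" when nothing matches
x=0
for i in *.cpp; do
    cppfield[$x]=$i                # no spaces around "=" in assignments
    ofield[$x]=${i%.cpp}.o         # parameter expansion instead of sed
    x=$((x+1))
done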
I'm not sure what your purpose is in doing this, but if you just want to generate file names with .cpp replaced by .o, it can be done in a much easier way:
for f in *.cpp
do
echo ${f/.cpp/.o}
done
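If you also want the names collected into arrays, a short sketch without an explicit loop:
cpp_files=( *.cpp )                              # all .cpp files in the current directory
o_files=( "${cpp_files[@]/%.cpp/.o}" )           # same names with .o instead of .cpp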