I have a build where, for now, it would be easiest to just have a unique build rule for each build target (e.g. object file and library). Is it possible to specify in the build.ninja file what to do without first specifying a rule for it?
For example (toy syntax)
build this_file: depends_on_this.o depends_on_that.o - gcc arguments_to_use files.o etc
As usual, I found a way to do this a minute after asking.
We can just forward the whole command as a variable:
rule run
  command = $cmd

build log.txt: run
  cmd = echo $$hello >> log.txt

(The variable bindings must be indented, and the $$ escapes the dollar sign for Ninja so that the shell sees $hello.)
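That one catch-all rule can then back any number of one-off build statements, e.g. (hypothetical names, echoing the toy syntax from the question):

build this_file: run depends_on_this.o depends_on_that.o
  cmd = gcc arguments_to_use depends_on_this.o depends_on_that.o -o this_file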
Related
I have the following Ninja file because I could not find a way to pass environment variables to Ninja steps:
rule init
  command = sh init.sh
rule plan
  command = sh plan.sh
rule apply
  command = sh apply.sh

build plan: plan
build init: init
build apply: apply

default plan
I would like to skip using shell scripts and capture all the steps in Ninja. I am not sure if this is supported at all.
It is a design decision of Ninja not to support that. If you need to pass any parameter to Ninja, the supported way is to generate the build file with a build generator.
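For example, a minimal hand-rolled generator could be a short shell script (a sketch; the configure.sh name and the PLAN_ENV variable are made up for illustration) that bakes the current environment into the file it writes:

#!/bin/sh
# configure.sh: regenerate build.ninja with the current environment baked in.
# The unquoted EOF makes the shell expand $PLAN_ENV now, at generation time.
cat > build.ninja <<EOF
rule plan
  command = PLAN_ENV=$PLAN_ENV sh plan.sh
build plan: plan
default plan
EOF

Re-run the generator whenever the environment changes; Ninja itself then stays parameter-free.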
I want to create one Unix command which will unzip a folder.
So, I am searching for code, but I am not sure how to use such code to make a Unix command.
I have gone through various questions & answers but did not find any definitive information.
So, can anyone please suggest code (in C++, C, or any language that produces an executable) and explain how to use it as a Unix command?
NOTE: I know a command like 'unzip' is available in software like the 'MKS Toolkit', but we cannot use that, so I want to make a command which can be run from the command prompt.
If you want to add a command, you only need to create your executable and put a link to it in the /usr/bin folder.
Just compile your code and create a link to its executable like this:
ln -s /path/to/your_executable /usr/bin/command_name
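End to end, that might look like this (hypothetical names throughout; myunzip.cpp stands for whatever unzip implementation you write or adapt):

g++ myunzip.cpp -o myunzip          # compile your implementation
sudo ln -s "$PWD/myunzip" /usr/bin/myunzip
myunzip archive.zip                 # now runs like any other command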
If there is an existing command that you need to modify, you should set an alias for it. For example, if you want ls -1 to run whenever ls is used, you only need the command:
alias ls='ls -1'
or put the same command in the .bashrc file in your home directory.
Hello, I wrote a bash script that compiles several cpp and object files with g++. My goal is to run the script from within Vim via :!, but it doesn't work inside Vim, only outside of it. In addition, I wanted to know why using % in a script doesn't give me the current file, but gives an error instead.
the script:
#!/bin/bash
# Search for the main module and remove the extension.
delimain=`grep main *.cpp | cut -d. -f1`
# Check whether there are also object files
# (note: the -f test only works while there is at most one .o file)
if [ -f ./*.o ]; then
    g++ -g *.cpp *.o -o "$delimain".exe
# If there are only cpp files
else
    g++ -g *.cpp -o "$delimain".exe
fi
Thanks!
RE your comment:
I'm using Ubuntu 13.10; the alias is: alias link 'sh ~/bin/lin_script %'
You should not invoke a shell script with an explicit interpreter; the #!/bin/bash first line already tells the shell which interpreter to use. You're obviously a beginner in Bash; try reading some introductions to gain a better understanding.
Aliases won't work in Vim because they are only defined in an interactive shell, but commands launched from Vim are usually run in a non-interactive shell (because this is faster and comes with less unnecessary baggage).
The alias is interpreted by the shell, but the % symbol is special to Vim; the two are not the same. See my other answer for how to pass a filename to the script.
You're right that % is automatically expanded when supplied to an Ex command inside Vim, but this does not apply inside external scripts. What you have to do is pass the current file when invoking the external script, and reference the command-line argument in the script:
Inside Vim:
:!linkage.sh %
In your script:
if [ $# -gt 0 ]; then
    delimain=${1%.*}    # strip the extension from the filename Vim passes in
else
    delimain=`grep main *.cpp | cut -d. -f1`
fi
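Putting the two together (using the lin_script path from your comment, and assuming the script has been made executable with chmod +x, per the advice above about not passing an explicit interpreter), the invocation from Vim becomes:

:!~/bin/lin_script %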
In relatively big projects which use plain old make, even building when nothing has changed takes a few tens of seconds, especially with many executions of make -C, each of which incurs new-process overhead.
The obvious solution to this problem is a build tool based on an inotify-like feature of the OS: it would watch for changes to files and, based on that list, recompile only the affected files.
Is there such machinery out there? Bonus points for open source projects.
You mean like Tup?
From the home page:
"Tup is a file-based build system - it inputs a list of file changes and a directed acyclic graph (DAG), then processes the DAG to execute the appropriate commands required to update dependent files. The DAG is stored in an SQLite database. By default, the list of file changes is generated by scanning the filesystem. Alternatively, the list can be provided up front by running the included file monitor daemon."
I am just wondering if it is stat()ing the files that takes so long. To check this, here is a small SystemTap script I wrote to measure the time it takes to stat() files:
# count-calls.stp
global calls, times
probe kernel.function(@1) {
    times[probefunc()] = gettimeofday_ns()
}
probe kernel.function(@1).return {
    now = gettimeofday_ns()
    delta = now - times[probefunc()]
    calls[probefunc()] <<< delta
}
And then use it like this:
$ stap -c "make -rC ~/src/prj -j8 -k" ~/tmp/count-calls.stp sys_newstat
make: Entering directory `/home/user/src/prj'
make: Nothing to be done for `all'.
make: Leaving directory `/home/user/src/prj'
calls["sys_newstat"] #count=8318 #min=684 #max=910667 #sum=26952500 #avg=3240
The project I ran it upon has 4593 source files and it takes ~27msec (26952500nsec above) for make to stat all the files along with the corresponding .d files. I am using non-recursive make though.
If you're using OSX, you can use fswatch
https://github.com/alandipert/fswatch
Here's how to use fswatch to watch for changes to a file and then run make if it detects any:
fswatch -o anyFile | xargs -n1 -I{} make
You can run fswatch from inside a makefile like this:
watch: $(FILE)
	fswatch -o $^ | xargs -n1 -I{} make
(Of course, $(FILE) is defined inside the makefile.)
make can now watch for changes in the file like this:
> make watch
You can watch a different file by overriding the variable:
> make watch FILE=anotherFile
Install inotify-tools and write a few lines of bash to invoke make when certain directories are updated.
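For instance (a sketch: assumes your sources live under src/ and that inotifywait from inotify-tools is on the PATH):

#!/bin/sh
# Block until something under src/ changes, then rebuild; repeat.
while inotifywait -r -e modify,create,delete,move src/; do
    make
done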
As a side note, recursive make scales badly and is error prone. Prefer non-recursive make.
The change-dependency you describe is already part of Make, but Make is flexible enough that it can be used in an inefficient way. If the slowness really is caused by the recursion (make -C commands) -- which it probably is -- then you should reduce the recursion. (You could try putting in your own conditional logic to decide whether to execute make -C, but that would be a very inelegant solution.)
Roughly speaking, if your makefiles look like this
# main makefile
foo:
	make -C bar baz
and this
# makefile in bar/
baz: quartz
	do something
you can change them to this:
# main makefile
foo: bar/quartz
	cd bar && do something
There are many details to get right, but now if bar/quartz has not been changed, the foo rule will not run.
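Scaled up, the usual non-recursive shape is a single top-level Makefile that includes per-directory fragments instead of recursing into them. A sketch (the module.mk layout and target names are illustrative):

# Makefile (top level): every fragment contributes rules to one DAG
SUBDIRS := bar
include $(addsuffix /module.mk,$(SUBDIRS))

# bar/module.mk: targets are written relative to the top level
foo: bar/quartz
	cd bar && do something

Because everything lives in one DAG, a single make invocation sees all dependencies and skips up-to-date work without spawning sub-makes.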
My job mostly consists of engineering analysis, but I find myself distributing code more and more frequently among my colleagues. A big pain is that not every user is proficient in the intricacies of compiling source code, and I cannot distribute executables.
I've been working with C++ using Boost, and the problem is that I cannot ask every sysadmin of every network to install the libraries. Instead, I want to distribute a single source file (or as few as possible) so that the user can simply run g++ source.c -o program.
So, the question is: can you pack the Boost libraries with your code, and end up with a single file? I am talking about the Boost libraries which are "headers only" or "templates only".
As an inspiration, please look at the distribution of SQLite or the Lemon Parser Generator; the author amalgamates everything into a single source file which is trivial to compile.
Thank you.
Edit:
A related question on SO is for the Windows environment; I work on Linux.
There is a utility that comes with Boost called bcp that can scan your source and extract whichever Boost header files are used from the Boost source tree. I've set up a script that does this extraction into our source tree, so that we can package the source we need along with our code. It will also copy the Boost source files for a couple of Boost libraries we use that are not header-only, which are then compiled directly into our applications.
This is done once, and then anybody who uses the code doesn't even need to know that it depends on Boost. Here is what we use. It will also build bjam and bcp, if they haven't been built already.
#!/bin/bash

BOOST_SRC=.../boost_1_43_0
DEST_DIR=../src/boost

TOOLSET=
if ( test `uname` = "Darwin") then
    TOOLSET="--toolset=darwin"
fi

# make bcp if necessary
if ( ! test -x $BOOST_SRC/dist/bin/bcp ) then
    if ( test -x $BOOST_SRC/tools/jam/*/bin.*/bjam ) then
        BJAM=$BOOST_SRC/tools/jam/*/bin.*/bjam
    else
        echo "### Building bjam"
        pushd $BOOST_SRC/tools/jam
        ./build_dist.sh
        popd
        if ( test -x $BOOST_SRC/tools/jam/*/bin.*/bjam ) then
            BJAM=$BOOST_SRC/tools/jam/*/bin.*/bjam
        fi
    fi
    echo "BJAM: $BJAM"
    pushd $BOOST_SRC/tools/bcp
    echo "### Building bcp"
    echo "$BJAM $TOOLSET"
    $BJAM $TOOLSET
    if [ $? != "0" ]; then
        echo "### Building bcp failed"
        exit 1
    fi
    popd
fi

if ( ! test -x $BOOST_SRC/dist/bin/bcp ) then
    echo "### Couldn't find bcp"
    exit 1
fi

mkdir -p $DEST_DIR

echo "### Copying boost source"
MAKEFILEAM=$DEST_DIR/libs/Makefile.am
rm -f $MAKEFILEAM

# Signals: copy the source of the non-header-only libraries we use
mkdir -p $DEST_DIR/libs/signals/src
cp $BOOST_SRC/libs/signals/src/* $DEST_DIR/libs/signals/src/.
echo -n "boost_sources += " >> $MAKEFILEAM
for f in `ls $DEST_DIR/libs/signals/src | fgrep .cpp`; do
    echo -n "boost/libs/signals/src/$f " >> $MAKEFILEAM
done
echo >> $MAKEFILEAM

echo "### Extracting boost includes"
$BOOST_SRC/dist/bin/bcp --scan --boost=$BOOST_SRC ../src/*/*.[Ch] ../src/boost/libs/*/src/*.cpp ../src/smart_assert/smart_assert/priv/fwd/*.hpp $DEST_DIR
if [ $? != "0" ]; then
    echo "### bcp failed"
    rm -rf $DEST_DIR
    exit 1
fi
Have you considered just writing a build script for a build system like SCons?
You could write a Python script to download Boost, unpack it, compile the needed files (you can even run bjam if needed), and compile your own code.
The only dependency your colleagues will need is Python and SCons.
Run the preprocessor on your code and save the output. If you started with one main.cpp with a bunch of includes in it, you will end up with one file where all of the includes have been pulled in. If you have multiple cpp files, you will have to concatenate them together and then run the preprocessor on the concatenated file; this should work as long as you don't have any duplicate global symbol names.
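For instance (a sketch; the include path is a placeholder, and -P merely suppresses the #line markers in the output):

g++ -E -P main.cpp -I /path/to/boost -o amalgamated.cpp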
For a more portable method, do what SQLite does and write your own script to just combine and concatenate the files you created plus Boost, without pulling in the system includes. See mksqlite3c.tcl in the SQLite code:
http://www2.sqlite.org/src/finfo?name=tool/mksqlite3c.tcl
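A bare-bones shell rendition of the same idea (a naive sketch: it pastes your headers and sources into one translation unit and strips the now-redundant local includes, leaving the <system> ones for the user's compiler to resolve):

{ cat src/*.h src/*.cpp; } | grep -v '^#include "' > amalgamated.cpp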
Why not just check all the necessary files into SVN, and send your co-workers the URL of the repository? Then they can check out the code whenever they want, do an 'svn up' any time they want to update to the latest version, etc.
If you're on a Debian-derived variety of Linux, problems like this just shouldn't come up: let the packaging system and policy manual do the work. Just make it clear that the libboost-dev package (or whatever it is) is a build-dependency of your code and needs to be installed beforehand, and then /usr/include/boost should be right there where your code expects to find it. If you're using a more recent version of Boost than the distro ships, it's probably worth figuring out how to package it yourself and work within the existing packaging/dependencies framework rather than reinventing another one.
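On Debian or Ubuntu, satisfying that build-dependency is then a one-liner for your colleagues:

sudo apt-get install libboost-dev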
I'm not familiar enough with .rpm based distros to comment on how things work there. But knowing I can easily set up exactly the build environment I need is, for me, one of the biggest advantages of Debian-based development over Windows.