Until now, I have been using Poly/ML for several small projects where all the source code files live in the same directory. To build these projects, all I had to do was run the following command in the REPL:
> PolyML.make "Main";
But now I have a project whose scale makes it impractical to put all the source files in the same directory. To build this project in the REPL, I need to run the following commands:
> PolyML.make "foo/Foo";
> PolyML.make "bar/Bar";
> PolyML.make "qux/Qux";
> PolyML.make "Main";
which is not terribly practical as the number of subsystems grows.
Is there any way to automate the process of building projects with nested directory structures in Poly/ML?
P.S.: I have had a look at both SML/NJ's Compilation Manager and MLton's ML Basis system. While unquestionably powerful, these are too complicated for my needs.
Put a file called ml_bind.ML in each of the sub-directories and have those files build the component for that directory.
PolyML.make expects the name of the source file to match the name of the component (structure, signature or functor). So if it is looking for a structure called "Foo" it will expect the source for "Foo" in a file called "Foo", "Foo.ML" or "Foo.sml". If instead it finds a directory called "Foo" it recursively enters the "Foo" directory and uses the "ml_bind.ML" file as the guide to build the "Foo" structure. Typically, "Foo/ml_bind.ML" will look like
structure Foo = FooFunctor(structure A = FooA and B = FooB);
with files "Foo/FooFunctor.ML", "Foo/FooA.ML" and "Foo/FooB.ML" containing the source for "FooFunctor", "FooA" and "FooB" respectively.
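For the project in the question, that would mean a layout roughly like this (a sketch, not taken from the actual project; note that the directory name should match the structure name, so the lowercase foo/, bar/ and qux/ would typically become Foo/, Bar/ and Qux/):
Main.ML
Foo/ml_bind.ML       (* structure Foo = ... *)
Bar/ml_bind.ML       (* structure Bar = ... *)
Qux/ml_bind.ML       (* structure Qux = ... *)
With that in place, a single PolyML.make "Main"; from the root directory should build each subsystem automatically as the corresponding structure name is encountered.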
You can find examples of this in the code for the Poly/ML compiler which comes as part of the Poly/ML source code distribution.
You could have a build.sml file listing and use-ing all project files:
use "bar/bar.sml";
use "foo/foo.sml";
use "main.sml";
Or, a little bit more concise:
app use [
    "foo/foo.sml",
    "bar/bar.sml",
    "main.sml"
];
Where app is the standard List.app.
Then you can build just this one file:
$ polyc -o main build.sml
$ # or
$ poly
> PolyML.make "build.sml";
I have the following directory structure:
my_dir
|
--> src
|    |
|    --> foo.cc
|    --> BUILD
|
--> WORKSPACE
|
--> bazel-out/ (symlink)
|
...
src/BUILD contains the following code:
cc_binary(
    name = "foo",
    srcs = ["foo.cc"],
)
The file foo.cc creates a file named bar.txt in the usual way, using the standard <fstream> facilities.
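Roughly, it does something like this (a hypothetical sketch; the exact code isn't important):
// foo.cc (hypothetical reconstruction, not the original source)
#include <fstream>

int main() {
    // A relative path like this resolves against the directory the binary is
    // run from; under `bazel run` that is the runfiles tree, not the source tree.
    std::ofstream out("bar.txt");
    out << "hello\n";
    return 0;
}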
However, when I invoke Bazel with bazel run //src:foo the file bar.txt is created and placed in bazel-out/darwin-fastbuild/bin/src/foo.runfiles/foo/bar.txt instead of my_dir/src/bar.txt, where the original source is.
I tried adding an outs field to the foo rule, but Bazel complained that outs is not a recognized attribute for cc_binary.
I also thought of creating a filegroup rule, but there is no deps field where I can declare foo as a dependency for those files.
How can I make sure that the files generated by running the cc_binary rule are placed in my_dir/src/bar.txt instead of bazel-out/...?
Bazel doesn't allow you to modify the state of your workspace, by design.
The short answer is that you don't want the results of past builds to modify the state of your workspace and thereby potentially change the results of future builds. Reproducibility is violated if running Bazel multiple times on the same workspace produces different outputs.
Given your example: imagine calling bazel run //src:foo which inserts
#define true false
#define false true
at the top of the src/foo.cc. What happens if you call bazel run //src:foo again?
The long answer: https://docs.bazel.build/versions/master/rule-challenges.html#assumption-aim-for-correctness-throughput-ease-of-use-latency
Here's more information on the output directory: https://docs.bazel.build/versions/master/output_directories.html#documentation-of-the-current-bazel-output-directory-layout
A possible workaround is to use a genrule. Below is an example in which I use a genrule to copy a file into the .git folder.
genrule(
    name = "precommit",
    srcs = glob(["git/**"]),
    outs = ["precommit.txt"],
    # The folder containing this BUILD.bazel file is `tools`, which will be
    # symlinked; we use `cd -P` to get to the physical path.
    cmd = "echo 'setup pre-commit.sh' > $(OUTS) && cd -P tools && ./path/to/your-script.sh",
    local = 1,  # required
)
If you're passing the name of the output file in when running, you can simply use absolute paths. To make this easier, you can use the realpath utility if you're on Linux. If you're on a Mac, it is included in brew install coreutils. Then running it looks something like:
bazel run my_app_dir:binary_target -- --output_file=`realpath relative/path/to.output`
This has been discussed and explained in a Bazel issue. The recommendation is to use a tool external to Bazel:
As I understand the use-case, this is out-of-scope for building and in the scope of, perhaps, workspace configuration. What I'm sure of is that an external tool would be both easier and safer to write for this purpose, than to introduce such a deep design change to Bazel.
The tool would copy the files from the output tree into the source tree, and update a manifest file (also in the source tree) that lists the path-digest pairs. The sources and the manifest file would all be versioned. A genrule or a sh_test would depend on the file-generating genrules, as well as on this manifest file, and compare the file-generating genrules' outputs' digests (in the output tree) to those in the manifest file, and would fail if there's a mismatch. In that case the user would need to run the external tool, thus update the source tree and the manifest, then rerun the build, which is the same workflow as you described, except you'd run this tool instead of bazel regenerate-autogenerated-sources.
I can think of at least a few use-cases or scenarios for this:
An "examples" configuration that builds multiple example programs.
A project with a client program and a host program.
A project with one main program and some associated utility programs (e.g. gcc).
More generally, if the project has multiple output executables that share the same or a similar set of dependencies, then it may make sense to build them all at once. It is easier for a user to decide which executable to run than to figure out how to make DUB create the executable they want (they might not be a D developer familiar with DUB). It is convenient for me as a D developer too, because I can run fewer build commands to get everything I want.
As an example, I might have a project layout like this:
dub.json
.gitignore
source/common/foo.d (Code called by all examples)
source/examples/simple_example.d (Build input with main function)
source/examples/complex_example.d (Build input with main function)
source/examples/clever_example.d (Build input with main function)
bin/simple_example.exe (Output executable)
bin/complex_example.exe (Output executable)
bin/clever_example.exe (Output executable)
Another project might look like this:
dub.json
.gitignore
source/common/netcode.d (Code called by all programs)
source/common/logic.d (Code called by all programs)
source/executables/host-daemon.d (Does privileged things for the server)
source/executables/server.d (Unprivileged network endpoint)
source/executables/client.d (Queries the server)
bin/host-daemon (Output executable)
bin/server (Output executable)
bin/client (Output executable)
In either project, I would want to build all executables with a single invocation of DUB. Ideally, this would all be managed from one dub.json file, due to the interrelated nature of the inputs and outputs.
It seems like subPackages might be able to do this, but managing it from one dub.json file is "generally discouraged":
The sub directories /component1 and /component2 then contain normal packages and can be referred to as "mylib:component1" and "mylib:component2" from outside projects. To refer to sub packages within the same repository use the "*" version specifier.
It is also possible to define the sub packages within the root package file, but note that it is generally discouraged to put the source code of multiple sub packages into the same source folder. Doing so can lead to hidden dependencies to sub packages that haven't been explicitly stated in the "dependencies" section. These hidden dependencies can then result in build errors in conjunction with certain build modes or dependency trees that may be hard to understand.
Can DUB build multiple executables in one go, as above, and if so, what is the most recommended way and why?
I managed to do it using configurations, without subpackages.
First, you prefix each of your main functions with a version condition.
Suppose you have 3 different files, each with its own main.
one.d:
....
version (one)
void main() {
....
}
two.d:
....
version (two)
void main() {
....
}
three.d:
....
version (three)
void main() {
....
}
In your dub project file (I use sdl format here), you can define three configurations, such that each configuration defines a different version, and outputs to a different output file.
dub.sdl:
....
configuration "one" {
versions "one"
targetType "executable"
targetName "one"
}
configuration "two" {
versions "two"
targetType "executable"
targetName "two"
}
configuration "three" {
versions "three"
targetType "executable"
targetName "three"
}
If you just call dub build, it will use the first configuration, but you can pass a different configuration:
dub build --config=two
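If you want to build all three in one go, you can simply loop over the configurations from the shell (a quick sketch; the configuration names are the ones defined above):
for cfg in one two three; do dub build --config=$cfg; done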
"Subpackages are intended to modularize your package from the outside".
Instead, you should be able to create multiple configurations to do what you want like dub itself does (it defines multiple libraries but you could just as easily do multiple executables).
I'm not sure if there is a command to build all configurations at once. Maybe --combined (the documentation isn't clear, but I think it's actually for building all source files with a single compiler invocation rather than generating object files one by one).
Having used CMake, I've become used to out-of-source builds, which are encouraged with CMake. How can out-of-source builds be done with Cargo?
Using in-source builds again feels like a step backwards:
Development tools need to be configured to ignore these paths, sometimes across multiple plugins and development tools, especially when using Vim or Emacs!
Some tools can't easily be configured to hide build files. While dotfiles are typically hidden, such tools will still show Cargo.lock and target/, and worse, recursively expose their contents.
Deleting untracked files to remove everything outside of version control, typically to clean up editor temp files or some test output, can backfire if you forgot to add a new file to version control and don't check the file list carefully before deleting.
Dependencies are downloaded into your source code path, and building indirect deps sometimes adds *.rs files under the target directory, so operating on all *.rs files may accidentally pick up files that aren't in a hidden directory and therefore might not be ignored even after development tools have been configured.
While it's possible to work around all these issues, I'd rather just have an external build path and keep the source directory pristine.
You can specify the directory of the target/ folder either via configuration file (key build.target-dir) or environment variable (CARGO_TARGET_DIR). Here is an example using a configuration file:
Suppose you have a directory ~/work/ in which you want to keep the Cargo project (~/work/foo/) and, next to it, the target directory (~/work/my-target/).
$ cd ~/work
$ cargo new --bin foo
$ mkdir .cargo
$ $EDITOR .cargo/config
Then insert the following into the configuration file:
[build]
target-dir = "./my-target"
If you then build in your normal Cargo project directory:
$ cd foo
$ cargo build
You will notice that there is no target/ dir, but everything is in ~/work/my-target/.
However, the Cargo.lock is still saved inside the Cargo project directory, but that kinda makes sense. For executables, you should check the Cargo.lock file into your git! For libraries, you shouldn't. I guess having to ignore one file is better than having to ignore an entire folder.
Lastly, there are a few caveats to changing the target-dir, which are listed in the PR which introduced the feature.
While useful, manually setting this up isn't all that convenient. I wanted to be able to build multiple crates within a source tree, with all of them out-of-source, something that a ../target-dir configuration option wouldn't achieve.
Helper utility for convenient out-of-source builds
Using the environment variable, I've written a small utility that wraps cargo so it automatically builds out-of-source, supporting crates both at the top level and in a subdirectory of the source tree.
Thanks to Lukas for pointing out CARGO_TARGET_DIR and the target-dir configuration option.
What I really wanted was a dynamic CARGO_TARGET_DIR that changes relative to where I am.
This bash alias puts all builds in a mirrored directory structure, e.g. instead of putting target into ~/mydir/myproj it puts it into ~/rustbuild/mydir/myproj:
alias cargo='CARGO_TARGET_DIR=$(echo $PWD | sed "s|$HOME|$HOME/rustbuild|g") cargo'
You could also make your rustbuild directory hidden.
Is there a way to compile a C++Builder project (a specific build configuration) from the command line?
Something like:
CommandToBuild ProjectNameToBuild BuildConfiguration ...
There are different ways to automate your builds in C++Builder (in my experience; I'm speaking about old C++Builder versions like 5 and 6).
You can manually call the compilers - bcc32.exe (and also dcc32.exe, brcc32.exe and tasm32.exe if you have to compile Delphi units, resource files or assembly code in your sources) - and the linker, ilink32.exe.
In this case, you will need to manually provide the necessary input files, paths, and keys as arguments for each stage of compilation and linking.
All data necessary for compilation and linking is stored in the project files and, fortunately, there are special utilities included in the C++Builder installation which can automate this dirty work, providing the necessary parameters to the compilers and linker and running them. Their names are bpr2mak.exe and make.exe.
First you run bpr2mak.exe, passing your project's *.bpr or *.bpk file as a parameter; you then get a *.mak file as output, which you feed to make.exe, which finally builds your project.
Look at this simple cmd script:
bpr2mak.exe YourProject.bpr
ren YourProject.mak makefile
make.exe
You can provide the real name of "YourProject.mak" as a parameter to make.exe, but the most straightforward way is to rename the *.mak file to "makefile", and then make.exe will find it.
To have different build options, you can do the following:
The first way: you can open your project in the IDE, edit the options and save it under a different project name in the same folder (usually there are two project files, for debug and release compile options). Then you can feed your build script different *.bpr files. This way looks simple because it doesn't involve scripting, but you will have to manually keep all the project files consistent if something changes (forms or units are added and so on).
The second way is to make a script which edits the project file or the make file. You will have to parse the files, find the compiler- and linker-related lines and put in the necessary keys. You can do this even in a cmd script, but a specialised scripting language like Python is surely preferable.
Use:
msbuild project.cbproj /p:config=[build configuration]
More specifics can be found in Building a Project Using an MSBuild Command.
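For example, assuming the stock Release configuration (substitute whatever configuration name your project actually defines):
msbuild project.cbproj /p:config=Release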
A little detail not mentioned.
Suppose you have external dependencies and the .dll file does not initially exist in your folder.
You will need to include the external dependencies in the ILINK32.CFG file.
This file is usually located at
C:\Program Files (x86)\Borland\CBuilder6\Bin\ilink32.cfg
(adjust for your installation location).
In this file, add the entries for your dependencies.
Example: a dependency on TeeChart would look like this (note the last parameters):
-L"C:\Program Files (x86)\Borland\CBuilder6\lib";"C:\Program Files (x86)\Borland\CBuilder6\lib\obj";"C:\Program Files (x86)\Borland\CBuilder6\lib\release";"C:\Program Files (x86)\Steema Software\TeeChart 805 for Builder 6\Builder6\Include\";"C:\Program Files (x86)\Steema Software\TeeChart 805 for Builder 6\Builder6\Lib\"
You will also need to pass the -f option to make.exe when compiling.
In cmd, do:
:: first generate the .mak file
bpr2mak.exe MyProject.bpr
:: then compile the .mak file
make.exe -f MyProject.mak
You can also generate a temporary .mak file with another name directly with bpr2mak, as the answer above says:
bpr2mak.exe MyProject.bpr -oMyTempMak.mak
How can I get Eclipse to build many binaries at a time within one project (without writing a Makefile by hand)?
I have a CGI project that results in multiple .cgi programs to be run by the web server, plus several libraries used by them. The hand-made Makefile used to build it is slowly becoming unmaintainable. We use Eclipse's "Internal Build" to build all our other projects and we'd prefer to use it here too, but for the life of me, I can't find out how to get Eclipse to build multiple small programs as the result instead of linking everything into one binary.
A solution for this is described here: http://tinyguides.blogspot.ru/2013/04/multiple-binaries-in-single-eclipse-cdt.html.
Here is an excerpt:
Create a managed project (File > New C++ Project > Executable)
Add the source code containing multiple main() functions
Go to Project > Properties > C/C++ General > Path & Symbols > Manage Configurations
Make a build configuration for each executable and name it appropriately (you can clone existing configurations like Debug and Release).
From the project explorer, right click on each source file that contains a main() function > Resource Configurations > Exclude from Build and exclude all build configurations except the one that builds the executable with this main() function
All other code is included in all build configurations by default. You may need to change this depending on your application.
You can now build an executable for each main function by going to Project > Build Configurations > Set Active, then Project > Build Project
Using Eclipse as your build system for production code seems like a bad idea in general. I think it's a great IDE and have used it extensively for both Java and C++ projects, but for a build system I firmly believe that Ant, make, and other dedicated build utilities are the way to go.
There are several reasons for this:
Dedicated build utilities offer the very flexibility you are looking for in generating multiple executable targets.
Ant and make support most conceivable arbitrary build process chains (though not quite all).
A dedicated build utility is likely to offer greater stability and backward-compatibility for build description file formats than an IDE tool like Eclipse. Also, I'm pretty sure that Eclipse's internal build feature is dependent on the ".project" file description, and the latter's format is probably not as stable as the build description format for either Ant or make.
General-purpose, basic build utilities are usually command-line-based, which makes it easy to integrate them with more sophisticated, higher-level build utilities for automated build management like Pulse, CruiseControl, etc.
The need that is motivating your question is telling you that it's time to make the switch to a better build tool.
There is a way to use build configurations to create one binary (or shared library, in my case) from each build config. Using the answer above, this means manually excluding all but the effective main file from each build config.
I just used the above answers to simplify working on my Eclipse project, which creates 14 shared libraries through 14 build configs. However, configuring the individual "exclude from build" settings was quite cumbersome, so I switched to using the following code, relying on a preprocessor directive, as my complete main file:
/*
 * main.cpp
 */
/* Within
* Project | Properties | C/C++-Build | Settings
* | GCC C++ Compiler | Preprocessor
* set the following defined Symbol:
* _FILENAME=${ConfigName}
*/
#define __QUOT2__(x) #x
#define __QUOT1__(x) __QUOT2__(x)
#include __QUOT1__(_FILENAME.cpp)
#undef __QUOT1__
#undef __QUOT2__
/* The above include directive will include the file ${CfgName}.cpp,
* wherein ${CfgName} is the name of the build configuration currently
* active in the project.
*
* When right clicking in
* Project Tree | (Project)
* and selecting
* Build Configuration | Build all
* this file will include the corresponding .cpp file named after the
* build config and thereby effectively take that file as a main file.
*
* Remember to exclude ALL ${CfgName}.cpp files from ALL build configurations.
*/
Note that it does nothing other than include another .cpp file whose name is derived by the preprocessor from a symbol set in the compiler options. The symbol's value, ${ConfigName}, is automatically replaced by Eclipse with the name of the current build configuration.
One does not need to configure which file is included in which build config. Just exclude all the ${CfgName}.cpp files from every build configuration and include main.cpp in every build.
PS: the answer from hovercraft gave me the idea to have a main file that does not contain code on its own. If one includes shared code from the different effective main files ${CfgName}.cpp, working on their code may become infeasible because header files in main.cpp will not be visible in them.
I did this until yesterday, but maintaining the code with a broken index etc. was a big pain.
PPS: this procedure currently breaks the automatic rebuild of the main file if only the included .cpp file was changed. It seems that Eclipse does not recognize changes in ${CfgName}.cpp (which is excluded from the build), so a manual rebuild is required after every change. This is currently bugging me ;)