I'm using Google Bazel to build a program. When I make a large change that affects multiple files, and rebuild, Bazel chooses one file at random to display the error message for. This causes a lot of editor churn, and I constantly lose my context. I fix one compilation error in one file, then rebuild, but I cannot see whether or not the fix worked because Bazel decides to fail on some other file.
Specifically, if I have a target
cc_binary(name='foo',
          srcs=['bar.cc', 'qux.cc'])
and I run $ bazel build :foo then I will get error messages for bar.cc. If I run again without making any changes, then I will get (maybe) error messages for qux.cc. I don't know what governs the randomness. Perhaps it is not meant to be known by mere mortals such as my lowly self?
Is there a way to solidify the order in which Bazel builds files, so that I don't have to jump "physically" and mentally between files? Reorienting mental context takes time, and when fixing dumb typos, that time is totally wasted.
What I would love is something like make whereby you can say $ make foo.o. Then I can fix foo.cc, and only after it builds then move on to bar.cc. Does Alphabet Google support such an advanced method?
Try using --keep_going
That will tell bazel not to stop at the first error it finds, and instead try to build everything it can.
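For example, with the target from the question, a single run like this should surface the errors from both bar.cc and qux.cc:

$ bazel build --keep_going :foo

You still can't control which file Bazel reports first, but with --keep_going you no longer have to guess which one it picked, because you see all of them at once.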
I want to measure the time it takes for my C++ video processing program to process a video. I am using CLion to write the program and have CMake set up to compile and automatically run the program with a test video. However, in order to find the execution time I have been using the following command in the macOS terminal:
% time ./main ../Media/test_video.mp4
Is there a way for me to configure CMake to automatically include time in the execution of ./main to streamline my process further?
So far I've tried using set(CMAKE_ARGS time "test_video.mp4") and some command-line argument functions, but they don't seem to do what I'm looking for.
It is possible to use add_custom_target to do what you want. I won't consider this option in depth, as it feels like abusing the build system for something it wasn't designed to do. It does have one advantage over a CLion configuration: it would also be usable outside of CLion. That advantage seems minor, though: why not just run the desired command directly in those contexts?
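For completeness, a minimal sketch of what that could look like (the target name main, the /usr/bin/time path, and the video location are assumptions, not taken from your project):

# Hypothetical custom target that runs the freshly built binary under time.
add_custom_target(run_timed
    COMMAND /usr/bin/time $<TARGET_FILE:main> ${CMAKE_SOURCE_DIR}/Media/test_video.mp4
    USES_TERMINAL)
# Make sure the binary is built before the timed run.
add_dependencies(run_timed main)

You would then invoke it with cmake --build . --target run_timed, or pick the run_timed target inside CLion.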
The first CLion method would be to define an external tool which runs time on the current build target. In File|Settings...|Tools|External Tools define a new tool with /bin/time as the program and $CMakeCurrentProductFile$ $Prompt$ as the arguments. When choosing that tool (in Tools|External Tools) it will now prompt you for the arguments and then run /bin/time on the current target with the provided arguments. Advantage: you only have to define the tool once; it will be available in every project. Drawbacks: because external tools are available in every project, it doesn't make sense to be more specific than $Prompt$ for the arguments and the working directory; it isn't possible to specify environment variables; and it isn't possible to enforce that a build happens before running the command.
The second CLion method would be to define a Run/Debug Configuration. Again use /bin/time as the program (choose "Custom Executable") and specify $CMakeCurrentProductFile$ as the first argument (here it makes sense to provide the other arguments as desired, but note that $Prompt$ is still a valid choice if needed). Advantages: it makes sense to be as specific as needed, and you get all the features of configurations (environment variables, input redirection, actions to be executed before launch). Drawback: it doesn't work with other CLion features which assume that the program is the target, such as the debugger or the profiler, so you may have to duplicate configurations to get them.
Note that the methods aren't exclusive: you can define an external tool and then add configurations for the cases where they are more convenient.
Ok, n00b question. I have a cpp file. I can build and run it in the terminal. I can build and run it using clang++ in VSCode.
Then I add gtest to it. I can compile in the terminal with g++ -std=c++0x $FILENAME -lgtest -lgtest_main -pthread and then run, and the tests work.
I install the C++ TestMate extension in VSCode. Everything I see on the internet implies it should just work. But my test explorer is empty and I don't see any test indicators in the code window.
I've obviously missed something extremely basic. Please help!
Executables should be placed inside the out or build folder of your workspace. Or one can modify the testMate.cpp.test.executables config.
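For example, a hedged settings.json snippet; the glob here is an assumption, so point it at wherever your test binary actually ends up:

{
    "testMate.cpp.test.executables": "build/**/*{test,Test}*"
}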
I'd say, never assume something will "just work".
You'll still have to read the manual and figure out what are the names of config properties. I won't provide exact examples, because even though I've only used this extension for a short time, its name, and therefore full properties path, has already changed, so any example might get obsolete quite fast.
The general idea is: this extension monitors some files/folders, and when they change, it assumes those are executables created using either gtest or catch2. The extension tries to run them with the standard (for those frameworks) flags to obtain a list of test suites and test cases. If it succeeds, it parses the output and creates a nice list in the side panel. Markers in the code depend on exactly the same parsed output, so if you have one, you have the other as well.
From the above, you need 3 things to make this work:
Provide the correct path (or a glob pattern) for finding all test executables (while ignoring all non-test executables) in the extension config. There are different ways to do this depending on the complexity of your setup; they are all in the documentation, though.
Do not modify the output of the test executable. For example, if you happen to print something to stdout/stderr before the gtest implementation parses and processes its standard flags, the extension will fail to parse the output of ./your_test_binary --gtest_list_tests.
If your test executable needs additional setup to run correctly (env vars, cwd), make sure that you use the "advanced" configuration for the extension and configure those properties accordingly.
To troubleshoot #2 and #3 you can turn on debug logging for the extension (again, in VSCode's settings JSON); this creates an additional "Output" tab/category where you can see which files were considered, which were run, what the output was, and what caused a particular file to be ignored.
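For the third point, a rough sketch only; as noted above the property names change between versions, so treat testMate.cpp.test.advancedExecutables, its fields, and the environment variable shown here as assumptions to check against the current docs:

{
    "testMate.cpp.test.advancedExecutables": [
        {
            "pattern": "build/**/*{test,Test}*",
            "cwd": "${workspaceFolder}",
            "env": { "MY_TEST_DATA_DIR": "${workspaceFolder}/testdata" }
        }
    ]
}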
This messed with me for a while; I did as Mate059 answered above and it didn't work.
Later on I found out that the reason it didn't work was that I was using a Linux terminal inside Windows (enabled from the Features section), and I had previously installed the g++ compiler from that Linux terminal, so the compiler was turning my code into a .out file. For some reason TestMate could not read .out files.
Once I compiled the C++ source file using the PowerShell terminal, it created a .exe file; I then changed the path in settings.json as Mate059 said, and the tests showed up.
TL;DR
Mate059 gave a great answer: go into settings.json inside your .vscode folder and set "testMate.cpp.test.executables": "filename.exe".
For me it also worked using the wildcard * instead of filename.exe, but I don't suggest doing that, as it might pick up the .exe built from the main cpp file and cause confusion.
I have seen one other answer (link), but what I don't understand is: what is basis.cm and what is its use?
You are asking two questions.
What is basis.cm and what is its use?
This is the Basis Library. It is what makes the built-in standard library functions available.
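For context, a minimal sketch of a CM description file (often named sources.cm) that pulls it in; the main.sml file name is just an assumption:

Group is
    $/basis.cm
    main.sml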
How to compile and execute a stand-alone SML-NJ executable
Assuming you followed Jesper Reenberg's tutorial on how to execute a heap image, the next thing you need in order to have SML/NJ produce a stand-alone executable is to convert this heap image. One should hypothetically be able to do this using heap2exec, a tool that takes the heap image, e.g. the .x86-linux file generated on my system, and generates an .asm file that can be assembled and linked.
Unfortunately, this tool is not very well-maintained, so you have to
Go to the smlnj.org page and fix the download link by removing 'www.' (this page and the SourceForge page don't contain the same explanations or assumptions about argument count, and neither page's download link works).
Download and extract this tool, and fix the 'build' script so it points to your ml-build tool
Fix the tool's argument use by changing [inf, outf] to [_, inf, outf]
Run ./build which generates 'heap2asm.x86-linux' on my system
For example, in order to generate an .asm file for the heap2asm program itself, run
sml @SMLload heap2asm.x86-linux heap2asm.x86-linux heap2asm.s
At this point, I have unfortunately been unable to produce an executable that works. E.g. if you run gcc -c heap2asm.s and ld heap2asm.o, you get a warning of a missing _start label. The resulting executable segfaults even if you rename the existing _sml_heap_image label to _start. That is, it seems that a piece of entry code that the runtime environment normally delivers is missing here.
At this point, discard SML/NJ and use MLton for producing stand-alone binaries.
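For comparison, producing a stand-alone binary with MLton is a single step (the file name here is an assumption):

$ mlton main.sml
$ ./main

MLton compiles the whole program straight to a native executable named after the source file, with no heap-image stage involved.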
I've got the start of a Haskell application and I want to see how the build tools behave. One of the things I would like to see is the Haskell coverage reports, through hpc (Haskell Program Coverage; as a side note, I didn't find this tag on SO, where hpc points to high-performance computing).
The structure of my application is
Main
src/
    ModuleA
    ModuleB
tests/
    ModuleBTest
I have unit tests for moduleB, and I run those unit tests through cabal test. Beforehand, I configure cabal to spit out the hpc data through
cabal configure --ghc-options=-fhpc --enable-tests
I then build and test,
cabal build
cabal test unit-tests (that's the name of the test suite in the cabal file)
and I indeed see a report and all seems well. However, moduleA is not referenced from within moduleB; it's only referenced from Main. I don't have tests (yet) for the Main module.
The thing is, I expected to see moduleA pop up in the hpc output, highlighted completely in yellow and really waving at me that there are no tests for this module, but that doesn't seem to be the case. I noticed that the .mix files are created for this 'unused' module, so I suspect the build step went ok but it goes wrong in the cabal test step.
If I go through ghci and compile the unit tests while explicitly putting moduleA on the list of modules to compile, then I do get hpc to show me that this module has no tests at all. So I suspect cabal optimizes moduleA away (as it's 'unused') somewhere, but I don't really see how or where.
Now, I do realize that this might not be a real-life situation, as moduleA is only referenced from within the main method, moduleB doesn't reference moduleA, and I don't test the Main module (yet). Still, I would have felt a lot better if it at least showed up in the program coverage as a hole in my tests the size of a battleship. Anybody have an idea?
Note: I realize that my question might boil down to "How do I tell cabal not to optimize unused modules away?", but I wanted to present the complete problem.
Kasper
First, make sure all of your modules are listed in the other-modules cabal field.
Even though, in my experience, applications sometimes seem to work their way around without specifying everything there, omissions can cause mysterious linking issues, and I assume they could cause situations like yours.
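As a hedged illustration only, the test-suite stanza could list the modules explicitly; the module and suite names follow the layout in the question, while main-is and build-depends are assumptions:

test-suite unit-tests
    type:           exitcode-stdio-1.0
    hs-source-dirs: tests, src
    main-is:        ModuleBTest.hs
    other-modules:  ModuleA
                    ModuleB
    build-depends:  base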
Now, other than that, I don't think it's cabal that optimizes your modules away like that, but GHC's dead code elimination. So if the code in a module is not used at all (a single actual usage of the module is enough to keep it), GHC won't even care about it.
Unfortunately I haven't seen a flag to change that. You may want to make a meaningless usage of every module in your test project, just to get things visible.
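A minimal sketch of such a dummy usage, assuming ModuleA exports something called someFunction (that name is made up; use whatever it really exports):

-- in the test module, e.g. tests/ModuleBTest.hs
import qualified ModuleA

-- meaningless reference whose only job is to keep ModuleA linked in,
-- so hpc reports it (as completely uncovered) instead of dropping it
keepModuleA :: ()
keepModuleA = ModuleA.someFunction `seq` ()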
2.1 Dead code elimination
Does GHC remove code that you're not actually using?

Yes and no. If there is something in a module that isn't exported and isn't used by anything that is exported, it gets ignored. (This makes your compiled program smaller.) So at the module level, yes, GHC does dead code elimination.

On the other hand, if you import a module and use just one function from it, all of the code for all of the functions in that module gets linked in. So in this sense, no, GHC doesn't do dead code elimination.

(There is a switch to make GHC spit out a separate object file for each individual function in a module. If you use this, only the functions that are actually used will get linked into your executable. But this tends to freak out the linker program...)

If you want to be warned about unused code (Why do you have it there if it's unused? Did you forget to type something?) you can use the -fwarn-unused-binds option (or just -Wall).
- GHC optimisations - HaskellWiki
Our project contains a lot of C++ sources; up until now we were using make to build everything, but that takes ages. So I stumbled upon waf, which works quite well and speeds up the build a lot. However, every time I do a full build I end up with a couple of build errors that make no sense. If I then do an incremental build, most of the sources that could not be built the first time around build fine, while some others still fail. After another incremental build I finally get a successful build.
I have tried building the separate libraries in separate steps, just in case dependent libraries were being built in parallel, but the errors still appear.
EDIT: The errors I keep getting do not seem to have anything to do with my code, e.g.
Build failed
-> task failed (exit status -1):
{task 10777520: c constr_SET.c -> constr_SET.c.1.o}
After another "waf build" I do not get this error anymore.
EDIT2: The build step for my libraries looks like this:
def build(bld):
    bld.shlib(source="foo.cpp bar.cpp foobar.cpp constr_SET.c",
              target="foobar",
              includes="../ifinc",
              name="foobar",
              use="MAIN RW HEADERS",
              install_path="lib/")
MAIN, RW, HEADERS are just some flags and external libraries we use.
Has anyone seen similar behaviour on their system? Or even a solution?
I suspect multiple targets are building the same required object in parallel. Try
export JOBS=1
or
waf --jobs 1