lcov: reporting 0% coverage for untested headers - C++

I will ask using an example. Let's suppose the following files:
root
- yes.h
- not.h
- test.cpp
"test.cpp" includes "yes.h"
When I run lcov it shows the percentage covered in yes.h and in test.cpp, but (and here's my question) I want a zero-coverage entry for "not.h" as well, so that I really have a meaningful coverage metric. Is there any way to achieve this?
Here's my lcov usage:
g++ --coverage test.cpp
lcov --directory . --zerocounters
lcov -c -i -d . -o app_base.info
./a.out
lcov -c -d . -o app_test.info
lcov -a app_base.info -a app_test.info -o app_total.info
geninfo app_total.info
Thanks.

You might want to take a look at the --remove and --extract options for lcov (see the man page).
In your case, you might want to replace
lcov -a app_base.info -a app_test.info -o app_total.info
geninfo app_total.info
with
lcov -a app_base.info -a app_test.info -o app_tmp.info
lcov --remove app_tmp.info '*/not.h' --output app_total.info
geninfo app_total.info
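If instead you want not.h to actually show up with 0% coverage, note that lcov -c -i can only record files that were compiled into some object file, so a header that nothing includes will never appear in the baseline. A hedged workaround (coverage_stub.cpp is a hypothetical helper, not part of your project) is to compile a dummy translation unit that includes every header before taking the baseline capture:
# hypothetical stub: compiling it makes g++ emit .gcno data for the listed headers,
# so the baseline capture can record their instrumentable lines as unexecuted
printf '#include "yes.h"\n#include "not.h"\n' > coverage_stub.cpp
g++ --coverage -c coverage_stub.cpp -o coverage_stub.o
lcov -c -i -d . -o app_base.info
Keep in mind that a header only contributes coverable lines if it contains code that gets instrumented (inline functions, templates and the like); a header with only declarations still has nothing to report.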

How to compile custom cpp files on Google Colab

I'm trying to replicate the results of this GitHub repo using Google Colab, since I don't want to install all the requirements on my local machine and I want to take advantage of the GPU on Google Colab.
However, one of the things I need to do (as indicated in the repo's README) is to first build the C++ code with a makefile. The instructions are included below. Obviously I can't follow them as written, since I don't know where Google Colab keeps nvcc, cudalib and the tensorflow library:
cd latent_3d_points/external
with your editor modify the first three lines of the makefile to point to
your nvcc, cudalib and tensorflow library.
make
Is there a way for me to compile the files referenced by the makefile (those functions are needed to run the model), either by using the makefile directly or by compiling each cpp file individually? I have included the content of the makefile below to save you from clicking around the repo looking for it:
nvcc = /usr/local/cuda-8.0/bin/nvcc
cudalib = /usr/local/cuda-8.0/lib64
tensorflow = /orions4-zfs/projects/optas/Virt_Env/tf_1.3/lib/python2.7/site-packages/tensorflow/include
all: tf_approxmatch_so.so tf_approxmatch_g.cu.o tf_nndistance_so.so tf_nndistance_g.cu.o
tf_approxmatch_so.so: tf_approxmatch_g.cu.o tf_approxmatch.cpp
g++ -std=c++11 tf_approxmatch.cpp tf_approxmatch_g.cu.o -o tf_approxmatch_so.so -shared -fPIC -I $(tensorflow) -lcudart -L $(cudalib) -O2 -D_GLIBCXX_USE_CXX11_ABI=0
tf_approxmatch_g.cu.o: tf_approxmatch_g.cu
$(nvcc) -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -c -o tf_approxmatch_g.cu.o tf_approxmatch_g.cu -I $(tensorflow) -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -O2
tf_nndistance_so.so: tf_nndistance_g.cu.o tf_nndistance.cpp
g++ -std=c++11 tf_nndistance.cpp tf_nndistance_g.cu.o -o tf_nndistance_so.so -shared -fPIC -I $(tensorflow) -lcudart -L $(cudalib) -O2 -D_GLIBCXX_USE_CXX11_ABI=0
tf_nndistance_g.cu.o: tf_nndistance_g.cu
$(nvcc) -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11 -c -o tf_nndistance_g.cu.o tf_nndistance_g.cu -I $(tensorflow) -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -O2
clean:
rm tf_approxmatch_so.so
rm tf_nndistance_so.so
rm *.cu.o
You can use bash just like on your PC by adding %%bash to a Colab cell.
Example:
Cell one: write cpp file
%%writefile welcome.cpp
#include <iostream>
int main()
{
std::cout << "Welcome To AI with Ashok's Blog\n";
return 0;
}
Cell two: compile and run
%%bash
g++ welcome.cpp -o welcome
./welcome
You can also open the cpp file in Colab's built-in text editor to get proper syntax highlighting. It opens when you click a text file in the "Files" tab on the left, and the file can be saved with the Ctrl+S shortcut.
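Applying the same %%bash idea to the repo's makefile, a first step is to find out where the Colab runtime keeps the tools its first three lines refer to (these lookups are only a sketch; the exact paths on your runtime may differ):
%%bash
which nvcc                      # candidate value for the nvcc variable
ls -d /usr/local/cuda*/lib64    # candidate directories for cudalib
python -c "import tensorflow as tf; print(tf.sysconfig.get_include())"   # tensorflow include dir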
You can install the required version of CUDA in Google Colab.
For example, for CUDA 9.2 you can try:
!apt-get --purge remove cuda nvidia* libnvidia-*
!dpkg -l | grep cuda- | awk '{print $2}' | xargs -n1 dpkg --purge
!apt-get remove cuda-*
!apt autoremove
!apt-get update
!wget https://developer.nvidia.com/compute/cuda/9.2/Prod/local_installers/cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64 -O cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!dpkg -i cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!apt-key add /var/cuda-repo-9-2-local/7fa2af80.pub
!apt-get update
!apt-get install cuda-9.2
Similarly, you can find a way to install CUDA 8.0 (the version the makefile points to).
For gcc
!apt-get install -qq gcc-5 g++-5 -y
!ln -s /usr/bin/gcc-5
!ln -s /usr/bin/g++-5
!sudo apt-get update
!sudo apt-get upgrade
Then you can compile it by running make, since the project provides a custom makefile.
!make
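Once the tool-chain is installed, one hedged way to carry out the README's "modify the first three lines of the makefile" step without a local editor is to patch the file from a cell. The paths below are assumptions; substitute whatever which nvcc and tf.sysconfig.get_include() report on your runtime:
%%bash
cd latent_3d_points/external
# rewrite the three assignment lines at the top of the makefile (paths are assumptions)
sed -i 's|^nvcc = .*|nvcc = /usr/local/cuda/bin/nvcc|' makefile
sed -i 's|^cudalib = .*|cudalib = /usr/local/cuda/lib64|' makefile
sed -i "s|^tensorflow = .*|tensorflow = $(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')|" makefile
make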

Error with gtest installation on home system

I'm trying to set up my home computer the same way as the computers at my school so I can work on my assignments from here, but I can't for the life of me get gtest working correctly. I've run through the recommended install process and figured out that it needs the ".so" libraries in order not to throw 'pthread' not found errors.
Makefile:
PROJECT_DIR = Electra
PROGRAM_TEST = testProject
CXX = g++
CXXFLAGS = -std=c++11 -g -fprofile-arcs -ftest-coverage
LINKFLAGS = -lgtest
SRC_DIR = src
TEST_DIR = test
SRC_INCLUDE = include
INCLUDE = -I ${SRC_INCLUDE}
GCOV = gcov
LCOV = lcov
COVERAGE_RESULTS = results.coverage
COVERAGE_DIR = docs/code/coverage
STATIC_ANALYSIS = cppcheck
STYLE_CHECK = cpplint.py
DOXY_DIR = docs/code
#Targets
#
#.PHONY: all
#all: $(PROGRAM_TEST) memcheck coverage docs static style
#Temporary all target. use ^^^ this one once docs and coverage required
.PHONY: all
all: $(PROGRAM_TEST) memcheck static style
%.o: %.cpp
$(CXX) $(CXXFLAGS) -c $< -o $@
.PHONY: clean
clean:
rm -rf *~ $(SRC)/*.o $(TEST_DIR)/output/*.dat \
*.gcov *.gcda *.gcno *.orig ???*/*.orig \
*.bak ???*/*.bak $(PROGRAM_GAME) \
???*/*~ ???*/???*/*~ $(COVERAGE_RESULTS) \
$(PROGRAM_TEST) $(MEMCHECK_RESULTS) $(COVERAGE_DIR) \
$(DOXY_DIR)/html obj bin
$(PROGRAM_TEST): $(TEST_DIR) $(SRC_DIR)
$(CXX) $(CXXFLAGS) -o $(PROGRAM_TEST) $(INCLUDE) \
$(TEST_DIR)/*.cpp $(SRC_DIR)/*.cpp $(LINKFLAGS)
tests: $(PROGRAM_TEST)
$(PROGRAM_TEST)
memcheck: $(PROGRAM_TEST)
valgrind --tool=memcheck --leak-check=yes $(PROGRAM_TEST)
fullmemcheck: $(PROGRAM_TEST)
valgrind --tool=memcheck --leak-check=full $(PROGRAM_TEST)
coverage: $(PROGRAM_TEST)
$(PROGRAM_TEST)
# Determine code coverage
$(LCOV) --capture --gcov-tool $(GCOV) --directory . --output-file $(COVERAGE_RESULTS)
# Only show code coverage for the source code files (not library files)
$(LCOV) --extract $(COVERAGE_RESULTS) */$(PROJECT_DIR)/$(SRC_DIR)/* -o $(COVERAGE_RESULTS)
#Generate the HTML reports
genhtml $(COVERAGE_RESULTS) --output-directory $(COVERAGE_DIR)
#Remove all of the generated files from gcov
rm -f *.gcda *.gcno
static: ${SRC_DIR} ${TEST_DIR}
${STATIC_ANALYSIS} --verbose --enable=all ${SRC_DIR} ${TEST_DIR} ${SRC_INCLUDE} --suppress=missingInclude
style: ${SRC_DIR} ${TEST_DIR} ${SRC_INCLUDE}
${STYLE_CHECK} $(SRC_INCLUDE)/* ${SRC_DIR}/* ${TEST_DIR}/*
#.PHONY: docs
#docs: ${SRC_INCLUDE}
# doxygen $(DOXY_DIR)/doxyfile
Running "make tests" results in the following:
g++ -std=c++11 -g -fprofile-arcs -ftest-coverage -o testProject -I include \
test/*.cpp src/*.cpp -lgtest
testProject
make: testProject: Command not found
Makefile:53: recipe for target 'tests' failed
make: *** [tests] Error 127
Any idea as to why this won't work, or how to even get started trying to resolve it? It's not a very detailed error. I don't want to change the Makefile, as it works on my school's systems and this is a shared project.
My home system is running Windows 10, and I'm using the Ubuntu shell to run the makefiles.
In POSIX shells the current working directory is not searched by default. This is a safety measure that comes from Unix's origins as a multi-user system: you don't want someone to be able to drop a program named ls into some directory and have unsuspecting people run it just by typing ls there.
Apparently in your school systems, someone has added the current working directory (.) to your PATH environment variable, while at home you do not have it added.
Your makefile is wrong; the recipe should be:
tests: $(PROGRAM_TEST)
./$(PROGRAM_TEST)
to force the program in the current working directory to be run, instead of relying on the cwd appearing in the PATH (or running some other instance of testProject that happens to be on your PATH).
This will work on all your systems.
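As a quick illustration, run from the project directory after make tests has built the binary:
testProject        # fails at home: '.' is not on PATH, so the shell cannot find it
./testProject      # always runs the binary that was just built in the current directory
The explicit ./ prefix works on every system and avoids the security downside of putting '.' on PATH described above.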

Launch tests normally when not in debug mode

For my C++ project I have the following Makefile:
GDB=gdb
DEBUG ?= 1
ifeq ($(DEBUG), 1)
CCFLAGS =-DDEBUG
RUNPATH =${GDB}
else
CCFLAGS=-DNDEBUG
RUNPATH="/bin/sh -c"
endif
CPP=g++ ${CCFLAGS}
TESTGEN=cxxtestgen
CFLAGS_DBG=""
TEST_BUILD_PATH="./build/tests"
BUILD_PATH="./build"
TESTS_SRC_PATH="./tests"
SRC_PATH=""
# NORMAL TARGETS
# To Be filled
# RUN TESTS
test_command_parser_gen:
${TESTGEN} --error-printer -o ${TESTS_SRC_PATH}/CommandParser/runner.cpp ./tests/CommandParser/testCommandParser.h
test_command_parser_build: test_command_parser_gen
${CPP} -o ${TEST_BUILD_PATH}/commandParser ${TESTS_SRC_PATH}/CommandParser/runner.cpp ./src/tools/command_parser.cpp
test_command_parser_run: test_command_parser_build
${RUNPATH} ./build/tests/commandParser
clean:
find ./build ! -name '.gitkeep' -type f -exec rm -f {} + && find ./tests ! -name '*.h' -type f -exec rm -f {} +
When I launch the tests via the command:
make test_command_parser_run
As expected, gdb fires up and I can use it to debug the test. But sometimes I just need to run the test as is (e.g. when in CI), so I use the following command:
make test_command_parser_run DEBUG=0
But in that case I get the following error:
cxxtestgen --error-printer -o "./tests"/CommandParser/runner.cpp ./tests/CommandParser/testCommandParser.h
g++ -DNDEBUG -o "./build/tests"/commandParser "./tests"/CommandParser/runner.cpp ./src/tools/command_parser.cpp
"/bin/sh -c" ./build/tests/commandParser
/bin/sh: 1: /bin/sh -c: not found
Makefile:31: recipe for target 'test_command_parser_run' failed
make: *** [test_command_parser_run] Error 127
Therefore, I would like to know how I can tell make to execute the test without gdb when not in "debug" mode.
The whole idea behind this is to debug my application automatically, without having to remember the command and compilation sequence to do so.
Remove the quotes around /bin/sh -c, like so:
else
CCFLAGS=-DNDEBUG
RUNPATH=/bin/sh -c
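A slightly simpler variant (just a sketch against the Makefile above, not the only option) is to leave RUNPATH empty in the non-debug branch, so the test binary runs directly with no wrapper at all:
else
CCFLAGS=-DNDEBUG
RUNPATH=
endif
With DEBUG=0 the recipe line ${RUNPATH} ./build/tests/commandParser then reduces to the bare test command, which is usually what a CI job wants.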

gitlab-ci.yml cpp coverage report

I am trying to implement CI using GitLab for a C++ project. To start with, I added a simple C++ hello world program which compiled and ran fine both on my PC and in GitLab CI.
When I try to generate a coverage report with the same commands, it works on my PC but not in GitLab CI.
This is my gitlab-ci.yml
# use the official gcc image, based on debian
# can use versions as well, like gcc:5.2
# see https://hub.docker.com/_/gcc/
image: gcc

build:
  stage: build
  # instead of calling g++ directly you can also use some build toolkit like make
  # install the necessary build tools when needed
  # before_script:
  #   - apt update && apt -y install make autoconf
  script:
    - g++ -Wall --coverage -fprofile-arcs -ftest-coverage helloworld.cpp -o mybinary
    - ls
  artifacts:
    paths:
      - mybinary
  # depending on your build setup it's most likely a good idea to cache outputs to reduce the build time
  # cache:
  #   paths:
  #     - "*.o"

# run tests using the binary built before
test:
  stage: test
  script:
    - bash runmytests.sh

coverage:
  stage: deploy
  before_script:
    - apt-get -qq update && apt-get -qq install -y gcovr ggcov lcov
  script:
    - ls
    - g++ -Wall --coverage -fprofile-arcs -ftest-coverage helloworld.cpp -o mybinary
    - ./mybinary
    - ls
    - gcov helloworld.cpp
    - lcov --directory . --capture --output-file coverage.info
    - lcov --remove coverage.info '/usr/*' --output-file coverage.info
    - lcov --list coverage.info
    - genhtml -o res coverage.info
This is the generated error output
$ g++ -Wall --coverage -fprofile-arcs -ftest-coverage helloworld.cpp -o mybinary
$ ./mybinary
Hello, World!$ ls
README.md
helloworld.cpp
helloworld.gcda
helloworld.gcno
mybinary
runmytests.sh
$ gcov helloworld.cpp
File 'helloworld.cpp'
Lines executed:100.00% of 3
Creating 'helloworld.cpp.gcov'
File '/usr/local/include/c++/8.1.0/iostream'
No executable lines
Removing 'iostream.gcov'
$ lcov --directory . --capture --output-file coverage.info
Capturing coverage data from .
Found gcov version: 8.1.0
Scanning . for .gcda files ...
geninfo: WARNING: /builds/ganeshredcobra/gshell/helloworld.gcno: Overlong record at end of file!
Found 1 data files in .
Processing helloworld.gcda
geninfo: WARNING: cannot find an entry for helloworld.cpp.gcov in .gcno file, skipping file!
Finished .info-file creation
$ lcov --list coverage.info
Reading tracefile coverage.info
lcov: ERROR: no valid records found in tracefile coverage.info
ERROR: Job failed: exit code 1
How can I fix this?
Solved the issue by changing the image name from gcc to ubuntu:16.04; the working yml looks like this:
# use the official gcc image, based on debian
# can use versions as well, like gcc:5.2
# see https://hub.docker.com/_/gcc/
image: ubuntu:16.04

build:
  stage: build
  # instead of calling g++ directly you can also use some build toolkit like make
  # install the necessary build tools when needed
  before_script:
    - apt update && apt -y install make autoconf gcc g++
  script:
    - g++ -Wall --coverage -fprofile-arcs -ftest-coverage helloworld.cpp -o mybinary
    - ls
  artifacts:
    paths:
      - mybinary
  # depending on your build setup it's most likely a good idea to cache outputs to reduce the build time
  # cache:
  #   paths:
  #     - "*.o"

# run tests using the binary built before
test:
  stage: test
  script:
    - bash runmytests.sh

coverage:
  stage: deploy
  before_script:
    - apt-get -qq update && apt-get -qq install -y make autoconf gcc g++ gcovr ggcov lcov
  script:
    - ls
    - g++ -Wall --coverage -fprofile-arcs -ftest-coverage helloworld.cpp -o mybinary
    - ./mybinary
    - ls
    - gcov helloworld.cpp
    - lcov --directory . --capture --output-file coverage.info
    - lcov --list coverage.info
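For context, the root cause is most likely a tool-chain mismatch rather than anything in the yml itself: the gcc image ships GCC 8.1, and the lcov packaged by the underlying Debian release cannot parse gcov 8 output (hence the "Overlong record" and "cannot find an entry ... in .gcno file" warnings), whereas ubuntu:16.04 pairs GCC 5.x with an lcov that understands it. A hedged alternative that keeps the original gcc image is to install a newer lcov in the coverage job instead; the 1.14 version and download URL below are assumptions, so check the lcov releases page:
# in the coverage job's before_script, replace the distro lcov with a newer release
- apt-get -qq update && apt-get -qq install -y wget make
- wget -q https://github.com/linux-test-project/lcov/releases/download/v1.14/lcov-1.14.tar.gz
- tar xzf lcov-1.14.tar.gz
- make -C lcov-1.14 install
- lcov --version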

"ifort: no match" error in csh script

I'm trying to compile a program called ZEUS and I am following the included instructions exactly, but I came across the following error.
The instructions asked me to type csh -v namelist.s in the folder containing namelist.s. This is a fairly large assembly file which invokes another file from the source distribution, bldlibo, which is the csh script mentioned in the title. The contents of this file are:
#==== SCRIPT TO BUILD AN OBJECT LIBRARY FROM A FORTRAN SOURCE FILE====#
#
# Syntax: bldlibo <library name> <source code>
# eg: bldlibo namelist.a namelist.f
#
rm -rf bldlibo.dir
mkdir bldlibo.dir
cp $2 bldlibo.dir
cd bldlibo.dir
fsplit $2 >& /dev/null
rm $2
#
# When -fast option is used, this leads to the __vsincos_ unsatisfied
# external errors when zeus is compiled with -g option. Thus, use -O4
# instead.
#
ifort -c -O4 *.f
#f77 -c -g -C -ftrap=common *.f
ar rv $1 *.o >& /dev/null
ranlib $1
cd ..
mv bldlibo.dir/$1 .
rm -r bldlibo.dir
When I run csh -v namelist.s it begins to run bldlibo and works fine up until ifort is invoked, at which point it says ifort: no match. I have tried adding #!/bin/csh at the start and also sourcing .../ifortvars.csh, but that didn't work.
Can anybody help me? Sorry if I haven't explained it well enough.
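One observation that may help narrow this down: csh prints a "No match" error when a wildcard on a command line expands to nothing, so the message most likely means that *.f matched no files inside bldlibo.dir (i.e. fsplit produced nothing), rather than that ifort itself is missing (that would normally give "Command not found"). A couple of hedged checks, run from inside bldlibo.dir just before the ifort line:
which ifort     # is the compiler on PATH at all?
which fsplit    # bldlibo redirects fsplit's output to /dev/null, so a missing fsplit would go unnoticed
ls *.f          # if this also reports "No match", the split step produced no Fortran files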