Issue with C++ macros on Red Hat Enterprise Linux (RHEL) using Cppcheck

At my job we are working on a large C++ project on Red Hat Enterprise Linux (RHEL) 6, soon to be RHEL 8, with the Bash shell. We sometimes use NetBeans for editing source code, but I prefer vim. We follow DevOps and Agile practices with two-week sprints, and we use the Jenkins build engine with AccuRev for source control. Every time a code change is promoted in AccuRev, Jenkins automatically starts a new build of the code base. As part of that build, Cppcheck performs static analysis on the C++ source code.
In part of our system, we use C++ macros to define unit test scripts. The macros are deliberately not fully defined, since we allow the unit test script developer to customize them for their unit tests. This works fine: there are no errors at compile time with g++, and no errors at run time either.
However, when Jenkins does a build and Cppcheck analyzes the code, it generates:
error-id: unknownMacro
text: There is an unknown macro here somewhere. Configuration is required. If SCRIPT is a macro then please configure it.
Here is an example of the C++ code we use to complete the partially defined C++ macros:
SCRIPT(SampleScript)
BODY()
{
    cout << "SampleScript running." << endl;
}
END_SCRIPT()
SCRIPT, BODY, and END_SCRIPT are C++ macros declared in an include file, but they are not completely defined. On the GitHub site for Cppcheck there is a suggested solution to this issue using the -I option, but I tried that and the unknown-macro errors kept occurring.
This is the Cppcheck command with its arguments, including the -I option; as given, it still generates the unknownMacro error:
cppcheck \
-I ./* \
-j 4 \
--xml-version=2 \

OK, I have many years of experience with Unix and Linux, but I was not aware of the following fix. The fix is to use the -I option with Cppcheck, which I was already doing, but in the Makefile I was calling it with an asterisk:
cppcheck \
-I ./* \
-j 4 \
--xml-version=2 \
I just found out from another person that the asterisk (*) is not recognized in Unix and Linux Makefiles, although it IS recognized on the command line by Bash and other shells, as I have long known. So I removed the asterisk from the Cppcheck call, and now there are no more Cppcheck errors on the C++ code in the Jenkins build.
cppcheck \
-I ./ \
-j 4 \
--xml-version=2 \
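In case someone else still sees unknownMacro after fixing the path: it may also help to point -I at the directory that actually contains the macro header, or to force-include that header so Cppcheck sees the (partial) macro definitions. A minimal sketch, assuming the header lives at ./include/script_macros.h (both the directory and the header name here are hypothetical):

# Point -I at the directory holding the macro header, and force-include
# the header itself so Cppcheck sees the SCRIPT/BODY/END_SCRIPT macros.
# ./include and script_macros.h are illustrative names.
cppcheck \
-I ./include \
--include=./include/script_macros.h \
-j 4 \
--xml-version=2 \
src/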

Related

Cannot run Fortify with multi CPU for a C++ project

I have a C++ project set up with CMake, running on Mac. Recently I have been looking into adding Fortify for automated code analysis. I am using Fortify version 22.1.
After setting up the CMake and shell scripts, I found that if I compile with more than one CPU (using -j), the compiler (c++ or g++) has trouble generating the libraries. Sometimes the build passes and successfully generates the Fortify output, but other times it just errors out. Multi-CPU compiles work fine for this project when Fortify is not involved.
I also see this error when I compile with Fortify (whether the build succeeds or not):
[error]: Translator execution failed. Please consult the Troubleshooting section of the User Manual.
Translator returned status 1:
error: unable to handle compilation, expected exactly one compiler job in ''
This error always happens after a "Linking CXX xxxxx xxxx" step. I can't find any documentation about it.
Does anyone know how to solve this? Thank you.
Update: more details about my setup.
I use shell scripts to wrap sourceanalyzer, like this:
#!/bin/bash
exec sourceanalyzer -b MyApp /Library/Developer/CommandLineTools/usr/bin/c++ "$@"
And my CMake setup looks like this:
if (${ENABLE_FORTIFY} EQUAL 1)
    set(CMAKE_C_COMPILER ${AVSxAppDALDefaultImplementation_SOURCE_DIR}/scripts/fortify-build-cc.sh)
    set(CMAKE_CXX_COMPILER ${AVSxAppDALDefaultImplementation_SOURCE_DIR}/scripts/fortify-build-cxx.sh)
endif()
My shell script runs CMake and then the scan:
cmake $PACKAGEPATH \
...
-DENABLE_FORTIFY="${ENABLE_FORTIFY}"
echo "---BUILDING---"
make release
if [[ $ENABLE_FORTIFY == 1 ]]; then
    echo "---RUNNING FORTIFY SCAN---"
    sourceanalyzer -b ${CURRENT_PROJECT_NAME} -scan -f fortify_scan_result_${CURRENT_PROJECT_NAME}.txt
fi

Access files generated in the backend

I'm a beginner. I started exploring Pythran and Transonic a few days back. I learned that Pythran generates a C++ file from the Python input file. I want to read the C++ files generated in the backend.
Does anyone have any idea how to access the files generated in the backend?
I'm implementing Pythran using Transonic's support.
Thanks!
Have you tried running pythran with the --help option?
...
optional arguments:
-h, --help show this help message and exit
-o OUTPUT_FILE path to generated file. Honors %{ext}.
-P only run the high-level optimizer, do not compile
-E only run the translator, do not compile
...
So the answer is: use the -E option:
pythran my_python_file.py -E
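Per the help text above, -E stops after the Python-to-C++ translation, leaving the generated source on disk where it can be read, and -o can name the output explicitly. A small sketch (file names are illustrative):

# Run only the translator; the generated C++ (typically my_python_file.cpp)
# can then be opened and read.
pythran -E my_python_file.py

# Or name the generated file explicitly with -o.
pythran -E my_python_file.py -o my_python_file.cpp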

Building and using a pure llvm toolchain for c++ on linux

Assuming this is possible, could someone tell me how I have to configure the CMake build to create a "pure" LLVM toolchain on Ubuntu 16.04, consisting of:
clang
lld
libc++
libc++abi
libunwind (llvm)
compiler-rt
any other pieces that might be relevant and are "production ready"
The resulting compiler should:
be as fast as possible (optimizations turned on, no unnecessary asserts or other checks in the compiler binary itself)
be installed in a separate, local directory (let's call it <llvm_install>)
not depend on the LLVM toolchain provided by the package manager
use libc++, libc++abi, etc. by default
support the sanitizers (ubsan, address, memory, thread), which probably means that I have to compile libc++ a second time
So far I have cloned
llvm from http://llvm.org/git/llvm.git into <llvm_root>
clang from http://llvm.org/git/clang.git into <llvm_root>/tools/clang
lld from http://llvm.org/git/lld.git into <llvm_root>/tools/lld
compiler-rt, libcxx, libcxxabi, libunwind from http://llvm.org/git/<project_name> into <llvm_root>/projects/<project_name>
Then I run ccmake in a separate directory. I have tried various settings, but as soon as I try anything fancier than turning optimizations on, I almost always get some sort of build error. Unfortunately, I have yet to find a way to export my changes from ccmake, otherwise I'd give you an example with the settings and the corresponding error; but I'm more interested in a best practice than in a fix for my test configs anyway.
Bonus points: by default, this should build with the default g++ toolchain, but I'd also be interested in a two-stage build if that improves the performance of the final toolchain (e.g. by using LTO).
By the way, the whole idea came from watching Chandler's talk:
Pacific++ 2017: Chandler Carruth "LLVM: A Modern, Open C++ Toolchain"
My usual procedure is to build a small enough LLVM/Clang so that I have something working with libc++ and libc++abi. I guess you could use the system-provided LLVM, but I haven't tried it. For this step, what you have checked out is probably enough. A sample script for this:
cmake \
-G Ninja \
-DCMAKE_EXPORT_COMPILE_COMMANDS=On \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DBUILD_SHARED_LIBS=On \
-DLLVM_ENABLE_ASSERTIONS=Off \
-DLLVM_TARGETS_TO_BUILD="X86" \
-DLLVM_ENABLE_SPHINX=Off \
-DLLVM_ENABLE_THREADS=On \
-DLIBCXX_ENABLE_EXCEPTIONS=On \
-DLIBCXX_ENABLE_RTTI=On \
-DCMAKE_INSTALL_PREFIX=[path-to-install-dir] \
[path-to-source-dir]
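Having installed that, the stage-1 tools need to be visible to the next step, e.g. (the install path is the one chosen above):

# Put the stage-1 clang, llvm-config, etc. on PATH for the second build.
export PATH=[path-to-install-dir]/bin:$PATH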
With the aforementioned clang in your PATH environment variable, you can use the build script below, adjusting it to your needs (sanitizers, etc.). Apart from the main documentation page on the subject, poking around the CMakeLists.txt of each respective tool is also illuminating and helps adapt the build process from version to version.
LLVM_TOOLCHAIN_LIB_DIR=$(llvm-config --libdir)
LD_FLAGS=""
LD_FLAGS="${LD_FLAGS} -Wl,-L ${LLVM_TOOLCHAIN_LIB_DIR}"
LD_FLAGS="${LD_FLAGS} -Wl,-rpath-link ${LLVM_TOOLCHAIN_LIB_DIR}"
LD_FLAGS="${LD_FLAGS} -lc++ -lc++abi"
CXX_FLAGS=""
CXX_FLAGS="${CXX_FLAGS} -stdlib=libc++ -pthread"
CC=clang CXX=clang++ \
cmake -G Ninja \
-DCMAKE_EXPORT_COMPILE_COMMANDS=On \
-DBUILD_SHARED_LIBS=On \
-DLLVM_ENABLE_LIBCXX=On \
-DLLVM_ENABLE_LIBCXXABI=On \
-DLLVM_ENABLE_ASSERTIONS=On \
-DLLVM_TARGETS_TO_BUILD="X86" \
-DLLVM_ENABLE_SPHINX=Off \
-DLLVM_ENABLE_THREADS=On \
-DLLVM_INSTALL_UTILS=On \
-DLIBCXX_ENABLE_EXCEPTIONS=On \
-DLIBCXX_ENABLE_RTTI=On \
-DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_CXX_FLAGS="${CXX_FLAGS}" \
-DCMAKE_SHARED_LINKER_FLAGS="${LD_FLAGS}" \
-DCMAKE_MODULE_LINKER_FLAGS="${LD_FLAGS}" \
-DCMAKE_EXE_LINKER_FLAGS="${LD_FLAGS}" \
-DCMAKE_POLICY_DEFAULT_CMP0056=NEW \
-DCMAKE_POLICY_DEFAULT_CMP0058=NEW \
-DCMAKE_INSTALL_PREFIX=${INSTALL_DIR} \
[path-to-source-dir]
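Once installed, a quick smoke test is to build a trivial program against the new toolchain's own runtime pieces; something like the following, where hello.cpp and the install path placeholder are illustrative:

# Build with the new clang, its libc++, and lld rather than the system defaults.
<llvm_install>/bin/clang++ -stdlib=libc++ -fuse-ld=lld hello.cpp -o hello

# Sanitizer builds use the same driver, e.g. with AddressSanitizer:
<llvm_install>/bin/clang++ -stdlib=libc++ -fsanitize=address hello.cpp -o hello_asan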
A note on performance: I haven't watched that talk yet, but my motivation behind this two-step build was to have a toolchain that I can easily relocate between systems, since the minimal system dependency that matters is libc.
Lastly, relevant to the above procedure is an older question of mine, which still bugs me. If you have any insight on it, please don't hesitate to share.
PS: The scripts have been tested with LLVM 3.7 through 3.9 and the current trunk (6.0.0).
Update: I've also applied these suggestions, and there is a marked improvement when using the gold linker instead of ld. LTO is also a boost.
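For reference, a sketch of how those two tweaks can be expressed; the dedicated CMake switch only exists in newer LLVM source trees, so treat both as illustrative:

# Ask the clang/gcc driver to link with gold instead of ld.
LD_FLAGS="${LD_FLAGS} -fuse-ld=gold"

# Newer LLVM releases also expose an LTO switch for the cmake call itself:
#   -DLLVM_ENABLE_LTO=On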

Are there options to speed up dpkg-buildpackage?

I'm backporting ffmpeg to an older version of Debian.
Everything is going well, but it's so slow.
I am running dpkg-buildpackage -us -uc
with a debian/rules file that looks like this:
#!/usr/bin/make -f
%:
	dh $@

override_dh_auto_configure:
	./configure
I notice this is only processing on one core.
Is there anything like make -j 4 that I could use to speed this up?
I've been using this guide, but I don't see anything about speeding up the build step:
https://www.debian.org/doc/manuals/maint-guide/
Sure, you can use -j 4 as an argument to dpkg-buildpackage. It is documented in the man page; the relevant section is:
-jjobs Number of jobs allowed to be run simultaneously, equivalent to
the make(1) option of the same name. Will add itself to
the MAKEFLAGS environment variable, which should cause all
subsequent make invocations to inherit the option. Also adds
parallel=jobs to the DEB_BUILD_OPTIONS environment variable which
allows debian/rules files to use this information for their own
purposes. The parallel=jobs in DEB_BUILD_OPTIONS environment
variable will override the -j value if this option is given.
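Applied to the command from the question, that becomes (the job count is whatever your machine can handle):

# -j4 propagates through MAKEFLAGS and DEB_BUILD_OPTIONS (parallel=4)
# to the make invocations underneath dh.
dpkg-buildpackage -us -uc -j4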

InstallShield creates MSI even though build has errors

When I compile an .ism project to create an MSI, it still creates the MSI even though there are build errors.
The reason I need it NOT to be created is build verification: instead of checking the build log for errors, I will just check for the existence of the MSI.
Does anybody know how I can achieve that?
EDIT:
I'm using the IsCmdBld tool to build MSIs. This is the command line I run; the environment variables are set before this command executes:
IsCmdBld -p "%FULL_PROJECT_FILENAME%" -a %BUILDMODE% -r %PRODUCT% -o "%MMSEARCHPATH%" | tee /A "%FULL_PROJECT_LOG_FILENAME%"
If you are compiling using IsCmdBld.exe, you should add the -x option, so that the build stops if an error occurs.
You can also combine it with -w, which makes each warning be treated as an error (and thus each warning encountered also stops the build).
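Applied to the command line from the question, the call would look like this (only -x and -w added, everything else unchanged):

IsCmdBld -p "%FULL_PROJECT_FILENAME%" -a %BUILDMODE% -r %PRODUCT% -o "%MMSEARCHPATH%" -x -w | tee /A "%FULL_PROJECT_LOG_FILENAME%"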
More information about IsCmdBld.exe : http://helpnet.installshield.com/installshield16helplib/ISCmdBldParam.htm
I hope this helps.
Your build automation should check the exit code from ISCmdBld.exe. If the exit code is a failure, don't archive the output.
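A minimal sketch in batch syntax, assuming the build runs from a .bat step; note that piping through tee would report tee's exit code rather than IsCmdBld's, so capture the code without the pipe:

rem Run the build with -x so errors stop it, then test the exit code.
IsCmdBld -p "%FULL_PROJECT_FILENAME%" -a %BUILDMODE% -r %PRODUCT% -o "%MMSEARCHPATH%" -x
if errorlevel 1 (
    echo Build failed - skipping MSI archiving.
    exit /b 1
)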