I am attempting to generate code coverage for a small test library. The library only consists of two files.
calculator.cpp:
#include "calculator.h"
Calculator::Calculator()
{
}
Calculator::~Calculator()
{
}
int Calculator::addNumbers(int x, int y)
{
this->result = x + y;
return this->result;
}
calculator.h:
#ifndef CALCULATOR_H
#define CALCULATOR_H
class Calculator
{
public:
Calculator();
~Calculator();
int addNumbers(int x, int y);
private:
int result;
};
#endif
I have a unit test for this library that is being executed; it links against the library and runs fine. I have set CMAKE_CXX_FLAGS to include -fprofile-arcs and -ftest-coverage (via --coverage) in the top-level CMakeLists.txt:
cmake_minimum_required(VERSION 2.8.11)
set(CMAKE_CXX_FLAGS "-std=c++11")
include_directories("${PROJECT_BINARY_DIR}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O0 --coverage")
#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O0 --coverage -ftest-coverage -fprofile-arcs")
add_subdirectory(src)
add_subdirectory(test)
I am using a script to turn the .gcno files generated at build time and the .gcda files generated when the tests run into a human-readable report.
#!/bin/bash
OUTPUT_DIR="$1/bin/ExecutableTests/Coverage"
mkdir -p "$OUTPUT_DIR"
component=Calculator
dependency=calculator
script=test_calculator.sh
unit_test=unit_test_calculator
mkdir -p "$OUTPUT_DIR/$component"
cd "$1"/bin/
make clean || exit 1
make || exit 1
# Create baseline coverage
lcov -c -i -d "$1"/bin/src/"$component"/CMakeFiles/"$dependency".dir -o "$1/Coverage/$component"_"$dependency".coverage.base
# Run the test
$1/scripts/$script $1
# Create test coverage
lcov -c -d "$1"/bin/test/$component/CMakeFiles/"$unit_test".dir -o "$1/Coverage/$component"_"$dependency".coverage.run
lcov -d "$1/test/$component" -a "$1/Coverage/$component"_"$dependency".coverage.base -a "$1/Coverage/$component"_"$dependency".coverage.run -o "$1/Coverage/$component"_"$dependency".coverage.total
genhtml --branch-coverage -o "$OUTPUT_DIR/$component" "$1/Coverage/$component"_"$dependency".coverage.total
rm -f "$1/Coverage/$component"_"$dependency".coverage.base "$1/Coverage/$component"_"$dependency".coverage.run "$1/Coverage/$component"_"$dependency".coverage.total
I can see that the data files are being generated for the library; however, when I view the report produced by the script, it shows that the library is never touched. This is clearly wrong, as illustrated by my unit test:
#include "lest_basic.hpp"
#include "calculator.h"
#include <memory>
#include <iostream>
int addResult(int x, int y)
{
Calculator calc1;
return calc1.addNumbers(x, y);
}
const lest::test specification[] =
{
CASE( "Addition" )
{
std::cout << "Starting Addition testing..." << std::endl;
std::cout << "Adding 1 and 2..." << std::endl;
EXPECT(addResult(1, 2) == 3);
std::cout << "Adding 2 and 8..." << std::endl;
EXPECT(addResult(2, 8) > 1);
std::cout << "Adding 7 and 4..." << std::endl;
EXPECT(addResult(7, 4) < 12);
},
};
int main(int argc, char **argv) {
return lest::run( specification );
}
Here is the CMakeLists.txt for my unit test:
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR}/bin/ExecutableTests/)
include_directories(${CMAKE_SOURCE_DIR}/externalinclude/)
include_directories(${CMAKE_SOURCE_DIR}/src/Calculator/)
add_executable (unit_test_calculator test_calculator.cpp)
target_link_libraries(unit_test_calculator -Wl,--whole-archive calculator -Wl,--no-whole-archive)
My question is: why does the report say that the library code isn't being covered? Are the data files the problem?
I have fixed my issue. The problem was that, after running the unit test, I wasn't running lcov against the directory that holds the library's coverage data; I was pointing it at the wrong directory.
lcov -c -d "$1"/bin/test/$component/CMakeFiles/"$unit_test".dir -o "$1/Coverage/$component"_"$dependency".coverage.run
Needed to be:
lcov -c -d "$1"/bin/src/$component/CMakeFiles/"$dependency".dir -o "$1/Coverage/$component"_"$dependency".coverage.run
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
if (argc < 3) {
printf("missing arguments\n");
exit(-1);
}
printf("%s %s\n", argv[1], argv[2]);
return 0;
}
I just want to pass command-line arguments like ./a 3 4 in the Linux terminal, without using the .out suffix. Thanks!
Assuming your source file was a.c, just use the -o ("output") flag when you compile it, like this:
cc -o a a.c
Or, if you already did
cc a.c
meaning it left it in a.out by default, you can always rename it:
mv a.out a
Of course you can call it anything you want:
cc -o mytestprogram a.c
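Putting it together with the example at the top (assuming a.c contains the corrected program above):
# Compile to ./a and run it with two arguments.
cc -o a a.c
./a 3 4      # prints: 3 4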
I have two main functions that use a common C++ class.
File1: main.cpp
#include <iostream>
#include "HelloAnother.h"
int main() {
HelloAnother::sayHello1();
return 0;
}
File2: main2.cpp
#include <iostream>
#include "HelloAnother.h"
int main() {
HelloAnother::sayHello2();
return 0;
}
File3: HelloAnother.h
#pragma once
class HelloAnother {
public:
static void sayHello1();
static void sayHello2();
};
File4: HelloAnother.cpp
#include <iostream>
#include "HelloAnother.h"
void HelloAnother::sayHello1() {
std::cout << "Hello 1!!!" << std::endl;
}
void HelloAnother::sayHello2() {
std::cout << "Hello 2 !!!" << std::endl;
}
Now I compile two executables:
clang-3.8 -o main -fprofile-arcs -ftest-coverage --coverage -g -fPIC -lstdc++ main.cpp HelloAnother.cpp
clang-3.8 -o main2 -fprofile-arcs -ftest-coverage --coverage -g -fPIC -lstdc++ main2.cpp HelloAnother.cpp
Now, I run ./main
Hello 1!!!
When I rerun ./main
Hello 1!!!
profiling: /media/sf_ubuntu-shared/test-profiling/main.gcda: cannot map: Invalid argument
profiling: /media/sf_ubuntu-shared/test-profiling/HelloAnother.gcda: cannot map: Invalid argument
On the second run, I get the errors above when the profiling runtime tries to create/merge the .gcda files.
Now, if I try to run ./main2
Hello 2 !!!
profiling: /media/sf_ubuntu-shared/test-profiling/HelloAnother.gcda: cannot map: Invalid argument
When I generate the code coverage report, the call to the second function doesn't show up, as if the call was never made.
Can anyone help me debug this issue, please? It seems to be related to the merging of .gcda files across multiple runs, but I'm not sure how to solve it.
I also tried clang-3.5, with the same results.
After a lot of searching and trial/error this is what works for me:
Compile first executable, run it. This generates HelloAnother.gcda and main.gcda files.
Execute lcov --gcov-tool=gcov-4.4 --directory . --capture --output-file coverage.main.info
rm -rf *.gcda; rm -rf *.gcno
Compile second executable (main2.cpp), run it. This generates another HelloAnother.gcda and a main2.gcda file.
Execute lcov --gcov-tool=gcov-4.4 --directory . --capture --output-file coverage.main2.info
Now, to generate a nice-looking HTML report, run: genhtml -o coverage coverage.main.info coverage.main2.info
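Put together as a plain shell sequence, the workaround looks roughly like this (compiler and --gcov-tool versions are the ones from the question and the steps above; adjust for your setup):
# Build and run the first executable, then capture its counters.
clang-3.8 -o main -fprofile-arcs -ftest-coverage --coverage -g -fPIC -lstdc++ main.cpp HelloAnother.cpp
./main
lcov --gcov-tool=gcov-4.4 --directory . --capture --output-file coverage.main.info
# Start fresh before building the second executable.
rm -f *.gcda *.gcno
# Build and run the second executable, then capture its counters.
clang-3.8 -o main2 -fprofile-arcs -ftest-coverage --coverage -g -fPIC -lstdc++ main2.cpp HelloAnother.cpp
./main2
lcov --gcov-tool=gcov-4.4 --directory . --capture --output-file coverage.main2.info
# Combine both captures into one HTML report.
genhtml -o coverage coverage.main.info coverage.main2.info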
Your problem is that you compile the shared file (HelloAnother.cc) twice, so gcov ends up with two independently instrumented copies of the same source inside main1 and main2 and cannot merge their coverage data.
Instead, compile the shared code once and link it into each executable:
$ g++ --coverage -c HelloAnother.cc
$ g++ --coverage main1.cc HelloAnother.o -o main1
$ g++ --coverage main2.cc HelloAnother.o -o main2
$ ./main1
Hello 1!!!
$ ./main2
Hello 2 !!!
$ gcov --stdout HelloAnother.gcno
...
1: 4:void HelloAnother::sayHello1() {
1: 5: std::cout << "Hello 1!!!" << std::endl;
1: 6:}
-: 7:
1: 8:void HelloAnother::sayHello2() {
1: 9: std::cout << "Hello 2 !!!" << std::endl;
1: 10:}
-: 11:
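If you prefer an HTML report over raw gcov output, the same counters can be fed to lcov and genhtml, roughly as in the workaround above:
# Capture everything in the current directory and render it as HTML.
lcov --directory . --capture --output-file coverage.info
genhtml -o coverage coverage.info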
Since there's basically no documentation on the Google Test webpage: how do I set it up and link it with my project?
What I have done until now:
I downloaded googletest 1.6 from the project page and did a ./configure && make inside it
I added -Igtest/include -Lgtest/lib to my compiler/linker flags
I wrote a small sample test:
#include "gtest/gtest.h"
int main(int argc, char **args)
{
return 0;
}
TEST(someTest,testOne)
{
ASSERT_EQ(5,5);
}
This compiles fine, but the linker seems not to be amused at all. I get a huge pile of error messages in the style of
test/main.o: In function `someTest_testOne_Test::TestBody()':
main.cpp:(.text+0x96): undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
Now what did I forget to do?
For reference, I have a Docker setup with g++ and gtest that works properly. I provide all the files here:
Dockerfile
FROM gcc:9.2.0
WORKDIR /usr/src/app
RUN apt-get -qq update \
&& apt-get -qq install --no-install-recommends cmake \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN git clone --depth=1 -b master https://github.com/google/googletest.git
RUN mkdir googletest/build
WORKDIR /usr/src/app/googletest/build
RUN cmake .. \
&& make \
&& make install \
&& rm -rf /usr/src/app/googletest
WORKDIR /usr/src/app
COPY . .
RUN mkdir obj
RUN make
CMD [ "./main" ]
Makefile
CXX = g++
CXXFLAGS = -std=c++17 -Wall -I h -I /usr/local/include/gtest/ -c
LXXFLAGS = -std=c++17 -I h -pthread
OBJECTS = ./obj/program.o ./obj/main.o ./obj/program_unittest.o
GTEST = /usr/local/lib/libgtest.a
TARGET = main
$(TARGET): $(OBJECTS)
$(CXX) $(LXXFLAGS) -o $(TARGET) $(OBJECTS) $(GTEST)
./obj/program.o: ./cpp/program.cpp
$(CXX) $(CXXFLAGS) ./cpp/program.cpp -o ./obj/program.o
./obj/program_unittest.o: ./cpp/program_unittest.cpp
$(CXX) $(CXXFLAGS) ./cpp/program_unittest.cpp -o ./obj/program_unittest.o
./obj/main.o: ./cpp/main.cpp
$(CXX) $(CXXFLAGS) ./cpp/main.cpp -o ./obj/main.o
clean:
rm -fv $(TARGET) $(OBJECTS)
cpp/main.cpp
#include <iostream>
#include "program.h"
#include "gtest/gtest.h"
int main(int argc, char **argv)
{
::testing::InitGoogleTest(&argc, argv);
std::cout << "RUNNING TESTS ..." << std::endl;
int ret{RUN_ALL_TESTS()};
if (!ret)
std::cout << "<<<SUCCESS>>>" << std::endl;
else
std::cout << "FAILED" << std::endl;
return 0;
}
cpp/program.cpp
#include "program.h"
// Returns n! (the factorial of n). For negative n, n! is defined to be 1.
int Factorial(int n)
{
int result = 1;
for (int i = 1; i <= n; i++)
{
result *= i;
}
return result;
}
// Returns true if and only if n is a prime number.
bool IsPrime(int n)
{
// Trivial case 1: small numbers
if (n <= 1)
return false;
// Trivial case 2: even numbers
if (n % 2 == 0)
return n == 2;
// Now, we have that n is odd and n >= 3.
// Try to divide n by every odd number i, starting from 3
for (int i = 3;; i += 2)
{
// We only have to try i up to the square root of n
if (i > n / i)
break;
// Now, we have i <= n/i < n.
// If n is divisible by i, n is not prime.
if (n % i == 0)
return false;
}
// n has no integer factor in the range (1, n), and thus is prime.
return true;
}
cpp/program_unittest.cpp
#include <limits.h>
#include "program.h"
#include "gtest/gtest.h"
namespace
{
// Tests Factorial().
// Tests factorial of negative numbers.
TEST(FactorialTest, Negative)
{
// This test is named "Negative", and belongs to the "FactorialTest"
// test case.
EXPECT_EQ(1, Factorial(-5));
EXPECT_EQ(1, Factorial(-1));
EXPECT_GT(Factorial(-10), 0);
}
// Tests factorial of 0.
TEST(FactorialTest, Zero)
{
EXPECT_EQ(1, Factorial(0));
}
// Tests factorial of positive numbers.
TEST(FactorialTest, Positive)
{
EXPECT_EQ(1, Factorial(1));
EXPECT_EQ(2, Factorial(2));
EXPECT_EQ(6, Factorial(3));
EXPECT_EQ(40320, Factorial(8));
}
// Tests IsPrime()
// Tests negative input.
TEST(IsPrimeTest, Negative)
{
// This test belongs to the IsPrimeTest test case.
EXPECT_FALSE(IsPrime(-1));
EXPECT_FALSE(IsPrime(-2));
EXPECT_FALSE(IsPrime(INT_MIN));
}
// Tests some trivial cases.
TEST(IsPrimeTest, Trivial)
{
EXPECT_FALSE(IsPrime(0));
EXPECT_FALSE(IsPrime(1));
EXPECT_TRUE(IsPrime(2));
EXPECT_TRUE(IsPrime(3));
}
// Tests positive input.
TEST(IsPrimeTest, Positive)
{
EXPECT_FALSE(IsPrime(4));
EXPECT_TRUE(IsPrime(5));
EXPECT_FALSE(IsPrime(6));
EXPECT_TRUE(IsPrime(23));
}
h/program.h
#ifndef GTEST_PROGRAM_H_
#define GTEST_PROGRAM_H_
// Returns n! (the factorial of n). For negative n, n! is defined to be 1.
int Factorial(int n);
// Returns true if and only if n is a prime number.
bool IsPrime(int n);
#endif // GTEST_PROGRAM_H_
cpp/program.cpp and h/program.h files are from the googletest repo sample 1. Dockerfile is adapted from here.
The best example Makefile is the one distributed with Google Test. It shows you how to link gtest_main.a or gtest.a with your binary based on whether you want to use Google's main() function or your own.
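To make that concrete, here is roughly what the two link lines look like; my_tests.o stands for your compiled test sources, and the archive paths assume the ./configure && make layout from the question (the Makefile in gtest's make/ directory produces gtest.a and gtest_main.a instead), so adjust them to wherever your build put the libraries:
# Your tests define their own main():
g++ -pthread -o my_tests my_tests.o gtest/lib/.libs/libgtest.a
# Let gtest_main provide main() for you:
g++ -pthread -o my_tests my_tests.o gtest/lib/.libs/libgtest_main.a gtest/lib/.libs/libgtest.a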
I installed Google Test on my system with sudo apt-get install libgtest-dev, and the fixture I'm working on doesn't have a main(); it can be built with:
g++ unitTest.cpp -o unitTest /usr/src/gtest/src/gtest_main.cc /usr/src/gtest/src/gtest-all.cc -I /usr/include -I /usr/src/gtest -L /usr/local/lib -lpthread
Before I add something to a project makefile, I like to figure out what commands it is actually running. So here is a list of commands that I used to build the sample1 unit test by hand.
g++ -c -I../include sample1.cc
g++ -c -I../include sample1_unittest.cc
g++ -pthread -o s1_ut sample1.o sample1_unittest.o ../lib/.libs/libgtest.a ../lib/.libs/libgtest_main.a
Note: If you get a bunch of pthread related linker errors, you forgot the -pthread in the 3rd command. If you get a bunch of C++ runtime library related linker errors, you typed gcc instead of g++.
So this is probably a long shot, but is there any way to run a C or C++ file as a script? I tried:
#!/usr/bin/gcc main.c -o main; ./main
int main(){ return 0; }
But it says:
./main.c:1:2: error: invalid preprocessing directive #!
Short answer:
//usr/bin/clang "$0" && exec ./a.out "$#"
int main(){
return 0;
}
The trick is that your text file must be both valid C/C++ code and shell script. Remember to exit from the shell script before the interpreter reaches the C/C++ code, or invoke exec magic.
Run with chmod +x main.c; ./main.c.
A shebang like #!/usr/bin/tcc -run isn't needed, because Unix-like shells fall back to executing a shebang-less text file as a shell script.
(adapted from this comment)
I used it in my C++ script:
//usr/bin/clang++ -O3 -std=c++11 "$0" && ./a.out; exit
#include <iostream>
int main() {
for (auto i: {1, 2, 3})
std::cout << i << std::endl;
return 0;
}
If your compilation line grows too long, you can use the preprocessor (adapted from this answer), as this plain old C code shows:
#if 0
clang "$0" && ./a.out
rm -f ./a.out
exit
#endif
int main() {
return 0;
}
Of course you can cache the executable:
#if 0
EXEC=${0%.*}
test -x "$EXEC" || clang "$0" -o "$EXEC"
exec "$EXEC"
#endif
int main() {
return 0;
}
Now, for the truly eccentric Java developer:
/*/../bin/true
CLASS_NAME=$(basename "${0%.*}")
CLASS_PATH="$(dirname "$0")"
javac "$0" && java -cp "${CLASS_PATH}" ${CLASS_NAME}
rm -f "${CLASS_PATH}/${CLASS_NAME}.class"
exit
*/
class Main {
public static void main(String[] args) {
return;
}
}
D programmers simply put a shebang at the beginning of the text file without breaking the syntax:
#!/usr/bin/rdmd
void main(){}
See:
https://unix.stackexchange.com/a/373229/23567
https://stackoverflow.com/a/12296348/199332
For C, you may have a look at tcc, the Tiny C Compiler. Running C code as a script is one of its possible uses.
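For example, assuming tcc is installed and hello.c is an ordinary C source file, it compiles and runs it in one step, passing along any extra arguments:
# Compile in memory and run immediately; arg1/arg2 become argv[1]/argv[2].
tcc -run hello.c arg1 arg2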
$ cat /usr/local/bin/runc
#!/bin/bash
sed -n '2,$p' "$@" | gcc -o /tmp/a.out -x c++ - && /tmp/a.out
rm -f /tmp/a.out
$ cat main.c
#!/bin/bash /usr/local/bin/runc
#include <stdio.h>
int main() {
printf("hello world!\n");
return 0;
}
$ ./main.c
hello world!
The sed command takes the .c file and strips off the hash-bang line. 2,$p means print lines 2 to end of file; "$@" expands to the command-line arguments to the runc script, i.e. "main.c".
sed's output is piped to gcc. Passing - to gcc tells it to read from stdin, and when you do that you also have to specify the source language with -x since it has no file name to guess from.
Since the shebang line will be passed to the compiler, and # indicates a preprocessor directive, it will choke on a #!.
What you can do is embed the makefile in the .c file (as discussed in this xkcd thread)
#if 0
make "$@" -f - <<EOF
all: foo
foo.o:
cc -c -o foo.o -DFOO_C $0
bar.o:
cc -c -o bar.o -DBAR_C $0
foo: foo.o bar.o
cc -o foo foo.o bar.o
EOF
exit;
#endif
#ifdef FOO_C
#include <stdlib.h>
extern void bar();
int main(int argc, char* argv[]) {
bar();
return EXIT_SUCCESS;
}
#endif
#ifdef BAR_C
#include <stdio.h>
void bar() {
puts("bar!");
}
#endif
The #if 0 / #endif pair surrounding the makefile ensures the preprocessor ignores that section of text, and the EOF heredoc delimiter marks where the make command should stop reading its input.
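A quick usage sketch, assuming the snippet above is saved as foo.c and your shell falls back to running shebang-less executables as shell scripts:
chmod +x foo.c
./foo.c    # the shell feeds the embedded makefile to make, producing ./foo
./foo      # prints: bar!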
CINT:
CINT is an interpreter for C and C++ code. It is useful e.g. for situations where rapid development is more important than execution time. Using an interpreter, the compile and link cycle is dramatically reduced, facilitating rapid development. CINT makes C/C++ programming enjoyable even for part-time programmers.
You might want to check out ryanmjacobs/c, which was designed with this in mind. It acts as a wrapper around your favorite compiler.
#!/usr/bin/c
#include <stdio.h>
int main(void) {
printf("Hello World!\n");
return 0;
}
The nice thing about using c is that you can choose what compiler you want to use, e.g.
$ export CC=clang
$ export CC=gcc
So you get all of your favorite optimizations too! Beat that tcc -run!
You can also add compiler flags to the shebang, as long as they are terminated with the -- characters:
#!/usr/bin/c -Wall -g -lncurses --
#include <ncurses.h>
int main(void) {
initscr();
/* ... */
return 0;
}
c also uses $CFLAGS and $CPPFLAGS if they are set as well.
#!/usr/bin/env sh
tail -n +$(( $LINENO + 1 )) "$0" | cc -xc - && { ./a.out "$@"; e="$?"; rm ./a.out; exit "$e"; }
#include <stdio.h>
int main(int argc, char const* argv[]) {
printf("Hello world!\n");
return 0;
}
This properly forwards the arguments and the exit code too.
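Usage, assuming the snippet is saved as hello.c (the arguments here are just examples):
chmod +x hello.c
./hello.c foo bar
echo $?    # the script's exit status is the program's exit status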
Quite a short proposal would exploit:
The current shell script being the default interpreter for unknown types (without a shebang or a recognizable binary header).
The "#" being a comment in shell and "#if 0" disabling code.
#if 0
F="$(dirname $0)/.$(basename $0).bin"
[ ! -f $F -o $F -ot $0 ] && { c++ "$0" -o "$F" || exit 1 ; }
exec "$F" "$#"
#endif
// Here starts my C++ program :)
#include <iostream>
#include <unistd.h>
using namespace std;
int main(int argc, char **argv) {
if (argv[1])
clog << "Hello " << argv[1] << endl;
else
clog << "hello world" << endl;
}
Then you can chmod +x your .cpp files and run them with ./run.cpp.
You can easily pass flags to the compiler.
The binary is cached in the current directory along with the source, and updated when necessary.
The original arguments are passed to the binary: ./run.cpp Hi
It doesn't reuse the a.out, so that you can have multiple binaries in the same folder.
Uses whatever c++ compiler you have in your system.
The binary starts with "." so that it is hidden from the directory listing.
Problems:
What happens on concurrent executions?
A variant of John Kugelman's answer can be written in this way:
#!/bin/bash
t=`mktemp`
sed '1,/^\/\/code/d' "$0" | g++ -o "$t" -x c++ - && "$t" "$@"
r=$?
rm -f "$t"
exit $r
//code
#include <stdio.h>
int main() {
printf("Hi\n");
return 0;
}
Here's yet another alternative:
#if 0
TMP=$(mktemp -d)
cc -o ${TMP}/a.out ${0} && ${TMP}/a.out ${@:1} ; RV=${?}
rm -rf ${TMP}
exit ${RV}
#endif
#include <stdio.h>
int main(int argc, char *argv[])
{
printf("Hello world\n");
return 0;
}
I know this question is not a recent one, but I decided to throw my answer into the mix anyway.
With Clang and LLVM, there is no need to write out an intermediate file or to call an external helper program/script (apart from clang/clang++/lli).
You can just pipe the output of clang/clang++ to lli.
#if 0
CXX=clang++
CXXFLAGS="-O2 -Wall -Werror -std=c++17"
CXXARGS="-xc++ -emit-llvm -c -o -"
CXXCMD="$CXX $CXXFLAGS $CXXARGS $0"
LLICMD="lli -force-interpreter -fake-argv0=$0 -"
$CXXCMD | $LLICMD "$@" ; exit $?
#endif
#include <cstdio>
int main (int argc, char **argv) {
printf ("Hello llvm: %d\n", argc);
for (auto i = 0; i < argc; i++) {
printf("%d: %s\n", i, argv[i]);
}
return 3==argc;
}
The above, however, does not let you use stdin in your C/C++ script.
If bash is your shell, then you can do the following to use stdin:
#if 0
CXX=clang++
CXXFLAGS="-O2 -Wall -Werror -std=c++17"
CXXARGS="-xc++ -emit-llvm -c -o -"
CXXCMD="$CXX $CXXFLAGS $CXXARGS $0"
LLICMD="lli -force-interpreter -fake-argv0=$0"
exec $LLICMD <($CXXCMD) "$@"
#endif
#include <cstdio>
int main (int argc, char **argv) {
printf ("Hello llvm: %d\n", argc);
for (auto i = 0; i < argc; i++) {
printf("%d: %s\n", i, argv[i]);
}
for (int c; EOF != (c=getchar()); putchar(c));
return 3==argc;
}
There are several places that suggest the shebang (#!) should remain, but it's illegal for the gcc compiler, so several solutions cut it out. In addition, it is possible to insert a preprocessor directive that fixes the compiler messages for the case where the C code is wrong.
#!/bin/bash
#if 0
xxx=$(mktemp)
awk 'NR==1 { print "#line 2 \"" FILENAME "\"" } NR>1 { print }' "$0" |
g++ -x c++ -o "${xxx}" - && "${xxx}" "$@"
rv=$?
rm -f "${xxx}"
exit $rv
#endif
#include <iostream>
int main(int argc,char *argv[]) {
std::cout<<"Hello world"<<std::endl;
}
As stated in a previous answer, if you use tcc as your compiler, you can put a shebang #!/usr/bin/tcc -run as the first line of your source file.
However, there is a small problem with that: if you want to compile that same file, gcc will throw an error: invalid preprocessing directive #! (tcc will ignore the shebang and compile just fine).
If you still need to compile with gcc one workaround is to use the tail command to cut off the shebang line from the source file before piping it into gcc:
tail -n+2 helloworld.c | gcc -xc -
Keep in mind that all warnings and/or errors will be off by one line.
You can automate that by creating a bash script that checks whether a file begins with a shebang, something like
if [[ $(head -c2 "$1") == '#!' ]]
then
tail -n+2 "$1" | gcc -xc -
else
gcc "$1"
fi
and use that to compile your source instead of directly invoking gcc.
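If the off-by-one diagnostics bother you, one possible tweak (not part of the original suggestion) is to replace the shebang line with a #line directive instead of just dropping it, so gcc reports the original file name and line numbers:
# Substitute line 1 with a #line directive; subsequent lines keep their original numbering.
{ echo '#line 2 "helloworld.c"'; tail -n+2 helloworld.c; } | gcc -xc -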
Just wanted to share: thanks to Pedro's explanation of the solutions using the #if 0 trick, I have updated my fork of TCC (Sugar C) so that all the examples can be called with a shebang and, finally, with no errors when viewing the source in an IDE.
Now the code displays beautifully using clangd in VS Code for project sources. The samples' first lines look like:
#if 0
/usr/local/bin/sugar `basename $0` "$@" && exit;
// above is a shebang hack, so you can run: ./args.c <arg 1> <arg 2> <arg N>
#endif
The original intention of this project has always been to use C as if it were a scripting language, using the TCC base under the hood, but with a client that prioritizes in-memory output over file output (without the use of the -run directive).
You can check out the project at: https://github.com/antonioprates/sugar
I like to use this as the first line at the top of my programs:
For C (technically: gnu C as I've specified it below):
///usr/bin/env ccache gcc -Wall -Wextra -Werror -O3 -std=gnu17 "$0" -o /tmp/a -lm && /tmp/a "$@"; exit
For C++ (technically: gnu++ as I've specified it below):
///usr/bin/env ccache g++ -Wall -Wextra -Werror -O3 -std=gnu++17 "$0" -o /tmp/a -lm && /tmp/a "$@"; exit
ccache helps ensure your compiling is a little more efficient. Install it in Ubuntu with sudo apt update && sudo apt install ccache.
For Go (golang) and some explanations of the lines above, see my other answer here: What's the appropriate Go shebang line?