How do I set up Google Test with a GNU Make project? - c++

Since there's basically no documentation on the Google Test webpage—how do I do that?
What I have done until now:
I downloaded googletest 1.6 from the project page and did a ./configure && make inside it
I added -Igtest/include -Lgtest/lib to my compiler/linker flags
I wrote a small sample test:
#include "gtest/gtest.h"
int main(int argc, char **args)
{
return 0;
}
TEST(someTest,testOne)
{
ASSERT_EQ(5,5);
}
This compiles fine, but the linker seems not to be amused at all. I get a huge pile of error messages in the style of
test/main.o: In function `someTest_testOne_Test::TestBody()':
main.cpp:(.text+0x96): undefined reference to `testing::internal::AssertHelper::AssertHelper(testing::TestPartResult::Type, char const*, int, char const*)'
Now what did I forget to do?

Just as a reference, I have a Docker setup with g++ and gtest which works properly. I provide all the files here for future reference:
Dockerfile
FROM gcc:9.2.0
WORKDIR /usr/src/app
RUN apt-get -qq update \
&& apt-get -qq install --no-install-recommends cmake \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN git clone --depth=1 -b master https://github.com/google/googletest.git
RUN mkdir googletest/build
WORKDIR /usr/src/app/googletest/build
RUN cmake .. \
&& make \
&& make install \
&& rm -rf /usr/src/app/googletest
WORKDIR /usr/src/app
COPY . .
RUN mkdir obj
RUN make
CMD [ "./main" ]
Makefile
CXX = g++
CXXFLAGS = -std=c++17 -Wall -I h -I /usr/local/include/gtest/ -c
LXXFLAGS = -std=c++17 -I h -pthread
OBJECTS = ./obj/program.o ./obj/main.o ./obj/program_unittest.o
GTEST = /usr/local/lib/libgtest.a
TARGET = main
$(TARGET): $(OBJECTS)
	$(CXX) $(LXXFLAGS) -o $(TARGET) $(OBJECTS) $(GTEST)
./obj/program.o: ./cpp/program.cpp
	$(CXX) $(CXXFLAGS) ./cpp/program.cpp -o ./obj/program.o
./obj/program_unittest.o: ./cpp/program_unittest.cpp
	$(CXX) $(CXXFLAGS) ./cpp/program_unittest.cpp -o ./obj/program_unittest.o
./obj/main.o: ./cpp/main.cpp
	$(CXX) $(CXXFLAGS) ./cpp/main.cpp -o ./obj/main.o
clean:
	rm -fv $(TARGET) $(OBJECTS)
cpp/main.cpp
#include <iostream>
#include "program.h"
#include "gtest/gtest.h"
int main(int argc, char **argv)
{
::testing::InitGoogleTest(&argc, argv);
std::cout << "RUNNING TESTS ..." << std::endl;
int ret{RUN_ALL_TESTS()};
if (!ret)
std::cout << "<<<SUCCESS>>>" << std::endl;
else
std::cout << "FAILED" << std::endl;
return 0;
}
cpp/program.cpp
#include "program.h"
// Returns n! (the factorial of n). For negative n, n! is defined to be 1.
int Factorial(int n)
{
int result = 1;
for (int i = 1; i <= n; i++)
{
result *= i;
}
return result;
}
// Returns true if and only if n is a prime number.
bool IsPrime(int n)
{
// Trivial case 1: small numbers
if (n <= 1)
return false;
// Trivial case 2: even numbers
if (n % 2 == 0)
return n == 2;
// Now, we have that n is odd and n >= 3.
// Try to divide n by every odd number i, starting from 3
for (int i = 3;; i += 2)
{
// We only have to try i up to the square root of n
if (i > n / i)
break;
// Now, we have i <= n/i < n.
// If n is divisible by i, n is not prime.
if (n % i == 0)
return false;
}
// n has no integer factor in the range (1, n), and thus is prime.
return true;
}
cpp/program_unittest.cpp
#include <limits.h>
#include "program.h"
#include "gtest/gtest.h"
namespace
{
// Tests Factorial().
// Tests factorial of negative numbers.
TEST(FactorialTest, Negative)
{
// This test is named "Negative", and belongs to the "FactorialTest"
// test case.
EXPECT_EQ(1, Factorial(-5));
EXPECT_EQ(1, Factorial(-1));
EXPECT_GT(Factorial(-10), 0);
}
// Tests factorial of 0.
TEST(FactorialTest, Zero)
{
EXPECT_EQ(1, Factorial(0));
}
// Tests factorial of positive numbers.
TEST(FactorialTest, Positive)
{
EXPECT_EQ(1, Factorial(1));
EXPECT_EQ(2, Factorial(2));
EXPECT_EQ(6, Factorial(3));
EXPECT_EQ(40320, Factorial(8));
}
// Tests IsPrime()
// Tests negative input.
TEST(IsPrimeTest, Negative)
{
// This test belongs to the IsPrimeTest test case.
EXPECT_FALSE(IsPrime(-1));
EXPECT_FALSE(IsPrime(-2));
EXPECT_FALSE(IsPrime(INT_MIN));
}
// Tests some trivial cases.
TEST(IsPrimeTest, Trivial)
{
EXPECT_FALSE(IsPrime(0));
EXPECT_FALSE(IsPrime(1));
EXPECT_TRUE(IsPrime(2));
EXPECT_TRUE(IsPrime(3));
}
// Tests positive input.
TEST(IsPrimeTest, Positive)
{
EXPECT_FALSE(IsPrime(4));
EXPECT_TRUE(IsPrime(5));
EXPECT_FALSE(IsPrime(6));
EXPECT_TRUE(IsPrime(23));
}
h/program.h
#ifndef GTEST_PROGRAM_H_
#define GTEST_PROGRAM_H_
// Returns n! (the factorial of n). For negative n, n! is defined to be 1.
int Factorial(int n);
// Returns true if and only if n is a prime number.
bool IsPrime(int n);
#endif // GTEST_PROGRAM_H_
cpp/program.cpp and h/program.h files are from the googletest repo sample 1. Dockerfile is adapted from here.

The best example Makefile is the one distributed with Google Test. It shows you how to link gtest_main.a or gtest.a with your binary based on whether you want to use Google's main() function or your own.

I installed Google Test on my system with sudo apt-get install libgtest-dev. The fixture I'm working on doesn't have a main() and can be built with:
g++ unitTest.cpp -o unitTest /usr/src/gtest/src/gtest_main.cc /usr/src/gtest/src/gtest-all.cc -I /usr/include -I /usr/src/gtest -L /usr/local/lib -lpthread

Before I add something to a project makefile, I like to figure out what commands it is actually running. So here is a list of commands that I used to build the sample1 unit test by hand.
g++ -c -I../include sample1.cc
g++ -c -I../include sample1_unittest.cc
g++ -pthread -o s1_ut sample1.o sample1_unittest.o ../lib/.libs/libgtest.a ../lib/.libs/libgtest_main.a
Note: If you get a bunch of pthread related linker errors, you forgot the -pthread in the 3rd command. If you get a bunch of C++ runtime library related linker errors, you typed gcc instead of g++.
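Those three commands translate directly into a small Makefile. This is only a sketch; the ../include and ../lib/.libs paths are assumptions carried over from the gtest 1.6 source-tree layout used in the commands above:

```make
# Sketch only: GTEST_DIR assumes the unit test lives one level below
# the googletest 1.6 source tree built with ./configure && make.
GTEST_DIR := ..
CXXFLAGS  := -I$(GTEST_DIR)/include
# libgtest_main.a supplies main(); link only libgtest.a if you write your own.
GTEST_AR  := $(GTEST_DIR)/lib/.libs/libgtest.a $(GTEST_DIR)/lib/.libs/libgtest_main.a

s1_ut: sample1.o sample1_unittest.o
	g++ -pthread -o $@ $^ $(GTEST_AR)

%.o: %.cc
	g++ -c $(CXXFLAGS) $< -o $@
```

Note the -pthread on the link rule, for the reason given in the note above.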

Related

Why does the code coverage report say my library isn't being covered?

I am attempting to generate code coverage for a small test library. The library only consists of two files.
calculator.cpp:
#include "calculator.h"
Calculator::Calculator()
{
}
Calculator::~Calculator()
{
}
int Calculator::addNumbers(int x, int y)
{
this->result = x + y;
return this->result;
}
calculator.h:
#ifndef CALCULATOR_H
#define CALCULATOR_H
class Calculator
{
public:
Calculator();
~Calculator();
int addNumbers(int x, int y);
private:
int result;
};
#endif
I have a unit test for this library that is being executed. It includes the library and runs fine. I have set CMAKE_CXX_FLAGS to include -fprofile-arcs and -ftest-coverage in the top-level CMakeLists.txt.
cmake_minimum_required(VERSION 2.8.11)
set(CMAKE_CXX_FLAGS "-std=c++11")
include_directories("${PROJECT_BINARY_DIR}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O0 --coverage")
#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -O0 --coverage -ftest-coverage -fprofile-arcs")
add_subdirectory(src)
add_subdirectory(test)
I am using a script to take the gcno and gcda files that are generated on build to generate a human readable report.
#!/bin/bash
OUTPUT_DIR="$1/bin/ExecutableTests/Coverage"
mkdir -p "$OUTPUT_DIR"
component=Calculator
dependency=calculator
script=test_calculator.sh
unit_test=unit_test_calculator
mkdir $OUTPUT_DIR/$component
cd "$1"/bin/
make clean || exit 1
make || exit 1
# Create baseline coverage
lcov -c -i -d "$1"/bin/src/"$component"/CMakeFiles/"$dependency".dir -o "$1/Coverage/$component"_"$dependency".coverage.base
# Run the test
$1/scripts/$script $1
# Create test coverage
lcov -c -d "$1"/bin/test/$component/CMakeFiles/"$unit_test".dir -o "$1/Coverage/$component"_"$dependency".coverage.run
lcov -d "$1/test/$component" -a "$1/Coverage/$component"_"$dependency".coverage.base -a "$1/Coverage/$component"_"$dependency".coverage.run -o "$1/Coverage/$component"_"$dependency".coverage.total
genhtml --branch-coverage -o "$OUTPUT_DIR/$component" "$1/Coverage/$component"_"$dependency".coverage.total
rm -f "$1/Coverage/$component"_"$dependency".coverage.base "$1/Coverage/$component"_"$dependency".coverage.run "$1/Coverage/$component"_"$dependency".coverage.total
I can see that the data files are being generated for the library; however when I view the report that is generated from the script it shows that the library is never touched. This is clearly wrong as illustrated by my unit test.
#include "lest_basic.hpp"
#include "calculator.h"
#include <memory>
#include <iostream>
int addResult(int x, int y)
{
Calculator calc1;
return calc1.addNumbers(x, y);
}
const lest::test specification[] =
{
CASE( "Addition" )
{
std::cout << "Starting Addition testing..." << std::endl;
std::cout << "Adding 1 and 2..." << std::endl;
EXPECT(addResult(1, 2) == 3);
std::cout << "Adding 2 and 8..." << std::endl;
EXPECT(addResult(2, 8) > 1);
std::cout << "Adding 7 and 4..." << std::endl;
EXPECT(addResult(7, 4) < 12);
},
};
int main(int argc, char **argv) {
return lest::run( specification );
}
Here is the CMakeLists.txt for my unit test:
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR}/bin/ExecutableTests/)
include_directories(${CMAKE_SOURCE_DIR}/externalinclude/)
include_directories(${CMAKE_SOURCE_DIR}/src/Calculator/)
add_executable (unit_test_calculator test_calculator.cpp)
target_link_libraries(unit_test_calculator -Wl,--whole-archive calculator -Wl,--no-whole-archive)
My question is: why does the report say that the library code isn't being covered? Are the data files the problem?
I have fixed my issue. The problem was that I wasn't running lcov against the source code after running the unit test. I had the wrong directory.
lcov -c -d "$1"/bin/test/$component/CMakeFiles/"$unit_test".dir -o "$1/Coverage/$component"_"$dependency".coverage.run
Needed to be:
lcov -c -d "$1"/bin/src/$component/CMakeFiles/"$dependency".dir -o "$1/Coverage/$component"_"$dependency".coverage.run

Makefile that only recompiles changed objects?

Alright, new user here, and I've got a problem. I'm a new c++ student, and I have no prior experience in this language (before about 3 months ago). My assignment is as follows:
Write a program that declares an array darray of 50 elements of type double. Initialize the array so that the first 25 elements are equal to the square of the index variable, and the last 25 elements are equal to three times the index variable. Output the array so that 10 elements per line are printed.
The program should have two functions: a function, initArray(), which initializes the array elements, and a function, prArray(), which prints the elements.
I have that, it's as follows
#include "printArray.h"
#include "initializearray.h"
#include "Main.h"
#include <stdio.h>
#include <iostream>
#include <string>
using namespace std;
double initArray();
double prArray(double arrayone[50]);
extern double arrayone[50]; // defined in initializearray.cpp
int main() {
initArray();
system("PAUSE");
prArray(arrayone);
system("PAUSE");
return 0;
}
#include "printArray.h"
#include "initializearray.h"
#include "Main.h"
#include <iostream>
#include <string>
using namespace std;
double prArray(double arraytwo[50])
{
for (int x = 0; x < 50; x++) {
cout << arraytwo[x] << ' ';
if (x % 10 == 9) { // 10 elements per line
cout << endl;
}
}
return 0;
}
#include "printArray.h"
#include "initializearray.h"
#include "Main.h"
#include <iostream>
#include <string>
int x = 0;
double arrayone[50];
double initArray()
{
for (x = 0; x < 25; x++) {
arrayone[x] = (x * x);
}
for (x = 25; x < 50; x++) {
arrayone[x] = (x * 3);
}
return arrayone[49];
}
Now my problem is that the assignment goes on to say
Write a Makefile to compile the program above that minimizes recompiling items upon changes. (e.g., if one function file gets updated, only the necessary file(s) are recompiled.) Include a clean target that removes compiled objects if invoked.
I have a basic makefile:
CC=g++
CFLAGS=-c -Wall
LDFLAGS=
SOURCES=Main.cpp initializeArray.cpp printArray.cpp
OBJECTS=$(SOURCES:.cpp=.o)
EXECUTABLE=Main
all: $(SOURCES) $(EXECUTABLE)
$(EXECUTABLE): $(OBJECTS)
	$(CC) $(LDFLAGS) $(OBJECTS) -o $@
.cpp.o:
	$(CC) $(CFLAGS) $< -o $@
Now, what I need help with is turning this into a makefile that satisfies the assignment conditions - preferably with step-by-step instructions so that I can learn from this.
Modified your Makefile to:
Automatically generate header dependencies.
Re-build and re-link when Makefile changes.
CXX := g++
LD := ${CXX}
CXXFLAGS := -Wall -Wextra -std=gnu++14 -pthread
LDFLAGS := -pthread
exes :=
# Specify EXEs here begin.
Main.SOURCES := Main.cpp initializeArray.cpp printArray.cpp
exes += Main
# Specify EXEs here end.
all: $(exes)
.SECONDEXPANSION:
get_objects = $(patsubst %.cpp,%.o,${${1}.SOURCES})
get_deps = $(patsubst %.cpp,%.d,${${1}.SOURCES})
# Links executables.
${exes} : % : $$(call get_objects,$$*) Makefile
	${LD} -o $@ $(filter-out Makefile,$^) ${LDFLAGS}
# Compiles C++ and generates dependencies.
%.o : %.cpp Makefile
	${CXX} -o $@ -c ${CPPFLAGS} ${CXXFLAGS} -MP -MD $<
# Include the dependencies generated on a previous build.
-include $(foreach exe,${exes},$(call get_deps,${exe}))
.PHONY: all
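For illustration, -MD makes the compiler emit a .d file next to each object, and the -include line above pulls those rules back in on the next build. The contents below are a hypothetical example of what g++ might emit for this project, not generated output:

```make
# Main.d, roughly as g++ -MP -MD would emit it (hypothetical):
Main.o: Main.cpp Main.h printArray.h initializearray.h
# -MP adds a phony target per header, so a deleted header
# doesn't break the build with "No rule to make target":
Main.h:
printArray.h:
initializearray.h:
```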

Gcovr branch coverage for simple case

I developed a simple example to test gcovr and gcov:
#include <iostream>
int main (int argc, const char * argv[])
{
std::cout << argc << std::endl;
if(argc == 1)
{
int y = 1;
std::cout << "Argc > 1" << std::endl;
}
if(argc == 2) std::cout << "Argc > 2" << std::endl;
if(argc == 3)
{
std::cout << "Argc > 3" << std::endl;
}
int i = 34;
i = i * i;
return 0;
}
And a script for coverage report generation:
#! /bin/bash
rm -rf build-run
mkdir build-run
cd build-run
g++ -O6 -DDEBUG=0 --coverage -ftest-coverage -fprofile-arcs -c -o main.o ../main.cpp
g++ -O6 -DDEBUG=0 --coverage -fprofile-arcs -ftest-coverage -lgcov -o coverage ./main.o
./coverage > out
./coverage --help > out
./coverage --help --out > out
gcovr -v -kpbu -r .. -o ../branch-report.txt
gcovr -v -kpu -r .. -o ../report.txt
I got 80% branch coverage using the -b option, and it points me at the last line of the main block. It seems to me that it should be 100% for such a scenario, or not?
This is an issue with gcov. If you look at the underlying gcov output [which the example driver is so courteous to leave for us as build-run/^#main.cpp.gcov], you see:
[…snip…]
3: 21: return 0;
function _Z41__static_initialization_and_destruction_0ii called 3 returned 100% blocks executed 100%
6: 22:}
branch 0 taken 3 (fallthrough)
branch 1 taken 0
branch 2 taken 3 (fallthrough)
branch 3 taken 0
function _GLOBAL__I_main called 3 returned 100% blocks executed 100%
3: 23:/*EOF*/
call 0 returned 3
I think what is being reported is branch coverage for the destructors of static members of objects in the iostream library. … while we try and filter out most of the gcov weirdness through gcovr, this is one of the cases that we cannot reliably ignore.
Bill Hart
John Siirola
P.S. I encourage you to submit gcovr tickets on the gcovr Trac page: https://software.sandia.gov/trac/fast/newticket

Include cuda kernel in my project

I have this c++ project in which I call a cuda kernel by means of a wrapper function.
My c++ file looks like this (this is extern.cc):
#include "extern.h"
#include "qc/operator.h"
#include "qc/quStates.h"
#include "gpu.h"
...
ROUTINE(ext_bit) {
int i;
quState *qbit;
PAR_QUSTATE(q,"q");
opBit *op;
tComplex I(0,1);
tComplex sg= inv ? -1 : 1;
char c=(def->id())[0];
if(def->id().length()!=1) c='?';
switch(c) {
case 'H': op=new opBit(1,1,1,-1,sqrt(0.5)); break;
case 'X': op=new opBit(0,1,1,0); break;
case 'Y': op=new opBit(0,-I,I,0); break;
case 'Z': op=new opBit(1,0,0,-1); break;
case 'S': op=new opBit(1,0,0,sg*I); break;
case 'T': op=new opBit(1,0,0,sqrt(0.5)+sg*sqrt(0.5)*I); break;
case '?':
default: EXTERR("unknown single qubit operator "+def->id());
}
// This is where I call my wrapper function
// the error that I get is: expected primary-expression before ',' token
gpucaller(opBit, q);
qcl_delete(op);
return 0;
}
where "gpucaller" is my wrapper function that calls the kernel, both defined in cuda_kernel.cu:
/* compiling with:
nvcc -arch sm_11 -c -I"/home/glu/NVIDIA_GPU_Computing_SDK/C/common/inc" -I"." -I"./qc" -I"/usr/local/cuda/include" -o cuda_kernel.o cuda_kernel.cu
*/
#ifndef _CUDA_KERNEL_H_
#define _CUDA_KERNEL_H_
#define MAX_QUBITS 25
#define BLOCKDIM 512
#define MAX_TERMS_PER_BLOCK (2*BLOCKDIM)
#define THREAD_MASK (~0ul << 1)
// includes
#include <cutil_inline.h>
#include "gpu.h"
__constant__ float devOpBit[2][2];
__global__ void qcl1(cuFloatComplex *a, int N, int qbCount, int blockGrpSize, int k)
{
//int idx = blockIdx.x * BLOCKDIM + threadIdx.x;
//int tx = threadIdx.x;
cuFloatComplex t0_0, t0_1, t1_0, t1_1;
int x0_idx, x1_idx;
int i, grpSize, b0_idx, b1_idx;
__shared__ cuFloatComplex aS[MAX_TERMS_PER_BLOCK];
...
}
void gpucaller(opBit* op, quBaseState* q) {
// make an operator copy
float** myOpBit = (float**)op->getDeviceReadyOpBit();
unsigned int timer = 0;
cuFloatComplex *a_d;
long int N = 1 << q->mapbits();
int size = sizeof(cuFloatComplex) * N;
// start timer
cutilCheckError( cutCreateTimer( &timer));
cutilCheckError( cutStartTimer( timer));
// allocate device memory
cudaMalloc((void**)&a_d,size);
// copy host memory to device
cudaMemcpy(a_d, q->termsarray, size, cudaMemcpyHostToDevice);
// copy quantic operator to constant memory
cutilSafeCall( cudaMemcpyToSymbol(devOpBit, myOpBit, 2*sizeof(float[2]), 0) );
printf("Cuda errors: %s\n", cudaGetErrorString( cudaGetLastError() ) );
// setup execution parameters
dim3 dimBlock(BLOCKDIM, 1, 1);
int n_blocks = N/MAX_TERMS_PER_BLOCK + (N%MAX_TERMS_PER_BLOCK == 0 ? 0:1);
dim3 dimGrid(n_blocks, 1, 1);
...
// execute the kernel
qcl1<<< dimGrid, dimBlock >>>(a_d, N, gates, blockGrpSize, k);
// check if kernel execution generated and error
cutilCheckMsg("Kernel execution failed");
...
// copy result from device to host
cudaMemcpy(q->termsarray, a_d, size, cudaMemcpyDeviceToHost);
// stop timer
cutilCheckError( cutStopTimer( timer));
//printf( "GPU Processing time: %f (ms)\n", cutGetTimerValue( timer));
cutilCheckError( cutDeleteTimer( timer));
// cleanup memory on device
cudaFree(a_d);
cudaThreadExit();
}
#endif // #ifndef _CUDA_KERNEL_H_
and "gpu.h" has the following content:
#ifndef _GPU_H_
#define _GPU_H_
#include "qc/operator.h"
#include "qc/qustates.h"
void gpucaller(opBit* op, quBaseState* q);
#endif // #ifndef _GPU_H_
I don't include the .cu file in my c++ file, I only include the .h file (gpu.h - contains the prototype of my kernel caller function) in both the c++ and the .cu files.
I compile the .cu file with nvcc, and link the resulting .o file in my project's Makefile.
Also, I didn't forget to add the "-lcudart" flag to the Makefile.
The problem is that when I compile my main project, I get this error:
expected primary-expression before ',' token
and is referring to the line in extern.cc where I call the "gpucaller" function.
Does anyone know how to get this right?
EDIT: I've tried compiling again, this time removing the arguments from the gpucaller's function definition (and obviously not passing any arguments to the function, which is wrong because I need to pass arguments). It compiled just fine.
So the problem is that gpucaller's argument types aren't recognized, I have no idea why (I've included the headers where the arguments' types are declared, ie "qc/operator.h" and "qc/quStates.h"). Does anyone have a solution to this?
My project's Makefile is this:
VERSION=0.6.3
# Directory for Standard .qcl files
QCLDIR = /usr/local/lib/qcl
# Path for qcl binaries
QCLBIN = /usr/local/bin
ARCH = `g++ -dumpmachine || echo bin`
# Comment out if you want to compile for a different target architecture
# To build libqc.a, you will also have to edit qc/Makefile!
#ARCH = i686-linux
#ARCHOPT = -m32 -march=i686
# Debugging and optimization options
#DEBUG = -g -pg -DQCL_DEBUG -DQC_DEBUG
#DEBUG = -g -DQCL_DEBUG -DQC_DEBUG
DEBUG = -O2 -g -DQCL_DEBUG -DQC_DEBUG
#DEBUG = -O2
# Plotting support
#
# Comment out if you don't have GNU libplotter and X
PLOPT = -DQCL_PLOT
PLLIB = -L/usr/X11/lib -lplotter
# Readline support
#
# Comment out if you don't have GNU readline on your system
# explicit linking against libtermcap or libncurses may be required
RLOPT = -DQCL_USE_READLINE
#RLLIB = -lreadline
RLLIB = -lreadline -lncurses
# Interrupt support
#
# Comment out if your system doesn't support ANSI C signal handling
IRQOPT = -DQCL_IRQ
# Replace with lex and yacc on non-GNU systems (untested)
LEX = flex
YACC = bison
INSTALL = install
##### You shouldn't have to edit the stuff below #####
DATE = `date +"%y.%m.%d-%H%M"`
QCDIR = qc
QCLIB = $(QCDIR)/libqc.a
QCLINC = lib
#CXX = g++
#CPP = $(CC) -E
CXXFLAGS = -c $(ARCHOPT) -Wall $(DEBUG) $(PLOPT) $(RLOPT) $(IRQOPT) -I$(QCDIR) -DDEF_INCLUDE_PATH="\"$(QCLDIR)\""
LDFLAGS = $(ARCHOPT) -L$(QCDIR) $(DEBUG) $(PLLIB) -lm -lfl -lqc $(RLLIB) -L"/usr/local/cuda/lib" -lcudart
FILESCC = $(wildcard *.cc)
FILESH = $(wildcard *.h)
SOURCE = $(FILESCC) $(FILESH) qcl.lex qcl.y Makefile
OBJECTS = types.o syntax.o typcheck.o symbols.o error.o \
lex.o yacc.o print.o quheap.o extern.o eval.o exec.o \
parse.o options.o debug.o cond.o dump.o plot.o format.o cuda_kernel.o
all: do-it-all
ifeq (.depend,$(wildcard .depend))
include .depend
do-it-all: build
else
do-it-all: dep
$(MAKE)
endif
#### Rules for depend
dep: lex.cc yacc.cc yacc.h $(QCLIB)
for i in *.cc; do \
$(CPP) -I$(QCDIR) -MM $$i; \
done > .depend
lex.cc: qcl.lex yacc.h
$(LEX) -olex.cc qcl.lex
yacc.cc: qcl.y
$(YACC) -t -d -o yacc.cc qcl.y
yacc.h: yacc.cc
mv yacc.*?h yacc.h
$(QCLIB):
cd $(QCDIR) && $(MAKE) libqc.a
#### Rules for build
build: qcl $(QCLINC)/default.qcl
qcl: $(OBJECTS) qcl.o $(QCLIB)
$(CXX) $(OBJECTS) qcl.o $(LDFLAGS) -o qcl
$(QCLINC)/default.qcl: extern.cc
grep "^//!" extern.cc | cut -c5- > $(QCLINC)/default.qcl
checkinst:
[ -f ./qcl -a -f $(QCLINC)/default.qcl ] || $(MAKE) build
install: checkinst
$(INSTALL) -m 0755 -d $(QCLBIN) $(QCLDIR)
$(INSTALL) -m 0755 ./qcl $(QCLBIN)
$(INSTALL) -m 0644 ./$(QCLINC)/*.qcl $(QCLDIR)
uninstall:
-rm -f $(QCLBIN)/qcl
-rm -f $(QCLDIR)/*.qcl
-rmdir $(QCLDIR)
#### Other Functions
edit:
nedit $(SOURCE) &
clean:
rm -f *.o lex.* yacc.*
cd $(QCDIR) && $(MAKE) clean
clear: clean
rm -f qcl $(QCLINC)/default.qcl .depend
cd $(QCDIR) && $(MAKE) clear
dist-src: dep
mkdir qcl-$(VERSION)
cp README CHANGES COPYING .depend $(SOURCE) qcl-$(VERSION)
mkdir qcl-$(VERSION)/qc
cp qc/Makefile qc/*.h qc/*.cc qcl-$(VERSION)/qc
cp -r lib qcl-$(VERSION)
tar czf qcl-$(VERSION).tgz --owner=0 --group=0 qcl-$(VERSION)
rm -r qcl-$(VERSION)
dist-bin: build
mkdir qcl-$(VERSION)-$(ARCH)
cp Makefile README CHANGES COPYING qcl qcl-$(VERSION)-$(ARCH)
cp -r lib qcl-$(VERSION)-$(ARCH)
tar czf qcl-$(VERSION)-$(ARCH).tgz --owner=0 --group=0 qcl-$(VERSION)-$(ARCH)
rm -r qcl-$(VERSION)-$(ARCH)
upload: dist-src
scp qcl-$(VERSION)*.tgz oemer@tph.tuwien.ac.at:html/tgz
scp: dist-src
scp qcl-$(VERSION).tgz oemer@tph.tuwien.ac.at:bak/qcl-$(DATE).tgz
The only changes that I've added to the original Makefile is adding "cuda_kernel.o" to the OBJECTS' line, and adding the "-lcudart" flag to LDFLAGS.
UPDATE: Thanks harrism for helping me out. I was passing a type as a parameter.
gpucaller(opBit, q);
You are passing a type name (opBit) as a function parameter, which is not valid C or C++. It looks like you need to do this instead:
gpucaller(op, q);
Does it literally look like this in cuda.h?
void gpucaller(type1 param1, type2 param2);
Are type1 and type2 declared anywhere so that your regular C++ compiler know what these types are? If not, then you'd get an error like you're saying you're getting.
Your problem code is pretty big and complex at the moment. Try to strip it down to a more simple failure case and update your question once you have that. It'll make it easier to attempt reproduction. Strip out the cuda timer code, the switch case, replace implementation details with ... where it doesn't matter, etc.
I compile with msvc and nvcc then link with icl; so if you can make a simple example I can see if it compiles with a totally different compiler setup. That should narrow down the problem.
Even though renaming your own header cuda.h to somethingspecific.h didn't help, I don't think it's a good idea to leave it as cuda.h. It's confusing and a potential source of problems.

Run C or C++ file as a script

So this is probably a long shot, but is there any way to run a C or C++ file as a script? I tried:
#!/usr/bin/gcc main.c -o main; ./main
int main(){ return 0; }
But it says:
./main.c:1:2: error: invalid preprocessing directive #!
Short answer:
//usr/bin/clang "$0" && exec ./a.out "$@"
int main(){
return 0;
}
The trick is that your text file must be both valid C/C++ code and shell script. Remember to exit from the shell script before the interpreter reaches the C/C++ code, or invoke exec magic.
Run with chmod +x main.c; ./main.c.
A shebang like #!/usr/bin/tcc -run isn't strictly needed, because without one a Unix-like system will execute the text file as a shell script anyway.
(adapted from this comment)
I used it in my C++ script:
//usr/bin/clang++ -O3 -std=c++11 "$0" && ./a.out; exit
#include <iostream>
int main() {
for (auto i: {1, 2, 3})
std::cout << i << std::endl;
return 0;
}
If your compilation line grows too much you can use the preprocessor (adapted from this answer) as this plain old C code shows:
#if 0
clang "$0" && ./a.out
rm -f ./a.out
exit
#endif
int main() {
return 0;
}
Of course you can cache the executable:
#if 0
EXEC=${0%.*}
test -x "$EXEC" || clang "$0" -o "$EXEC"
exec "$EXEC"
#endif
int main() {
return 0;
}
Now, for the truly eccentric Java developer:
/*/../bin/true
CLASS_NAME=$(basename "${0%.*}")
CLASS_PATH="$(dirname "$0")"
javac "$0" && java -cp "${CLASS_PATH}" ${CLASS_NAME}
rm -f "${CLASS_PATH}/${CLASS_NAME}.class"
exit
*/
class Main {
public static void main(String[] args) {
return;
}
}
D programmers simply put a shebang at the beginning of text file without breaking the syntax:
#!/usr/bin/rdmd
void main(){}
See:
https://unix.stackexchange.com/a/373229/23567
https://stackoverflow.com/a/12296348/199332
For C, you may have a look at tcc, the Tiny C Compiler. Running C code as a script is one of its possible uses.
$ cat /usr/local/bin/runc
#!/bin/bash
sed -n '2,$p' "$@" | gcc -o /tmp/a.out -x c++ - && /tmp/a.out
rm -f /tmp/a.out
$ cat main.c
#!/bin/bash /usr/local/bin/runc
#include <stdio.h>
int main() {
printf("hello world!\n");
return 0;
}
$ ./main.c
hello world!
The sed command takes the .c file and strips off the hash-bang line. 2,$p means print lines 2 to end of file; "$@" expands to the command-line arguments to the runc script, i.e. "main.c".
sed's output is piped to gcc. Passing - to gcc tells it to read from stdin, and when you do that you also have to specify the source language with -x since it has no file name to guess from.
Since the shebang line will be passed to the compiler, and # indicates a preprocessor directive, it will choke on a #!.
What you can do is embed the makefile in the .c file (as discussed in this xkcd thread)
#if 0
make $@ -f - <<EOF
all: foo
foo.o:
	cc -c -o foo.o -DFOO_C $0
bar.o:
	cc -c -o bar.o -DBAR_C $0
foo: foo.o bar.o
	cc -o foo foo.o bar.o
EOF
exit;
#endif
#ifdef FOO_C
#include <stdlib.h>
extern void bar();
int main(int argc, char* argv[]) {
bar();
return EXIT_SUCCESS;
}
#endif
#ifdef BAR_C
#include <stdio.h>
void bar() {
puts("bar!");
}
#endif
The #if 0 #endif pair surrounding the makefile ensure the preprocessor ignores that section of text, and the EOF marker marks where the make command should stop parsing input.
CINT:
CINT is an interpreter for C and C++ code. It is useful e.g. for situations where rapid development is more important than execution time. Using an interpreter the compile and link cycle is dramatically reduced facilitating rapid development. CINT makes C/C++ programming enjoyable even for part-time programmers.
You might want to checkout ryanmjacobs/c which was designed for this in mind. It acts as a wrapper around your favorite compiler.
#!/usr/bin/c
#include <stdio.h>
int main(void) {
printf("Hello World!\n");
return 0;
}
The nice thing about using c is that you can choose what compiler you want to use, e.g.
$ export CC=clang
$ export CC=gcc
So you get all of your favorite optimizations too! Beat that tcc -run!
You can also add compiler flags to the shebang, as long as they are terminated with the -- characters:
#!/usr/bin/c -Wall -g -lncurses --
#include <ncurses.h>
int main(void) {
initscr();
/* ... */
return 0;
}
c also uses $CFLAGS and $CPPFLAGS if they are set as well.
#!/usr/bin/env sh
tail -n +$(( $LINENO + 1 )) "$0" | cc -xc - && { ./a.out "$@"; e="$?"; rm ./a.out; exit "$e"; }
#include <stdio.h>
int main(int argc, char const* argv[]) {
printf("Hello world!\n");
return 0;
}
This properly forwards the arguments and the exit code too.
Quite a short proposal would exploit:
The current shell script being the default interpreter for unknown types (without a shebang or a recognizable binary header).
The "#" being a comment in shell and "#if 0" disabling code.
#if 0
F="$(dirname $0)/.$(basename $0).bin"
[ ! -f $F -o $F -ot $0 ] && { c++ "$0" -o "$F" || exit 1 ; }
exec "$F" "$@"
#endif
// Here starts my C++ program :)
#include <iostream>
#include <unistd.h>
using namespace std;
int main(int argc, char **argv) {
if (argv[1])
clog << "Hello " << argv[1] << endl;
else
clog << "hello world" << endl;
}
Then you can chmod +x your .cpp files and then ./run.cpp.
You could easily give flags for the compiler.
The binary is cached in the current directory along with the source, and updated when necessary.
The original arguments are passed to the binary: ./run.cpp Hi
It doesn't reuse the a.out, so that you can have multiple binaries in the same folder.
Uses whatever c++ compiler you have in your system.
The binary starts with "." so that it is hidden from the directory listing.
Problems:
What happens on concurrent executions?
A variant of John Kugelman's answer can be written in this way:
#!/bin/bash
t=`mktemp`
sed '1,/^\/\/code/d' "$0" | g++ -o "$t" -x c++ - && "$t" "$@"
r=$?
rm -f "$t"
exit $r
//code
#include <stdio.h>
int main() {
printf("Hi\n");
return 0;
}
Here's yet another alternative:
#if 0
TMP=$(mktemp -d)
cc -o ${TMP}/a.out ${0} && ${TMP}/a.out "${@:1}" ; RV=${?}
rm -rf ${TMP}
exit ${RV}
#endif
#include <stdio.h>
int main(int argc, char *argv[])
{
printf("Hello world\n");
return 0;
}
I know this question is not a recent one, but I decided to throw my answer into the mix anyway.
With Clang and LLVM, there is no need to write out an intermediate file or call an external helper program/script (apart from clang/clang++/lli).
You can just pipe the output of clang/clang++ to lli.
#if 0
CXX=clang++
CXXFLAGS="-O2 -Wall -Werror -std=c++17"
CXXARGS="-xc++ -emit-llvm -c -o -"
CXXCMD="$CXX $CXXFLAGS $CXXARGS $0"
LLICMD="lli -force-interpreter -fake-argv0=$0 -"
$CXXCMD | $LLICMD "$@" ; exit $?
#endif
#include <cstdio>
int main (int argc, char **argv) {
printf ("Hello llvm: %d\n", argc);
for (auto i = 0; i < argc; i++) {
printf("%d: %s\n", i, argv[i]);
}
return 3==argc;
}
The above, however, does not let you use stdin in your C/C++ script.
If bash is your shell, then you can do the following to use stdin:
#if 0
CXX=clang++
CXXFLAGS="-O2 -Wall -Werror -std=c++17"
CXXARGS="-xc++ -emit-llvm -c -o -"
CXXCMD="$CXX $CXXFLAGS $CXXARGS $0"
LLICMD="lli -force-interpreter -fake-argv0=$0"
exec $LLICMD <($CXXCMD) "$@"
#endif
#include <cstdio>
int main (int argc, char **argv) {
printf ("Hello llvm: %d\n", argc);
for (auto i = 0; i < argc; i++) {
printf("%d: %s\n", i, argv[i]);
}
for (int c; EOF != (c=getchar()); putchar(c));
return 3==argc;
}
There are several places that suggest the shebang (#!) should remain, but it's illegal for the gcc compiler, so several solutions cut it out. In addition, it is possible to insert a preprocessor directive that fixes the compiler messages in case the C code is wrong.
#!/bin/bash
#if 0
xxx=$(mktemp)
awk 'NR == 1 { print "#line 2 \"" FILENAME "\"" }
NR > 1 { print }' "$0" |
g++ -x c++ -o "${xxx}" - && "${xxx}" "$@"
rv=$?
rm -f "${xxx}"
exit $rv
#endif
#include <iostream>
int main(int argc,char *argv[]) {
std::cout<<"Hello world"<<std::endl;
}
As stated in a previous answer, if you use tcc as your compiler, you can put a shebang #!/usr/bin/tcc -run as the first line of your source file.
However, there is a small problem with that: if you want to compile that same file, gcc will throw an error: invalid preprocessing directive #! (tcc will ignore the shebang and compile just fine).
If you still need to compile with gcc one workaround is to use the tail command to cut off the shebang line from the source file before piping it into gcc:
tail -n+2 helloworld.c | gcc -xc -
Keep in mind that all warnings and/or errors will be off by one line.
You can automate that by creating a bash script that checks whether a file begins with a shebang, something like
if [[ $(head -c2 $1) == '#!' ]]
then
tail -n+2 $1 | gcc -xc -
else
gcc $1
fi
and use that to compile your source instead of directly invoking gcc.
Just wanted to share: thanks to Pedro's explanation of the solutions using the #if 0 trick, I have updated my fork of TCC (Sugar C) so that all examples can be called with a shebang, finally with no errors when viewing the source in an IDE.
Now the code displays beautifully using clangd in VS Code for project sources. The samples' first lines look like:
#if 0
/usr/local/bin/sugar `basename $0` $@ && exit;
// above is a shebang hack, so you can run: ./args.c <arg 1> <arg 2> <arg N>
#endif
The original intention of this project has always been to use C as if it were a scripting language, using the TCC base under the hood, but with a client that prioritizes in-RAM output over file output (without the need for the -run directive).
You can check out the project at: https://github.com/antonioprates/sugar
I like to use this as the first line at the top of my programs:
For C (technically: gnu C as I've specified it below):
///usr/bin/env ccache gcc -Wall -Wextra -Werror -O3 -std=gnu17 "$0" -o /tmp/a -lm && /tmp/a "$@"; exit
For C++ (technically: gnu++ as I've specified it below):
///usr/bin/env ccache g++ -Wall -Wextra -Werror -O3 -std=gnu++17 "$0" -o /tmp/a -lm && /tmp/a "$@"; exit
ccache helps ensure your compiling is a little more efficient. Install it in Ubuntu with sudo apt update && sudo apt install ccache.
For Go (golang) and some explanations of the lines above, see my other answer here: What's the appropriate Go shebang line?