I'm programming an ATSAME70 and I'm trying to implement a simple timer using the SysTick interrupt available in Cortex-M MCUs, but I don't know what is going wrong.
If I write this code in a single main.cpp file:
// main.cpp
#include <cstdint>
#include "init.h"
#include "led.hpp"
volatile uint32_t g_ticks = 0;
extern "C" {
void SysTick_Handler(void)
{
g_ticks++;
}
}
class Timer
{
private:
uint32_t start;
public:
Timer() : start(g_ticks) {}
float elapsed() const { return (g_ticks - start) / 1000.0f; }
};
int main()
{
init();
SysTick_Config(300000000 / 1000); /* Clock is running at 300 MHz */
Timer t;
while (t.elapsed() < 1.0f);
Led::on();
while (true);
}
It works: the LED lights up after 1 second.
But if I try to keep things clean and split the program into the following files:
// timer.hpp
#include <cstdint>
class Timer
{
private:
uint32_t start;
public:
Timer();
float elapsed() const;
};
// timer.cpp
#include "timer.hpp"
volatile uint32_t g_ticks = 0;
extern "C" {
void SysTick_Handler(void)
{
g_ticks++;
}
}
Timer::Timer() : start(g_ticks) {}
float Timer::elapsed() const
{
return (g_ticks - start) / 1000.0f;
}
// main.cpp
#include <cstdint>
#include "init.h"
#include "led.hpp"
#include "timer.hpp"
int main()
{
init();
SysTick_Config(300000000 / 1000); /* Clock is running at 300 MHz */
Timer t;
while (t.elapsed() < 1.0f);
Led::on();
while (true);
}
It doesn't work anymore: the program reaches the first while loop and gets stuck there. I think g_ticks is being corrupted when I read it in t.elapsed(), but I don't know what is happening. Does anybody know where I went wrong?
init() is just a function in which I initialize all the required registers.
EDIT: here are the command lines used to generate the code:
$toolchain_path = "C:\Program Files (x86)\GNU Tools ARM Embedded\8 2018-q4-major\bin";
$link_file = "source\device\same70_flash.ld"
$c_files = "include\sensors\bmi088\bmi088.c " +
...
"source\utils\syscalls.c";
$cpp_files = "source\device\init.cpp " +
...
"source\main.cpp";
Invoke-Expression "& '$toolchain_path\arm-none-eabi-gcc.exe' -c -s -O3 -fdata-sections -ffunction-sections '-Wl,--gc-sections' '-Wl,--entry=Reset_Handler' -mthumb -mcpu=cortex-m7 -mfloat-abi=hard -mfpu=fpv5-d16 -Isource -Iinclude\CMSIS -D__SAME70N21__ $c_files --specs=nosys.specs"
foreach ($c_file in $c_files.split(" "))
{
if ($objects) { $objects += " "; }
$objects += ($c_file.split("\")[-1]).split(".")[0] + ".o";
}
Invoke-Expression "& '$toolchain_path\arm-none-eabi-ld.exe' -s --entry=Reset_Handler -r $objects -o drivers.o"
foreach ($object in $objects.split(" ")) { Remove-Item $object; }
Move-Item drivers.o bin\drivers.o -force
Invoke-Expression "& '$toolchain_path\arm-none-eabi-g++.exe' -s -O3 -fdata-sections -ffunction-sections '-Wl,--gc-sections' -mthumb -mcpu=cortex-m7 -mfloat-abi=hard -mfpu=fpv5-d16 '-Wl,--entry=Reset_Handler' -std=c++17 -Isource -Iinclude -Iinclude\CMSIS -D__SAME70N21__ bin/drivers.o $cpp_files --specs=nosys.specs -T $link_file -o bin\code.elf"
Invoke-Expression "& '$toolchain_path\arm-none-eabi-objcopy.exe' -O binary bin\code.elf bin\code.bin"
The script is written in PowerShell, so I'll explain it a little. $c_files is just a string with every C file to be compiled, separated by a space. $objects is a space-separated string naming every file listed in $c_files but with the ".c" extension replaced by ".o". I've done this to link all the compiled C files into "drivers.o". Finally, the C++ code is compiled with this drivers.o as an argument, and then I generate the .bin file to upload to the MCU.
The code is compiled using the latest GNU Arm Embedded toolchain. I must have made a mistake somewhere, but I don't know where, and I don't have a debugger to inspect the code at runtime.
EDIT 2: Both variants work properly without optimizations. If I pass -O1 or higher to the compiler, the second variant stops working, and I don't understand why.
I have two main functions that use a common C++ class.
File1: main.cpp
#include <iostream>
#include "HelloAnother.h"
int main() {
HelloAnother::sayHello1();
return 0;
}
File2: main2.cpp
#include <iostream>
#include "HelloAnother.h"
int main() {
HelloAnother::sayHello2();
return 0;
}
File3: HelloAnother.h
#pragma once
class HelloAnother {
public:
static void sayHello1();
static void sayHello2();
};
File4: HelloAnother.cpp
#include <iostream>
#include "HelloAnother.h"
void HelloAnother::sayHello1() {
std::cout << "Hello 1!!!" << std::endl;
}
void HelloAnother::sayHello2() {
std::cout << "Hello 2 !!!" << std::endl;
}
Now I compile two executables:
clang-3.8 -o main -fprofile-arcs -ftest-coverage --coverage -g -fPIC -lstdc++ main.cpp HelloAnother.cpp
clang-3.8 -o main2 -fprofile-arcs -ftest-coverage --coverage -g -fPIC -lstdc++ main2.cpp HelloAnother.cpp
Now, I run ./main
Hello 1!!!
When I rerun ./main
Hello 1!!!
profiling: /media/sf_ubuntu-shared/test-profiling/main.gcda: cannot map: Invalid argument
profiling: /media/sf_ubuntu-shared/test-profiling/HelloAnother.gcda: cannot map: Invalid argument
On the second run, I get the error above while trying to create/merge the .gcda files.
Now, If I try to run ./main2
Hello 2 !!!
profiling: /media/sf_ubuntu-shared/test-profiling/HelloAnother.gcda: cannot map: Invalid argument
When I generate the code coverage report, the call to the second function doesn't show up, as if the call was never made.
Can anyone help me debug this issue, please? It seems to be related to the merging of .gcda files across multiple runs, but I'm not sure how to solve it.
I also tried clang-3.5, but with the same results.
After a lot of searching and trial/error, this is what works for me:
1. Compile the first executable and run it. This generates the HelloAnother.gcda and main.gcda files.
2. Execute lcov --gcov-tool=gcov-4.4 --directory . --capture --output-file coverage.main.info
3. rm -rf *.gcda; rm -rf *.gcno
4. Compile the second executable (main2.cpp) and run it. This generates another HelloAnother.gcda and a main2.gcda file.
5. Execute lcov --gcov-tool=gcov-4.4 --directory . --capture --output-file coverage.main2.info
6. To generate a nice-looking HTML report: genhtml -o coverage coverage.main.info coverage.main2.info
Your problem is that you compile the shared file (HelloAnother.cc) twice and gcov fails to understand that two copies of HelloAnother.o inside main1 and main2 need to be shared.
Instead, compile the shared code once and link it into each executable:
$ g++ --coverage -c HelloAnother.cc
$ g++ --coverage main1.cc HelloAnother.o -o main1
$ g++ --coverage main2.cc HelloAnother.o -o main2
$ ./main1
Hello 1!!!
$ ./main2
Hello 2 !!!
$ gcov --stdout HelloAnother.gcno
...
1: 4:void HelloAnother::sayHello1() {
1: 5: std::cout << "Hello 1!!!" << std::endl;
1: 6:}
-: 7:
1: 8:void HelloAnother::sayHello2() {
1: 9: std::cout << "Hello 2 !!!" << std::endl;
1: 10:}
-: 11:
I have a problem compiling my code.
It works when main() is in the same file as the yacc parser, but it doesn't work when I put main() in another file.
This is my Flex file: (lex1.ll)
%{
#include <stdio.h>
#include <string.h> /* for strdup */
#include "yac1.tab.hh"
extern "C"
{
int yylex(void);
}
int line_num = 1;
%}
alpha [A-Za-z]
digit [0-9]
%%
"DELETE ALL" return DELALL;
[ \t] ;
INSERT return INSERT;
DELETE return DELETE;
FIND return FIND;
[0-9]+\.[0-9]+ { yylval.fval = atof(yytext); return FLOAT; }
[0-9]+ { yylval.ival = atoi(yytext); return INT; }
[a-zA-Z0-9_]+ { yylval.sval = strdup(yytext);return STRING; }
\n { ++line_num; return ENDL; }
. ;
%%
This is my Bison file: (yac1.yy)
%{
#include <stdio.h>
#include <stdlib.h>
int intval;
void yyerror(const char *s)
{
fprintf(stderr, "error: %s\n", s);
}
extern "C"
{
int yyparse(void);
int yylex(void);
int yywrap()
{
return 1;
}
}
%}
%token INSERT DELETE DELALL FIND ENDL
%union {
int ival;
float fval;
char *sval;
}
%token <ival> INT
%token <fval> FLOAT
%token <sval> STRING
%%
S:T {printf("INPUT ACCEPTED....\n");exit(0);};
T: INSERT val {printf("hey insert FOUND\n");}
| DELETE val
| DELALL ENDL
| FIND val
;
val : INT ENDL {printf("hey %d\n",$1);intval=$1; }
|
FLOAT ENDL
|
STRING ENDL {printf("hey %s\n",$1);}
;
%%
/*
It works if I uncomment this block of code
int main()
{
while(1){
printf("Enter the string");
yyparse();
}
}
*/
This is my main program: (testlex.cc)
#include <stdio.h>
#include "lexheader.h"
#include "yac1.tab.hh"
extern int intval;
int main()
{
/*
char * line = "INSERT 54\n";
YY_BUFFER_STATE bp = yy_scan_string( line );
yy_switch_to_buffer(bp);
yyparse();
yy_delete_buffer(bp);
printf ("hello %d",intval);
*/
printf("Enter the query:");
//while(1)
printf ("%d\n",yyparse());
}
And this is my Makefile
parser: lex1.ll yac1.yy testlex.cc
bison -d yac1.yy
flex --header-file="lexheader.h" lex1.ll
g++ -o parser yac1.tab.cc lex.yy.c testlex.cc -lfl
clean:
rm -rf *.o parser
When I compile I get this error.
bison -d yac1.yy
flex --header-file="lexheader.h" lex1.ll
g++ -o parser yac1.tab.cc lex.yy.c testlex.cc -lfl
testlex.cc: In function ‘int main()’:
testlex.cc:21:24: error: ‘yyparse’ was not declared in this scope
make: *** [parser] Error 1
PS: It is necessary for me to compile with g++. With gcc I have working code.
Any help in this regard is highly appreciated.
Thanks.
Did you read the documentation of GNU Bison? It has a chapter about C++ parsers with a complete example (quite similar to yours).
You could explicitly declare yyparse as suggested by this answer, but making a real C++ parser is perhaps better.
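If you go the C++ route, the main() side ends up roughly like the sketch below. This is only a sketch under assumptions: it supposes the grammar is regenerated with Bison's lalr1.cc C++ skeleton (and %defines) so that a yy::parser class is generated, and that the scanner's yylex is adapted to the generated interface; none of these names come from your current files.
// testlex.cc, sketched for a Bison C++ parser (lalr1.cc skeleton)
#include <cstdio>
#include "yac1.tab.hh"      // header generated by bison -d from the C++ grammar
int main()
{
    std::printf("Enter the query:");
    yy::parser parser;      // parser class generated by the lalr1.cc skeleton
    return parser.parse();  // 0 on success, non-zero on a syntax error
}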
Your Makefile is not very good. You could have something like
LEX= flex
YACC= bison
LIBES= -lfl
CXXFLAGS= -Wall
parser: lex1.o yac1.tab.o testlex.o
	$(LINK.cc) -o $@ $^ $(LIBES)
lex1.cc: lex1.ll
	$(LEX) --header-file="lexheader.h" -o $@ $<
yac1.tab.cc: yac1.yy
	$(YACC) -d $<
Also take time to read the documentation of GNU make. You might want to use remake (e.g. remake -x) to debug your Makefile.
You need to declare the yyparse function in the testlex.cc file:
int yyparse();
This is what is known as a function prototype, and tells the compiler that the function exists and can be called.
After looking a little closer at your source, I now know why the existing prototype didn't work: you declared it as extern "C" but compiled the file as C++ source. The extern "C" told the compiler that yyparse was an old C-style function, but then you continued to compile the source with a C++ compiler, which caused a name mismatch.
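For example, here is a sketch of testlex.cc with an explicit declaration added (an assumption on my part: since the grammar's prologue wraps yyparse in extern "C", the declaration below matches that C linkage; if you instead drop the extern "C" block from yac1.yy, a plain int yyparse(); is the one to use):
#include <stdio.h>
#include "lexheader.h"
#include "yac1.tab.hh"
extern "C" int yyparse(void);  /* matches the extern "C" declaration in yac1.yy */
extern int intval;
int main()
{
    printf("Enter the query:");
    printf("%d\n", yyparse());
    return 0;
}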
Perl 5.14 source - Sample program failing
I'm trying to execute the program below on 64-bit Linux with libperl.so built from the 5.14 source,
and I'm getting a crash at the location shown:
Program terminated with signal 11, Segmentation fault.
#0 0x00002abdc0eb2656 in Perl_sv_2mortal () from ./libperl.so
(gdb) where
#0 0x00002abdc0eb2656 in Perl_sv_2mortal () from ./libperl.so
#1 0x00000000004010ed in PerlPower ()
#2 0x0000000000401335 in main ()
(gdb)
My program:
#include <EXTERN.h>
#include <perl.h>
#include <stdio.h>
static PerlInterpreter *my_perl;
static void PerlPower(int a, int b)
{
dSP; /* initialize stack pointer */
ENTER; /* everything created after here */
SAVETMPS; /* ...is a temporary variable. */
PUSHMARK(SP); /* remember the stack pointer */
XPUSHs(sv_2mortal(newSViv(a))); /* push the base onto the stack */
XPUSHs(sv_2mortal(newSViv(b))); /* push the exponent onto stack */
PUTBACK; /* make local stack pointer global */
call_pv("expo", G_SCALAR); /* call the function */
SPAGAIN; /* refresh stack pointer */
/* pop the return value from stack */
printf("%d to the %dth power is %d.\n", a, b, POPi);
PUTBACK;
FREETMPS; /* free that return value */
LEAVE; /* ...and the XPUSHed "mortal" args. */
}
int main(int argc, char **argv, char **env)
{
char *my_argv[] = { "", "power.pl" };
PERL_SYS_INIT3(&argc, &argv, &env);
my_perl = perl_alloc();
perl_construct(my_perl);
perl_parse(my_perl, NULL, 2, my_argv, (char **)NULL);
PL_exit_flags |= PERL_EXIT_DESTRUCT_END;
perl_run(my_perl);
PerlPower(3, 4);
/*** Compute 3 ** 4 ***/
perl_destruct(my_perl);
perl_free(my_perl);
PERL_SYS_TERM();
}
power.pl contains the below statements
sub expo {
my ($a, $b) = @_;
return $a ** $b;
}
The sample C and Perl programs above were taken from http://perldoc.perl.org/perlembed.html#Evaluating-a-Perl-statement-from-your-C-program
I'm using the compiler options below:
Compiler:
cc='cc', ccflags ='-D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -pipe -Wdeclaration-after-statement -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
optimize='-O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -Wall -pipe',
cppflags='-D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -pipe -Wdeclaration-after-statement'
ccversion='', gccversion='4.1.2 20070115 (prerelease) (SUSE Linux)', gccosandvers=''
intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
alignbytes=8, prototype=define
Can you please help me to narrow down the problem?
I got one too: the Win32 equivalent, Faulting application power.exe, version 0.0.0.0, faulting module perl514.dll, version 0.0.0.0, fault address 0x000c155f.
But I also got an error message :)
$ power.exe
Can't open perl script "power.pl": No such file or directory
After copying power.pl from perlembed, it works as expected:
$ cat power.pl
sub expo {
my ($a, $b) = @_;
return $a ** $b;
}
$ power.exe
3 to the 4th power is 81.
The problem was that while compiling the source I hadn't specified the location of the new 5.14.2 header files. I copied both the C and Perl files to the folder containing the new 5.14.2 headers, and compiling with the options below resolved the problem:
g++ -o test_perl test_perl.c -I. -L. -g `perl -MExtUtils::Embed -e ccopts -e ldopts`
LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
So this is probably a long shot, but is there any way to run a C or C++ file as a script? I tried:
#!/usr/bin/gcc main.c -o main; ./main
int main(){ return 0; }
But it says:
./main.c:1:2: error: invalid preprocessing directive #!
Short answer:
//usr/bin/clang "$0" && exec ./a.out "$@"
int main(){
return 0;
}
The trick is that your text file must be both valid C/C++ code and shell script. Remember to exit from the shell script before the interpreter reaches the C/C++ code, or invoke exec magic.
Run with chmod +x main.c; ./main.c.
A shebang like #!/usr/bin/tcc -run isn't needed because unix-like systems will already execute the text file within the shell.
(adapted from this comment)
I used it in my C++ script:
//usr/bin/clang++ -O3 -std=c++11 "$0" && ./a.out; exit
#include <iostream>
int main() {
for (auto i: {1, 2, 3})
std::cout << i << std::endl;
return 0;
}
If your compilation line grows too much you can use the preprocessor (adapted from this answer) as this plain old C code shows:
#if 0
clang "$0" && ./a.out
rm -f ./a.out
exit
#endif
int main() {
return 0;
}
Of course you can cache the executable:
#if 0
EXEC=${0%.*}
test -x "$EXEC" || clang "$0" -o "$EXEC"
exec "$EXEC"
#endif
int main() {
return 0;
}
Now, for the truly eccentric Java developer:
/*/../bin/true
CLASS_NAME=$(basename "${0%.*}")
CLASS_PATH="$(dirname "$0")"
javac "$0" && java -cp "${CLASS_PATH}" ${CLASS_NAME}
rm -f "${CLASS_PATH}/${CLASS_NAME}.class"
exit
*/
class Main {
public static void main(String[] args) {
return;
}
}
D programmers simply put a shebang at the beginning of text file without breaking the syntax:
#!/usr/bin/rdmd
void main(){}
See:
https://unix.stackexchange.com/a/373229/23567
https://stackoverflow.com/a/12296348/199332
For C, you may have a look at tcc, the Tiny C Compiler. Running C code as a script is one of its possible uses.
$ cat /usr/local/bin/runc
#!/bin/bash
sed -n '2,$p' "$@" | gcc -o /tmp/a.out -x c++ - && /tmp/a.out
rm -f /tmp/a.out
$ cat main.c
#!/bin/bash /usr/local/bin/runc
#include <stdio.h>
int main() {
printf("hello world!\n");
return 0;
}
$ ./main.c
hello world!
The sed command takes the .c file and strips off the hash-bang line. 2,$p means print lines 2 to end of file; "$@" expands to the command-line arguments to the runc script, i.e. "main.c".
sed's output is piped to gcc. Passing - to gcc tells it to read from stdin, and when you do that you also have to specify the source language with -x since it has no file name to guess from.
Since the shebang line will be passed to the compiler, and # indicates a preprocessor directive, it will choke on a #!.
What you can do is embed the makefile in the .c file (as discussed in this xkcd thread)
#if 0
make $@ -f - <<EOF
all: foo
foo.o:
cc -c -o foo.o -DFOO_C $0
bar.o:
cc -c -o bar.o -DBAR_C $0
foo: foo.o bar.o
cc -o foo foo.o bar.o
EOF
exit;
#endif
#ifdef FOO_C
#include <stdlib.h>
extern void bar();
int main(int argc, char* argv[]) {
bar();
return EXIT_SUCCESS;
}
#endif
#ifdef BAR_C
#include <stdio.h> /* for puts */
void bar() {
puts("bar!");
}
#endif
The #if 0 / #endif pair surrounding the makefile ensures the preprocessor ignores that section of text, and the EOF marker marks where the make command should stop reading its input.
CINT:
CINT is an interpreter for C and C++ code. It is useful e.g. for situations where rapid development is more important than execution time. Using an interpreter the compile and link cycle is dramatically reduced facilitating rapid development. CINT makes C/C++ programming enjoyable even for part-time programmers.
You might want to check out ryanmjacobs/c, which was designed with this in mind. It acts as a wrapper around your favorite compiler.
#!/usr/bin/c
#include <stdio.h>
int main(void) {
printf("Hello World!\n");
return 0;
}
The nice thing about using c is that you can choose what compiler you want to use, e.g.
$ export CC=clang
$ export CC=gcc
So you get all of your favorite optimizations too! Beat that tcc -run!
You can also add compiler flags to the shebang, as long as they are terminated with the -- characters:
#!/usr/bin/c -Wall -g -lncurses --
#include <ncurses.h>
int main(void) {
initscr();
/* ... */
return 0;
}
c also uses $CFLAGS and $CPPFLAGS if they are set as well.
#!/usr/bin/env sh
tail -n +$(( $LINENO + 1 )) "$0" | cc -xc - && { ./a.out "$@"; e="$?"; rm ./a.out; exit "$e"; }
#include <stdio.h>
int main(int argc, char const* argv[]) {
printf("Hello world!\n");
return 0;
}
This properly forwards the arguments and the exit code too.
Quite a short proposal would exploit:
The current shell script being the default interpreter for unknown types (without a shebang or a recognizable binary header).
The "#" being a comment in shell and "#if 0" disabling code.
#if 0
F="$(dirname $0)/.$(basename $0).bin"
[ ! -f $F -o $F -ot $0 ] && { c++ "$0" -o "$F" || exit 1 ; }
exec "$F" "$#"
#endif
// Here starts my C++ program :)
#include <iostream>
#include <unistd.h>
using namespace std;
int main(int argc, char **argv) {
if (argv[1])
clog << "Hello " << argv[1] << endl;
else
clog << "hello world" << endl;
}
Then you can chmod +x your .cpp files and then ./run.cpp.
You could easily give flags for the compiler.
The binary is cached in the current directory along with the source, and updated when necessary.
The original arguments are passed to the binary: ./run.cpp Hi
It doesn't reuse the a.out, so that you can have multiple binaries in the same folder.
Uses whatever c++ compiler you have in your system.
The binary starts with "." so that it is hidden from the directory listing.
Problems:
What happens on concurrent executions?
A variant of John Kugelman's answer can be written in this way:
#!/bin/bash
t=`mktemp`
sed '1,/^\/\/code/d' "$0" | g++ -o "$t" -x c++ - && "$t" "$@"
r=$?
rm -f "$t"
exit $r
//code
#include <stdio.h>
int main() {
printf("Hi\n");
return 0;
}
Here's yet another alternative:
#if 0
TMP=$(mktemp -d)
cc -o ${TMP}/a.out ${0} && ${TMP}/a.out ${@:1} ; RV=${?}
rm -rf ${TMP}
exit ${RV}
#endif
#include <stdio.h>
int main(int argc, char *argv[])
{
printf("Hello world\n");
return 0;
}
I know this question is not a recent one, but I decided to throw my answer into the mix anyway.
With Clang and LLVM, there is no need to write out an intermediate file or call an external helper program/script (apart from clang/clang++/lli).
You can just pipe the output of clang/clang++ to lli.
#if 0
CXX=clang++
CXXFLAGS="-O2 -Wall -Werror -std=c++17"
CXXARGS="-xc++ -emit-llvm -c -o -"
CXXCMD="$CXX $CXXFLAGS $CXXARGS $0"
LLICMD="lli -force-interpreter -fake-argv0=$0 -"
$CXXCMD | $LLICMD "$@" ; exit $?
#endif
#include <cstdio>
int main (int argc, char **argv) {
printf ("Hello llvm: %d\n", argc);
for (auto i = 0; i < argc; i++) {
printf("%d: %s\n", i, argv[i]);
}
return 3==argc;
}
The above, however, does not let you use stdin in your C/C++ script.
If bash is your shell, then you can do the following to use stdin:
#if 0
CXX=clang++
CXXFLAGS="-O2 -Wall -Werror -std=c++17"
CXXARGS="-xc++ -emit-llvm -c -o -"
CXXCMD="$CXX $CXXFLAGS $CXXARGS $0"
LLICMD="lli -force-interpreter -fake-argv0=$0"
exec $LLICMD <($CXXCMD) "$@"
#endif
#include <cstdio>
int main (int argc, char **argv) {
printf ("Hello llvm: %d\n", argc);
for (auto i = 0; i < argc; i++) {
printf("%d: %s\n", i, argv[i]);
}
for (int c; EOF != (c=getchar()); putchar(c));
return 3==argc;
}
There are several places that suggest the shebang (#!) should remain, but it's illegal for the gcc compiler, so several solutions cut it out. In addition, it is possible to insert a preprocessor directive that fixes the compiler messages in case the C code is wrong.
#!/bin/bash
#if 0
xxx=$(mktemp)
awk 'BEGIN { print "#line 2 \"$0\""; first=1; }
{ if (first) first=0; else print $0 }' $0 |\
g++ -x c++ -o ${xxx} - && ${xxx} "$@"
rv=$?
\rm ${xxx}
exit $rv
#endif
#include <iostream>
int main(int argc,char *argv[]) {
std::cout<<"Hello world"<<std::endl;
}
As stated in a previous answer, if you use tcc as your compiler, you can put a shebang #!/usr/bin/tcc -run as the first line of your source file.
However, there is a small problem with that: if you want to compile that same file, gcc will throw an error: invalid preprocessing directive #! (tcc will ignore the shebang and compile just fine).
If you still need to compile with gcc, one workaround is to use the tail command to cut off the shebang line from the source file before piping it into gcc:
tail -n+2 helloworld.c | gcc -xc -
Keep in mind that all warnings and/or errors will be off by one line.
You can automate that by creating a bash script that checks whether a file begins with a shebang, something like
if [[ $(head -c2 $1) == '#!' ]]
then
tail -n+2 $1 | gcc -xc -
else
gcc $1
fi
and use that to compile your source instead of directly invoking gcc.
Just wanted to share: thanks to Pedro's explanation of the solutions using the #if 0 trick, I have updated my fork of TCC (Sugar C) so that all examples can be called with a shebang, finally with no errors when viewing the source in the IDE.
Now the code displays beautifully using clangd in VS Code for project sources. The samples' first lines look like:
#if 0
/usr/local/bin/sugar `basename $0` $@ && exit;
// above is a shebang hack, so you can run: ./args.c <arg 1> <arg 2> <arg N>
#endif
The original intention of this project has always been to use C as if it were a scripting language, with the TCC base under the hood, but with a client that prioritizes RAM output over file output (without the use of the -run directive).
You can check out the project at: https://github.com/antonioprates/sugar
I like to use this as the first line at the top of my programs:
For C (technically: gnu C as I've specified it below):
///usr/bin/env ccache gcc -Wall -Wextra -Werror -O3 -std=gnu17 "$0" -o /tmp/a -lm && /tmp/a "$@"; exit
For C++ (technically: gnu++ as I've specified it below):
///usr/bin/env ccache g++ -Wall -Wextra -Werror -O3 -std=gnu++17 "$0" -o /tmp/a -lm && /tmp/a "$@"; exit
ccache helps ensure your compiling is a little more efficient. Install it in Ubuntu with sudo apt update && sudo apt install ccache.
For Go (golang) and some explanations of the lines above, see my other answer here: What's the appropriate Go shebang line?