I have tried working with the command line arguments in a small C++ program on Amiga 1200 (Workbench 3.1.4).
Using bebbo's cross-compiler (m68k-amigaos-g++, see https://github.com/bebbo/amiga-gcc), I compiled a simple CLI app that just prints its arguments. While it works fine when compiled with a 'normal' g++ on Windows, it fails in AmigaShell, both in the Amiga Forever emulator and on a real Amiga 1200.
I found on some forums that __stdargs should be used, which, as I understand it, tells the compiler to generate code as if the function were called with its parameters passed on the stack rather than in registers. Is that a correct understanding?
Is it normal that the Amiga (and g++) use registers by default, and that this has to be overridden for AmigaShell? I added __stdargs to the main() function, but that did not help.
Next, I read, again on some forum, that the -mcrt parameter has to be used when the compiler output is linked. I have struggled to find out what the parameter actually does. It seems to specify which standard C library (similar to glibc) gets linked? According to Google, the possible variants are -mcrt=nix13, -mcrt=nix20, and -mcrt=clib2 (see e.g. https://github.com/adtools/libnix).
The only one that worked was nix20 (nix13 did not link, and clib2 linked but the program did not work on the Amiga). Why do we need the standard C library in the first place?
I used -mcrt like this: m68k-amigaos-g++ args.o -mcrt=nix20 -o args and it finally worked.
Can anybody describe to me as a newbie a bit more background details of all this?
Here is my test program:
#include <iostream>
using std::cout;
#if defined (__AMIGA__)
#define MAIN_FNC __stdargs
#else
#define MAIN_FNC
#endif
MAIN_FNC int main( int argc, char *argv[] )
{
cout << "Arguments count:" << argc << " \n";
for ( int i = 0; i < argc; i ++ )
cout << i << ". [" << argv[i] << "]\n";
return 0;
}
You don't need MAIN_FNC; remove it. You also don't need to play with -mcrt=xxx. Just link with the -noixemul option.
m68k-amigaos-g++ args.o -noixemul -o args
By default ixemul.library is used/linked (in short and very simply, ixemul.library emulates some Unix behaviour, see here). That causes your problem.
More info about -noixemul related to gcc & AmigaOS here:
GCC and ixemul.library
This MCVE works fine in Visual Studio.
#include <experimental/generator>
#include <iostream>
std::experimental::generator<int> f() { for (int i = 0; i < 10; ++i) co_yield i; }
int main ()
{
    for (int i : f())
        std::cout << i << ' ';
    return 0;
}
but in g++ 10, which is listed as having full support for C++20's coroutines, it does not.
(Taking out experimental doesn't help.)
I am compiling thus: g++ -g -std=c++2a -fcoroutines -c main.cpp.
It complains that there is no include file generator, and if I take out the #include, it complains that generator is not part of std:: or is not defined. I suppose there's another name for it in the new standard? Or, if not, what do I do instead to get a coroutine that uses co_yield?
Nothing in GCC's status list alongside its coroutine support says it supports anything other than p0912r5, which does not provide std::generator, experimentally or otherwise.
I recall that VS added <experimental/generator> a few years ago; I guess GCC never did.
If it's currently proposed for inclusion in C++, and you can find the relevant proposal, perhaps you can track its support status. But honestly, for now, you'd be better off writing your own that works until it becomes part of some actual standard.
tl;dr: Though it is a coroutine, this feature is not part of the Coroutines TS.
If you need a generator for g++ 11 and above, copy-paste the one here:
https://en.cppreference.com/w/cpp/coroutine/coroutine_handle
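For reference, here is a minimal hand-rolled generator sketch using plain C++20 <coroutine> (this is not the library std::generator; the IntGenerator name and its members are my own) that makes the question's co_yield loop work with g++ 10 and -fcoroutines:

#include <coroutine>
#include <exception>
#include <iostream>

struct IntGenerator {
    struct promise_type {
        int current = 0;
        IntGenerator get_return_object() {
            return IntGenerator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(int v) noexcept { current = v; return {}; }
        void return_void() noexcept {}
        void unhandled_exception() { std::terminate(); }
    };

    std::coroutine_handle<promise_type> handle;
    explicit IntGenerator(std::coroutine_handle<promise_type> h) : handle(h) {}
    ~IntGenerator() { if (handle) handle.destroy(); }
    IntGenerator(const IntGenerator&) = delete;
    IntGenerator& operator=(const IntGenerator&) = delete;

    // Just enough iterator machinery for a range-based for loop.
    struct sentinel {};
    struct iterator {
        std::coroutine_handle<promise_type> handle;
        bool operator!=(sentinel) const { return !handle.done(); }
        iterator& operator++() { handle.resume(); return *this; }
        int operator*() const { return handle.promise().current; }
    };
    iterator begin() { handle.resume(); return iterator{handle}; }  // run to the first co_yield
    sentinel end() { return {}; }
};

IntGenerator f() { for (int i = 0; i < 10; ++i) co_yield i; }

int main() {
    for (int i : f())
        std::cout << i << ' ';
}

Compile with g++ -std=c++2a -fcoroutines main.cpp (or just -std=c++20 on g++ 11 and later).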
For example, suppose we have a string like:
string x = "for(int i = 0; i < 10; i++){cout << \"Hello World!\n\";}"
What is the simplest way to complete the following function definition:
void do_code(string x); /* given x that's valid c++ code, executes that code as if it were written inside of the function body */
The standard C++ libraries do not contain a C++ parser/compiler. This means that your only choice is to either find and link a C++ compiler library or to simply output your string as a file and launch the C++ compiler with a system call.
The first option, linking to a C++ compiler, would actually be quite doable with something like Visual Studio, which does indeed have DLL libraries for compiling C++ and spitting out a new DLL that you could load at runtime.
The second option is pretty much what any IDE does: it saves your editor buffer into a C++ file, compiles it by system-executing the compiler, and runs the output (sketched below).
That said, there are many languages with a built-in interpreter that would be more suitable for runtime code interpretation.
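A minimal sketch of that write-compile-run approach, assuming g++ is on the PATH (the snippet.cc and snippet names are arbitrary):

#include <cstdlib>
#include <fstream>
#include <string>

void do_code(const std::string& x) {
    {
        // Wrap the snippet in a main() and write it out as a complete translation unit.
        std::ofstream src("snippet.cc");
        src << "#include <iostream>\nusing namespace std;\nint main(){" << x << "return 0;}\n";
    }   // the scope closes the file before the compiler reads it
    // Compile and, only if that succeeds, run the result.
    std::system("g++ snippet.cc -o snippet && ./snippet");
}

There is no error handling here; treat it as a demonstration rather than production code.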
Not directly, since you're asking for C++ to be simultaneously compiled and interpreted.
But there is LLVM, which is a compiler framework and API. It would allow you to take, in this case, a string containing valid C++, invoke the LLVM infrastructure, and then use an LLVM-based just-in-time compiler, as described at length here. Keep in mind you must also support the C++ standard library, and you need some mechanism to map variables into your interpreted C++ and get data back out.
It is a big but worthy undertaking; it seems someone might have done something like this already, and maybe Cling is just that.
Use the Dynamic Linking Loader (POSIX only)
This has been tested on Linux and OSX.
#include <fstream>
#include <string>
#include <cstdlib>
#include <dlfcn.h>

void do_code( std::string x ) {
    // Wrap the snippet in an extern "C" function so its symbol name stays predictable.
    { std::ofstream s("temp.cc");
      s << "#include<iostream>\nextern \"C\" void f(){" << x << '}'; }
    // Compile the snippet into a shared object, then load it and call f().
    std::system( "g++ temp.cc -shared -fPIC -otemp.o" );
    auto h = dlopen( "./temp.o", RTLD_LAZY );
    reinterpret_cast< void(*)() >( dlsym( h, "f" ) )();
    dlclose( h );
}

int main() {
    std::string x = "for(int i = 0; i < 10; i++){std::cout << \"Hello World!\\n\";}";
    do_code( x );
}
Try it online! You'll need to compile with the -ldl flag to link against libdl. Don't copy-paste this into production code, as it has no error checking.
Works for me:
system("echo \"#include <iostream> \nint main() { for(int i = 0; i < 10; i++){std::cout << i << std::endl;} }\" >temp.cc; g++ -o temp temp.cc && ./temp");
I'm writing a smart-pointer template which is intended to be instantiated only for a given base class and its subclasses, and which provides boost::shared_ptr-like implicit conversions from MyPtr<T> to MyPtr<U> as long as the conversion from T* to U* is valid (valid base class and const-compatible).
This was working fine in VS2005 but not with g++ on Linux, so my colleague changed it there, and in doing so broke const-correctness.
My problem is that I want to unit-test that some conversions are not valid (assigning MyPtr<const T> to MyPtr<T>, for example), which means the test file must not compile! But you can't have a non-compiling file in the solution...
Is there some VS-specific #pragma or some SFINAE trick that could test that a given construct is NOT valid and therefore doesn't compile?
Thanks, --DD
You could run the command line compiler, cl, which is pretty easy to set up, and capture its error message output.
No makefile, and hardly any info from the solution/project is needed: just the include path and a source file. At its simplest you just need a program that "reverses" the exit code of another program:
#include <sstream>
#include <stdlib.h>
int main(int argc, char *argv[])
{
    std::ostringstream command;
    for (int n = 1; n < argc; n++)
        command << argv[n] << " ";
    return (system(command.str().c_str()) == EXIT_SUCCESS)
        ? EXIT_FAILURE : EXIT_SUCCESS;
}
It simply reconstitutes the arguments passed to it into a command line (omitting its own name), executes that command line, and then returns success if it fails and failure if it succeeds. That is enough to fool Visual Studio or make.
Technically the reconstituted command line should quote the arguments, but that will only be necessary if you are insane enough to put spaces in your build directory or source file names!
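As a side note (this assumes a compiler with C++11 type traits, so it will not help with VS2005 itself): if the property you want to forbid can be expressed with a trait such as std::is_convertible or std::is_assignable, you can assert the negative directly in an ordinary test file that does compile. The MyPtr below is only an illustrative stand-in for the template in the question.

#include <type_traits>

template <typename T>
struct MyPtr { T* p = nullptr; };  // placeholder; the real template has the conversions

// These asserts document and enforce which conversions must NOT exist,
// without needing a deliberately non-compiling source file.
static_assert(!std::is_convertible<MyPtr<const int>, MyPtr<int>>::value,
              "MyPtr<const T> must not convert to MyPtr<T>");
static_assert( std::is_convertible<MyPtr<int>, MyPtr<int>>::value,
              "MyPtr<T> must remain copyable");

int main() {}  // nothing to run; the checks happen at compile time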
If a given argument (in your case 'const T') would satisfy all the other unit tests except this one you're looking for, isn't it the sign that it would be a perfectly acceptable case?
In other words, if it works, why forbid it?
Presuming that your C++ compiler supports them, is there any particular reason not to use __FILE__, __LINE__ and __FUNCTION__ for logging and debugging purposes?
I'm primarily concerned with giving the user misleading data—for example, reporting the incorrect line number or function as a result of optimization—or taking a performance hit as a result.
Basically, can I trust __FILE__, __LINE__ and __FUNCTION__ to always do the right thing?
__FUNCTION__ is non-standard; __func__ exists in C99 / C++11. The others (__LINE__ and __FILE__) are just fine.
They will always report the right file and line (and function, if you choose to use __FUNCTION__/__func__). Optimization is a non-factor, since the expansion happens at compile time; it will never affect performance in any way.
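For instance, a typical logging macro (the LOG name and message format here are just an illustration) expands __FILE__, __LINE__ and __func__ at the call site, so every message carries the caller's location with no runtime cost beyond the print itself:

#include <cstdio>

// LOG is a made-up name; the macros expand wherever LOG is used.
#define LOG(msg) \
    std::fprintf(stderr, "%s:%d (%s): %s\n", __FILE__, __LINE__, __func__, (msg))

int main() {
    LOG("starting up");   // prints something like "main.cpp:8 (main): starting up"
    return 0;
}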
In rare cases, it can be useful to change the line that is given by __LINE__ to something else. I've seen GNU configure do that for some tests, so that appropriate line numbers get reported after it has inserted some voodoo between lines that does not appear in the original source files. For example:
#line 100
will make the lines that follow be numbered starting at __LINE__ 100. You can optionally add a new file name:
#line 100 "file.c"
It's only rarely useful, but if it is needed, there are no alternatives I know of. Actually, instead of a literal line number, a macro can be used too, as long as it expands to one of the two forms above. Using the Boost preprocessor library, you can increment the current line by 50:
#line BOOST_PP_ADD(__LINE__, 50)
I thought it was useful to mention this, since you asked about the usage of __LINE__ and __FILE__. One never gets enough surprises out of C++ :)
Edit: @Jonathan Leffler provides some more good use cases in the comments:
Messing with #line is very useful for pre-processors that want to keep errors reported in the user's C code in line with the user's source file. Yacc, Lex, and (more at home to me) ESQL/C preprocessors do that.
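A small self-contained illustration of the renumbering described above (the first value printed depends, of course, on where the statement sits in the real file):

#include <iostream>

int main() {
    std::cout << __LINE__ << '\n';                      // prints this line's real number (4 here)
#line 100 "renamed.cpp"
    std::cout << __LINE__ << ' ' << __FILE__ << '\n';   // prints 100 renamed.cpp
    return 0;
}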
FYI: g++ offers the non-standard __PRETTY_FUNCTION__ macro. Until just now I did not know about C99 __func__ (thanks Evan!). I think I still prefer __PRETTY_FUNCTION__ when it's available for the extra class scoping.
PS:
#include <string>
using std::string;

// Trim a __PRETTY_FUNCTION__ string down to "Class::method".
static string getScopedClassMethod( string thePrettyFunction )
{
    // Drop the parameter list (everything from the first '(').
    size_t index = thePrettyFunction.find( "(" );
    if ( index == string::npos )
        return thePrettyFunction;  /* Degenerate case */
    thePrettyFunction.erase( index );

    // Drop the return type (everything up to and including the last space).
    index = thePrettyFunction.rfind( " " );
    if ( index == string::npos )
        return thePrettyFunction;  /* Degenerate case */
    thePrettyFunction.erase( 0, index + 1 );

    return thePrettyFunction;  /* The scoped class name. */
}
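A quick usage sketch (Widget and update are hypothetical names; this assumes the helper above is in scope):

#include <iostream>

struct Widget {
    void update(int) {
        // __PRETTY_FUNCTION__ expands to something like "void Widget::update(int)";
        // the helper trims that down to "Widget::update".
        std::cout << getScopedClassMethod(__PRETTY_FUNCTION__) << '\n';
    }
};

int main() { Widget{}.update(0); }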
C++20 std::source_location
C++ has finally added a non-macro option, and it will likely dominate at some point in the future when C++20 becomes widespread:
https://en.cppreference.com/w/cpp/utility/source_location
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1208r5.pdf
The documentation says:
constexpr const char* function_name() const noexcept;
6 Returns: If this object represents a position in the body of a function,
returns an implementation-defined NTBS that should correspond to the
function name. Otherwise, returns an empty string.
where NTBS means "Null Terminated Byte String".
The feature is present in GCC 11.2 on Ubuntu 21.10 with -std=c++20. It was not available in GCC 9.1.0 with g++-9 -std=c++2a.
https://en.cppreference.com/w/cpp/utility/source_location shows usage is:
main.cpp
#include <iostream>
#include <string_view>
#include <source_location>
void log(std::string_view message,
const std::source_location& location = std::source_location::current()
) {
std::cout << "info:"
<< location.file_name() << ":"
<< location.line() << ":"
<< location.function_name() << " "
<< message << '\n';
}
int main() {
log("Hello world!");
}
Compile and run:
g++ -std=c++20 -Wall -Wextra -pedantic -o main.out main.cpp
./main.out
Output:
info:main.cpp:17:int main() Hello world!
__PRETTY_FUNCTION__ vs __FUNCTION__ vs __func__ vs std::source_location::function_name
Answered at: What's the difference between __PRETTY_FUNCTION__, __FUNCTION__, __func__?
Personally, I'm reluctant to use these for anything but debugging messages. I have done it, but I try not to show that kind of information to customers or end users. My customers are not engineers and are sometimes not computer savvy. I might log this info to the console, but, as I said, reluctantly except for debug builds or for internal tools. I suppose it does depend on the customer base you have, though.
I use them all the time. The only thing I worry about is giving away IP in log files. If your function names are really good you might be making a trade secret easier to uncover. It's sort of like shipping with debug symbols, only more difficult to find things. In 99.999% of the cases nothing bad will come of it.