Is Doxygen compatible with Fortran code (subroutines) using ENTRYs? If it is, is there a special flag or something else to do to make it so?
This is the first time I'm using this software. It seems awesome, except that CALLs to an ENTRY seem to be ignored, both for reference-list generation and call-graph generation.
The Fortran implementation of doxygen does not support ENTRY. Its status: declared obsolescent in Fortran 2008. Although doxygen does not (yet) support all features of Fortran, there are no plans to support this particular feature.
In VS Code, with the C++ extension, is there any way I can get the parameter names for functions to be filled out automatically after e.g. selecting a function name suggested by intellisense? The parameters should then be filled out as if they were a snippet. This feature is common in several IDEs, such as Eclipse.
I think this is a perfectly valid question and nothing to do with IDE-like features as some are suggesting in the comments.
Short answer is No.
VS Code's C/C++ extension will not expand the autocompletion feature to the argument list of a function.
The more involved answer is that the authors of the C/C++ extension could do so but for some reason (probably a good one) have chosen not to.
The autocomplete feature is actually handled by the Language Server for C/C++. A Language Server is an implementation of the Language Server Protocol (LSP) which is simply a way to provide programming language-specific features for multiple code editors / IDEs.
The LSP, originally written by Microsoft for VS Code, absolutely permits such types of completion. After all, that is how snippets are handled. If you have a look at the LSP's specification page you will see the relevant bit about the CompletionItem's insertText field.
insertText is just a string, so theoretically the server could pass the function name + argument list in the format of a snippet, set the field insertTextFormat == 2 (marking it as a snippet), and then pressing Tab would autocomplete to exactly what you want.
Our Fortran language server implements exactly this, for example.
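As a rough illustration, a completion item carrying such a snippet could look like the following sketch (hypothetical: it uses the nlohmann/json library, and the function draw and its parameters are made up; only the insertText/insertTextFormat field names come from the LSP spec):

#include <nlohmann/json.hpp>

// Hypothetical server-side sketch: insertText carries the call in VS Code
// snippet syntax, and insertTextFormat == 2 marks it as a snippet
// (InsertTextFormat.Snippet in the LSP spec).
nlohmann::json make_completion_item()
{
    return {
        {"label", "draw(int x, int y)"},
        {"insertText", "draw(${1:x}, ${2:y})"}, // tab stops for each argument
        {"insertTextFormat", 2}                 // 2 == Snippet
    };
}

Accepting such an item would insert draw(x, y) with the cursor on x, and Tab would jump to y.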
You might want to open a feature request/ask the question in their repo. They will probably have a better answer as to why it is the way it is. If I had to guess I would say that it is related to overloading.
My understanding is that one step of the compilation of a program (irrespective of the language, I guess) is parsing the source file into some kind of space-separated tokens (this tokenization would be done by what's referred to as the scanner in this answer). For instance, I understand that at some point in the compilation process, a line containing x += fun(nullptr); is separated into something like
x
+=
fun
(
nullptr
)
;
Is this true? If so, is there a way to have access to this tokenization of a C++ source code?
I'm asking this question mostly out of curiosity; I do not intend to write a lexer myself.
And the reason I'm curious to know whether one can leverage the compiler is that, to give an example, before meeting [[noreturn]] & Co. I would never have considered [[ a valid token, if I were to write a lexer myself.
Do we necessarily need a true, actual use case? I don't think we do, if I'm simply curious about whether an existing tool can do something.
However, if we really need a use case,
let's say my target is to write a C++ function which reads in a C++ source file and returns a std::vector of the lexemes it's made up of. Clearly, a requirement is that concatenating the elements of the output should make up the whole text again, including line breaks and every other byte of it.
With the restriction mentioned in the comment (tokenization that keeps __DATE__ unexpanded) it seems rather manageable. What you need are the preprocessing tokens. The Boost::Wave preprocessor necessarily creates a token list, because it has to work on those tokens.
Basile correctly points out that it's hard to assign a meaning to those tokens.
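For a feel of the API, here is a minimal sketch modeled on Boost.Wave's quick_start sample (note that iterating a wave::context yields the preprocessed token stream, so to keep __DATE__ unexpanded you would drive the raw lexer in boost::wave::cpplexer directly instead):

#include <boost/wave.hpp>
#include <boost/wave/cpplexer/cpp_lex_token.hpp>
#include <boost/wave/cpplexer/cpp_lex_iterator.hpp>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

int main(int argc, char* argv[])
{
    if (argc < 2) return 1;

    // Read the whole translation unit into a string.
    std::ifstream instream(argv[1]);
    std::string instring(std::istreambuf_iterator<char>(instream.rdbuf()),
                         std::istreambuf_iterator<char>());

    // The context object drives lexing and preprocessing.
    typedef boost::wave::cpplexer::lex_iterator<
            boost::wave::cpplexer::lex_token<> > lex_iterator_type;
    typedef boost::wave::context<std::string::iterator, lex_iterator_type>
            context_type;
    context_type ctx(instring.begin(), instring.end(), argv[1]);

    // Print each token's text; concatenated, the tokens reproduce the
    // preprocessed source.
    for (context_type::iterator_type it = ctx.begin(); it != ctx.end(); ++it)
        std::cout << (*it).get_value();
}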
C++ is a very complex programming language.
Be sure to read the C++11 draft standard n3337 before even attempting to parse C++ code.
Look inside the source code of existing open source C++ compilers, such as GCC (at least GCC 10 in October 2020) or Clang (at least Clang 10 in October 2020)
If you have to write your C++ parser from scratch, be sure to have the budget for at least a full person year of work.
Look also into existing C++ static source code analyzers, such as Frama-C++ or Clang static analyzer. Consider adapting one of them to your needs, but do document in writing your needs before starting coding. Be aware of Rice's theorem.
If you want to parse a small subset of C++ (you'll need to document and specify that subset), consider using parser generators like ANTLR or GNU bison.
Most compilers build some internal representations, in particular some abstract syntax tree. Read the Dragon book for more.
I would suggest instead writing your own GCC plugin.
Indeed, it would be tied to some major version of GCC, but you'll save months of work.
Is this true? If so, is there a way to have access to this tokenization of a C++ source code?
Yes, by patching some existing open-source C++ compiler, or by extending it with your plugin (there are licensing conditions related to both approaches).
let's say my target is to write a C++ function which reads in a C++ source file and returns a std::vector of the lexemes it's made up of.
The above specification is ambiguous.
Do you want the lexemes before or after the C++ preprocessing phase? In other words, what would be the lexeme for e.g. __DATE__ or __TIME__? Read e.g. the documentation of GNU cpp ... If you happen to use GCC on Linux (see gcc(1)) and have some C++ translation unit foo.cc, try running g++ -C -E -Wall foo.cc > foo.ii and look (using less(1)...) into the generated preprocessed form foo.ii. And what about template expansion, preprocessor conditionals, or preprocessor stringizing?
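To make the __DATE__ point concrete, consider this toy translation unit (a minimal sketch; the expanded date shown in the comment is of course just an example):

// foo.cc
#include <iostream>
int main()
{
    // After "g++ -E foo.cc" the token __DATE__ is gone: the preprocessor
    // has replaced it with a string literal such as "Oct 20 2020".
    std::cout << __DATE__ << '\n';
}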
I would suggest writing your own GCC plugin working on GENERIC representations. You could also start PhD work related to your goals.
Notice that generating C++ code is a lot easier than parsing it.
Look inside Qt for an example of software generating C++ code. You could consider using GNU m4, or GNU gawk, or GNU autoconf, or GPP, or your own C++ source generator (perhaps with the help of GNU bison or of ANTLR) to generate some of your C++ code.
PS. On my home page you'll find a hyperlink to some draft report related to your question, and another hyperlink to an open-source program generating C++ code. It sadly seems that I am forbidden to give these hyperlinks here, but you can find them in two mouse clicks. You might also look into two European H2020 projects funding that draft report: CHARIOT & DECODER.
I've been using #include <minmax.h> in my scripts and using min() and max() as expected. I showed this to someone and they had never seen it before, said it wasn't working for them and asked me why I wasn't including <algorithm> and calling std::min() or std::max().
So my question is basically: why aren't I? I found this in a book on C++, C++ Design Patterns and Derivatives Pricing. Googling "minmax.h", I find a reference to that very book in the top result, which makes me think even more that it's something abnormal.
Is anyone able to tell me what this is?
The C++ programming language is accompanied by the C++ Standard Library. There is no <minmax.h> header in the C++ Standard Library. No header in the standard library has the .h extension. Furthermore, the header is not part of the ported C standard library either, as those headers get a c prefix and lose the extension, like <cmath> (which replaces the C standard-library <math.h> header) or <ctime> (which replaces the <time.h> header) when used from the C++ Standard Library.
The std::min and std::max functions are declared inside the <algorithm> header.
That being said, there does indeed appear to be some MS header called <minmax.h> inside the C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\ucrt folder, which defines min and max macros, not functions. But that is an implementation-specific header, and you should be using the standard <algorithm> header instead.
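For completeness, the portable usage is simply this (minimal sketch):

#include <algorithm>
#include <iostream>

int main()
{
    std::cout << std::min(3, 7) << '\n'; // prints 3
    std::cout << std::max(3, 7) << '\n'; // prints 7
}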
why aren't I?
People do all sorts of odd things that they heard about somewhere once, be it in school or as some "solution" that fixed their immediate need (usually under deadline pressure). They then keep doing things the same way because they "work". But I'm glad you stopped for a minute to ask. Hopefully we'll steer you back onto the portable C++ route :)
No, there's no need to use the non-standard minmax.h header. On Windows you need to define the NOMINMAX macro before you include any headers whatsoever, and include <algorithm> right after this macro definition. This is just to free the min and max symbols from being taken over by ill-conceived WINAPI macros. In C++, std::min etc. are in the <algorithm> header and that's what you ought to be using. Thus, the following is portable:
#define NOMINMAX
#include <algorithm>
// other includes
#undef NOMINMAX
// your code here
See this answer for details for Windows.
An ancient reference w.r.t. C++, using ancient compilers, supplying examples using non-standard C++ (e.g. headers such as minmax.h)
Note that the book you are mentioning, C++ Design Patterns and Derivatives Pricing (M.S. Joshi), was first released in 2004, with a second edition released in 2008. As can be seen in the extract below, the examples in the book relied on successful compilation on ancient compiler versions (not so ancient back in 2004, but still far from recent versions).
Appendix D of the book even specifically mentions that the code examples covered by the book may not be standard-compliant, followed by the pragmatic advice that "[...] fixing the problems should not be hard" [emphasis mine]:
The code has been tested under three compilers: MingW 2.95, Borland 5.5, and Visual C++ 6.0. The first two of these are available for free so you should have no trouble finding a compiler that the code works for. In addition, MingW is the Windows port of the GNU compiler, gcc, so the code should work with that compiler too. Visual C++ is not free but is popular in the City and the introductory version is not very expensive. In addition, I have strived to use only ANSI/ISO code so the code should work under any compiler. In any case, it does not use any cutting-edge language features so if it is not compatible with your compiler, fixing the problems should not be hard.
The compiler releases listed above are very old:
Borland 5.5 was released in 2000,
Visual C++ 6.0 was released in 1998,
GCC 2.95 was released in 1999.
As with other ancient compilers, it is not surprising that these supplied non-standard headers such as minmax.h; it seems to have been a somewhat common non-standard convention, based on e.g. the following references:
Gnulib Module List - Extra functions based on ANSI C 89: minmax.h, possibly accessible in GCC 2.95,
Known problems in using the Microsoft Visual C++ compiler, version 6.0:
The MS library does not define the min and max algorithms, which should be found in <algorithm>. The workaround we use is to define a new header file, say minmax.h, which we include in any file that uses these functions: [...]
What is the worst real-world macros/pre-processor abuse you've ever come across?:
Real-world? MSVC has macros in minmax.h, called max and min, which cause a compiler error every time I intend to use the standard std::numeric_limits<T>::max() function.
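That last clash is easy to reproduce. A minimal sketch of what goes wrong, and of the usual parentheses workaround:

#include <limits>

// Simulate what <minmax.h> (or <windows.h> without NOMINMAX) does:
#define max(a, b) (((a) > (b)) ? (a) : (b))

// int broken = std::numeric_limits<int>::max(); // error: the max(a, b)
//                                               // macro wants two arguments
int works = (std::numeric_limits<int>::max)();   // parentheses prevent the
                                                 // function-like macro expansion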
Alternative references for the C++ language
Based on the passage above, the book should most likely be considered primarily a reference for its main domain, quant finance, and not so much for C++, other than the latter being a tool used to cover the former.
For references that are focusing on the C++ language and not its application in a particular applied domain (with emphasis on the latter), consider having a look at:
Stack Overflow C++ FAQ: The Definitive C++ Book Guide and List.
Is there any way to take advantage of Microsoft's SAL, e.g. through a C parser that preserves this information? Or is it made by Microsoft, for Microsoft's internal use only?
It would be immensely useful for a lot of tasks, such as creating C library bindings for other languages.
Not sure what you mean by "take advantage of", but currently the VS 2011 Beta uses the SAL annotations when performing code analysis, via the /analyze option. The annotations are just pure macros from sal.h which Microsoft encourages the use of (at least in a VS environment).
If you just want to preserve the info after a preprocessing step, you could make the macros expand to themselves, or alter one of the existing open-source preprocessors to exclude the symbols (VS also has a few expansion options for the SAL macros). But using the information provided by the annotations will require something along the lines of a custom LLVM pre-pass or GCC plugin (if compiling the code, though you can at the same time use them for binding generation).
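For reference, annotated code looks roughly like this (a minimal sketch; the function and its contract are made up, but _In_, _Out_ and _In_reads_ are real macros from sal.h):

#include <cstddef> // std::size_t
#include <sal.h>   // MSVC-specific; the macros are inert in a normal build

// _In_reads_(len) tells /analyze that buf must point to at least len
// readable elements; _Out_ promises that *result is written before returning.
void sum(_In_reads_(len) const int* buf,
         _In_ std::size_t len,
         _Out_ int* result)
{
    int total = 0;
    for (std::size_t i = 0; i < len; ++i)
        total += buf[i];
    *result = total;
}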
SAL annotations can find tons of bugs with static analysis.
http://msdn.microsoft.com/en-us/library/windows/hardware/hh454825(v=vs.85).aspx
I have never had to set it up from scratch, but my development environment runs PREfast to do static analysis every time I build something. Finding bugs at compile time is better than finding them at runtime.
In my own experience, source annotations are a useful way to quickly see how parameters are supposed to be passed, or how they are assumed to be passed. As far as taking advantage of that goes, I agree that a pre-pass might be the only way to take real advantage, and might I suggest writing your own if you have specific needs or expectations on its output.
Hope I helped.
I'm trying to figure out which of the additions to the algorithm headers are supported by a given implementation (gcc and MSVC would be enough).
The simple way would be to do it the same way one would do it for core language features: check the compiler version and define a macro if a feature is supported. Unfortunately I cannot find a list that shows the version numbers for either compiler.
Is simply checking for a generic C++0x macro (GXX_EXPERIMENTAL or __cplusplus) enough or should I check the change lists for the compilers and build my macros based on those lists?
http://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html#status.iso.200x
Since all compiler vendors provide a nice list of what's available in which version, and you would test the functionality anyway, I would use compiler versions to check for specific features. Or demand that the user use at least a good version, and not worry about it.
__cplusplus is not necessarily a C++0x macro; it tells you nothing. GXX_EXPERIMENTAL has existed since GCC 4.3, so that's pretty useless too.
This one is for GCC.
This one is for MSVC. (mind you: partially implemented means broken)
This one is for Intel.
Here you can find what macros to check against for a specific version of a compiler.
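If you go the compiler-version route, the checks end up looking something like this (a hypothetical sketch: the feature and the version cut-offs are made up; only the predefined macros __GNUC__, __GNUC_MINOR__ and _MSC_VER are real):

// Suppose the vendor tables say "feature X" arrived in GCC 4.5 and VS 2010.
#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))
#define HAVE_FEATURE_X 1
#elif defined(_MSC_VER) && _MSC_VER >= 1600 // 1600 == Visual Studio 2010
#define HAVE_FEATURE_X 1
#endif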
As far as I could figure out, the only proper solution is to have a build script that tries to compile and run a file that uses the feature and contains a runtime assertion (a sketch of such a test file follows the list below). Depending on the outcome, have a #define CONFIG_NO_FEATURENAME or similar in a config file, and guard your uses and workarounds with an #ifndef.
This way it is possible to check if
the feature is available
the feature functions properly (depending on the correctness of the assertion)
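For instance, a test file for one C++0x feature could look like this (a hypothetical sketch; the file name and the CONFIG_NO_INITIALIZER_LISTS macro are made up to match the scheme above):

// check_initializer_lists.cpp, compiled and run by the build script.
// If compilation fails or the program exits non-zero, the script writes
// "#define CONFIG_NO_INITIALIZER_LISTS" into the generated config header.
#include <cassert>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3}; // the C++0x feature under test
    assert(v.size() == 3);       // the runtime assertion
    return 0;
}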