I have a huge template file of which only a few functions are used, and I want to isolate that part for testing and comment out the other half. What is the best way to find out which functions are actually used?
I am on a Windows system and the template file is a .hxx.
I like Mohammad's answer. Oops... he removed it - but basically: use a tool like nm - I don't know a Windows equivalent, but there's sure to be one - to query the object files for instantiations. While your templates may live in a .hxx, you can only meaningfully talk about the subset of methods instantiated by some body of client code. You may need to do this analysis with inlining disabled, to ensure the function bodies are actually instantiated in a tangible form in the object files.
In the less likely event that you might have instantiated stuff because some code handles cases that you know the data doesn't - and won't evolve to - use, then you may prefer automated run-time coverage analysis. Many compilers (e.g. GCC's g++ -ftest-coverage) and tools (e.g. purecov) provide this.
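To make the nm approach concrete, here is a minimal sketch (the file and type names are made up for illustration): only the template members that client code actually calls end up as symbols in the object file.
// widget.hxx - hypothetical template header
template <typename T>
struct Widget {
    T used(T x)   { return x + 1; }  // called by the client code below
    T unused(T x) { return x * 2; }  // never called, so never instantiated
};

// client.cpp
#include "widget.hxx"
int use_it() {
    Widget<int> w;
    return w.used(3);  // only Widget<int>::used(int) gets instantiated
}
Compiling with inlining off (e.g. g++ -c -O0 client.cpp) and running nm -C client.o should list Widget<int>::used(int) as a (weak) symbol while showing nothing for unused - that surviving subset is what you would keep for your test build.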
How about commenting out the whole file, then uncommenting individual methods when the linker complains, until the program compiles?
By the way, if you are using Visual Studio, commenting out the whole file is just a matter of using the following key shortcuts: Ctrl+A, then Ctrl+K+C. You can uncomment selected lines using Ctrl+K+U.
I am working on creating a package with two new commands, say foo and bar.
For example, if foo.ado contains:
program define foo
...
rex
end
program define rex
...
end
But my other command, bar.ado, also needs to call rex. Where should I put rex?
I see the following few options:
1. Create a rex.ado file as well.
2. Create a rex.do file and include it from within both foo.ado and bar.ado using include "`c(sysdir_plus)'r/rex.do" at the bottom of each file.
3. Copy the code into both foo.ado and bar.ado, which seems ugly because now the code must be maintained in two places.
What is best practice for organizing subroutines that are needed by both foo and bar?
Also, should the subroutine be called rex, _rex, or something else — maybe _foobar_rex — to indicate it is actually a sub-command that foo and bar depend on to work correctly rather than a separate command intended to stand on its own?
Create a rex.ado file as well
Your question is a bit too broad. Personally, I would go with the first option to be safe, although it really depends on the structure of your project. Sometimes including rex in a single ado file may be enough. This will be the case, for example, if foo is a wrapper command. However, for most other use cases, including two commands sharing a common program, I strongly believe that you will need to have a separate ado file.
The second option is obviously unnecessary, since the first does the same thing, plus it does not have to load the program every single time you call it. The third option is probably the worst in a programming context, as it may create conflicts and will be difficult to maintain down the road.
With regards to naming conventions, I would recommend using something like _rex only if you include the program as a subroutine in an ado file. Otherwise, rex will do just fine and will also indicate that the program has a wider scope within your project. It is also better, in my opinion, to provide a more elaborate explanation about the intended use of rex using a comment at the start of the ado file, rather than trying to incorporate this in the name.
struct Bar {}

struct Foo {
    Bar get() { return Bar(); }
}

void main() {
    auto f = Foo();
    f.get();
}
For example, you decide that get was a very poor choice of name, but you have already used it in many different files and manually changing every occurrence is very annoying.
You also can't really make a global substitution because other types may also have a method called get.
Is there anything for D to help refactor names for types, functions, variables etc?
Here's how I do it:
1. Change the name in the definition.
2. Recompile.
3. Go to the first error line reported and replace the old name with the new one.
4. Go to step 2.
That's semi-manual, but I find it to be pretty easy, and it goes quickly because the compiler error message will bring you right to where you need to be, and most editors can read those error messages well enough to dump you on the correct line; then it is a simple matter of telling the editor to repeat the last replacement. (In my vim setup with my hotkeys, I hit F4 for the next error message, then dot to repeat the last change, until it is done. Even a function with a hundred uses can be changed reliably in a couple of minutes.)
You could probably write a script that handles 90% of cases automatically too by just looking for ": Error: " in the compiler's output, extracting the file/line number, and running a plain text replace there. If the word shows up only once and outside a string literal, you can automatically replace it, and if not, ask the user to handle the remaining 10% of cases manually.
But I think it is easy enough to do with my editor hotkeys that I've never bothered trying to script it.
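If you did want to script that 90%, a rough sketch might look like the following (assumptions: dmd's file(line): Error: output format, a made-up tool name, and the string-literal check mentioned above is left out). It reads the compiler log on stdin and patches the reported lines in place.
#include <cstddef>
#include <fstream>
#include <iostream>
#include <iterator>
#include <regex>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    if (argc != 3) {
        std::cerr << "usage: rename-at-errors OLD NEW < compiler.log\n";
        return 1;
    }
    const std::string oldName = argv[1], newName = argv[2];
    // dmd error lines look like: path/file.d(123): Error: ...
    const std::regex errLine(R"(^(.+)\((\d+)\): Error: )");
    const std::regex word("\\b" + oldName + "\\b");

    std::string logLine;
    while (std::getline(std::cin, logLine)) {
        std::smatch m;
        if (!std::regex_search(logLine, m, errLine))
            continue;
        const std::string path = m[1].str();
        const std::size_t lineNo = std::stoul(m[2].str());

        // Load the file, patch the offending line, write it back.
        std::ifstream in(path);
        std::vector<std::string> lines;
        for (std::string l; std::getline(in, l);)
            lines.push_back(l);
        in.close();
        if (lineNo == 0 || lineNo > lines.size())
            continue;

        std::string& src = lines[lineNo - 1];
        // Only auto-replace the unambiguous case (exactly one occurrence);
        // anything else is left for manual review, as described above.
        const std::ptrdiff_t hits = std::distance(
            std::sregex_iterator(src.begin(), src.end(), word),
            std::sregex_iterator());
        if (hits != 1) {
            std::cerr << "manual review: " << path << ":" << lineNo << "\n";
            continue;
        }
        src = std::regex_replace(src, word, newName);

        std::ofstream out(path);
        for (const std::string& l : lines)
            out << l << "\n";
    }
    return 0;
}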
The one case this doesn't catch is if there's another function with the same name that might still compile. That should never happen if you do this change in isolation, because an ambiguous name wouldn't compile without it.
In that case, you could probably do a three-step compiler-assisted change:
1. Make sure your code compiles beforehand. Then add @disable to the thing you want to rename.
2. Compile. Every place the compiler complains about the symbol being unusable because it is disabled, do the find/replace.
3. Remove @disable and rename the definition. Recompile again to make sure there's nothing you missed, like overrides in child classes (the compiler will then complain "method foo does not override any function", so those stand right out too).
So yeah, it isn't fully automated, but just changing it and having the compiler errors help find what's left is good enough for me.
Some limited refactoring support can be found in major IDE plugins like Mono-D or VisualD. I remember that Brian Schott had plans to add similar functionality to his dfix tool by adding a dependency on dsymbol, but it doesn't seem to be implemented yet.
Note, however, that all such options are of very limited robustness right now. This is because figuring out the fully qualified name of any given symbol is a very complex task in D, one that requires full semantic analysis to be done 100% correctly. Think about local imports, templates, function overloading, mixins and how they all affect identifying the symbol.
In the long run, it is quite certain that we will need to wait until the reference D compiler frontend becomes available as a library before such a refactoring tool can be implemented in a clean and truly reliable way.
A good find-all feature can be better than a bad refactoring, which, as mentioned previously, requires semantic analysis.
Personally, I have a find-all feature in Coedit which displays the context of each match and works on all the project sources.
It makes processing the results fast.
I have the following requirements:
Add text at the entry and exit points of any function.
Do not alter the source code other than by inserting that text (so no preprocessor or anything like that).
For example:
void fn(param-list)
{
ENTRY_TEXT (param-list)
//some code
EXIT_TEXT
}
But it should work not only in such a simple case - it also has to cope with preprocessor directives!
Example:
void fn(param-list)
#ifdef __WIN__
{
ENTRY_TEXT (param-list)
//some windows code
EXIT_TEXT
}
#else
{
ENTRY_TEXT (param-list)
//some any-os code
if (condition)
{
return; //should become EXIT_TEXT
}
EXIT_TEXT
}
#endif
So my question is: is there a proper way of doing this?
I already tried some work with parsers used by compilers but since they all rely on running a pre-processor before parsing, they are useless to me.
Also, some of the token-generating parsers which do not need a preprocessor are of little use, because they build an in-memory mapping of tokens, which then leads to completely new source code being produced instead of the text simply being inserted.
One thing I am working on is trying it with FLEX (or JFlex); if this is a valid option, I would appreciate some input on it. ;-)
EDIT:
To clarify a little bit: The purpose is to allow something like a stack trace.
I want to trace every function call, and in order to follow the call hierarchy, I need to place a macro at the entry point and at the exit point of each function.
This builds a function-call trace. :-)
EDIT2: Compiler-specific options are not quite suitable, since we have many different compilers to use, many of which are probably not well supported by any tools out there.
Unfortunately, your idea is not only impractical (C++ is complex to parse), it's also doomed to fail.
The main issue you have is that exceptions will bypass your EXIT_TEXT macro entirely.
You have several solutions.
As has been noted, the first solution would be to use a platform-dependent way of computing the stack trace. It can be somewhat imprecise, especially because of inlining: small functions inlined into their callers do not appear in the stack trace, since no function call is generated at the assembly level. On the other hand, it's widely available, does not require any surgery on the code and does not affect performance.
A second solution would be to only introduce something on entry and use RAII to do the exit work. Much better than your scheme as it automatically deals with multiple returns and exceptions, it suffers from the same issue: how to perform the insertion automatically. For this you will probably want to operate at the AST level, and modify the AST to introduce your little gem. You could do it with Clang (look up the c++11 migration tool for examples of rewrites at large) or with gcc (using plugins).
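To illustrate the RAII half of that suggestion (the guard type and its name are invented here; the hard part, inserting the single line automatically, is exactly the AST rewriting discussed above):
#include <cstdio>

// Constructed on entry; the destructor runs on every exit path,
// including early returns and exceptions.
struct ScopeTracer {
    const char* name;
    explicit ScopeTracer(const char* n) : name(n) { std::printf("enter %s\n", name); }
    ~ScopeTracer() { std::printf("exit  %s\n", name); }
};

void fn(bool condition) {
    ScopeTracer trace(__func__);  // the only line a tool would need to insert
    // ... some code ...
    if (condition)
        return;                   // destructor still fires here
    // ... more code ...
}                                 // ... and here, and during stack unwinding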
Finally, you also have manual annotations. While it may seem underpowered (and a lot of work), I would highlight that you do not leave logging to a tool... I see 3 advantages to doing it manually: you can avoid introducing this overhead in performance sensitive parts, you can retain only a "summary" of big arguments and you can customize the summary based on what's interesting for the current function.
I would suggest using LLVM libraries & Clang to get started.
You could also leverage the C++ language itself to simplify your process: just insert a small object into the code that is constructed on function scope entry, and rely on the fact that it will be destroyed on exit. That should massively simplify recording the 'exit' of the function.
This does not really answer your question; however, for your initial need, you may use the backtrace() function from execinfo.h (if you are using GCC).
How to generate a stacktrace when my gcc C++ app crashes
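For reference, a minimal sketch of that backtrace() approach (glibc-specific; function names in the output are generally only resolved if you link with -rdynamic):
#include <execinfo.h>
#include <cstdio>
#include <cstdlib>

void print_current_stack() {
    void* frames[64];
    int count = backtrace(frames, 64);                  // capture raw return addresses
    char** symbols = backtrace_symbols(frames, count);  // turn them into printable strings
    for (int i = 0; i < count; ++i)
        std::printf("%s\n", symbols[i]);
    std::free(symbols);                                 // the caller owns the array
}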
I have a simple wrapper for a Mersenne twister random number generator. The purpose is to scale the number returned by the generator (between 0 and 1) to lie between argument-defined limits (begin and end).
So my function is
inline float xlRandomFloat(float begin, float end) {return (begin+((end-begin)*genrand_real2()));}
I don't believe the implementation of the genrand_real2() function is important, but if I am wrong it can be found here
The problem is that the function does not return the translated result. The scaling (multiplying by (end-begin)) seems to work correctly, but the addition of begin does not seem to make it into the returned value.
So if I call xlRandomFloat(5,10) - I get values between 0 and 5.
If I debug with GDB, and use the print function then it shows the correct result.
So then I tried separating things into lines to see what happens
inline float xlRandomFloat(float begin, float end) {
    float ret;
    ret = (end - begin) * genrand_real2();
    ret += begin;
    return ret;
}
When debugging, it jumped straight from the first line into the genrand_real2() function and skipped everything else entirely. That was really confusing, so I thought it might have something to do with the inlining.
I moved the function from the .hpp file to the .cpp and removed the inline keyword, and everything works correctly.
But why does this behavior occur, and how can I inline this function? Also, I am not sure if this is relevant, but often when I made changes to the sources, my make compilation would say there is nothing to be done, which is unusual, since normally I expect make to pick up changes in the sources and rebuild accordingly.
Any ideas?
Thanks
Zenna
Okay, several things at work here.
First, on the debugging, you describe what I'd think of as the more or less expected behavior, because when you inline a function, there's no generated code to go with the front matter of the function. So, the first statement there is
ret=(((end-begin)*genrand_real2()));
and the first step on that is to call genrand_real2(). If genrand_real2() is also inline, then you end up at the first statement in that, with no pause to catch your breath.
Second, make sure you're really running the code you think you are. Try making from a clean directory - some C++ compilers make precompiled pieces that they preserve to speed compilation. Make sure your inline definition has been completely removed or commented out from the header files.
Third, make a very simple program with an inline function and make sure it's behaving as you expect.
Your code is perfectly fine, there must be something wrong with the way you're compiling. Make sure your Makefile has the proper dependencies: the source files need to depend on the header files that they include. Tracking these dependencies is rarely done by hand, usually only for very small projects -- they are normally generated by a tool such as makedepend.
To see if inlining is causing the problem, just disable all optimizations by using the -O0 (dash capital-oh zero) option with GCC. Also make sure to enable debugging symbols with -g.
Whilst refactoring some old code I realised that a particular header file was full of function declarations for functions long since removed from the .cpp file. Does anyone know of a tool that could find (and strip) these automatically?
You could, if possible, make a test.cpp file that calls them all; the linker will flag the ones that have no definition as unresolved. This way your test code only needs to build - you don't need to worry about actually running it.
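For example, with a header like this (names invented for illustration), a throwaway test.cpp that calls every declared function will make the linker point straight at the stale ones:
// legacy.h - the header under suspicion
int still_implemented(int x);
int long_since_removed(int x);  // definition was deleted from the .cpp long ago

// test.cpp - call each declared function once; it only needs to build, not run
#include "legacy.h"
int main() {
    still_implemented(1);
    long_since_removed(1);  // linker: undefined reference to long_since_removed(int)
    return 0;
}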
PC-lint can be tuned for this dedicated purpose.
I tested the following code for your question:
void foo(int);
int main()
{
    return 0;
}
lint.bat test_unused.cpp
and got the following result:
============================================================
--- Module: test_unused.cpp (C++)
--- Wrap-up for Module: test_unused.cpp
Info 752: local declarator 'foo(int)' (line 2, file test_unused.cpp) not referenced
test_unused.cpp(2) : Info 830: Location cited in prior message
============================================================
So you can pass just warning number 752 for your purpose:
lint.bat -"e*" +e752 test_unused.cpp
-e"*" will remove all the warnings and +e752 will turn on this specific one
If you index the code with Doxygen you can see where each function is referenced from. However, you would have to browse through each class (one HTML page per class) and scan for the functions that have nothing pointing to them.
Alternatively, you could use ctags to generate a list of all functions in the code, and then use objdump or some similar tool to get a list of all functions in the .o files - and then compare those lists. However, this can be problematic due to name mangling.
I don't think there is such a thing, because some functions that have no body in the actual source tree might be defined in some external library. This could only be done with a script that makes a list of the functions declared in a header and verifies whether they are ever called.
I have a C++ ftplugin for Vim that is able to check and report unmatched functions - vimmers, the ftplugin suite is not yet straightforward to install. The ftplugin is based on ctags results (hence its heuristic could easily be adapted to other environments); sometimes there are false positives in the case of inline functions.
HTH,
In addition to Doxygen (@Milan Babuskov), you can see if there are warnings for this in your compiler. E.g. gcc has -Wunused-function for static functions, and -fdump-ipa-cgraph.
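For instance, in a made-up translation unit like the one below, compiling with g++ -Wunused-function (also enabled by -Wall) flags the static function that is never called; note this only covers functions with internal linkage, not stale declarations in headers:
// demo.cpp
static int helper_never_called(int x) { return x * 2; }  // gcc: 'helper_never_called' defined but not used
static int helper_used(int x) { return x + 1; }

int main() {
    return helper_used(0);
}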
I've heard good things about PC-Lint, but I imagine it's probably overkill for your needs.