Is there any tool or method that can speed up this process?
For instance, I just split the neatTrick.cpp source file into two separate files, neatTrickImplementation.cpp and neatTrickTests.cpp.
What I have to do now is go through the list of #includes at the top of neatTrick.cpp and determine which of them need to go into the implementation file and which need to go into the tests file. Some of the headers are required by both, some are not, and some may even be completely unnecessary.
I feel like my process (start with nothing, compile, see what's broken, add the proper include, compile again, repeat) produces the leanest code, but it is frustratingly slow. I think it'd be great if my IDE could analyze the rest of the headers in my project, see which ones could eliminate the current set of errors, and automate this task for me.
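To make the goal concrete, here is roughly the end state I'm after. The headers named below are hypothetical placeholders, not my real dependencies; the point is only that each new file should end up with exactly the includes it needs:

    // neatTrickImplementation.cpp -- hypothetical end state after the split
    #include "neatTrick.h"       // declarations being implemented: needed here
    #include <vector>            // used only by the implementation (assumption)

    // ...implementation code...

    // neatTrickTests.cpp -- hypothetical end state for the tests
    #include "neatTrick.h"       // needed by both files
    #include "testFramework.h"   // hypothetical test framework header: tests only

    // ...test cases...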
There was a talk by Chandler Carruth at Microsoft's GoingNative (a C++ conference) where he said that the Clang tooling project had something in the pipeline to solve exactly this problem.
From my understanding, it was presented as something no publicly available tool is able to do at the moment, and most people were pretty impressed by it.
So: at the moment there is no such tool. In the near future you will probably get something like this as a Clang-based tool that you can build yourself. Long-term, expect it to be a standard feature built on a Clang toolchain.
(A bit OT: there is currently a discussion on the Clang/LLVM developers list about a tooling/service infrastructure. The tools are not there yet but are under active development, currently by Google engineers, later probably by people across the industry and the Clang open-source community.)
During the ACCU conference at Oxford last April, one of the speakers, Peter Sommerlad, demoed exactly this functionality with a plugin for Eclipse CDT, written by one of his students. I don't know if this plugin is already publicly available, but maybe you could drop him an e-mail to ask...
This is a potentially dangerous question because interdisciplinary questions and answers will be biased, but I'll have a stab at it anyway. All in good spirit!
So, here we go. I'm writing a major editing mode for Emacs for a language that it has almost no support for yet, and I'm at the point where I have to decide on a way to generate project files. Below is an outline of the task ahead:
The templates have to represent project directory tree, not only single files.
The resulting files are of various formats, potentially including SGML-like languages, but not limited to that variety. The templates also have to generate C-like source code, Elisp source code, and plain text files such as a README, for example.
The templates must be processed in a batch upon a user-initiated action (as in: the user wants to create a project, so several files must be created in the user-appointed directory). It may be beneficial to be able to supervise the creation, but this is less important than the ability to run the process entirely automatically.
Bonus features:
The template language already has a user base (with the potential for reuse of existing templates).
The templates can be used for code snippets (they contain blanks which are filled in interactively once the user invokes a code-generating routine while editing a file).
Obvious things like cross-platform support and ease of use through both a graphical interface and the command line.
I did some research, but I won't share my results (yet) so as not to bias the answers. The problem with answering this question is not that the answer is hard to find, but that it is hard to choose one from many.
I'm developing a system based on Mustache for exactly the use case that you've described. The template language itself is a very simple extension of Mustache called Groome.
I also released a command-line tool called Molt that renders Groome templates. I'd be curious to know if it does everything that you need. I'm still adding features to the tool and haven't yet announced it. Thanks.
I set out to solve a similar problem several years back, when I wanted to use Emacs to generate code out of a UML diagram (cogre) and also to generate Makefiles from project specifications. I first tried to use Tempo, but when I tried to get the templates to nest, I ran into problems. I also looked into skeleton, but that didn't quite fit the plan either.
I used Google Templates for a little bit and liked the syntax, but ended up developing SRecode instead, borrowing the good bits from Google Templates. SRecode was written specifically for machine-generated code. The interaction for template insertion (i.e. what Tempo was written for) isn't first class in SRecode. For generating code from a data structure, however, it is very robust, has a lot of features, and automatically fills in variables. It works closely with your major mode and allows many nested templates, with control over the nested dictionary values. There is a subsystem that will use Semantic tags and generate code from them for a couple of languages. That means you can parse code in one language with Semantic and generate code in another language with SRecode using those tags. Nifty! Many parts of the CEDET reference manuals were built that way.
The templates themselves allow looping, if statements, and include statements. There are a couple examples in SRecode for making an 'application', such as the comment writer, and EDE uses it to create Makefiles, which is almost exactly what you are trying to do.
Another option is Generator, which offers “language-agnostic project bootstrapping with an emphasis on simplicity”. Installation requires Node.js and npm.
Generator’s emphasis on simplicity means it is very easy to learn how to make a template. Generator also saves you from having to reference templates by file paths – it looks for templates in ~/.generator.
However, there is no way to write README or LICENSE files for the template itself without those files being copied to the generated project. Also, post-generation commands written in the Makefile will be copied to the generated Makefile, even after they are no longer of use. Finally, the ad-hoc templating language doesn’t provide a way to escape its __lowercasevariables__ – though I can’t think of a language where that limitation would be a problem.
This is basically a duplicate of:
Netbeans or Eclipse for C++?
But that question is 3+ years old, and a lot has changed since then.
I have a large code base with a custom (but Makefile based) build system. The areas I am specifically wondering about include:
Syntax highlighting
Code navigation.
Code hints.
"ReSharper style" code helpers.
Documentation integration.
Debugger UI and features.
Has anyone had the chance to evaluate both Netbeans and Eclipse?
EDIT: As a followup question, are any of the Netbeans users here concerned about its future given Oracle's recent bad history with "open" efforts? (OpenSolaris, MySQL, OpenOffice)
Thank you
I cannot comment on Eclipse, but for NetBeans 7 I will list the things that are very important to me and that work fine so far:
code completion, go to declarations
pkg-config automatic include management for parsing
Stuff that sometimes works and sometimes doesn't:
find usages; sometimes it fails to find usages in other open projects
the debugger sometimes gets confused by unittest-cpp macros and will not go to the appropriate line
Stuff that is not yet working and that I care deeply about:
C++0x syntax highlighting (auto, lambdas, enum class, variadic templates, none of them are recognized by the built-in parser)
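For reference, here is a tiny made-up snippet with the constructs I mean; the built-in parser flags every one of them:

    #include <vector>

    enum class Color { Red, Green };               // enum class

    template <typename... Args>                    // variadic template
    void log(Args... /*args*/) {}

    int main() {
        auto v = std::vector<int>{1, 2, 3};        // auto + brace init
        auto twice = [](int x) { return 2 * x; };  // lambda
        log(Color::Red, twice(static_cast<int>(v.size())));
        return 0;
    }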
Stuff that is not quite working, but I could not care less:
Git integration. I enjoy using Git from the command line, so this is a non-issue.
All in all, the IDE is very usable. I hope to have a chance to try out the latest CDT on Eclipse Indigo, but so far I haven't had much of a real reason to investigate.
I cannot comment on Netbeans, but I can offer you information on Eclipse. I work with C++ on UNIX systems, and I have started to use Eclipse when exploring large code bases that I know little about. I don't use it to build, but it would be easy to integrate our build system with it, since it only needs the build commands.
Eclipse has most of what you are looking for: (I'm speaking of Eclipse/CDT)
Not only can you completely customize your syntax highlighting, you can also have it format the code with templates. My company has a code standard for spacing, tabs and formatting of functions and conditional code, and with little effort I was able to modify an existing template to meet our code standards.
The navigation is not bad: if you highlight and hover over a variable, it shows you the definition in a small pop-up bubble. If you do the same for a type, it will show you where the type is defined. For functions, it will show the first few lines of the implementation of the function, with an option to expand it and see the whole function. I find all of these nice for code discovery and navigation. You can also highlight a variable and use a right-click menu option to jump to its declaration.
I suppose by code hints you are referring to something like intellisense? This is the main reason why I use Eclipse when looking over a large code base. Just hit the '.' or '->' and a second later you get your options.
The debugger UI is quite capable. You can launch gdb within the tool and it allows you to graphically move through your code just as you would in a tool like ddd or Visual C++. It offers standard features like viewing registers, memory, watching variables, etc.
That being said, I have found some weaknesses. The first is that it doesn't easily support revision control systems other than CVS and SVN (i.e. integrated into the GUI). I found a plug-in for the system we use at my company, but it spews XML and Unicode garbage. It was easier to just use the revision control on the command line. I suspect this is the plug-in's issue and not Eclipse's. I wish there were better tool integration, though.
The second complaint is that for each project I have to manually set up the include directories and library paths. Perhaps this could be circumvented with an environment variable? Or I may just not know how to set things up correctly. Then again, if it is not obvious to a developer how to do this, I consider that a weakness of the tool.
All in all I like working with Eclipse. It is not my main editing environment, but I appreciate it for working on large code bases.
I'm a huge fan of Netbeans. I am in a similar situation to yours, but creating the project was very easy. Just point Netbeans at where the code is checked out and it figures out most things for itself. I rarely have to do any configuration. One thing to note though, if your tree is very large, it can take some time to fully index - and while it does, memory and cpu will be hosed on the box.
The integration with cvs is awesome, and the Hudson integration is very cool for CB. I've not used Git myself, though I should imagine it's a no-brainer.
One thing that does irritate me no end is that it does not behave very well with code relying heavily on templates, i.e. it shows lots of warnings and errors about types not being found, etc.
I have not used the latest version of Eclipse; I tried the major release before the current one and gave up because it did not have the same smooth project integration with the Makefiles etc. I find it's not as nice if you don't want to use its make system, though I could be wrong.
I don't use any of the code formatting provided; I prefer something like AStyle instead. I know that NetBeans does a good job with Java, but I have not used it for C++. I seem to remember CDT doing some odd stuff with indentation when formatting C++ code, especially if templates are involved, but that was at least two years ago.
Hope some of it helps - the best way to do this is to download and try for yourself and see what works for you. Anything we tell you is purely subjective.
I used to work with NetBeans and MinGW; I just tried 7.0.1.
I currently use Eclipse Indigo with CDT and MinGW. Performance-wise it's better (less CPU and memory usage).
NetBeans always generates a Makefile to compile; in Eclipse you can build directly with the CDT toolchain or use a Makefile, so Eclipse is more flexible.
Debugging: NetBeans might be better on Solaris/Linux.
I personally prefer Eclipse over NetBeans; I think Eclipse is more professional.
One particular issue that causes me quite a lot of grief with NetBeans 7.0 is that it tends to want to work with UTF-8 files, and not all of our C++ projects are UTF-8. It will issue a warning about opening such a file, and if you do open it, it will corrupt said file, which is a pain.
I've not found out how to make NetBeans handle this properly. Apparently the encoding can be changed, but only for the entire project. So presumably changing it to US-ASCII would stop this problem, although non-ASCII characters wouldn't display properly.
I know that E&C is a controversial subject and some say that it encourages a wrong approach to debugging, but still - I think we can agree that there are numerous cases when it is clearly useful - experimenting with different values of some constants, redesigning GUI parameters on-the-fly to find a good look... You name it.
My question is: Are we ever going to have E&C on GDB? I understand that it is a platform-specific feature and needs some serious cooperation with the compiler, the debugger and the OS (MSVC has this one easy as the compiler and debugger always come in one package), but... It still should be doable. I've even heard something about Apple having it implemented in their version of GCC [citation needed]. And I'd say it is indeed feasible.
Knowing all the hype about MSVC's E&C (my experience says it's the first thing MSVC users mention when asked "why not switch to Eclipse and gcc/gdb"), I'm seriously surprised that after quite some years GCC/GDB still doesn't have such a feature. Are there any good reasons for that? Is someone working on it as we speak?
It is a surprisingly non-trivial amount of work, encompassing many design decisions and feature tradeoffs. Consider: you are debugging. The debuggee is suspended. Its image in memory contains the object code of the source, and the binary layout of objects, the heap, and the stacks. The debugger is inspecting its memory image. It has loaded debug information about the symbols, types, address mappings, pc (ip) to source correspondences. It displays the call stack and data values.
Now you want to allow a particular set of possible edits to the code and/or data, without stopping the debuggee and restarting. The simplest might be to change one line of code to another. Perhaps you recompile that file, or just that function, or just that line. Now you have to patch the debuggee image to execute that new line of code the next time you step over it or otherwise run through it. How does that work under the hood? What happens if the new code is larger than the line of code it replaced? How does it interact with compiler optimizations? Perhaps you can only do this on a target specially compiled for EnC debugging. Perhaps you will constrain the sites where it is legal to EnC. Consider: what happens if you edit a line of code in a function suspended down in the call stack? When the code returns there, does it run the original version of the function or the version with your line changed? If the original version, where does that source come from?
Can you add or remove locals? What does that do to the call stack of suspended frames? Of the current function?
Can you change function signatures? Add fields to / remove fields from objects? What about existing instances? What about pending destructors or finalizers? Etc.
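To make the object-layout question concrete, here is a small hypothetical example (the type and field names are invented for illustration): suppose you are suspended inside sumX() and use EnC to add a field to Point.

    #include <vector>

    // Hypothetical type being debugged. Imagine thousands of Point instances
    // already live in the vector below when execution is suspended.
    struct Point {
        double x;
        double y;
        // An EnC edit that adds "double z;" here changes sizeof(Point) and the
        // layout the already-compiled code assumes. Existing instances were
        // allocated with the old layout, so the tool must either keep two
        // layouts alive at once or somehow migrate every live object.
    };

    double sumX(const std::vector<Point>& pts) {
        double total = 0.0;
        for (const Point& p : pts)   // compiled with fixed offsets/strides for
            total += p.x;            // the old layout -- all suspect after an edit
        return total;
    }

    int main() {
        std::vector<Point> pts(1000, Point{1.0, 2.0});
        return static_cast<int>(sumX(pts)) % 256;
    }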
There are many, many functionality details to attend to to make any kind of usable EnC work. Then there are many cross-tools integration issues necessary to provide the infrastructure to power EnC. In particular, it helps to have some kind of repository of debug information that can make the before- and after-edit debug information and object code available to the debugger. For C++, the incrementally updatable debug information in PDBs helps. Incremental linking may help too.
Looking from the MS ecosystem over into the GCC ecosystem, it is easy to imagine that the complexity and integration issues across GDB/GCC/binutils, the myriad of targets, the need for some EnC-specific target abstractions, and the "nice to have but inessential" nature of EnC are why it has not yet appeared in GDB/GCC.
Happy hacking!
(p.s. It is instructive and inspiring to look at what the Smalltalk-80 interactive programming environment could do. In St80 there was no concept of "restart": the image and its object memory were always live, and if you edited any aspect of a class, everything just kept running. In such environments object versioning was not a hypothetical.)
I'm not familiar with MSVC's E&C, but GDB has some of the things you've mentioned:
http://sourceware.org/gdb/current/onlinedocs/gdb/Altering.html#Altering
17. Altering Execution
Once you think you have found an error in your program, you might want to find out for certain whether correcting the apparent error would lead to correct results in the rest of the run. You can find the answer by experiment, using the gdb features for altering execution of the program.
For example, you can store new values into variables or memory locations, give your program a signal, restart it at a different address, or even return prematurely from a function.
Assignment: Assignment to variables
Jumping: Continuing at a different address
Signaling: Giving your program a signal
Returning: Returning from a function
Calling: Calling your program's functions
Patching: Patching your program
Compiling and Injecting Code: Compiling and injecting code in GDB
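To give a flavour of what those chapters cover, here is a small made-up program with the corresponding gdb commands shown as comments (a sketch, not a transcript of a real session):

    // demo.cpp -- build with: g++ -g -O0 demo.cpp -o demo
    #include <cstdio>

    int compute(int x) {
        int result = x * 2;
        // With a breakpoint on the next line you can, for example:
        //   (gdb) set var result = 42      // Assignment: store a new value
        //   (gdb) return 99                // Returning: pop the frame with a forced value
        return result;
    }

    int main() {
        int value = 10;
        // (gdb) break compute
        // (gdb) run
        // (gdb) call compute(5)            // Calling: invoke a function yourself
        // (gdb) jump LINENO                // Jumping: continue at another line
        std::printf("%d\n", compute(value));
        return 0;
    }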
This is a pretty good reference to the old Apple implementation of "fix and continue". It also references other working implementations.
http://sources.redhat.com/ml/gdb/2003-06/msg00500.html
Here is a snippet:
Fix and continue is a feature implemented by many other debuggers, which we added to our gdb for this release. Sun Workshop, SGI ProDev WorkShop, Microsoft's Visual Studio, HP's wdb, and Sun's Hotspot Java VM all provide this feature in one way or another. I based our implementation on the HP wdb Fix and Continue feature, which they added a few years back. Although my final implementation follows the general outlines of the approach they took, there is almost no shared code between them. Some of this is because of the architectural differences (both the processor and the ABI), but even more of it is due to implementation design differences.
Note that this capability may have been removed in a later version of their toolchain.
UPDATE: Dec-21-2012
There is a GDB Roadmap PDF presentation that includes a slide describing "Fix and Continue" among other bullet points. The presentation is dated July-9-2012 so maybe there is hope to have this added at some point. The presentation was part of the GNU Tools Cauldron 2012.
Also, I get it that adding E&C to GDB or anywhere in Linux land is a tough chore with all the different components.
But I don't see E&C as controversial. I remember using it in VB5 and VB6, and it was probably there before that. It's also been in Office VBA since way back, and it's been in Visual Studio since VS2005. VS2003 was the only one that didn't have it, and I remember devs howling about it; they added it back with VS2005 and it's been there since. It works with C#, VB, and also C and C++. It's been in MS core tools for 20+ years, almost continuously (counting VB when it was standalone, and subtracting VS2003). But you could still say they had it in Office VBA during the VS2003 period ;)
And JetBrains recently added it to their C# tool Rider. They bragged about it (rightly so, IMO) on their Rider blog.
When you get a third-party library (C, C++), open source (LGPL, say), that does not have good documentation, what is the best way to go about understanding it well enough to integrate it into your application?
The library usually has some example programs, and I end up walking through the code using gdb. Any other suggestions/best practices?
For an example, I just picked one from sourceforge.net, but it's just a broad engineering/programming question:
http://sourceforge.net/projects/aftp/
I frequently use a couple of tools to help me with this:
GNU Global. It generates cross-referencing databases and can produce hyperlinked HTML from source code. Clicking function calls will take you to their definitions, and you can see lists of all references to a function. Only works for C and perhaps C++.
Doxygen. It generates documentation from Javadoc-style comments. If you tell it to generate documentation for undocumented methods, it will give you nice summaries. It can also produce hyperlinked source code listings (and can link into the listings provided by htags).
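As a quick illustration of the Javadoc-style comments Doxygen picks up (the function below is invented, not from any particular library):

    /**
     * @brief Sends a local file to the remote host.
     *
     * @param path     Local path of the file to send; must exist and be readable.
     * @param retries  Number of times to retry after a transient failure.
     * @return 0 on success, or a negative error code on failure.
     */
    int send_file(const char* path, int retries);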
These two tools, along with just reading code in Emacs and doing some searches with recursive grep, are how I do most of my source reverse-engineering.
One of the better ways to understand it is to attempt to document it yourself. Trying to document it forces you to really dive in, test, and test, and test again, and make sure you know what each statement is doing and when. Then you can really start to understand what the previous developer may have been thinking (or not thinking, for that matter).
Great question. I think that this should be addressed thoroughly, so I'm going to try to make my answer as thorough as possible.
One thing that I do when approaching large projects that I've either inherited or am contributing to is to automatically generate documentation from their sources, UML diagrams, and anything else that can ease the various amounts of A.D.D. encountered when learning a new project. :)
I believe someone here already mentioned Doxygen; it's a great tool! You should look into it and write a small bash script that will automatically generate documentation for the application you're developing, in whatever tree structure you've set up.
One thing that I haven't seen people mention is BOUML! It's fantastic and free! It automatically generates reverse UML diagrams from existing sources, and it supports a variety of languages. I use this as a way to really capture the big picture of what's going on in terms of architecture and design before I start reading code.
If you've got the money to spare, look into Understand for %language-here%. It's absolutely great and has helped me in many ways when inheriting legacy code.
EDIT:
Try out ack (betterthangrep.com); it is a pretty convenient script for searching source trees. :)
Familiarize yourself with the information available in the headers. The functions you call will be declared there. Then try to identify the valid arguments and pre-/post-conditions of the functions, as those are your primary guidance (even if they are not documented!). The example programs are your next bet.
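For example, even an undocumented header tells you a lot if you read it with pre-/post-conditions in mind. The declarations below are hypothetical, but they show the kind of reading I mean:

    /* Hypothetical excerpt from the library's public header. Even without
     * documentation, a likely contract can be read off and then verified
     * with a small test program:
     *  - ftp_connect() presumably returns NULL on failure
     *  - ftp_send() presumably requires a non-NULL conn from ftp_connect()
     *  - the int return type suggests 0 = success, negative = error code
     *  - does timeout_ms == 0 mean "no timeout"? worth testing before relying on it
     */
    typedef struct ftp_connection ftp_connection;

    ftp_connection* ftp_connect(const char* host, int port);
    int             ftp_send(ftp_connection* conn, const char* local_path, int timeout_ms);
    void            ftp_close(ftp_connection* conn);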
If you have code completion/IntelliSense, I like opening up the library and going '.' or 'namespace::' and seeing what comes up. I always find it helpful; you can navigate through the objects/namespaces and see what functionality they have. This is of course assuming it's an OOP library with relatively good naming of functions/objects.
There really isn't a silver bullet other than just rolling up your sleeves and digging into the code.
This is where we earn our money.
Three things:
(1) Try to run the available test or example apps, set low debug levels, and walk through the logs.
(2) Use a source navigation tool such as Source Navigator or cscope (available on both Windows and Linux) and browse the code to understand the flow.
(3) In parallel, use gdb to step into the code while running the test/example apps.
In a nutshell, I'm searching for a working autocompletion feature for the Vim editor. I've argued before that Vim completely replaces an IDE under Linux and while that's certainly true, it lacks one important feature: autocompletion.
I know about Ctrl+N, Exuberant Ctags integration, Taglist, cppcomplete and OmniCppComplete. Alas, none of these fits my description of “working autocompletion:”
Ctrl+N works nicely (only) if you've forgotten how to spell class, or while. Oh well.
Ctags gives you the rudiments but has a lot of drawbacks.
Taglist is just a Ctags wrapper and as such, inherits most of its drawbacks (although it works well for listing declarations).
cppcomplete simply doesn't work as promised, and I can't figure out what I did wrong, or if it's “working” correctly and the limitations are by design.
OmniCppComplete seems to have the same problems as cppcomplete, i.e. auto-completion doesn't work properly. Additionally, the tags file once again needs to be updated manually.
I'm aware of the fact that not even modern, full-blown IDEs offer good C++ code completion. That's why I've accepted Vim's lack in this area until now. But I think a fundamental level of code completion isn't too much to ask, and is in fact required for productive usage. So I'm searching for something that can accomplish at least the following things.
Syntax awareness. cppcomplete promises (but, for me, doesn't deliver) correct, scope-aware auto-completion of the following (see the short example after this list):
variableName.abc
variableName->abc
typeName::abc
And really, anything else is completely useless.
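Concretely, given a made-up class like the one below, these are the three completion contexts, with the expected candidates noted in the comments:

    #include <iostream>
    #include <string>

    // Minimal made-up class, just to illustrate the three completion contexts.
    class Logger {
    public:
        void write(const std::string& msg) { std::cout << msg << '\n'; }
        static Logger& instance() { static Logger l; return l; }
    };

    int main() {
        Logger  byValue;
        Logger* byPtr = &byValue;

        byValue.write("dot");                 // variableName.abc   -> members of Logger
        byPtr->write("arrow");                // variableName->abc  -> same members, via pointer
        Logger::instance().write("scope");    // typeName::abc      -> static members / nested names
        return 0;
    }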
Configurability. I need to specify (easily) where the source files are, and hence where the script gets its auto-completion information from. In fact, I've got a Makefile in my directory which specifies the required include paths. Eclipse can interpret the information found therein, why not a Vim script as well?
Up-to-dateness. As soon as I change something in my file, I want the auto-completion to reflect this. I do not want to manually trigger ctags (or something comparable). Also, changes should be incremental, i.e. when I've changed just one file it's completely unacceptable for ctags to re-parse the whole directory tree (which may be huge).
Did I forget anything? Feel free to update.
I'm comfortable with quite a lot of configuration and/or tinkering but I don't want to program a solution from scratch, and I'm not good at debugging Vim scripts.
A final note, I'd really like something similar for Java and C# but I guess that's too much to hope for: ctags only parses code files and both Java and C# have huge, precompiled frameworks that would need to be indexed. Unfortunately, developing .NET without an IDE is even more of a PITA than C++.
Try YouCompleteMe. It uses Clang through the libclang interface, offering semantic C/C++/Objective-C completion. It's much like clang_complete, but substantially faster and with fuzzy-matching.
In addition to the above, YCM also provides semantic completion for C#, Python, Go, TypeScript etc. It also provides non-semantic, identifier-based completion for languages for which it doesn't have semantic support.
There’s also clang_complete which uses the clang compiler to provide code completion for C++ projects. There’s another question with troubleshooting hints for this plugin.
The plugin seems to work fairly well as long as the project compiles, but is prohibitively slow for large projects (since it attempts a full compilation to generate the tags list).
As requested, here is the comment I gave earlier:
Have a look at this:
Vim integration to MonoDevelop
For .NET stuff, at least.
OmniCompletion
This link should help you if you want to use MonoDevelop on Mac OS X.
Good luck and happy coding.
I've just found the project Eclim linked in another question. This looks quite promising, at least for Java integration.
I'm a bit late to the party but autocomplpop might be helpful.
Is what you are looking for something like IntelliSense?
insevim seems to address the issue.
link to screenshots here
Did someone mention code_complete?
http://www.vim.org/scripts/script.php?script_id=1764
But you did not like ctags, so this is probably not what you are looking for...