I am using Eclipse CDT on Linux. I have a long header file with 5k lines of code. When I try to open the declaration of some variable in this file by pressing F3, Eclipse freezes for about 20 seconds and then opens the declaration. This issue makes code navigation unusable in long files. In shorter files the declaration opens almost instantly.
I tried restarting Eclipse and rebuilding the index, but this did not help.
My Eclipse version is:
Version: Neon.1 (4.6.1)
Build id: Z20161111-1340
How can I work around this issue?
Due to the way CDT is architected, operations on larger files will be slower than operations on smaller files.
CDT obtains semantic information about the code for operations like Open Declaration from two places:
For the currently open file: from the AST (abstract syntax tree) it builds for that file.
For included header files and other files in the project: from the index, which is a searchable database of semantic information about the project.
The index is initially built by creating an AST for every file in the project, and storing information from them into a database. This is a time-consuming process, but it only has to be done once (and then it's incrementally updated every time you save a file), and once it's built, querying the index is fast (querying is about O(log n) in the size of the index).
On the other hand, since the AST represents code that is (potentially) being currently edited, it is constantly being rebuilt "as you type". Since building an AST is at least O(n) in the length of the file (possibly worse; I haven't done a careful analysis), operations that depend on the AST get slower as the length of the file you're editing increases.
Now, for workarounds:
Enabling some of the scalability settings in Preferences | C/C++ | Editor | Scalability may help, by restricting the kinds of operations that require building an AST for large files (notice you get to define the threshold for "large"). It's not immediately clear to me whether it will make Open Declaration faster; try it and see.
Your best bet, however, is to break your header up into smaller headers. This has the added advantage of reducing compile times (since not all translation units may need to include all parts of the header), and organizing your code better (this last one is a matter of taste; feel free to disagree).
Looks like this is Eclipse bug 455467.
The reason for the freeze is high CPU usage when opening a declaration.
I applied the workaround from Comment 5 and the freeze dropped to 1-5 seconds:
Changing all settings in
.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.cdt.codan.core.prefs
from RUN_AS_YOU_TYPE\=>true to RUN_AS_YOU_TYPE\=>false seems to help
us out of this but this is not really what we want.
As I understand it, this workaround partially disables Codan, the CDT static analysis framework.
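If you have more than one workspace to patch, the edit quoted above is easy to script. Here is a minimal sketch in Python, assuming a workspace at ~/workspace (adjust the path) and the literal RUN_AS_YOU_TYPE\=>true form shown in the prefs file; close Eclipse first so your edit isn't overwritten.

# Rough sketch: flip every RUN_AS_YOU_TYPE\=>true entry to false in the Codan
# preferences of one workspace. The workspace location is an assumption.
from pathlib import Path

workspace = Path.home() / "workspace"   # adjust to your own workspace
prefs = (workspace / ".metadata" / ".plugins" / "org.eclipse.core.runtime"
         / ".settings" / "org.eclipse.cdt.codan.core.prefs")

text = prefs.read_text()
prefs.with_name(prefs.name + ".bak").write_text(text)   # keep a simple backup
prefs.write_text(text.replace(r"RUN_AS_YOU_TYPE\=>true",
                              r"RUN_AS_YOU_TYPE\=>false"))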
Related:
I use QtCreator whenever I can although its performance is not great sometimes.
I have a feeling the situation gets worse with large source files; to put a number on it, I'll say over 1000 lines.
It seems that disabling a couple of Helper plugins makes it use less CPU.
Is there a way to know CPU usage by each plugin? Which plugins are the most CPU hungry?
Now I'm going with the following and CPU usage seems good (close to 1% almost all the time).
You can disable the ClangCodeModel and Cppcheck plugins to reduce CPU usage, but the main processing is done by a background parser that tokenizes and reads symbols from your source files. Sometimes a third-party library may contain a huge number of files and make Qt Creator slow. You can also reduce the set of files that must be parsed via the "Clang Code Model" panel (Tools > Options > C++ > Code Model).
I am having some trouble using Eclipse to navigate a massive project. What I am trying to do is search for where functions are defined, where classes are defined, and follow other references throughout the code.
I was previously using grep to search everything, and that was not a very good solution, as it took about 2 minutes for every search.
Is there a way to add all files to my Eclipse index?
Eclipse reports: "The file 'soc-core.c' is currently not part of the index."
Here's a screenshot to illustrate. I believe I have selected the appropriate options. Thanks so much!
If you have large files in terms of LOC, Eclipse may put you into scalability mode, where indexing did not work for me. So I changed the scalability setting to 50,000 lines, and now indexing works for large files too (under 50k lines).
It sounds like you might be hitting a limit that prevents indexing from finishing. Here are some things to try.
Increase the memory available to Eclipse. In your eclipse.ini file, set the -Xms and -Xmx values to bigger numbers (they belong after the -vmargs line, each on its own line). I'm using -Xms512m -Xmx2048m, but you may need even bigger.
Increase the "Cache limits" fields at the bottom of the Indexer preference page.
Start Eclipse and let it sit for a while. It should show a "C/C++ Indexer: (X%)" progress bar in the lower right corner. Give it time to get to 100%.
You might try rebuilding the index: Menu->Project->C/C++ Index->Rebuild.
In your project settings, you might need to add directories to C/C++ General->Paths and Symbols->Includes.
Get a newer version of eclipse-cpp. I had a version a long while back that never seemed to finish indexing--it would get stuck. I'm now using eclipse-cpp-kepler-R and it works great.
I have a solution with 21 C++ projects and 1 VB.NET project.
The IDE responds very slowly when I simply move the caret in a file or try to open the menu. The process seems to use 50% of the CPU for each movement.
It only happens with this solution and only on my machine.
The solution has total of 2380 source and header files, of which 1280 are header files.
I tried removing all connections to source control (Perforce), but it didn't help.
Also, I have Visual Assist installed, but even after uninstalling it, the same behavior continued.
Any idea?
Deactivate Intellisense.
Intellisense parses the whole project and slows down the IDE drastically. If you use Visual Assist then you won't really need it. Visual Assist is less resource-hungry and scans in the background; Intellisense steals too many resources during its parsing.
Could this apply in your case?
http://coolthingoftheday.blogspot.com/2008/03/visual-basic-2008-hotfix-to-fix-slow.html
Note that disabling Intellisense may also break stuff like the Class Wizard (at least I'm pretty sure it does in VS2005). As already suggested it's a good idea to get rid of all the temporary files like .ncb regularly, because they can get huge and will slow down the IDE.
Also, if you're using Visual Assist, try rebuilding the database, disabling it or installing a different version.
I have a few solutions with over 100 projects, so I know exactly how you feel. Solutions containing some managed projects are especially bad. Disabling Intellisense helps a lot. I've never seen such problems from Visual Assist (or other similar refactoring tools), and that fills in a lot of the missing functionality from losing Intellisense.
I've also encountered some projects that had code that would cause the Intellisense thread to endlessly loop and never finish parsing the code. Most of those times we were never able to pin down the exact bit of code that caused the problem. Certain heavy use of templates and nested macros were often high on the suspicion list.
The only good way to be sure that Intellisense is disabled is to create a directory with the same name as the ncb file. Go to your solution directory, delete the ncb, and create a directory named your_solution_name.ncb. Because it can't recreate the ncb file, you'll get an error box to click through every time you open the solution, but that's a small price to pay.
Simply deleting the ncb will mean that VS will just create it again. The methods that I've seen from inside the VS options will turn off some of the features but will not prevent it from trying to parse all your code.
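If you want to script that trick, here is a tiny sketch (the solution name is a placeholder); run it from the solution directory with the solution closed:

# Sketch of the .ncb-blocking trick described above: delete the stale .ncb and
# create a directory with the same name so Visual Studio cannot recreate the file.
import os

ncb = "MySolution.ncb"   # placeholder: use your actual solution name
if os.path.isfile(ncb):
    os.remove(ncb)
os.mkdir(ncb)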
I have a substantial body of source code (OOFILE) which I'm finally putting up on Sourceforge. I need to decide if I should go with a monolithic include directory or keep the header files with the source tree.
I want to make this decision before pushing to the svn repo on SourceForge. I expect a lot of people who use it after that move will keep a working copy checked out directly from SF, so I won't want to change the structure afterwards.
The full source tree has about 262 files in 25 folders. There are a lot more classes than that suggests: because of conforming to 8.3-character file names (yes, it dates back to Win3.1), many classes are in one file. As I used to develop with ObjectMaster, that never bothered me, but I will be splitting it up to conform to the more recent trend of minimising the number of classes per file. From a quick skim of the class list, there are about 600 classes.
OOFILE is a cross-platform product expected to be built on Mac, Windows and assorted Unix platforms. As it started life on Mac, with compilers that point to include trees rather than flat include dirs, headers were kept with the source.
Later, mainly to keep some Visual Studio users happy, a build was reorganised with a single include directory. I'm trying to choose between those models.
The entire OOFILE product covers quite a few domains:
database front-end
range of database backends
simple 2D graphing engine for Mac and Windows
simple character-mode report-writer for trivial html and text listing
very rich banding report-writer with Mac and Windows Preview and Printing and cross-platform generation of text, RTF, HTML and XML reports
forms integration engine for easy CRUD forms binding to the database, with implementations on PowerPlant and MFC
cross-platform utility classes
file and directory manipulation
strings
arrays
XML and tag generation
Many people only want to use it on a single platform and some of those code areas are pure legacy (eg: PowerPlant UI framework on classic Mac). It therefore seems people would appreciate not having headers from those unwanted areas dumped in their monolithic include directory.
I started thinking about having an include directory split up into a few of the domains above and then realised that was sounding more like the original structure.
In summary, the choices seem to be:
1. Keep the original model, all headers adjacent to source - max flexibility at the cost of some complex includes in projects.
2. One include directory with everything inside.
3. Split includes by domain, so there may be about 6 directories for someone using the lot, but a pure database user would probably have a single directory.
From a Unix build perspective, the recommended structure has been option 2. My situation is complicated by needing to keep Visual Studio and Xcode users happy (sniff, CodeWarrior, how I doth miss thee!).
Edit - the chosen solution:
I went with four subdirectories in include. I started trying to divide them up further by platform but it just got very noisy very quickly.
Personally I would go with 2, or 3 if really pushed.
But whichever you choose, please make it crystal clear in the build instructions how to set up the include paths. Nothing dooms an open source project more than it being really difficult to build - developers want a quick out-of-the-box experience and if it involves faffing around with many undocumented environment variables (or whatever) most will simply go away.
I have a source code of about 500 files in about 10 directories. I need to refactor the directory structure - this includes changing the directory hierarchy or renaming some directories.
I am using svn version control. There are two ways to refactor: one preserving svn history (using the svn move command) and the other without preserving it. I think refactoring while preserving svn history is a lot easier using Eclipse CDT and the SVN plugin (Visual Studio does not fit at all for directory restructuring).
But right now, since the code is not released, we have the option of not preserving history.
Still, there remains the task of changing the #include directives wherever the header files are included. I am thinking of writing a small script in Python that receives a map from current filename to new filename and makes the rename wherever needed (using something like sed). Has anyone done this kind of directory refactoring? Do you know of good related tools?
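For reference, the kind of script I have in mind would look roughly like this. It is only a sketch: the rename map and the src/ root are made-up examples, and it does a blind text replacement on #include lines rather than real parsing, so the resulting diff would need reviewing.

# Rough sketch: rewrite #include directives according to an old-path -> new-path map.
# The map and the "src" root below are hypothetical; run this on a clean checkout.
import re
from pathlib import Path

RENAMES = {                                   # hypothetical examples
    "util/strhelp.h": "core/strings/strhelp.h",
    "db/dbase.h": "backend/db/dbase.h",
}

INCLUDE_RE = re.compile(r'^(\s*#\s*include\s*["<])([^">]+)([">])')

def rewrite(line):
    # Replace only the path inside the quotes/brackets, keep the rest of the line
    m = INCLUDE_RE.match(line)
    if m and m.group(2) in RENAMES:
        return m.group(1) + RENAMES[m.group(2)] + m.group(3) + line[m.end():]
    return line

for src in Path("src").rglob("*"):
    if src.suffix in {".h", ".hpp", ".c", ".cpp", ".cc"}:
        old_lines = src.read_text().splitlines(keepends=True)
        new_lines = [rewrite(line) for line in old_lines]
        if new_lines != old_lines:
            src.write_text("".join(new_lines))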
If you're having to rewrite the #includes to do this, you did it wrong. Change all your #includes to use a very simple directory structure, at most two levels deep, and only use a second level to organize around architecture or OS dependencies (like sys/types.h).
Then change your make files to use -I include paths.
Voila. You'll never have to hack the code again for this, and compiles will blow up instantly if something goes wrong.
As far as the history part, I personally find it easier to make a clean start when doing this sort of thing; archive the old one, make a new repository v2, go from there. The counterargument is when there is a whole lot of history of changes, or lots of open issues against the existing code.
Oh, and you do have good tests, and you're not doing this with a release coming right up, right?
I would preserve the history, even if it takes a small amount of extra time. There's a lot of value in being able to read through commit logs and understand why function X is written in a weird way, or that this really is an off-by-one error because it was written by Oliver, who always gets that wrong.
The argument against preserving the history can be made in the following cases:
your code might have embarrassing things, like profanity and fighting among developers
you don't care about the commit history of your code, because it's not going to change or be maintained in the future
I did some directory refactoring like this last year on our code base. If your code is reasonably structured at the beginning, you can do about 75-90% of the work using scripts written in your language of choice (I used Perl). In my case, we were moving from a set of files all in one big directory to a series of nested directories depending on namespaces. So, a file that declared the class protocols::serialization::SerializerBase was located in src/protocols/serialization/SerializerBase. The mapping from the old name to the new name was trivial, so doing a find and replace on #includes in every source file in the tree was trivial, although it was a big change. There were a couple of weird edge cases that we had to fix by hand, but that seemed a lot better than either having to do everything by hand or having to write our own C++ parser.
Hacking up a shell script to do the svn moves is trivial. In tcsh it's foreach F ( $FILES ) ... end to adjust a set of files. Perl & Python offer better utility.
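For example, a Python equivalent of that loop might look like the sketch below; the file mapping is hypothetical, and it assumes svn is on your PATH and the working copy is clean.

# Rough sketch of driving "svn mv" from Python instead of a tcsh foreach loop.
# The mapping is hypothetical; "svn mv" records the rename so history is kept.
import subprocess

MOVES = {                                     # old path -> new path (examples only)
    "src/dbase.cpp": "src/backend/dbase.cpp",
    "src/dbase.h":   "include/backend/dbase.h",
}

for old, new in MOVES.items():
    # --parents creates intermediate target directories as needed
    subprocess.run(["svn", "mv", "--parents", old, new], check=True)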
It really is worth saving the history. Especially when trying to track down some exotic bug. Those who do not learn from history are doomed to repeat it, or some such junk...
As for altering all the files... There was a similar question just the other day over at:
https://stackoverflow.com/questions/573430/c-include-header-path-change-windows-to-linux/573531#573531