I have a gigantic C++ Builder 6 solution. When I try to compile it, I get the following error as soon as the linker starts its work:
It translates to:
---------------------------
Error
---------------------------
Access violation at address 0660EE22 in module 'ilink32.dll'. Reading from address 00000000.
---------------------------
OK
---------------------------
Does anyone have an idea why this happens and how I can fix it?
EDIT 1
Important note: the code sometimes compiles, mostly when I reset the working copy, modify the files only in Sublime Text, and use C++ Builder just for compiling, without opening a single file in the IDE.
EDIT 2
Some more details: the project has about 80,000,000 lines of code (according to C++ Builder). The largest file is about 70,000 lines, but you cannot say exactly, because there are a lot of

    #ifdef XY
    #endif

blocks.
The code itself was copy-pasted from an existing part and was reviewed by some coworkers. So I think it is a bug in C++ Builder, because the build actually succeeds (at least sometimes) if I only edit the files in Sublime Text or Notepad++ and use C++ Builder just to build.
To be honest, I don't think there is a real solution myself, but I hope someone knows this bug. According to Google, ilink32.dll is part of C++ Builder's linker, which is invoked automatically.
Maybe someone has a solution.
ilink32 has always had a lot of bugs. There's no chance of getting anything fixed in non-current versions, so your options are:
Look for workarounds on QC
Find your own workaround
Here are some QC searches that may or may not be useful to you.
AFAIK it is not possible to use a different linker. However, you can turn Incremental Linking on or off via the project options and see if that makes a difference. Incremental linking is a speed optimization; it makes no difference to the semantics of linking.
the project has about 80,000,000 lines of code (according to C++ Builder).
Well, that number counts all the lines in the precompiled headers for each source file, so maybe it doesn't mean much.
70K LOC is large for one source file; perhaps you could try refactoring the code into smaller object files, especially if adding to a big file does seem to trigger the problem.
It might be possible to identify which change you are making that triggers the bug. For example, it might be pushing a particular thing past some limit (e.g. the size of one object file, the number of object files, the size of static data, etc.).
You could delete the precompiled header files (that is, vclNN.csm, vclNN.#00, vclNN.#01, etc.) that are built and saved by default in the BCB6 lib directory. Perhaps they got corrupted, or could be rebuilt better. PCH management is difficult in BCB6 anyway. (I ended up defining my own "all.h" and having every source file do #include "all.h" followed by #pragma hdrstop.) Later versions, such as C++Builder XE, allow PCH injection, making this process a lot tidier.
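For reference, a minimal sketch of that pattern (the exact headers pulled into all.h are placeholders):

    // all.h - gathers the big, rarely-changing headers in one place
    #include <vcl.h>
    #include <vector>
    #include <string>

    // every .cpp then starts with exactly these two lines, so the compiler
    // reuses one shared precompiled image instead of many per-file variants
    #include "all.h"
    #pragma hdrstop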
Have a look at the actual link command being passed to ilink32 and see if there are any unnecessary object files or libraries in it. You could also delete and re-create the project files, as they can accumulate crud over time as a project is developed. Actually, that is probably a good idea anyway.
Another possibility might be to group some of the code into static libraries.
In all cases, make sure you are using good source control so you can back out any failed experiments that might make things worse.
Related
I'm trying to learn how to use Eclipse for C++ development (using MinGW) and have run into an interesting problem.
While writing a simple test program, the editor flags an error in my code; however, simply saving the file resolves the error... Why does this happen?
I would really like simple bugs like this to be caught without having to go and manually save the file...
I know it's really simple to just click "Save", but I know myself: I will forget, and I will spend hours trying to track down a bug that isn't actually a bug. (I'm sure this will also happen with things other than using namespace std;.)
Eclipse's CODAN tool runs as you type; unfortunately, it only parses dependencies on demand, usually on save.
Why? Eclipse's CODAN tool isn't exactly a gazelle, so having to track through all of the file's dependencies while the user is typing is probably a system killer. This may improve with time. In the meantime, save regularly.
And, to be honest, this is probably a dodge. It should only have to search dependencies when a dependency is added. But there are a lot of buts. What about when a dependency is added to a dependency? Or when a new dependency is hidden in a macro (don't do this) and is hard to parse without digging through the dependencies? Or when one is exposed by adding a define that triggers conditional compilation (I prefer separate implementation files, letting the linker sort it out, over conditional compilation)? What about...?
Blah. If people want to write garbage code, that's their problem. A static analyzer should focus on the needs of people who aren't trying to trick it and circumvent good style. That said, I don't know the CODAN code, and I don't know how deep one would have to go to change it to catch and handle the easy cases at a tolerable real-time rate.
But at the end of the day, the only analyzer you should pay attention to is the compiler--with the warning levels turned up to 11, of course. CODAN is not perfect. It misses and misinterprets stuff, and you may find yourself hunting a bug that's not a bug in your code. If the compiler has a bug, that's a different case, but a lot less likely. Definitely take CODAN's help, but before you spend time on an odd error, make sure it really is an error by saving and building the program.
CODAN configuring stuff:
Most of CODAN's options can be found by visiting Project->Properties on the menu and navigating the Properties dialogue to C/C++ General->Code Analysis.
To configure CODAN's run options (to turn off updating as you type, for example), go one step further to C/C++ General->Code Analysis->Launching.
You will also find that if you are editing included headers in another project, you will have to force an index rebuild to catch the modifications. Select Project->C/C++ Index->Rebuild from the menu for the project doing the including.
I'm trying to compile a relatively big legacy C++ project in Visual Studio 2013 using the /clr flag. The project generates a DLL.
I get the following run-time exception:
Type '<Module>' from assembly ... contains more methods than the current implementation allows
I must add that this happens in the Debug configuration only (Release works). Also, the project heavily uses templates and macros, which (I suppose) contribute to the large number of generated methods...
There is little to no documentation regarding this problem.
What I know from searching the net (don't know if it's accurate) is:
There is a limit of ~65K methods in a CLR DLL. All methods of all native classes go into one special '<Module>' class, so it poses a global limit.
One suggestion was to split the project, but that's not trivial due to inter-class dependencies. I suppose it is doable...
Any help would be appreciated.
I ended up separating the code into two DLLs and removing some code that I wasn't using. The hard part was identifying "dead" code and making sure it was code that used templates extensively (otherwise I was just removing drops from a bucket).
I know it's not a solution you want to hear, but I couldn't find any other working workaround.
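If splitting is too painful, it may also be worth keeping purely native code out of the IL with MSVC's #pragma managed; functions compiled in an unmanaged region become native code rather than methods of '<Module>'. A minimal sketch (NativeHelper is a made-up name, and I haven't verified this on a project of your size):

    #pragma managed(push, off)
    // everything in this region is compiled to native code, not IL, so it
    // does not count against the method limit of the '<Module>' type
    int NativeHelper(int x)
    {
        return x * 2;
    }
    #pragma managed(pop)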
I have been struggling with this issue for a few weeks with VS2015. In the end I found the linker option /OPT:REF, which can be found under Project properties->Linker->Optimization->References. This removed about 12MB from the size of the output DLL, and the exception is no longer thrown at run time.
How do I find which parts of the code are taking a long time to compile?
I am already using precompiled headers for all of my headers, and they definitely improve the compilation speed. Nevertheless, whenever I make a change to my C++ source file, compiling it takes a long time (this is CPU/memory-bound, not I/O-bound -- it's all cached). Furthermore, this is not related to the linking portion, just the compilation portion.
I've tried turning on /showIncludes, but of course, since I'm using precompiled headers, nothing is getting included after stdafx.h. So I know it's only the source code that takes a while to compile, but I don't know what part of it.
I've also tried doing a minimal build, but it doesn't help. Neither does /MP, because it's a single source file anyway.
I could try dissecting the source code and figuring out which part is a bottleneck by adding/removing it, but that's a pain and doesn't scale. Furthermore, it's hard to remove something and still let the code compile -- error messages, if any, come back almost immediately.
Is there a better way to figure out what's slowing down the compilation?
Or, if there isn't a way: are there any language constructs (e.g. templates?) that take a lot longer to compile?
What I have in my C++ source code:
Three (relatively large) ATL dialog classes (including the definitions/logic).
They could very well be the cause, but they are the core part of the program anyway, so obviously they need to be recompiled whenever I change them.
Random one-line (or similarly small) utility functions, e.g. a byte-array-to-hex converter.
References to (inline) classes found inside my header files. (One of the header files is gigantic, but it uses templates only minimally, and of course it's precompiled. The other one is the TR1 regex -- it's huge, but it's barely used.)
Note:
I'm looking for techniques that I can apply more generally in figuring out the cause of these issues, not specific recommendations for my very particular situation. Hopefully that would be more useful to other people as well.
Two general ways to improve the compilation time:
instead of including headers in headers, use forward declarations (include headers only in the source files; see the sketch after this list)
minimize templated code (if you can avoid using templates)
These two rules alone will greatly improve your build time.
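A sketch of the first rule (Foo and Bar are placeholder names): a header that only holds pointers or references to a type can use a forward declaration instead of an include:

    // foo.h - note: no #include "bar.h" here
    class Bar;                 // forward declaration is enough for the uses below

    class Foo
    {
    public:
        void Use(Bar& b);      // references and pointers don't need the full type
    private:
        Bar* bar;
    };

    // foo.cpp then does #include "bar.h", since only the implementation
    // needs Bar's full definition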
You can find more tricks in "Large-Scale C++ Software Design" by Lakos.
For Visual Studio (I am not sure if your version is too old), take a look at this: How should I detect unnecessary #include files in a large C++ project?
Template code generally takes longer to compile.
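As a hedged illustration of why: every distinct instantiation is a new type the compiler must generate, so deep or wide template use multiplies the work:

    // Fib<40> makes the compiler instantiate a chain of ~40 structs
    // just to produce a single compile-time constant
    template <unsigned N>
    struct Fib
    {
        static const unsigned long value = Fib<N - 1>::value + Fib<N - 2>::value;
    };
    template <> struct Fib<1> { static const unsigned long value = 1; };
    template <> struct Fib<0> { static const unsigned long value = 0; };

    static const unsigned long f = Fib<40>::value;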
You could investigate using "compiler firewalls", which reduce how often a .cpp file has to be rebuilt (thanks to the forward declarations, they can reduce the time spent reading included files as well).
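A minimal "compiler firewall" (pimpl) sketch with placeholder names; the header exposes no heavy includes, so edits to the implementation don't ripple into every client of widget.h:

    // widget.h - clients see only this; no heavy includes here
    class WidgetImpl;          // forward declaration only

    class Widget
    {
    public:
        Widget();
        ~Widget();
        void Draw();
    private:
        WidgetImpl* pimpl;     // the real implementation hides behind a pointer
    };

    // widget.cpp - heavy dependencies stay on this side of the firewall
    #include "widget.h"
    #include <vector>

    class WidgetImpl
    {
    public:
        std::vector<int> data;
    };

    Widget::Widget() : pimpl(new WidgetImpl) {}
    Widget::~Widget() { delete pimpl; }
    void Widget::Draw() { /* works with pimpl->data */ }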
You can also shift time spent doing code generation from the compiler to the linker by using Link-Time Code Generation and/or Whole Program Optimization, though generally you lose time in the long run.
I've inherited a fairly large C++ project in VS2005 which compiles to a DLL of about 5MB. I'd like to cut down the size of the library so it loads faster over the network for clients who use it from a slow network share.
I know how to do this by analyzing the code, includes, and project settings, but I'm wondering if there are any tools available which could make it easier to pinpoint what parts of the code are consuming the most space. Is there any way to generate a "profile" of the DLL layout? A report of what is consuming space in the library image and how much?
When you build your DLL, you can pass /MAP to the linker to have it generate a map file containing the addresses of all symbols in the resulting image. You will probably have to do some scripting to calculate the size of each symbol.
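As a starting point for that scripting, here is a rough C++ sketch. It assumes the classic "Publics by Value" layout of VC++ map files and estimates each symbol's size as the distance to the next symbol in the same section, so treat the numbers as approximations:

    // mapsize.cpp - print the largest symbols found in a VC++ linker map file.
    #include <algorithm>
    #include <cstdio>
    #include <fstream>
    #include <string>
    #include <utility>
    #include <vector>

    struct Sym { unsigned sec; unsigned long off; std::string name; };

    static bool byAddress(const Sym& a, const Sym& b)
    {
        return a.sec != b.sec ? a.sec < b.sec : a.off < b.off;
    }

    int main(int argc, char** argv)
    {
        if (argc < 2) { std::fprintf(stderr, "usage: mapsize file.map\n"); return 1; }
        std::ifstream in(argv[1]);
        std::string line;
        std::vector<Sym> syms;
        bool inPublics = false;
        while (std::getline(in, line)) {
            // the symbol list starts after the "Publics by Value" header
            if (line.find("Publics by Value") != std::string::npos) { inPublics = true; continue; }
            if (!inPublics) continue;
            // symbol lines look like: " 0001:00000000  ?foo@@YAXXZ  00401000 f foo.obj"
            unsigned sec; unsigned long off; char name[1024];
            if (std::sscanf(line.c_str(), " %x:%lx %1023s", &sec, &off, name) == 3) {
                Sym s = { sec, off, name };
                syms.push_back(s);
            }
        }
        std::sort(syms.begin(), syms.end(), byAddress);

        // size of a symbol ~= gap to the next symbol in the same section
        std::vector< std::pair<unsigned long, std::string> > sized;
        for (size_t i = 0; i + 1 < syms.size(); ++i)
            if (syms[i].sec == syms[i + 1].sec)
                sized.push_back(std::make_pair(syms[i + 1].off - syms[i].off, syms[i].name));

        std::sort(sized.rbegin(), sized.rend());   // biggest first
        for (size_t i = 0; i < sized.size() && i < 20; ++i)
            std::printf("%8lu  %s\n", sized[i].first, sized[i].second.c_str());
        return 0;
    }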
Using a "strings" utility to scan your DLL might reveal unexpected or unused printable strings (e.g. resources, RCS IDs, __FILE__ macros, debugging messages, assertions, etc.).
Also, if you're not already compiling with /Os enabled, it's worth a try.
If your end goal is only to trim the size of the DLL, then after tweaking compiler settings, you'll probably get the quickest results by running your DLL through UPX. UPX is an excellent compression utility for DLLs and EXEs; it's also open-source with a non-viral license, so it's okay to use in commercial/closed-source products.
I've only had it turn up a virus warning on the highest compression setting (the brute-force option), so you'll probably be fine if you use a lower setting than that.
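For reference, the invocation is a one-liner (MyLib.dll is a placeholder; --brute is the brute-force setting mentioned above):

    upx --best MyLib.dll
    upx --brute MyLib.dll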
While I don't know of any binary size profilers, you could alternatively look at which object files (.obj) are the biggest; that at least gives you an idea of where your problematic spots are.
Of course, this requires a sufficiently modularized project.
You can also try to link statically instead of using a DLL. When the library is linked statically, the linker removes all unused functions from the final EXE. Sometimes the final EXE is only slightly bigger, and you no longer have a DLL at all.
If your DLL is this big because it's exporting C++ functions with exceptionally long mangled names, an alternative is to use a .DEF file to export the functions by ordinal, without names (using NONAME in the .DEF file). Somewhat brittle, but it reduces the DLL size, EXE size, and load times.
See e.g. http://home.hiwaay.net/~georgech/WhitePapers/Exporting/Exp.htm
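A minimal module-definition file for this looks like the following (library and function names are placeholders; clients then import by the ordinals):

    ; MyLib.def - export by ordinal only, no names in the export table
    LIBRARY MyLib
    EXPORTS
        InitEngine        @1 NONAME
        ShutdownEngine    @2 NONAME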
Given that all your .obj files are about the same size, and assuming that you're using precompiled headers, try creating an empty .obj file and see how large it is. That will give you an idea of the proportion of each .obj that's due to the PCH compilation. (The linker will be able to remove all the duplicates there, incidentally.) Alternatively, you could try disabling PCH so that the .obj files give you a better indication of where the main culprits are.
All good suggestions. What I do is get the map file and then just eyeball it. The kind of thing I've found in the past is that a large part of the space is taken by one or more class libraries brought in by the fact that some variable somewhere was declared as having a type that sounded like it would save some coding effort but wasn't really necessary.
For example, MFC (remember that?) has a wrapper class around everything Win32 provides: controls, fonts, etc. Those take a ton of space and you don't always need them.
Another thing that can take a ton of space is collection classes you could manage without. Another is cout I/O routines you don't use.
I would recommend one of the following:
coverage - you can run a coverage tool in the hope of detecting some dead code
caching - cache the DLL on the client side upon the initial activation
splitting - split the DLL into several smaller DLLs, start the application with a bootstrap DLL, and download the other DLLs after the application starts
compilation and linking - use a smaller runtime library, compile with size optimizations, etc.; see this link for more suggestions
compression - if you have data or large resources within the DLL, you can compress them and decompress only after the download or at runtime
I am looking for a tool to simplify analysing a linker map file for a large C++ project (VC6).
During maintenance the binaries grow steadily, and I want to figure out where that growth comes from. I suspect some overzealous template expansion in a library shared between different DLLs, but just browsing the map file doesn't give good clues.
Any suggestions?
This is a wonderful compiler-generated map file analysis/explorer/viewer tool; check if it can explore gcc-generated map files as well.
amap: a tool to analyze .MAP files produced by the 32-bit Visual Studio compiler and report the amount of memory being used by data and code.
This app can also read and analyze MAP files produced by the Xbox 360, Wii, and PS3 compilers.
The map file should have the size of each section; you can write a quick tool to sort symbols by this size. There's also a command-line tool that comes with MSVC (undname.exe) which you can use to demangle the symbols.
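undname simply takes a decorated name on the command line; for example (the symbol here is made up):

    undname ?Draw@Widget@@QAEXXZ

which prints the demangled form, public: void __thiscall Widget::Draw(void).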
Once you have the symbols sorted by size, you can generate this weekly or daily as you like and compare how the size of each symbol has changed over time.
The map file alone from any single build may not tell much, but a historical report of compiled map files can tell you quite a bit.
Have you tried using dumpbin.exe on your .obj files?
Stuff to look for:
Using a lot of STL?
A lot of C++ classes with inline methods?
A lot of constants?
If any of the above applies to you, check whether those items have wide visibility, i.e. whether they are used/seen in large parts of your application.
No suggestion for a tool, but a guess as to a possible cause: do you have incremental linking enabled? This can cause expansion during subsequent builds...
The linker will strip unused symbols if you're compiling with /opt:ref, so if you're using that and not using incremental linking, I would expect expansion of the binaries to be only a result of actual new code being added. That's as far as I know... hope it helps a little.
Templates, macros, and the STL in general all use a tremendous amount of space. Heralded as a great universal library, Boost adds much space to projects. BOOST_FOREACH is an example of this: it's hundreds of lines of templated code which could simply be avoided by writing a proper loop by hand, which is in general only a few more keystrokes.
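To illustrate (a hedged sketch; printAll is a made-up name): both loops below do the same thing, but the first drags in a pile of template machinery:

    #include <cstdio>
    #include <vector>
    #include <boost/foreach.hpp>

    void printAll(const std::vector<int>& v)
    {
        BOOST_FOREACH(int x, v)   // expands into heavily templated code
        {
            std::printf("%d\n", x);
        }

        // the hand-written loop: a few more keystrokes, far less instantiation
        for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        {
            std::printf("%d\n", *it);
        }
    }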
Get Visual AssistX to save typing instead of using templates. Also consider owning the code you use. Macros and inline function expansion are not necessarily going to show up.
Also, if you can, move away from a DLL architecture to statically linking everything into one executable which runs in different "modes". There is absolutely nothing wrong with using the same executable image as many times as you want, just passing in a different command-line parameter depending on what you want it to do.
DLLs are the worst culprits for wasting space and slowing down a project's running time. People think they are space savers, when in fact they tend to have the opposite effect, sometimes increasing project size by ten times! Plus they increase swapping. Use fixed code sections (no relocation section) for performance.