Should you migrate a project to C++11?

I have been trying to get our team to migrate a large C++ project to VS2012 from VS2008. I want to do it mostly because I want to start using C++11 and the IDE is much nicer. So my reasons are somewhat selfish.
My team lead is pushing back because he doesn't see the business case for the migration, citing that most of the performance-improvement features we'd get with C++11 we already have with BOOST and other libraries. He also says that this will require changing runtimes on all of our platforms, which may change certain behavior, which in turn would mean we would need to retest on all of the servers we have deployed to.
The first argument I somewhat understand, although I believe C++11 code will be much cleaner than using BOOST (again not a great business case).
The argument about using different runtimes I don't understand. What runtime does a native C++ application use? This is not VC++. Would his concern just be that the STL implementation won't be exactly the same?
I don't see what the issue there would be. Is there something I am missing? Are there any other good arguments for migrating that I should cite to help my case?

All 3rd party libraries need to be built with the new compiler
Code might currently be unknowingly reliant on undefined behavior, a new compiler might do something totally different for UB than the current one (and cause problems)
Performance won't change much, because you won't have been coding in C++11 style (basically, in C++11 lots of things are passed by value that you would never pass by value before). If your code base has a lot of...
std::vector<Blah> func(std::vector<Asdf> v); // notice all the pass by value
... C++11 could be a great performance improvement. But in C++98/03 you just wouldn't do that.
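To make that concrete, here's a rough, made-up sketch of the kind of signature above actually paying off once moves kick in (the Record type and normalize function are invented for illustration):
#include <string>
#include <utility>
#include <vector>

struct Record { std::string payload; };   // stand-in for a heavyweight value type

// Pass by value: in C++98/03 this copies the whole vector on every call;
// in C++11 the caller can move into the parameter instead.
std::vector<Record> normalize(std::vector<Record> records) {
    // ... transform records in place ...
    return records;   // implicitly moved out under C++11, not deep-copied
}

int main() {
    std::vector<Record> input(1000);
    std::vector<Record> output = normalize(std::move(input));  // cheap pointer swap in C++11
}
In C++98/03 you'd avoid that signature entirely (const references in, output parameter out); C++11 makes the simpler by-value version roughly as cheap.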
You need to lower the barrier to entry for your team lead. Do the migration yourself and smoke-test your product. Then show it to him. After that, here are abstract reasons to upgrade:
C++11 style is less code and simpler (see the sketch after this list)
VS2012 has the C++11 standard library additions - you can stop hand-rolling 50 bug-ridden replacements
Programmers want to work using a modern language and modern tools. This will spark a company-wide resurgence of learning and best practices, which will improve code quality, employee retention, employee continuing education, etc.
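On the first point, a small before/after sketch (the container and the logic are arbitrary, just to show the difference in ceremony):
#include <vector>

int sumOfBig(const std::vector<int>& values, int threshold) {
    // C++98/03 style: spelled-out iterator type, hand-written loop.
    int total03 = 0;
    for (std::vector<int>::const_iterator it = values.begin(); it != values.end(); ++it)
        if (*it > threshold) total03 += *it;

    // C++11 style: range-for plus auto does the same thing in less code.
    int total11 = 0;
    for (auto v : values)
        if (v > threshold) total11 += v;

    (void)total03;   // kept only for the side-by-side comparison
    return total11;
}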
It's a delicate balance deciding when to do this sort of upgrade. If you do it too often, you're spending money without gaining any business advantage. If you do it too infrequently, you're dealing with so much legacy tech and legacy code that maintenance can become a nightmare. When a significant language change comes along that you will ultimately move to anyway, it's best to do it sooner rather than later (and by the way, this isn't particularly soon) - otherwise you just keep accumulating what will later be considered legacy code. Moving to a new compiler just for new tools often isn't worth it. Moving for an important language upgrade usually is worth it.
Whether any of that is compelling to your team lead is beyond me. Good luck though

VS2012 is so last year, obsolete already; it's just as well Microsoft replaced it after only one year of use, criticism, eye-blinding whiteness, and ALL CAPS!
But since you can build a VS2008 project in the newer IDEs, you can upgrade to VS2013 today and work on moving the project to the VS2013 toolset over time.
Your TL is correct, though: an upgrade requires a complete re-test. But if you have time between adding features, it is possible to fit such a large test in.
I'd say the main reason to upgrade is simply to keep up to date. It's not such a big deal today, but in another five years your old VS2008 builds might start to hold you back (as I know, having recently upgraded a VS2002 project to 2010). It's never a good idea to get too far behind: the longer you leave it, the greater the effort of the upgrade you will eventually have to do. That's the real reason to do it - the chances that Microsoft will not support 2008 builds in the next version, and that the older IDE won't run on Windows 9, get greater every year. Best to fix this problem while you have time.

I guess I wouldn't expect much in the way of performance improvements, but regarding testing, I'd recommend a full retest on any kind of compiler or tool-chain change, regardless of whether you change your code or not.
EDIT: I'll add that I'd push to move to the more recent compiler on the next release cycle... that removes the testing argument (since you'll need to test anyway).

Related

Visual Studio 2010 vs Visual Studio 2005 for C++

My department writes a mixture of Windows, Linux and cross-platform (RHEL Linux and Windows Server 2003) C++ code for in-house applications. We use the STL and Boost 1.39.
VS2010 is now available in my organisation. If we were to move to VS2010 I'd have to make a significant business case for it. What would be some of the most noticeable benefits we would see from the move? Do you think it would be worth the time cost to move?
Update
Given the size of our code base and the cross-platform nature of our code, I'm mainly interested in what the new IDE offers, e.g. how good is the IntelliSense (say, compared to VS for .NET)? Does IntelliSense work well for very large code bases? What's the refactoring support like? How is raw IDE performance? What is the debugger like, i.e. if I hover over a pointer to a collection of smart pointers, is it relatively easy to see what's in the collection?
Thanks in advance
New C++0x features, e.g. lambda expressions, are really nice to have.
The only real difference in the two compilers is some C++0x support in VS2010. The IDE has improved a lot more, but VS2005 is fine for me too. Now are these worth the time cost to move? Up to you...
Greatly improved IntelliSense. C++0x, which means shared_ptr, unordered_set/map, function, lambdas, etc. This will in practice simplify things for you since you don't need as much from Boost. You also get access to Parallel Patterns Library (parallel for_each, etc) which really helps if you are targeting multi-core. I'd say go for it!
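To give a feel for what that looks like, here's a minimal sketch; the PPL lives in <ppl.h> under the Concurrency namespace in VS2010, and everything else is the C++0x support that VS2010 ships:
#include <memory>    // std::shared_ptr, std::make_shared
#include <ppl.h>     // Concurrency::parallel_for_each (VS2010 PPL)
#include <vector>

int main() {
    std::vector<std::shared_ptr<int>> values;
    for (int i = 0; i < 100; ++i)
        values.push_back(std::make_shared<int>(i));

    // Lambda + PPL: spreads the loop body across the available cores.
    Concurrency::parallel_for_each(values.begin(), values.end(),
        [](const std::shared_ptr<int>& p) {
            *p *= 2;   // each element is touched by exactly one task, so no locking needed
        });
}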
If you're only interested in IDE improvements, and you make big use of smart pointers, I'd suggest waiting for SP1 (or whichever SP comes with fixes to IntelliSense).
As some people have pointed out, there are BIG changes in C++ IntelliSense, to support a lot of features that other languages have had for years. The problem is that they accidentally broke IntelliSense for smart pointers when they are instantiated with a template type.
I posted a question about that issue a couple of weeks ago, and, as someone suggested, I sent the issue to Microsoft Connect. Sadly the response from the VC++ team was that it won't be fixed soon.
Since you use STL and Boost, performance might be a pretty big deal. VC2010 supports rvalue references and move semantics, which, even if you don't use them in your own code, speed up Boost and STL code significantly. (Although I doubt Boost 1.39 utilizes this much; if at some point you upgrade to a recent version of Boost, you'll get the benefit.)
Intellisense was reworked in a big way for 2010. It's still a bit wonky, and falls over the moment it sees a template, as it'll always do for C++, but I have to admit it works much better than it used to.

To use or not to use C++0x features [duplicate]

Possible Duplicate:
How are you using C++0x today?
I'm working with a team on a fairly new system. We're talking about migrating to MSVC 2010 and we've already migrated to GCC 4.5. These are the only compilers we're using and we have no plans to port our code to different compilers any time soon.
I suggested that after we do it, we start taking advantage of some of the C++0x features already provided, like auto. My co-worker advised against this, proposing to wait "until C++0x actually becomes standard." I have to disagree, but I can see the appeal in the way he worded it. Nevertheless, I can't help but think that this counter-argument comes more out of fear and trepidation about learning C++0x than out of a genuine concern for standardization.
Given the new state of the system, I want us to take advantage of the new technology available. Just auto, for instance, would make our daily lives easier (even just for writing iterator-based for loops until range-based for loops come along, for example).
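To make that concrete, a small sketch of what auto alone saves on an ordinary iterator loop (the container type is just an example):
#include <map>
#include <string>
#include <vector>

int main() {
    std::map<std::string, std::vector<int> > index;

    // C++03: the iterator type has to be spelled out in full.
    for (std::map<std::string, std::vector<int> >::iterator it = index.begin();
         it != index.end(); ++it) { /* ... */ }

    // C++0x (GCC 4.5, MSVC 2010): auto deduces the same type.
    for (auto it = index.begin(); it != index.end(); ++it) { /* ... */ }
}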
Am I wrong to think this? It is not as though I'm proposing we radically change our budding codebase, but just start making use of C++0x features where convenient. We know what compilers we're using and have no immediate plans to port (if we ever port the code base, by then surely compilers will be available with C++0x features as well for the target platform). Otherwise it seems to me like avoiding the use of iostreams in 1997 just because the ISO C++ standard was not published yet in spite of the fact that all compilers already provided them in a portable fashion.
If you all agree, could you provide me arguments I could use to strengthen my position? If not, could I get a bit more details on this "until the C++0x is standard" idea? BTW, anyone know when that's going to be?
I'd make the decision on a per-feature basis.
Remember that the standard is really close to completion. All that is left is voting, bugfixing and more voting.
So a simple feature like auto is not going to go away or have its semantics changed. So why not use it?
Lambdas are complex enough that they might have their wording changed and the semantics in a few corner cases fixed up a bit, but on the whole, they're going to behave the way they do today (although VS2010 has a few bugs about the scope of captured variables, MS has stated that they are bugs, and as such may be fixed outside of a major product release).
If you want to play it safe, stay away from lambdas. Otherwise, use them where they're convenient, but avoid the super tricky cases, or just be ready to inspect your lambda usage when the standard is finalized.
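For reference, the kind of plain lambda use that was already stable in both GCC 4.5 and VS2010 looks like this (a minimal sketch):
#include <algorithm>
#include <vector>

int main() {
    std::vector<int> samples(10, 3);
    int threshold = 2;

    // A simple capture-by-value lambda passed to a standard algorithm:
    // none of the contested corner cases, so its meaning is unlikely to change.
    auto hits = std::count_if(samples.begin(), samples.end(),
                              [threshold](int x) { return x > threshold; });
    (void)hits;
}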
Most features can be categorized like this: they're either so simple and stable that their implementations in GCC/MSVC are exactly how they're going to work in the final standard, or they're tricky enough that they might get a few bugfixes applied; those can still be used today, but you run the risk of hitting a few rough edges in certain border cases.
It does sound silly to avoid C++0x features solely because they're not formalized yet. Avoid the features that you don't trust to be complete, bug-free and stable, but use the rest.
Theoretical but not practical disadvantages of using C++0x:
Makes it harder to port to different compilers.
Not adhering to any published standard.
Practical advantages of using C++0x:
Makes your daily lives easier, hence more productive.
It's a debate between what's theoretically right, and what's practical. If your team has any intent of actually doing something with this code, the practical should outweigh the theoretical tenfold.
One thing you (mostly) don't need to worry about now is features being added or taken away, because the working draft reached "Final Committee Draft" (FCD) back in March. Feature-wise it should be frozen; the standards committee will not accept any more proposals for C++0x.
The downside is that it's still a draft and not finalized yet; the standards committee is in the phase of making corrections and adjustments before finalizing and publishing the ISO standard (expected release March 2011). That could mean minor syntactic or semantic/behaviour changes, which could make your code fail to compile or not work correctly once you compile with a compiler that is more standard-compliant than the one you were using when you wrote the code.
You'll probably have to wait some time for compilers like VC++10 to be updated with any corrections/adjustments made.
We had the exact same problem, so we compromised. We took the C++0x TR1 release and then only took the portions that we knew we wanted to use. Sounds like a lot of work, but so far it's worked out well. We're using the regex libraries, tuples, and a couple of others. Once the standard is ratified, we'll migrate to the full C++0x. This obviously isn't the best solution, but it is one that has worked well for us.
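A rough sketch of that kind of TR1 usage, assuming a TR1-capable toolchain such as VS2008 SP1 or VS2010 (on GCC the headers live under <tr1/...> instead):
#include <iostream>
#include <regex>    // std::tr1::regex (use <tr1/regex> on GCC)
#include <string>
#include <tuple>    // std::tr1::tuple (use <tr1/tuple> on GCC)

int main() {
    // TR1 tuple: a fixed-size, heterogeneous grouping without writing a struct.
    std::tr1::tuple<int, std::string> release(1, "candidate");
    std::cout << std::tr1::get<1>(release) << "\n";

    // TR1 regex: validate a simple identifier.
    std::tr1::regex identifier("[A-Za-z_][A-Za-z0-9_]*");
    std::cout << std::tr1::regex_match(std::string("foo_42"), identifier) << "\n";
}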
If you intend to make your system open source within a not-too-far future, then that's an argument for not using too many bleeding-edge features. A production system running Debian or Red Hat won't necessarily have a bleeding-edge compiler installed.
You said
if we ever port the code base, by then surely compilers will be available with C++0x features as well for the target platform
but the fact that a compiler exists for a platform doesn't always mean that it's installed/used/wanted, especially on production systems.
If, on the other hand, you intend to do all the compiling yourself, this is not an issue.

Boost advocacy - help needed

Possible duplicates
Is there a reason to not use Boost?
What are the advantages of using the C++ BOOST libraries?
OK, the high-level question is "Please provide me with what you consider to be the most effective arguments of why entire Boost, or some specific parts of it, should be compiled on our company's system and endorsed in software engineering standards".
Details of what I need:
Would gladly accept both positive arguments (why install), as well as proposed rebuttals of likely counter-arguments I might hear (see question context below).
Arguments should be made aimed at both technical Software Engineering team members and/or very technical senior managers - in other words, for the latter, the details of the argument may/should be technical, but the thrust of the argument should be "how would this make/save the company X money vs losing the company Y money as a cost of adding it to our toolset".
Context of the question:
I'm a developer in a company with several hundred developers, many dozens of whom do C++.
I had the (mis)fortune of being reassigned from my beloved Perl development spot to a team where I am also doing C++ development. So far I have found numerous things that I could easily have done in Perl that are very hard/cumbersome to do in C++ (a foreach loop, as an example), and any time I hit one of these, the answer is 50% likely to end up being "You can't do this in standard C++, but you can do it with Boost".
Our toolkit includes some legacy RogueWave libraries, and a VERY limited number of Boost libraries (e.g. no regex, no foreach), of very old vintage.
Any development must use libraries compiled and vetted by Software Engineering team. That is a hard and fast rule.
The SE team is somewhat resistant to adding new libraries, for a variety of reasons (e.g. the effort to do this; functionality conflicts with RogueWave, for example for regex; the risk of installing and using any new software; the cost of educating developers, etc.). They will add the libraries if presented with sufficient business need or a majorly convincing cost/benefit argument, but they have a pretty tough threshold.
So, I'm looking for examples of which parts of Boost are so wonderful (with exact cost/benefit estimates) that installing them would be an Obviously Worth It Effort for Software Engineering.
Thanks in advance for any ideas/suggestions/examples.
Please don't mark this question as subjective as I am looking for measurable answers, not merely wonderful feelings :)
Everywhere I worked in the last decade, where they had their own smart pointer class, I found bugs in it - usually within a few weeks. And, no, I never went and looked at it hoping to find errors.
I got into the habit of posting the following quote from the TR1 smart pointer proposal:
The Boost developers found a shared-ownership smart pointer exceedingly difficult to implement correctly. Others have made the same observation. For example, Scott Meyers [Meyers01] says:
"The STL itself contains no reference-counting smart pointer, and writing a good one - one that works correctly all the time - is tricky enough that you don't want to do it unless you have to. I published the code for a reference-counting smart pointer in More Effective C++ in 1996, and despite basing it on established smart pointer implementations and submitting it to extensive pre- publication reviewing by experienced developers, a small parade of valid bug reports has trickled in for years. The number of subtle ways in which reference-counting smart pointers can fail is remarkable."
This plus a detailed analysis of the bug(s) I found usually got me the job of incorporating the boost libs into the code base. :)
It seems to me you're doing this the wrong way around. Since proposals to add new libraries are going to be met with a lot of resistance, don't even bother trying to argue for boost as a whole. Choose your battles instead.
Find the specific Boost libraries which you know (with your knowledge of the application it's to be used in) will be useful and save time and money. And then propose adding those.
I could easily list the Boost libs I've found useful, and why I think they're great, but I don't know if they'll be of any use in your application.
Push for individual boost libraries to be included, and then perhaps, over time, so many of them will be included that it'll be simpler for everyone to just include all of Boost.
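As a concrete example of the two gaps the questioner mentions (foreach and regex), here's roughly what two individual Boost libraries buy; a minimal sketch, and note that Boost.Regex needs linking against its compiled library:
#include <iostream>
#include <string>
#include <vector>
#include <boost/foreach.hpp>
#include <boost/regex.hpp>

int main() {
    std::vector<std::string> hosts;
    hosts.push_back("db01.example.com");
    hosts.push_back("not a hostname!");

    // Boost.Regex: fills the "no regex in standard C++03" gap.
    boost::regex hostname("[A-Za-z0-9.-]+");

    // Boost.Foreach: the Perl-style foreach loop missing from C++03.
    BOOST_FOREACH(const std::string& h, hosts) {
        std::cout << h << " -> " << boost::regex_match(h, hostname) << "\n";
    }
}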
It's an open standard not controlled by a specific company (no licensing costs)
It's cross-platform
It's expertly designed/written, very fast/efficient, and extensively tested
There are open-source implementations which your team can compile themselves.
Parts of Boost are on track to become part of the standard C++ library (via TR1 and C++0x)
Here is a slightly oldish 2005 article on Dr. Dobbs discussing the upcoming C++0x standard.
http://www.ddj.com/cpp/184401958
I had to maintain a component using that old vintage Tools.h++ from Roguewave, on a Solaris system.
On Solaris, if we want to use Boost, we need to use either gcc, or SunStudio with the STLport implementation of the standard library (instead of the RogueWave one). And as Tools.h++ requires the old RogueWave pre-standard implementation on Solaris, I had to give up on Boost.
In the end I rewrote a simplified version of the few boost-like features I needed.
If you are in that same situation (*), you would not be able to move from the RogueWave library to Boost that easily. There is a non-negligible cost to this operation; for instance, the pointer containers from the two libraries have quite different interfaces.
(*) Where you can't slowly change bits of the old code to progressively use Boost. In that situation, the migration has to be radical and replace every occurrence of Tools.h++ simultaneously with something more trendy, or even better.
NB: Most people are able to progressively use boost in old projects, and may miss a very important, and yes technical, point. Hence my negative answer.
Boost is a great tool and an invaluable part of our product development (we'd be lost without smart_ptr)... but because it is changing so fast, the stability of releases can be affected.
For example, we were happily introducing new versions of Boost as soon as they came out, without thinking twice. That is, until we were stung by a bug in the 1.35 threading library that produced occasional (i.e. difficult-to-debug) but critical errors. Fortunately we identified the issue before anything was released to the public and could move back to 1.34.
Ever since then we've taken a specific version, extensively tested it, and not updated without a compelling reason to do so.
Here are two suggestions for advocating boost:
Who's Using Boost? (http://www.boost.org/users/uses.html)
Lots of major projects use Boost (e.g., Adobe Photoshop, CERN)
Boost Project Cost (http://www.boost.org/development/index.html)
How much it would cost to hire a team to write boost from scratch? There's a nifty (somewhat gimmicky) calculator there that helps to put it in perspective.

Advantage of porting vc6 to vc2005/vc2008?

I was asking my team to port our VC6 application to VC2005, and they are ready to allot some time to do it. Now they need to know what the advantages of porting are.
I don't think they really understand what it means to adhere to the standard.
Help me list the advantages of doing the port.
The problems I am facing are:
1) No debugging support for standard containers
2) Not able to use Boost libraries
3) We use a lot of query generation, but use the CString Format function, which is not type-safe
4) Much time is spent troubleshooting VC6 problems, like writing
vector<vector<int>>
without a space between the >> (see the sketch just below)
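To illustrate point 4 (the names here are placeholders): VC6, like any strict C++98 compiler, parses the unspaced >> as a right-shift operator, so the space is mandatory; newer MSVC versions accept the unspaced form as an extension, and C++11 finally made it legal.
#include <vector>

std::vector<std::vector<int> > matrix_vc6;   // VC6 requires the space
// std::vector<std::vector<int>> matrix_new; // rejected by VC6, fine in newer compilers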
Advantages:
More standards-compliant compiler. This is a good thing because it will make it easier to port to another platform (if you ever want to do that). It also means you can look things up in the standard rather than in Microsoft's documentation. In the end you will have to upgrade your compiler at some point in the future. The sooner you do it, the less work it will be.
Not supported by MS. The new SDK doesn't work. 64-bit doesn't work. And I don't think they're still fixing bugs either.
Nicer IDE. Personally, I really prefer tabs to MDI. I also think that it's much easier to configure Visual Studio (create custom shortcuts, menu bars, etc.). Of course that's subjective. Check out an express edition and see if you agree.
Better plugin support. Some plugins aren't available for VC6.
Disadvantages:
Time it takes to port. This very much depends on what kind of code you have. If your code heavily uses non-standards compliant VC6 features, it might take some time. As Andrew said, if you're maintaining an old legacy project, it might not be worth it.
Worse Performance. If you're developing on really old computers, Visual Studio may be too slow.
Cost. I just had a quick look, and Visual Studio licenses seem to be a bit more expensive than VC6's.
Why VC2005? If you are going to invest the time (and testing!) to upgrade from VC6, why not target VC2008?
If you're maintaining a legacy project then there may be no advantage in porting. Simply converting projects and fixing up compiler problems could take weeks of time and introduce instability.
If you're actively developing a product then the main advantage is that you'll no longer be using a product that's over eight years old - which is clearly a good thing.
More recent versions of the Windows SDK don't work with VC6 - if you want to use the latest Windows features, you'll need a more recent compiler.
The later compilers are said to be more standards conforming. I'm sorry I can't be more specific. I do know that VC6 generates lots of compiler warnings just for using standard template classes.
If you use any external libraries that are compiled with a later compiler, you'll need to use something compatible.
Prepare for something of a harsh transition - the IDEs are more different than they should be.
To ensure complete compatibility of the application with different versions of the base platform, and to rectify any errors found, so as to give end users the freedom to use their own version of the base platform.
I'm not saying you shouldn't convert, but to take your specific points:
1) No debugging support for standard containers
I debug code using standard containers with VC++ 6 all the time. What's your problem here?
2)Not able to use boost libraries
True. You may find you can use some of the simpler stuff.
3) Much time is spent on troubleshooting VC6 problems like having vector<vector<int>> without a space between the >>
Um, that is a syntax error (at least in the version of C++ understood by VC++6) and will be flagged as such. If your team is spending "much time" on this sort of thing, you need another team.
Edit:
3) We use a lot of query generation, but use the CString Format function, which is not type-safe
It will be equally type-unsafe under VS2005. I don't see why this is a reason for porting. If you want type safety use the standard C++ I/O mechanisms.
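To illustrate the type-safety point with a self-contained sketch (sprintf stands in for CString::Format here, since they behave the same way with respect to checking):
#include <cstdio>
#include <sstream>
#include <string>

int main() {
    std::string table = "users";
    int limit = 42;

    // printf-style formatting: the format string is never checked against the
    // argument types, so a mismatched or missing argument slips past the
    // compiler and fails (or corrupts the query) at runtime.
    char buf[128];
    std::sprintf(buf, "SELECT * FROM %s LIMIT %d", table.c_str(), limit);

    // iostream formatting: each argument's type drives the conversion, so a
    // mismatch is a compile-time error rather than a runtime surprise.
    std::ostringstream query;
    query << "SELECT * FROM " << table << " LIMIT " << limit;
}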
If your team can't see any advantage and you are unable to explain any advantage, why are you asking them to do this?
Sounds like you are porting just for the sake of it.

Common libraries in a large team

Assume you have five products, and all of them use one or more of the company's internal libraries, written by individual developers.
It sounds simple but in practice, I found it to be very difficult to maintain.
How do you deal with the following scenarios:
A developer unintentionally introduces a bug and breaks everything in production.
Every library has to mature; that means the API needs to evolve. So how do you deploy the updated version to production if every developer needs to update/test their code while they are extremely busy on other projects? Is this a resource and time issue?
Version control, deployment, and usage. Would you store this in one global location or force each project to use, say, svn:externals to "tie" a library?
I've found that it is extremely hard to come up with a good strategy. My own pet theory is this:
Each common library has to have a super-thorough set of tests or else it should never be common, even if it means someone else duplicates the effort. Duplicate untested code is better than common untested code (you break only one project).
Each common library has to have a dedicated maintainer (can be offset by a really good test suite in a smaller team).
Each project should check out the version of the library that is known to work with it. This means a developer does not have to get pulled away to update API usage, as the common code gets updated. Which it will be. Every non-trivial piece of code evolves over months and years.
Thank you for your thoughts on this!
You have a competing set of goals here. First, a library of reusable components must be open enough that people from the other projects can easily add to it (or submit components to it). If it's too difficult for them to do that, they'll build their own libraries, and ignore the common one, leading to a lot of duplicate code and wasted effort. On the other hand, you want to control the development of the library enough that you can ensure its quality.
I've been in this position. There's no easy answer. However, there are some heuristics that can help.
Treat the library as an internal project. Release it on regular intervals. Ensure that it has a well-defined release procedure, complete with unit tests and quality assurance. And, most important, release often, so that new submissions to the library show up in the product frequently.
Provide incentives for people to contribute to the library, rather than just making their own internal libraries.
Make it easy for people to contribute to the library, and make the criteria clear-cut and well-defined (e.g., new classes must come with unit tests and documentation).
Put one or two developers in charge of the library, and (IMPORTANT!) allocate time for them to work on it. A library that is treated as an afterthought will quickly become an afterthought.
In short, model the development and maintenance of your internal library after a successful open source library project.
I don't agree with this:
Duplicate untested code is better than common untested code (you break only one project).
If you are all equally likely to create bugs by implementing the same thing, then you'll all have to fix potentially different bugs in each instance of the "duplicate" library.
It also seems that it'd be much faster/cheaper to write the library once and, instead of having multiple other teams write the same thing, have some resources allocated to testing.
Now to solve your actual problem: I'd mimic what we do with real third-party libraries. We use a particular version until we're ready, or compelled to upgrade. I don't upgrade everything just because I can--there has to be a reason.
Once I see that reason (bug fix, new feature, etc.), then I upgrade with the risk that the new library may have new bugs or breaking changes.
So, you're library project would continue development as necessary, without impacting individual teams until they were ready to "upgrade".
You could publish releases, or use svn tags/branches/peg revisions, to help with all this.
If all teams have access to the bug tracker, they could easily see what known issues exist in the upgrade-candidate before they upgrade, too. Or, you could maintain that list yourself.
@Brian Clapper provides some excellent guidelines for how to run your library as a project in his answer.
I used to work in a similar situation to what you're describing, only my company had dozens of software products. I worked on the team that was responsible for maintaining and upgrading the core set of libraries that everyone else used.
We dealt with those scenarios as follows:
Test the heck out of the core libraries. Maintaining duplicate code is a nightmare. You're not just maintaining the core and one copy. Somewhere in your company's source control there are several copies of the same code. We had dozens of products, so that would have meant dozens of copies. Hunt them down and kill them.
We had a small team of 10-12 developers dedicated to maintaining the core library and its test suites. We were also responsible for fielding calls from the other 1100 developers in the company about how to use the core library, so as you can imagine, we were very busy.
Each other project needs to work with the version of the core library that it is known to work with. You can use version control branches to test new releases of the core library with old products to make sure you don't break code that works. If the core team does a thorough job of testing, this should go very smoothly. The only time this ever got really complicated for us was when the core API changed, or when we flat out screwed something up. Even if you're very confident in your core testing, use branches to test individual products.
I agree - this is difficult. In our small team (consulting, not a product company, which made it harder), we had one common component that stood out from the others. In this case the recipe for success was:
Make a good developer responsible for developing the component
Make a good developer the gatekeeper for maintaining the component
Make sure all upgrades (there were several) are backward compatible
Make sure there is some basic documentation (or a simple reference application) explaining how the component is to be used
Make sure all developers know that the component exists (!) and where they can find it (along with the code, if they wish to review it)
Give developers the ability to review the code and suggest better implementations or refactoring, but have the final modifications go through an experienced gatekeeper.
When the component was upgraded, older apps did not have to upgrade. If we did a new release, we evaluated whether we wanted to upgrade, and if we did, all we needed to do was swap the libraries - no code needed to change, unless we wanted to use some new features available through the upgrade. Resistance is inevitable, but sometimes it is a good sort of resistance when it comes from good developers who have better ideas for a new generation of the component or a refactored one.
Treat the development of the libraries like any other product. Each library has its own repository, its own releases and version numbers. The compiled and officially tested versions of the library are also kept in the repository. Document features and changes from version to version.
Then use the libraries like you would using third party libraries. Your product uses only fixed versions of the compiled libraries. Switch to a new version when you really need to and be aware that this involves more testing. Add the versions you use to your version control.
When you find a bug or require a new feature in a library, a new version or sub-version is created. Using a version control system like svn makes this easy. When you need the source code for debugging purposes, export it and include it in your projects, but do not change it there, but fix problems in the libraries' repositories.
This way, every team can contribute to the libraries without endangering the work of the other teams. Switching versions is done deliberately and not by accident.
Create an anti-corruption layer (in the DDD sense) for the existing library; this is nothing but a facade. Then write unit tests for this anti-corruption layer. Now even if someone upgrades or updates the library, you will know whether something is broken by running the unit tests.
These tests can also serve as documentation of the contract, and not every project that needs to use the library has to write its own anti-corruption layer if they are using exactly the same functionality.
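A minimal sketch of the idea; the vendor function and all the names are invented for illustration:
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical third-party API we do not control.
namespace vendor {
    inline std::string lookup(int id) {
        std::ostringstream out;
        out << "user-" << id;
        return out.str();
    }
}

// Anti-corruption layer: the only thing the rest of the codebase sees.
// If the vendor changes its API, only this facade (and its tests) change.
class UserDirectory {
public:
    std::string userName(int id) const { return vendor::lookup(id); }
};

// Contract test: documents and checks exactly what we rely on.
int main() {
    UserDirectory dir;
    assert(dir.userName(7) == "user-7");
    return 0;
}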
"Duplication is the root of all evil"
Sounds to me like you need:
An artifact repository like Ivy so you can have the libraries shared and versioned with a distinction between versions that are API stable and ones that are "maturing"
Tests for the libraries
Tests for the projects using them
A continuous integration system so that when an incompatibility or bug is introduced both the project and the original library developer are notified
I think that one shared library is better than 3 duplicate ones (and 1 tested is definitely better than 3 untested). That's because when you find and fix problem, this makes the whole application area more solid (and development and maintenance are more efficient).
BTW, that's one of the reasons (apart from contributing back to the community) why our company exposes our .NET shared libraries to the public as open source.
Plus, there's less code to write. And you can designate one dev to enforce good development practices on the library and its usages (i.e. through code contracts enforced by the unit tests within library consumers). This improves quality and reduces maintenance costs.
We store shared libraries as binaries in the solution. That comes from the logical requirement that any solution has to be atomic and independent (this rules out svn:externals links).
API compatibility is not an issue at all. Just let your integration server rebuild and retest the whole product stack (while updating all the inner references and propagating changes) and you'll always be sure that all internal APIs are solid. And whoever breaks the API has to either fix it or update the usages.
Duplication is the root of all evil
I would argue that unchecked government is the root of all evil :)
I do get a lot of flak for even suggesting that duplication should be an option. I understand why, but let me complicate this a bit.
Say you have a fairly large library that doesn't actually do anything in particular - it's just a collection of utilities. There are NO tests for this library - at all. You need only one function from it. Say, something that parses out a file's extension.
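For scale, the helper in question is only a handful of lines; a sketch:
#include <string>

// Returns the extension of `path`, or "" if there is none.
std::string fileExtension(const std::string& path) {
    std::string::size_type dot = path.rfind('.');
    std::string::size_type sep = path.find_last_of("/\\");
    if (dot == std::string::npos || (sep != std::string::npos && dot < sep))
        return "";                    // e.g. "archive.tar/readme" has no extension
    return path.substr(dot + 1);      // e.g. "photo.jpeg" -> "jpeg"
}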
Pop quiz: do you just write something as small as this in your own project, or do you bite the bullet and use the free-for-all untested set of utilities, which WILL break your application if someone breaks that function?
Also, imagine you are in an environment where writing tests is not part of the culture, since most projects are very intense and have a very short development span.
Duplicating large systems - such as client registration - would be dumb beyond belief, of course. However, aren't there cases where it is safer to duplicate something fairly small in your project if the alternative is not safe enough (no system for maintaining common code)?
Think of it this way - and this happens all the time - multiple contractors working on different projects, for the same company. They don't even know about each other.
My argument is this:
If a team cannot dedicate to maintaining a solid common codebase, or if the environment does not give them enough time to, it's best to let them work as separate "contractors".
You will STILL need to use large existing systems that simply cannot be duplicated.
Duplicating large systems - such as client registration - would be dumb beyond belief,
That's why those systems publish external interfaces.
If you define a library as shared code between projects: in my experience that's almost always bad. A project should be stand alone, and updates for one project should not affect other projects.
Even if you start with libraries, you'll end up duplicating code anyway. Want to hotfix project 1? It was released with library 1.34, so to keep the hotfix as small as possible, you'll go back to library 1.34 and fix that. But hey - now you did exactly what the library was supposed to avoid - you've duplicated the code.
Every developer uses Google to find code and copy it into his application. That's probably how they found Stack Overflow in the first place. Imagine what would happen if Stackoverflow published libraries instead of code snippets, and you'll get an idea of the problems that afflicts many well meaning library creators.
Libraries tend to be generic solutions to specific problems. Typically, the generic solution is more complex than the sum of the two specific solutions. This means you need one good programmer to solve a problem that could have been solved by two morons. Sounds like a bad tradeoff to me :D
I would like to point a problem in the solutions suggested above: treating the library as an internal project with its own versioning scheme.
The problem
If your company has more than one product (let's say two teams, two products: A and B), then each product has its own release schedule. Let's take an example: Team A is working on product A v1.6. Their release is two weeks from now (suppose Oct 30th). Team B is working on product B v2.4. Their release is 1.5 months from now, Nov 30th. Let's assume both are working against acme-commons-1.2-SNAPSHOT. Both are adding changes to acme-commons as they need them. A couple of days before Oct 30th, team B introduces a buggy change to acme-commons-1.2-SNAPSHOT. Team A goes into stress mode, since they discover the bug one day prior to code freeze.
This scenario shows that treating a common library as a third-party library is almost impossible. The trivial, but problematic, solution is for each team to have their own copy of the version they are about to change. For example, team A, working on product A v1.6, will create a branch (and version) of acme-commons named "1.2-A-1.6". Team B will also create a branch in acme-commons called "1.2-B-2.4". Their development will never collide and they will be stress-free once they have tested their product.
Of course, someone will have to merge their changes back to the original branch (let's say master or 1.2).
The problems I found with this solution is:
Branch inflation - the tree structure will be very puffy and it will be harder to understand the flow of changes/merges.
Merges back to 1.2 will probably never happen - unless a team/developer is dedicated to this library, the chances that Team A or Team B merges their code back to the 1.2 branch are slim. They will always stay focused on their own tasks, thus creating and using their own branch space. Allocating a developer/team is expensive, and thus not always a viable solution.
I'm still trying to figure this one out, so any thoughts on this matter are welcome.