I want to know why only specific platforms are listed for the build process in the Zimbra Wiki. Does this mean that building Zimbra on other Linux distributions, for example, is impossible?
What is the real reason behind the Zimbra community's choice of particular Linux distributions for building Zimbra?
I'm currently trying to set up Zimbra on Arch Linux and would like to understand the reasons behind this.
IMHO Zimbra is a nice piece of work on the outside: a useful webmail client that integrates various things nicely, etc. BUT the whole package violates every assumption you ever had about server software on Linux by bundling and compiling nearly all libraries and third-party software into it; this goes as far as basic things like popt.
The stated requirement is that you set Zimbra up on a machine of its own, and don't you dare run anything else on it.
Since everything is so tightly integrated, it's a huge thing to compile, and since each bundled component may need to be compiled differently on each distribution and anything might break anywhere, the effort to maintain a sane build for more than a handful of platforms must be enormous.
It also boggles my mind how anyone could ever think this bundling was a good idea. If a security issue arises in any single piece of it, everything must be rebuilt. Instead of relying on the security updates of their distribution, the admin must patch Zimbra themselves, and so on.
It seems that building Zimbra on any platform is quite the adventure, to put it mildly. The only platform documented nicely and kept up to date is FreeBSD, which is largely thanks to a single person not associated with Zimbra, as far as I can see.
I was the build engineer on a Zimbra-based project for about a year. We built it on CentOS. Building Zimbra on almost every platform is possible; it's just a matter of experience and proficiency, since many programming languages and related technologies are involved in the build process and you need the exact required version of each installed. You might therefore be forced to compile and build some extra packages as well.
I remember that the first time I tried to build Zimbra, it took about two weeks to get a successful build. However, I documented the process completely and clearly, and later on it took about a night to build Zimbra.
I would like to download and try out LLVM. Before that, I wanted to know:
(a) What are the factors to consider before finalizing an LLVM platform (Windows/Linux)?
(b) What is the best way to learn LLVM? I would like to get involved in one of the projects there, so I want to get an overall idea of it. In the process I got overwhelmed by the sheer size of LLVM, its sub-projects, its tools, its support infrastructure, etc.
FYI: I have gone through the basics of LLVM on llvm.org.
I have also worked on compiler-related development and static code analysis.
Please help.
Thank you.
LLVM doesn't run on just one platform. As a contributor, your patches will be expected to work on many platforms. If you're not set up to test on more than one, then you'll be reliant on others being interested enough in your patches to test them for you on other platforms. Your best bet is to use virtual machines (via VMware or VirtualBox or whatever) to provide you with multiple platforms to work with. You'll find the most stability on OS X (Darwin), with Linux a close second, owing primarily to the large number of buildbot slaves that test these configurations.
Your best bet is to pick a smaller project that is relatively contained. For example, you might choose something that is contained within a single pass, a single target, etc. The modularity of the code should make a lot of projects possible without understanding the entire source base. Pick one area to understand deeply first and then move on to others. It is not expected that somebody who can work on the test suite is also capable of understanding the nuances of LiveIntervals.
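To get a feel for how small such a starting project can be, here is a rough sketch of the classic "hello world" function pass for the legacy pass manager (header paths and registration details vary between LLVM releases, so treat it as illustrative rather than exact):

    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    namespace {
      // Prints the number of basic blocks in every function it visits.
      struct CountBlocks : public FunctionPass {
        static char ID;
        CountBlocks() : FunctionPass(ID) {}

        bool runOnFunction(Function &F) override {
          errs() << F.getName() << " has " << F.size() << " basic blocks\n";
          return false;   // the IR was not modified
        }
      };
    }

    char CountBlocks::ID = 0;
    static RegisterPass<CountBlocks> X("countblocks",
                                       "Count basic blocks per function");

Built as a plugin and loaded into opt, something like this lets you poke at real IR from day one without touching the rest of the source base.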
Use the mailing lists and IRC to seek help. Few LLVM contributors seem to regularly visit StackOverflow. As is said frequently within the project, "patches welcome."
I am asking this here because I think my last question was more than one question, so creating another question seemed appropriate. However, you can close it if it does not adhere to SO policies.
In this comment on my last question, I was given some nice advice by Michael Aaron Safyan (at least I liked it):
Once you feel somewhat comfortable in the language, then I would recommend taking a look at Google Code and seeing if there are any C++ projects that are in need of some help.
I am going to be developing in Xcode on a Mac. My question is: do I have to take that into consideration when developing for C++ projects? Is the choice of environment project-based, or can a generic OS/environment such as Mac/Xcode or Ubuntu/Anjuta be used?
Thanks.
In theory, no; in practice, yes.
For core algorithms, the environment wouldn't matter.
For UI, the platform matters tremendously.
Unfortunately, UI drives most applications. Usually, the core algorithms are trivial.
Even in the rare cases where UI doesn't matter, you would still have the problem of common libraries.
When it comes to C++ development there are two major camps: MSVC and gcc. Porting a project between them is not always easy, so since Xcode is gcc-based, as long as you stick to gcc projects (typical filenames to look for are configure and Makefile) you should not have a problem.
If a project has been ported, or is portable, to your platform of choice, of course you can work on it.
But if a project is specific to a certain platform, and that's not your chosen one (e.g., a Windows application), you will have a hard time working on that project as you cannot compile the source, let alone have a test run.
Of course, today there are many virtualization solutions available that allow you to run a different OS in a virtual machine. In my opinion, though, that just adds another layer of problems, especially for a novice programmer.
I'm designing a training program in C++ that will be distributed to a large number of facilities, most of which won't have much in the way of an IT staff. The program connects via a TCP connection to a central database which stores various pieces of data for research and evaluation purposes.
The problem I have is that I would like to make the transmission secure, and the most commonly recommended way to do that seems to be OpenSSL - which seems all well and good, but I've got a problem. As I understand it, OpenSSL must be installed specifically on each of the systems. The facilities won't have the expertise required to compile and install the source on their systems, the computers will be sufficiently varied (all Windows boxes, but of varying make and quality) to rule out distributing a specifically-compiled binary, and continent-wide distribution makes it impossible for my team to personally set it up.
Does anyone have a recommendation for how to solve this problem? Am I simply incorrect in my assumptions, and one can distribute it without installation? If not, is there a more practical alternative?
As long as all your machines are XP or later, two builds of OpenSSL should be enough: one for 32-bit and one for 64-bit. Just provide two separate installers and that should be it. There's no need to compile for each machine.
Just remember to include the Visual C++ redistributable package in your installer as well.
If you have to support ancient Windows versions, it gets a bit more complex but not that much.
Actually, OpenSSL seems like a good option based on what you described.
From what I understand of OpenSSL, it is a library written in C (with wrappers around it for other languages), meaning that you can include it in the code base of whatever it is you are writing.
I'm pretty sure that it is not a program that has to be installed, so I think that you shouldn't have to worry about that.
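For what it's worth, a minimal sketch of a TLS client with OpenSSL linked directly into the application might look like this (assuming OpenSSL 1.1.0 or newer, which initializes itself; the host name, port, and protocol are placeholders, and a real client must also load CA certificates and verify the peer):

    #include <openssl/ssl.h>
    #include <openssl/bio.h>
    #include <openssl/err.h>
    #include <cstdio>

    int main() {
        // TLS client context; library initialization is automatic in 1.1.0+.
        SSL_CTX* ctx = SSL_CTX_new(TLS_client_method());

        // A BIO chain that performs the TCP connect and TLS handshake for us.
        BIO* bio = BIO_new_ssl_connect(ctx);
        BIO_set_conn_hostname(bio, "db.example.com:5000");   // placeholder address

        if (BIO_do_connect(bio) <= 0) {
            ERR_print_errors_fp(stderr);
            return 1;
        }

        const char msg[] = "HELLO\n";
        BIO_write(bio, msg, sizeof(msg) - 1);

        char reply[256];
        int n = BIO_read(bio, reply, sizeof(reply) - 1);
        if (n > 0) {
            reply[n] = '\0';
            std::printf("server replied: %s\n", reply);
        }

        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }

Built this way, the OpenSSL code ships inside (or alongside) your own binary, so nothing extra has to be compiled or installed at each facility.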
You might also like to experiment with IPsec. If you are concerned about distributing binaries etc. to client machines, IPsec could be an interesting solution. Since virtually all Windows boxes support it, all you have to do is configure an IPsec policy on the DB server; by marking it as "required", all the data between the client machines and the DB server will be encrypted.
I am developing a portable C++ application and looking for some best practices in doing that. This application will have frequent updates and I need to build it in such a way that parts of the program can be updated easily.
For a frequently updated program, is splitting the program into libraries the best practice? If the parts are in separate libraries, users can just replace a library when something changes.
If the answer to point 1 is "yes", what type of library do I have to use? On Linux, I know I can create a "shared library", but I am not sure how portable that is to Windows. What type of library do I have to use? I am also aware of the DLL hell issues on Windows.
Any help would be great!
Yes, using libraries is good, but the idea of "simply" replacing a library with a new one may be unrealistic, as library APIs tend to change and apps often need to be updated to take advantage of, or even be compatible with, different versions of a library. With a good amount of integration testing though, you'll be able to 'support' a range of different versions of the library. Or, if you control the library code yourself, you can make sure that changes to the library code never breaks the application.
On Windows, DLLs are the direct equivalent of shared libraries (.so) on Linux, and if you compile both in a common environment (either cross-compiling or using MinGW on Windows) then the linker will just do it the same way. Presuming, of course, that all the rest of your code is cross-platform and configures itself correctly for the target platform.
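As a hedged illustration, the usual way to make one header serve both worlds is an export macro (all names here are invented):

    // mylib.h - MYLIB_API, MYLIB_BUILD and mylib_version() are illustrative names only.
    #if defined(_WIN32)
      #ifdef MYLIB_BUILD                    // defined only while building the DLL itself
        #define MYLIB_API __declspec(dllexport)
      #else
        #define MYLIB_API __declspec(dllimport)
      #endif
    #else                                   // Linux/macOS shared object
      #define MYLIB_API __attribute__((visibility("default")))
    #endif

    MYLIB_API int mylib_version();          // same declaration on every platform

The application includes the same header everywhere; only the build of the library itself defines MYLIB_BUILD.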
IMO, DLL hell was really more of a problem in the old days when applications all installed their DLLs into a common directory like C:\WINDOWS\SYSTEM, which people don't really do anymore simply because it creates DLL hell. You can place your shared libraries in a more appropriate place where they won't interfere with other, non-aware apps, or - simplest of all - just keep them in the same directory as the executable that needs them.
I'm not entirely convinced that separating out the executable portions of your program in any way simplifies upgrades. It might, maybe, in some rare cases, make the update installer smaller, but the effort will be substantial, and certainly not worth it the one time you get it wrong. Replace all executable code as one in most cases.
On the other hand, you want to be very careful about messing with anything your users might have changed. Draw a bright line between the part of the application that is just code and the part that is user data. Handle the user data with care.
If it is an application, my first choice would be to ship a single statically linked executable. I had the opportunity to work on a product that was shipped on five platforms (Win2K, WinXP, Linux, Solaris, Tru64 Unix), and believe me, maintaining shared libraries or DLLs for a large codebase is a hell of a task.
Suppose this is a non-trivial application which involves the use of third-party GUI toolkits, threads, etc. In C++ there is no single way of doing this on all platforms, which means you will have to maintain different codebases for different platforms anyway. Then there are the weird behaviours (bugs) of third-party libraries on different platforms. All this becomes a burden if the application is shipped with different library versions, i.e. different versions attached to different platforms. I have seen people ship libraries to all platforms when the fix was only for a particular platform, just to avoid versioning confusion. But it is not that simple: customers often have their own view of how they want to upgrade or patch, which also has to be considered.
Of course, if the binary you are building is huge, then one can consider DLLs/shared libraries. Even in that case, what I would suggest is to build your application in the form of layers, like:
Application-->GUI-->Platform-->Base-->Fundamental
Here some libraries can contain common code for all platforms, and only specific libraries like 'Platform' need updating for platform-specific behaviours. This will make your life a lot easier.
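As a rough sketch of where the 'Platform' boundary might sit in C++ (interface and names invented for illustration):

    #include <string>

    // Interface the upper layers program against; it lives in the common code.
    class Platform {
    public:
        virtual ~Platform() = default;
        virtual std::string configDirectory() const = 0;   // per-platform settings path
    };

    // One concrete class per platform, each built into its own small library,
    // so a platform-specific fix only requires rebuilding and shipping that library.
    class LinuxPlatform : public Platform {
    public:
        std::string configDirectory() const override { return "/etc/myapp"; }
    };

    class WindowsPlatform : public Platform {
    public:
        std::string configDirectory() const override { return "C:\\ProgramData\\myapp"; }
    };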
IMHO a DLL/shared-library option is viable when you are building a product that acts as a complete solution rather than just an application. In such a case, different subsystems use common logic simultaneously within your product framework, and that logic can then be shared in memory using DLLs/shared libraries.
HTH,
As soon as you're trying to deal with both Windows and a UNIX system like Linux, life gets more complicated.
What are the service requirements you have to satisfy? Can you control when client systems get upgraded? How many systems will you need to support? How much of a backward-compatibility requirement do you have?
To answer your question with a question, why are you making the application native if being portable is one of the key goals?
You could consider moving to a virtual platform like Java or .NET/Mono. You can still write C++ libraries (shared libraries on Linux, DLLs on Windows) for anything that would be better as native code, but the bulk of your application will be genuinely portable.
Assume you have five products, and all of them use one or more of the company's internal libraries, written by individual developers.
It sounds simple, but in practice I have found it to be very difficult to maintain.
How do you deal with the following scenarios:
A developer unintentionally introduces a bug and breaks everything in production.
Every library has to mature; that means its API needs to evolve. How do you deploy the updated version to production if every developer needs to update/test their code while they are extremely busy on other projects? Is this a resource and time issue?
Version control, deployment, and usage. Would you store this in one global location, or force each project to use, say, svn:externals to "tie" in a library?
I've found that it is extremely hard to come up with a good strategy. My own pet theory is this:
Each common library has to have a super-thorough set of tests or else it should never be common, even if it means someone else duplicates the effort. Duplicate untested code is better than common untested code (you break only one project).
Each common library has to have a dedicated maintainer (can be offset by a really good test suite in a smaller team).
Each project should check out the version of the library that is known to work with it. This means a developer does not have to get pulled away to update API usage, as the common code gets updated. Which it will be. Every non-trivial piece of code evolves over months and years.
Thank you for your thoughts on this!
You have a competing set of goals here. First, a library of reusable components must be open enough that people from the other projects can easily add to it (or submit components to it). If it's too difficult for them to do that, they'll build their own libraries, and ignore the common one, leading to a lot of duplicate code and wasted effort. On the other hand, you want to control the development of the library enough that you can ensure its quality.
I've been in this position. There's no easy answer. However, there are some heuristics that can help.
Treat the library as an internal project. Release it on regular intervals. Ensure that it has a well-defined release procedure, complete with unit tests and quality assurance. And, most important, release often, so that new submissions to the library show up in the product frequently.
Provide incentives for people to contribute to the library, rather than just making their own internal libraries.
Make it easy for people to contribute to the library, and make the criteria clear-cut and well-defined (e.g., new classes must come with unit tests and documentation).
Put one or two developers in charge of the library, and (IMPORTANT!) allocate time for them to work on it. A library that is treated as an afterthought will quickly become an afterthought.
In short, model the development and maintenance of your internal library after a successful open source library project.
I don't agree with this:
Duplicate untested code is better than common untested code (you break only one project).
If you are all equally likely to create bugs by implementing the same thing, then you'll all have to fix potentially different bugs in each instance of the "duplicate" library.
It also seems that it'd be much faster/cheaper to write the library once and, instead of having multiple other teams write the same thing, have some resources allocated to testing.
Now to solve your actual problem: I'd mimic what we do with real third-party libraries. We use a particular version until we're ready, or compelled, to upgrade. I don't upgrade everything just because I can; there has to be a reason.
Once I see that reason (bug fix, new feature, etc.), then I upgrade with the risk that the new library may have new bugs or breaking changes.
So your library project would continue development as necessary, without impacting individual teams until they were ready to "upgrade".
You could publish releases, or use svn branches/tags to pin versions, to help with all this.
If all teams have access to the bug tracker, they could easily see what known issues exist in the upgrade-candidate before they upgrade, too. Or, you could maintain that list yourself.
@Brian Clapper provides some excellent guidelines for how to run your library as a project in his answer.
I used to work in a similar situation to what you're describing, only my company had dozens of software products. I worked on the team that was responsible for maintaining and upgrading the core set of libraries that everyone else used.
We dealt with those scenarios as follows:
Test the heck out of the core libraries. Maintaining duplicate code is a nightmare. You're not just maintaining the core and one copy. Somewhere in your company's source control there are several copies of the same code. We had dozens of products, so that would have meant dozens of copies. Hunt them down and kill them.
We had a small team of 10-12 developers dedicated to maintaining the core library and its test suites. We were also responsible for fielding calls from the other 1100 developers in the company about how to use the core library, so as you can imagine, we were very busy.
Each other project needs to work with the version of the core library that it is known to work with. You can use version control branches to test new releases of the core library with old products to make sure you don't break code that works. If the core team does a thorough job of testing, this should go very smoothly. The only time this ever got really complicated for us was when the core API changed, or when we flat out screwed something up. Even if you're very confident in your core testing, use branches to test individual products.
I agree - this is difficult. In our small team (consulting, not a product company, which made it harder), we had one common component that stood out from the others. In this case the recipe for success was:
Make a good developer responsible for developing the component
Make a good developer the gatekeeper for maintaining the component
Make sure all upgrades (there were several) are backward compatible
Make sure there is some basic documentation (or a simple reference application) explaining how the component is to be used
Make sure all developers know that the component exists (!) and where they can find it (along with the code, if they wish to review it)
Give developers the ability to review the code and suggest better implementations or refactoring, but have the final modifications go through an experienced gatekeeper.
When the component was upgraded, older apps did not have to upgrade. If we did a new release, we evaluated whether we wanted to upgrade, and if we did, all we needed to do was swap the libraries - no code needed to change, unless we wanted to use some new features available through the upgrade. Resistance is inevitable, but sometimes it is a good sort of resistance when it comes from good developers who have better ideas for a new generation of, or a refactored, component.
Treat the development of the libraries like any other product. Each library has its own repository, its own releases and version numbers. The compiled and officially tested versions of the library are also kept in the repository. Document features and changes from version to version.
Then use the libraries like you would using third party libraries. Your product uses only fixed versions of the compiled libraries. Switch to a new version when you really need to and be aware that this involves more testing. Add the versions you use to your version control.
When you find a bug or require a new feature in a library, a new version or sub-version is created. Using a version control system like svn makes this easy. When you need the source code for debugging purposes, export it and include it in your projects, but do not change it there; fix problems in the libraries' repositories instead.
This way, every team can contribute to the libraries without endangering the work of the other teams. Switching versions is done deliberately and not by accident.
Create an anti-corruption layer (DDD) for the existing library; this is nothing but a facade. Then write unit tests for that anti-corruption layer. Now even if someone upgrades or updates the library, you will know whether something is broken by running the unit tests.
These tests can also serve as documentation of the contract, and not every project that needs to use the library has to write its own anti-corruption layer if they use the exact same functionality.
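A minimal sketch of what that could look like, where the wrapped library call and all the names are placeholders rather than any real API:

    #include <cassert>
    #include <string>

    // Stand-in for a function exposed by the shared in-house library.
    namespace acmelib {
        inline std::string encode_spaces(const std::string& s) {
            std::string out;
            for (char c : s) {
                if (c == ' ') out += "%20";
                else out += c;
            }
            return out;
        }
    }

    // The anti-corruption layer: the only place in the project that touches acmelib.
    class TextCodec {
    public:
        static std::string encode(const std::string& s) {
            return acmelib::encode_spaces(s);
        }
    };

    // Contract tests for the facade: if a library upgrade changes behaviour,
    // these fail before any production code does.
    int main() {
        assert(TextCodec::encode("a b") == "a%20b");
        assert(TextCodec::encode("ab") == "ab");
        return 0;
    }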
"Duplication is the root of all evil"
Sounds to me like you need:
An artifact repository like Ivy so you can have the libraries shared and versioned with a distinction between versions that are API stable and ones that are "maturing"
Tests for the libraries
Tests for the projects using them
A continuous integration system so that when an incompatibility or bug is introduced both the project and the original library developer are notified
I think that one shared library is better than three duplicate ones (and one tested library is definitely better than three untested ones). That's because when you find and fix a problem, the whole application area becomes more solid (and development and maintenance are more efficient).
BTW, that's one of the reasons (apart from contributing back to the community) why our company exposes our .NET shared libraries to the public as open source.
Plus, there's less code to write. And you can designate one dev to enforce good development practices on the library and its usages (i.e. through code contracts enforced by the unit tests within library consumers). This improves quality and reduces maintenance costs.
We store shared libraries as binaries in the solution. That comes from the logical requirement that any solution has to be atomic and independent (this rules out svn:externals links).
API compatibility is not an issue at all. Just let your integration server rebuild and retest the whole product stack (while updating all the inner references and propagating changes) and you'll always be sure that all internal APIs are solid. And whoever breaks the API has to either fix it or update the usages.
Duplication is the root of all evil
I would argue that unchecked government is the root of all evil :)
I do get a lot of flak for even suggesting that duplication should be an option. I understand why, but let me complicate this a bit.
Say you have a fairly large library that doesn't actually do anything in particular - it's just a collection of utilities. There are NO tests for this library - at all. You need only one function from it. Say, something that parses out a file's extension.
Pop quiz: do you just write something as small as this in your own project, or do you bite the bullet and use the free-for-all untested set of utilities, which WILL break your application if someone breaks that function?
Also, imagine you are in an environment where writing tests is not part of the culture, since most projects are very intense and have a very short development span.
Duplicating large systems - such as client registration - would be dumb beyond belief, of course. However, aren't there cases where it is safer to duplicate something fairly small in your project if the alternative is not safe enough (no system for maintaining common code)?
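To make "something as small as this" concrete, the locally duplicated helper in that pop quiz might be nothing more than the following (a sketch, not taken from any particular library; note it does not special-case dots in directory names):

    #include <string>

    // Returns the part after the last '.', or an empty string if there is none.
    std::string fileExtension(const std::string& path) {
        const std::string::size_type dot = path.rfind('.');
        return dot == std::string::npos ? std::string() : path.substr(dot + 1);
    }

Whether a handful of lines like that justifies a dependency on an untested shared grab-bag is exactly the trade-off being argued here.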
Think of it this way - and this happens all the time - multiple contractors working on different projects, for the same company. They don't even know about each other.
My argument is this:
If a team cannot dedicate to maintaining a solid common codebase, or if the environment does not give them enough time to, it's best to let them work as separate "contractors".
You will STILL need to use large existing systems that simply cannot be duplicated.
Duplicating large systems - such as client registration - would be dumb beyond belief,
That's why those systems publish external interfaces.
If you define a library as shared code between projects: in my experience that's almost always bad. A project should be stand-alone, and updates for one project should not affect other projects.
Even if you start with libraries, you'll end up duplicating code anyway. Want to hotfix project 1? It was released with library 1.34, so to keep the hotfix as small as possible, you'll go back to library 1.34 and fix that. But hey, now you've done exactly what the library was supposed to avoid: you've duplicated the code.
Every developer uses Google to find code and copy it into his application. That's probably how they found Stack Overflow in the first place. Imagine what would happen if Stack Overflow published libraries instead of code snippets, and you'll get an idea of the problems that afflict many well-meaning library creators.
Libraries tend to be generic solutions to specific problems. Typically, the generic solution is more complex than the sum of the two specific solutions. This means you need one good programmer to solve a problem that could have been solved by two morons. Sounds like a bad tradeoff to me :D
I would like to point out a problem with the solutions suggested above: treating the library as an internal project with its own versioning scheme.
The problem
If your company has more than one product (let's say two teams, two products: A and B), then each product has its own release schedule. Let's take an example: Team A is working on product A v1.6, and their release is two weeks from now (say Oct 30th). Team B is working on product B v2.4, and their release is 1.5 months from now - Nov 30th. Let's assume both are working against acme-commons-1.2-SNAPSHOT, and both are adding changes to acme-commons as they need them. A couple of days before Oct 30th, Team B introduces a buggy change to acme-commons-1.2-SNAPSHOT. Team A goes into stress mode, since they discover the bug one day before their code freeze.
This scenario shows that treating a common library like a third-party library is almost impossible. The trivial, but problematic, solution is for each team to have its own copy of the version it is about to change. For example, Team A (product A v1.6) would create a branch (and version) of acme-commons named "1.2-A-1.6". Team B would also create a branch of acme-commons, called "1.2-B-2.4". Their development would never collide and they would be stress-free once they had tested their product.
Of course, someone will have to merge their changes back to the original branch (let's say master or 1.2).
The problems I found with this solution is:
Branch inflation - the tree structure will be very puffy and it will be harder to understand the flow of changes/merges.
Merges back to 1.2 will probably never happen - unless a team/developer is dedicated to this library, the chances that Team A or Team B merges their code back to the 1.2 branch are slim. They will always stay focused on their own tasks, thus creating and using their own branch space. Allocating a dedicated developer/team is expensive, and thus not always a viable solution.
I'm still trying to figure this one out, so any thoughts on this matter are welcome.