My current workflow when developing apps or programs with Java or C/C++ is as follows:
I don't use an IDE like IntelliJ, Visual Studio, ...
On Linux or OS X, I use vim as my code editor. When I build with a makefile or (for Java) gradle, I run :!make and wait for the compiler and linker to create the executable, which is then run automatically.
In case of compilation errors, the output of the compiler can get very long, and the lines exceed the columns of the console. So everything gets messy, and it sometimes takes too much time to find out what the first error is (often the cause of all the following compile errors).
My question is: what is your workflow as a C++ developer? For example, is there a way to generate a nicely formatted local HTML file that you can view and update in your browser window? Or other ideas?
Yes, I know. I could use Xcode or any other IDE. But I just don't want to.
Compiling in vim with :!make instead of :make doesn't make any sense -- :make is even one of the early features of vim. The former expects us to have good eyes. The latter displays compilation errors in the quickfix window, which we can navigate. In other words, there is no need for an auxiliary log file: we can navigate compilation errors even in (a couple of) editors that run in a console.
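For instance, the whole edit-compile-fix cycle can stay inside vim (a minimal sketch; the makeprg line is only needed when your build command isn't plain make, e.g. with gradle):

    :make                          " run 'makeprg' (make by default), capture errors
    :copen                         " open the quickfix window listing the errors
    :cfirst                        " jump to the first error
    :cn                            " ... and on to the next one (:cp goes back)
    :set makeprg=gradle\ build     " sketch: point :make at gradle instead of make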
I did expand on a related topic in https://stackoverflow.com/a/35702919/15934.
Regarding compilation, there are a few plugins that permit compiling in the background. I've added this facility to build-tool-wrapper lately (it requires vim 7.4-1980 -- and it's still in a development branch at this time). This plugin also permits me to easily filter errors in the standard library with the venerable STLfilt, and to manage several build configurations (each in a separate directory).
For the past 4 years, I have been programming with Eclipse (for Java) and Visual Studio Express (for C#). These IDEs always seemed to provide every facility a programmer might ask for (related to programming, of course).
Lately I have been hearing about something called "build tools". I heard they're used in almost all kinds of real-world development. What are they exactly? What problems are they designed to solve? How come I never needed them in the past four years? Are they a kind of stripped-down command-line IDE?
What are build tools?
Build tools are programs that automate the creation of executable applications from source code (e.g., .apk for an Android app). Building incorporates compiling, linking and packaging the code into a usable or executable form.
Basically, build automation is the act of scripting or automating a wide variety of tasks that software developers do in their day-to-day activities, like the following (a minimal Makefile sketch appears after the list):
Downloading dependencies.
Compiling source code into binary code.
Packaging that binary code.
Running tests.
Deployment to production systems.
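For instance, even a tiny hand-written Makefile automates the compile, link and test steps above (a minimal sketch, assuming GNU make and g++; the file and script names are made up, and recipe lines must be indented with a tab):

    CXX      = g++
    CXXFLAGS = -Wall -O2

    app: main.o util.o
        $(CXX) $(CXXFLAGS) -o app main.o util.o    # link step

    %.o: %.cpp
        $(CXX) $(CXXFLAGS) -c $< -o $@             # compile step

    test: app
        ./run_tests.sh                             # run tests (hypothetical script)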
Why do we use build tools or build automation?
In small projects, developers will often manually invoke the build process. This is not practical for larger projects, where it is very hard to keep track of what needs to be built, in what sequence, and what dependencies there are in the building process. Using an automation tool allows the build process to be more consistent.
Various build tools available (naming only a few):
For Java: Ant, Maven, Gradle.
For the .NET framework: NAnt.
For C#: MSBuild.
For further reading, you can refer to the following links:
1. Build automation
2. List of build automation software
Build tools are tools to manage and organize your builds, and they are very important in environments where there are many projects, especially if they are inter-connected. They serve to make sure that where various people are working on various projects, they don't break anything. And to make sure that when you make your changes, they don't break anything either.
The reason you have not heard of them before is that you have not been working in a commercial environment before. There is a whole lot of stuff that you have probably not encountered that you will within a commercial environment, especially if you work in software houses.
As others have said, you have been using them; however, you have not had to consider them, because you have probably been working in a different way from the usual commercial way of working.
Build tools are usually run on the command line, either inside an IDE or completely separate from it.
The idea is to separate the work of compiling and packaging your code from code creation, debugging, etc.
A build tool can be run on the command line or inside an IDE, both triggered by you. They can also be used by continuous integration tools after checking your code out of a repository and onto a clean build machine.
make was an early command-line tool used in *nix environments for building C/C++.
As a Java developer, the most popular build tools are Ant and Maven. Both can be run in IDEs like IntelliJ or Eclipse or NetBeans. They can also be used by continuous integration tools like Cruise Control or Hudson.
Build tools generally transform source code into binaries: they organize source code, set compile flags, manage dependencies... Some of them also integrate running unit tests, doing static analysis, and generating documentation.
Eclipse and Visual Studio are also build systems (but more of an IDE), and for Visual Studio it is the underlying MSBuild that parses Visual Studio project files under the hood.
The origin of all build systems seems to be the famous 'make'.
There are build systems for different languages:
C++: make, cmake, premake
Java: ant+ivy, maven, gradle
C#: msbuild
Usually, build systems use either a proprietary domain-specific language (make, cmake) or XML (ant, maven, msbuild) to specify a build. The current trend is to use a real scripting language to write build scripts, like Lua for premake and Groovy for gradle. The advantage of using a scripting language is that it is much more flexible, and it also allows you to come up with a set of standard APIs (as a build DSL).
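To illustrate the trend, here is a minimal Gradle build script in the Groovy DSL (a sketch, assuming the standard 'java' plugin; the JUnit coordinate is only an example dependency):

    // build.gradle -- the whole build description for a simple Java project
    plugins {
        id 'java'                                  // adds compileJava, test and jar tasks
    }
    repositories {
        mavenCentral()                             // where dependencies are downloaded from
    }
    dependencies {
        testImplementation 'junit:junit:4.13.2'   // example test dependency
    }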
These are different types of processes by which you can get your builds done.
1. Continuous integration build: Here developers check in their code, and right after the check-in a build is initiated to build the recent changes, so we know right away whether a developer's changes work. This is preferred for smaller projects or components of projects. Where multiple teams are associated with the project, or a large number of developers work on the same project, this scenario becomes difficult to handle: if there are many check-ins and the build fails at certain points, it becomes highly difficult to trace whether all the breakage was caused by one issue or by multiple issues, and if the older issues are not addressed properly it becomes very difficult to trace down the later defects that occurred after that change. The main benefit of these builds is that we get to know whether a particular check-in is successful or not.
2. Gated check-in builds: In this type of check-in, a build is initiated right after the check-in is done, keeping the changes in a shelveset. If the build succeeds, the shelveset check-in gets committed; otherwise it will not be committed to the Team Foundation Server. This gives a slightly better picture than the continuous integration build, as only successful check-ins are allowed to be committed.
3. Nightly builds: These are also referred to as scheduled builds. Here we schedule the builds to run at a specific time in order to build the changes. All the previously uncommitted changes since the last build are built during this build process. This is practiced when we want to check in multiple times but do not want a build every time we check in our code, so we can have a fixed time or period at which to initiate the build for building the checked-in code.
More details about these builds can be found at the locations below.
Gated Check-in Builds
Continuous Integration Builds
Nightly Builds
A build process is the process of compiling your source code, checking for any errors, using some build tools, and creating builds (which are executable versions of the project). We (mainly developers) make some modifications to the source code and check in that code for the build process to happen. The build process then gives one of two results:
1. Either the build PASSES and you get an executable version of your project (the build is ready),
2. or it FAILS: you get certain errors and the build is not created.
There are different types of build process, like:
1. Nightly build
2. Gated build
3. Continuous integration build, etc.
Build tools help automate the process of creating builds.
So, in short, a build is a version of the software in pre-release format, used by the developer or development team to gain confidence in the final result of their product by continuously monitoring the product and solving any issues early during the development process.
You have been using them: an IDE is a build tool. For the command line you can use things like make.
People use command-line tools for things like a nightly build, so that in the morning, with a hangover, the programmer realises that the code he has been fiddling with does not work with the latest builds of the libraries!
"...it is very hard to keep track of what needs to be built" - Build tools does not help with that all. You need to know what you want to build. (Quoted from Ritesh Gun's answer)
"I heard they're used almost in all kind of real-world development" - For some reason, software developers like to work in large companies. They seem to have more unclear work directives for every individual working there.
"How come I never needed them in past four years". Probably because you are a skilled programmer.
Pseudo, meta. I think build tools do not provide any real benefit at all. They are just there to add a sense of security arising from bad company practices and lack of direction - bad software architectural leadership leading to bad actual knowledge of the project. You should never have to use build tools (for testing) in your project. Doing random testing with a lack of knowledge of the software project does not give any sort of help at all.
You should never ever add something to a project without knowing its purpose and how it will work with the other components. Components can be functionally separate, but not work together. (This is the responsibility of the software architect, I assume.)
What if 4-5 components are added to the project, and you add a 6th component? Together with the first added component, it might screw up everything. No automation would help to detect that.
There is no shortcut other than to think think think.
Then there is the auto download from repositories. Why would you ever want to do that? You need to know what you download, what you add to the project. How do you detect changes in versions of the repositories? You need to know. You can't "auto" anything.
What if we were to test bicycles and baby transports blindfolded with a stick and just randomly hit around with it. That seems to be the idea of build tool testing.
I'm sorry, there are no shortcuts:
https://en.wikipedia.org/wiki/Scientific_method
and
https://en.wikipedia.org/wiki/Analysis
We have a handful of developers working on a non-commercial (read: just for fun) cross-platform C++ project. We've already identified all the cross-platform libraries we'll need. However, some of our developers prefer to use Microsoft Visual C++ 2008, others prefer to code in Emacs on GNU/Linux. We're wondering if it is possible for all of us to work more or less simultaneously out of both environments, from the same code repository. Ultimately we want the project to compile cleanly on both platforms from the start.
Any of our developers are happily willing to switch over to the other environment if this is not possible. We all use both Linux and Windows on a regular basis and enjoy both, so this isn't a question of trying to educate one set of devs about the virtues of the other platform. This is about each of us being able to develop in the environment we enjoy most, yet still collaborate on a fun project.
Any suggestions or experiences to share?
Use CMake to manage your build files.
This will let you set up a single repository, with one set of text files in it. Each dev can then run the appropriate cmake scripts to generate the correct build environment for their system (Visual Studio 2008/2005, GNU C++ build scripts, etc.); a minimal sketch follows the list of advantages below.
There are many advantages here:
Each dev can use their own build environment
Dependencies can be handled very cleanly, including platform-specific deps.
Builds can be out of source, which helps prevent accidentally committing inappropriate files
Easy migration to new dev environments (i.e., when VS 2010 is released, some devs can migrate there just by rebuilding their build folder)
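A minimal setup might look like this (a sketch; the project name and source files are hypothetical):

    # CMakeLists.txt -- one build description, checked into the repository
    cmake_minimum_required(VERSION 2.6)
    project(MyGame CXX)
    add_executable(mygame src/main.cpp src/engine.cpp)

Each dev then generates their own environment out of source, e.g.:

    mkdir build && cd build
    cmake -G "Visual Studio 9 2008" ..   # the Visual C++ 2008 devs
    cmake -G "Unix Makefiles" ..         # the Emacs/Linux devs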
I've done it and seen it done without too many problems.
You'll want to try and isolate the code that is different for the different platforms. Also, you'll want to think about your directory structure. Something like:
project/src <- .cc and .h files
project/src/linux|win <- code that is specific to one platform or the other
project/linux <- make files and other project related stuff
project/win <- .sln and .vcproj files
Basically you just want to be really clear what is specific to each system and what is common.
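For instance, a platform difference can be hidden behind one common header, with the build picking the implementation file from src/linux or src/win (a sketch; all file and function names here are made up):

    // src/temp_dir.h -- common interface, no platform-specific code
    #pragma once
    #include <string>
    std::string tempDir();

    // src/linux/temp_dir.cc -- compiled only on Linux
    #include "temp_dir.h"
    #include <cstdlib>
    std::string tempDir() {
        const char* t = std::getenv("TMPDIR");   // POSIX convention
        return t ? t : "/tmp";
    }

    // src/win/temp_dir.cc -- compiled only on Windows
    #include "temp_dir.h"
    #include <windows.h>
    std::string tempDir() {
        char buf[MAX_PATH];
        GetTempPathA(MAX_PATH, buf);             // Win32 API
        return buf;
    }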
Also, unit tests are going to be really important, since there may be minor differences, and you want to make it easy for the Windows guys to run some tests to make sure the Linux code works as expected, and the other way around.
As mentioned in previous posts, Qt is a very easy way to do real simultaneous multi-platform development - independent of your IDE and with many different compilers (even for Symbian, ARM, Windows CE/Mobile...).
In my last and my current company I have worked in mixed Linux and Windows developer teams, who work together using Subversion and Qt (which has a very easy and powerful build system, QMake, that hides all the different platform/compiler-specific build environments - you just have to write one build file for all platforms, so easy!).
And: Qt contains nearly everything you need:
simple string handling, with easy translation support and easy string conversion (UTF-8, UTF-16, ASCII...)
simple classes for file I/O, images, networking, etc.
comfortable GUI classes
graphical designer for the GUI layout/design
graphical translation tool to create dynamic translations for your apps (you can run one binary with different selectable languages - even Cyrillic or Asian fonts...)
integrated testing framework (unit tests)
So Qt is a full-featured and very reliable environment - I have used it for 10 years.
And: Qt integrates seamlessly into IDEs like VC++ and Eclipse, or provides its own IDE, QtCreator.
see: http://www.trolltech.com + http://doc.trolltech.com
I had such experience working at Acronis. We had the same codebase used to build binaries (fully packaged installers, actually) targeting Win32/64, Linux, and OS X (and a bunch of other, more exotic platforms, such as EFI), with everyone working in their IDE of choice. The trick here is to avoid compiler- and IDE-specific solutions, and make your project fully buildable from clean cross-platform make files. Note that you can perfectly well use any make system together with VC++, so it isn't a problem (we used Watcom make for historical reasons, but I wouldn't recommend it).
One other trick you can do is add a make script that automatically produces project files from the input lists in your makefiles, for all the IDEs you use (e.g. VS and Eclipse CDT). That way, every developer generates that stuff for himself, then opens those projects in the IDE to edit/build/debug, but the source repository only has makefiles in it.
Ensuring that code is compilable for everyone can be a problem, mostly because VC++ is generally more lax in applying the rules than g++ (for example, it will let you bind an rvalue to a non-const reference, albeit with a warning). If you compile with warnings treated as errors, and the highest warning levels (with perhaps a few hand-picked warnings disabled), you will mostly avoid this. Having a continuous rolling build set up is another way to catch those early. One other approach we used at Acronis was to have the Windows build environment include Cygwin-based cross-compilation tools, so any Windows dev could do a build targeting Linux (with the same g++ version and all) from his box, and see if that fails, to verify that his changes will compile in g++.
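Concretely, that amounts to flags along these lines (a sketch; the exact warning set is a matter of taste):

    g++ -Wall -Wextra -Werror -c foo.cpp   # GCC: lots of warnings, treated as errors
    cl /W4 /WX /c foo.cpp                  # Visual C++: warning level 4, warnings as errors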
I personally use cmake/mingw32/Qt4 for all my C++ needs.
QtCreator is a cross-platform IDE which is somewhat optimized for Qt4 programming.
We're working on a cross-platform project as well. We're using Emacs to code, SCons to build and Visual Studio 2008 to debug. It's my first time using Emacs + SCons, and I must say that it's very, very nifty once you figure out how the SConstruct and SConscript files work.
I'm late to this question, and there are a lot of good answers here, but I haven't seen anyone enumerate all the issues I've run into (most of my work in C++ is and has been cross-platform), so:
Cross-platform compilation. Everyone has covered this quite well: CMake and SCons are useful, among others.
Platform differences. These aren't actually limited to Linux v Windows. Different versions of Windows have plenty of subtle issues. If you ever want to jump from Linux to OS X you'll find the same. So do processor architecture differences: 32 v 64 bit doesn't usually matter - until it does and things break on you very badly. This is something C++ programmers face more than in most other languages, whose implementations are working hard to hide this sort of thing from programmers.
Test everything. Alan mentions unit tests but let me emphasize them. The real problem with cross-platform programming is not code that won't compile: it's code that compiles just fine and does the wrong thing in subtle cases. Unit tests matter here much more than they do when you are working on a single platform.
Use portable libraries whenever possible. Getting this right is full of subtle gotchas. It's better to take advantage of someone else's work than to roll your own. Qt is useful here, as are NSPR and APR (the latter two are C, but wrappers exist if you don't want to mess with them directly.) These are the libraries that explicitly target platform abstraction as a goal, but most other libraries you use should be checked for this. Be reasonably wary of cool libraries that don't have a track record of portability. I'm not saying don't use them: just test first. Doing this right saves you enormous effort on testing.
Pavel mentions continuous integration. This may not matter if there are only a handful of you. But I've found that the number of platforms you get when you consider all of the OS, OS variant, and processor differences means that without some form of continuous build-test cycle you will always be missing some edge case. If your project gets bigger than a handful of people, consider doing this.
Version control. The most obvious issue is handling newlines and filename case-sensitivity but there are others. Most of the well-known open source VCSes do this right by now, but it's notable that it usually takes them several years to find all their bugs related to portability. As an example Mercurial, which has had portability as one of its selling points, has a series of fixes spanning three or four releases dealing with Windows filenames in uncommon situations. Make sure you can check out what the others check in early on.
Scripts. Use Perl/Python/Ruby (or something like them) for your scripting. Trying to keep Linux shell and Windows batch scripts in sync is painfully tedious.
Despite all of the above: overall, cross-platform is quite doable, and there's no real reason to avoid it. My experience is that the problems tend to crop up sporadically, but when they do they take more than their fair share of time. If you've been programming a while, though, you won't find that unusual.
The issue is not really editing the C++ source files, but the build process itself. VS doesn't use the same Makefile architecture as Linux development typically does. A radical suggestion, but both groups could actually use the same C++ IDE and build process - see Code::Blocks for more info.
This is possible (I do that to earn my daily money :-) ).
But you have to keep in mind the differences in the OS-supplied libraries, which can be very annoying.
Two good options to get around that are Boost and Qt.
Both supply you with platform-independent functions for useful stuff that isn't handled by the C lib and the STL.
I've done this in two ways:
Each platform has its own build system. Whenever someone changes something (adding a source file), they either adapt the other platform too, or they mail a notification and someone more familiar with the other platform adapts it accordingly.
Use CMake.
I can't say I really liked CMake, but a cross-platform tool certainly has its advantages.
Generally, using different compilers to compile your code from the very first few lines is always a good idea, as it helps improve code quality.
Another vote for cmake.
One thing to watch out for: filenames are case-preserving but not case-sensitive on Windows. They are case-sensitive on most Unix filesystems.
This is generally a problem with #include directives.
E.g., given a file FooBar.h:
#include "foobar.h" // works on Windows, fails on Linux.
I do agree with everyone here that has suggested cmake for cross platform development in C++. It is an excellent tool.
Another thing I would suggest is to use Eclipse CDT as the development environment. It works anywhere you can run Java and gcc, and that unifies your development environment.
I think it was Alan Jackson who gave emphasis to the unit tests. For doing so, you would need some cross-platform unit test libraries. I read this post some time ago regarding C++ unit test frameworks. It is a bit outdated but thoughtful. The one missing that also works on both platforms is googletest.
Finally if you want to run those test automatically in both platforms cmake has another tool called ctest that is very good for doing so.
Check out premake... It is pretty similar to CMake but written using Lua.
I have used this on a number of development projects and find it easy to learn and integrate into existing company and project structures.
Give it a try!
http://industriousone.com/premake
We work under Linux/Eclipse/C++, using Eclipse's "native" C++ projects (.cproject). The system comprises several C++ projects, all kept under svn version control, using the integrated subclipse plugin.
We want to have a script that would check out, compile and package the system, without us needing to drive this process manually from Eclipse, as we do now.
I see that there are generated makefiles and support files (sources.mk, subdir.mk etc.) scattered around, which are not under version control (probably the subclipse plugin is "clever" enough to exclude them). I guess I could put them under svn and use them in the script we need.
However, this feels shaky. Has anybody tried it? Are there any issues to expect? Are there recommended ways to achieve what we need?
N.B. I don't believe that the idea of adopting another build system will be accepted nicely, unless it's SUPER-smooth. We are a small company of 4 developers running full-steam ahead, and any additional overhead or learning curve will not be appreciated :)
I would not recommend putting things that are generated in an external tool into version control. My favorite phrase for this tactic is "version the recipe, not the cake". Instead, you should use a third party tool like your script to manipulate Eclipse appropriately to generate these files from your sources, and then compile them. This avoids the risk of having one of these automatically generated files be out of sync with your root sources.
I'm not sure what your threshold for "super-smooth" is, but you might want to take a look at Maven2, which has a plugin for Eclipse projects to do just this.
I know that this is a big problem (I had exactly the same one; in addition, maintaining a build workspace in svn is a real pain!).
Problems I see:
You will get into problems as soon as somebody adds or changes project settings files but doesn't trigger a new build for all possible platforms (the makefiles aren't updated).
There is no overall makefile, so you cannot easily use the build order of your projects that Eclipse has calculated.
BTW: I wrote an Eclipse plugin that builds up a workspace from a given (textual) list of projects and then triggers the build. That's possible but also not an easy task.
Unfortunately I can't post the plugin somewhere because I wrote it for my former employer...
I'm aware there has been a generic question about the "best IDE in C++", but I would like to stress that I'm new to C++ and programming in general. This means I have the needs of a student:
relatively easy and unbloated working environment
things just work, focus on the code
color coding to show the different language features (comments, etc)
not too unfriendly (not a simple editor, something to handle projects from start to finish)
cross-platform so not to be bound with specific system practices
I think the above are relatively reasonable demands for an educational IDE, perhaps excluding the last as such universal tool might not exist. Any ideas?
It depends on which world you are coming from to learn C++.
Do you have previous Java experience? - Use Eclipse CDT.
Have you used .NET previously? - Go with Visual Studio C++ Express Edition (and then throw it away if you really need a multiplatform IDE, not just the code).
Are you a Unix guy? Use just a syntax-highlighting editor + Makefile. When you are learning the basics of C++, the project should not be complicated, and it is well-invested time to learn how the C++ compiler is called, with preprocessor options, etc.
Code::Blocks is free and really easy to install and use. I always recommend it to my students.
I've heard good things about Code::Blocks. Might be a bit complex, but you can close any unneeded panes, and it's cross-platform.
I would recommend Komodo Edit.
It functions as a great text editor that I've used on Ubuntu, Windows (XP/7) and OS X. Its big brother is a full-blown IDE, but KE still allows for projects and some great extensions. It's also free and open source. I found it easy to get started quickly with it, and as your skills grow it has the ability to keep up.
Edit to add a link to ActiveState's community site for Komodo Extensions. If you decide to try out KE, I'd suggest the RemoteDrive Tree (ssh,ftp,scp remote editing) and Source Tree as a start.
If you are using both Windows and Linux (as your comment indicates), I'd recommend Qt Creator. Qt is cross-platform, so your apps will work on Linux, Windows, and Mac. Qt has excellent documentation, too, so it's very newbie-friendly. Signals and slots take a bit of getting used to, but IMO it's worth it.
Until the last point I would have said Microsoft Visual C++ Express Edition, which is free and fits your first four criteria. Cross-platform, you'd be looking at something like Emacs or vim, neither of which is particularly friendly. On Windows I actually use Notepad++ for small C++ programs, as it has good syntax highlighting and a (limited) IntelliSense.
I'd recommend Eclipse CDT, as it does good code completion and builds code on the fly, so you can see your errors immediately, which is very good when studying a language.
Assuming a Linux/Unix-like system ...
I've found that it's much easier and more beneficial to go the other way round: try using a 'simple' editor like vim, and for C++ just Makefiles to compile using gcc and the linker.
I started using that at uni, and more than 5 years and a couple of companies later it's still the easiest and most flexible option, because you have quick access to all settings in one simple file.
Even when you switch to an IDE later on, you will know what to look for if things don't work, because you will know the basics: for example, what the steps are to go from source file to object file and link into a binary executable, how to handle libraries, and so on. These things change between IDEs and are often complicated to trace and modify.
You can start with simple makefile and keep improving it over years. It's easy to copy it to your project directory and update file names - for C++ the compilation process will be fairly standard between projects.
I highly encourage you to consider this option. I've learned a lot doing it that way, and you have a backup plan for when your IDE just won't work.
I keep one generic Makefile that compiles main.cpp into an executable. To compile something quickly, I just copy it into the directory and run make.
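Such a generic Makefile can be just a few lines (a sketch, assuming GNU make and g++; recipe lines must be indented with a tab):

    # generic Makefile: copy into a project directory, adjust the names, run make
    CXX      = g++
    CXXFLAGS = -Wall -g

    main: main.cpp
        $(CXX) $(CXXFLAGS) -o main main.cpp

    clean:
        rm -f main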
My current workflow is to open all files in the project directory (flat file system) with vim (vim *.cpp *.hpp), edit, and compile with :mak (or :mak -C .. debug) from within vim to invoke the Makefile stored in the relevant directory. After compiling, it will jump to the first warning/error; use :cn to go over the errors and fix what's needed; open the errors in a separate window with :cope (close with :clo, or unload the file with :bd); jump between split windows with ctrl-w ctrl-w or ctrl-ww (hold ctrl and press w twice) ...
Vim has syntax highlighting and millions of other features; I'm using tags (or ctags) to navigate code from within vim, and so on.
Personally, it's my opinion that all C++ IDEs suck. When I write C or C++, I tend to use some sort of powerful programmer's text editor along with command line compilation. If I'm just messing around and have a couple source files, I'll just invoke gcc -g -o myprog *.c on the command line myself. If I have a more involved project, I'll just write a simple makefile. You could also look into gmakemake if you don't want to bother learning makefile syntax just to compile your programs.
On the Mac side, I have always been a fan of both BBEdit and TextMate, but much more so of the latter, especially given its lesser price tag and more modern feel. Both have project organization features.
On Windows, I'd stick with either e (which is basically a port of TextMate to Windows) or Notepad++. The downside of Notepad++ is that it doesn't have any project organization features, whereas e does. You could also look at SciTE, but like Notepad++, it has no project org features.
As for Linux, I'm personally unsure. I'd stick with other people's answers covering that platform for recommendations there.