Are there any Netbooks powerful enough for moderate C++ compilation? - c++

I've tried a few Asus Ones, and found that even switching between multiple windows could take seconds. Is there anything powerful enough in that form factor for C++ programmers to build small to moderate size projects?

I also give it a qualified yes.
What OS you use may matter a lot. I have Kubuntu on an HP 2140 netbook with only 1 GB of RAM and the usual Intel Atom N270 CPU, and it is actually rather snappy for window and desktop switches under KDE 4.3.
Compile times are OK, but I am spoiled by better machines at the office and at home. I got this for the form factor, and I take it with me while commuting. I mostly edit, write docs, etc. while I am on the train and then commit back to SVN at the other end. That works well for me, including the occasional make or make check.

It depends on what compiler and editor/IDE you decide to use. The wimpiest Netbook is still a killer machine compared to what we used 20 or even 10 years ago. One of the easiest routes to better performance is to use an older editor/IDE (the compiler itself will probably be all right). Of course, we expected slower compilation back then too, but even so a minute to switch between windows would have been excessive.

Perhaps the HP Mini-Note? (Amazon link)
You could also run the build under the nice command (e.g. nice -n 19 make), which lowers its scheduling priority so the heavy work only competes for the CPU when you're not doing much else with the machine.

I have an eeepc and it's okay for compilation. I definitely wouldn't want to compile a complete Boost build on it. It works, but you're kind of slumming it. P4 speed, slow hard drive, small amount of RAM... less everything.

A low-quality netbook has more resources and capabilities than my development workstations did a decade ago. What you will need, however, is RAM. If you try to do too many things at once on one of these machines you will swap like crazy, because modern software and modern operating systems are written by lazy, slack developers who think RAM is a limitless resource. So either keep the workload light or boost the RAM. My wife's netbook got boosted to 1 GB before the unit was even taken home and it's not at all bad.

Related

Minix vs Linux for Learning Operating System Design?

I wish to learn operating system design. I was wondering if I should tackle Minix or GNU/Linux in the process? I like books so I would be following mainly a book, though video resources (presumably videotaped lectures) would also be welcome.
I have formally studied C and C# and can program small to medium sized programs in them. I also have a very basic understanding of data structures.
If I take the Minix route, should I tackle version 2 (simpler??) or version 3?
I would go for the Minix route, just because of my personal experience with it. Minix is very straightforward and written from an educational point of view. The Linux kernel, on the other hand, has been around for so long and is so heavily optimized that I do not think it is a good starting point.
I wouldn't worry too much about which Minix version. The concepts remain the same. With the newer versions you are able to run X on it, which can be helpful, but at the same time adds more complexity. Just go with whichever version you can find a good book for.
Operating Systems: Design and Implementation covers Minix, so this might be a good argument in favour of Minix.
Without having touched this topic myself: Linux is rather large (over 10 million lines last time I checked, though of course you would not have to study all of it), while Minix uses a microkernel architecture with separate modules, so it might be easier to grasp.
I would go for Minix.
(on the other hand, O'Reilly has a number of books on Linux; but I think I would still go with Minix, having that phat book as the reference)
At an internship I did, I had to change the hard drive driver in Minix so that it serves requests using the elevator algorithm instead of first-come-first-served. I was supposed to do it in Minix 2, but I wanted to do it in Minix 3 because I never like using old technologies.
In the two months I was working on it, the most frustrating thing was that Minix 3 took about 20 minutes to compile in VMware on a laptop with an i5 processor and 4 GB of RAM running Windows 7. Finally, after two months, I gave up on Minix 3 and switched to Minix 2, which compiled in about 20 seconds.
Now I'm not saying there couldn't have been something very wrong about how I was compiling the system, but I was trying really hard to speed it up with no success.
Let me just say that at the time I had just received my Master's degree in computer science and had five years of intensive experience programming in C (just so you don't think I'm a self-taught programmer who decided to jump into programming by redesigning an operating system :D ).
EDIT: In the end, I suggest you try compiling Minix 3 to see how it goes for you. If you have more luck, definitely go with that one, because it has more modern OS concepts. On the other hand, if you are a complete beginner, you'll probably learn tons from Minix 2. I did.
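For readers who haven't met the elevator algorithm I mentioned above, here is a small user-space sketch (hypothetical request queue and track numbers, not actual Minix driver code) of the scheduling idea: pending requests are served in track order along the current head direction, then the sweep reverses, instead of strictly first-come-first-served.

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <vector>

// Order pending disk requests elevator/SCAN-style: serve everything on the
// current side of the head in sweep order, then reverse direction.
std::vector<int> elevatorOrder(const std::vector<int>& tracks, int head, bool movingUp) {
    std::vector<int> below, above;
    for (int t : tracks)
        (t < head ? below : above).push_back(t);
    std::sort(above.begin(), above.end());                        // nearest first going up
    std::sort(below.begin(), below.end(), std::greater<int>());   // nearest first going down
    std::vector<int> order;
    const std::vector<int>& first  = movingUp ? above : below;
    const std::vector<int>& second = movingUp ? below : above;
    order.insert(order.end(), first.begin(), first.end());
    order.insert(order.end(), second.begin(), second.end());
    return order;
}

int main() {
    // Requests arrived in this (FCFS) order; head currently at track 53, moving up.
    for (int t : elevatorOrder({98, 183, 37, 122, 14, 124, 65, 67}, 53, true))
        std::printf("%d ", t);   // prints: 65 67 98 122 124 183 37 14
    std::printf("\n");
}
```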
As other posters have said, starting with Linux can be difficult because it is now so large and complex that the barrier to entry has sky-rocketed. But, if you do choose this route, I recommend starting with one small subsystem and focusing on that.
There are other possibilities, such as FreeBSD or even GNU/Hurd (or your own toy kernel). It depends on what you really want to learn.
If you know Linux and want to learn how to write drivers, writing your own Linux driver kernel module is sensible.
It also depends upon your precise definition of "operating system", which is not necessarily the same thing as an OS kernel.
Not all operating systems are "Unix-like", e.g. coyotos, Kangaroo, ... See also tunes.org
J. Pitrat's book Artificial Beings (The Conscience of a Conscious Machine) has interesting insights on what an OS could be.

C++ BoundsChecker followup

We've been running for years with BoundsChecker for Visual C++ 6 (I think it was BoundsChecker 5 or 6, too). We've upgraded to VS2008 (finally!), and now need a successor to the outdated BoundsChecker.
How's the landscape?
What tools are out there?
Any new kids in town?
Any new ideas dealing with the problems we used memory profilers for?
Your recent experiences with these tools?
Recommendations?
The main application is C++ with many COM DLLs; we are looking to track native C++ and COM leaks and objects. For a project of that size, BoundsChecker was already a pain in terms of performance, sorting through the slew of data, and some of its limitations.
Support for managed applications (primarily C#) is required, though that may be a separate tool.
Related (but IMO incomplete) question: Modern equivalent of BoundsChecker for Visual Studio 2008
[edit]
Regarding the comment, "In modern C++, you just use self-checking types, and bounds are never broken":
Reference-counted smart pointers can have cyclic references. Interfacing COM components is inherently unsafe, as it requires a lot of manual memory management. I've had a UI-less 3rd-party service leak GDI handles so that it crashed our overnight tests; the vendor blamed it on a "strange" Microsoft API. I have to interface C-based libraries, and I have tons of legacy code that assumes allocation trickery in the style of Numerical Recipes is a good thing and that variable names longer than 3 letters are for typists. I have code from engineers for whom a std::vector<double>::iterator looks much scarier than a double ***; good luck developing and testing that without a solid background in signal processing.
So unless you come here, rewrite and encapsulate the core of a million lines of code in fool-proof C++ classes, and make sure a few dozen products still work as before, keep your smart-assery to yourself. I wish I didn't need a memory checker, but I do. Thank you.
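To make the cyclic-reference point above concrete, here is a minimal sketch (hypothetical Node type, nothing to do with our code base) of how reference-counted smart pointers can still leak, which is exactly the kind of thing I want a leak checker to point out:

```cpp
#include <cstdio>
#include <memory>

struct Node {
    std::shared_ptr<Node> other;      // strong reference in both directions => cycle
    ~Node() { std::puts("~Node"); }   // never runs for the leaking pair below
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->other = b;
    b->other = a;   // reference counts can now never drop to zero

    std::printf("a.use_count() = %ld\n", a.use_count());   // 2: local 'a' plus b->other

    // The fix would be to make one direction a std::weak_ptr<Node>.
}   // 'a' and 'b' go out of scope, but neither Node is destroyed: a leak
```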
Disclaimer and warning: I work for Micro Focus, owner of the DevPartner Studio and BoundsChecker products.
BoundsChecker 10.5, part of DevPartner Studio 10.5 (though you can buy it by itself), supports Visual Studio 2005, 2008 and 2010 unmanaged code for 32 and 64 bit applications in essentially the same way it supported 32 bit applications on Visual Studio 6.0. While enhancing it to support X64 applications, we found and fixed quite a few very old problems, and made a start at working in spite of the .NET 4.0 code present in some VS 2010 applications. I say "in spite of" because .NET 4.0 turns out to do a lot of very nasty things in the process space, doing some things that Microsoft warns everybody else not to do, and has a certain amount of built-in resistance to tools like BoundsChecker, which are essentially gigantic viruses.
Anyway, since that release (February 4th), we have updated it to work on Windows 7 SP1 (which isn't quite public yet), and as far as BoundsChecker is concerned, we work with Visual Studio 2010 SP1 as well. We also discovered a nasty .NET 4.0 trap and figured out how to prevent it from taking us down. These enhancements and fixes will be available in our next public update, hopefully within the next month or so.
I have a massive application (here at work), and the new BoundsChecker 10.5 (it supports 64-bit apps now) pretty much works with it. The trick, Max, is not to turn on all of DevPartner BoundsChecker's features at once. Turn on just memory leaks, or just some other feature, then run your app. And by all means exclude modules you don't need. There are quite a few settings you can tune so it goes faster. But yes, it does take a performance hit. That is the name of the ballgame.
Intel's Parallel Inspector gave us thousands and thousands of false positives. We didn't use it.
Purify only works on 32-bit apps, i.e. small 32-bit native apps. Forget about using it with a managed C++ app.
And just for the record, if you have a large 32-bit app, memory-analysis tools in general won't help much, because of their massive memory overhead. Since you have very limited memory in a 32-bit address space, you quickly run out of room and the tools fail.
We evaluated Boundschecker, Intel's Inspector and Purify.
They were all more or less crap.
For our main application, BoundsChecker would not start it even after many hours; it only worked for a couple of smaller applications, but it did find a couple of things (I think we're still in contact with them to figure things out).
Intel's Inspector works, but it does not instrument the code; it runs on the executable only (it may work better when used with the whole suite of Intel products).
Purify failed miserably; we were never able to use it.
We're still in limbo about that.
Max.
BoundsChecker: I just bought a (&(^ subscription which only entitles me to use the damned product for 99 days, so I'm pretty damned upset about that. But anyhow, I was having big memory troubles and thought I ought to run this thing. It seems to catch lots of interesting things, but it is so damned slow that, well, put it this way: my application is still in the DLL init code; it has been running for at least a couple of hours, and so far it hasn't even gotten as far as the app normally does in the first COUPLE OF SECONDS. BoundsChecker used to be 'the shit' back in the NuMega days, but it seems like it is really another technological orphan being peddled by an opportunistic business entity, like Borland compilers.
So I really like it when it works; it has lots of great info. I just need to see whether I will actually be able to get any decent results. It is currently using 4+ GB of RAM and hasn't even started up fully yet. Since I use Win7/64 with a crippled Home edition which will only recognize 12 GB, I may run out of memory before anything really interesting happens. And that will be sometime a few days from now...

Decent profiler for Windows? [duplicate]

This question already has answers here: What are some good profilers for native C++ on Windows? [closed]
Does Windows have any decent sampling (i.e. non-instrumenting) profilers available? Preferably something akin to Shark on Mac OS, although I am willing to accept that I am going to have to pay for such a profiler on Windows.
I've tried the profiler in VS Team Suite and was not overly impressed, and was wondering if there were any other good ones.
[Edit: Erk, I forgot to say this is for C/C++, rather than .NET -- sorry for any confusion]
For Windows, check out the free Xperf that ships with the Windows SDK. It uses sampled profiling, has some useful UI, and does not require instrumentation. Quite useful for tracking down performance problems. You can answer questions like:
Who is using the most CPU? Drill down to function name using call stacks.
Who is allocating the most memory?
Outstanding memory allocations (leaks)
Who is doing the most registry queries?
Disk writes? etc.
I know I'm adding my answer months after this question was asked, but I thought I'd point out a decent, open-source profiler: Very Sleepy.
It doesn't have the feature count that some of the other profilers mentioned before do, but it's a pretty respectable sampling profiler that will work very well in most situations.
Intel VTune is good and is non-instrumenting. We evaluated a whole bunch of profilers for Windows, and this was the best for working with driver code (though it handles unmanaged user-level code as well). A particular strength is that it reads all the Intel processor performance counters, so you can get a good understanding of why your code is running slowly. It was useful for putting prefetch instructions into our code and for sorting out data layout to work well with the cache lines, and with the way cache lines get invalidated in multi-core systems.
It is commercial, and I have to say it isn't the easiest UI in the world.
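To give one small, synthetic illustration (not taken from the answer above) of the "data layout vs. cache lines" kind of change that a counter-reading profiler such as VTune tends to point you toward:

```cpp
#include <vector>

// Array-of-structs: each 'mass' sits 56 bytes from the next, so summing masses
// drags whole cache lines of positions and velocities through the cache.
struct Particle {
    double pos[3], vel[3], mass;
};

double totalMassAoS(const std::vector<Particle>& ps) {
    double sum = 0.0;
    for (const Particle& p : ps) sum += p.mass;
    return sum;
}

// Struct-of-arrays: the masses are packed contiguously, so the same loop reads
// sequential, fully used cache lines (and vectorizes more easily).
struct Particles {
    std::vector<double> posX, posY, posZ, velX, velY, velZ, mass;
};

double totalMassSoA(const Particles& ps) {
    double sum = 0.0;
    for (double m : ps.mass) sum += m;
    return sum;
}

int main() {
    std::vector<Particle> aos(1000, Particle{});
    Particles soa;
    soa.mass.assign(1000, 1.0);
    return (totalMassAoS(aos) + totalMassSoA(soa) > 0.0) ? 0 : 1;
}
```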
AMD's CodeAnalyst is FREE here
We use both VTune and AQTime, and I can vouch for both. Which works best for you depends on your needs. Both have free trial versions - I suggest you give them a go.
The Windows Driver Kit includes a non-instrumenting user/kernel sampling profiler called "kernrate". It seems useful for profiling multi-process applications, applications that spend most of their time in the kernel, and device drivers (of course). It's also available in the KrView (Kernrate Viewer) and Windows Server 2003 Resource Kit Tools packages.
Kernrate works on Windows 2000 and later (unlike Xperf, which requires Vista / Server 2008). It's command-line based and the documentation has a somewhat intimidating list of options. I'm not sure if it can record call stacks or just the program counter. If you use a symbol server, make sure to put an up-to-date dbghelp.dll and symsrv.dll in the same directory as kernrate.exe to prevent it from using the ancient version of dbghelp.dll that is installed in %SystemRoot%\system32.
I tried Intel's VTune with a rather large project about two years ago. It was an instrumenting profiler then, and it took so long to instrument the DLL I was attempting to profile that I eventually lost patience after an hour.
The one tool I have had quite good success with, and which I would highly recommend, is AQTime. It not only provides excellent performance-profiling resources, it also does really good memory profiling, which has been of significant help to me in tracking down memory leaks.
Luke Stackwalker seems promising -- it's not as polished as I'd like, but it is open source and it does do something that seems very close to what @Mike Dunlavey keeps saying we ought to do. (Of course, it then tries to smoosh it all down into the typically unhelpful call graphs that Mike is so weary of, but it shouldn't be too hard to fix that with the source as our ally.)
It even seems to count time spent waiting in the kernel, as far as I can tell...
I'm not sure what a non-instrumenting profiler is, but I can say for .NET I love RedGate's ANTS Profiler. Version 3 beats the MS version for ease of use and Version 4, which allows arbitrary time slices, makes MS look like a joke.

What is the big deal with BUILDING 64-bit versions of binaries?

There are a ton of drivers and famous applications that are not available in 64-bit. Adobe, for instance, does not provide a 64-bit Flash player plugin for Internet Explorer. Because of that, even though I am running 64-bit Vista, I have to run 32-bit IE. Microsoft Office and Visual Studio also don't ship in 64-bit, AFAIK.
Now personally, I haven't had much problems building my applications in 64-bit. I just have to remember a few rules of thumb, e.g. always use SIZE_T instead of UINT32 for string lengths etc.
So my question is, what is preventing people from building for 64-bit?
If you are starting from scratch, 64-bit programming is not that hard. However, all the programs you mention are not new.
It's a whole lot easier to build a 64-bit application from scratch than to port it from an existing code base. There are many gotchas when porting, especially when you get into applications where some level of optimization has been done. Programmers rely on lots of little assumptions to gain speed, and these are not always easy to port quickly to 64-bit. A few examples I've had to deal with (see the sketch after this list):
Proper alignment of elements within a struct. As data sizes change, assumptions that certain fields in a struct will be aligned on an optimal memory boundary may fail
The length of long integers changes, so if you pass values over a socket to another program that may not be 64-bit, you need to refactor your code
Pointer lengths change, so hard-to-decipher code written by a guru who has since left the company becomes a little trickier to debug
Underlying libraries will also need 64-bit support to link properly. This is a large part of the problem of porting code if you rely on any libraries that are not open source
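A minimal sketch of the first two points (hypothetical record types, not from any real code base): field sizes and struct layout shift between 32- and 64-bit builds, which is why anything that goes over a socket or into a file should use fixed-width types and no pointers.

```cpp
#include <cstdint>
#include <cstdio>

// Layout silently changes across builds: 'length' is 4 bytes on 64-bit Windows
// (LLP64) but 8 bytes on 64-bit Linux (LP64); 'payload' grows from 4 to 8 bytes
// everywhere, and the padding around 'tag' shifts with it.
struct LegacyRecord {
    char  tag;
    long  length;
    void* payload;
};

// Safe to put on the wire: explicitly sized fields, no pointers, no surprises.
struct WireRecord {
    char     tag;
    uint32_t length;
};

int main() {
    std::printf("sizeof(long)=%zu  sizeof(void*)=%zu\n", sizeof(long), sizeof(void*));
    std::printf("sizeof(LegacyRecord)=%zu  sizeof(WireRecord)=%zu\n",
                sizeof(LegacyRecord), sizeof(WireRecord));
}
```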
In addition to the things in @jvasak's post, the major thing that can cause bugs:
pointers are larger than ints - a huge amount of code makes the assumption that the sizes are the same.
Remember that Windows will not even allow an application (whether 32-bit or 64-bit) to handle pointers that have an address above 0x7FFFFFFF (2GB or above) unless they have been specially marked as "LARGE_ADDRESS_AWARE" because so many applications will treat the pointer as a negative value at some point and fall over.
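A tiny sketch of that assumption going wrong (synthetic example): code that smuggles a pointer through an int or unsigned int truncates it on a 64-bit build, while intptr_t/uintptr_t round-trip it safely.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    int  value = 42;
    int* p = &value;

    // Classic 32-bit-era habit: stuffing the pointer into a 32-bit integer.
    // On a 64-bit build this silently loses the upper half of any address above 4 GB.
    unsigned int narrow = static_cast<unsigned int>(reinterpret_cast<uintptr_t>(p));

    // Correct: uintptr_t (or intptr_t) is guaranteed wide enough to hold a pointer.
    uintptr_t wide = reinterpret_cast<uintptr_t>(p);

    std::printf("sizeof(unsigned int)=%zu  sizeof(uintptr_t)=%zu\n",
                sizeof(unsigned int), sizeof(uintptr_t));
    std::printf("round-trips: %d\n", reinterpret_cast<int*>(wide) == p);
    (void)narrow;
}
```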
The biggest issue I've run into porting our C/C++ code to 64-bit is support from 3rd-party libraries. E.g., there are currently only 32-bit versions of the Lotus Notes API and of MAPI, so you can't even link against them.
Also, since you can't load a 32-bit DLL into your 64-bit process, you get burnt again trying to load things dynamically. We ran into this problem trying to support Microsoft Access under 64-bit. From Wikipedia:
The Jet Database Engine will remain 32-bit for the foreseeable future. Microsoft has no plans to natively support Jet under 64-bit versions of Windows.
Just a guess, but I would think a large part of it would be support: if Adobe compiles the 64-bit version, they have to support it. Even though it may be a simple compile switch, they'd still have to run through a lot of testing, train their support staff to respond correctly, and, when issues do come up, fixing them means either a new version of the 32-bit binary or a branch in the code. So while it seems simple, for a large application it can still end up costing a lot.
Another reason that a lot of companies have not gone through the effort of creating 64 bit versions is simply they don't need to.
Windows has WoW64 (Windows on Windows 64 bit) and Linux can have the 32 bit libraries available alongside the 64 bit. Both of these allow us to run 32 bit applications in 64 bit environments.
As long as the software is able to run in this way, there is not a major incentive to convert to 64 bit.
Exceptions to this are things such as device drivers, as they are tied in more deeply with the operating system and cannot run in the 32-bit layer that x86-64/AMD64-based 64-bit operating systems offer (IA64 is unable to do this, from what I understand).
I agree with you on Flash player, though; I am very disappointed that Adobe has not updated this product. As you have pointed out, it does not work properly in 64-bit, requiring you to run the 32-bit version of Internet Explorer.
I think it is a strategic mistake on Adobe's part. Having to run the 32-bit browser for Flash player is an inconvenience for users, and many will not understand this workaround. This could lead to developers becoming apprehensive about using Flash. The most important thing for a web site is to make sure everyone can view it; solutions that alienate users are typically not popular ones. Flash's popularity was fed by its own popularity: the more sites that used it, the more users had it on their systems, and the more users that had it on their systems, the more sites were willing to use it.
The retail market pushes these things forward: when general consumers go to buy a new computer, they aren't going to know whether they need a 64-bit OS; they are going to get it either because they hear it is the latest and greatest thing, the future of computing, or just because they don't know the difference.
Vista has been out for about two years now, and Windows XP 64-bit was out before that. In my mind that is too long for a major technology such as Flash to go without an upgrade if they want to hold on to their market. It may have to do with Adobe taking over Macromedia, and it may be a sign that Adobe does not feel Flash is part of their future. I find that hard to believe, as I think Flash and Dreamweaver were the most valuable parts of what they got out of Macromedia; but then why have they not updated it yet?
It is not as simple as just flipping a switch on your compiler. At least, not if you want to do it right. The most obvious example is that any integer type you use to hold a pointer must be 64 bits wide. If you have any code which makes assumptions about the size of pointers (e.g. a data structure that allocates 4 bytes of memory per pointer), you'll need to change it. All this needs to have been done in any libraries you use, too. Further, if you miss just a few spots, you'll end up with pointers being down-cast and landing in the wrong place. Pointers are not the only sticky point, but they are certainly the most obvious.
Primarily a support and QA issue. The engineering work to build for 64-bit is fairly trivial for most code, but the testing effort, and the support cost, don't scale down the same way.
On the testing side, you'll still have to run all the same tests, even though you "know" they should pass.
For a lot of applications, converting to a 64-bit memory model doesn't actually give any benefit (since they never need more than a few GB of RAM), and it can actually make things slower, due to the larger pointer size (every pointer-sized field doubles in size).
Add to that the lack of demand (due to the chicken/egg problem), and you can see why it wouldn't be worth it for most developers.
Adobe's Linux/Flash blog goes some way toward explaining why there isn't a 64-bit Flash Player yet. Some of it is Linux-specific, some is not.

How to measure performance in a C++ (MFC) application?

What good profilers do you know?
What is a good way to measure and tweak the performance of a C++ MFC application?
Is analysis of algorithms really necessary? http://en.wikipedia.org/wiki/Algorithm_analysis
I strongly recommend AQTime if you are staying on the Windows platform. It comes with a load of profilers, including static code analysis, and works with most important Windows compilers and systems, including Visual C++, .NET, Delphi, Borland C++, Intel C++ and even gcc. And it integrates into Visual Studio, but can also be used standalone. I love it.
If you're (still) using Visual C++ 6.0, I suggest using the built-in profiler. For more recent versions you could try Compuware DevPartner Performance Analysis Community Edition.
For Windows, check out Xperf, which ships free with the Windows SDK. It uses sampled profiling, has some useful UI, and does not require instrumentation. Quite useful for tracking down performance problems. You can answer questions like:
Who is using the most CPU? Drill down to function name using call stacks.
Who is allocating the most memory?
Who is doing the most registry queries?
Disk writes? etc.
You will be quite surprised when you find the bottlenecks, as they are probably not where you expected!
It's been a while since I profiled unmanaged code, but when I did I had good results with Intel's vtune. I'm sure somebody else will tell us if that's been overtaken.
Algorithmic analysis has the potential to improve your performance more profoundly than anything you'll find with a profiler, but only for certain classes of application. If you operate over reasonably large sets of data, algorithmic analysis might find ways to be more efficient in CPU/Memory/both, but if your app is mainly form-fill with a relational database for storage, it might not offer you much.
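As a small, self-contained example of the kind of win algorithmic analysis gives you (synthetic code, not anyone's real application): a profiler will show you which function is hot, but it takes complexity analysis to see that the fix is a different algorithm rather than a faster inner loop.

```cpp
#include <cstddef>
#include <cstdio>
#include <unordered_set>
#include <vector>

// O(n^2): fine for a handful of items, dominates the profile for large inputs.
int countDuplicatesQuadratic(const std::vector<int>& v) {
    int dups = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = 0; j < i; ++j)
            if (v[i] == v[j]) { ++dups; break; }
    return dups;
}

// O(n): same result, different complexity class -- the kind of change no amount
// of micro-tuning the quadratic loop can match.
int countDuplicatesLinear(const std::vector<int>& v) {
    std::unordered_set<int> seen;
    int dups = 0;
    for (int x : v)
        if (!seen.insert(x).second) ++dups;
    return dups;
}

int main() {
    std::vector<int> data = {1, 2, 3, 2, 4, 1, 5};
    std::printf("%d %d\n", countDuplicatesQuadratic(data), countDuplicatesLinear(data));  // 2 2
}
```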
Intel Thread Checker via the VTune Performance Analyzer. Check this picture for the view I use the most; it tells me which function eats up most of my time.
I can drill down further and decompose which functions within them eat up more time, and so on. There are different views based on what you are watching: total time (time within the function plus its children), self time (time spent only in code running inside the function itself), etc.
This tool does a lot more than profiling, but I haven't explored it all. I would definitely recommend it. The tool is also available for download as a fully functional trial version that runs for 30 days. If you have cost constraints, I would say that trial window is all you need to pinpoint your problem.
Trial download here - https://registrationcenter.intel.com/RegCenter/AutoGen.aspx?ProductID=907&AccountID=&ProgramID=&RequestDt=&rm=EVAL&lang=
PS: I have also played with Rational's tool, but for some reason I did not take to it much. I suspect Rational might be more expensive than Intel, too.
Tools (like TrueTime from DevPartner) that let you see hit counts for source lines let you quickly find algorithms that have bad 'Big O' complexity. You still have to analyse the algorithm to determine how to reduce the complexity.
I second AQTime, having both AQTime and Compuware's DevPartner, for most cases. The reason is that AQTime will profile any executable that has a valid PDB file, whereas TrueTime requires you to make an instrumented build. This greatly speeds up and simplifies ad hoc profiling. DevPartner is also quite a bit more expensive, if that is an issue. Where DevPartner comes into its own is with BoundsChecker, which I still rate as a better tool for catching leaks and overwrites than AQTime's execution profiler. TrueTime can be slightly more accurate than AQTime, but I have never found this to be an issue.
Is profiling worthwhile? IMO yes, if you need performance gains in a local application. I think you also learn a lot about how your program and algorithms really work, and the cost implications of using certain types of object classes for storing and iterating through your data.
Glowcode is a very nice profiler (when it works). It can attach to a running program and requires only symbol files - you don't need to rebuild.
Some versions of Visual Studio 2005 (and maybe 2008) actually come with a pretty good performance profiler. If you have it, it should be available under the Tools menu, or you can search for a way to open the "Performance Explorer" window to start a new performance session.
A link to MSDN
FYI, some versions of Visual Studio only come with a non-optimizing compiler. For one of my MFC apps, if I compile it with MinGW/MSYS (the gcc compiler) with -O3, it runs about 5-10x as fast as my release build from Visual Studio.
For example, I have an OpenStreetMap XML compiler, and the gcc-compiled version takes about 3 minutes to process a 2.7 GB XML file. My Visual Studio build of the same code takes about 18 minutes to run.