Minix vs Linux for Learning Operating System Design? - c++

I want to learn operating system design, and I was wondering whether I should tackle Minix or GNU/Linux in the process. I like books, so I would mainly follow a book, though video resources (presumably recorded lectures) would also be welcome.
I have formally studied C and C# and can program small to medium sized programs in them. I also have a very basic understanding of data structures.
If I take the Minix route, should I tackle version 2 (presumably simpler?) or version 3?

I would go for the Minix route, just because of my personal experience with it. Minix is very straightforward and was written from an educational point of view. The Linux kernel, on the other hand, has been around for so long that it is heavily optimized; I do not think it is a good starting point.
I wouldn't worry too much about which Minix version; the concepts remain the same. With the newer versions you are able to run X, which can be helpful, but at the same time adds more complexity. Just go with the version you can find a good book for.

Operating Systems: Design and Implementation covers Minix, so this might be a good argument in favor of Minix.
Without having touched this topic myself: Linux is rather large (last time I checked, 10+ million lines, though of course you would not have to study all of it), while Minix uses a microkernel architecture with separate modules, so it might be easier to grasp.
I would go for Minix.
(On the other hand, O'Reilly has a number of books on Linux; but I think I would still go with Minix, with that fat book as the reference.)
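Since "microkernel with separate modules" can sound abstract, here is a toy sketch of the idea, entirely illustrative and not actual Minix code or APIs: the kernel only routes messages, while services such as the file system live in separate modules that it dispatches to.

```cpp
#include <functional>
#include <map>
#include <queue>
#include <string>

// Toy model of the microkernel idea: the kernel does nothing but route
// messages; servers (file system, drivers, ...) are separate modules.
// All names and types here are hypothetical, for illustration only.
struct Message { int from, to; std::string body; };

class Kernel {
    std::map<int, std::function<std::string(const Message&)>> servers;
    std::queue<Message> inbox;
public:
    void register_server(int pid, std::function<std::string(const Message&)> h) {
        servers[pid] = std::move(h);
    }
    void send(const Message& m) { inbox.push(m); }
    // Deliver the next queued message and return the server's reply.
    std::string deliver() {
        Message m = inbox.front(); inbox.pop();
        return servers.at(m.to)(m);   // the kernel just dispatches
    }
};
```

In a real microkernel the servers run as separate processes with address-space isolation; here they are plain callbacks, which is exactly the simplification that makes this a sketch rather than an OS.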

At an internship, I had to change the hard drive driver in Minix so that it served requests using the elevator algorithm instead of first-come-first-served. I was supposed to do it in Minix 2, but I wanted to do it in Minix 3, because I never like using old technologies.
In the two months I was working on it, the most frustrating thing was that Minix 3 took about 20 minutes to compile in VMware on a laptop with an i5 processor and 4 GB of RAM running Windows 7. Finally, after two months, I gave up on Minix 3 and switched to Minix 2, which compiled in about 20 seconds.
Now I'm not saying there couldn't have been something very wrong about how I was compiling the system, but I was trying really hard to speed it up with no success.
Let me just say that at the time I had just received my Master's degree in computer science and had five years of intensive experience programming in C (just so you don't think I'm a self-taught programmer who decided to jump into programming by redesigning an operating system :D )
EDIT: In the end, I suggest you try compiling Minix 3 to see how it goes for you. If you have more luck, definitely go with that one, because it covers more modern OS concepts. On the other hand, if you are a complete beginner, you'll probably learn tons from Minix 2. I did.
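For readers unfamiliar with the scheduling change described above, here is a hedged sketch of the elevator (SCAN) ordering: starting from the current head position and moving upward, serve all requests at higher cylinders in ascending order, then sweep back down through the rest. This is an illustration of the algorithm, not Minix's actual driver code.

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Order pending cylinder requests the way an elevator would, given the
// current head position. FCFS would instead serve them in arrival order.
std::vector<int> elevator_order(std::vector<int> requests, int head) {
    std::sort(requests.begin(), requests.end());
    // Split the sorted requests at the head position.
    auto split = std::lower_bound(requests.begin(), requests.end(), head);
    std::vector<int> order(split, requests.end());        // upward sweep
    order.insert(order.end(),
                 std::make_reverse_iterator(split),       // then back down
                 std::make_reverse_iterator(requests.begin()));
    return order;
}
```

For the classic textbook queue {98, 183, 37, 122, 14, 124, 65, 67} with the head at 53, this yields 65, 67, 98, 122, 124, 183, then 37, 14: one sweep up, one sweep down, far less head travel than FCFS.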

As other posters have said, starting with Linux can be difficult because it is now so large and complex that the barrier to entry has sky-rocketed. But, if you do choose this route, I recommend starting with one small subsystem and focusing on that.

There are other possibilities: FreeBSD, or even GNU/Hurd (or even your own toy kernel). It also depends on what you really want to learn.
If you know Linux and want to learn how to write drivers, writing your own Linux kernel driver module is sensible.
It also depends upon your precise definition of "operating system"; this is not necessarily the same as an OS kernel.
Not all operating systems are Unix-like, e.g. Coyotos, Kangaroo, ... See also tunes.org.
J. Pitrat's book Artificial Beings (The Conscience of a Conscious Machine) has interesting insights on what an OS could be.

Related

What is the current status of C++ AMP?

I am working on high performance code in C++ and have been using both CUDA and OpenCL and more recently C++AMP, which I like very much. I am however a little worried that it is not being developed and extended and will die out.
What leads me to this thought is that even the MS C++AMP blogs have been silent for about a year. Looking at the C++ AMP algorithms library http://ampalgorithms.codeplex.com/wikipage/history it seems nothing at all has happened for over a year.
The only development I have seen is that LLVM now sort of supports C++ AMP, so it is no longer Windows-only, but that is all, and it has not been widely publicized.
What kind of work is going on, if any, that you know of?
I used to work on the C++ AMP algorithms library. After the initial release, which Microsoft put together, I built a number of additional features and ported it to newer versions of VS. It seemed like there was a loss of momentum around C++ AMP, and I have no plans to do further work on the project.
Make of this what you will. Perhaps someone from Microsoft can clarify things?
I've found that AMD is still using C++ AMP:
http://developer.amd.com/community/blog/2015/09/15/programming-models-for-heterogeneous-systems/
http://developer.amd.com/community/blog/2015/01/19/bolt-1-3-whats-new/
and there are some forum references where Intel mentions it too.
The main thing I see is that we programmers are finally starting to play with the idea that we can use the GPU for ordinary tasks as well. Especially now that HBM is coming to the APUs, you could do a lot on a relatively cheap system.
So no copying of data to the graphics card or back to main memory: keep it in a big HBM "cache" where it can be accessed in "real time", i.e. without transfer latency.
So Microsoft built a really, really nice technology that will become relevant only in the next few years, i.e. when the hardware is finally "user friendly".
But the thing could become obsolete if they don't advance as others do. Not that something wouldn't work in C++ AMP, but the pace of change is so fast lately that programmers won't risk starting to use it unless they see some advancement: at least a blog post or two per year where Microsoft tests something with it, so you can see they still believe in it.
FWIW, we are also using C++ AMP in the financial world. Very successful, and relatively easy to code. CUDA is probably a safer choice, but if anyone is considering learning AMP, I suggest brushing up on your basic STL first, then reading up on array views.
I'm still using AMP. Right now I'm making a GPU path tracer, hopefully for use in games.
It seems that AMP doesn't have much documentation at the moment, or many new updates, sadly. It's definitely something I would like to see updated and used more, but it seems dead.

Microcontroller programming

I'm working on a robotic arm project along with some engineers. We haven't settled on the microcontroller of choice yet, but currently a PIC is being tested. I was wondering if there are micros that support C++?
Background:
I'm a (Java) software developer and a beginner in embedded systems, currently programming using the MikroElektronika IDE and C.
AVR, MSP430, Blackfin, almost anything 32-bit (ARM, AVR32, the Renesas RX family).
If you are starting from nothing, an ARM is probably the best way to go. Atmel, NXP, TI and others have single chip ARM microcontrollers with inexpensive development kits.
I know you're asking for C++, but I just got a netduino that runs C# (very similar in syntax and concept to Java) and I'm loving it.
The whole dev board (which in many aspects is compatible with readily available arduino shields) costs less than 40 bucks.
I would add to hexa's answer that for ARM llvm is also a good compiler (I use binutils to assemble and link).
Going bare-metal with C++ is not optimal for a number of reasons, simply because you are not running on top of an operating system and, to name one, dynamic memory allocation simply doesn't exist. No new, no malloc. I don't mean you CAN'T go C++, but I would refrain.
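To illustrate the constraint that answer describes, here is a minimal sketch of the static-allocation style often used in place of a heap on bare metal: objects are constructed with placement new inside a fixed static buffer. The Sensor type and pool size are purely hypothetical.

```cpp
#include <cstddef>
#include <new>  // placement new

// Hypothetical peripheral-driver object, standing in for whatever your
// firmware actually needs to construct.
struct Sensor {
    int id;
    explicit Sensor(int i) : id(i) {}
};

// Bare-metal convention: carve objects out of a static buffer instead of
// calling new/malloc, which may be unavailable without an OS-provided heap.
alignas(Sensor) static unsigned char pool[sizeof(Sensor) * 4];
static std::size_t used = 0;

Sensor* make_sensor(int id) {
    if (used + sizeof(Sensor) > sizeof(pool))
        return nullptr;                         // pool exhausted
    Sensor* s = new (pool + used) Sensor(id);   // placement new: no heap
    used += sizeof(Sensor);
    return s;
}
```

Exhaustion is reported with a null pointer rather than an exception, which matches the no-runtime-support spirit of the original remark; a real firmware pool would also need to handle alignment for mixed types and object destruction.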
I've used Mikroe C for PICs, it's ok but I'd go with MPLAB, just a matter of personal taste.
If you wanna go ARM, go GCC.
Why don't you try the mbed platform? It's an open-source Arduino-like board which I consider to be more powerful. It is programmed in C/C++, and the good part is that there are literally thousands of APIs you can use in your project.
Hope this helps
https://mbed.org/

Porting ActionScript to C++ - has anyone created any instructions on this topic?

Porting ActionScript to C++ - has anyone created any instructions on this topic? I want to try to port Papervision3D to C++ and then port it back using Alchemy. What do you think? Is it possible?
1) Why do I want to port PV3D? It is fast. It is simple. I know and like it. It could spark a new leap of PV3D interest. It would probably beat the current Alternativa 7.5, if g++ and LLVM can optimize code as well as they say they can.
2) As far as I know, there is a way to create real working SWFs from C/C++ using the Alchemy libs, compiling into an SWF, so the whole event model and display list are probably already there. (Proof: a link to a video on Adobe TV from MAX 2008.)
It's not completely unreasonable to port ActionScript to C++; however, what you will be missing is all the support code that Flash supplies you with. You'd have to reimplement the display list, event dispatching, and so on.
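As a taste of the support code you would have to rebuild, here is a minimal, purely illustrative C++ sketch of a Flash-style event dispatcher (addEventListener / dispatchEvent); real ActionScript semantics (capture and bubble phases, propagation through the display list) are far more involved.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Bare-bones stand-in for flash.events.EventDispatcher. Listeners are keyed
// by event type; dispatching an event calls each listener in order.
class EventDispatcher {
    std::map<std::string,
             std::vector<std::function<void(const std::string&)>>> listeners;
public:
    void addEventListener(const std::string& type,
                          std::function<void(const std::string&)> fn) {
        listeners[type].push_back(std::move(fn));
    }
    void dispatchEvent(const std::string& type) {
        for (auto& fn : listeners[type])
            fn(type);   // no capture/bubble phases, no Event objects
    }
};
```

Multiply this by the display list, timeline, and rendering pipeline, and the scale of "reimplement the support code" becomes clear.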
Disregarding that, I wouldn't recommend porting Papervision: it has been more than a year since the last update, and the lead developer has left the project. If anything, I'd recommend looking into the considerably more "alive" Away3D.
Thirdly, the "Molehill" version of the Flash Player will have support for proper hardware-accelerated 3D (and a software compatibility layer), making your porting efforts rather pointless within a few months.
All in all. Don't do it.
Don't. Even if your port were successful, all you would do is translate ActionScript 3 to C++ and back to ActionScript 3. So you'd end up with just about the same code you had in the first place, or possibly even worse, since you'd have a second translation layer you have little or no influence on.
It would likely be more productive to try to improve the original papervision3d source code, although I wouldn't expect great performance leaps.

Are there any Netbooks powerful enough for moderate C++ compilation?

I've tried a few Asus Ones, and found that even switching between multiple windows could take seconds. Is there anything powerful enough in that form factor for C++ programmers to build small to moderate size projects?
I also give it a qualified yes.
What OS you use may matter a lot. I have Kubuntu on an HP 2140 netbook with only 1 GB of RAM and the usual Atom N270 CPU, and it is actually rather snappy for window or desktop switches under KDE 4.3.
Compile times are OK, but I am spoiled by better machines at the office and at home. I got this for the form factor, and I take it with me while commuting. I mostly edit, write docs, and so on while I am on the train, then commit back to SVN at the other end. That works well for me, including the occasional make or make check.
It depends on what compiler and editor/IDE you decide to use. The wimpiest Netbook is still a killer machine compared to what we used 20 or even 10 years ago. One of the easiest routes to better performance is to use an older editor/IDE (the compiler itself will probably be all right). Of course, we expected slower compilation back then too, but even so a minute to switch between windows would have been excessive.
Perhaps the HP Mini note? Amazon Link
You could also try compiling under the nice command, which lowers the build's scheduling priority so the intensive work mostly happens when you're not otherwise using the computer.
I have an eeepc and it's okay for compilation. I definitely wouldn't want to compile a complete Boost build on it. It works, but you're kind of slumming it. P4 speed, slow hard drive, small amount of RAM... less everything.
A low-quality netbook has more resources and capabilities than my development workstations did a decade ago. What you will need, however, is RAM. If you try to do too many things at a time on one of these things you will swap like crazy because modern software and modern operating systems are written by lazy, slack developers who think RAM is a limitless resource. Alternatively you can boost your RAM. My wife's netbook got boosted to 1GB before the unit was even taken home and it's not at all bad.

Decent profiler for Windows? [duplicate]

This question already has answers here:
What are some good profilers for native C++ on Windows? [closed]
(8 answers)
Closed 9 years ago.
Does Windows have any decent sampling (i.e. non-instrumenting) profilers available? Preferably something akin to Shark on Mac OS, although I am willing to accept that I am going to have to pay for such a profiler on Windows.
I've tried the profiler in VS Team Suite and was not overly impressed, and was wondering if there were any other good ones.
[Edit: I forgot to say this is for C/C++, rather than .NET; sorry for any confusion]
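For anyone unsure what "sampling" means here: instead of instrumenting every function call, a sampling profiler periodically records where the program is and builds a statistical picture of where time goes. The deterministic toy below fakes the "where" with a label; real samplers such as Shark or Xperf capture the program counter and call stack through OS facilities.

```cpp
#include <map>
#include <string>

// Toy model of sampling: pretend the program spends 9 of every 10 ticks in
// a hot loop, take one "sample" per tick, and tally samples per location.
// The percentages then point at the bottleneck without any instrumentation.
std::map<std::string, int> profile() {
    std::map<std::string, int> samples;
    for (int tick = 0; tick < 1000; ++tick) {
        const std::string where = (tick % 10 != 0) ? "hot_loop" : "setup";
        ++samples[where];   // record one sample at this "program location"
    }
    return samples;         // hot_loop dominates the histogram
}
```

This is also why sampling is cheap and non-intrusive: the program runs unmodified, and accuracy comes from taking enough samples rather than from hooking every call.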
For Windows, check out the free Xperf that ships with the Windows SDK. It uses sampled profiling, has some useful UI, and does not require instrumentation. Quite useful for tracking down performance problems. You can answer questions like:
Who is using the most CPU? Drill down to the function name using call stacks.
Who is allocating the most memory?
Who has outstanding memory allocations (leaks)?
Who is doing the most registry queries?
Who is doing the most disk writes? And so on.
I know I'm adding my answer months after this question was asked, but I thought I'd point out a decent, open-source profiler: Very Sleepy.
It doesn't have the feature count that some of the other profilers mentioned before do, but it's a pretty respectable sampling profiler that will work very well in most situations.
Intel VTune is good and is non-instrumenting. We evaluated a whole bunch of profilers for Windows, and this was the best for working with driver code (though it does unmanaged user level code as well). A particular strength is that it reads all the Intel processor performance counters, so you can get a good understanding of why your code is running slowly, and it was useful for putting prefetch instructions into our code and sorting out data layout to work well with the cache lines, and the way cache lines get invalidated in multi core systems.
It is commercial, and I have to say it isn't the easiest UI in the world.
AMD's CodeAnalyst is available for free.
We use both VTune and AQTime, and I can vouch for both. Which works best for you depends on your needs. Both have free trial versions - I suggest you give them a go.
The Windows Driver Kit includes a non-instrumenting user/kernel sampling profiler called "kernrate". It seems useful for profiling multi-process applications, applications that spend most of their time in the kernel, and device drivers (of course). It's also available in the KrView (Kernrate Viewer) and Windows Server 2003 Resource Kit Tools packages.
Kernrate works on Windows 2000 and later (unlike Xperf, which requires Vista / Server 2008). It's command-line based and the documentation has a somewhat intimidating list of options. I'm not sure if it can record call stacks or just the program counter. If you use a symbol server, make sure to put an up-to-date dbghelp.dll and symsrv.dll in the same directory as kernrate.exe to prevent it from using the ancient version of dbghelp.dll that is installed in %SystemRoot%\system32.
I tried Intel VTune with a rather large project about two years ago. It was an instrumenting profiler then, and it took so long to instrument the DLL I was attempting to profile that I eventually lost patience after an hour.
The one tool I have had quite good success with, and which I would highly recommend, is AQtime. It not only provides excellent performance profiling, but it also does really good memory profiling, which has been a significant help to me in tracking down memory leaks.
Luke Stackwalker seems promising. It's not as polished as I'd like, but it is open source, and it does something that seems very close to what @Mike Dunlavey keeps saying we ought to do. (Of course, it then tries to smoosh it all down into the typically-unhelpful call graphs that Mike is so weary of, but it shouldn't be too hard to fix that with the source as our ally.)
It even seems to count time spent waiting in the kernel, as far as I can tell...
I'm not sure what a non-instrumenting profiler is, but I can say that for .NET I love Red Gate's ANTS Profiler. Version 3 beats the MS version for ease of use, and Version 4, which allows arbitrary time slices, makes MS look like a joke.