This is just a general question - I was sitting and waiting for a bit of software to compile (we use Incredibuild here, but it can still take 10-15 minutes) and it got me wondering: does anyone know how long it took to compile Windows XP or Vista?
I did some googling but didn't really find any useful information.
OP is asking about Windows:
"There are no other software projects
like this," Lucovsky said, "but the
one thing that's remained constant
[over the years] is how long it takes
to build [Windows]. No matter which
generation of the product, it takes 12
hours to compile and link the system."
Even with the increase in processing
horsepower over the years, Windows has
grown to match, and the development
process has become far more
sophisticated, so that Microsoft does
more code analysis as part of the
daily build. "The CPUs in the build
lab are pegged constantly for 12
hours," he said. "We've adapted the
process since Windows 2000. Now, we
decompose the source [code] tree into
independent source trees, and use a
new build environment. It's a
multi-machine environment that lets us
turn the crank faster. But because of
all the new code analysis, it still
takes 12 hours."
SOURCE
Also see Mark Lucovsky's classic presentation on developing Windows NT/2000.
I don't work at Microsoft, so I don't know for sure...
The third-hand information I have is that it takes about a day to complete a Windows build, which is more or less in line with attempting to build your favorite OSS operating system from scratch.
Building a modern operating system is a complex and difficult task. The only reason it doesn't take longer is that companies like Microsoft have build environments set up to help automate integration testing, so they can build the system with less manual effort than is involved in most OSS builds.
If you'd like to get a feel for what it takes to build an operating system, might I recommend the free eBook Linux From Scratch.
For a more automated build, try Gentoo. Both options should give you a better idea of the Operating System build process.
Ales Holecek, vice president for development on the Windows team, said that it takes about 16 hours to build Windows 10, and that it's built automatically every day during the night.
It's not built on a single machine, of course, but rather on a build farm.
The answers that say 12-24 hours or overnight are almost certainly correct.
Long ago when I was at Microsoft, and every time I've heard it since, they build 'every night'. (I used to load daily builds of NT fairly regularly. If I recall correctly, they did a "checked build" weekly or some such.)
Those numbers on the end of the version are (or at least WERE) these daily build increments:
My windows 8.1:
C:\WINDOWS\system32 > ver
Microsoft Windows [Version 6.3.9600]
Dividing 9600 by 365 gives about 26 years of daily builds; going back from 2013, when 8.1/Server 2012 R2 was released, that arrives at about 1987 for the start of development on NT, with Windows NT 3.1 released in 1993.
Well, try it out yourself: Grab a Gentoo (or other Linux) distro or try out the Singularity project from Microsoft Research. Another interesting alternative is the ReactOS project. Compiling the kernel alone takes (depending on the machine) about the 15 minutes you've waited for your program. Compiling the whole system takes considerably longer!
I remember hearing that Vista took somewhere over a day to build (can't find a reference now though, argh). It has somewhere in the neighborhood of 50 million lines of code.
How long it takes will really depend on the build setup; I really doubt that the Vista engineers need a day to build the code, even if it would take a day on a single machine.
I work on a project of a similar scale, and until recently builds could take up to 12 hours on a shared multiprocessor Sun server. Since we switched to a Linux-based build farm, a clean build can happen in less than an hour and rebuilds take a few minutes.
It would be interesting to know what setup the Vista guys are using; Linux-based build farms seem unlikely... maybe Windows-based build farms then :)
I don't know how long it takes to compile XP, but 10-15 minutes is not long at all.
Our project, which includes the Linux kernel as one of its components (not the biggest), was taking about an hour to compile. We improved this by using ccache and now it takes only a few minutes.
Not exactly the answer to your question, but I thought it might be relevant/useful.
Related
I am using Eclipse for C/C++ development. I am trying to compile and run a project. When I compile and run the project, after a while my CPU gets to 100% usage. I checked Task Manager and found that Eclipse isn't closing any of the previous builds; they keep running in the background and use my CPU heavily. How do I solve this problem? At 100% usage my PC becomes very, very slow.
If you don't want the build to use up all your CPU time (maybe because you want to do other stuff while building) then you could decrease the parallelism of the build to a point where it leaves one or more cores unused. For example, if you have 8 cores you could configure your build to only use 6 of them.
Your build will take longer, but your machine will be more responsive for other tasks while the build runs.
Adding more RAM seems to have solved my problem. Disk usage is also low now. Maybe since there wasn't enough RAM in my laptop, the CPU was fetching data from the disk directly, which made the disk usage go up.
I would like to be able to create a native iOS app that will let the user write an Arduino sketch, and then compile it to HEX code that can be uploaded to the Arduino board.
It is POSSIBLE to do this (your iOS phone probably already has a compiler for OpenCL on it), but it's certainly not an ideal platform for a fairly CPU-intensive application like compilation. Mobile phones do not like to run at 100% CPU for several seconds every minute or so as you debug, edit, and compile repeatedly to get the Arduino code to "work right".
I sometimes run gcc on my development board(s) at work, which are comparable in performance to a reasonably modern mobile phone, and it isn't exactly "blindingly fast" - and that is for fairly small portions of code: the source I compile this way is typically a single file of a couple of dozen kilobytes, though of course it pulls in some header files.
Bear in mind also that the dev tools will probably take up several dozen megabytes of memory on the phone - I don't see it as something that many people will want to use. And of course, typing on a phone or iPad isn't exactly wonderful, no matter how good the touch techniques are these days. A real keyboard is still miles better.
Run the compiler on-line. There are already several microcontroller projects that do this and use a web GUI as a code editor.
Check out the free(mium) ArduinoCode - an Arduino IDE that runs on iOS. However, because of Apple's limitations you have to run a tiny Java app on your desktop to do the hard work and communicate with your Arduino over USB. Wireless uploading over BLE is available.
Is there any good way to get an indication of whether a computer can run a program/software without performance problems, using pure JavaScript (Google V8) or C++ (Windows, Mac OS & Linux), while requiring as little information as possible from the software creator (like a CPU score or GPU score)?
That way I can give my users a good indication of whether their computer is good enough to run the software, so they don't need to download and install it in the first place if they won't be able to run it anyway.
I'm thinking of something like score-based indications:
CPU: 230 000 (generic processor score)
GPU: 40 000 (generic GPU score)
+ Network/File I/O read/write requirements
That way I only need to calculate those scores on the user's computer and compare them, as long as I'm using the same algorithm on both ends, but I have no clue about any such algorithm that would be sufficient for real-world desktop software.
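Just to illustrate the kind of thing I have in mind, a crude CPU "score" could be nothing more than timing a fixed, deterministic workload and counting how many iterations fit into one second. This is only a sketch (the workload and the scale of the score are made up), but running the same code on my machine and on the user's machine would at least give comparable numbers:

#include <chrono>
#include <cstdint>
#include <iostream>

// Crude CPU "score": iterations of a fixed integer workload per second.
// Purely illustrative - not a calibrated benchmark.
int main()
{
    using clock = std::chrono::steady_clock;

    const auto start = clock::now();
    std::uint64_t iterations = 0;
    std::uint64_t x = 0x9E3779B97F4A7C15ULL;   // arbitrary seed

    while (clock::now() - start < std::chrono::seconds(1)) {
        for (int i = 0; i < 4096; ++i) {       // batch work between clock reads
            x ^= x << 13;                      // xorshift-style mixing so the
            x ^= x >> 7;                       // loop can't be optimized away
            x ^= x << 17;
        }
        iterations += 4096;
    }

    volatile std::uint64_t sink = x;           // keep the result "used"
    (void)sink;

    std::cout << "CPU score (iterations/second): " << iterations << "\n";
    return 0;
}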
I would suggest testing for the existence of specific libraries and environment features (OS version, video card presence, working sound drivers, DirectX, OpenGL, Gnome, KDE). Assign priorities to these checks and compare using the priorities, e.g. video card presence is more important than KDE presence.
The problem is that even outdated hardware can run most software without issues (just more slowly), while the newest hardware cannot run some software without its requirements being installed.
For example, I can run Firefox 11 on my Pentium III Coppermine (using FreeBSD and an X server), but if you install Windows XP on the newest hardware with a six-core i7 and an nVidia GTX 640, it still cannot run DirectX 11 games.
This method requires no assistance from the software creator, but is not 100% accurate.
If you want 90+% accurate information, make the software creator check 5-6 checkboxes before uploading. Example:
My application requires DirectX/OpenGL/3D acceleration
My application requires sound
My application requires Windows Vista or later
My application requires a [high bandwidth] network connection
then you can test specific applications using information from these checkboxes.
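To make that concrete: on Windows, the "requires DirectX/3D acceleration" and "requires Windows Vista or later" boxes could be checked on the user's machine with something as simple as probing for the relevant system DLLs and reading the OS version. A rough, Windows-only sketch (the DLL names are just examples, and DLL presence alone doesn't prove hardware acceleration works):

#include <windows.h>
#include <iostream>

// Returns true if the named system DLL can be loaded at all.
static bool has_dll(const char* name)
{
    HMODULE h = LoadLibraryA(name);
    if (h) {
        FreeLibrary(h);
        return true;
    }
    return false;
}

int main()
{
    // "Requires DirectX/3D acceleration" -> is the D3D runtime present?
    std::cout << "Direct3D 9 runtime:  " << (has_dll("d3d9.dll")  ? "yes" : "no") << "\n";
    std::cout << "Direct3D 11 runtime: " << (has_dll("d3d11.dll") ? "yes" : "no") << "\n";

    // "Requires Windows Vista or later" -> major version >= 6
    OSVERSIONINFOA vi = { sizeof(vi) };
    if (GetVersionExA(&vi)) {
        std::cout << "Windows " << vi.dwMajorVersion << "." << vi.dwMinorVersion
                  << (vi.dwMajorVersion >= 6 ? " (Vista or later)" : " (pre-Vista)") << "\n";
    }
    return 0;
}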
Edit:
I think additional checks could be:
video/audio codecs
pixel/vertex/geometry shader version, GPU physics acceleration (may be crucial for games)
not so much related anymore: processor extensions (SSE2, MMX, etc.)
third party software such as pdf, flash, etc
system libraries (libpng, libjpeg, svg)
system version (Service Pack number; OS edition such as Premium or Professional)
window manager (some apps on OSX require X11 for functioning, some apps on Linux work only on KDE, etc)
These are actual requirements I (and many others) have seen when installing different software.
As for old hardware, if the computer satisfies hardware requirements (pixel shader version, processor extensions, etc), then there's a strong reason to believe the software will run on the system (possibly slower, but that's what benchmarks are for if you need them).
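The processor-extension part of such a requirements list is easy to check in code. For example, with GCC or Clang the builtin below asks the running CPU whether it supports a given instruction set (a minimal, x86-only sketch; on MSVC you would use the __cpuid intrinsic instead):

#include <iostream>

int main()
{
    // __builtin_cpu_supports() queries the CPU at run time (GCC/Clang, x86 only).
    std::cout << "MMX:    " << (__builtin_cpu_supports("mmx")    ? "yes" : "no") << "\n";
    std::cout << "SSE2:   " << (__builtin_cpu_supports("sse2")   ? "yes" : "no") << "\n";
    std::cout << "SSE4.2: " << (__builtin_cpu_supports("sse4.2") ? "yes" : "no") << "\n";
    std::cout << "AVX:    " << (__builtin_cpu_supports("avx")    ? "yes" : "no") << "\n";
    return 0;
}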
For GPUs, I do not think getting a score is usable/possible without running some code on the machine to test whether it is up to spec.
With GPUs this typically means checking which Shader Model the card supports, and either defaulting to a lower shader model (so the application runs at lower quality) or telling the user they have no hope of running the code and quitting.
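As a concrete illustration of that probing (a Windows/Direct3D 11-only sketch): asking the D3D11 runtime to create a hardware device, while passing null pointers because you don't actually want to keep the device, reports the highest feature level the GPU and driver support, which roughly maps onto the shader model your renderer can use.

#include <d3d11.h>
#include <iostream>
#pragma comment(lib, "d3d11.lib")

int main()
{
    D3D_FEATURE_LEVEL level = D3D_FEATURE_LEVEL_9_1;

    // Null device/context pointers: we only want the feature level reported back.
    HRESULT hr = D3D11CreateDevice(
        nullptr,                   // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr, 0,                // no software rasterizer, no flags
        nullptr, 0,                // let the runtime pick the highest feature level
        D3D11_SDK_VERSION,
        nullptr,                   // don't keep the device
        &level,
        nullptr);                  // don't keep the immediate context

    if (FAILED(hr)) {
        std::cout << "No hardware Direct3D 11 device available\n";
        return 1;
    }

    if (level >= D3D_FEATURE_LEVEL_11_0)
        std::cout << "Shader Model 5 class hardware\n";
    else if (level >= D3D_FEATURE_LEVEL_10_0)
        std::cout << "Shader Model 4 class hardware\n";
    else
        std::cout << "Older hardware: fall back to a lower-quality rendering path\n";
    return 0;
}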
I have to program a C/C++ application. I have Windows and Linux (Ubuntu 9.04) on my machine, and can program in both of them (via gcc/Code::Blocks/vim, etc.). The problem is that the executable is going to be run on a Unix (not Linux) machine (I will have access to it in about 20 days).
My team leader doesn't want all of the code to be developed on Linux/Unix. So some of it will have to be developed on Windows (and then many prayers will follow so that nothing bad happens).
The best thing I've come up with is to program remotely on the server, from Windows and Linux, so every programmer (only me for the next 2 weeks) is sort of happy.
On Windows, I think I'm stuck with PuTTY, but is there any alternative on Linux, other than ssh + vim?
I read this: Remote debugging with Eclipse CDT, but it didn't shed much light on the subject. At least not much hope.
There is another issue: I don't consider C/C++ to be a portable language, at least for real programs. Sure it compiles, but many issues will arise, even with Boost/STL. I haven't taken a careful look at the code, but still, how wrong am I?
Any tips will be greatly appreciated.
Thanks in advance.
You could use SSH with Xming for a GUI IDE/editor that is on the remote machine.
If all the code is on the remote machine and compiled there, don't you have to worry about developers trying to work with the same resources? Also, might the machine/network not be able to handle multiple SSH connections if you're using Xming?
If you can convince your system administrator to install the libraries (an X server is not required), you can use X forwarding with SSH, which will allow you to execute X apps remotely and have them come up on your local server. If you're using Linux locally, you probably have X running already, and if you are using Windows, you can use the Xming server (with a little configuration to get it to accept remote connections). For debugging, if you need a separate shell, just set another instance of SSH going and perform debugging from another process.
As for portability, it depends on what you are trying to do. If all you want is a simple console-based application, you shouldn't run into any major portability concerns.
If you are using more complex code, portability depends heavily on two things. The first is the choice of libraries - sure, you can run applications written for Win32 on Linux with Wine or actually compile them with Winelib, but it's not a pleasant experience. If you choose something more portable like Qt or gtkmm, you'll have a much easier time of things. Likewise for filesystem code - using a library like Boost.Filesystem will make things significantly simpler.
The second thing that makes a big difference for portability is to follow the documentation. It's hard to stress this enough - many things that you do incorrectly will have varied results on different platforms, especially if you are using libraries that don't do checks (note: I highly recommend checking code against debug standard libraries for this reason). I once spent nearly a week tracking down a portability bug that arose because someone didn't read the docs and was passing an invalid parameter.
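To make the filesystem point concrete, here is a minimal sketch of the kind of code Boost.Filesystem keeps portable between Windows and Unix (the paths are invented for illustration; link against boost_filesystem and boost_system):

#include <boost/filesystem.hpp>
#include <iostream>

namespace fs = boost::filesystem;

int main()
{
    // operator/ inserts the right separator on Windows and Unix alike.
    fs::path cfg = fs::current_path() / "settings" / "app.conf";

    // Create the directory portably instead of shelling out to mkdir/md.
    if (!fs::exists(cfg.parent_path()))
        fs::create_directories(cfg.parent_path());

    std::cout << "config lives at " << cfg.string() << "\n";
    return 0;
}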
If you want a remote-desktop-like facility, try VNC (www.realvnc.com),
or, in case it's just a remote login, Hummingbird Exceed could help.
You might want to check out the WinGDB Visual Studio extension.
Not sure if this will help, but take a peek at sshfs. I haven't had a need to use it myself, but I have read that others use it to access remote files and directories via ssh and work on the files locally. I presume you could mount your remote home directory via sshfs and then use your local tools to work on the source files. I would be very interested in knowing if this works out, so please post back if you give it a shot.
I use No Machine NX, which gives you the entire desktop of the remote machine. It also has a Windows client. I work remotely from home on Fridays, so I'm using it right now. You'll have to install it on the remote machine, and then install a client on your Windows or Linux machine.
Any good place to learn about POST and how to design and code one? I am a C++ programmer and quite baffled by the term.
Thanks
You might want to take a look at the code for coreboot, a free software (open source) BIOS project that runs on many different kinds of hardware.
You can check out the OpenBIOS project.
They have information on numerous open-source BIOS/firmware implementations.
Being open source, you can grab the code from svn or read it online for all of them.
BIOS? That's not too common in the embedded world, the one place where people still write POSTs. Typically, they happen before the OS itself starts, or alternatively, as the OS starts.
The goal is to figure out whether the device can run, run in degraded mode, or should signal a malfunction. A typical sequence is: test the CPU and XIP flash, then memory, fixed hardware, and then optional hardware. You define a series of tests. A test has a start function and a check function: the start function kicks off the test; the check function polls to see if a result is already available. Tests have dependencies, and the test controller starts those tests whose dependencies have passed (CPU and RAM being the special cases; if they're broken it's not feasible to have a nice test controller).
As you can infer from the CPU and RAM tests, you don't have the luxury of C++. You can't even assume you can use all of C. During the first part of the POST, you might not even have a stack (!)
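To make the start/check idea concrete, here is a rough sketch of the kind of test table and controller loop described above, in plain C-style code. Everything here is invented for illustration; a real POST pokes actual hardware, times tests out, and runs in a far more constrained environment (no heap, possibly no stack early on):

typedef enum { TEST_PENDING, TEST_RUNNING, TEST_PASSED, TEST_FAILED } test_state;

typedef struct {
    const char *name;
    int         depends_on;          /* index of prerequisite test, or -1 */
    void      (*start)(void);        /* kick the test off */
    test_state (*check)(void);       /* poll for a result */
    test_state  state;
} post_test;

/* Illustrative stubs: a real implementation exercises the actual hardware. */
static void       ram_start(void)  { /* walk a test pattern over RAM */ }
static test_state ram_check(void)  { return TEST_PASSED; }
static void       uart_start(void) { /* loopback test on the serial port */ }
static test_state uart_check(void) { return TEST_PASSED; }

static post_test tests[] = {
    { "ram",  -1, ram_start,  ram_check,  TEST_PENDING },
    { "uart",  0, uart_start, uart_check, TEST_PENDING },   /* depends on RAM */
};

void post_run(void)
{
    int busy;
    do {
        busy = 0;
        for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; ++i) {
            post_test *t = &tests[i];
            if (t->state == TEST_PENDING) {
                if (t->depends_on >= 0 && tests[t->depends_on].state == TEST_FAILED)
                    t->state = TEST_FAILED;        /* prerequisite broken: give up */
                else if (t->depends_on < 0 || tests[t->depends_on].state == TEST_PASSED) {
                    t->start();
                    t->state = TEST_RUNNING;
                }
            }
            if (t->state == TEST_RUNNING)
                t->state = t->check();             /* may still report TEST_RUNNING */
            if (t->state != TEST_PASSED && t->state != TEST_FAILED)
                busy = 1;
        }
    } while (busy);
    /* report the outcome: LEDs, beep codes, a degraded-mode flag for the OS, ... */
}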
Open source EFI BIOS, with documentation and specs (good way to learn):
https://www.tianocore.org/
Background: In June of 2004 Intel announced that it would release the "Foundation Code" of its next generation firmware technology - a successor to the PC BIOS - under an open source license later in the year. The Foundation Code, developed by Intel as part of a project code named Tiano, is Intel's "preferred implementation" of the Extensible Firmware Interface (EFI) Specification. The code to be released includes the core of the Foundation Code as well as a driver development kit. To follow through with its intentions to release the code as open source, Intel partnered with Collabnet, an industry leader in providing tools and services to support an open source initiative, to create a community for this effort. The result of this partnership is this Open Source Website.
Since there are more projects that are EFI-based working in parallel with the Foundation Code, it was decided to release the EFI Shell Application and the EFI Self Certification Test (SCT) project to the open source community.
POST (Power On Self Test) is part of the BIOS, and writing a POST, but not the other parts of the BIOS, seems like an odd task indeed.
The documentation section of the processor manufacturer's web site would be a good start for BIOS programming. I remember writing an 80186 BIOS and POST a long time ago, and I worked exclusively with the Intel specs.
And btw, you will be doing this in Assembler, not C++.