F# performance in scientific computing vs. C++

I am curious as to how F# performance compares to C++ performance? I asked a similar question with regard to Java, and the impression I got was that Java is not suitable for heavy number crunching.
I have read that F# is supposed to be more scalable and more performant, but how does this real-world performance compare to C++? Specific questions about the current implementation are:
How well does it do floating-point?
Does it allow vector instructions?
How friendly is it towards optimizing compilers?
How big a memory footprint does it have? Does it allow fine-grained control over memory locality?
Does it have capacity for distributed-memory processors, for example Cray?
What features does it have that may be of interest to computational science where heavy number processing is involved?
Are there actual scientific computing implementations that use it?
Thanks

I am curious as to how F# performance compares to C++ performance?
Varies wildly depending upon the application. If you are making extensive use of sophisticated data structures in a multi-threaded program then F# is likely to be a big win. If most of your time is spent in tight numerical loops mutating arrays then C++ might be 2-3× faster.
Case study: ray tracer
My benchmark here uses a tree for hierarchical culling and numerical ray-sphere intersection code to generate an output image. This benchmark is several years old and the C++ code has been improved upon dozens of times over the years and read by hundreds of thousands of people. Don Syme at Microsoft managed to write an F# implementation that is slightly faster than the fastest C++ code when compiled with MSVC and parallelized using OpenMP.
I have read that F# is supposed to be more scalable and more performant, but how does this real-world performance compare to C++?
Developing code is much easier and faster with F# than C++, and this applies to optimization as well as maintenance. Consequently, when you start optimizing a program the same amount of effort will yield much larger performance gains if you use F# instead of C++. However, F# is a higher-level language and, consequently, places a lower ceiling on performance. So if you have infinite time to spend optimizing you should, in theory, always be able to produce faster code in C++.
This is exactly the same benefit that C++ had over Fortran and Fortran had over hand-written assembler, of course.
Case study: QR decomposition
This is a basic numerical method from linear algebra provided by libraries like LAPACK. The reference LAPACK implementation is 2,077 lines of Fortran. I wrote an F# implementation in under 80 lines of code that achieves the same level of performance. But the reference implementation is not fast: vendor-tuned implementations like Intel's Math Kernel Library (MKL) are often 10× faster. Remarkably, I managed to optimize my F# code well beyond the performance of Intel's implementation running on Intel hardware whilst keeping my code under 150 lines of code and fully generic (it can handle single and double precision, and complex and even symbolic matrices!): for tall thin matrices my F# code is up to 3× faster than the Intel MKL.
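For readers who have not seen the algorithm, here is a minimal modified Gram-Schmidt QR sketch in F#. It is purely illustrative (array-based, sequential, no blocking or pivoting) and is not the tuned, generic implementation discussed above:

    // Modified Gram-Schmidt QR: factor an m-by-n matrix A (m >= n) into Q * R.
    let qr (a: float[,]) =
        let m, n = Array2D.length1 a, Array2D.length2 a
        let q = Array2D.copy a
        let r = Array2D.zeroCreate n n
        for k in 0 .. n - 1 do
            // r.[k,k] is the norm of column k; normalize that column of q.
            let mutable s = 0.0
            for i in 0 .. m - 1 do s <- s + q.[i, k] * q.[i, k]
            let nrm = sqrt s
            r.[k, k] <- nrm
            for i in 0 .. m - 1 do q.[i, k] <- q.[i, k] / nrm
            // Orthogonalize the remaining columns against column k.
            for j in k + 1 .. n - 1 do
                let mutable d = 0.0
                for i in 0 .. m - 1 do d <- d + q.[i, k] * q.[i, j]
                r.[k, j] <- d
                for i in 0 .. m - 1 do q.[i, j] <- q.[i, j] - d * q.[i, k]
        q, r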
Note that the moral of this case study is not that you should expect your F# to be faster than vendor-tuned libraries but, rather, that even experts like Intel's engineers can miss productive high-level optimizations if they use only lower-level languages. I suspect Intel's numerical optimization experts failed to exploit parallelism fully because their tools make it extremely cumbersome whereas F# makes it effortless.
How well does it do floating-point?
Performance is similar to ANSI C but some functionality (e.g. rounding modes) is not available from .NET.
Does it allow vector instructions?
No.
How friendly is it towards optimizing compilers?
This question does not make sense: F# is a proprietary .NET language from Microsoft with a single compiler.
How big a memory foot print does it have?
An empty application uses 1.3 MB here.
Does it allow fine-grained control over memory locality?
Better than most memory-safe languages but not as good as C. For example, you can unbox arbitrary data structures in F# by representing them as "structs".
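For example (a hypothetical type, not from any particular library), a small vector declared as a struct is stored inline, so an array of them is one contiguous, unboxed block of memory:

    // Value type: three floats stored inline, no per-element heap allocation.
    [<Struct>]
    type Vec3 =
        val X : float
        val Y : float
        val Z : float
        new (x, y, z) = { X = x; Y = y; Z = z }

    // An array of a million Vec3s is a single contiguous block of floats,
    // which is good for cache behaviour and memory locality.
    let points = Array.init 1000000 (fun i -> Vec3(float i, 0.0, 0.0))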
Does it have capacity for distributed-memory processors, for example Cray?
Depends what you mean by "capacity for". If you can run .NET on that Cray then you could use message passing in F# (just like the next language) but F# is intended primarily for desktop multicore x86 machines.
What features does it have that may be of interest to computational science where heavy number processing is involved?
Memory safety means you do not get segmentation faults and access violations. The support for parallelism in .NET 4 is good. The ability to execute code on-the-fly via the F# interactive session in Visual Studio 2010 is extremely useful for interactive technical computing.
Are there actual scientific computing implementations that use it?
Our commercial products for scientific computing in F# already have hundreds of users.
However, your line of questioning indicates that you think of scientific computing as high-performance computing (e.g. Cray) and not interactive technical computing (e.g. MATLAB, Mathematica). F# is intended for the latter.

In addition to what others said, there is one important point about F#, and that's parallelism. The performance of ordinary F# code is determined by the CLR, although you may be able to use LAPACK from F# or you may be able to make native calls using C++/CLI as part of your project.
However, well-designed functional programs tend to be much easier to parallelize, which means that you can easily gain performance by using multi-core CPUs, which are definitely available to you if you're doing some scientific computing. Here are a couple of relevant links:
F# and Task-Parallel library (blog by Jurgen van Gael, who is doing machine-learning stuff)
Another interesting answer at SO regarding parallelism
An example of using Parallel LINQ from F#
Chapter 14 of my book discusses parallelism (source code is available)
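To give a flavour of how little code this takes (a minimal sketch, not taken from any of the articles above), the sequential and parallel versions of a pure per-element computation differ only in the module used:

    // A deliberately expensive pure function applied to each element.
    let expensive (x: float) =
        Seq.init 1000 (fun i -> sin (x + float i)) |> Seq.sum

    let inputs = Array.init 100000 float

    let sequentialResults = inputs |> Array.map expensive
    let parallelResults   = inputs |> Array.Parallel.map expensive  // uses all cores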
Regarding distributed computing, you can use any distributed computing framework that's available for the .NET platform. There is an MPI.NET project, which works well with F#, but you may also be able to use DryadLINQ, which is an MSR project.
Some articles: F# MPI tools for .NET, Concurrency with MPI.NET
DryadLINQ project homepage

F# does floating point computation as fast as the .NET CLR will allow it. Not much difference from C# or other .NET languages.
F# does not allow vector instructions by itself, but if your CLR has an API for these, F# should not have problems using it. See for instance Mono.
As far as I know, there is only one F# compiler for the moment, so maybe the question should be "how good is the F# compiler when it comes to optimisation?". The answer is in any case "potentially as good as the C# compiler, probably a little bit worse at the moment". Note that F# differs from e.g. C# in its support for inlining at compile time, which potentially allows for more efficient code that relies on generics.
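A tiny illustration of that inlining support: marking a function inline makes the F# compiler specialize it at each call site, so arithmetic on generic arguments is resolved statically rather than via boxing or virtual dispatch:

    // 'inline' forces compile-time specialization at every call site.
    let inline addThree a b c = a + b + c

    let i = addThree 1 2 3          // compiled as integer additions
    let f = addThree 1.0 2.0 3.0    // compiled as floating-point additions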
Memory footprints of F# programs are similar to those of other .NET languages. The amount of control you have over allocation and garbage collection is the same as in other .NET languages.
I don't know about the support for distributed memory.
F# has very nice primitives for dealing with flat data structures, e.g. arrays and lists. Look for instance at the content of the Array module: map, map2, mapi, iter, fold, zip... Arrays are popular in scientific computing, I guess due to their inherently good memory locality properties.
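A few of those combinators in action on plain float arrays (a trivial sketch):

    let xs = Array.init 1000 (fun i -> float i)
    let squares = xs |> Array.map (fun x -> x * x)                       // map
    let dot = Array.fold2 (fun acc a b -> acc + a * b) 0.0 xs squares    // fold over two arrays
    let pairs = Array.zip xs squares                                     // zip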
For scientific computation packages using F#, you may want to look at what Jon Harrop is doing.

As with all language/performance comparisons, your mileage depends greatly on how well you can code.
F# is a derivative of OCaml. I was surprised to find out that OCaml is used a lot in the financial world, where number crunching performance is very important. I was further surprised to find out that OCaml is one of the faster languages, with performance on par with the fastest C and C++ compilers.
F# is built on the CLR. In the CLR, code is expressed in a form of bytecode called the Common Intermediate Language. As such, it benefits from the optimizing capabilities of the JIT, and has performance comparable to C# (but not necessarily C++), if the code is written well.
CIL code can be compiled to native code in a separate step prior to runtime by using the Native Image Generator (NGEN). This speeds up all later runs of the software as the CIL-to-native compilation is no longer necessary.
One thing to consider is that functional languages like F# benefit from a more declarative style of programming. In a sense, you are over-specifying the solution in imperative languages such as C++, and this limits the compiler's ability to optimize. A more declarative programming style can theoretically give the compiler additional opportunities for algorithmic optimization.

It depends on what kind of scientific computing you are doing.
If you are doing traditional heavy computing, e.g. linear algebra or numerical optimization, then you should not put your code in the .NET framework, and F# in particular is not a good fit. This work is at the algorithm level, and most such algorithms must be coded in an imperative language to get good performance in running time and memory usage. Others have mentioned parallelism, but I must say it is probably of little help when you are doing low-level work such as parallelizing an SVD implementation: once you know how to parallelize an SVD, you simply won't use a high-level language; Fortran, C, or modified C (e.g. Cilk) are your friends.
However, a lot of the scientific computing today is not of this kind; it is high-level application work, e.g. statistical computing and data mining. In these tasks, aside from some linear algebra or optimization, there are also a lot of data flows, IO, preprocessing, graphics, etc. For these tasks, F# is really powerful, for its succinctness, functional style, safety, ease of parallelization, etc.
As others have mentioned, .NET supports Platform Invoke well; quite a few projects inside MS use .NET and P/Invoke together to improve performance at the bottleneck.
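For illustration, a P/Invoke declaration in F# is only a couple of lines; the native library and function below are hypothetical:

    open System.Runtime.InteropServices

    // Hypothetical native routine: double dot_product(double* xs, double* ys, int n)
    [<DllImport("mymath", CallingConvention = CallingConvention.Cdecl)>]
    extern double dot_product(double[] xs, double[] ys, int n)

    let d = dot_product([| 1.0; 2.0; 3.0 |], [| 4.0; 5.0; 6.0 |], 3)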

I don't think that you'll find a lot of reliable information, unfortunately. F# is still a very new language, so even if it were ideally suited for performance heavy workloads there still wouldn't be that many people with significant experience to report on. Furthermore, performance is very hard to accurately gauge and microbenchmarks are hard to generalize. Even within C++, you can see dramatic differences between compilers - are you wondering whether F# is competitive with any C++ compiler, or with the hypothetical "best possible" C++ executable?
As to specific benchmarks against C++, here are some possibly relevant links: O'Caml vs. F#: QR decomposition; F# vs Unmanaged C++ for parallel numerics. Note that as an author of F#-related material and as the vendor of F# tools, the writer has a vested interest in F#'s success, so take these claims with a grain of salt.
I think it's safe to say that there will be some applications where F# is competitive on execution time and likely some others where it isn't. F# will probably require more memory in most cases. Of course the ultimate performance will also be highly dependent on the skill of the programmer - I think F# will almost certainly be a more productive language to program in for a moderately competent programmer. Furthermore, I think that at the moment, the CLR on Windows performs better than Mono on most OSes for most tasks, which may also affect your decisions. Of course, since F# is probably easier to parallelize than C++, it will also depend on the type of hardware you're planning to run on.
Ultimately, I think that the only way to really answer this question is to write F# and C++ code representative of the type of calculations that you want to perform and compare them.

Here are two examples I can share:
Matrix multiplication:
I have a blog post comparing different matrix multiplication implementations.
LBFGS
I have a large-scale logistic regression solver using LBFGS optimization, which is coded in C++. The implementation is well tuned. I modified some of the code to C++/CLI, i.e. I compiled the code for .NET. The .NET version is 3 to 5 times slower than the natively compiled one on different datasets. If you code LBFGS in F#, the performance cannot be better than C++/CLI or C# (but it would be very close).
I have another post on Why F# is the language for data mining; although not directly related to the performance issue you are concerned with here, it is quite relevant to scientific computing in F#.

If I say "ask again in 2-3 years" I think that will answer your question completely :-)
First, don't expect F# to be any different than C# perf-wise, unless you are doing some convoluted recursions on purpose and I'd guess you are not since you asked about numerics.
Floating-point wise it is bound to be better than Java since the CLR doesn't aim at cross-platform uniformity, meaning that the JIT will go to 80 bits whenever it can. On the other hand, you have no control over that beyond watching the number of variables to make sure there are enough FP registers.
Vector-wise, if you scream loud enough maybe something will happen in 2-3 years, since Direct3D is entering .NET as a general API anyway and C# code done in XNA runs on Xbox, which is as close to the bare metal as you can get with the CLR. That still means that you'd need to do some intermediary code on your own.
So don't expect CUDA or even ability to just link NVIDIA libs and get going. You'd have much more luck trying that approach with Haskell if for some reason you really, really need a "functional" language since Haskell was designed to be linking-friendly out of pure necessity.
Mono.Simd has been mentioned already and while it should be back-portable to CLR it might be quite some work to actually do it.
There's quite some code in a social.msdn posting on using SSE3 in .NET, with C++/CLI and C#, some array blitting, injecting SSE3 code for performance, etc.
There was some talk about running CECIL on compiled C# to extract parts into HLSL, compile them into shaders and link glue code to schedule it (CUDA is doing the equivalent anyway) but I don't think that there's anything runnable coming out of that.
A thing that might be worth more to you if you want to try something soon is PhysX.Net on CodePlex. Don't expect it to just unpack and do the magic. However, it has a currently active author, and the code is both normal C++ and C++/CLI, so you can probably get some help from the author if you want to go into details and maybe use a similar approach for CUDA. For full-speed CUDA you'll still need to compile your own kernels and then just interface to .NET, so the easier that part goes the happier you are going to be.
There is a CUDA.NET lib which is supposed to be free but the page gives just e-mail address so expect some strings attached, and while the author writes a blog he's not particularly talkative about what's inside the lib.
Oh, and if you have the budget you might give that Psi Lambda a look (KappaCUDAnet is the .NET part). Apparently they are going to jack up the prices in Nov (if it's not a sales trick :-)

Firstly, C is significantly faster than C++, so if you need that much speed you should make the library etc. in C.
With regard to F#, most benchmarks use Mono, which is up to 2× slower than the MS CLR, due partially to its use of the Boehm GC (they have a new GC and LLVM support, but these are still immature and don't support generics, etc.).
.NET languages themselves are compiled to an IR (the CIL), which compiles to native code about as efficiently as C++. There is one problem set that most GC languages suffer in, and that is large amounts of mutable writes (this includes C++/CLI, as mentioned above). There is a certain class of scientific problems that requires this; when needed, these should probably use a native library or use the Flyweight pattern to reuse objects from a pool (which reduces writes). The reason is that there is a write barrier in the .NET CLR: when updating a reference field (including a box), it sets a bit in a table marking that region as modified. If your code consists of lots of such writes it will suffer.
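A rough sketch of that pooling idea (the type and its members are illustrative only): rent and return reusable buffers instead of allocating fresh objects, which cuts both GC pressure and reference writes:

    open System.Collections.Concurrent

    // Reusable pool of fixed-size float buffers.
    type BufferPool(size: int) =
        let pool = ConcurrentBag<float[]>()
        member this.Rent() =
            match pool.TryTake() with
            | true, buf -> buf
            | _ -> Array.zeroCreate size    // allocate only when the pool is empty
        member this.Return(buf: float[]) = pool.Add(buf)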
That said, a .NET app in a language like C#, using lots of static code, structs, and ref/out on the structs, can produce C-like performance, but it is very difficult to write or maintain code like this (as with C).
Where F# shines, however, is parallelism over immutable data, which goes hand in hand with more read-based problems. It's worth noting that most benchmarks involve far more mutable writes than real-life applications.
With regard to floating point, you should use an alternative library (i.e. the .NET one) to the OCaml ones, since the latter are slow. C/C++ allows faster, lower-precision operations, which OCaml doesn't by default.
Lastly, I would argue that a high-level language like C# or F# plus proper profiling will give you better performance than C and C++ for the same developer time. If you change a bottleneck to a C library P/Invoke call, you will also end up with C-like performance for critical areas. That said, if you have an unlimited budget and care more about speed than maintenance, then C is the way to go (not C++).

Last I knew, most scientific computing was still done in FORTRAN. It's still faster than anything else for linear algebra problems - not Java, not C, not C++, not C#, not F#. LINPACK is nicely optimized.
But the remark about "your mileage may vary" is true of all benchmarks. Blanket statements (except mine) are rarely true.

Related

What's a good and safe language for drawing intensive particle systems over a long period of time?

Another one of my rather ambiguous questions today, sorry.
Currently I have written some half-decent software that has a 'roll your own' RESTful client, which pulls data from Twitter. This data is then visualized with a number of particle systems using openFrameworks (a framework that works with C++).
My plans for this were to run the software indefinitely on my VPS, and build some kind of front-end GUI allowing users to explore the pretty particles and so on. Between the JSON library I am using, C/C++, openFrameworks, and freaking Xcode 4 I have produced way too many SIGABRT and GDB errors to care for. I have gone to the ends of the virtual world to fix them, and rewrote everything over and over. I even managed to SIGABRT the openFrameworks draw circle method, HAH!
(TL;DR starts here) OK, so anyway I am starting from scratch, looking for a powerful language that can crunch maths and blast through a good set of particles, and run quite well over the longest periods of time. Right now I am thinking about Haskell; any ideas?
Thanks in advance all!
Haskell's (or more specifically GHC's) number crunching speed is approaching that of C++ but it's a little way behind. However, it's certainly not terrible, and Haskell's advantages in parallelism may become important. That is, if you write it in straight Haskell first, there's a good chance that it'll be easy to refactor it to run in parallel now or in the future. That isn't so true of C++.
The 'vector' package (on Hackage) would be a good choice for arrays suitable for number crunching. It supports mutable arrays in case that sort of approach is needed. However, if you're prepared to go more on the bleeding edge and your algorithm can be parallelized, you might want to look at the 'repa' package, and for extreme performance on a GPU, take a look at 'Accelerate' (which works but is still categorized as experimental).
The crashes you mention sound like they could be an indication of a bit of complexity in your problem. Where Haskell does well is in managing the complexity of... well, anything. So, if the problem is complex, then Haskell will serve you very well.
The foreign function interface in Haskell is well designed, though you will need to write C glue between Haskell and C++. So, that's another option for your number crunching.
For the web interface, take a look at 'yesod' which is seeing very active development and advertises itself as doing RESTful.
AFAIK, number crunching speed is not Haskell's strongest point - it's a highly abstract language, far from the 'metal'; its strength in a numeric processing context lies in the "mathiness" of its semantics - Haskell code often reads much like a Mathematical proof, and many of its concepts are borrowed from various fields of Mathematics.
For plain old number crunching, C++ is probably still your best choice, as it allows you to stay close to the hardware and optimize tight loops at the machine level, while offering higher-level programming constructs to manage complexity.
OTOH, if you have a library in place for the heavy lifting, and you merely need to write the glue to make the various parts work together, then go with whatever you're most comfortable with - python, C#, java, haskell, C++, ... - as long as they have bindings for all your libraries, you're good. If you don't have a library, then you might also consider writing the performance critical parts in C, and then pull them into your favorite high-level language - this is trivial in C++, slightly harder in python or haskell, and pretty damn inconvenient in java.

What are examples of efficient, type-inferred languages appropriate for working with multi-dimensional arrays

I don't care much about garbage collection; if present, it should be optional. The language D fits the bill but I am exploring other options. Surprisingly to me, this seems to be a sparsely populated spot among languages. I want something with which I can run things at 80% of the speed of C, if possible.
I also would like the language to have good support for multicores. Not necessarily via threads, but anything that does not involve lots of copying. For example GNU's parallel mode for libstdc++ is a reasonably good abstraction for me, but a little weak at giving pre-baked array primitives (this is not a complaint, it is not its job to give array primitives).
I suspect what I am driving at is an OCaml-like language with:
good support for multidimensional arrays,
no (or optional) garbage collection,
parallel programming primitives for array intensive code,
convenient C FFI,
and with a reasonable chance of running at 80% of C speed.
I am not sure what tags to use so suggestions are welcome. I also want to make this a wiki, but not sure how to do it. I have heard of Felix but do not know if that would be appropriate here.
You're pretty much describing D, especially if you have a long time horizon and are willing to wait for some stuff that's in the works to fully pan out.
Good support for multidimensional arrays: I'm mentoring a Google Summer of Code project focused on just that. It's not polished and ready for prime time yet, but it's already usable if you live on the bleeding edge. It also includes bindings to BLAS and LAPACK and expression templates and evaluators to provide a nice wrapper over these libraries.
D's default approach to memory management is garbage collection, but enough low-level facilities exist that you don't have to use it where performance or space efficiency is a high priority. You have full access to the C standard library and can use C's malloc and free directly. The ability to work with untyped blocks of memory also allows custom allocators to be implemented. Both I and the GSoC student I'm mentoring have been using a region allocator, which will be reviewed soon for inclusion in Phobos (the D standard library).
Parallel programming: See the std.parallelism module of the standard library. It's not geared specifically to array intensive code but it's definitely usable for that purpose.
D fully supports the C ABI. All you have to do is translate the header file and link in a C object file. There's even a tool called htod that can automate the simple cases.
D is currently somewhat slower than C because it hasn't had years of work put into optimization, as the focus has been on features and bug fixes. However, it's not far behind and almost all of the performance difference can be explained by a somewhat naive garbage collector and lack of aggressive inlining. If you avoid using the garbage collector in the most performance-critical parts of your code and manually inline a few functions here and there, it should be as fast as C.
Although I'm a huge fan of both OCaml and D, in regards to existing libraries I have found C++ to be the out-and-out winner here. Expression template work cut its teeth in C++'s manipulation of multidimensional arrays and has been used to create extremely efficient libraries that vastly outperform bare C. Temporaries may be removed from calculations in ways that you just cannot achieve in C code that is generic enough to compose operations. Loops can be unrolled automatically at translation time. And you can use autoparallelisation features out of the box that further drive your use of multicore boxes to new heights. With C++11's type inference features, you have all of your requested behavior in a mature implementation that will outperform nearly any other language out there.
Now, I'm not saying you can't do the same in D. You just won't have the maturity of libraries like uBLAS, Eigen, or Blitz++. And you won't have the compiler support that you find with Intel's Cilk Plus and other Parallel Building Blocks. Obviously, down the road I have no doubt that this kind of support will be available from the community, but C++ is the only language I have used that offers it today.
I am saying that you can't get that with OCaml, at least the standard compiler and library, simply because of the lack of true symmetric multiprocessing. Something like JoCaml, OC4MC, etc. may provide the necessary parallelisation, but you still lack a deep use of temporary reduction in commonly available matrix libraries.
This is just my experience. It is much harder yet to get these kinds of optimisations from C#, F#, and that family due to the lack of deterministic destruction of temporaries, which makes expression template techniques much more unwieldy. The lack of compile-time type inference of templates makes many techniques unreachable. Of all the languages I have worked with over the years, building tensor and spinor libraries, graph libraries, abstract algebras (elements represented as matrices), etc., C++ still holds the best support for efficiency, now in a much better type-deduced environment.
I would suggest F#.
Familiar syntax of OCaml
2D array support in the standard library for F# (see the sketch after this list)
TPL, PLINQ and Async support for parallel programming
CLR based, hence garbage collection (Runs on Microsoft .NET as well as open source Mono)
P/Invoke for calling native C libraries
Good performance
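For instance, the 2D array support mentioned above looks like this (a trivial sketch):

    // Build, transform and read back a row of a 3x4 matrix with Array2D.
    let a = Array2D.init 3 4 (fun i j -> float (i * 4 + j))
    let doubled = a |> Array2D.map (fun x -> x * 2.0)
    let firstRow = [| for j in 0 .. Array2D.length2 a - 1 -> a.[0, j] |]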
Jay's FIsh provides what you want, except for the C FFI. FIsh is polyadic, not merely polymorphic. It infers the dimensions of arrays. You can write code and test on 1 or 2 dimensions but calculate with 5 or 6. FIsh is a partial evaluator. With the Fortran backend it outperforms C code easily. FIsh code is evaluated entirely on the stack (no heap, let alone garbage collector).
FYI: Felix does not have type inference. It is otherwise suitable, and easily has the best binding to C of any language other than C++. It's intended to provide low level concurrency (share memory multi-threading). Felix can usually outperform C by use of high level optimisations no C compiler could dream of. The author of Felix will provide any modifications necessary to make your code easy to read and write and run faster than C (I'm the author :) Just give me some use cases!

Programming in gamedev (performance related)

I am just wondering how some things work in gamedev:
I know that performance is actually crucial, so there is still (and I think never will be) no place to use managed languages/platforms such as Java/.NET, just because of their performance. But... recently I have read somewhere here on SO that even though people creating games use C++ as a primary language, they actually do not use STL or Boost (or a lot of them). I think it has something in common with performance, right? If I am wrong, could you please tell me what are the reasons to avoid those libraries (that I think make a developer's life much easier)? Is it because of licensing (Boost)? And what about EA's version of STL? Do other studios make their own versions too?
How "close to metal" game programming really is? Do you go deeper and closer to the machine? Do you sometimes use Assembly for critical inner loops, or C++ is actually the lowest abstraction layer that you use at the moment? I assume that in such products where performance is the most important thing profiling is very, very common task - but are you sometimes forced to use assembly to speed some parts up, or good C++ is "good enough"?
Edit:
Sorry, It may not have been clear, but I am interested in answers from people having game industry experience. I am not interested in some assumptions given by people who do not have commercial experience in game development. I am also not interested in examples of some niche-games created in C#/Java whatever. However if you know a product that looks better than FarCry2 (just and example, but your favourite modern great looking game name here), and is written entirely in Java/.NET, and has similar performance to FarCry2... do not hesitate to mention this product! Thanks.
1.
Contrary to some beliefs, the STL is quite optimized and not at all bad code. The reason for why most game studios don't use it is memory. You don't have as much control over memory allocation and deallocation as if you would write your own version of the STL containers. This is also the reason why managed languages are not preferred.
Writing your own containers will let you write cross platform code and do memory tracking easier. This is especially true on consoles where, for instance the PS3, requires detailed knowledge of the hardware in order to get the best performance out of it. Which usually in the end means that you need full control over memory flow between the PPU, SPUs and RSX.
2.
Assembler is only "required" (in quotes since it's not actually required but helps) for very specialized operations, e.g. math library functions. What's more common is SIMD intrinsics which vectorizes the code. However, most studios have legacy code which is quite optimized and since these optimizations are quite low level it's not code that needs to change greatly between hardware generations. I'd say on consoles it's much more common that you use lower level code.
I know that performance is actually crucial, so there is still (and I think never will be) no place to use managed languages/platforms such as Java/.NET, just because of their performance.
No, you don't know this. You think it, you want to believe it because you romanticize game development, and because you think high-level languages can't be fast.
.NET performance is perfectly good enough for 90% of the games out there. And it's only going to get better. There is no inherent reason why managed platforms must be slower. They have the potential to be faster because they're JIT'ed. In practice, their performance tends to be about the same as reasonably good C++ code, much better than typical C++ code, and slightly worse than really good C++ code. And most big games use more than one language anyway. They use scripting languages, like Lua or Python, or some home-brewed stuff, all of which are orders of magnitudes slower than .NET.
Similarly, there is absolutely no reason why most of your game couldn't be written in .NET. And then the three really performance-critical functions can, if necessary, be ported to native C++ later.
But... recently I have read somewhere here on SO that even though people creating games use C++ as a primary language, they actually do not use STL or Boost (or a lot of them). I think it has something in common with performance, right? If I am wrong, could you please tell me what are the reasons to avoid those libraries
Same as you're guilty of above... Superstition about game development. "Oh no, we can't afford to use other people's code! It's far too inefficient". Game development is stuck in the 80's in terms of programming practices and methodologies. In other words, don't worry too much about what other game developers do. If the STL or Boost make your code easier to write, then use it! And then, if you experience performance problems, profile it, and if necessary, replace that particular library component with your own.
But most of the STL is literally zero overhead. And 95% of the code in any game is not performance critical. Treat game development like you would any other programming. Don't treat it as some magical land where every line of code must be perfectly optimized and where normal rules don't apply.
And what about EA's version of STL? Do other studios make their own versions too?
As far as I know, no. EA made it partly for internal use, but also as input to the C++ community as a whole, as an example of what they (and a lot of game developers) would like to see from future revisions to the standard (it was submitted to the standards committee as well).
I can recommend the book C++ for game programmers. It has an in-depth discussion of the performance cost of c++ features such as the STL, exceptions and RTTI. It also touches on having your own memory manager, and various common performance optimizations.
Apparently there is a new edition out, but it has a different author. What's up with that?
About 1:
I haven't tried EA's STL version, but I can confirm from my own game development experience that the STL can sometimes be a bottleneck. So far I was always able to find workarounds though.
Boost can be very helpful, but it really depends on the particular part of boost whether it's helpful or not for performance-critical code. For example, Boost::filesystem was very useful for me, whereas boost::signals was barely useable due to very poor performance. So I implemented my own signaling library using FastDelegates instead.
About 2:
Most of the time you will get away with regular C++ code. Once the game is running and you can identify bottlenecks with your profiler, you can start optimizing those pieces of code. And even then, you might not have to write any assembler code if you do it right.
For example, my custom-built 2d game engine runs without hardware acceleration. I developed it when 3D drivers were still quite buggy and most casual gamers have outdated graphics card drivers, so compatibility was more important than pure performance at that time.
Still, in our latest game, Gemsweeper, we are using a lot of alpha blending with 8-bit alpha masks and the game still has to run on 500 MHz CPUs. So alpha blending turned out to be a performance-critical area.
To optimize this, I've used the VectorC compiler so that I could take advantage of MMX, SSE and the like without having to write assembler code. But the same code can be fast on one CPU and slow on the other (e.g. Intel vs. AMD), so I also compiled the alpha blending code several times with different optimization settings. The game runs a benchmark at runtime to find the fastest blitting module for each blitting method and uses that module from then on.
I've compared the result with some other 2D blending libraries, one of which claimed to be pure hand-optimized assembler, and my engine had about the same performance on average, as measured on different CPUs.
Bottom line:
Do not optimize without measuring first, and think about alternatives before starting to write assembler. This usually yields good enough results and will save you a lot of time.
We use 'STL' (ie. the standard C++ library) and a small amount of Boost. However some of it is avoided or frowned upon (std::map, std::list, boost::shared_ptr) typically for the over-exuberant memory allocation policies or poor cache coherency. These can typically be worked around, eg. with custom allocators, but instead we have other approaches to the same problems with their own benefits.
As for how close to the metal it really is, it depends. In our project C++ is the lowest level we go. In another project in this studio, there is a little assembly, especially on the non-PC platforms. These days in certain projects you're just as likely to be limited by the GPU as by the CPU so the days of low level code optimisation are getting fewer and the days of optimising shaders and art assets are growing.
Be wary of claiming that Java/.NET etc is never used however. Not everybody needs the performance of FarCry2 (which is a pretty excessive spec) which is why you're seeing more and more games written in managed languages with C++ just for optimisation.
It is true that in game development, STL is not used. In spite of what certain people always rush to claim, they also never use Java or C# or other managed languages.
I'm not talking about small X-Box Live Arcade downloadable games or web browser games, or such things. I'm talking about high-end development in AAA games.
They don't use STL. However, they do use their own custom implementations that look a lot like STL. There will be smart-arrays, there will be hash tables, there will be smart pointers, they just won't be STL.
Consoles have some performance characteristics that are very different from PCs. Even game projects that exclusively target PC are usually using codebases that have been used for console projects in the past. A lot of tweaking goes into making the basic template structures work as desired.
Most game studios also want code that they can adapt to other platforms. Locking into an implementation from MS/Sony/Nintendo makes for a lot more pain when it comes time to port the game to a new platform. The provided template libraries (which aren't necessarily STL to start with) are often less than stellar. At least they are that way early in the hardware cycle when a studio is ironing out the engine they plan to keep using for the next five years.
At the studios I've worked at, I've certainly seen a fair degree of "not-built-here" attitude to dismiss third party code. Sometimes it's justified, sometimes it's not. In the case of basic data structure templates, it typically is.
As for your second question, assembler is occasionally used. But only in isolated situations where a large volume of math needs to happen very frequently. An entire engine might contain two or three smallish files of asm blocks.
You can find out for yourself (to a degree) by looking at game SDKs.
Almost all the id Tech 4 games (DooM III, Prey, Quake IV, ET:QW) have SDKs out, complete with physics, script, AI, math, etc. systems included. The only asm used is for specialized math code, everything else is pure C++.
Crytek has a Crysis SDK out (you'll need the game installed to install the SDK though) and Far Cry SDK.
Valve has the Source SDK available to anyone who has purchased a Source game through steam.
There are a lot more if you look. A lot of the code isn't particularly clean or flexible (sometimes not even fast), but I suppose it's easier to adjust things in code you've written as opposed to monolithic libraries full of hard to understand template-fu.
No, you are largely wrong. Both .NET and Java have been used in commercial games, certainly on Windows (and probably on consoles too).
STL is also used widely, I know that quite a large proportion of amateur games developers use it.
Probably the main reason for not using STL is inertia, and using third party libraries/engines which do not.
I imagine that historically on some platforms, good STL implementations were not available, especially on RAM-limited hardware like the PS2.

Is Communicating Sequential Processes ever used in large multi threaded C++ programs?

I'm currently writing a large multi threaded C++ program (> 50K LOC).
As such I've been motivated to read up a lot on various techniques for handling multi-threaded code. One theory I've found to be quite cool is:
http://en.wikipedia.org/wiki/Communicating_sequential_processes
And it's invented by a slightly famous guy, who's made other non-trivial contributions to concurrent programming.
However, is CSP used in practice? Can anyone point to any large application written in a CSP style?
Thanks!
CSP, as a process calculus, is fundamentally a theoretical thing that enables us to formalize and study some aspects of a parallel program.
If you instead want a theory that enables you to build distributed programs, then you should take a look at parallel structured programming.
Parallel structured programming is the basis of current HPC (high-performance computing) research and provides a methodology for how to approach and design parallel programs (essentially, flowcharts of communicating computing nodes) and runtime systems to implement them.
A central idea in parallel structured programming is that of the algorithmic skeleton, developed initially by Murray Cole. A skeleton is something like a parallel design pattern with an associated cost model and (usually) a run-time system that supports it. A skeleton models, studies, and supports a class of parallel algorithms that have a certain "shape".
As a notable example, mapreduce (made popular by Google) is just a kind of skeleton that addresses data parallelism, where a computation can be described by a map phase (apply a function f to all elements that compose the input data) and a reduce phase (take all the transformed items and "combine" them using an associative operator +).
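Sketched in F#, purely to make the shape concrete (shared-memory only, not a distributed implementation), the map/reduce skeleton is just a higher-order function parameterized by f and by the associative combiner:

    // Map phase: apply f to every input. Reduce phase: combine with an associative operator.
    let mapReduce (f: 'a -> 'b) (combine: 'b -> 'b -> 'b) (zero: 'b) (xs: 'a[]) =
        xs |> Array.Parallel.map f |> Array.fold combine zero

    // Example: total word count over a set of documents.
    let docs = [| "a b c"; "d e"; "f" |]
    let totalWords = mapReduce (fun (d: string) -> d.Split(' ').Length) (+) 0 docs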
I found the idea of parallel structured programming both theoretically sound and practically useful, so I suggest giving it a look.
A word about multi-threading: since skeletons address massive parallelism, they are usually implemented on distributed memory rather than shared memory. Intel has developed a library, TBB, which addresses multi-threading and (partially) follows the parallel structured programming framework. It is a C++ library, so you can probably just start using it in your projects.
Yes and no. The basic idea of CSP is used quite a bit. For example, thread-safe queues in one form or another are frequently used as the primary (often only) communication mechanism to build a pipeline out of individual processes (threads).
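To make that concrete, here is a minimal sketch (in F#, for brevity) where each pipeline stage is an agent that owns its own message queue and posts results downstream:

    // Two-stage pipeline: stage1 squares its input and forwards it to stage2,
    // which prints the result. Each MailboxProcessor has its own queue.
    let stage2 = MailboxProcessor<int>.Start(fun inbox -> async {
        while true do
            let! x = inbox.Receive()
            printfn "result: %d" x })

    let stage1 = MailboxProcessor<int>.Start(fun inbox -> async {
        while true do
            let! x = inbox.Receive()
            stage2.Post(x * x) })

    [1 .. 5] |> List.iter (fun x -> stage1.Post x)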
Hoare being Hoare, however, there's quite a bit more to his original theory than that. He invented a notation for talking about the processes, defined a specific set of signals that can be sent between the processes, and so on. The notation has since been refined in various ways, quite a bit of work put into proving various aspects, and so on.
Application of that relatively formal model of CSP (as opposed to just the general idea) is much less common. It's been used in a few systems where high reliability was considered extremely important, but few programmers appear interested in learning (yet another) formal design notation.
When I've designed systems like this, I've generally used an approach that's less rigorous, but (at least to me) rather easier to understand: a fairly simple diagram, with boxes representing the processes, and arrows representing the lines of communication. I doubt I could really offer much in the way of a proof about most of the designs (and I'll admit I haven't designed a really huge system this way), but it's worked reasonably well nonetheless.
Take a look at the website for a company called Verum. Their ASD technology is based on CSP and is used by companies like Philips Healthcare, Ericsson and NXP semiconductors to build software for all kinds of high-tech equipment and applications.
So to answer your question: Yes, CSP is used on large software projects in real-life.
Full disclosure: I do freelance work for Verum
Answering a very old question, yet it seems an important one:
There is Go where CSPs are a fundamental part of the language. In the FAQ to Go, the authors write:
Concurrency and multi-threaded programming have a reputation for difficulty. We believe this is due partly to complex designs such as pthreads and partly to overemphasis on low-level details such as mutexes, condition variables, and memory barriers. Higher-level interfaces enable much simpler code, even if there are still mutexes and such under the covers.
One of the most successful models for providing high-level linguistic support for concurrency comes from Hoare's Communicating Sequential Processes, or CSP. Occam and Erlang are two well known languages that stem from CSP. Go's concurrency primitives derive from a different part of the family tree whose main contribution is the powerful notion of channels as first class objects. Experience with several earlier languages has shown that the CSP model fits well into a procedural language framework.
Projects implemented in Go are:
Docker
Google's download server
Many more
This style is ubiquitous on Unix, where many tools are designed to process from standard in to standard out. I don't have any first-hand knowledge of large systems that are built that way, but I've seen many small one-off systems that are.
For instance, this simple command line uses (at least) 3 processes:
cat list-1 list-2 list-3 | sort | uniq > final.list
This system is only moderately sized: I wrote a protocol processor that strips away and interprets successive layers of protocol in a message, using a style very similar to this. It was an event-driven system using something akin to cooperative threading, but I could've used multithreading fairly easily with a couple of added tweaks.
The program is proprietary (unfortunately) so I can't show off the source code.
In my opinion, this style is useful for some things, but usually best mixed with some other techniques. Often there is a core part of your program that represents a processing bottleneck, and applying various concurrency increasing techniques there is likely to yield the biggest gains.
Microsoft had a technology called ActiveMovie (if I remember correctly) that did sequential processing on audio and video streams. Data got passed from one filter to another to go from input to output format (and source/sink). Maybe that's a practical example??
The Wikipedia article looks to me like a lot of funny symbols used to represent somewhat pedestrian concepts. For very large or extensible programs, the formalism can be very important to check how the (sub)processes are allowed to interact.
For a program in the 50,000-line class, you're probably better off architecting it as you see fit.
In general, following ideas such as these is a good idea in terms of performance. Persistent threads that process data in stages will tend not to contend, and exploit data locality well. Also, it is easy to throttle the threads to avoid data piling up as a fast stage feeds a slow stage: just block the fast one if its output buffer grows too big.
A little bit off-topic, but for my thesis I used a tool framework called TERRA/LUNA which aims at software development for Embedded Control Systems but is used heavily for all sorts of software development at my institute (so only academic use here).
TERRA is a graphical CSP and software architecture editor and LUNA is both the name for a C++ library for CSP based constructs and the plugin you'll find in TERRA to generate C++ code from your CSP models.
It becomes very handy in combination with FDR3 (a CSP refinement checker) to detect any sort of (dead/live/etc.) lock, or even for profiling.

Languages faster than C++ [closed]

It is said that Blitz++ provides near-Fortran performance.
Does Fortran actually tend to be faster than regular C++ for equivalent tasks?
What about other HL languages of exceptional runtime performance? I've heard of a few languages surpassing C++ for certain tasks... Objective Caml, Java, D...
I guess GC can make much code faster, because it removes the need for excessive copying around the stack? (assuming the code is not written for performance)
I am asking out of curiosity -- I always assumed C++ is pretty much unbeatable barring expert ASM coding.
Fortran is faster and almost always better than C++ for purely numerical code. There are many reasons why Fortran is faster. It is the oldest compiled language (a lot of knowledge in optimizing compilers). It is still THE language for numerical computations, so many compiler vendors make a living selling optimized compilers. There are also other, more technical reasons. Fortran (well, at least Fortran 77) does not have pointers and thus does not have the aliasing problems which plague the C/C++ languages in that domain. Many high-performance libraries are still coded in Fortran, with a long (> 30 years) history. Neither C nor C++ has any good array constructs (C is too low-level; C++ has as many array libraries as compilers on the planet, all incompatible with each other, thus preventing a pool of well-tested, fast code).
Whether Fortran is faster than C++ is a matter of discussion. Some say yes, some say no; I won't go into that. It depends on the compiler, the architecture you're running it on, the implementation of the algorithm... etc.
Where Fortran does have a big advantage over C is the time it takes you to implement those algorithms. And that makes it extremely well suited for any kind of numerical computing. I'll state just a few obvious advantages over C:
1-based array indexing (tremendously helpful when implementing larger models; you don't have to think about it, but just FORmula TRANslate)
it has a power operator (**) (God, whose idea was it that a power function would do instead of an operator?!)
it has, I'd say the best support for multidimensional arrays of all the languages in the current market (and it doesn't seem that's gonna change so soon) - A(1,2) just like in math
not to mention avoiding the loops - A=B*C multiplies the arrays (almost like matlab syntax with compiled speed)
it has parallelism features built into the language (check the new standard on this one)
very easily connectible with languages like C and Python, so you can do your heavy-duty calculations in Fortran, while doing... whatever... in the language of your choice, if you feel so inclined
completely backward compatible (since whole F77 is a subset of F90) so you have whole century of coding at your disposal
very very portable (this might not work for some compiler extensions, but in general it works like a charm)
a problem-solving-oriented community (since Fortran users are usually not CS people, but mathematicians, physicists, engineers... people with little programming experience but lots of problem-solving experience, whose knowledge about your problem can be very helpful)
Can't think of anything else off the top of my head right now, so this will have to do.
What Blitz++ is competing against is not so much the Fortran language, but the man-centuries of work that have gone into Fortran math libraries. To some extent the language helps: an older language has had a lot more time to get optimizing compilers (and, let's face it, C++ is one of the most complex languages). On the other hand, high-level C++ libraries like Blitz++ and uBLAS allow you to state your intentions more clearly than relatively low-level Fortran code, and allow for whole new classes of compile-time optimizations.
However, using any library effectively all the time requires developers to be well acquainted with the language, the library and the mathematics. You can usually get faster code by improving any one of the three...
FORTRAN is typically faster than C++ for array processing because of the different ways the languages implement arrays: FORTRAN doesn't allow aliasing of array elements, whereas C++ does. This makes the FORTRAN compiler's job easier. Also, FORTRAN has many very mature mathematical libraries which have been worked on for nearly 50 years; C++ has not been around that long!
This will depend a lot on the compiler and the programmers, on whether it has GC, and can vary too much. If it is compiled directly to machine code then expect better performance than interpreted code most of the time, but there is a finite amount of optimization possible before you have asm speed anyway.
If someone said Fortran was slightly faster, would you code a new project in it anyway?
The thing with C++ is that it is very close to the hardware level. In fact, you can program at the hardware level (via assembly blocks). In general, C++ compilers do a pretty good job at optimisations (for a huge speed boost, enable "Link Time Code Generation" to allow the inlining of functions between different cpp files), but if you know the hardware and have the know-how, you can write a few functions in assembly that work even faster (though sometimes, you just can't beat the compiler).
You can also implement your own memory managers (which is something a lot of other high-level languages don't allow), so you can customize them for your specific task (maybe most allocations will be 32 bytes or less; then you can just have a giant list of 32-byte buffers that you can allocate/deallocate in O(1) time). I believe that C++ CAN beat any other language, as long as you fully understand the compiler and the hardware that you are using. The majority of it comes down to what algorithms you use more than anything else.
You must be using some odd managed XML parser as you load this page then. :)
We continuously profile code and the gain is consistent (and this is not naive C++, it is just modern C++ with Boost). It consistently beats any CLR implementation by at least 2× and often by 5× or more. A bit better than the Java days, when it was around 20× faster, but you can still find good instances, simply eliminate all the System.Object bloat, and clearly beat it to a pulp.
One thing managed devs don't get is that the hardware architecture is against any scaling of VM and object-root approaches. You have to see it to believe it: hang on, fire up a browser and go to a 'thin' VM like Silverlight. You'll be shocked how slow and CPU-hungry it is.
Two, kick off a database app for any performance comparison: yes, managed vs. native DB.
It's usually the algorithm not the language that determines the performance ballpark that you will end up in.
Within that ballpark, optimising compilers can usually produce better code than most assembly coders.
Premature optimisation is the root of all evil
This may be the "common knowledge" that everyone can parrot, but I submit that's probably because it's correct. I await concrete evidence to the contrary.
D can sometimes be faster than C++ in practical applications, largely because the presence of garbage collection helps avoid the overhead of RAII and reference counting when using smart pointers. For programs that allocate large amounts of small objects with non-trivial lifecycles, garbage collection can be faster than C++-style memory management. Also, D's builtin arrays allow the compiler to perform better optimizations in some cases than C++'s STL vector, which the compiler doesn't understand. Furthermore, D2 supports immutable data and pure function annotations, which recent versions of DMD2 optimize based on. Walter Bright, D's creator, wrote a JavaScript interpreter in both D and C++, and according to him, the D version is faster.
C# is much faster than C++ - in C# I can write an XML parser and data processor in a tenth the time it takes me to write it C++.
Oh, did you mean execution speed?
Even then, if you take the time from the first line of code written to the end of the first execution of the code, C# is still probably faster than C++.
This is a very interesting article about converting a C++ program to C# and the effort required to make the C++ faster than the C#.
So, if you take development speed into account, almost anything beats C++.
OK, to address the OP's runtime-only performance requirement: it's not the language, it's the implementation of the language that determines the runtime performance. I could write a C++ compiler that produces the slowest code imaginable, but it's still C++. It is also theoretically possible to write a compiler for Java that targets IA32 instructions rather than the Java VM byte codes, giving a runtime speed boost.
The performance of your code will depend on the fit between the strengths of the language and the requirements of the code. For example, a program that does lots of memory allocation / deallocation will perform badly in a naive C++ program (i.e. use the default memory allocator) since the C++ memory allocation strategy is too generalised, whereas C#'s GC based allocator can perform better (as the above link shows). String manipulation is slow in C++ but quick in languages like php, perl, etc.
It all depends on the compiler. Take, for example, the Stalin Scheme compiler: it beats almost all languages in the Debian micro-benchmark suite, but do they mention anything about compile times?
No. I suspect (I have not used Stalin before) that compiling for benchmarks (in other words, all optimizations at maximum effort levels) takes a jolly long time for anything but the smallest pieces of code.
If the code is not written for performance then C# is faster than C++.
A necessary disclaimer: All benchmarks are evil.
Here are benchmarks in favour of C++.
The above two links show that we can find cases where C++ is faster than C# and vice versa.
Performance of a compiled language is a useless concept: What's important is the quality of the compiler, ie what optimizations it is able to apply. For example, often - but not always - the Intel C++ compiler produces better performing code than g++. So how do you measure the performance of C++?
Where language semantics come in is how easy it is for the programmer to get the compiler to create optimal output. For example, it's often easier to parallelize Fortran code than C code, which is why Fortran is still heavily used for high-performance computation (eg climate simulations).
As the question and some of the answers mentioned assembler: the same is true here, it's just another compiled language and thus not inherently 'faster'. The difference between assembler and other languages is that the programmer - who ideally has absolute knowledge about the program - is responsible for all of the optimizations instead of delegating some of them to the 'dumb' compiler.
E.g. function calls in assembler may use registers to pass arguments and don't need to create unnecessary stack frames, but a good compiler can do this as well (think inlining or fastcall). The downside of using assembler is that better-performing algorithms are harder to implement (think linear search vs. binary search, hashtable lookup, ...).
Doing much better than C++ is mostly going to be about making the compiler understand what the programmer means. An example of this might be an instance where a compiler of any language infers that a region of code is independent of its inputs and just computes the result value at compile time.
Another example of this is how C# produces some very high performance code simply because the compiler knows what particular incantations 'mean' and can cleverly use the implementation that produces the highest performance, where a transliteration of the same program into C++ results in needless alloc/delete cycles (hidden by templates) because the compiler is handling the general case instead of the particular case this piece of code is giving.
A final example might be in the Brook/CUDA adaptations of C designed for exotic hardware that isn't so exotic anymore. The language supports the exact primitives (kernel functions) that map to the non-von Neumann hardware being compiled for.
Is that why you are using a managed browser? Because it is faster. Or a managed OS, because it is faster. Nah, hang on, it is the SQL database... Wait, it must be the game you are playing. Stop, there must be a piece of numerical code Java and C# frankly are useless with. BTW, you have to check what your VM is written in before you slag the root language and say it is slow.
What a misconception, but hey, show me a fast managed app so we can all have a laugh. VS? OpenOffice?
Ahh... The good old question - which compiler makes faster code?
It only matters in code that actually spends much time at the bottom of the call stack, i.e. hot spots that don't contain function calls, such as matrix inversion, etc.
(Implied by 1) It only matters in code the compiler actually sees. If your program counter spends all its time in 3rd-party libraries you don't build, it doesn't matter.
In code where it does matter, it all comes down to which compiler makes better ASM, and that's largely a function of how smartly or stupidly the source code is written.
With all these variables, it's hard to distinguish between good compilers.
However, as was said, if you've got a lot of Fortran code to compile, don't re-write it.