Comparison of Boost StateCharts vs. Samek's "Quantum Statecharts" - c++

I've had heavy exposure to Miro Samek's "Quantum Hierarchical State Machine," but I'd like to know how it compares to Boost StateCharts - as told by someone who has worked with both. Any takers?

I know them both, although at different levels of detail. But we can start with the differences I've come across; maybe there are more :-).
Scope
First, the Quantum Platform provides a complete execution framework for UML state machines, whereas boost::statechart only helps with implementing the state machines themselves. As such, boost::statechart provides the same mechanism as the event processor of the Quantum Platform (QEP).
UML Conformance
Both approaches are designed to be UML conformant. However, the Quantum Platform executes transition actions before the exit actions of the respective state. This contradicts the UML, but in practice it is seldom a problem (if the developer is aware of it).
Boost::statechart is designed according to UML 1.4, but as far as I know the execution semantics did not change in UML 2.0 in an incompatible way (please correct me if I'm wrong), so this shouldn't be a problem either.
Supported UML features
Neither implementation supports the full UML state machine feature set. For example, parallel states (aka AND states) are not supported directly in QP; they have to be implemented manually by the user. Boost::statechart does not support internal transitions, because they were introduced in UML 2.0.
I think the exact features that each technique supports are easy to figure out in the documentation, so I don't list them here.
In fact, both support the most important statechart features.
Other differences
Another difference is that QP is suitable for embedded applications, whereas boost::statechart may or may not be. The FAQ says "it depends" (see http://www.boost.org/doc/libs/1_44_0/libs/statechart/doc/faq.html#EmbeddedApplications), but to me this is already a big warning sign.
Also, you have to take special measures to make boost::statechart real-time capable (see FAQ).
So much for the differences that I know of; tell me if you find more, I'd be interested!

I have also worked with both, so let me elaborate on theDmi's great answer:
Trace capability:
QP also implements a powerful trace capability called QSpy which enables very fine granularity traces with filter capabilities. With boost you have to roll your own and you never get beyond a certain granularity.
Modern C++ Style and Compile time error checking:
While Boost MSM and Statecharts give horrid and extremely long error messages if you mess up (as does all code written by the template geniuses I envy), compile-time checking is still far better than runtime error detection. QP uses Q_ASSERT() and similar macros to do some error checking, but in general you have to watch yourself more with QP and the code is less durable.
I also find that the extensive use of the preprocessor in QP takes a bit of getting used to. Using the preprocessor rather than templates, virtual functions etc. may be warranted because of QP's use in embedded systems, where the C++ compilers are often worse and the hardware is less virtual-function friendly, but sometimes I wish Mr. Samek would make a C, a C++ and a Modern C++ version ;) Rumor has it I'm not the only one who hates the preprocessor.
Scalability:
Boost MSM is not good for anything above 20 states; Statecharts has pretty much no limit on states, but the number of transitions a state can have is limited by the mpl::vector/list constraints. QP is scalable to an insane degree: virtually unlimited states and transitions are possible. It should also be noted that QP state machines can be spread over many, many files with few dependencies.
Model driven development:
Because of its extreme scalability and flexibility, QP is much better suited for Model Driven Development; see this article for a lengthy comparison: http://security.hsr.ch/mse/projects/2011_Code_Generator_for_UML_State_Machines.pdf
Embedded designs:
QP is the only solution for any kind of embedded design in my mind. It's documented down to the bare bones, so it's easily portable; ports exist for many common processors, and it brings a lot of stuff with it beyond the state machine functionality. I particularly like the raw thread-safe queues and memory management. I have never seen an embedded kernel I liked until I tried the RTC Kernel in QP (although it should be noted I have not used it in production code yet).

I am unfamiliar with Boost StateCharts, but something I feel Samek gets wrong is that he associates transition actions with state context. Transition actions should occur between states.
To understand why I don't like this style requires an example:
What if a state has two different transitions out? Then the events are different but the source state would be the same.
In Samek's formalism, transition actions are associated with a state context, so you have to have the same action on both transitions. In this way, Samek does not allow you to express a pure Mealy model.
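To make the point concrete, here is a minimal hand-rolled sketch (deliberately not QP or Boost code; the state, event and action names are invented) of a Mealy-style dispatch where two transitions leave the same source state but carry different actions:
#include <iostream>

enum class State { Idle, Running };
enum class Event { Start, Abort };

// Each transition owns its action; the source state (Idle) is shared.
State dispatch(State s, Event e) {
    if (s == State::Idle && e == Event::Start) {
        std::cout << "action A: spin up hardware\n";   // action of Idle --Start--> Running
        return State::Running;
    }
    if (s == State::Idle && e == Event::Abort) {
        std::cout << "action B: log and stay idle\n";  // different action, same source state
        return State::Idle;
    }
    return s;  // ignore events with no matching transition
}

int main() {
    State s = State::Idle;
    s = dispatch(s, Event::Start);  // prints "action A: ..."
}
In a formalism that ties actions to the state context rather than to the (state, event) pair, actions A and B above would have to be merged into one.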
While I have not provided a comparison to Boost StateCharts, I have provided you with some details on how to critique StateCharts implementations: by analyzing coupling among the various components that make up the implementation.

Related

Tool for model checking large, distributed C++ projects such as KDE?

Is there a tool which can handle model checking large, real-world, mostly-C++, distributed systems, such as KDE?
(KDE is a distributed system in the sense that it uses IPC, although typically all of the processes are on the same machine. Yes, by the way, this is a valid usage of "distributed system" - check Wikipedia.)
The tool would need to be able to deal with intraprocess events and inter-process messages.
(Let's assume that if the tool supports C++, but doesn't support other stuff that KDE uses such as moc, we can hack something together to workaround that.)
I will happily accept less general (e.g. static analysers specialised for finding specific classes of bugs) or more general static analysis alternatives, in lieu of actual model checkers. But I am only interested in tools that can actually handle projects of the size and complexity of KDE.
You're obviously looking for a static analysis tool that can
parse C++ on scale
locate code fragments of interest
extract a model
pass that model to a model checker
report that result to you
A significant problem is that everybody has a different idea about what model they'd like to check.
That alone likely kills your chance of finding exactly what you want, because each model extraction tool has generally made a choice as to what it wants to capture as a model, and the chances that it matches what you want precisely are IMHO close to zero.
You aren't clear on what specifically you want to model, but I presume you want to find the communication primitives and model the process interactions to check for something like deadlock?
The commercial static analysis tool vendors seem like a logical place to look, but I don't think they are there, yet. Coverity would seem like the best bet, but it appears they only have some kind of dynamic analysis for Java threading issues.
This paper claims to do this, but I have not looked at it in any detail: Compositional analysis of C/C++ programs with VeriSoft. Related is [PDF] Computer-Assisted Assume/Guarantee Reasoning with VeriSoft. It appears you have to hand-annotate the source code to indicate the modelling elements of interest. The VeriSoft tool itself appears to be proprietary to Bell Labs and is likely hard to obtain.
Similarly, this one: Distributed Verification of Multi-threaded C++ Programs.
This paper also makes interesting claims, but doesn't process C++ in spite of the title: Runtime Model Checking of Multithreaded C/C++ Programs.
While all the parts of this are difficult, an issue they all share is parsing C++ (as exemplified by the previously quoted paper) and finding the code patterns that provide the raw information for the model. You also need to parse the specific dialect of C++ you are using; it's not nice that the C++ compilers all accept different languages. And, as you have observed, processing large C++ code bases is necessary. Model checkers (SPIN and friends) are relatively easy to find.
Our DMS Software Reengineering Toolkit provides general-purpose parsing, with customizable pattern matching and fact extraction, and has a robust C++ front end that handles many dialects of C++ (EDIT Feb 2019: including C++17 in ANSI, GCC and MS flavors). It could likely be configured to find and extract the facts that correspond to the model you care about. But it doesn't do this off the shelf.
DMS with its C front end has been used to process extremely large C applications (19,000 compilation units!). The C++ front end has been used in anger on a variety of large-scale C++ projects (EDIT Feb 2019: including large-scale refactoring of APIs across 3000+ compilation units). Given DMS's general capability, I think it is likely capable of handling fairly big chunks of code. YMMV.
Static code analyzers, when used against a large code base for the first time, usually produce so many warnings and alerts that you won't be able to analyze all of them in a reasonable amount of time. It is hard to single out real problems from code that just looks suspicious to a tool.
You can try to use automatic invariant discovery tools like Daikon that capture perceived invariants at run time. You can later validate whether the discovered invariants (equivalence of variables, "a == b + 1" for example) make sense and then insert permanent asserts into your code. This way, when an invariant is violated as a result of your change, you will get a warning that you perhaps broke something. This method helps you avoid restructuring or changing your code to add tests and mocks.
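As a hypothetical illustration (the struct and the invariant are invented, not Daikon output from a real project), suppose Daikon reported that "consumed == produced - queued" always held at run time; you would then freeze it as an assert:
#include <cassert>

struct Stats {
    int produced = 0;
    int consumed = 0;
    int queued   = 0;

    // Invariant assumed to have been reported by Daikon for this module.
    void check_invariant() const { assert(consumed == produced - queued); }

    void produce() { ++produced; ++queued; check_invariant(); }
    void consume() { ++consumed; --queued; check_invariant(); }
};
A later change that breaks this bookkeeping now fails the assert under test instead of surfacing as a subtle bug.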
The usual way of applying formal techniques to large systems is to modularise them and write specifications for the interfaces of each module. Then you can verify each module independently (while verifying a module, you import the specifications - but not the code - of the other modules it calls). This approach makes verification scalable.

Safe c++ in mission critical realtime apps

I'd like to hear various opinions on how to safely use C++ in mission critical realtime applications.
More precisely, it is probably possible to create some macros/templates/class library for safe data manipulation (guarding against overflows, divide-by-zero producing infinity values or division being possible only for a special "nonzero" data type), arrays with bounds checking and foreach loops, safe smart pointers (similar to boost shared_ptr, for instance) and even a safe multithreading/distributed model (message passing and lightweight processes like the ones defined in the Erlang language).
Then we prohibit some dangerous C/C++ constructs such as raw pointers, some raw types, the native "new" operator and native C/C++ arrays (for the application programmer, not for the library writer, of course). Ideally, we should create a special preprocessor/checker; at the very least we must have some formal checking procedure, which can be applied to the sources using some tool or manually by some person.
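A minimal sketch of the kind of wrappers described above (the names CheckedArray, NonZero and safe_div are invented for illustration):
#include <array>
#include <stdexcept>

template <typename T, std::size_t N>
class CheckedArray {
    std::array<T, N> data_{};
public:
    // Bounds-checked access instead of raw indexing.
    T& at(std::size_t i) {
        if (i >= N) throw std::out_of_range("CheckedArray index");
        return data_[i];
    }
};

class NonZero {
    double v_;
public:
    // The type itself guarantees the divisor is never zero.
    explicit NonZero(double v) : v_(v) {
        if (v == 0.0) throw std::invalid_argument("NonZero(0)");
    }
    double value() const { return v_; }
};

inline double safe_div(double num, NonZero den) { return num / den.value(); }
The application-level rule would then be "use CheckedArray instead of raw arrays and safe_div instead of /", enforced by review or by a checker.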
So, my questions:
1) Are there any existing libraries/projects that utilize such an idea? (Embedded C++ is apparently not the desired kind.)
2) Is it a good idea at all or not? Or might it be useful only for prototyping some other hypothetical language? Or is it totally unusable?
3) Any other thoughts (or links) on this matter also welcome
Sorry if this question is not actually a question, off-topic, a duplicate, etc., but I haven't found a more appropriate place to ask it.
For good rules on how to write C++ for mission critical real-time applications have a look at the Joint Strike Fighter coding standards. Many of the rules there are based on the MISRA C coding standards, which I believe are proprietary. PC-Lint is a C++ code checker with rule sets like what you want (including the MISRA rules). I believe you can customize your own rules as well.
We use C++ in mission-critical real-time applications, although I suppose we have it easy (in theory) because we have to only provide real-time guarantees as good as the hardware our clients use. Thus, sufficient profiling lets us get by without mlockall() or stack pre-loading or any other RT traditions. As for the language itself, I think everyday modern C++ coding practices (ones that discourage C concepts) are entirely sufficient to write robust applications that can be used in RT contexts, given 21st century hardware.
Unit tests and QA should be the main focus of effort, instead of in-house libraries that duplicate existing language features.
If you're writing critical high-performance realtime S/W in C++, you probably need every microsecond you can get out of the hardware. As such, I wouldn't necessarily suggest implementing all the extra checks such as the ones you mentioned, at least the ones with overhead implications on program execution. You can obviously mask floating point exceptions to prevent divide by zero from crashing the program.
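For instance, a small sketch of the masking idea: with floating-point exceptions masked (the default on most desktop platforms), a divide by zero yields infinity rather than a trap, and <cfenv> lets you test the sticky flag afterwards if you care.
// Note: strictly, #pragma STDC FENV_ACCESS ON is required for full conformance.
#include <cfenv>
#include <cmath>
#include <iostream>

int main() {
    std::feclearexcept(FE_ALL_EXCEPT);
    volatile double zero = 0.0;          // volatile keeps the compiler from folding the division
    double r = 1.0 / zero;               // no trap: r becomes +inf
    if (std::fetestexcept(FE_DIVBYZERO))
        std::cout << "divide-by-zero flag set, r = " << r
                  << ", isinf = " << std::isinf(r) << "\n";
}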
Some observations:
Peer review all code (possibly multiple reviewers). This will go a long way to improving quality without requiring lots of runtime checks.
DO make use of diagnostic tools and non-release-only asserts.
Do make use of simulation systems to test on non-embedded hardware.
C++ was specifically designed without things like bounds checking for performance reasons.
In general I don't suggest arbitrarily restricting the language, although making use of RAII and smart pointers should have minimal overhead and provides a nice benefit.
Someone else pointed out that if you want Ada, just use Ada.

How are exceptions implemented under the hood? [closed]

Just about everyone uses them, but many, including me, simply take it for granted that they just work.
I am looking for high-quality material. Languages I use are: Java, C, C#, Python, C++, so these are of most interest to me.
Now, C++ is probably a good place to start since you can throw anything in that language.
Also, C is close to assembly. How would one emulate exceptions using pure C constructs and no assembly?
Finally, I heard a rumor that Google employees do not use exceptions for some projects due to speed considerations. Is this just a rumor? How can anything substantial be accomplished without them?
Thank you.
Exceptions are just a specific example of a more general case of advanced non-local flow control constructs. Other examples are:
notifications (a generalization of exceptions, originally from some old Lisp object system, now implemented in e.g. Common Lisp and Ioke),
continuations (a more structured form of GOTO, popular in high-level, higher-order languages),
coroutines (a generalization of subroutines, popular especially in Lua),
generators à la Python (essentially a restricted form of coroutines),
fibers (cooperative light-weight threads) and of course the already mentioned
GOTO.
(I'm sure there's many others I missed.)
An interesting property of these constructs is that they are all roughly equivalent in expressive power: if you have one, you can pretty easily build all the others.
So, how you best implement exceptions depends on what other constructs you have available:
Every CPU has GOTO, therefore you can always fall back to that, if you must.
C has setjmp/longjmp which are basically MacGyver continuations (built out of duct-tape and toothpicks, not quite the real thing, but will at least get you out of the immediate trouble if you don't have something better available).
The JVM and CLI have exceptions of their own, which means that if the exception semantics of your language match Java's/C#'s, you are home free (but if not, then you are screwed).
The Parrot VM has both exceptions and continuations.
Windows has its own framework for exception handling, which language implementors can use to build their own exceptions on top.
A very interesting use case, both of the usage of exceptions and the implementation of exceptions is Microsoft Live Lab's Volta Project. (Now defunct.) The goal of Volta was to provide architectural refactoring for Web applications at the push of a button. So, you could turn your one-tier web application into a two- or three-tier application just by putting some [Browser] or [DB] attributes on your .NET code and the code would then automagically run on the client or in the DB. In order to do that, the .NET code had to be translated to JavaScript source code, obviously.
Now, you could just write an entire VM in JavaScript and run the bytecode unmodified. (Basically, port the CLR from C++ to JavaScript.) There are actually projects that do this (e.g. the HotRuby VM), but this is both inefficient and not very interoperable with other JavaScript code.
So, instead, they wrote a compiler which compiles CIL bytecode to JavaScript sourcecode. However, JavaScript lacks certain features that .NET has (generators, threads, also the two exception models aren't 100% compatible), and more importantly it lacks certain features that compiler writers love (either GOTO or continuations) and that could be used to implement the above-mentioned missing features.
However, JavaScript does have exceptions. So, they used JavaScript Exceptions to implement Volta Continuations and then they used Volta Continuations to implement .NET Exceptions, .NET Generators and even .NET Managed Threads(!!!)
So, to answer your original question:
How are exceptions implemented under the hood?
With Exceptions, ironically! At least in this very specific case, anyway.
Another great example is some of the exception proposals on the Go mailing list, which implement exceptions using Goroutines (something like a mixture of concurrent coroutines and CSP processes). Yet another example is Haskell, which uses monads, lazy evaluation, tail call optimization and higher-order functions to implement exceptions. Some modern CPUs also support basic building blocks for exceptions (for example the Vega-3 CPUs that were specifically designed for the Azul Systems Java Compute Accelerators).
Here is a common way C++ exceptions are implemented:
http://www.codesourcery.com/public/cxx-abi/abi-eh.html
It is for the Itanium architecture, but the implementation described here is used in other architectures as well. Note that it is a long document, since C++ exceptions are complicated.
Here is a good description on how LLVM implements exceptions:
http://llvm.org/docs/ExceptionHandling.html
Since LLVM is meant to be a common intermediate representation for many runtimes, the mechanisms described can be applied to many languages.
In his book C Interfaces and Implementations: Techniques for Creating Reusable Software, D. R. Hanson provides a nice implementation of exceptions in pure C using a set of macros and setjmp/longjmp. He provides TRY/RAISE/EXCEPT/FINALLY macros that can emulate pretty much everything C++ exceptions do and more.
The code can be perused here (look at except.h/except.c).
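For a flavour of the mechanism, here is a stripped-down sketch of the same setjmp/longjmp idea (not Hanson's actual macros: a single, non-nested TRY level and no FINALLY):
#include <csetjmp>
#include <cstdio>

static std::jmp_buf g_handler;

#define TRY      if (setjmp(g_handler) == 0)
#define CATCH    else
#define RAISE(c) std::longjmp(g_handler, (c))

static void parse(int value) {
    if (value < 0) RAISE(1);             // "throw": jump back to the active TRY
    std::printf("parsed %d\n", value);
}

int main() {
    TRY {
        parse(42);
        parse(-1);                       // raises
        std::printf("never reached\n");
    } CATCH {
        std::printf("caught an error\n");
    }
}
Hanson's real macros additionally keep a stack of jmp_bufs so TRY blocks can nest, and they re-raise unhandled exceptions to the enclosing level.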
P.S. Re your question about Google: their employees are actually allowed to use exceptions in new code, and the official reason for the ban in old code is that it was already written that way and it doesn't make sense to mix styles.
Personally, I also think that C++ without exceptions isn't the best idea.
C/C++ compilers use the underlying OS facilities for exception handling. Frameworks like .Net or Java also rely, in the VM, on the OS facilities. In Windows for instance, the real heavy lifting is done by SEH, the Structured Exception Handling infrastructure. You should absolutely read the old reference article: A Crash Course on the Depths of Win32™ Structured Exception Handling.
As for the cost of using exceptions: they are expensive, but compared to what? Compared to returning error codes? Once you factor in the cost of correctness and the quality of code, exceptions will always win for commercial applications. Short of a few very critical OS-level functions, exceptions are always better overall.
And last but not least there is the anti-pattern of using exceptions for flow control. Exceptions should be exceptional, and code that abuses exceptions for flow control will pay the price in performance.
The best paper ever written on the implementation of exceptions (under the hood) is Exception Handling in CLU by Barbara Liskov and Alan Snyder. I have referred to it every time I've started a new compiler.
For a somewhat higher-level view of an implementation in C using setjmp and longjmp, I recommend Dave Hanson's C Interfaces and Implementations (like Eli Bendersky).
setjmp() and longjmp() usually.
Exception catching does have a non-trivial cost, but for most purposes it's not a big deal.
The key thing an exception implementation needs to handle is how to return to the exception handler once an exception has been thrown. Since you may have made an arbitrary number of nested function calls since the try statement in C++, it must unwind the call stack searching for the handler. However implemented, this must incur the code size cost of maintaining sufficient information in order to perform this operation (and generally means a table of data for calls that can take exceptions). It also means that the dynamic code execution path will be longer than simply returning from functions calls (which is a fairly inexpensive operation on most platforms). There may be other costs as well depending on the implementation.
The relative cost will vary depending on the language used. The higher-level language used, the less likely the code size cost will matter, and the information may be retained regardless of whether exceptions are used.
An application where the use of exceptions (and C++ in general) is often avoided for good reasons is embedded firmware. In typical small bare-metal or RTOS platforms, you might have 1MB of code space, or 64K, or even smaller. Some platforms are so small that even C is not practical to use. In this kind of environment, the size impact is relevant because of the cost mentioned above. It also impacts the standard library itself. Embedded toolchain vendors will often produce a library without exception capability, which has a huge impact on code size. Highly optimizing compilers may also analyze the call graph and, when exceptions are not used, optimize away the call frame information that would otherwise be needed for the unwind operation, for a considerable space reduction. Exceptions also make it more difficult to analyze hard real-time requirements.
In more typical environments, the code size cost is almost certainly irrelevant and the performance factor is likely key. Whether you use them will depend on your performance requirements and how you want to use them. Using exceptions in non-exceptional cases can make an elegant design, but at a performance cost that may be unacceptable for high performance systems. Implementations and relative cost will vary by platform and compiler, so the best way to truly understand if exceptions are a problem is to analyze your own code's performance.
C++ code at Google (save for some Windows-specific cases) doesn't use exceptions: cf. the guidelines, short form: "We do not use C++ exceptions". Quoting from the discussion (hit the arrow to expand on the URL):
Our advice against using exceptions is not predicated on philosophical or moral grounds, but practical ones. Because we'd like to use our open-source projects at Google and it's difficult to do so if those projects use exceptions, we need to advise against exceptions in Google open-source projects as well. Things would probably be different if we had to do it all over again from scratch.
This rule does not apply to Google code in other languages, such as Java and Python.
Regarding performance - sparse use of exceptions will probably have negligible effects, but do not abuse them.
I have personally seen Java code which performed two orders of magnitude worse than it could have (took about x100 the time) because exceptions were used in an important loop instead of more standard if/returns.
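As a C++ sketch of the same anti-pattern (hypothetical code, not the Java I saw): signalling an ordinary "not found" with a throw inside a hot loop versus returning a value. The second form avoids paying the throw/unwind cost on every miss.
#include <optional>
#include <stdexcept>
#include <vector>

// Anti-pattern: the "miss" case is ordinary, yet it throws every time.
int find_throwing(const std::vector<int>& v, int key) {
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] == key) return static_cast<int>(i);
    throw std::runtime_error("not found");
}

// Cheaper: the miss is reported through the return value.
std::optional<int> find_returning(const std::vector<int>& v, int key) {
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] == key) return static_cast<int>(i);
    return std::nullopt;
}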
Some runtimes like the Objective-C runtime have zero-cost 64-bit exceptions. What that means is that it doesn't cost anything to enter a try block. However, this is quite costly when the exception is thrown. This follows the paradigm of "optimize for the average case" - exceptions are meant to be exceptional, so it is better to make the case when there are no exceptions really fast, even if it comes at the cost of significantly slower exceptions.

Is Communicating Sequential Processes ever used in large multi threaded C++ programs?

I'm currently writing a large multi threaded C++ program (> 50K LOC).
As such I've been motivated to read up a lot on various techniques for handling multi-threaded code. One theory I've found to be quite cool is:
http://en.wikipedia.org/wiki/Communicating_sequential_processes
And it's invented by a slightly famous guy, who's made other non-trivial contributions to concurrent programming.
However, is CSP used in practice? Can anyone point to any large application written in a CSP style?
Thanks!
CSP, as a process calculus, is fundamentally a theoretical thing that enables us to formalize and study some aspects of a parallel program.
If you instead want a theory that enables you to build distributed programs, then you should take a look at parallel structured programming.
Parallel structured programming is the basis of current HPC (high-performance computing) research and provides a methodology for approaching and designing parallel programs (essentially, flowcharts of communicating computing nodes) and runtime systems to implement them.
A central idea in parallel structured programming is that of the algorithmic skeleton, developed initially by Murray Cole. A skeleton is something like a parallel design pattern with an associated cost model and (usually) a run-time system that supports it. A skeleton models, studies and supports a class of parallel algorithms that have a certain "shape".
As a notable example, mapreduce (made popular by Google) is just a kind of skeleton that addresses data parallelism, where a computation can be described by a map phase (apply a function f to all elements that compose the input data) and a reduce phase (take all the transformed items and "combine" them using an associative operator +).
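As a small sketch of that "shape" in plain C++17 (nothing skeleton-framework specific; with GCC the parallel policy needs TBB at link time): f is the map phase and the associative + is the reduce phase.
#include <execution>
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> data{1, 2, 3, 4, 5};
    auto f = [](int x) { return x * x; };    // map: square each element

    long sum_of_squares = std::transform_reduce(
        std::execution::par, data.begin(), data.end(),
        0L, std::plus<>{}, f);               // reduce: associative +

    std::cout << sum_of_squares << "\n";     // 55
}
A skeleton framework generalizes exactly this pattern across processes or nodes instead of a single shared-memory loop.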
I found the idea of parallel structured programming both theoretically sound and practically useful, so I suggest giving it a look.
A word about multi-threading: since skeletons address massive parallelism, they are usually implemented on distributed memory rather than shared memory. Intel has developed a tool, TBB, which addresses multi-threading and (partially) follows the parallel structured programming framework. It is a C++ library, so you can probably just start using it in your projects.
Yes and no. The basic idea of CSP is used quite a bit. For example, thread-safe queues in one form or another are frequently used as the primary (often only) communication mechanism to build a pipeline out of individual processes (threads).
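As a minimal sketch of that pattern (the Channel name and API are invented, not from any particular library): two threads forming a pipeline and talking only through a thread-safe queue that plays the role of a CSP channel.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>

template <typename T>
class Channel {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool closed_ = false;
public:
    void send(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
        cv_.notify_all();
    }
    std::optional<T> receive() {                   // blocks; empty optional == channel closed
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return std::nullopt;
        T v = std::move(q_.front()); q_.pop();
        return v;
    }
};

int main() {
    Channel<int> ch;
    std::thread producer([&] { for (int i = 1; i <= 3; ++i) ch.send(i * i); ch.close(); });
    std::thread consumer([&] { while (auto v = ch.receive()) std::cout << *v << "\n"; });
    producer.join();
    consumer.join();
}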
Hoare being Hoare, however, there's quite a bit more to his original theory than that. He invented a notation for talking about the processes, defined a specific set of signals that can be sent between the processes, and so on. The notation has since been refined in various ways, quite a bit of work put into proving various aspects, and so on.
Application of that relatively formal model of CSP (as opposed to just the general idea) is much less common. It's been used in a few systems where high reliability was considered extremely important, but few programmers appear interested in learning (yet another) formal design notation.
When I've designed systems like this, I've generally used an approach that's less rigorous, but (at least to me) rather easier to understand: a fairly simple diagram, with boxes representing the processes, and arrows representing the lines of communication. I doubt I could really offer much in the way of a proof about most of the designs (and I'll admit I haven't designed a really huge system this way), but it's worked reasonably well nonetheless.
Take a look at the website for a company called Verum. Their ASD technology is based on CSP and is used by companies like Philips Healthcare, Ericsson and NXP semiconductors to build software for all kinds of high-tech equipment and applications.
So to answer your question: Yes, CSP is used on large software projects in real-life.
Full disclosure: I do freelance work for Verum
Answering a very old question, yet it seems important enough to add one more point:
There is Go where CSPs are a fundamental part of the language. In the FAQ to Go, the authors write:
Concurrency and multi-threaded programming have a reputation for difficulty. We believe this is due partly to complex designs such as pthreads and partly to overemphasis on low-level details such as mutexes, condition variables, and memory barriers. Higher-level interfaces enable much simpler code, even if there are still mutexes and such under the covers.
One of the most successful models for providing high-level linguistic support for concurrency comes from Hoare's Communicating Sequential Processes, or CSP. Occam and Erlang are two well known languages that stem from CSP. Go's concurrency primitives derive from a different part of the family tree whose main contribution is the powerful notion of channels as first class objects. Experience with several earlier languages has shown that the CSP model fits well into a procedural language framework.
Projects implemented in Go are:
Docker
Google's download server
Many more
This style is ubiquitous on Unix, where many tools are designed to process from standard in to standard out. I don't have any first-hand knowledge of large systems that are built that way, but I've seen many small one-off systems that are.
For instance, this simple command line uses (at least) 3 processes:
cat list-1 list-2 list-3 | sort | uniq > final.list
I also wrote a protocol processor that strips away and interprets successive layers of protocol in a message using a style very similar to this; that system is only moderately sized. It was an event-driven system using something akin to cooperative threading, but I could've used multithreading fairly easily with a couple of added tweaks.
The program is proprietary (unfortunately) so I can't show off the source code.
In my opinion, this style is useful for some things, but usually best mixed with some other techniques. Often there is a core part of your program that represents a processing bottleneck, and applying various concurrency increasing techniques there is likely to yield the biggest gains.
Microsoft had a technology called ActiveMovie (if I remember correctly) that did sequential processing on audio and video streams. Data got passed from one filter to another to go from input to output format (and source/sink). Maybe that's a practical example??
The Wikipedia article looks to me like a lot of funny symbols used to represent somewhat pedestrian concepts. For very large or extensible programs, the formalism can be very important to check how the (sub)processes are allowed to interact.
For a program in the 50,000-line class, you're probably better off architecting it as you see fit.
In general, following ideas such as these is a good idea in terms of performance. Persistent threads that process data in stages will tend not to contend, and exploit data locality well. Also, it is easy to throttle the threads to avoid data piling up as a fast stage feeds a slow stage: just block the fast one if its output buffer grows too big.
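A sketch of that throttling idea (the name and the capacity are invented for illustration): a bounded buffer whose push blocks when full, so a fast producer stage automatically waits for a slow consumer stage.
#include <condition_variable>
#include <deque>
#include <mutex>

template <typename T>
class BoundedBuffer {
    std::deque<T> buf_;
    std::size_t cap_;
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
public:
    explicit BoundedBuffer(std::size_t cap) : cap_(cap) {}

    void push(T v) {                               // fast stage blocks here when the buffer is full
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return buf_.size() < cap_; });
        buf_.push_back(std::move(v));
        not_empty_.notify_one();
    }

    T pop() {                                      // slow stage drains at its own pace
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !buf_.empty(); });
        T v = std::move(buf_.front());
        buf_.pop_front();
        not_full_.notify_one();
        return v;
    }
};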
A little bit off-topic, but for my thesis I used a tool framework called TERRA/LUNA, which aims at software development for Embedded Control Systems but is used heavily for all sorts of software development at my institute (so only academic use here).
TERRA is a graphical CSP and software architecture editor and LUNA is both the name for a C++ library for CSP based constructs and the plugin you'll find in TERRA to generate C++ code from your CSP models.
It becomes very handy in combination with FDR3 (a CSP refinement checker) to detect any sort of lock (dead/live/etc.) or even for profiling.

Has anyone tried transactional memory for C++?

I was checking out Intel's "whatif" site and their Transactional Memory compiler (each thread has to make atomic commits or rollback the system's memory, like a Database would).
It seems like a promising way to replace locks and mutexes but I can't find many testimonials. Does anyone here have any input?
I have not used Intel's compiler; however, Herb Sutter had some interesting comments on it...
From Sutter Speaks: The Future of Concurrency
Do you see a lot of interest in and usage of transactional memory, or is the concept too difficult for most developers to grasp?
It's not yet possible to answer who's using it because it hasn't been brought to market yet. Intel has a software transactional memory compiler prototype. But if the question is "Is it too hard for developers to use?" the answer is that I certainly hope not. The whole point is it's way easier than locks. It is the only major thing on the research horizon that holds out hope of greatly reducing our use of locks. It will never replace locks completely, but it's our only big hope to replacing them partially.
There are some limitations. In particular, some I/O is inherently not transactional—you can't take an atomic block that prompts the user for his name and read the name from the console, and just automatically abort and retry the block if it conflicts with another transaction; the user can tell the difference if you prompt him twice. Transactional memory is great for stuff that is only touching memory, though.
Every major hardware and software vendor I know of has multiple transactional memory tools in R&D. There are conferences and academic papers on theoretical answers to basic questions. We're not at the Model T stage yet where we can ship it out. You'll probably see early, limited prototypes where you can't do unbounded transactional memory—where you can only read and write, say, 100 memory locations. That's still very useful for enabling more lock-free algorithms, though.
Dr. Dobb's had an article on the concept last year: Transactional Programming by Calum Grant -- http://www.ddj.com/cpp/202802978
It includes some examples, comparisons, and conclusions using his example library.
I've built a combinatorial STM library on top of some functional programming ideas. It doesn't require any compiler support (though it uses C++17) and doesn't introduce new syntax. In general, it adopts the interface of the STM library from Haskell.
So, my library has several nice properties:
Monadically combinatorial. Every transaction is a computation inside the custom monad named STML. You can combine monadic transactions into bigger monadic transactions.
Transactions are separated from the data model. You construct your concurrent data model with transactional variables (TVars) and run transactions over it.
There is a retry combinator. It allows you to rerun the transaction. Very useful for building short and understandable transactions.
There are different monadic combinators to express computations concisely.
There is a Context. Every computation should be run in some context, not in the global runtime. So you can have many different contexts if you need several independent STM clusters.
The implementation is quite simple conceptually. At least, the reference implementation in Haskell is, but I had to reinvent several approaches for the C++ implementation due to the lack of good support for functional programming.
The library shows very nice stability and robustness, even if we consider it experimental. Moreover, my approach opens up a lot of possibilities to improve the library in terms of performance, features, comprehensiveness, etc.
To demonstrate its work, I've solved the Dining Philosophers task. You can find the code in the links below. Sample transaction:
STML<bool> takeFork(const TVar<Fork>& tFork)
{
    STML<bool> alreadyTaken = withTVar(tFork, isForkTaken);    // read: is the fork already taken?
    STML<Unit> takenByUs    = modifyTVar(tFork, setForkTaken); // write: mark the fork as taken
    STML<bool> success      = sequence(takenByUs, pure(true)); // take it, then yield true
    STML<bool> fail         = pure(false);                     // otherwise yield false
    STML<bool> result       = ifThenElse(alreadyTaken, fail, success);
    return result;
}
UPDATE
I've written a tutorial; you can find it here.
Dining Philosophers task
My C++ STM library
Sun Microsystems have announced that they're releasing a new processor next year, codenamed Rock, that has hardware support for transactional memory. It will have some limitations, but it's a good first step that should make it easier for programmers to replace locks/mutexes with transactions and expect good performance out of it.
For an interesting talk on the subject, given by Mark Moir, one of the researchers at Sun working on Transactional Memory and Rock, check out this link.
For more information and announcements from Sun about Rock and Transactional Memory in general, this link.
The obligatory wikipedia entry :)
Finally, this link, at the University of Wisconsin-Madison, contains a bibliography of most of the research that has been and is being done about Transactional Memory, whether it's hardware related or software related.
In some cases I can see this as being useful and even necessary.
However, even if the processor has special instructions that make this process easier, there is still a large overhead compared to a mutex or semaphore. Depending on how it's implemented, it may also impact realtime performance (you have to either stop interrupts or prevent them from writing into your shared areas).
My expectation is that if this was implemented, it would only be needed for portions of a given memory space, though, and so the impact could be limited.
-Adam