Discrete event simulators for C++

I am currently looking for a discrete event simulator written for C++. I did not find much on the web written specifically in OO style; there are some, but they are outdated. Others, such as OPNET, OMNeT++ and ns-3, are way too complicated for what I need to do. Besides, I need to simulate agent-based algorithms on systems of thousands of nodes.
Does anybody know anything suitable for my needs?

Others have good direct answers, but I'm going to suggest an alternative. If I understand you right, you want a system in C++, or something like it, where you can post events that fire in the future, and code runs when those events fire.
I had a project to do like this, and I started out trying to write such an event system in C++ and then quickly realized I had a better solution.
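For context, the core of such an event system really is small. Here is a minimal sketch (hypothetical names, not the code from that project) of a priority-queue-based scheduler: you post callbacks with a delay, and run() pops them in time order.

    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <queue>

    // One pending event: a firing time plus the code to run at that time.
    struct Event {
        uint64_t time;
        std::function<void()> action;
        // Invert the comparison so the earliest event sits on top of the heap.
        bool operator<(const Event& other) const { return time > other.time; }
    };

    class Simulator {
    public:
        void schedule(uint64_t delay, std::function<void()> action) {
            events_.push(Event{now_ + delay, std::move(action)});
        }
        void run() {
            while (!events_.empty()) {
                Event e = events_.top();
                events_.pop();
                now_ = e.time;  // advance simulated time; the action may schedule more events
                e.action();
            }
        }
        uint64_t now() const { return now_; }
    private:
        uint64_t now_ = 0;
        std::priority_queue<Event> events_;
    };

    int main() {
        Simulator sim;
        sim.schedule(10, [&] {
            std::cout << "t=" << sim.now() << ": ping\n";
            sim.schedule(5, [&] { std::cout << "t=" << sim.now() << ": pong\n"; });
        });
        sim.run();  // prints "t=10: ping" then "t=15: pong"
    }

Getting that far is easy; the hard part is everything you then bolt on top (processes that wait on each other, signals, time-outs), which is where the suggestion below comes in.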
Have you considered writing your program in behavioral Verilog? That may seem strange to write software in a hardware description language, but a Verilog simulator is an event-based system underneath, and behavioral Verilog is a very convenient way to express events, timing, triggers, etc. There is a free Verilog simulator (which is what I used) called Icarus Verilog. If you're not using Ubuntu or some Linux distro with Icarus already in a package, building from source is straightforward.

I would recommend having a second look at OMNeT++. At first sight it may look quite complex, but if you look into it in more detail you will find that most of the complexity is in the network add-on (the INET Framework). Unless you are going to do a detailed network simulation, you do not need INET.
Using the OMNeT++ core is not especially difficult, and it may be simpler than other similar tools.
You may want to have a look at an intro.
One of the things that makes OMNeT++ attractive to me is its scalability: it is possible to run large simulations on a desktop. Besides, it is possible to scale the same simulation to a cluster without rewriting the code.

You should consider SystemC, although I'd also recommend taking a second look at OMNeT++.

We use SIMLIB at my school. It is a very fast, easy-to-understand, object-oriented simulator for both discrete and continuous simulation. It might look outdated, but it is still maintained.

There is CSIM from Mesquite Software which supports developing models in C, C++ and Java. However, it is paid-commercial, AFAIK.

Take a look at the GBL library. It's written in modern C++ and even supports C++0x features like move semantics and lambda functions. It offers several modeling mechanisms: synchronous and asynchronous event handlers, preemptive threads, and fibers. You can create purely behavioral, cycle-accurate, and real-time models, or any mixture of those.

Related

Which C++ libraries should I use for a large parallel computing number-crunching project exploiting third-party applications

Introduction
I would like to ask for a good deal of advice on a new programming project I am about to start on my own. I am going to be very precise about what I would like to accomplish and what my basic requirements are, so this is going to be a long question. Please bear with me.
I am going to split the question into five sections:
Real-world problem
Simulation problem
Requirements and preferences
Additional information
List of advice requests
1. Real-world problem
Skyscrapers and large bridges suffer from dynamic wind loading: when designed incorrectly, they can collapse due to wind-induced vibrations (this actually happened in 1940: http://www.youtube.com/watch?v=3mclp9QmCGs). To design such structures correctly, efficient number-crunching software is required for analysis and simulation.
2. Simulation problem
There exists a multitude of software capable of either simulating fluid flows or structural mechanics. Many packages have been developed for over 30 years and are proven, mature technologies. Writing a multi-physical program capable of simulating both fluid flows and structural mechanics simultaneously from scratch is therefore unwise: you would need years of development before reaching maturity, and it is very hard to enter a world which has depended on specific software for over 30 years. But more importantly: why recreate when you can reuse? Instead of pursuing a monolithic approach, I prefer a partitioned approach where I can reuse existing simulation software.
In the partitioned approach I will use software X to simulate flows and software Y to simulate structures. Then I will write my own coupling algorithm which establishes communication between X and Y and uses them to simulate the multi-physical problem (e.g. wind-induced vibrations of skyscrapers or bridges). The reason I use X and Y and not actual software names is that X and Y are supposed to be black boxes. In no way is my coupling algorithm to depend on the implementation of X and Y; the algorithm will only depend on their output. This way an end user can select whichever X or Y is available to them, or whichever X or Y is capable of doing what the end user wants to achieve.
Because I want to use a black-box partitioned approach, software X knows nothing of Y and vice versa. But how do I simulate the deformation of a bridge without knowing anything of the surrounding air flow, and how do I know in which way the surrounding air flow is perturbed by the structure without knowing anything about its deformation? The answer is simple: start with a guess and use an iterative approach to converge to the correct solution. This approach is, however, very computationally expensive. To reduce the computational cost, the coupling algorithm can be written in a clever way using very efficient technologies, not to be discussed here. All I would like to say is that some heavy linear algebra number-crunching is required.
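To make the iterative idea concrete, here is a heavily simplified sketch. The scalar lambdas X and Y are stand-ins purely for illustration; in the real program each call would be a full black-box simulation step and the state a large vector.

    #include <cmath>
    #include <cstdio>

    int main() {
        // Toy black boxes: X maps deformation to aerodynamic load,
        // Y maps load back to deformation. Real solvers replace these.
        auto X = [](double deformation) { return 0.5 * std::cos(deformation); };
        auto Y = [](double load) { return 0.8 * load; };

        double deformation = 0.0;  // initial guess
        const double tol = 1e-10;
        for (int iter = 0; iter < 100; ++iter) {
            double next = Y(X(deformation));  // one coupling pass through both codes
            double residual = std::fabs(next - deformation);
            deformation = next;
            std::printf("iter %d: deformation=%.12f, residual=%.3e\n", iter, deformation, residual);
            if (residual < tol) break;  // both solvers agree: converged
        }
    }

Each pass through both solvers is expensive, which is exactly why the clever acceleration (and the heavy linear algebra behind it) matters.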
3. Requirements and preferences
What I need to do is:
establish communication between third-party open-source or proprietary software
perform some heavy number-crunching (linear algebra)
visualise results (2D / 3D plotting and animating)
deliver an interactive analysis and development environment
create an intuitive graphical user interface
What I want my software to be:
open-source
cross-platform
extendable through scripts and/or shared libraries
What I am going to use:
C++ for heavy number-crunching
CPython for programming logic
NumPy / SciPy for some number-crunching in CPython
Matplotlib for results visualisation in CPython
4. Additional information
Facts:
one-man project at start, grow to company if successful
primary OS is a KDE-based Linux distribution
Business model:
Free software and basic documentation.
Paid services and elaborate documentation.
5. List of advice requests
I want to do all number-crunching in C++ by writing many functions which individually perform just a tiny task. The program logic is to be contained in a CPython package which executes the entire simulation while relying on the C++ functions to perform the number-crunching. The C++/CPython algorithm is to be extended with scripts written in CPython (using NumPy, SciPy, SymPy and Matplotlib) to generate and visualise results from raw numerical data. I want to be able to do parallel computing, and I need to communicate with several third-party open-source AND proprietary software packages.
To accomplish all that I am going to need a whole bunch of existing libraries, packages, technologies, etc. For every relevant issue I know what I can use; however, I do not know what I should use. The best solution is, as always, to try everything out and see what works best. However, if any experienced user can weed out some of the more unlikely candidates, I would gladly receive his or her advice, suggestions, or pro/con lists on:
Gluing C++ and CPython (e.g. ctypes, SIP, SWIG etc.; a minimal sketch of the C++ side of such a binding follows this list)
C++ linear algebra number-crunching library (e.g. Armadillo, Eigen, PETSc etc.)
Graphical interface development library (e.g. Qt, GTK, wxWidgets etc.)
Software communication and parallel computing (e.g. MPICH, OpenMPI, OpenMP etc.)
CPython 2.7.x or CPython 3.x
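On the gluing point: whichever binding tool wins, the cheapest interface to maintain is a plain-C one. A minimal sketch of the C++ side (hypothetical names), callable from CPython via ctypes without any wrapper generator:

    // number_crunch.cpp
    // Build as a shared library, e.g.: g++ -O2 -shared -fPIC -o libnumbercrunch.so number_crunch.cpp
    extern "C" {

    // Plain-C signature, so ctypes can call it directly.
    double dot_product(const double* a, const double* b, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i) sum += a[i] * b[i];
        return sum;
    }

    }  // extern "C"

On the CPython side, ctypes.CDLL('./libnumbercrunch.so').dot_product does the rest after setting argtypes/restype; NumPy arrays expose the required pointers through their ctypes attribute.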
NOTE: I have listed some options above, but these are only examples, not a restriction. I am open to everything as long as it is written in C, C++, Fortran or Python. Also, I do not expect an answer in all five categories listed above from one individual. Let the collective knowledge of the community take care of that.
I thank all contributors and wish you all the best of luck in your own endeavors.
You mention parallelism but not how you intend to make this project parallel. This is a much more complex issue than simply choosing a couple of libraries. There are several major considerations to address before moving forward.
You mention the intended platform briefly, but you also have to consider whether the simulation will run on a single computer/node or on multiple. Considering that you are doing an iterative simulation of a building, you are probably going to require far more compute power than any one computer can provide. This means that, unless you want to go with a hybrid multiple-process, multiple-thread approach, you are limited to a multiple-process model of parallelism. OpenCL and MPI are then both options for your implementation. (Note: MPICH and OpenMPI are just implementations of the MPI standard, and your code should be agnostic of which is used.) Message passing with MPI is a good general model of parallelism; however, it can be quite difficult for those not used to working with parallel code. My personal experience is with MPI and some hybrid programming, so I cannot say much else regarding your choice of parallel model.
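For a feel of the message-passing model, a minimal sketch (deliberately implementation-agnostic; it compiles against MPICH or OpenMPI alike):

    #include <mpi.h>
    #include <cstdio>

    // Point-to-point message passing: rank 0 sends one double to rank 1.
    // Compile with mpicxx, run with e.g.: mpirun -np 2 ./a.out
    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            double payload = 3.14;
            MPI_Send(&payload, 1, MPI_DOUBLE, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            double payload = 0.0;
            MPI_Recv(&payload, 1, MPI_DOUBLE, /*source=*/0, /*tag=*/0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("rank 1 received %f\n", payload);
        }
        MPI_Finalize();
    }

The difficulty mentioned above is not in these calls; it is in decomposing the problem so that every rank has useful work and the communication pattern does not serialize it.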
A problem that follows from the issue of the parallel model is that it directly impacts the simulation software. I am not entirely certain how separate you are planning to keep your algorithm from the simulations. If you plan to have your code fork separate processes to run the simulations, you will have issues with cross-platform support, as you may not have the luxury of being able to run arbitrary simulations in this manner. If you instead intend to run the simulations within your software, the parallel model has to be consistent throughout. Although this puts limitations on the black-box strategy, it may make the entire thing that much more feasible.
A good deal has already been said about applicable libraries. I don't have any more to say about specific libraries that hasn't already been said. Just keep in mind that many of the same issues have to be addressed with these as when running the simulations.
TL;DR: Parallelism should not be overlooked. You need to know which parallel model you will be using before making decisions on libraries.
Graphical interface development library (e.g. Qt, GTK, wxWidgets etc.)
If your "primary OS is a KDE-based Linux distribution", then QT wins this one hands down.
Logic behind this:-
KDE is writen in QT. A QT app running in KDE is an eagle in the air! It's in its element. With KDE Being target number 1, your QT GUI will most likely work out of the box without users having to download additional gui libs. Your GUI will also look super native.
QT is the most portable of the three. (you mentioned "primary OS", hinting that other platforms might follow). Therefore with qt you can port to Windows, OSX, GNOME, Embeded Linux, Android, Symbian, HAIKU, Solaris ..etc
QT has arguably the best RAD tools of the top three cross platform GUI libs (IMHO). Think QTCreator vs wxSmith vs Anjuta/Glade.
wxWidgets on linux is basically a wapper for GTK+ (v1 to v3) + additional helpers. Tho I preffer it over GTK. It also wraps around X11 and motif but trust me chief, you will not like those ports.
wxWidgets portability is not as seamless as one would think. Each port is a totaly different implementation, each wrapping totally different backends! I once ported a small app that uses wxDataViewCtrl with a custom tree model. SIGSEGVs became the order of the day. So I eventually decided to go with the generic wxDataViewCtrl (that looks funny in GTK+3). I still like wxWidgets tho.
NB: Consider also using the latest web technologies for the C&V part of the MVC (model view controller).HTML5+CSS3+JS can be run by a web-view widget on a desktop app. All the above 3 GUI Libs sport this control (for wx, it's wx2.9.3 and above).
Web technologies:-
Have (arguably) the fastest time-to-market of any GUI lib.
Have (arguably) the most available and affordable developers of any GUI technology today.
Produce the most stunning UIs of any GUI lib.
Produce the least "RIGID" UIs of any GUI lib....you can rotate a gui element e.g. a html table around any axis with fancy animation just by mouse over, without any programming overhead!..no js..no c++..nada!
CPython 2.7.x or CPython 3.x
CPython might not be well suited to your project's requirements, I think, chiefly because of the mutex monster that is the Global Interpreter Lock (GIL) bottleneck.
Maybe PyPy would be a better Python implementation for your project?
By the way, have you also considered JavaScript on V8 vs. Python (PyPy, CPython, et al.)?
JavaScript run by V8 can interact with native code (C++), sort of like ctypes with Python.
I also came across this interesting blog post (JS on V8 vs Py).
Possibly GMP. You can find more details here: http://gmplib.org/
Sounds like premature optimization. You need to write code, a lot of code, put lots of prints everywhere to get statistics, then try at least two different approaches to get a benchmark, and then make a reasonable decision. There is no replacement for doing the work; anything else is just hand-waving.

Implementing an event and periodically driven "script language" in C++?

Background:
I want to create an automation framework in C++ in which, on the one hand, "sensors" and "actors" and, on the other, "logic engines" can be connected to a "core".
The "sensors" and "actors" might be connected to the machine running the "core", but some might also be accessible via a field bus or a normal computer network. Some might work continuously or periodically (e.g. a new value every 100 milliseconds), others event-driven (e.g. a message with the new state arrives only when a switch is [de]activated).
The "logic engine" would be pluggable into the core and could, for example, consist of embedded well-known scripting languages (Perl, Python, Lua, ...). Different little scripts from the users would run there and could subscribe to "sensors" and write to "actors".
The "core" would route the sensor/actor information to the subscribed scripts and call them, some immediately after the event occurs, others periodically as defined in a scheduler.
Additional requirements:
The systems ("server") running this automation application might also be quite
small (500MHz x86 and 256 MB RAM) or if possible even tiny (OpenWRT
based router) as power consumption is an issue
=> efficiency is important
=> multicore support not for the moment, but I'm sure it'll become important soon - so the design has to support it
Some sort of fail-safe mode has to be possible, e.g. two systems monitoring each other
application / framework will be GPL => all used libraries have to be compatible
the server would run Linux, but cross platform would be nice
The big question:
What is the best architecture for such a kind of application / framework?
My reasoning:
So as not to reinvent the wheel, I was considering using MPI to do all the event handling.
This would allow me to focus on the relevant stuff rather than the message handling, especially when two or more "servers" work together (acting as watchdogs for each other, as well as each having a few sensors and actors connected). Each sensor and actor handler, as well as the logic engines themselves, would only be required to implement a predefined MPI-based interface and would thus be crash-safe. The core could restart each one when it is no longer responsive.
The additional questions:
Would that be even possible with MPI? (It'd be used a bit out of context...)
Would the overhead of MPI be too big? Should I just write it myself using sockets and threads?
Are there other libraries possible that are better suited in this case?
You should be able to construct your system using MPI, but I think MPI is too focused on high-performance computing. Moreover, since it was designed for C, it does not really fit the object-oriented way of programming. IMO there are other approaches better suited to your needs:
Boost.Asio might be a good fit for designing your system. It includes both network functionality and support for event-driven programming (which could be a good way to design your system). You can have a look at the Think-Async webpage for some examples of using Asio for event-driven programming.
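To give a taste of the style, the classic Asio starter sketch (a timer, but sockets follow exactly the same async-operation-plus-handler pattern; the lambda handler assumes C++11, use boost::bind otherwise):

    #include <boost/asio.hpp>
    #include <iostream>

    // Single-threaded event loop: handlers run when their event fires.
    int main() {
        boost::asio::io_service io;
        boost::asio::deadline_timer timer(io, boost::posix_time::seconds(1));
        timer.async_wait([](const boost::system::error_code& ec) {
            if (!ec) std::cout << "timer fired\n";
        });
        std::cout << "handler registered, entering event loop\n";
        io.run();  // blocks until no pending work remains
    }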
You could also use plain threads and borrow the network capabilities from Asio (without using the event-driven programming parts). If you can use C++11, then you can directly use std::thread and all the other functionality available (mutexes, condition variables, futures, etc.). If you cannot use C++11, you can always use Boost.Thread.
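A minimal sketch of those C++11 primitives working together, in the shape your core would likely use (one queue of messages, one worker draining it):

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    int main() {
        std::queue<int> mailbox;
        std::mutex m;
        std::condition_variable cv;
        bool done = false;

        // Worker: sleeps until notified that a message (or shutdown) is pending.
        std::thread worker([&] {
            std::unique_lock<std::mutex> lock(m);
            for (;;) {
                cv.wait(lock, [&] { return !mailbox.empty() || done; });
                if (mailbox.empty() && done) return;
                int msg = mailbox.front();
                mailbox.pop();
                std::cout << "handled message " << msg << "\n";
            }
        });

        for (int i = 0; i < 3; ++i) {  // producer side: post three messages
            { std::lock_guard<std::mutex> lock(m); mailbox.push(i); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lock(m); done = true; }
        cv.notify_one();
        worker.join();
    }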
Finally, if you really want to go for MPI, you can have a look at Boost MPI. At least you will have a much more C++ friendly way of using MPI.

Node.js or Erlang

I really like these tools for the level of concurrency they can handle.
Erlang/OTP looks like the much more stable solution, but it requires much more learning and a lot of diving into the functional language paradigm. And it looks like Erlang/OTP handles multi-core CPUs much better (correct me if I am wrong).
But which should I choose? Which one is better in the short and long term perspective?
My goal is to learn a tool which makes scaling my Web projects under high load easier than traditional languages.
I would give Erlang a try. Even though it will be a steeper learning curve, you will get more out of it since you will be learning a functional programming language. Also, since Erlang is specifically designed to create reliable, highly concurrent systems, you will learn plenty about creating highly scalable services at the same time.
I can't speak for Erlang, but a few things that haven't been mentioned about node:
Node uses Google's V8 engine to actually compile JavaScript into machine code, so Node is actually pretty fast. That's on top of the speed benefits offered by event-driven programming and non-blocking I/O.
Node has a pretty active community. Hop onto their IRC group on freenode and you'll see what I mean
I've noticed the above comments push Erlang on the basis that it will be useful to learn a functional programming language. While I agree it's important to expand your skillset and get one of those under your belt, you shouldn't base a project on the fact that you want to learn a new programming style
On the other hand, Javascript is already in a paradigm you feel comfortable writing in! Plus it's javascript, so when you write client side code it will look and feel consistent.
node's community has already pumped out tons of modules! There are modules for redis, mongodb, couch, and what have you. Another good module to look into is Express (think Sinatra for node)
Check out the video on yahoo's blog by Ryan Dahl, the guy who actually wrote node. I think that will help give you a better idea where node is at, and where it's going.
Keep in mind that Node is still in late development stages and so has been undergoing quite a few changes, changes that have broken earlier code. However, supposedly it's at a point where you can expect the API not to change too much more. So if you're looking for something fun, I'd say Node is a great choice.
I'm a long-time Erlang programmer, and this question prompted me to take a look at node.js. It looks pretty damn good.
It does appear that you need to spawn multiple processes to take advantage of multiple cores. I can't see anything about setting processor affinity, though. You could use taskset on Linux, but it probably should be parameterized and set in the program.
I also noticed that the platform support might be a little weaker. Specifically, it looks like you would need to run under Cygwin for Windows support.
Looks good though.
Edit
Node.js now has native support for Windows.
I'm looking at the same two alternatives you are, gotts, for multiple projects.
So far, the best razor I've come up with to decide between them for a given project is whether I need to use Javascript. One existing system I'm looking to migrate is already written in Javascript, so its next version is likely to be done in node.js. Other projects will be done in some Erlang web framework because there is no existing code base to migrate.
Another consideration is that Erlang scales well beyond just multiple cores, it can scale to a whole datacenter. I don't see a built-in mechanism in node.js that lets me send another JS process a message without caring which machine it is on, but that's built right into Erlang at the lowest levels. If your problem isn't big enough to need multiple machines or if it doesn't require multiple cooperating processes, this advantage isn't likely to matter, so you should ignore it.
Erlang is indeed a deep pool to dive into. I would suggest writing a standalone functional program first before you start building web apps. An even easier first step, since you seem comfortable with Javascript, is to try programming JS in a more functional style. If you use jQuery or Prototype, you've already started down this path. Try bouncing between pure functional programming in Erlang or one of its kin (Haskell, F#, Scala...) and functional JS.
Once you're comfortable with functional programming, seek out one of the many Erlang web frameworks; you probably shouldn't be writing your app directly to something low-level like inets at this late stage. Look at something like Nitrogen, for instance.
While I'd personally go for Erlang, I'll admit that I'm a little biased against JavaScript. My advice is that you evaluate a few points:
Are you reusing existing code in either of those languages (both in terms of source code, and programmer experience!)
Do you need/want on-the-fly updates without stopping the application (This is where Erlang wins by default - its runtime was designed for that case, and OTP contains all the tools necessary)
How big is the expected traffic, in terms of separate, concurrent operations, not bandwidth?
How "parallel" are the operations you do for each request?
Erlang has really fine-tuned concurrency and a network-transparent, parallel, distributed system. Depending on what exactly the project is, the availability of a mature implementation of such a system might outweigh any issues with learning a new language. There are also two other languages that run on the Erlang VM which you can use: the Ruby/Python-like Reia and Lisp Flavored Erlang.
Yet another option is to use both, especially with Erlang being used as a kind of "hub". I'm unsure whether Node.js has a foreign function interface, but if it has, Erlang has a C library for external processes to interface with the system just like any other Erlang process.
It looks like Erlang performs better for deployment in a relatively low-end server (512MB 4-core 2.4GHz AMD VM). This is from SyncPad's experience of comparing Erlang vs Node.js implementations of their virtual whiteboard server application.
There is one more language on the same VM as Erlang: Elixir.
It's a very interesting alternative to Erlang; check it out.
It also has a fast-growing web framework based on it: the Phoenix Framework.
WhatsApp could never have achieved its level of scalability and reliability without Erlang: https://www.youtube.com/watch?v=c12cYAUTXXs
I would prefer Erlang over Node.
If you want concurrency, Node can be substituted by Erlang or Golang because of their lightweight processes.
Erlang is not easy to learn and requires a lot of effort, but its community is active, so you can get help there; the learning curve is really the only reason people prefer Node.

Is Communicating Sequential Processes ever used in large multi threaded C++ programs?

I'm currently writing a large multi-threaded C++ program (> 50K LOC).
As such, I've been motivated to read up a lot on various techniques for handling multi-threaded code. One theory I've found to be quite cool is:
http://en.wikipedia.org/wiki/Communicating_sequential_processes
And it was invented by a slightly famous guy who's made other non-trivial contributions to concurrent programming.
However, is CSP used in practice? Can anyone point to any large application written in a CSP style?
Thanks!
CSP, as a process calculus, is fundamentally a theoretical thing that enables us to formalize and study some aspects of a parallel program.
If you instead want a theory that enables you to build distributed programs, then you should take a look at parallel structured programming.
Parallel structured programming is the basis of current HPC (high-performance computing) research. It provides a methodology for how to approach and design parallel programs (essentially, flowcharts of communicating computing nodes) and runtime systems to implement them.
A central idea in parallel structured programming is that of the algorithmic skeleton, developed initially by Murray Cole. A skeleton is something like a parallel design pattern with an associated cost model and (usually) a run-time system that supports it. A skeleton models, studies, and supports a class of parallel algorithms that have a certain "shape".
As a notable example, mapreduce (made popular by Google) is just a kind of skeleton that addresses data parallelism, where a computation can be described by a map phase (apply a function f to all elements that compose the input data) and a reduce phase (take all the transformed items and "combine" them using an associative operator +).
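As a small illustration of that map/reduce shape in plain standard C++ (sequential here; the point of a skeleton is that the same structure parallelizes mechanically):

    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // The mapreduce "shape" in miniature: map f over the input, then fold
    // the results with an associative operator.
    int main() {
        std::vector<int> data{1, 2, 3, 4, 5};
        std::vector<long> squared(data.size());
        std::transform(data.begin(), data.end(), squared.begin(),
                       [](int x) { return static_cast<long>(x) * x; });  // map phase
        long sum = std::accumulate(squared.begin(), squared.end(), 0L);  // reduce phase
        std::cout << "sum of squares: " << sum << "\n";  // prints 55
    }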
I find the idea of parallel structured programming both theoretically sound and practically useful, so I suggest having a look at it.
A word about multi-threading: since skeletons address massive parallelism, they are usually implemented on distributed memory rather than shared memory. Intel has developed a tool, TBB, which addresses multi-threading and (partially) follows the parallel structured programming framework. It is a C++ library, so you can probably just start using it in your projects.
Yes and no. The basic idea of CSP is used quite a bit. For example, thread-safe queues in one form or another are frequently used as the primary (often only) communication mechanism to build a pipeline out of individual processes (threads).
Hoare being Hoare, however, there's quite a bit more to his original theory than that. He invented a notation for talking about the processes, defined a specific set of signals that can be sent between the processes, and so on. The notation has since been refined in various ways, quite a bit of work put into proving various aspects, and so on.
Application of that relatively formal model of CSP (as opposed to just the general idea) is much less common. It's been used in a few systems where high reliability was considered extremely important, but few programmers appear interested in learning (yet another) formal design notation.
When I've designed systems like this, I've generally used an approach that's less rigorous, but (at least to me) rather easier to understand: a fairly simple diagram, with boxes representing the processes, and arrows representing the lines of communication. I doubt I could really offer much in the way of a proof about most of the designs (and I'll admit I haven't designed a really huge system this way), but it's worked reasonably well nonetheless.
Take a look at the website for a company called Verum. Their ASD technology is based on CSP and is used by companies like Philips Healthcare, Ericsson and NXP semiconductors to build software for all kinds of high-tech equipment and applications.
So to answer your question: Yes, CSP is used on large software projects in real-life.
Full disclosure: I do freelance work for Verum
Answering a very old question, but an important option seems to be missing:
There is Go, where CSP is a fundamental part of the language. In the Go FAQ, the authors write:
Concurrency and multi-threaded programming have a reputation for difficulty. We believe this is due partly to complex designs such as pthreads and partly to overemphasis on low-level details such as mutexes, condition variables, and memory barriers. Higher-level interfaces enable much simpler code, even if there are still mutexes and such under the covers.
One of the most successful models for providing high-level linguistic support for concurrency comes from Hoare's Communicating Sequential Processes, or CSP. Occam and Erlang are two well known languages that stem from CSP. Go's concurrency primitives derive from a different part of the family tree whose main contribution is the powerful notion of channels as first class objects. Experience with several earlier languages has shown that the CSP model fits well into a procedural language framework.
Projects implemented in Go are:
Docker
Google's download server
Many more
This style is ubiquitous on Unix, where many tools are designed to process from standard in to standard out. I don't have any first-hand knowledge of large systems built that way, but I've seen many small once-off systems that are.
For instance, this simple command line uses (at least) 3 processes:
cat list-1 list-2 list-3 | sort | uniq > final.list
A system I worked on was only moderately sized, but in it I wrote a protocol processor that strips away and interprets successive layers of protocol in a message, using a style very similar to this. It was an event-driven system using something akin to cooperative threading, but I could have used multithreading fairly easily with a couple of added tweaks.
The program is proprietary (unfortunately) so I can't show off the source code.
In my opinion, this style is useful for some things, but usually best mixed with some other techniques. Often there is a core part of your program that represents a processing bottleneck, and applying various concurrency increasing techniques there is likely to yield the biggest gains.
Microsoft had a technology called ActiveMovie (if I remember correctly) that did sequential processing on audio and video streams. Data got passed from one filter to another to go from the input to the output format (and source/sink). Maybe that's a practical example?
The Wikipedia article looks to me like a lot of funny symbols used to represent somewhat pedestrian concepts. For very large or extensible programs, the formalism can be very important for checking how the (sub)processes are allowed to interact.
For a 50,000-line program, you're probably better off architecting it as you see fit.
In general, following ideas such as these is a good idea in terms of performance. Persistent threads that process data in stages will tend not to contend, and exploit data locality well. Also, it is easy to throttle the threads to avoid data piling up as a fast stage feeds a slow stage: just block the fast one if its output buffer grows too big.
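That throttling is just a bounded blocking queue between stages. A minimal sketch:

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    // Bounded blocking queue: a fast producer stage blocks in push() once the
    // buffer is full, so it is automatically throttled to the consumer's pace.
    template <typename T>
    class BoundedQueue {
    public:
        explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

        void push(T item) {
            std::unique_lock<std::mutex> lock(m_);
            not_full_.wait(lock, [&] { return q_.size() < capacity_; });
            q_.push(std::move(item));
            not_empty_.notify_one();
        }

        T pop() {
            std::unique_lock<std::mutex> lock(m_);
            not_empty_.wait(lock, [&] { return !q_.empty(); });
            T item = std::move(q_.front());
            q_.pop();
            not_full_.notify_one();
            return item;
        }

    private:
        const std::size_t capacity_;
        std::queue<T> q_;
        std::mutex m_;
        std::condition_variable not_full_, not_empty_;
    };

One of these between each pair of persistent stage threads gives you the cat | sort | uniq structure in-process.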
A little bit off topic, but for my thesis I used a tool framework called TERRA/LUNA, which is aimed at software development for embedded control systems but is used heavily for all sorts of software development at my institute (so only academic use here).
TERRA is a graphical CSP and software architecture editor, and LUNA is both the name of a C++ library for CSP-based constructs and of the plugin you'll find in TERRA to generate C++ code from your CSP models.
It becomes very handy in combination with FDR3 (a CSP refinement checker) to detect any sort of lock (deadlock, livelock, etc.), and even for profiling.

Can one make concurrent, scalable, reliable programs in C as in Erlang?

A theoretical question: after reading Armstrong's 'Programming Erlang' book, I was wondering the following:
It will take some time to learn Erlang. Let alone master it. It really is fundamentally different in a lot of respects.
So my question: is it possible to write 'like Erlang', or with some 'Erlang-like framework', such that, provided you take care not to create functions with side effects, you can create apps as scalable and reliable as in Erlang? Maybe with the same message-sending, loads-of-'mini-processes' paradigm.
The advantage would be to not throw all your accumulated C/C++ knowledge over the fence.
Any thoughts about this would be welcome
Yes, it is possible, but...
Probably the best answer for this question is given by Robert Virding’s First Rule:
“Any sufficiently complicated concurrent program in another language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang.”
A very good rule is to use the right tool for the task. Erlang excels at concurrency and reliability; C/C++ was not designed with these properties in mind.
If you don't want to throw away your C/C++ knowledge and experience, and your project allows this kind of division, a good approach is to create a mixed solution: write the concurrency, communication and error-handling code in Erlang, then add C/C++ parts to do the CPU- and IO-bound stuff.
You clearly can - the Erlang/OTP system is largely written in C (and Erlang). The question is 'why would you want to?'
In 'ye olde days' people used to write their own operating system - but why would you want to?
If you elect to use an operating system your unwritten software has certain properties - it can persist to hard disk, it can speak to a network, it can draw on screens, it can run from the command line, it can be invoked in batch mode, etc, etc...
The Erlang/OTP system is 1.5M lines of code which has been demonstrated to give 99.9999999% uptime in large systems (the UK phone system) - that's 31ms downtime a year.
With Erlang/OTP your unwritten software has high reliability, it can hot-swap itself, your unwritten application can failover when a physical computer dies.
Why would you want to rewrite that functionality?
I would break this into two questions:
Can you write concurrent, scalable C++ applications
Yes. It's certainly possible to create the low level constructs needed in order to achieve this.
Would you want to write concurrent, scalable, C++ applications
Perhaps. But if I was going for a highly concurrent application, I would choose a language that was either designed to fill that void or easily lent itself to doing so (Erlang, F# and possibly C#).
C++ was not designed to build highly concurrent applications. But it can certainly be tweaked into doing so. The cost might be higher than you expect though once you factor in memory management.
Yes, but you will be doing some extra work.
Regarding side effects, consider the approach the .NET/PLINQ team is taking: PLINQ won't be able to enforce that you hand it code with no side effects, but it will assume you do and play by its rules, so we get to use a simpler API. Even if the language doesn't have built-in support for it, it will still simplify things, as you can break the operations apart more easily.
What I can do in one Turing-complete language, I can do in any other Turing-complete language.
So I interpret your question to read, is it as easy to write a reliable and scalable application in C++ as it is in Erlang?
The answer to that is highly subjective. For me it is easier to write it in C++ for the following reasons:
I have already done it in C++ (at least three times).
I don't know Erlang.
I have read a great deal about Stackless Python, which feels to me like a highly concurrent, message-based, cooperative multitasking system in Python; but of course Python is written on top of C.
Having said that. If you already know both languages, and you have the problem well defined, you can then make the best choice based on all the information you have at hand.
The main 'problem' with C (or C++) for writing reliable and easy-to-extend programs is that in C you can do anything. So the first step would be to write a simple framework that restricts things just a bit; most good programmers do that anyway.
In this case, the restrictions would mostly be to make it easy to define a 'process' within whatever level of isolation you want. fork() has a reputation for being slow, and threads also need significant time to spawn, so you might want to use cooperative multitasking, which can be far more efficient; you could even make it preemptive (I think that's what Erlang does). To get multi-core efficiency, set up a pool of threads and make all of them compete to run the tasks.
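A toy sketch of that 'loads of mini-processes' idea, kept single-threaded and round-robin for brevity; a real framework would run the scheduler loop on the thread pool just described and insert preemption points:

    #include <deque>
    #include <functional>
    #include <iostream>
    #include <queue>
    #include <string>

    // A "mini process" is just a mailbox plus a behaviour that handles one
    // message per scheduling slice (cooperative multitasking in miniature).
    struct Process {
        std::queue<std::string> mailbox;
        std::function<void(Process&, const std::string&)> behaviour;
    };

    int main() {
        std::deque<Process*> runnable;

        Process echo;
        echo.behaviour = [](Process&, const std::string& msg) {
            std::cout << "echo got: " << msg << "\n";
        };
        echo.mailbox.push("hello");
        echo.mailbox.push("world");
        runnable.push_back(&echo);

        // Round-robin scheduler: one message per process per turn.
        while (!runnable.empty()) {
            Process* p = runnable.front();
            runnable.pop_front();
            if (p->mailbox.empty()) continue;  // nothing to do: leave the run queue
            std::string msg = p->mailbox.front();
            p->mailbox.pop();
            p->behaviour(*p, msg);
            runnable.push_back(p);  // may have more work; schedule again
        }
    }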
Another important part would be to create an appropriate library of immutable data structures, so that by using them (instead of the standard library) your functions would be (mostly) side-effect-free.
Then it's just a matter of settling on a good API for message passing and futures... not easy, but at least it doesn't seem to require changing the language itself.