My new project assignment is to extend an existing distributed architecture with a new module for mathematical calculations, fronted by a REST API. The system is written in Java, and ZeroMQ is used for inter-process communication.
I would like to write at least parts of the new module in Clojure. Technically, it will consist of at least two submodules: one for the calculations themselves, another for sorting and filtering the results of those calculations. The basic requirement is that the system support distributed computation, so that it can run on as many machines as needed for adequate performance. The initial advice was to use Apache Storm.
Would Storm work for designing a system with many submodules executing different types of tasks? What other libraries exist that would make this possible for Clojure-based computation nodes?
If possible, I'd also be very happy to hear your general advice on how to approach this kind of application design with Clojure.
Thanks!
Background:
I want to create an automation framework in C++ where, on the one hand, "sensors" and "actors", and on the other, "logic engines", can be connected to a "core".
The "sensors" and "actors" might be connected to the machine running the "core", but some might also be accessible via a field bus or via a normal computer network. Some might work continuously or periodically (e.g. a new value every 100 milliseconds), others might work event-driven (e.g. a message with the new state arrives only when a switch is [de]activated).
The "logic engines" would be pluggable into the core and would, for example, consist of embedded, well-known scripting languages (Perl, Python, Lua, ...). Users would run various small scripts that can subscribe to "sensors" and write to "actors".
The "core" would route the sensor/actor information to the subscribed scripts and invoke them: some right after the event occurs, others periodically as defined in a scheduler.
Additional requirements:
The systems ("server") running this automation application might also be quite
small (500MHz x86 and 256 MB RAM) or if possible even tiny (OpenWRT
based router) as power consumption is an issue
=> efficiency is important
=> multicore support not for the moment, but I'm sure it'll become important soon - so the design has to support it
Some sort of fail save mode has to be possible, e.g. two systems monitoring each other
application / framework will be GPL => all used libraries have to be compatible
the server would run Linux, but cross platform would be nice
The big question:
What is the best architecture for this kind of application / framework?
My reasoning:
So as not to reinvent the wheel, I was wondering about using MPI to do all the event handling.
This would allow me to focus on the relevant parts and not on the message handling, especially when two or more "servers" work together (acting as watchdogs for each other, as well as each having a few sensors and actors connected). Each sensor and actor handler, as well as the logic engines themselves, would only be required to implement a predefined MPI-based interface and would thus be crash-safe: the core could restart any of them when it is no longer responsive.
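For context, this is roughly the shape message handling takes in MPI; a minimal two-rank sketch (run with mpirun -np 2), where rank 0 plays the core and rank 1 a handler, and the payload is illustrative:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            // The "core" sends a value to the handler on rank 1.
            double value = 42.0;
            MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            // A "handler" blocks on its mailbox, event-loop style.
            double value;
            MPI_Status status;
            MPI_Recv(&value, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            std::printf("handler got %f from rank %d\n", value, status.MPI_SOURCE);
        }
        MPI_Finalize();
        return 0;
    }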
The additional questions:
Would that even be possible with MPI? (It would be used a bit outside its usual context...)
Would the overhead of MPI be too big? Should I just write it myself using sockets and threads?
Are there other libraries that are better suited to this case?
You should be able to construct your system using MPI, but I think MPI is too focused on high-performance computing. Moreover, since it was designed for C, it does not fit the object-oriented way of programming very well. IMO there are other approaches better suited to your needs:
Boost ASIO might be a good fit for designing your system. It includes both network functionality and support for event-driven programming (which could be a good way to design your system). Have a look at the Think-Async webpage for some examples of using ASIO for event-driven programming.
You could also use plain threads and borrow the network capabilities from ASIO (without using the event-driven programming parts). If you can use C++11, then you can directly use std::thread and all the other functionality available (mutexes, condition variables, futures, etc.). If you cannot use C++11, you can always use Boost Thread.
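A minimal sketch of that plain-threads approach, using only C++11 standard-library primitives (the event type and names are illustrative):

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    // One producer (say, a sensor handler) pushes events; the main
    // thread (the core) waits on a condition variable and consumes them.
    int main() {
        std::queue<int> events;
        std::mutex m;
        std::condition_variable cv;

        std::thread producer([&] {
            for (int i = 0; i < 3; ++i) {
                { std::lock_guard<std::mutex> lock(m); events.push(i); }
                cv.notify_one();
            }
        });

        for (int received = 0; received < 3; ++received) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !events.empty(); });
            int e = events.front();
            events.pop();
            lock.unlock();
            std::cout << "event " << e << "\n";
        }
        producer.join();
        return 0;
    }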
Finally, if you really want to go for MPI, you can have a look at Boost MPI. At least you will have a much more C++ friendly way of using MPI.
In my workplace (and a lot of other places), there is a lot of emphasis on building architecture around services. (I am working at an e-commerce startup.) However, I think services are implicitly assumed to be distributed. I am a believer in the first law of distribution: "don't distribute". So I believe that we should not unnecessarily complicate the architecture; it should be an architecture that can evolve.
One way to approach the problem would be to create well-defined namespaces and build the code around them, but keep the communication via Java APIs (this keeps the monitoring requirements low and the reliability/availability problems small). This can be evolved into a distributed architecture by wrapping modules in web services as and when the scale requirements kick in.
So the question is: what are the cons of writing the code as a single application and evolving it into distributed services, rather than jumping straight into a web-services-based architecture? Am I right in assuming that services should imply the basic principles of design (abstraction, encapsulation, etc.), rather than distribution over a network?
Distribution requires modularity. However, it requires more than just modularity: it also requires coarse-grained interaction between the modules.
For example, in a single-process ecommerce system, you might have separate modules for managing the user's shopping cart and calculating prices. They might interact by the cart asking the calculator to price an item, then another item, etc. That would be perfectly fine.
However, in a distributed system, that would require a torrent of small method calls, which is inefficient; you might get away with it if you used CORBA for distribution, but with SOAP, you'd be in trouble. Rather, you would want to have the cart ask the calculator to price the whole order in one go. That might be worse from a separation of concerns point of view (why should the calculator have to know about the idea of carts?), but it would be required to make the system perform adequately.
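The difference shows up in the shape of the interface alone. A sketch, with purely illustrative types and names:

    #include <vector>

    struct Item  { /* ... */ };
    struct Order { std::vector<Item> items; };
    struct Price { long cents = 0; };

    // Fine-grained: perfectly fine in-process, but once distributed it
    // costs one network round trip per item in the cart.
    struct ItemPricer {
        virtual ~ItemPricer() = default;
        virtual Price priceItem(const Item& item) = 0;
    };

    // Coarse-grained: one remote call prices the whole order at once,
    // at the cost of the calculator knowing about orders.
    struct OrderPricer {
        virtual ~OrderPricer() = default;
        virtual Price priceOrder(const Order& order) = 0;
    };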
Related to granularity, there's also the problem of modules interacting via interfaces versus implementations. Within a single process, you can define a set of interfaces through which modules interact; modules can pass each other objects implementing those interfaces without having to tell each other about the implementations (e.g. a scheduler module could be passed anything implementing interface Job { void run(); }). Across a network, the requirement for coarse grain means that any objects passed must be passed by value, because passing by reference would entail fine-grained calls back to the passing module (unless you were using mobile code, which you aren't, because nobody is). That in turn means that both modules must know about and agree on the implementations of the objects.
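To make that concrete, here is a sketch of both situations; the names are illustrative:

    #include <memory>
    #include <vector>

    // In-process: the scheduler depends only on the interface. Callers
    // can hand it any implementation without the scheduler ever
    // knowing the concrete type.
    struct Job {
        virtual ~Job() = default;
        virtual void run() = 0;
    };

    class Scheduler {
    public:
        void submit(std::unique_ptr<Job> job) { jobs_.push_back(std::move(job)); }
        void runAll() {
            for (auto& j : jobs_) j->run();
            jobs_.clear();
        }
    private:
        std::vector<std::unique_ptr<Job>> jobs_;
    };

    // Across a network, behaviour cannot travel, only data can. Both
    // sides must agree on a concrete, serializable description instead.
    struct JobDescription {
        int job_type;                  // both modules must know what each type means
        std::vector<char> parameters;  // passed by value, serialized on the wire
    };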
So, while building a single-process system in a modular way makes it easier to move to SOA later, it doesn't make the move as simple as wrapping each module in a SOAP interface. At least, not unless you build your system in a coarse-grained manner from the start, which means throwing away a number of sound and helpful software engineering practices.
We are going to write a concurrent program in Clojure that will extract keywords from a huge volume of incoming mail and cross-check them against a database.
One of my teammates has suggested using Erlang to write this program instead.
I should note that I am new to functional programming, so I am in some doubt about whether Clojure is a good choice for writing this program, or whether Erlang is more suitable.
Do you really mean concurrent or distributed?
If you mean concurrent (multi-threaded, multi-core etc.), then I'd say Clojure is the natural solution.
Clojure's STM model is well designed for multi-core concurrency, since it is very efficient at storing and managing shared state between threads. If you want to understand more, it is well worth watching this excellent video.
Clojure's STM allows safe mutation of data by concurrent threads. Erlang sidesteps this problem by making everything immutable, which is fine in itself but doesn't help when you genuinely need shared mutable state. If you want shared mutable state in Erlang, you have to implement it with a set of message interactions, which is neither efficient nor convenient (that's the price of a shared-nothing model...).
You will get inherently better performance with Clojure in a concurrent setting on a large machine, since Clojure doesn't rely on message passing and hence communication between threads can be much more efficient.
If you mean distributed (i.e. many different machines sharing work over a network which are effectively running as isolated processes) then I'd say Erlang is the more natural solution:
Erlang's immutable, shared-nothing, message-passing style forces you to write code in a way that can be distributed, so idiomatic Erlang can automatically be distributed across multiple machines and run in a distributed, fault-tolerant setting.
Erlang is therefore very well optimised for this use case, so would be the natural choice and would certainly be the quickest to get working.
Clojure could do it as well, but you will need to do much more work yourself (i.e. you'd either need to implement or choose some form of distributed computing framework) - Clojure does not currently come with such a framework by default.
In the long term, I hope that Clojure develops a distributed computing framework that matches Erlang - then you can have the best of both worlds!
The two languages and runtimes take different approaches to concurrency:
Erlang structures programs as many lightweight processes communicating between one another. In this case, you will probably have a master process sending jobs and data to many workers and more processes to handle the resulting data.
Clojure favors a design where several threads share data and state using common data structures. It sounds particularly suitable for cases where many threads access the same data (read-only) and share little mutable state.
You need to analyze your application to determine which model suits you best. This may also depend on the external tools you use -- for example, the ability of the database to handle concurrent requests.
Another practical consideration is that Clojure runs on the JVM, where many open-source libraries are available.
Clojure is a Lisp running on the JVM. Erlang is designed from the ground up to be highly fault-tolerant and concurrent.
I believe the task is doable with either of these languages and many others as well. Your experience will depend on how well you understand the problem and how well you know the language. If you are new to both, I'd say the problem will be challenging no matter which one you choose.
Have you thought about something like Lucene/Solr? It's great software for indexing and searching documents. I don't know what "cross checking" means for your context, but this might be a good solution to consider.
My approach would be to write a simple test in each language and measure the performance of each. Both languages are somewhat different from C-style languages, and if you aren't used to them (and you don't have a team that is), you may end up with a maintenance nightmare.
I'd also look at using something like Groovy 1.8. Groovy now includes GPars to enable parallel computing. String and file manipulation in Groovy is very easy indeed.
It depends what you mean by huge.
Strings in Erlang are painful...
but:
If huge means tens of distributed machines, then go with Erlang and write the workers in text-friendly languages (Python? Perl?). You will have a distributed layer on top with highly concurrent local workers. Each worker would be represented by an Erlang process. If you need more performance, rewrite your workers in C. In Erlang it is super easy to talk to other languages.
If huge still means one strong machine, go with the JVM. That is not really huge.
If huge means hundreds of machines, I think you will need something stronger and Google-like (Bigtable, MapReduce), probably on a C++ stack. Erlang would still be OK, but you will need good devs to code it.
I really like both of these tools (Erlang/OTP and Node.js) when it comes to the level of concurrency they can handle.
Erlang/OTP looks like the much more stable solution, but it requires much more learning and a lot of diving into the functional-language paradigm. And it looks like Erlang/OTP handles multi-core CPUs much better (correct me if I am wrong).
But which should I choose? Which one is better in the short and long term perspective?
My goal is to learn a tool which makes scaling my Web projects under high load easier than traditional languages.
I would give Erlang a try. Even though it will be a steeper learning curve, you will get more out of it since you will be learning a functional programming language. Also, since Erlang is specifically designed to create reliable, highly concurrent systems, you will learn plenty about creating highly scalable services at the same time.
I can't speak for Erlang, but here are a few things that haven't been mentioned about Node:
Node uses Google's V8 engine to actually compile JavaScript into machine code, so Node is actually pretty fast - and that's on top of the speed benefits offered by event-driven programming and non-blocking I/O.
Node has a pretty active community. Hop onto their IRC channel on freenode and you'll see what I mean.
I've noticed the comments above push Erlang on the basis that it will be useful to learn a functional programming language. While I agree it's important to expand your skill set and get one of those under your belt, you shouldn't base a project on the fact that you want to learn a new programming style.
On the other hand, JavaScript is already in a paradigm you feel comfortable writing in! Plus it's JavaScript, so when you write client-side code it will look and feel consistent.
Node's community has already pumped out tons of modules! There are modules for Redis, MongoDB, Couch, and what have you. Another good module to look into is Express (think Sinatra for Node).
Check out the video on Yahoo's blog by Ryan Dahl, the guy who actually wrote Node. I think that will help give you a better idea of where Node is at and where it's going.
Keep in mind that Node is still in late development stages, and so has been undergoing quite a few changes - changes that have broken earlier code. However, supposedly it's at a point where you can expect the API not to change too much more. So if you're looking for something fun, I'd say Node is a great choice.
I'm a long-time Erlang programmer, and this question prompted me to take a look at node.js. It looks pretty damn good.
It does appear that you need to spawn multiple processes to take advantage of multiple cores. I can't see anything about setting processor affinity, though. You could use taskset on Linux, but it should probably be parameterized and set in the program.
I also noticed that the platform support might be a little weaker. Specifically, it looks like you would need to run under Cygwin for Windows support.
Looks good though.
Edit
Node.js now has native support for Windows.
I'm looking at the same two alternatives you are, gotts, for multiple projects.
So far, the best razor I've come up with to decide between them for a given project is whether I need to use Javascript. One existing system I'm looking to migrate is already written in Javascript, so its next version is likely to be done in node.js. Other projects will be done in some Erlang web framework because there is no existing code base to migrate.
Another consideration is that Erlang scales well beyond just multiple cores, it can scale to a whole datacenter. I don't see a built-in mechanism in node.js that lets me send another JS process a message without caring which machine it is on, but that's built right into Erlang at the lowest levels. If your problem isn't big enough to need multiple machines or if it doesn't require multiple cooperating processes, this advantage isn't likely to matter, so you should ignore it.
Erlang is indeed a deep pool to dive into. I would suggest writing a standalone functional program first before you start building web apps. An even easier first step, since you seem comfortable with Javascript, is to try programming JS in a more functional style. If you use jQuery or Prototype, you've already started down this path. Try bouncing between pure functional programming in Erlang or one of its kin (Haskell, F#, Scala...) and functional JS.
Once you're comfortable with functional programming, seek out one of the many Erlang web frameworks; you probably shouldn't be writing your app directly to something low-level like inets at this late stage. Look at something like Nitrogen, for instance.
While I'd personally go for Erlang, I'll admit that I'm a little biased against JavaScript. My advice is to evaluate a few points:
Are you reusing existing code in either of those languages (both in terms of source code and programmer experience)?
Do you need/want on-the-fly updates without stopping the application (This is where Erlang wins by default - its runtime was designed for that case, and OTP contains all the tools necessary)
How big is the expected traffic, in terms of separate, concurrent operations, not bandwidth?
How "parallel" are the operations you do for each request?
Erlang has really fine-tuned concurrency and a network-transparent, parallel, distributed runtime. Depending on what exactly the project is, the availability of a mature implementation of such a system might outweigh any issues around learning a new language. There are also two other languages that run on the Erlang VM which you could use: the Ruby/Python-like Reia and Lisp Flavored Erlang.
Yet another option is to use both, with Erlang acting as a kind of "hub". I'm unsure whether Node.js has a foreign function interface, but if it does, Erlang has a C library that lets external processes interface with the system just like any other Erlang process.
It looks like Erlang performs better for deployment on a relatively low-end server (a 512 MB, 4-core, 2.4 GHz AMD VM). This is from SyncPad's experience of comparing Erlang and Node.js implementations of their virtual whiteboard server application.
There is one more language on the same VM that Erlang runs on -> Elixir
It's a very interesting alternative to Erlang; check it out.
It also has a fast-growing web framework built on it -> the Phoenix Framework
WhatsApp could never have achieved its level of scalability and reliability without Erlang: https://www.youtube.com/watch?v=c12cYAUTXXs
I would prefer Erlang over Node.
If you want concurrency, Node can be substituted with Erlang or Golang because of their lightweight processes.
Erlang is not easy to learn, so it requires a lot of effort, but its community is active, so you can get help from it. That learning curve is pretty much the only reason people prefer Node.
Is there a good C++ framework to implement XA distributed transactions?
With the term "good" I mean usable, simple (doesn't imply "easy"), well-structured.
For study purposes, I'm currently proceeding with my own implementation, following the X/Open XA specification.
Thank you in advance.
I am not aware of an open-source or free transaction monitor that has any degree of maturity, although this link does have some fan-out. The incumbent commercial ones are BEA's Tuxedo, Tibco's Enterprise Message Service (really a transactional message queue manager along the lines of IBM's MQ) and Transarc's Encina (now owned by IBM). These systems are all very expensive.
If you want to make your own (and incidentally make a bit of a name for yourself by filling a void in the open-source software space), get a copy of Gray and Reuter.
This is the definitive work on transaction processing systems architecture, written by two of the foremost experts in the field.
Interestingly, they claim that one can implement a working TP monitor in around 10,000 lines of C. This actually sounds quite reasonable, as what it does is not all that complex. On occasion I have been tempted to try.
Essentially you need to make a distributed transaction coordinator that runs as a daemon process. You will need to get the resource manager protocol working from it, so starting with this as a prototype is probably a good start. If you can get it to independently roll back or commit a transaction, you have the basis of the TM-RM interface.
The XA API as defined in the spec is the API for controlling the transaction manager. Strictly speaking, you don't need a 3-tier architecture to use distributed transactions of this sort, but they are more or less pointless without a TP monitor. How you communicate from the front end to the middle tier can be left as an exercise for the reader. You are probably best off using an existing ORB, of which there are several good open-source implementations available.
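The heart of that TM-RM conversation is two-phase commit. A minimal sketch, using a hypothetical ResourceManager interface standing in for the entry points the XA spec exposes through its C function table (xa_prepare, xa_commit, xa_rollback, ...):

    #include <vector>

    // Hypothetical stand-in for one XA resource manager (a database,
    // a queue manager, ...).
    struct ResourceManager {
        virtual ~ResourceManager() = default;
        virtual bool prepare()  = 0;  // phase 1: vote on the outcome
        virtual void commit()   = 0;  // phase 2: make the work durable
        virtual void rollback() = 0;  // phase 2: undo the work
    };

    // Phase 1: ask every RM to prepare; if any votes no, roll back all.
    // Phase 2: all voted yes, so tell every RM to commit. A real TM must
    // durably log the commit decision between the two phases so it can
    // finish the protocol after a crash.
    bool twoPhaseCommit(std::vector<ResourceManager*>& rms) {
        for (auto* rm : rms) {
            if (!rm->prepare()) {
                for (auto* r : rms) r->rollback();
                return false;
            }
        }
        for (auto* rm : rms) rm->commit();
        return true;
    }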
Depending on whether you want to make the DTC and the app server separate processes (which is possibly desirable for stability but not strictly necessary) you could also use ACE as a basis for the DTC server.
If you want to make a high-performance middle-tier server, check out Douglas Schmidt's ACE framework. This comes with an ORB called TAO, and is flexible enough to allow you to use more or less any threading model that takes your fancy. Using it is a trade-off between learning it and the effort of writing your own and debugging all the synchronisation and concurrency issues.
Maybe this is quite late for your task, but it can be useful for other users: LIXA is not a "framework", but it provides an implementation of the TX (Transaction Demarcation) specification and supports most of the XA features.
The TX specification is for C and COBOL languages, but the integration of the C version inside a C++ project should be effortless.
Another option is the open-source Enduro/X distributed transaction processing framework, which lets you write simple C/C++ services that work with resource managers (e.g. databases) and gives you the ability to commit or abort work done by several different executables, on the same or different physical servers, working with different resources/databases.
Internally, XA two-phase commit is used.