There seem to be a few leading horses in the "what is the best language for developing multi-machine distributed concurrent apps" race: Go, Erlang, Clojure, Scala, and possibly others such as Termite/Gambit Scheme and Haskell. I've researched quite a bit, and from what I can tell, Erlang seems to get the most approval for truly distributed concurrent apps, i.e., apps spread across separate networked machines. As I read somewhere, Clojure's concurrency was meant, first and foremost, to center on writing multi-core apps on a single machine. Has Clojure come up with more of a multi-machine distributed strategy? Or is this an unfortunate trade-off, i.e., a good same-machine multi-core strategy at the expense of a good multi-machine strategy, and vice versa?
Clojure's built-in concurrency tools fill several different roles for coordinated and uncoordinated operations within a single address space. Terracotta extends that single address space to more than one computer, and beyond a single address space the actor model seems to be the most popular approach.
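To make the single-address-space part concrete, here is a minimal sketch using only core Clojure (the var names are just placeholders): an atom for uncoordinated synchronous updates, refs in an STM transaction for coordinated ones, and an agent for asynchronous ones.

;; Uncoordinated, synchronous state: an atom.
(def hit-count (atom 0))
(swap! hit-count inc)                 ; atomic update via compare-and-swap

;; Coordinated, synchronous state: refs inside an STM transaction.
(def checking (ref 100))
(def savings  (ref 50))
(dosync                               ; both refs change together or not at all
  (alter checking - 20)
  (alter savings  + 20))

;; Uncoordinated, asynchronous state: an agent.
(def log-entries (agent []))
(send log-entries conj "request handled")   ; applied later on a thread pool

;; All of this coordinates threads within one JVM; none of it crosses
;; machine boundaries on its own.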
The akka-clojure library provides a nice interface to distributed actors from Clojure.
I'm looking for a server that takes up very little disk space and does not require much memory or CPU power, while still being capable of running a Clojure web app.
I'm planning to run it on a Raspberry Pi.
http-kit is probably the best choice:
It is very lightweight and efficient (a less-than-100 KB .jar file with zero dependencies apart from Clojure itself).
It is also fully Ring compatible, so you can use it with most of the regular Clojure web libraries (e.g. Compojure); see the sketch below.
It has great performance and scalability (apparently achieving over 600k concurrent connections on a PC)
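As a minimal sketch of that combination (assuming Compojure and http-kit are on the classpath; the route and port are just placeholders):

(ns tiny.server
  (:require [compojure.core :refer [defroutes GET]]
            [org.httpkit.server :refer [run-server]]))

;; An ordinary Ring handler built with Compojure.
(defroutes app
  (GET "/" [] "Hello from the Raspberry Pi"))

;; Start http-kit's embedded server; run-server returns a function
;; that stops the server when called.
(defn -main [& args]
  (run-server app {:port 8080}))

On a Raspberry Pi you would typically package this as an uberjar and start it with a plain java -jar, so the whole footprint stays small.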
With respect to this question, is it always necessary to have a perfectly concurrency-safe web application, or can a developer afford to leave some possible concurrency issues untreated (e.g. when money is not involved) because the probability of them happening is very low anyway? What is the best practice?
NOTE: By concurrency issues I mean the issues arising from overlapping executions of scripts. I do not mean multi-user issues like the classic lost update (with the timestamp solution), because the probability of those is, imho, significant, and I am pretty sure that the best practice there is to always handle them.
If you're going to run your code on a web server, you should always write your code in such a way that multiple copies of it can run at the same time.
It's really not that difficult. In most cases, wrapping state-modifying operations in database transactions is all that is needed (or using a lock file, or any of the other well-known solutions to this problem); a minimal sketch follows below.
Note: if all you're doing is reading data then there are no concurrency issues.
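As a sketch of the transaction approach, here is one way it might look in Clojure (assuming clojure.java.jdbc, a PostgreSQL database, and a hypothetical counters table; the same idea applies in any language): the read-modify-write runs inside a single transaction with a row lock, so overlapping requests cannot interleave.

(ns shop.orders
  (:require [clojure.java.jdbc :as jdbc]))

;; Placeholder connection details, for illustration only.
(def db-spec {:dbtype "postgresql" :dbname "shop"})

(defn claim-next-order-number!
  "Reads and increments a counter inside one transaction. The row lock
   (SELECT ... FOR UPDATE) keeps two overlapping requests from both
   seeing the same value."
  []
  (jdbc/with-db-transaction [tx db-spec]
    (let [{:keys [value]} (first (jdbc/query
                                   tx
                                   ["SELECT value FROM counters WHERE name = ? FOR UPDATE"
                                    "order-no"]))]
      (jdbc/update! tx :counters {:value (inc value)} ["name = ?" "order-no"])
      (inc value))))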
I have a requirement to store millions of records, all of them unique, with multiple columns.
For example:
eventcode  description  count
526        blocked      100
5230       xxx          20
....
While fetching, I have the following requirements: sorting on the count column and filtering on columns.
So I thought of using HBase, but I googled around and read that Hypertable is faster, and now I am a bit confused about which one to pick. Please help me with this.
Note: I want to use C++ for transactions (reading, writing).
BIG disclaimer: I work for Hypertable.
We created a benchmark a while ago, which you can read here: http://hypertable.com//why_hypertable/hypertable_vs_hbase_2/
Conclusion: Hypertable is faster, usually twice as fast.
Performance was actually the reason why Hypertable was founded. Back then, some people were sitting together and discussing an open-source implementation of Google's Bigtable architecture. They did not agree on the programming language (Java vs. C++; the disagreement was about performance). As a result, one group founded Hypertable (a C++ implementation) and the other group started working on HBase (in Java).
If you do not trust benchmarks then you will have to run your own; both systems are open source and free to use. If you have questions about Hypertable or run into problems while evaluating it, feel free to drop me a mail (or use the mailing list; all questions get answered).
Btw, Hypertable does not (yet) support sorting; you will have to implement this in your client application.
I was reading about iostreams in a C++ textbook, and I came across this:
"Whenever you want to store information on the computer for longer than the running time of a program, the usual approach is to collect the data into a logically cohesive whole and store it on a permanent storage medium as a file."
(Quoted from Programming Abstractions in C++)
Is there an UNUSUAL approach to storing data?
Pushing the data across to a server, experimental operating systems that let you freeze parts of RAM, etc.
This is a very vague question, and really, has no good answer.
I guess if you stored it somewhere in RAM and hoped for it to still be there when you run your program again, that would be an unusual way of storing it :-)
Yesterday my friend and I had a nice conversation about IT, and he asked me WHY Java EE is so widely used when it comes to building complicated IT systems. From my point of view the advantages are easily visible, but he is an IT manager with a lot of Microsoft experience (and little Java experience), so I would like to hear your voice. And I'll give him a link, of course.
I don't want a new .NET vs. Java war; just: why Java EE? :)
Java's advantage is that it is a popular platform (i.e. lots of developers know it) that's relatively easy to use, runs on multiple operating systems, and is fairly capable. So you can get stuff done with it. It's not always the best tool for the job but most of the time it's an adequate tool that's low-risk, and lots of the time it is among the best choices you can make for your task. Business isn't about the best computer technology, it's about return on investment, and Java lets you get a decent return on your developer investment.
Most complex systems are distributed, and distributed computing is difficult. Java EE is an attempt to mask those complexities (scalability with services like JMS, distributed transactions, distributed scope management, etc.) and allow the programmer to remain focused on the business problem, not the technical one.