Importance of concurrency issues in web applications [closed] - concurrency

With respect to this question: is it always necessary for a web application to be perfectly concurrency-safe, or can a developer afford to leave some potential concurrency issues untreated (e.g. when money is not involved), because the probability of them occurring is very low anyway? What is the best practice?
NOTE: By concurrency issues I mean issues arising from overlapping executions of scripts. I do not mean multi-user issues like the classic lost update (with its timestamp solution), because the probability of those is, in my opinion, significant, and I am fairly sure the best practice is to always handle them.

If you're going to run your code on a web server, you should always write your code in such a way that multiple copies of it can run at the same time.
It's really not that difficult. In most cases, database transactions around state-modifying operations are all that is needed (or a lock file, or any of the other well-known solutions to this problem); see the sketch below.
Note: if all you're doing is reading data then there are no concurrency issues.
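For illustration, here is a minimal sketch in C++ against the SQLite C API (just one possible database; the answer doesn't name one). The items table, stock column, and decrement_stock helper are invented for the example; the point is only that the read-modify-write happens inside a single transaction:

```cpp
// Sketch: wrap a state-modifying operation in one transaction so two
// overlapping requests cannot interleave between the read and the write.
// Table/column names are placeholders.
#include <sqlite3.h>

bool decrement_stock(sqlite3* db, int item_id) {
    // BEGIN IMMEDIATE takes the write lock up front; a second request
    // running the same code waits (or gets SQLITE_BUSY) instead of
    // silently overwriting our update.
    if (sqlite3_exec(db, "BEGIN IMMEDIATE;", nullptr, nullptr, nullptr) != SQLITE_OK)
        return false;

    const char* sql =
        "UPDATE items SET stock = stock - 1 WHERE id = ?1 AND stock > 0;";
    sqlite3_stmt* stmt = nullptr;
    bool ok = sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) == SQLITE_OK
              && sqlite3_bind_int(stmt, 1, item_id) == SQLITE_OK
              && sqlite3_step(stmt) == SQLITE_DONE;
    sqlite3_finalize(stmt);

    if (!ok) {
        sqlite3_exec(db, "ROLLBACK;", nullptr, nullptr, nullptr);
        return false;
    }
    return sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr) == SQLITE_OK;
}
```

The same idea works with a lock file: grab an exclusive lock (e.g. flock) on a well-known file before the critical section and release it afterwards.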

Related

Looking for the smallest app container which is capable of running a clojure-powered website [closed]

I'm looking for a program which takes up very little disk space and does not require much memory or CPU power, while still being capable of running a Clojure web app.
I'm planning to run it on a Raspberry PI.
http-kit is probably the best choice:
It is very lightweight and efficient (less than 100k .jar file with zero dependencies apart from Clojure itself)
It is also fully Ring compatible so you can use it with most of the regular Clojure web libraries (e.g. Compojure).
It has great performance and scalability (apparently achieving over 600k concurrent connections on a PC)

Logging framework for C++ [closed]

I apologize for raising a topic which has been widely discussed before, but I find that none of the discussions clearly say which one to use in the end. My requirements for a logging framework in my C++ project are:
Thread safe.
Should support multiple targets.
Log rotation possible.
A way to identify modules implicitly.
I have been using Boost.Log for some time in a small C++ project and it worked well. But when I moved to a large C++ project, I found that supporting multiple targets (I mean multiple log files for the same project) is a nightmare, there is no way to implicitly indicate which module is logging (see the sketch below for what I mean), and above all the compile time increased by at least 40%.
Now I am looking at alternative frameworks, and log4cplus and logog seem to fill all my requirements. I wanted to get an expert opinion on which would best suit the above criteria, rather than ending up in a soup again after using a library for some time.
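To make the "identify modules implicitly" requirement concrete, here is a minimal plain-C++ sketch, not tied to any of the frameworks above; the LOG_MODULE / LOG_INFO names are invented, and std::clog stands in for a real sink. This is the kind of per-module tagging I would like the library to handle for me:

```cpp
// Each translation unit defines its module name once; the macro stamps
// it onto every message. A mutex keeps interleaved output sane
// (thread-safety requirement).
#include <iostream>
#include <mutex>

#define LOG_MODULE "billing"   // defined once per module / source file

inline std::mutex& log_mutex() {
    static std::mutex m;
    return m;
}

#define LOG_INFO(msg)                                           \
    do {                                                        \
        std::lock_guard<std::mutex> lock(log_mutex());          \
        std::clog << "[" << LOG_MODULE << "] " << msg << "\n";  \
    } while (0)

int main() {
    LOG_INFO("invoice generated");   // prints: [billing] invoice generated
    return 0;
}
```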

Which one is faster, HBase or Hypertable? [closed]

I have a requirement to store millions of records, all unique, with multiple columns.
for example:

eventcode  description  count
526        blocked      100
5230       xxx          20
...
While fetching I need the following: sorting on the count column and filtering on columns.
So I thought of using HBase, but after googling I read that Hypertable is faster.
So I am a bit confused about which one to pick.
Please help me with this.
Note: I want to use C++ for transactions (reading, writing).
BIG disclaimer: I work for Hypertable.
We created a benchmark a while ago, which you can read here: http://hypertable.com//why_hypertable/hypertable_vs_hbase_2/
Conclusion: Hypertable is faster, usually twice as fast.
Performance was actually the reason why Hypertable was founded. Back then some guys were sitting together discussing an open source implementation of Google's Bigtable architecture. They did not agree on the programming language (Java vs. C++; the disagreement was about performance). As a result, one group founded Hypertable (a C++ implementation) and the other group started working on HBase (in Java).
If you do not trust benchmarks then you will have to run your own; both systems are open source and free to use. If you have questions about Hypertable or run into problems while evaluating it, feel free to drop me a mail (or use the mailing list; all questions get answered).
Btw, Hypertable does not (yet) support sorting. You will have to implement this in your client application.
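A rough sketch of what that client-side sorting could look like in C++ (the Record struct and the sample rows just mirror the eventcode/description/count example from the question; fetching from Hypertable itself is left out):

```cpp
// Sort rows fetched from the store by count on the client, since the
// store only returns rows in key order. The data below is made up.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Record {
    std::string eventcode;
    std::string description;
    std::uint64_t count;
};

int main() {
    // In practice these rows would come from a Hypertable/HBase scan.
    std::vector<Record> rows = {
        {"526",  "blocked", 100},
        {"5230", "xxx",     20},
    };

    // Descending by count.
    std::sort(rows.begin(), rows.end(),
              [](const Record& a, const Record& b) { return a.count > b.count; });

    for (const auto& r : rows)
        std::cout << r.eventcode << " " << r.description << " " << r.count << "\n";
}
```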

When should you not use open source code? [closed]

Are there some cases where it may not be a good idea to use the code of an open source project even though your company might allow you to do so?
Some cases that I think might be valid are:
The code may be implemented in a different language.
It is not portable.
It may need some other closed-source libraries.
What might be some other reasons?
Yes, some open-source licenses may require you to expose your own source code, e.g. the GPL.
http://encodable.com/tech/blog/2006/02/25/Why_the_GPL_is_Incompatible_with_Commercial_Software
When security is involved and you do not have access to the actual code so you never (truly) know what you are using.
Beta code may not be appropriate in a production system.
If the library has a web page and there hasn't been any activity on it for a long, long time: either the code is perfect, or no one is looking at it anymore and no bug fixes are being applied.

*unusual* approach to memory saving? [closed]

I was reading about iostreams in a C++ textbook, and I came across this:
Whenever you want to store information on the computer for longer than the running time of a program, the usual approach is to collect the data into a logically cohesive whole and store it on a permanent storage medium as a file.
(Quoted from Programming Abstractions in C++)
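For reference, the "usual approach" the quote describes amounts to little more than this with C++ iostreams (file name and data are made up):

```cpp
// Collect the data into a cohesive whole and write it to a file so it
// outlives the program. Names are placeholders.
#include <fstream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> data = {"alpha", "beta", "gamma"};

    std::ofstream out("data.txt");   // permanent storage medium: a file
    for (const auto& item : data)
        out << item << "\n";
    return 0;                        // stream is flushed and closed on destruction
}
```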
Is there an UNUSUAL approach to storing data?
Pushing the data across to a server, (experimental) operating systems that let you freeze parts of RAM, etc.
This is a very vague question, and really, has no good answer.
I guess if you store it somewhere in RAM and hope for it to still be there when you run your program again, that would be an unusual way of storing it :-)