How is it possible that VoltDB runs entirely in-memory instead of on disk? [closed] - in-memory-database

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
VoltDB runs entirely in-memory instead of on disk. I am wondering how this is possible: if, say, our data keeps growing while RAM is a limited resource, wouldn't it start to hit bottlenecks fairly soon?

In-memory databases are usually designed to be used as clusters. To scale as the size of the database grows, you have to increase one of these so the database fully fits in memory:
The memory of the server.
The number of servers in the database cluster.
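The two scaling knobs above boil down to simple capacity arithmetic. Here is an illustrative sketch; the helper name and the replication factor (extra copies kept for fault tolerance) are assumptions for the example, not anything VoltDB-specific:

```cpp
#include <cmath>

// Back-of-the-envelope sizing: how many cluster nodes are needed so the
// whole dataset (times its replication factor) fits in aggregate RAM.
// All names here are illustrative, not part of any database's API.
int nodes_needed(double dataset_gb, double usable_ram_per_node_gb,
                 int replication_factor) {
    return static_cast<int>(
        std::ceil(dataset_gb * replication_factor / usable_ram_per_node_gb));
}
```

For example, a 500 GB dataset replicated twice on nodes with 50 GB of usable RAM needs 20 nodes; growing the dataset means either bigger nodes or more of them, exactly as the list above says.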

I don't know anything about this DB, but it's possible to do this. You just need a lot of computers (nodes). If you need more "space", add another node (or pair, or triple, of nodes)...

With an in-memory database you'll need enough physical RAM to hold the state of your application. You can certainly move stale or static data out to a long-term data store for reporting and analysis.

Only if your data is growing faster than memory prices are falling. Most databases, if normalized and excluding blobs/images/files, are much smaller than the maximum RAM size of a modern workstation.

Related

When should I use C++ AMP [closed]

Closed 10 years ago.
When should I use C++ AMP (and when shouldn't I)?
What is the overhead of AMP? How long does it take to copy data to GPU memory and back? What is the minimum data size below which AMP starts to hurt performance?
Copying data isn't that big an overhead as long as you're not doing it too often. Copying a few large chunks of data once in a while is fine. Games, for example, usually copy over instance data for each object on each frame; this can kill performance if overdone, but is usually fine. Notably, they don't copy over things like 3D geometry every frame, which would destroy performance.
The general use case is simple computations (think, at most, an FSM) on a large amount of data, where each datum is treated independently.
As for performance, a profile is the only way to be sure. GPUs are quite different beasts, and the minimum worthwhile data size really depends on the computation at hand and the data spread. For example, GPUs don't really like it when neighbouring threads branch in different directions (branch divergence).
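The "each datum treated independently" shape described above can be sketched in portable C++. A real C++ AMP version needs MSVC, `<amp.h>`, and a DirectX 11 GPU, so this is a host-side stand-in: the lambda body is the kind of branch-free, per-element kernel you would hand to `concurrency::parallel_for_each` with `restrict(amp)`. The function name is illustrative:

```cpp
#include <algorithm>
#include <vector>

// Host-side sketch of an AMP-friendly workload: one independent,
// branch-free operation per datum. In real C++ AMP, this lambda body
// would run on the GPU over a concurrency::array_view via
// concurrency::parallel_for_each(..., restrict(amp)).
std::vector<float> square_all(std::vector<float> v) {
    std::transform(v.begin(), v.end(), v.begin(),
                   [](float x) { return x * x; });
    return v;
}
```

The point of the example is the shape, not the speed: a tiny vector like this would never amortize a GPU copy, which is exactly the "minimum data size" caveat above.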

Importance of concurrency issues in web applications [closed]

Closed 10 years ago.
With respect to this question: is it always necessary to have a perfectly concurrency-safe web application, or can a developer afford to leave some potential concurrency issues untreated (e.g. when money is not involved), because the probability of them happening is very low anyway? What is the best practice?
NOTE: By concurrency issues I mean issues arising from overlapping executions of scripts. I do not mean multi-user issues like the classic lost update with the timestamp solution, because the probability of those is, IMHO, significant, and I am pretty sure the best practice there is to always handle them.
If you're going to run your code on a web server, you should always write your code in such a way that multiple copies of it can run at the same time.
It's really not that difficult. In most cases, database transactions around state-modifying operations are all that is needed (or using a lock file, or any of the other well-known solutions to this problem).
Note: if all you're doing is reading data then there are no concurrency issues.
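The lock-file variant mentioned above can be made concrete with a minimal POSIX sketch. The lock path and function names are illustrative, and a database transaction is usually preferable; this just serializes a state-modifying section across concurrently running instances:

```cpp
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

// Run critical_section while holding an exclusive advisory lock on
// lock_path, so that only one process instance executes it at a time.
// flock() blocks until the lock is acquired; the OS releases it
// automatically if the process dies, so no stale locks are left behind.
bool with_exclusive_lock(const char* lock_path, void (*critical_section)()) {
    int fd = open(lock_path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) return false;
    if (flock(fd, LOCK_EX) != 0) { close(fd); return false; }
    critical_section();          // only one holder at a time
    flock(fd, LOCK_UN);
    close(fd);
    return true;
}
```

Note that this protects only the state-modifying section, which matches the answer's point: pure reads need no such coordination.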

Which one is faster: HBase or Hypertable? [closed]

Closed 10 years ago.
I have a requirement to store millions of records, all unique, with multiple columns. For example:
eventcode description count
526 blocked 100
5230 xxx 20
....
While fetching, I have the following requirements: sorting on the count column and filtering on columns.
So I thought of using HBase, but I googled and read that Hypertable is supposedly faster, so I am a bit confused about which to choose.
Please help me with this.
Note: I want to use C++ for transactions (reading and writing).
BIG disclaimer: I work for Hypertable.
We created a benchmark a while ago, which you can read here: http://hypertable.com//why_hypertable/hypertable_vs_hbase_2/
Conclusion: Hypertable is faster, usually twice as fast.
Performance was actually the reason why Hypertable was founded. Back then, some guys were sitting together discussing an open-source implementation of Google's Bigtable architecture. They did not agree on the programming language (Java vs. C++; the disagreement was about performance). As a result, one group founded Hypertable (a C++ implementation) and the other group started working on HBase (in Java).
If you do not trust benchmarks, you will have to run your own; both systems are open source and free to use. If you have questions about Hypertable or run into problems while evaluating it, feel free to drop me a mail (or use the mailing list; all questions get answered).
Btw, Hypertable does not (yet) support sorting. You will have to implement this in your client application.
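Given that last point, sorting by count has to happen client-side after fetching. A minimal C++ sketch, assuming a hypothetical Row struct mirroring the example columns in the question (this is not the Hypertable client API):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical in-client representation of one fetched record,
// mirroring the eventcode/description/count columns in the question.
struct Row {
    int eventcode;
    std::string description;
    long count;
};

// Sort fetched rows by count, highest first. stable_sort keeps the
// original (e.g. key) order among rows with equal counts.
void sort_by_count_desc(std::vector<Row>& rows) {
    std::stable_sort(rows.begin(), rows.end(),
                     [](const Row& a, const Row& b) {
                         return a.count > b.count;
                     });
}
```

This is fine for result sets that fit in client memory; for millions of rows you would sort only the filtered subset you actually fetch.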

*unusual* approach to memory saving? [closed]

Closed 11 years ago.
I was reading about iostreams in a C++ textbook, and I came across this:
Whenever you want to store information on the computer for longer than the running time of a program, the usual approach is to collect the data into a logically cohesive whole and store it on a permanent storage medium as a file.
(Quoted from Programming Abstractions in C++)
Is there an UNUSUAL approach to storing data?
Pushing the data across to a server, or (experimental) operating systems that let you freeze parts of RAM, etc.
This is a very vague question, and really, has no good answer.
I guess if you store it somewhere in RAM and hope for it to still be there when you run your program again, that would be an unusual way of storing :-)

Realtime currency webservice [closed]

Closed 9 years ago.
Does anyone know of a real-time currency-rate web service with frequent updates (multiple per minute)? It's needed for a small Android app I'm building, so it needs to be free.
I would recommend the European Central Bank. They provide a daily XML feed; nice and simple.
http://www.ecb.int/stats/eurofxref/eurofxref-daily.xml
You can try Yahoo. It is free and easy to use.
For example, to convert from GBP to EUR: http://download.finance.yahoo.com/d/quotes.csv?s=GBPEUR=X&f=sl1d1t1ba&e=.csv
gives you data in CSV format, which can easily be parsed.
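The response is one CSV line per symbol; with f=sl1d1t1ba the fields are symbol, last rate, date, time, bid, and ask. A minimal C++ parsing sketch (the sample line in the test is illustrative, not live data, and this naive split assumes no commas inside quoted fields):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split one line of the quotes.csv response into fields, stripping the
// double quotes Yahoo puts around string fields. Naive: does not handle
// commas embedded inside quoted fields, which currency quotes don't have.
std::vector<std::string> split_csv(const std::string& line) {
    std::vector<std::string> fields;
    std::stringstream ss(line);
    std::string field;
    while (std::getline(ss, field, ',')) {
        if (field.size() >= 2 && field.front() == '"' && field.back() == '"')
            field = field.substr(1, field.size() - 2);
        fields.push_back(field);
    }
    return fields;
}
```

After splitting, fields[1] (the last rate) is the value you would convert with strtod or std::stod before using it in the app.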
Have you tried with http://openexchangerates.org ?
You can use its Forever Free Plan, which provides the following features:
Hourly rate updates
Daily historical data
1,000 API requests per month
Real-time financial services are usually not available for free. You will find a lot of delayed services (typically 15 min) for free, but you have to make sure the licensing allows you to use them in your own application as well.
There are some websites where you can get this information. The problem is that you usually have to pay for the service. Some examples:
http://www.xignite.com
http://fxtrade.oanda.co.uk/trade-forex/api/
http://forex-automatic-trading-systems.com/developers/api.htm
http://forexfeed.net/developer/php-forex-data-feed-api
I would suggest Mondor's web service, especially the WebAPI one. Not free, but it could be about $5 per year, depending on your architecture. Anyway, here's the link: http://mondor.org
Edit: apparently, they have free keys now, or I missed them previously.