In-memory data structure - C++

I have a distributed application: a set of processes, spread across multiple computers, communicating with each other. I have a data structure which is modified by these processes, and it is not stored in a database.
Now the question is: how do I maintain the same view of this data structure across all processes?
That is, at any point in time, all processes should see the same data structure.

You say that you don't have a database. That's a shame, because database authors have solved your problem. You would need to incorporate the equivalent technology in your project, and obviously the fastest and simplest way to incorporate the technology of databases is to incorporate a database.

Redis is designed to solve your problem. It is a key-value store for sharing data between programs running on different machines. It is a server you run somewhere, and your programs all connect to it using the client library it provides.
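For example, here is a minimal sketch using the hiredis C client (the host, port, and key names are placeholders chosen for illustration, and error handling is abbreviated):

    // Minimal sketch using the hiredis C client; host, port, and key
    // names are placeholders chosen for illustration.
    #include <hiredis/hiredis.h>
    #include <cstdio>

    int main() {
        redisContext *c = redisConnect("127.0.0.1", 6379);
        if (c == nullptr || c->err) return 1;

        // One process updates the shared structure...
        redisReply *reply =
            (redisReply *)redisCommand(c, "HSET shared:state node1 ready");
        freeReplyObject(reply);

        // ...and any other process, on any machine, sees the same value.
        reply = (redisReply *)redisCommand(c, "HGET shared:state node1");
        if (reply->type == REDIS_REPLY_STRING)
            printf("node1 = %s\n", reply->str);
        freeReplyObject(reply);

        redisFree(c);
        return 0;
    }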
You can also use a database such as MySQL, but with in-memory tables.
If your data structure does not fit the key-value or relational models well, you have the same kind of situation as multi-player games. Syncing state in multi-player games is non-trivial, but it can be done, and gafferongames.com has an excellent introduction to how.

I would recommend something like the Data Distribution Service (DDS) platform for something like this (an open-source implementation is OpenDDS). Its key selling point is that it is designed to propagate changes to data to everyone interested in those changes. And performance isn't bad either.
Commercial implementations of this protocol are used in a variety of real-time systems, mostly military-grade applications.
More options to consider: distributed caches such as memcached. Though I've not played with memcached myself, it looks quite straightforward to get up and running.

Related

Achieving high concurrency in app that performs reading/writing to database?

I'm working to design a middle layer for an application that will receive up to ~5000 requests every few seconds and needs to retrieve information from a database. I've been looking at using the Play Framework (I use Scala for my REST API design) as they say it's fully async and built on Akka. However, the main bottleneck of any solution seems to be the reads/writes to the database: many databases cannot support simultaneous reads/writes at such a scale. How is such high concurrency achieved for an app like this? I would guess Facebook/Twitter/(name other big company) have achieved this for their applications, as millions of people may be using them concurrently.
As Tim's comment says, caching may or may not help in your case. If not, I would also recommend looking into horizontally scalable databases, for example CockroachDB if you want a transactional SQL DB. Otherwise there are many NoSQL choices, MongoDB et al. And if you really want to stick to traditional SQL systems, you'll have to vertically scale your servers (buy the most expensive hardware) and work with read replicas.
A huge component is your data model and query access pattern. If each query increments a shared counter that has to be synchronized, there will be a ton of contention; if, at the other end of the spectrum, each query touches completely separate data, there will be much less contention.
I think there are a couple of dimensions I would consider:
Data Schema and Access Patterns (discussed above)
Language Choice
This is important because if you were in a web server context and using prefork by default, each process may have its own connection to the database. In an environment like Python or Ruby you may need hundreds of processes to handle your load. Contrast this with Akka or another async networking-based runtime (Node, Python gevent/asyncio, Go, etc.), where a single instance with a small thread pool can handle a large number of requests. Each has its tradeoffs.
Distributed Systems
Depending on your data schema and access patterns, 5000 requests per second to an RDBMS is completely achievable. It would probably require relatively beefy hardware, but I've personally done it a number of times. Getting to larger scales requires more computers in order to distribute the work/load. If your workload is read-heavy and you can tolerate potentially stale reads, a read replica is one option: with another machine in the mix, reads are distributed over two machines, but writes are still directed at a single machine (the leader). Caching is another option.
At much higher workloads, some sort of partitioning needs to occur in order to overcome the constraints of a single machine; see https://github.com/vitessio/vitess for one example.
Many of the big contenders have solutions to horizontally scaling their databases. This has many drawbacks as well and will require careful planning.
The one thing I'd recommend: if 5000 requests per second is the projection for the near future, start with the minimal amount of hardware necessary (a single instance). Query patterns and operations get exponentially more complicated with a distributed database.

SQLite, iCloud, and maybe Core Data—which to use for storing files and sharing them with all of the user's devices?

I've been tasked with porting personal face recognition software to iOS and Mac OS X as well as helping keep the basic SDK and much of the software as cross-platform as possible. One of the things one of my associates and I want to do is store data on the user's face in an SQL database (probably SQLite). We would also like to allow users to put their data on iCloud so they don't have to train each of their devices separately to recognize them. What's bugging me is how to do both these tasks, and I'm confronted with enough choices to feel overwhelmed. (I am still new to some of the technologies involved.)
For implementing SQL, I could embed SQLite directly in my program and write code for it, or I could use Core Data and have it talk to SQLite for me. (The database is not meant to be shared, so this is OK. And SQL is not fun.) However, Core Data is anything but portable (not to mention not intended for a model encoded as C++ objects), while writing directly for SQL would mean we could reuse more code on other platforms.
Things get messier when factoring in iCloud, which has something like five or six possible ways of integrating it with a program. The only method I have definitively ruled out so far is iCloud key-value storage. (At the very least, there's a good chance a user would get into trouble with the 1 MB limit, and it is clearly not intended for anything as complex as I'm dealing with.) Core Data can integrate with iCloud through UIManagedDocument or NSPersistentStore, but, again, that means less in the way of reusable code. I can use SQLite together with UIDocument or NSDocument, but what I am trying to do seems to be not quite what these objects were intended for. The files I am dealing with are essentially large preference files, not meant for end-users to interact with directly; UIDocument and NSDocument seem to be meant for user-viewable and -editable files. And then there are iCloud Drive and CloudKit, which are still in beta. (On the other hand, these two are due to be released fairly soon. Considering that iOS users tend to upgrade to the latest version of the system software quickly, arguments about using either of these based on how many devices they will be able to run on should quickly become weak and obsolete.)
Can anyone recommend which way is best suited for my purposes? Thanks in advance.
Aaron Solomon Adelman
First off, you don't want to try and share the SQLite file directly. That's extremely likely to corrupt the file, because SQLite wasn't built with that kind of use in mind.
However:
You could use SQLite for local on-device storage only and use a separate API to send data back and forth. Apple's CloudKit would probably be a good choice if you can require iOS 8+. Numerous third party solutions exist (for example, Parse). You'll have to write your own code to translate between SQLite and the network API. Your SQLite schema and your data files would be portable to other platforms, and maybe some of the code if you use SQLite's direct API instead of an Objective-C wrapper (and I highly recommend using either FMDB or PLDatabase if you use SQLite).
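As a rough illustration of the direct API, here is a minimal sketch (the schema and file name are invented for the example):

    // Minimal sketch of SQLite's direct C API; the schema and file name
    // are invented. The same calls, and the same database file, work on
    // iOS, OS X, and other platforms.
    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        sqlite3 *db = nullptr;
        if (sqlite3_open("faces.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        char *err = nullptr;
        int rc = sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS face_data ("
            "  id INTEGER PRIMARY KEY,"
            "  label TEXT NOT NULL,"
            "  features BLOB);",
            nullptr, nullptr, &err);
        if (rc != SQLITE_OK) {
            fprintf(stderr, "exec failed: %s\n", err);
            sqlite3_free(err);
        }

        sqlite3_close(db);
        return 0;
    }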
Core Data does have built-in support for iCloud, which probably makes it a viable option. Your comment that "...I could use Core Data and have it talk to SQLite for me." suggests you might have somewhat misunderstood Core Data. Core Data is not a SQLite wrapper; it presents a completely different API, and uses its own schema. You can't really take a Core Data persistent store file and use it on other platforms unless you want to spend some time reverse-engineering the schema. Also, using Core Data with iCloud does not require the use of UIManagedDocument, though it does still require a lot of other Core Data-specific classes.
If you want to be able to sync data across multiple devices which are not all Apple devices then you need a third party API. None of Apple's cloud APIs will be useful here. There are many providers that can help out with this. For local data storage, either SQLite or Core Data would work, but you should look at the third party services and see what storage option(s) they support, then try to work with them.
The best approach depends on your needs. If you expect to copy data files from an iOS app to other platforms, SQLite is good, though you'll still have a lot of platform-specific code, so the savings are smaller than they might appear. If you don't plan to move data files around like that, Core Data is probably easier to deal with.

How should a Windows 8 Metro Application connect to a central database?

How should a Windows 8 Metro application connect to a central database?
I've read about local storage, but I haven't read anything about connecting to a central database.
Obviously, this architectural design decision needs to support the disconnected scenario.
WCF web services seem to make sense.
But even if they do make sense, should we really create separate methods for all read/write operations?
Or are OData WCF services the way to go?
It seems like tablet software architecture should be able to borrow a lot from smartphone software architecture (but I am new to both).
Has Microsoft made any recommendations in its app samples?
It appears that others are asking similar questions on the Microsoft Developer Forums.
Here is what I've found:
According to Tim Heuer:
...You cannot directly have a SQL db embedded in your app or use
something like ADO.NET. This is more of an async/services
infrastructure. So if your data was exposed via services, then of
course you could connect that way. There are some other light-weight
methods you could use for local storage as well using things like the
Windows.Storage namespace (which is similar to Isolated Storage in
.NET).
Morten Nielsen agrees:
You can use HttpClient to download pretty much anything from the web.
Why don't you configure your WCF service to return data as JSON, and
use the DataContractJsonSerializer to deserialize the results?
Also, Tim Heuer cautions:
...Please note that while awesome, the SQLWinRT project on codeplex is a
wrapper to communicate with the classic SQLite engine...which uses
APIs that would not pass store validation currently.
Generic Object Storage Helper for WinRT and WinRT File Based Database seem to have some promise.
But Daniel Stolt raises some good points:
It's awesome that there is good support for building OData clients and
other REST clients - but this only addresses the online scenario. The
"structured" part of Windows.Storage is a very limited model,
essentially limited to name/value pairs, insufficient for all but the
most basic scenarios. Yes there is local file storage, which is great
of course. But forcing every app developer out there to build her own
DBMS on top of local file storage will simply not cut it, especially
with all of System.Data having been removed from the profile. If local
file storage was sufficient for most device apps, then things like
SQLCE would have no purpose today already. And SQLCE clearly has a
purpose, and has played a very important role for occasionally
connected device apps for a very long time. There is also a tremendous
need for synchronization with a server-side database such as SQL
Azure, mostly to be able to roam data between devices. Yes there is
the roaming storage model in WinRT, but it shares the same limitations
of local storage mentioned above, and on top of this is very limited
in capacity (currently 30KB if memory serves). It is simply
insufficient for all but the simplest roaming data needs. Again,
forcing every app developer to design and implement her own
synchronization solution is very bad. You can do much better to enable
developers.
Many people are disappointed that the System.Data namespace is not supported in WinRT.
Richard Bethell said:
I don't even have words for this. This is astonishing. Leave aside for
the moment they want to force you to abstract to middleware for
database connectivity - I don't agree, but I can quasi understand a
rationale for that. I can even see pathways for developing like that.
But no System.Data.... at all? Do you even understand what you've done
to us?
What System.Data can do, outside of just having providers for Sql,
OleDb and other custom providers like Oracle, is provide a rich
abstraction of XML datasets that allow you to very quickly build a
data oriented Service Oriented Architecture.
For instance, I can easily create a web service using SOAP or WCF that
returns DataSets or DataTables, and then consume those objects easily
and directly. Being able to do this allows very rapid construction of
n-tier architectures, even without direct data connections available.
Without System.Data, and the power of DataViews, DataTables, etc. this
gets a lot harder. Sure you can custom create structs, put data in
there, and serve up structs, and use Linq to do whatever sorting,
filtering, etc. you want to do.... but it ends up being twice the
work, and makes code reuse a lot harder. And it means using our
existing service oriented architecture is impossible (without a big
overhaul.)
The withdrawal of System.Data is as big a thing for developers to deal
with as the loss of the Printer object in VB6 to vb.net 1.0 was. What
is harder to understand in this case is why it is necessary -
re-enabling it in the Metro profile can't possibly be a technical
difficulty of the product, can it?
It is valuable enough that I would seriously consider including Mono's
System.Data classes as part of any app I create (which would obviously
have to be open source.)
I think that this is another of those "it depends" questions...
The first and most obvious issue is that whether the premise - "obviously, this needs to support the disconnected scenario" - is actually true depends very much on the context in which the application will run. If the app is an internal corporate app, then quite possibly not: in that case, no DB == doesn't work.
Secondly, you could look (hmm, rash... one assumes you could look; this could be a bad assumption) at database synchronisation between a local SQL database and the remote DB, and so on and so forth.
Taking a step back... yes, you're absolutely right to look at this as being the same as phone or Silverlight development (although I don't know if there is RIA support yet). But the thing is, at this point it's very hard to be prescriptive, because given a general-purpose platform one can write applications to suit all sorts of purposes.
Not a hugely helpful answer really - but a start.
Having read @Jim G's answer, it seems that I should probably withdraw mine?

Distributing state across many machines

I'm trying to write up a tool that requires knowledge of the state of other machines in a cluster (local LAN). This is for a network failover/high availability system similar to VRRP and corosync/openais, but I wish to include more information (such as near real-time speed/performance characteristics) so devices can make more intelligent choices. This means using a protocol more complicated than a predetermined weight-based mechanism: by allowing all clustered machines to see the state of each other, they can communally agree on which is the most suitable to be the master device.
From my searches, I haven't found any (C, C++ or JavaME) libraries that offer a distributed state mechanism. Ideally, I'm looking for something that broadcasts/multicasts each individual machine's state periodically, so participating machines can build up a global state table and all can see who the master should be. State in this case is arbitrary key/value pairs.
I'd rather not re-invent any wheels so am curious to know if anyone here can point me in the right direction?
If I were you I'd investigate memcached (memcached.org) or one of the NoSQL variants.
It sounds like Apache ZooKeeper might be a good match. It's a distributed, hierarchical key-value store. To quote their Overview page:
ZooKeeper was designed to store coordination data: status information, configuration, location information, etc.
Here's an example of a simple Leader Election recipe, although it would require adaptation to determine a leader by some weighted criterion.
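As a rough sketch of the first step of that recipe using the ZooKeeper C client (the paths and data are invented, and a real election must also watch the predecessor node to detect leader failure):

    // Rough sketch of the ephemeral-sequential election recipe with the
    // ZooKeeper C client. The /election parent node is assumed to exist;
    // a production version must also watch the predecessor znode.
    #include <zookeeper/zookeeper.h>
    #include <cstdio>

    static void watcher(zhandle_t *, int, int, const char *, void *) {}

    int main() {
        zhandle_t *zh = zookeeper_init("127.0.0.1:2181", watcher, 10000,
                                       nullptr, nullptr, 0);
        if (!zh) return 1;

        // Each participant creates an ephemeral, sequential znode.
        char path[256];
        zoo_create(zh, "/election/candidate-", "state", 5,
                   &ZOO_OPEN_ACL_UNSAFE, ZOO_EPHEMERAL | ZOO_SEQUENCE,
                   path, sizeof(path));

        // The candidate whose znode has the lowest sequence number leads.
        struct String_vector children;
        if (zoo_get_children(zh, "/election", 0, &children) == ZOK) {
            printf("I am %s among %d candidates\n", path, children.count);
            deallocate_String_vector(&children);
        }

        zookeeper_close(zh);
        return 0;
    }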
I'm not sure if there is an existing application for your purpose or not, but I know that you can write a simple program with the MPI library and broadcast any information that you want.
All clients can send their state to the root node, and the root node then broadcasts the message.
functions that you need for this are:
MPI_Bcast
MPI_Send
MPI_Recv
There are lots of C++/MPI tutorials on the net; just google it!
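A minimal sketch of that pattern (the fixed-size state struct and the metric in it are assumptions made for illustration):

    // Every process sends its state to rank 0, which assembles a table
    // and broadcasts it to all ranks. Build with mpic++, run with mpirun.
    #include <mpi.h>
    #include <cstdio>

    struct NodeState { int rank; double load; };  // placeholder metric

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        NodeState mine{rank, 0.5 * rank};  // stand-in for a measurement
        NodeState table[64];               // assumes size <= 64

        if (rank == 0) {
            table[0] = mine;
            for (int i = 1; i < size; ++i)
                MPI_Recv(&table[i], sizeof(NodeState), MPI_BYTE, i, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Send(&mine, sizeof(NodeState), MPI_BYTE, 0, 0,
                     MPI_COMM_WORLD);
        }

        // Every rank takes part in the broadcast of the assembled table.
        MPI_Bcast(table, size * (int)sizeof(NodeState), MPI_BYTE, 0,
                  MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }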

At what point is it worth using a database?

I have a question relating to databases: at what point is it worth diving into one? I am primarily an embedded engineer, but I am writing an application using Qt to interface with our controller.
We are at an odd point where we have enough data that it would be feasible to implement a database (around 700+ items and growing) to manage everything, but I am not sure it is worth the time right now. I have no problem implementing the GUI with files generated from Excel and parsed in, but it gets tedious and hard to track even with VBA scripts. I have been playing around with converting our data into something more manageable for the application side with Microsoft Access, and that seems to be working well. If that works out, I am only a step (or several) away from using an SQL database and the Qt library to access and modify it.
I don't have much experience managing data at this level and am curious what may be the best way to approach this. So what are some of the real benefits of using a database if any in this case? I realize much of this can be very application specific, but some general ideas and suggestions on how to straddle the embedded/application programming line would be helpful.
This is not about putting a database in an embedded project. It is also not a business type application where larger databases are commonly used. I am designing a GUI for a single user on a desktop to interface with a micro-controller for monitoring and configuration purposes.
I decided to go with SQLite. You can do some very interesting things with data that I didn't really consider an option when first starting this project.
A database is worthwhile when:
Your application evolves to some form of data-driven execution.
You're spending time designing and developing external data storage structures.
Sharing data between applications or organizations (including individual people).
The data is no longer short and simple.
Data duplication.
Evolution to Data Driven Execution
When the data is changing but the execution is not, this is a sign of a data-driven program, or at least that parts of the program are data-driven. A set of configuration options is a sign of a data-driven function, though the whole application may not be data driven. In any case, a database can help manage the data. (The database library or application does not have to be huge like Oracle; it can be lean and mean like SQLite.)
Design & Development of External Data Structures
Posting questions to Stack Overflow about serialization, or about converting trees and lists to use files, is a good indication your program has graduated to using a database. Likewise, if you are spending any amount of time designing algorithms to store data in a file, or designing the layout of data in a file, it is a good time to research the usage of a database.
Sharing Data
Whether your application is sharing data with another application, another organization or another person, a database can assist. By using a database, data consistency is easier to achieve. One of the big issues in problem investigation is that teams are not using the same data: the customer may use one set of data, the validation team another, and development a third. A database makes versioning the data easier and allows all parties to use the same data.
Complex Data
Programs start out using small tables of hard-coded data. This evolves into using dynamic data with maps, trees and lists. Sometimes the data expands from a simple two columns to eight or more. Database theory and databases can ease the complexity of organizing data. Let the database worry about managing the data, and free up your application and your development time. After all, how the data is managed is not as important as the quality of the data and its accessibility.
Data Duplication
Oftentimes, as data grows, there is an ever-growing tendency toward duplicate data. Databases and database theory can minimize the duplication of data. Databases can be configured to warn against duplications.
Moving to using a database has many factors to be considered. Some include but are not limited to: data complexity, data duplication (including parts of the data), project deadlines, development costs and licensing issues. If your program can run more efficiently with a database, then do so. A database may also save development time (and money). There are other tasks that you and your application can be performing than managing data. Leave data management up to the experts.
What you are describing doesn't sound like a typical business application, and many of the answers already posted here assume that this is the kind of application you are talking about, so let me offer a different perspective.
Whether or not you use a database for 700 items is going to depend greatly on the nature of the data.
I would say that, about 90% of the time at this scale, you will benefit from a light-weight database like SQLite, provided that:
The data may potentially grow substantially larger than what you are describing,
The data may be shared by more than one user,
You may need to run queries against the data (which I don't think you're doing right now), and
The data can easily be described in table form.
The other 10% of the time, your data will be highly structured, hierarchical, object-based, and doesn't neatly fit into the table model of a database or Excel table. If this is the case, consider using XML files.
I know developers instinctively like to throw databases at problems like this, but if you are currently using Excel data to design user interfaces (or display configuration settings), rather than display a customer record, XML may be a better fit. XML is more expressive than either Excel or database tables, and can be easily manipulated with a simple text editor.
XML parsers and data binders for C++ are easy to find.
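For instance, here is a small sketch using pugixml, one such parser (the file layout and all element/attribute names are invented):

    // Sketch of reading hierarchical UI configuration with pugixml;
    // the element and attribute names are invented for the example.
    // Expected file shape:
    //   <config><screen name="main"><widget type="gauge" channel="7"/>
    //   </screen></config>
    #include <pugixml.hpp>
    #include <cstdio>

    int main() {
        pugi::xml_document doc;
        if (!doc.load_file("ui_config.xml")) return 1;

        for (pugi::xml_node screen : doc.child("config").children("screen")) {
            printf("screen: %s\n", screen.attribute("name").value());
            for (pugi::xml_node widget : screen.children("widget"))
                printf("  widget %s on channel %d\n",
                       widget.attribute("type").value(),
                       widget.attribute("channel").as_int());
        }
        return 0;
    }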
I recommend you introduce a database into your app; your application will gain flexibility and will be easier to maintain and improve with new features in the future.
I would start with a lightweight file-based DB like SQLite.
With a well designed db you'll have:
Reduced data redundancy
Greater data integrity
Improved data security
Last but not least, using a database will save you from the Excel import/update/export Hell!
Reasons for using a database:
Concurrent writes. It's easy to achieve concurrency in databases
Easy querying. SQL queries tend to be much more concise than procedural code for searching data. UPDATEs and INSERT INTOs can also do lots of work with very little code.
Integrity. Constraints are very easy to define and are enforced without writing code (see the sketch after this list). If you have a non-null constraint, you can rest assured that the value won't be null; no need to write checks anywhere. If you have a foreign key constraint in place, you won't have "dangling references".
Performance over large datasets. Indexing is very simple to add to an SQL database
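To illustrate the integrity point above, a sketch using SQLite (the schema is invented); the last INSERT violates the foreign key and is rejected by the database itself, with no application-side checks:

    // Constraint enforcement demo with SQLite; the schema is invented.
    #include <sqlite3.h>
    #include <cstdio>

    int main() {
        sqlite3 *db;
        sqlite3_open(":memory:", &db);
        char *err = nullptr;
        sqlite3_exec(db,
            "PRAGMA foreign_keys = ON;"
            "CREATE TABLE device (id INTEGER PRIMARY KEY,"
            "                     name TEXT NOT NULL);"
            "CREATE TABLE reading (device_id INTEGER NOT NULL"
            "                        REFERENCES device(id),"
            "                      value REAL);"
            "INSERT INTO device VALUES (1, 'controller');"
            "INSERT INTO reading VALUES (1, 3.14);"    // OK: device 1 exists
            "INSERT INTO reading VALUES (99, 2.71);",  // rejected: no device 99
            nullptr, nullptr, &err);
        if (err) {
            printf("constraint violation: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }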
Reasons for not using a database:
It tends to be an extra dependency (although there exist very lightweight databases- I like H2 for Java, for instance)
Data not well suited to a relational schema. Things that are basically key/value maps. XML (although databases often support XPath, etc.).
Sometimes files are more convenient. They can be diff'ed, merged, edited with a plain text editor, etc. Sometimes spreadsheets can be more practical (you don't have to build an editor- you can use a spreadsheet program)
Your data is already somewhere else
When you have a lot of data and you're not sure how it will be exploited in the future.
For example, you might want to add an SQLite database to an embedded application that needs to record statistics whose future use is unclear. Later you send the full database for injection into a bigger one running on a central server, and the data can easily be exploited using queries.
In fact, if your application's purpose is to "gather data", then having a database is a must-have.
I see quite a few requirements that are well met by databases:
1) Ad hoc queries. Find me all the {X} that meet criteria Y.
2) Data with structure that can benefit from normalisation - factoring out common values into separate "tables". You can save space and reduce the possibility of inconsistency this way, and once you've done this, those ad hoc queries start to be really useful.
3) Large data volumes. Professional databases are very good at making efficient use of resources, with clever query optimisations and paging strategies. Trying to write this stuff yourself is a real challenge.
You're clearly not needing that last one, but the other two may well apply to you.
Don't forget that the appropriate database can be quite different depending on your requirements (and don't forget that a text file can serve as a database if your requirements are simple enough - for example, config files are just a specific kind of database). Such parameters might be:
number of records
size of data items
does the database need to be shared with other devices? Concurrently?
how complex are the relationships between the various pieces of data
is the database read only (created at build time and not changed, for example)?
does the database need to be updated by multiple entities concurrently?
do you need to support complex queries?
For a database with 700 entries, an in-memory sorted array loaded from a text file might well be appropriate. But I could also see the need for an embedded SQL database, or for having the controller request data from a database over a network connection, depending on what the various requirements (and resource limitations) are.
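A minimal sketch of that in-memory approach, assuming a simple one-record-per-line text format:

    // Load key/value records from a text file into a sorted vector and
    // look them up with binary search; the file format is an assumption.
    #include <algorithm>
    #include <fstream>
    #include <string>
    #include <vector>
    #include <cstdio>

    struct Record { std::string key, value; };

    int main() {
        std::vector<Record> db;
        std::ifstream in("items.txt");
        Record r;
        while (in >> r.key >> r.value) db.push_back(r);

        std::sort(db.begin(), db.end(),
                  [](const Record &a, const Record &b) { return a.key < b.key; });

        // O(log n) lookup -- entirely adequate for ~700 entries.
        std::string wanted = "motor_rpm";  // hypothetical key
        auto it = std::lower_bound(db.begin(), db.end(), wanted,
            [](const Record &rec, const std::string &k) { return rec.key < k; });
        if (it != db.end() && it->key == wanted)
            printf("%s = %s\n", wanted.c_str(), it->value.c_str());
        return 0;
    }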
There isn't a specific point at which a database is worthwhile. Instead I usually ask the following questions:
Is the amount of data the application uses/creates growing?
Is the upper limit of this data growth unknown (or unclear)?
Will the application need to aggregate or filter this data?
Could there be future uses of the data that may not be obvious right now?
Is performance of data retrieval and/or storage important?
Are there (or could there be) multiple users of the application who share data?
If I answer 'Yes' to most of these questions I almost always choose a database (as opposed to other options such as XML/ini/CSV/Excel/text files or the filesystem).
Also, if the application will have many users who could be accessing the data concurrently, I'll lean towards a full database server (MySQL, SQL Server, Oracle etc).
But often in a single user (or small concurrency) situation, a local database such as SQLite cannot be beaten for portability and ease of deployment.
To add a negative: a database is not suitable for real-time processing, due to non-deterministic latency. However, it would be quite ample for looking up and setting operating parameters, for instance during startup. I would not put database accesses on critical time paths.
You don't need a database if you have a few thousand rows in one or two tables to handle in a single-user app (on the embedded point).
If it is for multiple users (concurrent access, locking) or you need transactions, you should definitely consider a database.
Handling complex data structures in normalized tables while maintaining integrity, or handling a huge amount of data, would be other indications that you should use a database.
It sounds like your application is running on a desktop computer and simply communicating to the embedded device.
As such using a database is much more feasible. Using one on an embedded platform is a much more complex issue.
On the desktop front, I use a database when there is a need to store new information continuously and a need to extract that information in a relational way. What I don't use databases for is storing static information - information I read once at load and that's it. The exception is when the application has many users and there is a need to store this information on a per-user basis.
It sounds to me like you're collecting information from your embedded device, storing it somehow, then using it later to display via a GUI.
This is a good case for using a database, especially if you can architect the system such that there is a data collection daemon that manages the continuous communication with the embedded device. This app can then just write the data into the database. When the GUI is launched it can extract the data for display.
Using the database will also ease your GUI development if you need to display different views, such as "show me all the entries between 2 dates". With a database you just ask for the correct values with a proper SQL query, and the GUI displays whatever comes back, allowing you to decouple much of the "business logic" from the GUI code.
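A sketch of that date-range query using Qt's SQL module (the table and column names are invented):

    // The GUI asks for a date range and renders whatever rows come back;
    // the table and column names are invented for the example.
    #include <QSqlDatabase>
    #include <QSqlQuery>
    #include <QDateTime>
    #include <QVariant>
    #include <QDebug>

    void showReadings(const QDateTime &from, const QDateTime &to) {
        QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
        db.setDatabaseName("readings.db");
        if (!db.open()) return;

        QSqlQuery q;
        q.prepare("SELECT ts, value FROM readings "
                  "WHERE ts BETWEEN :from AND :to");
        q.bindValue(":from", from);
        q.bindValue(":to", to);
        q.exec();

        while (q.next())
            qDebug() << q.value(0).toDateTime() << q.value(1).toDouble();
    }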
We are also facing a similar situation. We have sets of data coming from different test setups, currently dumped into Excel sheets and processed using Perl or VBA.
We found this method had a lot of problems:
i. Managing data using Excel sheets is quite cumbersome. After some time you have a whole lot of Excel sheets and there is no easy way to retrieve the data you need from them.
ii. People start sending the Excel sheets back and forth for comments and review through e-mail, and e-mail becomes the primary mode of managing the comments related to the data. These comments are lost at a later point in time and there is no way of retrieving them.
iii. Multiple copies of the files get created, and changes in one copy are not reflected in the others - there is no versioning.
It is for these reasons that we decided to move to a database-based solution, which we are currently working on. Let me summarize what we are trying to do:
i. The database is on a central server, accessible by the PCs in all the test setups.
ii. All the data goes into a temporary location (files on the local hard disk) as soon as it is generated. From the files, it is pushed into the database by a process running in the background (so even if there is a network problem, the data will be present in the local file system).
iii. We have a web-based application which allows users to log in and access data in the format they want. The portal allows them to add comments, generate different kinds of reports, share data with other users after review, etc. It also has the ability to export data into an Excel sheet, just in case you need to take it with you.
Let me know if this could be better implemented.
"At what point is it worth using a database?"
If and when you've got data to manage?