What's the efficiency of the QSQLITE database driver in Qt? - c++

I am very new to databases and I am trying to implement an offline map viewer. What would be the efficiency of QSqlDatabase?
To take an extreme example: is it possible to download all the satellite images of the US, at every detail level, from Google's map server, store them in a local SQLite database, and still perform real-time queries based on my current GPS location?

The Qt database driver for SQLite uses SQLite internally (surprise!). So the question is really: is SQLite the right database to use? My answer: I would not use it to store geographical data; consider looking for a database that is optimized for that task.
If that is not an option, SQLite is really efficient. First check that your data is within its limits. Do not forget to create indexes and to analyze the database; then it should be able to handle your task. Here I assume you just want to fetch an image by its geographical position (other solutions can be a lot faster because your data is sortable, and if I remember correctly SQLite is not optimized for that).
Since you will be storing large blobs, you may also want to have a look at the Internal Versus External BLOBs in SQLite document. Maybe that already gives you your answer.
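As a rough sketch of the indexing advice above, here is what a tile cache keyed by zoom level and tile coordinates could look like. This uses Python's stdlib sqlite3 module for illustration; the table layout, the slippy-map tile math, and the placeholder PNG bytes are all assumptions for the example, not anything prescribed by Qt or SQLite.

```python
import math
import sqlite3

def deg2tile(lat, lon, zoom):
    # Standard slippy-map tile math: convert a GPS position to tile indices.
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

con = sqlite3.connect(":memory:")  # use a file path for a real tile cache
con.execute("""CREATE TABLE tiles (
    zoom INTEGER, x INTEGER, y INTEGER, image BLOB,
    PRIMARY KEY (zoom, x, y))""")  # the primary key doubles as the lookup index

# Store a placeholder tile blob, then fetch it back by GPS position.
x, y = deg2tile(40.7128, -74.0060, 12)  # New York City, zoom level 12
con.execute("INSERT INTO tiles VALUES (?, ?, ?, ?)", (12, x, y, b"\x89PNG..."))
con.execute("ANALYZE")  # give the query planner statistics about the table

row = con.execute(
    "SELECT image FROM tiles WHERE zoom = ? AND x = ? AND y = ?",
    (12, *deg2tile(40.7128, -74.0060, 12))).fetchone()
```

With the primary key covering (zoom, x, y), a point lookup like this stays a single indexed seek regardless of how many tiles the table holds.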

Related

Bulk geocoding for associating addresses to shapefiles [here-api]

I was wondering if I can use the HERE API to geocode addresses in bulk and store the results. I need to store them in a Python data frame to afterwards merge them with a shapefile. However, HERE's TOS say that you cannot store the results of geocoding.
I've been in your situation. Let me list the options we considered.
1. You could fetch address information dynamically using their Geocoding API. The advantage is that you'll always have up-to-date information, and you don't need to query places that you'll never use anyway.
2. I'm assuming that your shapefile is based on HERE data. You could still try to use OpenStreetMap. You'll get some inaccuracies if the map data differs, but it's free, and not as restrictive as HERE.
3. You could opt to buy the HERE map data, to work around the TOS restrictions.
By the way, we went for option 4: switching to OpenStreetMap entirely. But it sounds like you don't have that luxury, because you have to work with existing shapefiles that you want to "enrich".

How to keep the SQLite database file in RAM?

Okay, I know this question feels weird, but let me present my problem. I am building a Qt-based application with SQLite as my database, and I have noticed a few things. Whenever you manipulate rows one by one directly on the SQLite file, it seems slow, because every change is an I/O operation on a file stored on the hard drive. When I use an SSD instead of an HDD, the speed improves considerably, because the SSD has higher I/O throughput. But if I load a table into a QSqlTableModel, make all my changes there, and then save, the speed is good: the data is fetched from the SQLite file in one query and held in RAM, so there are far fewer I/O operations.
So it got me thinking: is it possible to load the SQLite file into RAM when my application launches, perform all my operations there, and then, when the user chooses to close, save the file back to the HDD? One might ask why I don't just use QSqlTableModel itself, but some of my cases involve creating and deleting tables, which Qt doesn't support out of the box; I need to execute queries for that. If anyone can point me to a way to achieve this in Qt, that would be great!
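SQLite itself supports exactly this workflow: open a ":memory:" database, work in RAM, and copy the whole database to disk in one pass with the backup API. Here is a minimal sketch using Python's stdlib sqlite3 module (the table name and contents are made up); Qt's QSQLITE driver also accepts ":memory:" as the database name, but it does not expose the backup API, so from Qt you would attach the disk file and copy tables with SQL instead.

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "snapshot.db")

# Work entirely in RAM: every INSERT/UPDATE touches only memory.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
mem.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))

# At shutdown: copy the whole in-memory database to disk in one pass.
disk = sqlite3.connect(db_path)
mem.backup(disk)
disk.close()
mem.close()
```

The trade-off is durability: if the application crashes before the final backup, everything done since launch is lost, so periodic intermediate backups are worth considering.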

Are small changes in large documents a thing document databases are good for?

Sometimes a document, with its free-form structure, is attractive for storing data (in contrast to a relational database). But one problem is persistence in combination with making small changes to the data, since the entire document has to be rewritten to disk.
So my question is: are "document databases" made especially to solve this?
UPDATE
I think I understand the concept of "document-oriented databases" better now. They obviously don't store documents of just any kind; each implementation uses its own format, such as, for instance, JSON. And then the answer to my question also becomes obvious. If the entire JSON-structure had to be rewritten to disk after each change to keep it persisted, it wouldn't be a very good database.
If the entire JSON-structure had to be rewritten to disk after each change to keep it persisted, it wouldn't be a very good database.
I would say this is not true of any document database I know of. For example, Mongo doesn't store documents as JSON; it stores them as BSON (http://en.wikipedia.org/wiki/BSON).
Also, databases like Mongo will keep documents in RAM and persist them to disk later. In fact, many document databases follow that pattern of keeping documents in main memory and then writing them to disk.
But the fact that a given document database writes data to disk, and the fact that some documents might get changed a lot, does not mean the database is non-performant. I wouldn't disregard document databases based on speculation.

Raw Binary Tree Database or MongoDb/MySQL/Etc?

I will be storing terabytes of information, before indexes, and after compression methods.
Should I code up a Binary Tree Database by hand using sort files etc, or use something like MongoDB or even something like MySQL?
I am worried about the (space) cost per record with MySQL and the other databases that are around. I also know that some databases allow compression, but they convert the tables to read-only, and these tables/records need to be read and overwritten with new data fairly often. I think that if I coded something up in C++ I'd be able to keep the space cost per record to a minimum.
What should I do?
There are new non-relational databases becoming popular these days that specialize in managing large-scale data.
Check out Hadoop or Cassandra; both are Apache projects.

What is the easiest way to persist maps/structs in Clojure?

The obvious way is to load the JDBC support from Clojure Contrib and write some functions to translate a map/struct to a table. One drawback is that this isn't very flexible: changes to your structure will require DDL changes, which implies either writing DDL generation (tough) or hand-coding migrations (boring).
What alternatives exist? Answers must be ACID, ruling out serializing to a file, etc.
FleetDB is a database implemented in Clojure. It has a very natural syntax for working with maps/structs, e.g. to insert:
(client ["insert" "accounts" {"id" 1, "owner" "Eve", "credits" 100}])
Then to select:
(client ["select" "accounts" {"where" ["=" "id" 1]}])
http://fleetdb.org/
One option for persisting maps in Clojure that still uses a relational database is to store the map data in an opaque blob. If you need the ability to search for records, you can store indexes in separate tables. For example, you can read how FriendFeed stores schemaless data on top of MySQL: http://bret.appspot.com/entry/how-friendfeed-uses-mysql
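To make the blob-plus-index-tables idea concrete, here is a minimal sketch using Python's stdlib sqlite3 and json modules (the table names, the "owner" index field, and the sample record are all invented for the example; the FriendFeed article uses MySQL and its own schema).

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
# Entities are opaque serialized blobs; each searchable field gets its
# own narrow index table that maps field value -> entity id.
con.execute("CREATE TABLE entities (id TEXT PRIMARY KEY, body BLOB)")
con.execute("CREATE TABLE index_owner (owner TEXT, entity_id TEXT)")
con.execute("CREATE INDEX idx_owner ON index_owner (owner)")

def put(entity):
    blob = json.dumps(entity).encode()  # any serialization format works here
    con.execute("INSERT OR REPLACE INTO entities VALUES (?, ?)",
                (entity["id"], blob))
    con.execute("INSERT INTO index_owner VALUES (?, ?)",
                (entity["owner"], entity["id"]))

def find_by_owner(owner):
    rows = con.execute(
        "SELECT e.body FROM entities e JOIN index_owner i ON e.id = i.entity_id "
        "WHERE i.owner = ?", (owner,))
    return [json.loads(body) for (body,) in rows]

put({"id": "1", "owner": "Eve", "credits": 100})
```

The blob schema never changes when a map gains new keys; only fields you actually query need an index table.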
Another option is to use the Entity-Attribute-Value (EAV) model for storing data in a database. You can read more about EAV on Wikipedia (I'd post a link, but I'm a new user and can only post one link).
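For comparison, here is what the EAV approach could look like, again sketched with Python's stdlib sqlite3 module (table and entity names are made up). One row per (entity, attribute, value) triple means no DDL change when a map gains a new key; the usual EAV drawback, that all values collapse to one storage type, is visible in the code.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One row per key of each stored map.
con.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")

def save_map(entity, m):
    # Note: every value is stringified, losing its original type.
    con.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                    [(entity, k, str(v)) for k, v in m.items()])

def load_map(entity):
    rows = con.execute(
        "SELECT attribute, value FROM eav WHERE entity = ?", (entity,))
    return dict(rows)

save_map("account-1", {"owner": "Eve", "credits": 100})
```

Loading reconstructs the map, but numbers come back as strings unless you store type tags alongside the values.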
Yet another option is Berkeley DB Java Edition: a native Java solution providing ACID guarantees and record-level locking. (Same problem with posting a link.)
Using CouchDB's Java-client lib and clojure.contrib.json.read/write works reasonably well for me. CouchDB's consistency guarantees may not be strong enough for your purposes, though.
Clj-record is an implementation of Active Record in Clojure that may be of interest to you.
You could try one of the Java-based graph databases, such as Neo4J. It might be easy to code up a hashmap interface to make it reasonably transparent.
MongoDB and its Clojure wrapper congomongo (lein: [congomongo "0.1.3-SNAPSHOT"]) work for me. The schemaless database is incredibly nice to work with, and congomongo is quite easy to get along with. MongoDB adds an _id field to every document to keep it identified, and the mapping between Clojure maps and Mongo documents is quite transparent.
https://github.com/somnium/congomongo
EDIT: I would not use MongoDB today. I would suggest you use Transit instead: the JSON encoding if your backend (Postgres etc.) supports it, or the msgpack encoding if you want a more compact binary format.