I have an application written in C++ which uses an SQLite database to store information. I need a way of assigning a version number to the state of the database: when a newer version becomes available, the application should update the current database state to match it.
I am wondering whether it would be good practice to store the information required for this in a table. I would need the version number, plus some way of recording the tables and their columns associated with each version number, so that I can compare them.
I realise that the question Set a version to a SQLite database file is related; however, it doesn't quite answer mine, as I am unsure whether my approach is correct and, if so, how to go about achieving it.
All help is much appreciated.
Use PRAGMA user_version to read and store an integer version number in the database file.
When the version in your code and the version in the database file are the same, do nothing. When they differ, upgrade or downgrade accordingly and update the stored version number.
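For a C++ application, a minimal sketch of that check using the sqlite3 C API (the upgrade_schema() migration hook is hypothetical, and error handling is trimmed):

    #include <sqlite3.h>

    // Version the code expects; bump this whenever the schema changes.
    static const int kExpectedVersion = 2;

    // Hypothetical migration hook: applies the DDL that moves the schema forward.
    void upgrade_schema(sqlite3* db, int from, int to);

    void ensure_schema_version(sqlite3* db) {
        // Read the version currently stored in the database file.
        int current = 0;
        sqlite3_stmt* stmt = nullptr;
        if (sqlite3_prepare_v2(db, "PRAGMA user_version;", -1, &stmt, nullptr) == SQLITE_OK
                && sqlite3_step(stmt) == SQLITE_ROW) {
            current = sqlite3_column_int(stmt, 0);
        }
        sqlite3_finalize(stmt);

        if (current < kExpectedVersion) {
            upgrade_schema(db, current, kExpectedVersion);
            // PRAGMA does not support bound parameters, so format the value in.
            char sql[64];
            sqlite3_snprintf(sizeof(sql), sql, "PRAGMA user_version = %d;", kExpectedVersion);
            sqlite3_exec(db, sql, nullptr, nullptr, nullptr);
        }
    }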
Related
Are there any tried-and-true methods of managing your own sequential integer field without using SQL Server's built-in Identity Specification? I'm thinking this has to have been done many times over, and my Google skills are just failing me tonight.
My first thought is to use a separate table to manage the IDs and a trigger on the target table to handle setting the ID (a sketch of the allocation statement I have in mind is at the end of this question). Concurrency issues are obviously important, but insert performance is not critical in this case.
And here are some gotchas I know I need to look out for:
1. Need to make sure the same ID isn't doled out more than once when multiple processes run simultaneously.
2. Need to make sure any solution to 1) doesn't cause deadlocks.
3. Need to make sure the trigger works properly when multiple records are inserted in a single statement, not only one record at a time.
4. Need to make sure the trigger only sets the ID when it is not already specified.
The reason for the last point (and the whole reason I want to do this without an Identity Specification field in the first place) is that I want to seed multiple environments at different starting points, and I want to be able to copy data between them so that the ID for a given record remains the same across environments (and I have to use integers; I cannot use GUIDs).
(Also, yes, I could set IDENTITY_INSERT on/off to copy data and still use a regular Identity Specification field, but then it reseeds after every insert. I could use DBCC CHECKIDENT to reseed it back to where it was, but I feel the risk with this solution is too great. It only takes one mistake, and by the time we realized it, repairing the data would be a real pain... probably enough pain that it would have made more sense just to do what I'm proposing in the first place.)
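For the curious, here is a sketch of the single-statement claim I have in mind, with the T-SQL held in C++ string constants (exec_scalar() is a hypothetical stand-in for whatever call your data access layer provides, and all table and column names are placeholders):

    #include <string>

    // Hypothetical helper: runs a statement and returns the first column of
    // the first row. Stands in for ODBC/ADO/etc. in the real application.
    long long exec_scalar(const std::string& sql);

    // Run once: one row per table whose IDs we hand out ourselves.
    const char* kSetupSql =
        "CREATE TABLE IdAllocator (TableName sysname PRIMARY KEY,"
        "                          NextId    bigint  NOT NULL);";

    // Claim the next ID in a single atomic statement. Because the increment
    // and the read happen in one UPDATE, two concurrent callers can never
    // receive the same value and no explicit transaction or lock hints are
    // needed, which covers gotchas 1) and 2).
    long long next_id(const std::string& table) {
        // The table name comes from trusted code here, not from user input.
        return exec_scalar(
            "UPDATE IdAllocator SET NextId = NextId + 1 "
            "OUTPUT inserted.NextId "
            "WHERE TableName = '" + table + "';");
    }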
SQL Server 2012 introduced the concept of a SEQUENCE database object - something like an "identity" column, but separate from a table.
You can create a sequence and use it from your code, use its values in various places, and more (see the sketch after the links below).
See these links for more information:
Sequence numbers (MS Docs)
CREATE SEQUENCE statement (MS Docs)
SQL Server SEQUENCE basics (Red Gate - Joe Celko)
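A rough sketch of how this could look from application code, again with the T-SQL held in C++ strings (exec() and exec_scalar() are hypothetical stand-ins for your client library, and the object names are made up):

    #include <string>

    // Hypothetical stand-ins for whatever client library the application uses.
    void exec(const std::string& sql);
    long long exec_scalar(const std::string& sql);

    void demo() {
        // Create once per environment; START WITH lets each environment seed
        // at a different starting point, as the question requires.
        exec("CREATE SEQUENCE dbo.RecordId AS bigint START WITH 100000 INCREMENT BY 1;");

        // Ask for values explicitly where you need them...
        long long id = exec_scalar("SELECT NEXT VALUE FOR dbo.RecordId;");

        // ...or bind the sequence as a column default, so inserts that omit
        // the ID get one automatically, while explicitly supplied IDs (e.g.
        // rows copied from another environment) are kept as-is.
        exec("ALTER TABLE dbo.Record ADD CONSTRAINT DF_Record_Id "
             "DEFAULT (NEXT VALUE FOR dbo.RecordId) FOR Id;");
    }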
I am very new to databases, and I am trying to implement an offline map viewer. How efficient would QSqlDatabase be?
To take it to the extreme: is it possible, for example, to download the satellite imagery for every detail level of the US from Google's map servers, store it in a local SQLite database, and still perform real-time queries based on my current GPS location?
The Qt database driver for SQLite uses SQLite internally (surprise!). So the question is really: is SQLite the right database to use? My answer: I would not use it to store geographical data; consider looking for a database which is optimized for this task.
If this is not an option: SQLite is really efficient. First check that your data is within SQLite's limits. Do not forget to create indexes and to analyze the database; then it should be able to handle your task. Here I assume you just want to fetch an image by its geographical position (dedicated spatial databases can still be a lot faster, because such data is sortable in ways that, if I remember correctly, SQLite is not optimized for).
As you will store large blobs, you may want to have a look at the Internal Versus External BLOBs in SQLite document. Maybe this gives you the answer already.
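To make that concrete, a minimal sketch against the sqlite3 C API (which is what Qt's SQLite driver wraps). The tiles table and its zoom/x/y key are assumptions about how the imagery would be sliced; with the composite primary key in place, each lookup is a single index probe:

    #include <sqlite3.h>
    #include <vector>

    // Assumed schema: one row per map tile, keyed by zoom level and tile x/y.
    // The composite primary key doubles as the index that makes lookups fast.
    const char* kSchema =
        "CREATE TABLE IF NOT EXISTS tiles ("
        "  zoom  INTEGER NOT NULL,"
        "  x     INTEGER NOT NULL,"
        "  y     INTEGER NOT NULL,"
        "  image BLOB    NOT NULL,"
        "  PRIMARY KEY (zoom, x, y));";

    // Fetch one tile's image bytes; returns an empty vector if it is missing.
    std::vector<unsigned char> load_tile(sqlite3* db, int zoom, int x, int y) {
        std::vector<unsigned char> bytes;
        sqlite3_stmt* stmt = nullptr;
        if (sqlite3_prepare_v2(db,
                "SELECT image FROM tiles WHERE zoom = ?1 AND x = ?2 AND y = ?3;",
                -1, &stmt, nullptr) == SQLITE_OK) {
            sqlite3_bind_int(stmt, 1, zoom);
            sqlite3_bind_int(stmt, 2, x);
            sqlite3_bind_int(stmt, 3, y);
            if (sqlite3_step(stmt) == SQLITE_ROW) {
                const unsigned char* p =
                    static_cast<const unsigned char*>(sqlite3_column_blob(stmt, 0));
                int n = sqlite3_column_bytes(stmt, 0);
                if (n > 0) bytes.assign(p, p + n);
            }
        }
        sqlite3_finalize(stmt);
        return bytes;
    }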
I am currently developing a Windows application that tests railroad equipment to find any defects.
Utility A => OK
Utility B => NOK
...
This application will check the given equipment and generate a report.
This report needs to be written once, and no further modifications are allowed, since the file can be used as proof that the equipment works.
My first idea was to use PDF files (the Haru library looks great), but PDFs can also be modified.
I told myself that I could obfuscate the report and implement a homemade reader inside my application, but however I store it, the file could always be accessed and modified, right?
So I'm running out of ideas.
Sorry if my approach and my problem appear naive, but it's an internship.
Thanks for any help.
Edit: I could also add checksums for the files after I generate them, keep a "checksums record file", and implement a checksum-comparison tool for verification? I just thought of this.
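Something like this OpenSSL-based helper is what I have in mind for the checksum step (a sketch only; error handling is trimmed):

    #include <openssl/evp.h>
    #include <cstdio>
    #include <string>

    // Compute the SHA-256 of a file as a lowercase hex string (empty on error).
    std::string sha256_file(const char* path) {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return "";

        EVP_MD_CTX* ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr);

        unsigned char buf[4096];
        size_t n;
        while ((n = std::fread(buf, 1, sizeof(buf), f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);
        std::fclose(f);

        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int len = 0;
        EVP_DigestFinal_ex(ctx, md, &len);
        EVP_MD_CTX_free(ctx);

        // Hex-encode the raw digest for the checksums record file.
        static const char* hex = "0123456789abcdef";
        std::string out;
        for (unsigned int i = 0; i < len; ++i) {
            out += hex[md[i] >> 4];
            out += hex[md[i] & 0x0f];
        }
        return out;
    }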
I believe the answer to your question is to use any format whatsoever and a digital signature anybody can verify: e.g., create a GnuPG key, get that key signed by the people who need to check your documents, upload it to one of the key servers, and use it to sign the documents. You can publish the documents with a link to your public key available for verification; for critical cases, whoever verifies must trust your signature (i.e., trust somebody who signed your key).
People's lives depend on the state of train inspections. Therefore, I find it hard to believe that someone expects you to solve this problem only using free-as-in-beer components.
Adobe supports a strong digital signature model. If you buy into their technology base, you can create PDFs that are digitally signed and therefore tamper-evident, as the consumer can check for the signature.
You can, as someone else pointed out, use GnuPG, or for that matter OpenSSL, to implement your own signature scheme, but railroad regulators are somewhat less likely to figure out how to work with it.
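If you do go that route, the OpenSSL flow is short. A hedged sketch (key loading and error handling omitted; the verifier uses the matching EVP_DigestVerify* calls with the public key):

    #include <openssl/evp.h>
    #include <vector>

    // Sign a report buffer with a previously loaded private key (RSA or EC).
    std::vector<unsigned char> sign_report(EVP_PKEY* key,
                                           const unsigned char* data, size_t len) {
        EVP_MD_CTX* ctx = EVP_MD_CTX_new();
        EVP_DigestSignInit(ctx, nullptr, EVP_sha256(), nullptr, key);
        EVP_DigestSignUpdate(ctx, data, len);

        size_t siglen = 0;
        EVP_DigestSignFinal(ctx, nullptr, &siglen);    // ask for the required size
        std::vector<unsigned char> sig(siglen);
        EVP_DigestSignFinal(ctx, sig.data(), &siglen); // produce the signature
        sig.resize(siglen);

        EVP_MD_CTX_free(ctx);
        return sig;
    }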
I would store reports in an encrypted/protected datastore.
When a user accesses a report (requests a copy; the original is of course always in the database and cannot be modified), it includes the text "Report #XXXXX". If you want to validate a report, retrieve a fresh copy from the system using the report ID.
I'm new to Elasticsearch, so this is probably something quite trivial, but I haven't figured out anything better than fetching everything, processing it with a script, and updating the records one by one.
I want to do something like a simple SQL update:
UPDATE RECORD SET SOMEFIELD = SOMEXPRESSION
My intent is to replace the actual bogus data with some data that makes more sense (so the expression is basically randomly choosing from a pool of valid values).
There are a couple of open issues about making it possible to update documents by query.
The technical challenge is that Lucene (the text search engine library that Elasticsearch uses under the hood) segments are read-only; you can never modify an existing document. What you need to do is delete the old version of the document (which, by the way, will only be marked as deleted until a segment merge happens) and index the new one. That's what the existing update API does. An update by query might therefore take a long time and lead to issues, which is why it hasn't been released yet. A mechanism for interrupting running queries would also be nice to have for this case.
But there's the update by query plugin that exposes exactly this feature. Just be aware of the potential risks before using it.
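For illustration, this is roughly what such a request looks like from C++ with libcurl. The sketch targets the _update_by_query endpoint that was later folded into Elasticsearch core; the index name, field name, and replacement value are made up:

    #include <curl/curl.h>

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();

        // Rewrite "somefield" on every matching document server-side instead
        // of fetching, editing, and re-indexing each document from the client.
        const char* body =
            "{"
            "  \"query\":  { \"match_all\": {} },"
            "  \"script\": { \"source\": \"ctx._source.somefield = params.v\","
            "                \"params\": { \"v\": \"fixed-value\" } }"
            "}";

        struct curl_slist* headers = nullptr;
        headers = curl_slist_append(headers, "Content-Type: application/json");

        curl_easy_setopt(curl, CURLOPT_URL,
                         "http://localhost:9200/myindex/_update_by_query");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
        curl_easy_perform(curl);

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }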
The obvious way is to load up JDBC support from Clojure Contrib and write some function to translate a map/struct to a table. One drawback of this is that it isn't very flexible; changes to your structure will require DDL changes. This implies either writing DDL generation (tough) or hand-coding migrations (boring).
What alternatives exist? Answers must be ACID, ruling out serializing to a file, etc.
FleetDB is a database implemented in Clojure. It has a very natural syntax for working with maps/structs, e.g. to insert:
(client ["insert" "accounts" {"id" 1, "owner" "Eve", "credits" 100}])
Then select
(client ["select" "accounts" {"where" ["=" "id" 1]}])
http://fleetdb.org/
One option for persisting maps in Clojure that still uses a relational database is to store the map data in an opaque blob. If you need the ability to search for records, you can store indexes in separate tables. For example, you can read about how FriendFeed stores schemaless data on top of MySQL - http://bret.appspot.com/entry/how-friendfeed-uses-mysql
Another option is to use the Entity-Attribute-Value model (EAV) for storing data in a database. You can read more about EAV on Wikipedia (I'd post a link but I'm a new user and can only post one link).
Yet another option is to use BerkeleyDB for Java - it's a native Java solution providing ACID and record level locking. (Same problem with posting a link).
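To make the EAV option concrete, here is a minimal sketch of the shape (generic SQL held in C++ string constants; every name is illustrative):

    // Each map entry becomes one attribute row; the entity id groups the
    // rows back into a map when you read them out.
    const char* kEavSchema =
        "CREATE TABLE entity    (id INTEGER PRIMARY KEY);"
        "CREATE TABLE attribute ("
        "  entity_id INTEGER NOT NULL REFERENCES entity(id),"
        "  name      TEXT    NOT NULL,"
        "  value     TEXT,"
        "  PRIMARY KEY (entity_id, name));";

    // Reassembling the map {"owner" "Eve", "credits" "100"} for entity 1:
    const char* kLoadEntity =
        "SELECT name, value FROM attribute WHERE entity_id = 1;";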
Using CouchDB's Java-client lib and clojure.contrib.json.read/write works reasonably well for me. CouchDB's consistency guarantees may not be strong enough for your purposes, though.
clj-record is an implementation of the Active Record pattern in Clojure that may be of interest to you.
You could try one of the Java-based graph databases, such as Neo4J. It might be easy to code up a hashmap interface to make it reasonably transparent.
MongoDB and its framework congomongo (lein: [congomongo "0.1.3-SNAPSHOT"]) work for me. Schemaless databases are incredibly nice to work with, and congomongo is quite easy to get along with. MongoDB adds an _id field to every document to keep it identified, and there is quite good transparency between Clojure maps and Mongo maps.
https://github.com/somnium/congomongo
EDIT: I would not use MongoDB today. I would suggest you use Transit. I would use JSON if the backend (Postgres etc.) supports it, or the msgpack encoding if you want a more compact binary encoding.