I am writing a time-tracking Windows application in C++ that uses the SQLite engine to store its data. For my purposes, it would be nice to share the database file across the local network (in a Windows shared folder) among several copies of my application, so that multiple users of the software could share data.
Is there a mechanism to do that with SQLite?
"nice to share the database file across the local network" You really don't want to do that. It will end up being more trouble than it's worth. In ideal circumstances it works, although the performance sucks a bit. In non-ideal circumstances, it will block forever without giving you any idea why and what's at fault.
It's much easier to partition your system into a server and a client. They can both run within the same application. When the application starts, it checks if there are any servers on the local network, and if there aren't, it starts one. It then connects to the server.
That's what FileMaker used to do at least 20 years ago, and it worked pretty well. It should be a breeze to implement using a modern framework today (say, Qt or Boost).
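A minimal sketch of that startup logic, using Boost.Asio. The port number, the server address, and the probe-a-known-host approach (rather than true LAN discovery via UDP broadcast) are simplifying assumptions, not part of the answer:

```cpp
// Sketch: try to connect to a well-known port; if nothing answers,
// become the server ourselves. Real code would add a connect timeout.
#include <boost/asio.hpp>
#include <string>

using boost::asio::ip::tcp;

constexpr unsigned short kPort = 47890;  // hypothetical, pick your own

bool server_running(boost::asio::io_context& io, const std::string& host) {
    try {
        tcp::socket probe(io);
        probe.connect({boost::asio::ip::make_address(host), kPort});
        return true;                     // someone answered: a server exists
    } catch (const boost::system::system_error&) {
        return false;                    // connection refused or unreachable
    }
}

int main() {
    boost::asio::io_context io;
    if (!server_running(io, "192.168.1.10")) {   // assumed server address
        // No server found: start accepting connections ourselves.
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), kPort));
        // ... hand incoming connections to the code that owns the
        // sqlite3 handle, so all writes are serialized in one process.
    }
    // ... otherwise connect as a client and speak your own protocol.
}
```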
I am currently writing a client-server app for the iOS platform. The client is written in Obj-C, and the server uses C++ on OS X 10.9. Since I intend to run the server software on an Ubuntu dedicated server, I am trying my best to keep the server-side code portable.
To store data about users and user-game relations, I intend to use an SQL database (most likely MySQL, or possibly PostgreSQL, since I'm familiar with those). I know that it is possible to read from/write to the database through a file descriptor just like I do in my TCP module, but I wish to use a higher-level SQL communications API to make the programming process quicker.
Can anyone recommend a good open source/free SQL API for *NIX C++? Any help would be appreciated. Thanks in advance!
You have several options here:
Use the native database SDK. These are usually distributed along with the database installation or as separate downloads/packages. The upside is that you can get maximum speed out of it. The downside is that you'll be locked into your initial choice - no switching afterwards without rewriting part of the application.
Use a C++ ORM (example: ODB). This gives you DB independence along with some tasty features, at the cost of slightly reduced speed.
unixODBC supports both MySQL and PostgreSQL. Take a look at it; a connection sketch follows below.
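A minimal unixODBC sketch; the DSN name ("mydsn"), credentials and query are illustrative assumptions, not part of the answer:

```cpp
// Connect via unixODBC and run a query using the standard ODBC C API.
#include <sql.h>
#include <sqlext.h>
#include <cstdio>

int main() {
    SQLHENV env; SQLHDBC dbc; SQLHSTMT stmt;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // The DSN is configured in odbc.ini and can point at MySQL today
    // and PostgreSQL tomorrow - the code below stays the same.
    SQLDriverConnect(dbc, nullptr,
                     (SQLCHAR*)"DSN=mydsn;UID=user;PWD=secret;", SQL_NTS,
                     nullptr, 0, nullptr, SQL_DRIVER_NOPROMPT);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR*)"SELECT name FROM users", SQL_NTS);

    SQLCHAR name[256];
    while (SQL_SUCCEEDED(SQLFetch(stmt))) {
        SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof(name), nullptr);
        std::printf("%s\n", name);
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
}
```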
Here's my scenario. I am writing a web app for a client that needs to be portable, i.e. they need to plug it into different PCs (Windows) and have it simply work. Life would have been easier if they could just put it up on a domain, but that's not an option here, because internet access might not always be available. So, I am trying out Railo Express with Jetty (http://www.getrailo.org/index.cfm/download/), which has everything I need. I actually managed to install (well, copy and configure, really) the package on a USB stick, created a new site in the "/webapps" folder and wired that up, then downloaded the drivers for SQLite and got that connected and working just fine.
This is not going to be a very intense web app at all, nor does it need many users connected to it (max 2-3 at a time). I use Bootstrap, and other than a dashboard with a couple of graphs, all the pages are basically forms that read from/write to the SQLite db.
So, while everything seems to work, do you think this is a viable solution? Will I run into any issues, like performance or compatibility problems with the different PCs the client might be using? And is there a better way of doing this?
EDIT:
Thanks for replying, guys. Here's some more info to hopefully clear things up. I should have been more specific as to why I'd use a portable web app. The app is for a car wash business, to log the business going through. There is basically one computer at the counter where things will be accessed from (and the USB stick will be attached here), and possibly one iPod at the entrance where cars going in will be logged by the attendant (it will connect to the local computer via wireless). The reason for portability? They want to take the stick home with them and review stats, so it's either a full installation on the computer and a backup on the stick (extra work), or just everything on the stick. The reason for not simply going online and making things easier for everyone: tricky internet reception, which would mean downtime for the app.
From your descriptions it looks like a simple and not very intensive application. Based on my experience with Railo Express, I think you have the power needed to run this.
What I would do is install the application on the computer at the counter, since that is the main hub (you mention the iPod connecting wirelessly). Use the stick as a backup, and before they take it home, make sure the stick is updated with data. You might also consider designing the app so that there is separation between writing data and consuming it (e.g. people at home running reports).
Will the app on the stick run at home? Most likely it will, and if you run into problems, they won't be hard to fix.
I was wondering if justhost.com would be good enough to host a Wt C++ website/app on. It does allow FTP and SSH access, as http://richelbilderbeek.nl/CppWtDeployGlobalHosted.htm tells me a host should, but I am just looking to get more input - or do you know of a better host?
I'd also ask them whether you can install libraries there; if not, you'd have to compile yourself a giant static app, which could be an annoying restriction.
It looks to me like their site is basically designed to host standard PHP-style apps more than anything.
I use slicehost and Rackspace Cloud Servers.
The thing is they are full VPS's and give you full root access.
I would go with a true VPS plan rather than a chroot-style shared hosting plan with SSH access added on top. The main problem there would be neighbouring bloated applications using all the shared resources and giving you inconsistent performance.
Also, with full root access, you can set up your app to start on boot, sort out your own DB backup plan, etc.
You still can get neighbours slowing you down on VPS accounts, but it's much reduced.
One thing I like with Wt is that my app runs with 100 threads, and even on the cheapest VPS plan it runs consistently and smoothly with up to 50 concurrent users (tested using Load Impact), with hardly any load on the machine at all.
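For context, this is roughly what a minimal Wt app looks like, sketched against the Wt 4 API (the thread count is an option of Wt's built-in httpd connector):

```cpp
// A minimal Wt application, as a point of reference.
#include <Wt/WApplication.h>
#include <Wt/WContainerWidget.h>
#include <Wt/WText.h>
#include <memory>

class HelloApp : public Wt::WApplication {
public:
    explicit HelloApp(const Wt::WEnvironment& env) : Wt::WApplication(env) {
        setTitle("Hello");
        root()->addNew<Wt::WText>("Hello from a C++ web app");
    }
};

int main(int argc, char** argv) {
    // Thread count is passed to the built-in httpd, e.g.:
    //   ./hello --docroot . --http-address 0.0.0.0 --http-port 8080 --threads 100
    return Wt::WRun(argc, argv, [](const Wt::WEnvironment& env) {
        return std::make_unique<HelloApp>(env);
    });
}
```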
My general pro-C++ statement: some C# and Java people say C++ is only really useful for embedded, low-powered hardware. I'd like to add that it's also useful for VPSs. Although hardware power is always growing, with virtualization there are always cheaper, lower-powered plans coming out that C++ is perfect for.
I used to run PHP, Perl and Python web servers on VPSs, but my C++ Wt app really does leave them all in the dust performance-wise. The idea is that you can pay less per month to host a C++ web site that scales really well, rather than Rails or other interpreted or byte-compiled languages.
Also, I used to use a larger 4 GB slice to do my compiling until I bought myself a decent 6-core home box. The 256 MB slice (the smallest plan) is no good for compiling, but excellent for running.
As a project for my database classes, I built a simple object-oriented database (coded in C++). The DB manages concurrency by using a gateway file, which grants read/write access to the entire DB. To access the same DB across different machines, you use shared folders.
I built a little quizzing application on top of that. Everything works fine on a single system with multiple users, as well as on a 3-computer network at my home. But when it's run on my university's network, I keep getting intermittent data corruption in the form of bad CRCs (in my database, not on the disk), file headers that are inconsistent with file data, and other weird errors which I'm unable to track down.
The network is problematic - sometimes some nodes on the network become unreachable, and sometimes copying a file across the network takes an inordinate amount of time.
Occasionally, I get the error message 'Windows delayed write failed', so I'm thinking the problems are being caused by file sharing across the network. From some analysis, it seems that data is being cached, so I don't really know whether a disk write has actually succeeded.
Does anyone have any experience using shared files as databases? I want to know whether using shared files is reliable, or whether I should be looking at bugs in my code as the cause of the problems.
Thanks.
No, it's not reliable. This is the reason why CVS disabled the mode that used shared files for the repository. The solution is to create a server (e.g. a simple TCP/IP server) and route all database access through it.
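A sketch of the idea, using Boost.Asio and the sqlite3 C API; the port and the one-SQL-statement-per-line wire protocol are illustrative assumptions:

```cpp
// Sketch: one process owns the database file and serializes all access;
// clients send SQL over TCP instead of touching the file directly.
#include <boost/asio.hpp>
#include <sqlite3.h>
#include <istream>
#include <string>

using boost::asio::ip::tcp;

int main() {
    sqlite3* db = nullptr;
    sqlite3_open("quiz.db", &db);        // local disk, not a network share

    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000)); // port assumed

    for (;;) {
        tcp::socket client(io);
        acceptor.accept(client);

        boost::asio::streambuf buf;
        boost::asio::read_until(client, buf, '\n');  // one statement per line
        std::istream in(&buf);
        std::string sql;
        std::getline(in, sql);

        // Single-threaded loop: every statement runs alone against a local
        // file, so flaky network caching can no longer corrupt the data.
        char* err = nullptr;
        int rc = sqlite3_exec(db, sql.c_str(), nullptr, nullptr, &err);
        std::string reply =
            (rc == SQLITE_OK) ? "ok\n" : std::string(err ? err : "error") + "\n";
        sqlite3_free(err);
        boost::asio::write(client, boost::asio::buffer(reply));
    }
}
```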
I’m thinking about building an offline-enabled web application.
The architecture I’m considering is as follows:
Web server (remote) <--> Web server/cache (local) <--> Browser/Prism
The advantages I envision for this model are:
Deployment is web-based, with all the advantages of this approach
Offline-enabled
UI (HTML/JS) synchronization is a non-issue
Data synchronization can be mostly automated, as long as I stay within a RESTful paradigm; I can break this as required, but manual synchronization would largely remain surgical (see the sketch after this list)
The local web server is started as a service; I can run arbitrary code, including behind-the-scenes data synchronization
I have complete control of the data (location, no size limit, no possibility of the user deleting it unknowingly)
Prism with an extension could allow me to keep the JavaScript closed-source
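A sketch of what one step of that behind-the-scenes synchronization might look like from the local service, using libcurl; the endpoint URL and JSON payload shape are assumptions, not part of the question:

```cpp
// Push one locally-queued change to the remote server. Staying RESTful
// (PUT to a resource URL) keeps the sync logic generic and retryable.
#include <curl/curl.h>
#include <string>

// Call curl_global_init(CURL_GLOBAL_DEFAULT) once at service startup.
bool push_change(const std::string& url, const std::string& json) {
    CURL* curl = curl_easy_init();
    if (!curl) return false;
    struct curl_slist* hdrs =
        curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, json.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    CURLcode rc = curl_easy_perform(curl);
    long status = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK && status / 100 == 2;   // any 2xx means accepted
}

// Usage (hypothetical resource): push_change(
//     "https://example.com/api/timesheets/42", "{\"hours\": 7.5}");
```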
Any thoughts on this architecture? Why should I / shouldn’t I use it? I'm particularly looking for success/horror stories.
The long version
Notes:
Users are not very computer-literate. For instance, even superficially explaining how Gears works is totally out of the question.
I WILL be held liable if data is lost, even if it's really the user's fault (short of them deleting random directories on their machine)
I can require users to install something on their machine. It doesn’t have to be 100% web-based and/or run in a sandbox
The common solutions to this problem don’t feel adequate somehow. Here is a short analysis of each.
Gears/HTML5:
no control over data - it can be deleted by users without any warning
no control over the location of the data (not uniform across browsers and platforms)
users need to open the application in a browser for synchronization to happen; no automatic, behind-the-scenes synchronization
different browsers are treated differently; there is no uniform view of the data on a single machine
limited disk space available
synchronization is completely manual; SQL-based storage makes this a pain (it would be less complicated if the SQL tables were completely replicated, but that's not so in my case). This is a very complex problem.
my code would be almost completely open-sourced (HTML/JS)
Adobe AIR:
some of the above
no server-side includes (!)
can run in the background, but not windowless
manual synchronization
web caching seems complicated
feels like a kludge somehow; I've had trouble installing it on some machines
My requirements are:
Web-based (must), for a number of reasons - sharing data between users, for instance.
Offline (must). The application must be fully usable offline (w/ some rare exceptions).
Quick development (must). I’m a single developer going against players with far more business resources.
Closed source (nice to have). Yes, I understand the open source model. However, at this point I don't want competitors to copy me too easily. Again, they have more resources, so they could take my hard work and make it better in less time than I could myself. Obviously, they can still copy me by developing their own code - that is fine.
Horror stories from a CRM product:
If your application is heavily used, storing a complete copy of its data on a user's machine is unfeasible.
If your application features data that can be updated by many users, replication is not simple. If three users with local changes sync, who wins?
In reality, this isn't really what users want. They want real-time access to the most current data from anywhere. We had better luck offering a mobile interface to a single source of truth.
The part about running the local web server as a service appears unwise. Besides the fact that you are tied to whatever operating environments are available on the client, you are also imposing the additional burden of managing the server on the end user. Additionally, the local web server itself cannot be deployed in a web-based model.
All in all, I am not too thrilled by the prospect of a real "local web server". There is a certain bias to it, no doubt, since I have proposed embedded web servers that run inside a web browser as part of my proposal for seamless offline web storage. See BITSY 0.5.0 (http://www.oracle.com/technology/tech/feeds/spec/bitsy.html).
I wonder how essential your requirement to prevent data loss at any cost is. What happens when you are offline and the disk crashes? Or the device is lost? In general, you want the local cache to be as little ahead of the server as possible, but be prepared to tolerate loss of data to the extent that the server is behind the client. This may involve some amount of contractual negotiation or training. In practice it may not be a deal-breaker.
The only way to do this reliably is to offer some sort of "check out and lock" at the record level. When a user is going remote, they must check out the records they want to work with. This check-out copies the data to a local DB and prevents the record in the central DB from being modified while it is checked out.
When the roaming user reconnects and checks their locked records back in, the data is updated in the central DB and the records are unlocked.
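A minimal sketch of the check-out step against the central DB, using the sqlite3 C API; the records table and the locked_by column are assumptions for illustration:

```cpp
// Record-level check-out: claim the row for this user only if nobody
// else already holds it. Table and column names are illustrative.
#include <sqlite3.h>
#include <string>

// Returns true if the record was successfully checked out to 'user'.
bool checkout_record(sqlite3* db, int record_id, const std::string& user) {
    sqlite3_stmt* stmt = nullptr;
    const char* sql =
        "UPDATE records SET locked_by = ?1 "
        "WHERE id = ?2 AND locked_by IS NULL;";
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
        return false;
    sqlite3_bind_text(stmt, 1, user.c_str(), -1, SQLITE_TRANSIENT);
    sqlite3_bind_int(stmt, 2, record_id);
    bool ok = (sqlite3_step(stmt) == SQLITE_DONE)
              && sqlite3_changes(db) == 1;  // 0 rows changed => already locked
    sqlite3_finalize(stmt);
    // On success, the caller copies the row into the local DB for offline
    // editing; check-in reverses this: write the data back and set
    // locked_by to NULL in one transaction.
    return ok;
}
```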