I am working on a cross-platform application that needs a database to store information. Since MySQL is open source, I was thinking: would it be possible to remove the networking components from MySQL so that the program can interact with it directly? Is this possible, or should I just ship the installer with a copy of MySQL Server with all the settings predefined and use a connector?
SQLite has what you need. http://www.sqlite.org/
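For illustration, here is a minimal sketch of what embedding SQLite looks like in C/C++ (the file and table names are placeholders): the whole engine is compiled into your application, which is exactly the "no networking components" setup you describe.

```cpp
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3 *db = nullptr;

    // Opens (or creates) app.db next to the executable -- no server process involved.
    if (sqlite3_open("app.db", &db) != SQLITE_OK) {
        std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    char *err = nullptr;
    const char *sql =
        "CREATE TABLE IF NOT EXISTS settings(key TEXT PRIMARY KEY, value TEXT);"
        "INSERT OR REPLACE INTO settings VALUES('schema_version', '1');";

    // sqlite3_exec runs one or more SQL statements in a single call.
    if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK) {
        std::fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}
```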
I think in theory you could do that, but I'm not sure the amount of work would be worth it, and the chances of breaking something would be pretty high. I would just ship MySQL with your application.
Or use SQLite, as suggested by someone else.
It could be possible, but I am not sure it is worth it (otherwise, use something like SQLite or even GDBM).
MySQL is quite robust (thousands of developers, millions of users), so in practice you can assume it won't crash.
Your own application is likely to be less robust; it probably will crash at some point. Keeping MySQL running as a separate process then ensures that your data stays in a sane state.
And you might later be interested in having some other (perhaps external) application issue SQL requests against your MySQL database, or in the ability to host the MySQL database on a remote server.
Related
In my Visual C++ application, I want to allocate a lot of objects, which would use up all the available memory in the system. To solve this problem, I have decided to store the objects in a database. I have just three candidates: MySQL, PostgreSQL, and SQLite, but I don't know which one is the most appropriate.
What I need is:
Store objects in the database instead of memory.
Fast to find the objects via a key.
Light-weight, so the RDBMS will not require a lot of system resources, including both memory and disk space.
No server required.
Easy to deploy.
Which one best fits my needs? Of course, if you have any better alternatives, just tell me.
SQLite provides a detailed document on when it should be used, but MySQL and PostgreSQL do not, so it is a little difficult to choose, as I am not familiar with those two. Thanks.
I'd use SQLite. It doesn't require a service and is cross-platform. It is easy to deploy and light-weight. It supports transactions. It's in the public domain.
Your questions:
Store objects in the database instead of memory.
Any database can do this; that's the definition of a database.
Fast to find the objects via a key.
Also standard functionality; if you can't find your data, what's the point of using a database?
Light-weight, so the RDBMS will not require a lot of system resources, including both memory and disk space.
That's mostly in your hands: bad queries generate a lot of overhead, no matter which brand of database (or programming language) you use.
No server required.
Do you mean "hardware" or "client-server model"? Both MySQL and PostgreSQL are services in a client-server model; SQLite works best for a single client.
Easy to deploy.
All three databases are easy to deploy, but SQLite is the easiest one, since it is not a server like the others.
It looks like SQLite is the best fit, but also check the other requirements you didn't mention: performance, reliability, backup, failover, etc. And do you need an RDBMS for this kind of work at all? A C++ object in memory is very different from a bunch of records in a couple of database tables that are accessed using SQL.
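If you do go with SQLite, a keyed lookup is just a prepared statement with a bound parameter. A minimal sketch (the "objects" table and its columns are made up for illustration):

```cpp
#include <sqlite3.h>
#include <string>

// Looks up a single value by key in a hypothetical "objects(key, blob)" table.
bool find_by_key(sqlite3 *db, const std::string &key, std::string &out) {
    sqlite3_stmt *stmt = nullptr;
    const char *sql = "SELECT blob FROM objects WHERE key = ?1";

    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
        return false;

    sqlite3_bind_text(stmt, 1, key.c_str(), -1, SQLITE_TRANSIENT);

    bool found = false;
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        const unsigned char *text = sqlite3_column_text(stmt, 0);
        if (text != nullptr) {
            out.assign(reinterpret_cast<const char *>(text));
            found = true;
        }
    }

    sqlite3_finalize(stmt);
    return found;
}
```

Declaring key as the table's PRIMARY KEY (or adding an index on it) is what makes this lookup fast.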
I am currently writing a client-server app for the iOS platform. The client is written in Objective-C, and the server uses C++ on OS X 10.9. Since I intend to run the server software on an Ubuntu dedicated server, I am trying my best to keep the server-side code portable.
To store data about users and user-game relations, I intend to use an SQL database (most likely MySQL, or possibly PostgreSQL, since I'm familiar with those). I know that it is possible to read from/write to the database through a file descriptor, just like I do in my TCP module, but I would prefer a higher-level SQL communications API to make the programming process quicker.
Can anyone recommend a good open-source/free SQL API for *NIX C++? Any help would be appreciated. Thanks in advance!
You have several options here:
Use the native database SDK. These are usually distributed along with the database installation or as separate downloads/packages. The upside is that you can get maximum speed out of it; the downside is that you are locked into your initial choice, with no switching afterwards without rewriting part of the application. (See the sketch after this list.)
Use a C++ ORM (example: ODB). This gives you DB independence along with some tasty features, at the cost of slightly reduced speed.
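To make the first option concrete, here is a rough sketch using the MySQL C API (the native SDK mentioned above); the connection parameters, database, and table are placeholders:

```cpp
#include <mysql.h>   // from the MySQL client development package
#include <cstdio>

int main() {
    MYSQL *conn = mysql_init(nullptr);

    // Placeholder credentials and schema -- replace with your own.
    if (!mysql_real_connect(conn, "localhost", "user", "password",
                            "gamedb", 0, nullptr, 0)) {
        std::fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return 1;
    }

    if (mysql_query(conn, "SELECT id, name FROM users") == 0) {
        MYSQL_RES *res = mysql_store_result(conn);
        MYSQL_ROW row;
        while ((row = mysql_fetch_row(res)) != nullptr)
            std::printf("%s %s\n", row[0], row[1]);
        mysql_free_result(res);
    }

    mysql_close(conn);
    return 0;
}
```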
unixODBC supports both MySQL and PostgreSQL. Take a look at it.
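If you go the unixODBC route, the raw ODBC C API looks roughly like this (the DSN, credentials, and query are placeholders, and error checking is omitted for brevity):

```cpp
#include <sql.h>
#include <sqlext.h>
#include <cstdio>

int main() {
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;

    // Environment and connection handles.
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // "gamedb" is a placeholder DSN defined in odbc.ini; it can point at either
    // a MySQL or a PostgreSQL driver without changing this code.
    SQLDriverConnect(dbc, NULL, (SQLCHAR *)"DSN=gamedb;UID=user;PWD=password;",
                     SQL_NTS, NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM users", SQL_NTS);

    SQLCHAR name[256];
    SQLLEN len;
    while (SQLFetch(stmt) == SQL_SUCCESS) {
        SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof(name), &len);
        std::printf("%s\n", name);
    }

    // Tear everything down.
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}
```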
I'm setting up AppFabric, and I'm wondering whether using XML (instead of SQL Express) for the "Caching Service Configuration Provider" has any impact on performance or may eventually lead to other problems. To keep dependencies (and things that can go wrong) to a minimum, using a plain XML file seems like the simpler solution.
XML is fine in non-HA scenarios; make sure the share is available to all account contexts on all hosts and you're good to go. Performance is a non-issue: the configuration is only checked and used at certain times, such as startup or when adding/removing a host. SQL Server configuration is really targeted at higher availability (although, oddly enough, it can itself crash the service when SQL Server becomes unavailable).
Incidentally, a disk file store will almost always be faster than DB access for this sort of work.
I'm wondering what kind of persistence solutions there are for C++ with a SQL database. Besides writing custom SQL (and encapsulating the data access in DAOs or something similar), are there other, more general solutions?
For example, general libraries or frameworks (something like Hibernate & co. for Java and .NET), or something else? (Suggestions for things I haven't even thought of are also welcome.)
EDIT: Yep, I was searching more for an ORM solution, or something similar that handles SQL queries and the relationships between tables and objects, than for the DB engine itself. Thanks for all the answers anyway!
SQLite is great: it's fast, stable, proven, and easy to use and integrate.
There is also Metakit, although the learning curve is a bit steep. I've used it successfully in a professional project, though.
It sounds like you are looking for an ORM so that you don't have to bother with hand-written SQL code.
There is a post here that goes over ORM solutions for C++.
You also did not mention what type of application you are writing: a desktop application, a mobile application, or a server application.
Mobile: You are best off using SQLite as your database engine because it can be embedded and has a small footprint.
Desktop app: You should still consider SQLite here, but with most desktop applications you also have the option of an always-on Internet connection, in which case you may want to provide a network server for this task. I suggest Apache + MySQL + PHP with a lightweight ORM such as Outlet ORM, accessed through standard HTTP POST calls.
Server app: You have many more options here, but I still suggest Apache + MySQL + PHP + ORM, because I find that layer is much easier to maintain in a scripting language than in C++.
MySQL Connector/C++ is a C++ implementation of the JDBC 4.0 API.
Reference customers who use MySQL Connector/C++ include:
- OpenOffice
- MySQL Workbench
Learn more: http://forums.mysql.com/read.php?167,221298
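For a rough idea of the style, Connector/C++ code reads very much like JDBC; a hedged sketch along the lines of its published examples (host, credentials, and schema are placeholders):

```cpp
#include <iostream>
#include <memory>

#include <mysql_connection.h>
#include <cppconn/driver.h>
#include <cppconn/statement.h>
#include <cppconn/resultset.h>

int main() {
    // The driver singleton is owned by the library; do not delete it.
    sql::Driver *driver = get_driver_instance();

    // Placeholder connection details.
    std::unique_ptr<sql::Connection> con(
        driver->connect("tcp://127.0.0.1:3306", "user", "password"));
    con->setSchema("test");

    std::unique_ptr<sql::Statement> stmt(con->createStatement());
    std::unique_ptr<sql::ResultSet> res(
        stmt->executeQuery("SELECT 'Hello from Connector/C++' AS msg"));

    while (res->next())
        std::cout << res->getString("msg") << std::endl;

    return 0;
}
```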
SQLite + Hiberlite is a nice and promising project, though I hope to see it more actively developed. See http://code.google.com/p/hiberlite/
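As a rough illustration of the hiberlite style (adapted from memory of its example code, so treat the exact macro and method names as approximate): a class lists its persistent fields in a hibernate() template, and the library maps it onto SQLite tables.

```cpp
#include <string>
#include <hiberlite.h>

class Person {
    friend class hiberlite::access;
    // Declares which members get persisted.
    template<class Archive>
    void hibernate(Archive &ar) {
        ar & HIBERLITE_NVP(name);
        ar & HIBERLITE_NVP(age);
    }
public:
    std::string name;
    int age;
};
HIBERLITE_EXPORT_CLASS(Person)

int main() {
    hiberlite::Database db("sample.db");   // a plain SQLite file underneath
    db.registerBeanClass<Person>();
    db.createModel();                      // creates the tables

    Person p;
    p.name = "Ada";
    p.age = 36;
    db.copyBean(p);                        // persists the object
    return 0;
}
```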
I use MySQL or SQLite.
MySQL: provides a server-based DB that your application must connect to dynamically.
SQLite: provides an in-memory or file-based DB.
Using the in-memory DB is useful for quick development, since setting up and configuring a DB server just for a single project is a big task. But once you have a DB server up and running, it's just as easy to use that.
An in-memory DB is useful for holding small data sets such as configuration, etc.
For larger data sets, a DB server is probably more practical.
Download MySQL from here: http://dev.mysql.com/
Download SQLite from here: http://www.sqlite.org/
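For reference, SQLite's in-memory mode is selected simply by the special ":memory:" filename; a minimal sketch (the table is just an example):

```cpp
#include <sqlite3.h>

int main() {
    sqlite3 *db = nullptr;

    // ":memory:" gives a private database that lives only as long as the connection.
    sqlite3_open(":memory:", &db);

    sqlite3_exec(db,
                 "CREATE TABLE config(name TEXT PRIMARY KEY, value TEXT);"
                 "INSERT INTO config VALUES('log_level', 'debug');",
                 nullptr, nullptr, nullptr);

    // ... use the database for the lifetime of the process ...

    sqlite3_close(db);   // everything is discarded here
    return 0;
}
```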
I have my first app. It's not that big, but it is the first step (the next, bigger one is on the way).
Now, if I want to put it on my own Linode VPS, I have to configure mod_python or mod_wsgi, as well as memcached, Nginx, MySQL or PostgreSQL, etc. to make it work. If I put it on GAE, all I have to do is convert the models to use GAE's API.
What I like about GAE is the scaling (if they can really do it).
Then I'd only have to worry about developing my apps and doing SEO work on them, instead of worrying about load sharing/balancing, caching, DB/IO redundancy, etc.
I don't want to do any porting later on. (I have to decide now and stick with it)
So, if you have any experience with this, what do you recommend:
1- Use VPS(s) for everything
2- Use VPS(s) plus Amazon S3
3- Use VPS(s) plus Amazon S3 & SimpleDB
4- Use GAE
Also: would I be able to get away with not having JOINs when using BigTable?
Note: I don't have any spatial needs now, but I might need them later for a location table.
I'd like to know what you think!
There's business risk and technical risk.
Business risk is that you might have to move hosts later for some external reason. VPSs, EC2, etc. require more upfront investment but keep you independent. Tools like Chef can help with the configuration effort.
Technical risk is that your application may not be easily implemented on the platform. Since most VPS options allow you to install arbitrary software, they minimize this risk, again at the cost of more configuration effort on your part. AFAIK, the largest constraint GAE enforces on you is that it's difficult to do long-running background tasks. (Working without JOINs and other aspects of denormalized data requires a different way of thinking, but this approach is fairly common in web applications no matter where they run, once the SQL database grows larger than a single host can support.)
If you can live with both these risks, GAE would appear to save you a substantial amount of effort. If you cannot, you should tailor your own environment.
As an aside, I find S3 to be worth it no matter what your environment is. It's far simpler than ensuring your local server's static file storage is reliably backed up, and you never have to worry about capacity. It's best for data that is uploaded but rarely overwritten or deleted (think Facebook photo albums).
I don't want to do any porting later on. (I have to decide now and stick with it)
If that's the case, wouldn't you prefer to control the deployment from the outset? It could be a great pain to port back from GAE later down the line if you hit its limits (whether they are technological limits or simply business decisions by Google that run counter to your plans for the app's future).
Also, configuring mod_wsgi, installing Postgres, etc. isn't particularly difficult, and you don't have to worry about things like load balancing and DB redundancy for a while yet.
If it were me, I'd prefer the long-term certainty of a traditional server over the quick win of GAE. It all depends on your vision for the app, however.
I may be biased, but if you can live with GAE's limitations, it really saves you a lot of work and worry about system administration issues (and, to some extent, scaling). Plus, it's free as long as your resource consumption is low (which basically means your traffic is low).
Can you do without joins? I don't know, as I don't know your app. I'm a SQL fanatic myself, yet for simple enough needs I haven't found it too hard to adapt. As I see it, the main limitation of non-relational DBs is that they're nowhere near as nice as relational ones for "ad hoc" queries... you typically have to write a lot of procedural code instead of a nice SELECT or two :-(. But that's more of a "data mining later" issue than one connected with serving your web app -- probably best solved by regularly bulk-downloading data from the web app's online storage into a "data warehouse" kind of setup anyway, even if that storage had been relational in the first place ;-).
Before deciding, it might be worth a quick prototype adaptation of your app to GAE. You might run into stoppers that force the decision. Possible stopper issues include:
Your schema doesn't make the transition to BigTable
You're depending on some C-based library that GAE doesn't support
You have a few long-running requests that exceed the thresholds that GAE imposes
The answer depends on the complexity and nature of your model layer, really. If it's complex or tightly bound to the rest of your code, porting is likely to be a significant effort. If it's fairly straightforward, or easy to tear out and replace, I would say go for it.
These days, I mostly write new code for GAE, but the fact that I can simply deploy with a single command has really lowered the barrier I feel towards writing cool new apps. Not having to worry about deployment and hosting is quite liberating.
All I have to do is convert the models to use GAE's API.
I am sorry, but you are totally mistaken.
You also need to rewrite all the view code that uses the ORM. There are no joins, so you have to write a lot of procedural code instead of the nifty SQL that gives you whatever you want.
Querying is slow. You need to override the save method of each model to store additional information about that model, which may take a lot of time to compute when needed. You also need to work with memcache to make the queries fast enough.
And then, Guido has said Django 1.1 is going to be included in a future version of App Engine. I am hoping they will ship an out-of-the-box generic ORM-to-BigTable mapper.
That said, if your app is simple and doesn't need many joins, you could use the app-engine-patch project to run the current version of Django on App Engine. Here is how.