I am going to be using C/C++, and would like to know the best way to talk to a MySQL server. Should I use the library that comes with the server installation? Are there any good libraries I should consider other than the official one?
MySQL++
That depends a bit on what you want to do.
First, check out libraries that provide connectivity to more than on DBMS platform. For example, Qt makes it very easy to connect to MySQL, MS SQL Server and a bunch of others, and change the database driver (connection type) at runtime - with just a few lines of code.
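For illustration, a minimal sketch of what that looks like with Qt's QtSql module; the connection details are placeholders, and the driver name "QMYSQL" is the only MySQL-specific part:

    // Minimal QtSql sketch: swap "QMYSQL" for "QODBC", "QPSQL", etc. to change back-ends.
    #include <QCoreApplication>
    #include <QSqlDatabase>
    #include <QSqlQuery>
    #include <QSqlError>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);          // needed so the SQL driver plugin can load

        QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL");
        db.setHostName("localhost");               // placeholder connection details
        db.setDatabaseName("testdb");
        db.setUserName("user");
        db.setPassword("password");

        if (!db.open()) {
            qWarning() << "Connection failed:" << db.lastError().text();
            return 1;
        }

        QSqlQuery query("SELECT id, name FROM customers");
        while (query.next())
            qDebug() << query.value(0).toInt() << query.value(1).toString();

        return 0;
    }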
MySQL-specific libraries are fine, but bear in mind that you are locking yourself down to one DB implementation - if you ever need to change in the future it's gonna be a whole lot of work - even if you design your code such that the DB-specific stuff is behind a facade. Why not use a library that provides connectivity to multiple platforms, and save yourself the trouble?
OTL is a solid cross-DBMS solution for C++ that my project has been using for years. We use it to talk to SQL Server (via ODBC) and Oracle (via OCI). It's fairly easy to drive, and comes with a large number of examples across all the supported databases.
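A rough sketch of what driving OTL looks like, loosely following the style of its documentation examples; the DSN, credentials, and table are placeholders:

    // OTL over ODBC; define OTL_ORA11G (etc.) instead of OTL_ODBC to go through OCI.
    #include <iostream>
    #define OTL_ODBC
    #include <otlv4.h>                         // OTL 4 is a single-header library

    int main()
    {
        otl_connect db;
        otl_connect::otl_initialize();         // initialize the ODBC environment
        try {
            db.rlogon("user/password@my_dsn"); // user/password@DSN

            otl_stream in(50, "select id, name from customers", db); // 50 = stream buffer size
            int id;
            char name[64];
            while (!in.eof()) {
                in >> id >> name;
                std::cout << id << " " << name << "\n";
            }
        } catch (otl_exception &e) {
            std::cerr << e.msg << "\n" << e.stm_text << "\n";
        }
        db.logoff();
        return 0;
    }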
There is nothing wrong with MySQL's own client libraries. If you are willing to settle for reduced functionality, you can buy yourself some extra portability by using ODBC, UDBC, apr_dbd, or some other database-abstraction library (such as OTL, already suggested here).
This will make switching back-ends easier, but, as I mentioned, at the expense of less functionality than the native client offers. Because DB vendors differ, the abstraction libraries can only really offer the functions common to all (or most) of the back-ends. Whether you prefer to optimize for a particular DB or would rather make it easier to switch back-ends is up to you (and, perhaps, your manager).
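For comparison, here is a minimal sketch using the native MySQL C client library (libmysqlclient); the connection parameters and query are placeholders:

    #include <mysql.h>
    #include <cstdio>

    int main()
    {
        MYSQL *conn = mysql_init(nullptr);
        if (!mysql_real_connect(conn, "localhost", "user", "password",
                                "testdb", 0, nullptr, 0)) {
            std::fprintf(stderr, "Connect failed: %s\n", mysql_error(conn));
            return 1;
        }

        if (mysql_query(conn, "SELECT id, name FROM customers") == 0) {
            MYSQL_RES *result = mysql_store_result(conn);
            while (MYSQL_ROW row = mysql_fetch_row(result))   // row is a char** into the result set
                std::printf("%s %s\n", row[0], row[1]);
            mysql_free_result(result);
        }

        mysql_close(conn);
        return 0;
    }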
Related
Is there a standard set of integration tests for Mongo drivers (Connection Library)?
I have a Mongo library in C++ that I want to validate (and if possible performance test). Is there a standard set of tests (preferably with data) that I can use to validate that the library is sending and receiving the correct data to Mongo?
Is it a library or driver you are talking about?
"Driver Library"?
It provides the types to connect to a Mongo server.
The types then provide methods to send/receive data, etc.
And a few higher level concepts that I am experimenting with:
Handling all the on-wire protocol so that you don't need to.
You link against a library (So headers/library).
If you are developing a new C++ driver I hope you have some good reasons to not use the officially provided and maintained driver from MongoDB.
No; I don't have a good reason (other than I want to try).
But I do have reasons:
Experimenting with C++17 features that I have not used much.
I have a library that automatically serializes C++ objects to BSON with no extra code.
This seems like a perfect test project to validate it against.
I have a library that allows non-blocking use of streams that (with co-routines) allows me to write/read from multiple connections efficiently with a single thread.
This seems like a perfect test project to validate it against.
the officially provided and maintained driver from MongoDB
Sure, the official one is going to have a lot more support and probably be much higher quality than my one-man-band version. But you make that statement as if people should only use the official version of a library.
I disagree with that premise entirely, as it locks us into the same way of thinking, with no ability to make radical shifts in how we interact with the data. I want to experiment and see if I can interact with the data more efficiently, and maybe (just maybe) this will lead to the official drivers saying "hey, that's a nice idea, let us implement that".
Competition is a way to spur new ideas (though we are probably a long way from competing).
But to get to that point I need a way to show that my driver behaves to at least a certain standard, which means it would be nice to have some way of validating it. Something like: here are a thousand JSON objects; stream them to Mongo and validate that you get "XX" behavior or result.
The MongoDB C++ driver is developed in the open, and so are its tests, which can be found on GitHub:
https://github.com/mongodb/mongo-cxx-driver/tree/master/src/mongocxx/test
Your question could use some clarification:
Is it a library or a driver you are talking about? If you are using the official MongoDB C++ driver, why would you think it would send wrong results? Do you think your library on top of the driver changes your values? Shouldn't a test of the library, independent of the driver, be enough?
If you are developing a new C++ driver I hope you have some good reasons to not use the officially provided and maintained driver from MongoDB.
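If it helps, one possible shape for such a validation is a simple round-trip check against a local mongod, using the official mongocxx driver as the reference to compare a home-grown driver's behavior against. This is only a sketch; the database, collection, and URI below are placeholders:

    #include <mongocxx/client.hpp>
    #include <mongocxx/instance.hpp>
    #include <mongocxx/uri.hpp>
    #include <bsoncxx/builder/basic/document.hpp>
    #include <bsoncxx/builder/basic/kvp.hpp>
    #include <bsoncxx/json.hpp>
    #include <cassert>
    #include <iostream>

    int main()
    {
        mongocxx::instance inst{};                    // one instance per process
        mongocxx::client conn{mongocxx::uri{"mongodb://localhost:27017"}};
        auto coll = conn["driver_test"]["roundtrip"];

        using bsoncxx::builder::basic::kvp;
        using bsoncxx::builder::basic::make_document;

        coll.insert_one(make_document(kvp("name", "test"), kvp("value", 42)));

        auto found = coll.find_one(make_document(kvp("name", "test")));
        assert(found);                                // the document should come back unchanged
        std::cout << bsoncxx::to_json(found->view()) << "\n";
        return 0;
    }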
I've been spending more and more time writing DB wrappers for Oracle access. This seems to be quite a generic procedure, and I was wondering whether there already are code generators that generate access routines for Oracle PL/SQL stored procedures in C++.
I'm looking for a configurable generation tool that would be capable of managing connections and handling multiple threads if needed. I'm aware of OCI/OCCI and the Oracle C++ extension, but I'm looking for a pure, self-contained C++ accessor generation tool.
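For context, the kind of accessor I keep writing by hand looks roughly like this sketch against OCCI (the procedure, its parameters, and the connection details are made up):

    #include <occi.h>
    #include <iostream>

    using namespace oracle::occi;

    int main()
    {
        Environment *env = Environment::createEnvironment(Environment::DEFAULT);
        try {
            Connection *conn = env->createConnection("user", "password", "//dbhost:1521/ORCL");

            // Hypothetical PL/SQL procedure with one IN and one OUT parameter.
            Statement *stmt = conn->createStatement("BEGIN get_customer_name(:1, :2); END;");
            stmt->setInt(1, 42);                        // IN
            stmt->registerOutParam(2, OCCISTRING, 200); // OUT
            stmt->executeUpdate();

            std::cout << "name = " << stmt->getString(2) << std::endl;

            conn->terminateStatement(stmt);
            env->terminateConnection(conn);
        } catch (SQLException &e) {
            std::cerr << "OCCI error: " << e.getMessage() << std::endl;
        }
        Environment::terminateEnvironment(env);
        return 0;
    }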
Any advice welcome.
Thank You!
You might also want to have a look at:
http://orclib.sourceforge.net
http://otl.sourceforge.net/
http://www.codeguru.com/cpp/data/mfc_database/oracle/article.php/c4305
We use SQLAPI (http://www.sqlapi.com/) for all of our C++ development with Oracle. I think it is a more efficient wrapper for OCI (though, as another person pointed out, OCCI is pretty good). Another advantage of SQLAPI is that it also supports other database platforms. We use it for MySQL as well, and having that abstraction layer between our application and database layers certainly simplifies things quite a bit.
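A rough sketch of what that looks like, roughly along the lines of SQLAPI's published examples (the connection string and table are placeholders; switching the SA_Oracle_Client constant is essentially what changing back-ends amounts to):

    #include <SQLAPI.h>
    #include <cstdio>

    int main()
    {
        SAConnection con;
        SACommand cmd;
        try {
            con.Connect("//dbhost:1521/ORCL", "user", "password", SA_Oracle_Client); // or SA_MySQL_Client
            cmd.setConnection(&con);
            cmd.setCommandText("select id, name from customers");
            cmd.Execute();
            while (cmd.FetchNext()) {
                long id = cmd.Field("id").asLong();
                SAString name = cmd.Field("name").asString();
                std::printf("%ld %s\n", id, name.GetMultiByteChars());
            }
            con.Disconnect();
        } catch (SAException &x) {
            std::printf("Error: %s\n", x.ErrText().GetMultiByteChars());
        }
        return 0;
    }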
I'm designing a training program in C++ that will be distributed to a large number of facilities, most of which won't have much in the way of an IT staff. The program connects via a TCP connection to a central database which stores various pieces of data for research and evaluation purposes.
The problem I have is that I would like to make the transmission secure, and the most commonly recommended way to do that seems to be OpenSSL - which seems all well and good, but I've got a problem. As I understand it, OpenSSL must be installed specifically on each of the systems. The facilities won't have the expertise required to compile and install the source on their systems, the computers will be sufficiently varied (all Windows boxes, but of varying make and quality) to rule out distributing a specifically-compiled binary, and continent-wide distribution makes it impossible for my team to personally set it up.
Does anyone have a recommendation for how to solve this problem? Am I simply incorrect in my assumptions, and one can distribute it without installation? If not, is there a more practical alternative?
As long as all your machines are XP or later, two builds of OpenSSL should be all you need: one for 32-bit and one for 64-bit. Just provide two separate installers and that should be it. There's no need to compile for each machine.
Just remember to include the Visual C++ redistributable package in your installer as well.
If you have to support ancient Windows versions, it gets a bit more complex but not that much.
Actually, OpenSSL seems like a good option based on what you described.
From what I understand of OpenSSL, it is a library written in C (with wrappers around it for other languages), meaning that you can include it in the code base of whatever it is you are writing.
I'm pretty sure that it is not a program that has to be installed, so I think that you shouldn't have to worry about that.
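A minimal sketch of wrapping an already-connected TCP socket in TLS with OpenSSL used purely as a linked library; error handling is pared down, and the certificate-verification step a real deployment needs is omitted:

    #include <openssl/ssl.h>
    #include <openssl/err.h>
    #include <cstdio>

    // Assumes `sock` is an already-connected TCP socket descriptor.
    bool send_securely(int sock, const char *data, int len)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());   // client-side TLS context
        if (!ctx) return false;

        SSL *ssl = SSL_new(ctx);
        SSL_set_fd(ssl, sock);                             // attach the existing socket

        bool ok = false;
        if (SSL_connect(ssl) == 1) {                       // perform the TLS handshake
            ok = (SSL_write(ssl, data, len) == len);
            SSL_shutdown(ssl);
        } else {
            ERR_print_errors_fp(stderr);
        }

        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return ok;
    }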
You might also like to experiment with IPsec. If you are concerned about distributing binaries etc. to client machines, IPsec could be an interesting solution. Since virtually all Windows boxes support it, all you have to do is configure an IPsec policy on the DB server; by marking it as "required", all the data between the client machines and the DB server will be encrypted.
What do you suggest as a cross-platform, "almost all-encompassing" abstraction toolkit/library, not necessarily GUI-oriented?
The project should at some point include an extremely minimal web server and a "DB" of some sort (basically to have indexes/B-trees, maybe relations, so an RDBMS is desirable but avoidable if necessary; SQL might be overkill).
I was thinking about Qt, Boost, Tokyo Cabinet and/or SQLite; what else? What is "best suited"?
I would like to keep platform customization and the overall execution footprint to a minimum...
Thank you in advance.
For a minimal webserver, I think you're fine using Boost.Asio and sqlite -- it's quite portable, and should have everything you need. Remember that the C/C++ runtimes also provide portable abstractions for many things, so be sure to check those first (especially if a minimum overhead is required -- it might be simply easier to use C runtime functions than Boost.Filesystem).
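To give an idea of the footprint, here is a minimal sketch of embedding SQLite directly (a single library, no server process); the schema and file name are made up:

    #include <sqlite3.h>
    #include <cstdio>

    int main()
    {
        sqlite3 *db = nullptr;
        if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT);"
            "INSERT OR REPLACE INTO kv VALUES ('greeting', 'hello');",
            nullptr, nullptr, nullptr);

        sqlite3_stmt *stmt = nullptr;
        sqlite3_prepare_v2(db, "SELECT value FROM kv WHERE key = ?", -1, &stmt, nullptr);
        sqlite3_bind_text(stmt, 1, "greeting", -1, SQLITE_STATIC);

        if (sqlite3_step(stmt) == SQLITE_ROW)
            std::printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }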
You can also look at Firebird as a cross-platform database.
You should definitely take a look at Poco.
For my own similar purposes I use mongoose for web serving and sqlite for the database. Both are very high-quality products, but unfortunately are written in C. However, they are very simple to embed in C++ applications, and I have written simple C++ wrappers for both of them.
What options exist for accessing different databases from C++?
Put differently, what alternatives are there to ADO?
What are the pros and cons?
Microsoft ODBC.
The MFC ODBC classes such as CDatabase.
OleDB (via COM).
And you can always go through the per-RDBMS native libraries (for example, the SQL Server native library).
DAO (don't).
3rd party ORM providers.
I would recommend going through ODBC or OleDB by default. Native libraries really restrict you, DAO is no fun, and there aren't a lot of great 3rd-party ORMs for C++/Windows.
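To show what the ODBC route involves, a bare-bones sketch; the connection string and query are placeholders, and error checking is reduced to a minimum:

    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>
    #include <cstdio>

    int main()
    {
        SQLHENV env = SQL_NULL_HANDLE;
        SQLHDBC dbc = SQL_NULL_HANDLE;
        SQLHSTMT stmt = SQL_NULL_HANDLE;

        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

        SQLCHAR connStr[] = "DSN=MyDataSource;UID=user;PWD=password;";
        if (SQL_SUCCEEDED(SQLDriverConnect(dbc, NULL, connStr, SQL_NTS,
                                           NULL, 0, NULL, SQL_DRIVER_NOPROMPT))) {
            SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
            SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM customers", SQL_NTS);

            SQLCHAR name[64];
            SQLLEN len = 0;
            while (SQL_SUCCEEDED(SQLFetch(stmt))) {
                SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof(name), &len);
                std::printf("%s\n", (char *)name);
            }

            SQLFreeHandle(SQL_HANDLE_STMT, stmt);
            SQLDisconnect(dbc);
        }

        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
        return 0;
    }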
Although this question and its answers are several years old, they are still valuable for people like me that cruise by on an evaluation trip. For this reason, I would like to add the Qt C++ framework's QtSql module as an option for database connectivity.
Note that I am familiar with Qt in general, but have no experience with QtSql in particular.
Pros (just a few that should also apply if you choose Qt only for its QtSql module): Qt is cross-platform. In my experience, Qt is well designed, pretty intuitive to use, and extremely well documented. It has been around for a long time, is maintained by an active community and backed by Nokia, so it won't become unavailable overnight. Since 2009, Qt has been licensed under the LGPL, so it is a real no-cost option even for commercial applications.
Cons: Qt is not small. You will introduce new types such as QString into your project. Qt is licensed under the LGPL, so you need to acknowledge its use even in commercial apps.
One thing - if speed is important and your code doesn't need to be portable, then it may be worth it to use the native libraries.
I don't know much about SQL Server, but I do know that the Oracle OCI calls are faster than using ODBC. But, they tie you to Oracle's version of SQL. It would make sense for SQL Server to be the same way.
There is the POCO Data library, which supports ODBC, MySQL and SQLite. Part of the free open source POCO C++ Libraries.
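A short sketch of the POCO Data style with the SQLite connector; the table and file names are placeholders:

    #include <Poco/Data/Session.h>
    #include <Poco/Data/SQLite/Connector.h>
    #include <iostream>
    #include <string>

    using namespace Poco::Data::Keywords;

    int main()
    {
        Poco::Data::SQLite::Connector::registerConnector();  // register the back-end once
        Poco::Data::Session session("SQLite", "sample.db");  // connector name + connection string

        session << "CREATE TABLE IF NOT EXISTS person (name VARCHAR, age INTEGER)", now;

        std::string name = "Ada";
        int age = 36;
        session << "INSERT INTO person VALUES(?, ?)", use(name), use(age), now;

        std::string found;
        session << "SELECT name FROM person WHERE age = ?", into(found), use(age), now;
        std::cout << found << std::endl;

        return 0;
    }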