I am developing a large-scale application in C++. The application also maintains a database (currently I am using MySQL), and I use OTL for database connectivity. What I want to do now is provide support for databases from multiple vendors, e.g. user A uses MySQL and user B uses PostgreSQL. I have thought about how to implement this in C++ but haven't come up with a workable solution, due to lack of experience.
What I want to achieve, is something like that:
There would be a separate VC Project that deals with database and suppose it contains following files:
DataAccessLayer.cpp // This will be the main entry point of the project
Product.cpp         // This deals with the Product table
Customer.cpp        // This deals with the Customer table
Orders.cpp          // This deals with the Orders table
...and many more    // I want to have one .cpp file per database table
And we would use the above project in our code like this:
DataAccessLayer oDataAccessLayer;
oDataAccessLayer.Connect(); // This will connect to the specified database; it might be an abstract class with a concrete class for each supported DB
oDataAccessLayer.Products.Search(/* some parameters here, e.g. the product id to be searched */); // I don't want to write the search query again for each database; this function will execute the query against the specific database
oDataAccessLayer.Customers.Add(/* parameters */); // Same case here: I don't want to write the Add query for each supported database
oDataAccessLayer.Disconnect();
I don't want the whole code I just need some sample code or related article to study.
Your question is a bit ambiguous. However, I will base my answer on the assumption that you want your code to access one physical database, which is configurable to MySQL, PostgreSQL, Oracle, etc.
Generate generic SQL statements.
Instead of accessing the MySQL library for inserting records, build a SQL statement and let the connector execute the statement. This reduces the interface to one small point: the connector. I have my Field and Record classes set up to use the Visitor pattern. I create Visitors to build the SQL statements. This enables my database component to be more generic.
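To illustrate the idea, here is a minimal sketch; the Field/Record/visitor names below are assumptions for illustration, not the actual classes from my component:

#include <string>
#include <vector>

// Illustrative stand-ins for the Field and Record classes mentioned above.
struct Field {
    std::string name;
    std::string value;   // assumed already formatted/escaped for SQL
};

struct Record;

// Visitor interface: one concrete visitor per kind of SQL statement to build.
struct SqlVisitor {
    virtual ~SqlVisitor() {}
    virtual void visit(const Record& r) = 0;
};

struct Record {
    std::string tableName;
    std::vector<Field> fields;

    void accept(SqlVisitor& v) const { v.visit(*this); }
};

// Concrete visitor that renders a Record as a vendor-neutral INSERT statement.
class InsertStatementBuilder : public SqlVisitor {
public:
    void visit(const Record& r) {
        std::string cols, vals;
        for (std::size_t i = 0; i < r.fields.size(); ++i) {
            if (i > 0) { cols += ", "; vals += ", "; }
            cols += r.fields[i].name;
            vals += r.fields[i].value;
        }
        sql_ = "INSERT INTO " + r.tableName +
               " (" + cols + ") VALUES (" + vals + ");";
    }
    const std::string& sql() const { return sql_; }

private:
    std::string sql_;
};

// Usage: build the statement, then hand the text to the connector:
//   InsertStatementBuilder builder; record.accept(builder); connector.Execute(builder.sql());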
Abstract the DB Connector.
Create your own Connector object as a Facade around the DB manufacturer's connector. Make the methods generic, such as passing a string containing SQL text. Next, change your components to require passing of this facade during construction or when accessing the database. Finally, create an instance of this facade to communicate with specific database applications before using any of your components.
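A rough sketch of such a facade might look like this; the method names are illustrative, not taken from any particular driver library:

#include <string>

// Generic facade around a vendor's connector; names are hypothetical.
class DBConnector {
public:
    virtual ~DBConnector() {}
    virtual bool Connect(const std::string& connectionString) = 0;
    virtual bool Execute(const std::string& sql) = 0; // the one small point of contact
    virtual void Disconnect() = 0;
};

// Components take the facade at construction and never see vendor APIs.
class ProductComponent {
public:
    explicit ProductComponent(DBConnector& connector) : connector_(connector) {}

    bool Add(const std::string& insertSql) {
        return connector_.Execute(insertSql);
    }

private:
    DBConnector& connector_;
};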
Suggestions
I have found that by having the Record class contain the table name, I could eliminate the need for a Table class (which models the database table). Appending and loading are handled by my database Manager class. More recently, I implemented support for prepared statements and BLOB fields.
Suppositions:
Suppose I have the following two tables in my database:
Project
Docs
And currently I am supporting only the following two databases:
1) PostgreSQL
2) Oracle
Implementation:
I implement this logic in the following three phases:
1) Database Adapter: Only this class is exposed to the outside world. It contains a pointer to each QueryBuilder; these pointers are initialized in the constructor of this class based on the DB type (which is passed as an argument to the constructor).
2) DB Connector: This class is responsible for DB handling (establishing the connection, disconnecting, etc.).
3) Query Builder: This section contains a base class for each table in the DB, plus a concrete class per table for each specific database.
Sample Code (Synopsis Only)
DBAdapter.h
DBAdapter(DBType eDBType, string ConnectionString); // constructor
InitializeDBConnectorAndQueryBuilder();
Project *pProject;
Docs *pDocs;
//----------------------------------------------------------------------------------------
DBAdapter.cpp
DBAdapter::DBAdapter(DBType eDBType, string ConnectionString) // constructor
{
    switch (eDBType)
    {
    case enPostGres:
        pProject = new Project_PostGres();
        pDocs = new Docs_PostGres();
        break;
    // ...a case for each other supported database
    }
}
DB Connector Section:
DBConnector.h
virtual void Connect() = 0;     // pure virtual
virtual bool IsConnected() = 0; // pure virtual
virtual void Disconnect() = 0;  // pure virtual
//------------------------------------------------------------------------------------------
DBConnector_PostGres.cpp : DBConnector
Connect()
{ /* Implementation specific to PostgreSQL goes here */ }
IsConnected()
{ /* Implementation specific to PostgreSQL goes here */ }
Disconnect()
{ /* Implementation specific to PostgreSQL goes here */ }
//------------------------------------------------------------------------------------------
DBConnector_Oracle.cpp : DBConnector
Connect()
{ /* Implementation specific to Oracle goes here */ }
IsConnected()
{ /* Implementation specific to Oracle goes here */ }
Disconnect()
{ /* Implementation specific to Oracle goes here */ }
//------------------------------------------------------------------------------------------
Query Builder Section:
Project.h
virtual int AddProject();
virtual int DeleteProject();
virtual int IsProjectTableExist() = 0; // DB-specific, implemented in each concrete class
DBConnector *pDBConnector; // This pointer will be initialized in all concrete classes
//------------------------------------------------------------------------------------------
Project.cpp
int AddProject()
{
    // The query for adding a Project is the same for all databases
}
int DeleteProject()
{
    // The query for deleting a Project is the same for all databases
}
//------------------------------------------------------------------------------------------
Project_PostGres.cpp : Project
Project_PostGres() // constructor
{
    pDBConnector = new DBConnector_PostGres();
}
int IsProjectTableExist()
{
    // Query specific to PostgreSQL goes here
}
//------------------------------------------------------------------------------------------
Project_Oracle.cpp : Project
Project_Oracle() // constructor
{
    pDBConnector = new DBConnector_Oracle();
}
int IsProjectTableExist()
{
    // Query specific to Oracle goes here
}
//. . . and so on for other databases
//------------------------------------------------------------------------------------------
// The same set of files, with the same logic, will also be created for the Docs table
------------------------------------------------------------------------------------------
Related
I'm learning C++ and have a question about classes and wrappers. I'm writing an application for a Raspberry Pi. I have a class called SensorClass whose methods read data from various sensors attached to the board.
class SensorClass {
public:
SensorClass();
virtual ~SensorClass();
int getTemperature();
int getPressure();
};
I want to write the data to a local sqlite database when it is read. On the SQLite website there are a number of wrapper classes.
SQLite wrappers
I'm wondering if I should use one of these to, for example, insert data into the database when it has been read.
I'm thinking I would then be separating the code and just calling, for example, the SQLite insert method in the getTemperature() function. Would this be a good idea? Which wrapper should I use?
Example SQLite wrapper class
Alternatively, I could hard-code the database operations in the getTemperature() method like this:
int SensorClass::getTemperature() {
    // read temperature
    // insert into database
    /* Create SQL statement */
    const char *sql = "INSERT INTO DATAPOINTS (Temperature) "
                      "VALUES (15);";
    /* Execute SQL statement */
    rc = sqlite3_exec(db, sql, callback, 0, &zErrMsg);
}
Thanks for your advice.
It would generally be better to separate the two things, i.e. make the sensor class do the job of sensing well, and only that.
Then have a separate class that does the job of logging sensor data to the database well. You may find it is better to insert entire rows into the database in one go. And you may also decide that you want to only log data periodically at a fixed sampling rate.
Then in your main application loop / via an event driven timer, you can do measurements and record data as separate steps.
e.g.
void APP_tick(void)
{
SensorValues values = sensors.readValues();
logger.writeValues(values);
}
By separating responsibility, you can then change the logger class out easily - you may end up deciding that you don't want to use a database and would rather just log the data into flat files in order to use less disk space and improve performance.
If using SQLite then you might find it worthwhile using prepared statements to avoid having to compile the SQL query every time you execute it (which is expensive in CPU terms and you are running this on a fairly limited system).
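As a minimal sketch of that last point, assuming the sqlite3 C API and the table from the question (readTemperature here is a placeholder for the actual sensor read, not a real function):

#include <sqlite3.h>

// Prepare once, then bind/step/reset per sample; the table and column
// names follow the question's example.
void logSamples(sqlite3* db, int (*readTemperature)())
{
    sqlite3_stmt* stmt = 0;
    sqlite3_prepare_v2(db,
        "INSERT INTO DATAPOINTS (Temperature) VALUES (?);",
        -1, &stmt, 0);

    for (int i = 0; i < 10; ++i) {
        sqlite3_bind_int(stmt, 1, readTemperature());
        sqlite3_step(stmt);    // runs the insert
        sqlite3_reset(stmt);   // reuse the statement without recompiling the SQL
    }

    sqlite3_finalize(stmt);
}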
I'm coding a long-running, multi-threaded server in C++. It receives requests on a socket, does database lookups and returns responses on a socket.
The server reads various run information from a configuration file, including database connectivity parameters. I have to use a database abstraction class from the company's code library. I don't want to wait until trying to do the DB search to lazily instantiate the DB connection (due to complexity not shown here, and the need to exit with an error at startup if the DB connection cannot be made).
My problem is how to get the database connection information down into the search class without doing any number of "ugly" or bad OOP things that would technically work. I want to learn how to do this right way.
Is there a good design pattern for doing this? Should I be using the "Parameterize from Above" pattern? Am I missing some simpler Composition pattern?
// Read config file.
// Open DB connection using config values.
Server::process_request(string request, string &response) {
try {
Process process(request);
if (process.do_parse(response)) {
return REQ_OK;
} else {
// handle error
}
} catch (...) {
// handle exceptions
}
}
class Process : public GenericRequest {
public:
Process(string &input) : generic_process(input) {};
bool do_parse(string &output);
};
bool Process::do_parse(string &output) {
// Parse the input request.
Search search; // database search object
search.init( search parameters from parsing above );
output = format_response(search.get_results());
}
class Search {
// must use the Database library connection handle.
};
How do I get the DB connection from the Server class at top into the Search class instance at the bottom of the pseudo-code above?
It seems that the problem you are trying to solve is one of object dependencies, and it is well solved using dependency injection.
Your class Process requires an instance of Search, which must be configured somehow. Instead of having instances of Process allocate their own Search instance, it would be easier to have them receive a ready-made one at construction time. The Process class won't have to know about the Search configuration details, and thus an unnecessary dependency is avoided.
But then the problem cascades up to whichever object must create a Process, because now this one has to know that configuration detail! In your situation, it is not really a problem, since the Server class is the one creating Process instances, and it happens to know the configuration details for Search.
However, a better solution is to implement a specialized class - for instance DBService - which will encapsulate the DB details acquired from the configuration step and provide a method to get ready-made Search instances. With this setup, no other objects will depend on the Search class for its construction and configuration. As an added benefit, you can easily implement and inject a DBService mock object to help you build test cases.
class DBSearch : public Search {
    /* implements/extends the Search interface/class wrt the DB */
};
class DBService {
    /* the constructor reads configuration details somehow: command line, file */
    Search *newSearch(){
        return new DBSearch(config); // search object specialized on the DB
    }
};
The code above somewhat illustrates the solution. Note that the newSearch method is not constrained to build only a Search instance; it may build any object specializing that class (for example the DBSearch class above). The dependency on the concrete search implementation is thus removed from Process, which now only needs to know about the Search interface it actually manipulates.
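To make the injection concrete, here is a minimal sketch reusing the names from the pseudo-code; Search, DBService and format_response are assumed to exist as declared above:

#include <string>

// Process receives a ready-made Search instead of constructing one.
class Process {
public:
    explicit Process(Search* search) : search_(search) {}

    bool do_parse(std::string& output) {
        // ... parse the request, feed parameters into *search_ ...
        output = format_response(search_->get_results());
        return true;
    }

private:
    Search* search_;  // only the Search interface is visible here
};

// In Server, after the configuration step:
//   DBService dbService;                     // holds the DB details
//   Process process(dbService.newSearch());  // dependency injected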
The central element of good OOP design highlighted here is reducing coupling between objects, in order to reduce the amount of work needed when modifying or enhancing parts of the application.
Please look up dependency injection on SO for more information on that OOP design pattern.
Using Doctrine 2.1 (and Zend Framework 1.11, not that it matters for this matter), how can I do post-persist and post-update actions that involve re-saving to the DB?
For example, creating a unique token based on the just-generated primary key id, or generating a thumbnail for an uploaded image (which actually doesn't require re-saving to the DB, but still)?
EDIT - let's explain, shall we ?
The above is actually a question regarding two scenarios. Both scenarios relate to the following state:
Let's say I have a User entity. When the object is flushed after it has been marked to be persisted, it'll have the normal auto-generated id of MySQL - meaning running numbers normally beginning at 1, 2, 3, etc.
Each user can upload an image - which he will be able to use in the application - which will have a record in the db as well. So I have another entity called Image. Each Image entity also has an auto-generated id - same methodology as the user id.
Now - here is the scenarios:
When a user uploads an image, I want to generate a thumbnail for that image right after it is saved to the db. This should happen for every new or updated image.
Since we're trying to stay smart, I don't want the code to generate the thumbnail to be written like this:
$image = new Image();
...
$entityManager->persist($image);
$entityManager->flush();
callToFunctionThatGeneratesThumbnailOnImage($image);
but rather I want it to occur automatically on the persisting of the object (well, flush of the persisted object), like the prePersist or preUpdate methods.
Since the user uploaded an image, he gets a link to it. It will probably look something like: http://www.mysite.com/showImage?id=[IMAGEID].
This allows anyone to just change the imageid in this link, and see other user's images.
So in order to prevent such a thing, I want to generate a unique token for every image. Since it doesn't really need to be sophisticated, I thought about using the md5 value of the image id, with some salt.
But for that, I need to have the id of that image - which I'll only have after flushing the persisted object - then generate the md5, and then saving it again to the db.
Understand that the links for the images are supposed to be publicly accessible so I can't just allow an authenticated user to view them by some kind of permission rules.
You probably know already about Doctrine events. What you could do:
Use the postPersist event handler. That one occurs after the DB insert, so the auto-generated ids are available.
The EventManager class can help you with this:
class MyEventListener
{
public function postPersist(LifecycleEventArgs $eventArgs)
{
// in a listener you have the entity instance and the
// EntityManager available via the event arguments
$entity = $eventArgs->getEntity();
$em = $eventArgs->getEntityManager();
if ($entity instanceof User) {
// do some stuff
}
}
}
$eventManager = $em->getEventManager();
$eventManager->addEventListener(Events::postPersist, new MyEventListener());
Be sure to check, e.g., whether the User already has an Image; otherwise, if you call flush in the event listener, you might be caught in an endless loop.
Of course you could also make your User class aware of that image creation operation with an inline postPersist event handler, add @HasLifecycleCallbacks in your mapping, and then always flush at the end of the request, e.g. in a shutdown function, but in my opinion this kind of stuff belongs in a separate listener. YMMV.
If you need the entity id before flushing, just after creating the object, another approach is to generate the ids for the entities within your application, e.g. using UUIDs.
Now you can do something like:
class Entity {
public function __construct()
{
$this->id = uuid_create();
}
}
Now you have an id already set when you just do:
$e = new Entity();
And you only need to call EntityManager::flush at the end of the request.
In the end, I listened to @Arms, who commented on the question.
I started using a service layer for doing such things.
So now, I have a method in the service layer which creates the Image entity. After it calls the persist and flush, it calls the method that generates the thumbnail.
The Service Layer pattern is a good solution for such things.
I am thinking of using SQLite as a backend DB for a C++ application I am writing. I have read the relevant docs on both the Trolltech site and the SQLite site, but the information seems a little disjointed; there is no simple snippet that shows a complete CRUD example.
I want to write a set of helper functions to allow me to execute CRUD actions in SQLite easily, from my app.
The following snippet is pseudocode for the helper functions I envisage writing. I would be grateful for suggestions on how to "fill up" the stub functions. One thing that is particularly frustrating is that there is no clear mention in any of the docs of the relationship between a query and the database on which the query is being run - thus suggesting some kind of default connection/table.
In my application, I need to be able to explicitly specify the database on which queries are run, so it would be useful if any answers spell out how to explicitly specify the database/table involved in a query (or other database action for that matter).
My pseudocode follows below:
#include <boost/shared_ptr.hpp>
typedef boost::shared_ptr<QSqlDatabase> dbPtr;
dbPtr createConnection(const QString& conn_type = "QSQLITE", const QString& dbname = ":memory:")
{
    // QSqlDatabase::addDatabase() is static and returns the new connection
    dbPtr db(new QSqlDatabase(QSqlDatabase::addDatabase(conn_type)));
    if (db.get())
    {
        db->setDatabaseName(dbname);
        if (!db->open())
            db.reset();
    }
    return db;
}
bool runQuery(const QString& sql)
{
//How does SQLite know which database to run this SQL statement against ?
//How to iterate over the results of the run query?
}
bool runPreparedStmtQuery(const QString& query_name, const QString& params)
{
//How does SQLite know which database to run this SQL statement against ?
//How do I pass parameters (say a comma delimited list to a prepared statement ?
//How to iterate over the results of the run query?
}
bool doBulkInsertWithTran(const QString& tablename, const MyDataRows& rows)
{
//How does SQLite know which database to run this SQL statement against ?
//How to start/commit|rollback
}
In case what I'm asking is not clear, I am asking what would be the correct way to implement each of the above functions (possibly with the exception of the first - unless it can be bettered, of course).
[Edit]
Clarified the question by removing the requirement to explicitly specify a table (this is already done in the SQL query - I forgot; thanks for pointing that out, Tom).
By default, Qt uses the application's default database to run queries against. That is the database that was added using the default connection name. See the Qt documentation for more information. I am not sure what you mean by the default database table, since the table to operate on is normally specified in the query itself?
To answer your question, here is an implementation for one of your methods. Note that instead of returning a bool I would return a QSqlQuery instance instead to be able to iterate over the results of a query.
QSqlQuery runQuery(const QString& sql)
{
    // Implicitly uses the database that was added using QSqlDatabase::addDatabase()
    // using the default connection name.
    QSqlQuery query;
    query.exec(sql);
    return query;
}
You would use this as follows:
QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
db.setHostName("localhost");
db.setDatabaseName("data.db");
if (!db.open())
{
    // handle the connection error
}
QSqlQuery query = runQuery("SELECT * FROM user;");
while (query.next())
{
...
}
Note that it is also possible to explicitly specify for which database a query should be run by explicitly specifying the relevant QSqlDatabase instance as the second parameter for the QSqlQuery constructor:
QSqlDatabase myDb;
...
QSqlQuery query = QSqlQuery("SELECT * FROM user;", myDb);
...
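For completeness, here is a sketch of the two remaining helper stubs from the question, again using the default connection. It assumes MyDataRows can be represented as a QList<QVariantList>, one QVariantList per row, matching the placeholders in the statement:

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QVariant>

QSqlQuery runPreparedStmtQuery(const QString& sql, const QVariantList& params)
{
    QSqlQuery query;                 // default connection, as above
    query.prepare(sql);              // e.g. "SELECT * FROM user WHERE id = ?;"
    for (int i = 0; i < params.size(); ++i)
        query.bindValue(i, params.at(i));   // bind positionally
    query.exec();
    return query;                    // iterate with query.next()
}

bool doBulkInsertWithTran(const QString& insertSql,
                          const QList<QVariantList>& rows)
{
    QSqlDatabase db = QSqlDatabase::database();  // default connection
    if (!db.transaction())                       // begin the transaction
        return false;

    QSqlQuery query(db);
    query.prepare(insertSql);        // e.g. "INSERT INTO t (a, b) VALUES (?, ?);"
    for (int r = 0; r < rows.size(); ++r) {
        const QVariantList& row = rows.at(r);
        for (int i = 0; i < row.size(); ++i)
            query.bindValue(i, row.at(i));
        if (!query.exec()) {
            db.rollback();           // undo all inserts on any failure
            return false;
        }
    }
    return db.commit();
}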
I have a C++ application that uses ADO to talk to an Oracle database. I'm updating the application to support offline documents. I've decided to implement SQLite for the local side.
I've implemented a wrapper around the ADO classes that will call the appropriate code. However, ADO's way of adding/editing/deleting rows is a bit difficult to implement for SQLite.
For ADO I'd write something like:
CADODatabase db;
CADORecordset rs( &db );
db.Open( "connection string" );
rs.Open( "select * from table1 where table1key=123" );
if (!rs.IsEOF())
{
int value;
rs.GetFieldValue( "field", value );
if (value == 456)
{
rs.Edit();
rs.SetFieldValue( "field", 456 );
rs.Update();
}
}
rs.Close();
db.Close();
For this simple example I realize that I could have just issued an update, but the real code is considerably more complex.
How would I get calls between the Edit() and Update() to actually update the data? My first thought is to have the Edit() construct a separate query and the Update() actually run it, but I'm not sure what fields will be changed nor what keys from the table to limit an update query to.
" but I'm not sure what fields will be changed nor what keys from the table to limit an update query to."
How about just selecting ROWID along with the rest of the fields, and then building an UPDATE based on that?
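A minimal sketch of that approach, with a hypothetical recordset class: Edit() starts tracking, SetFieldValue() records only the fields that changed, and Update() builds an UPDATE keyed on the ROWID that was selected along with the data:

#include <map>
#include <sstream>
#include <string>

// Assumes the SELECT included ROWID, e.g.
//   SELECT ROWID, * FROM table1 WHERE table1key = 123;
class SQLiteRecordset {
public:
    void Edit() { dirty_.clear(); }              // begin tracking changes

    void SetFieldValue(const std::string& field, int value) {
        std::ostringstream os;
        os << value;
        dirty_[field] = os.str();                // remember changed fields only
    }

    // Build "UPDATE table SET f1 = v1, ... WHERE ROWID = n".
    std::string Update(const std::string& table, long long rowid) const {
        std::ostringstream sql;
        sql << "UPDATE " << table << " SET ";
        bool first = true;
        for (std::map<std::string, std::string>::const_iterator it = dirty_.begin();
             it != dirty_.end(); ++it) {
            if (!first) sql << ", ";
            sql << it->first << " = " << it->second;
            first = false;
        }
        sql << " WHERE ROWID = " << rowid << ";";
        return sql.str();                        // hand the text off to sqlite3_exec
    }

private:
    std::map<std::string, std::string> dirty_;   // field -> new value
};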