I'm learning C++ and have a question about classes and wrappers. I'm writing an application for a Raspberry Pi. I have a class called SensorClass whose methods read data from various sensors attached to the board.
class SensorClass {
public:
    SensorClass();
    virtual ~SensorClass();
    int getTemperature();
    int getPressure();
};
I want to write the data to a local sqlite database when it is read. On the SQLite website there are a number of wrapper classes.
SQLite wrappers
I'm wondering if I should use one of these to, for example, insert data into the database when it has been read.
I'm thinking then I would be separating the code and just calling for example the SQLite insert method in the getTemperature() function. Would this be a good idea? Which wrapper should I use?
Example SQLite wrapper class
Alternatively I could hard code the database operations in the getTemperature() method like this.
int SensorClass::getTemperature() {
    // read temperature
    // insert into database

    /* Create SQL statement */
    const char *sql = "INSERT INTO DATAPOINTS (Temperature) "
                      "VALUES (15);";

    /* Execute SQL statement */
    char *zErrMsg = nullptr;
    int rc = sqlite3_exec(db, sql, callback, 0, &zErrMsg);
    // return the temperature reading here
}
Thanks for your advice
It would generally be better to separate the two things, i.e. make the sensor class do the job of sensing well, and only that.
Then have a separate class that does the job of logging sensor data to the database well. You may find it is better to insert entire rows into the database in one go. And you may also decide that you want to only log data periodically at a fixed sampling rate.
Then in your main application loop / via an event driven timer, you can do measurements and record data as separate steps.
e.g.
void APP_tick(void)
{
    SensorValues values = sensors.readValues();
    logger.writeValues(values);
}
By separating responsibility, you can then change the logger class out easily - you may end up deciding that you don't want to use a database and would rather just log the data into flat files in order to use less disk space and improve performance.
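For illustration, the separation could look something like this - a minimal sketch, where SensorValues, SensorLogger and SqliteLogger are made-up names for this example rather than anything from your code:

#include <sqlite3.h>

// Hypothetical value struct and logger interface, just to show the shape of the split.
struct SensorValues {
    int temperature;
    int pressure;
};

class SensorLogger {
public:
    virtual ~SensorLogger() {}
    virtual void writeValues(const SensorValues &values) = 0;
};

// One possible implementation backed by SQLite; a flat-file logger could
// implement the same interface and be swapped in without touching the sensor code.
class SqliteLogger : public SensorLogger {
public:
    explicit SqliteLogger(sqlite3 *db) : db_(db) {}
    void writeValues(const SensorValues &values) override;
private:
    sqlite3 *db_;
};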
If using SQLite then you might find it worthwhile using prepared statements to avoid having to compile the SQL query every time you execute it (which is expensive in CPU terms and you are running this on a fairly limited system).
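A rough sketch of what that could look like with the SQLite C API, reusing one prepared INSERT for every sample (the table and column names simply mirror your example, and error handling is omitted):

#include <sqlite3.h>

// Prepared once (e.g. when the logger is constructed) and reused for every row.
sqlite3_stmt *prepareInsert(sqlite3 *db) {
    sqlite3_stmt *stmt = nullptr;
    sqlite3_prepare_v2(db,
        "INSERT INTO DATAPOINTS (Temperature, Pressure) VALUES (?1, ?2);",
        -1, &stmt, nullptr);
    return stmt;
}

// Called for each sample: bind the fresh values, execute, then reset for reuse.
void insertRow(sqlite3_stmt *stmt, int temperature, int pressure) {
    sqlite3_bind_int(stmt, 1, temperature);
    sqlite3_bind_int(stmt, 2, pressure);
    sqlite3_step(stmt);
    sqlite3_reset(stmt);
}

// Call sqlite3_finalize(stmt) once when the logger shuts down.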
Related
For the past couple of years, I've been maintaining a large C++ application (v100) that uses some form of non-ADO database connection, and it works great.
During this time, getting a resultset from the database has been quite simple. I instantiate the return class with the database object, then Open a query.
CUpdates cUpdates(GetDatabase());
CString strQuery = "SELECT * FROM Updates";
cUpdates.Open(-1, strQuery);
Just that simple: cUpdates is filled with records.
NOW however, we want to execute a stored procedure, and return the results from it. But no matter what I try, even changing 'EXEC' to 'CALL', the call fails. Is there a similar simple method for executing a stored procedure, and returning the results, without having to totally rewrite how the application handles the database connection and returning of data?
strQuery.Format("EXEC dbo.[GetUpdates_ComputerName] '%s', %d, %d", GetWorkstationName(), m_bRetainUpdates, m_bScheduleUpdate);
cUpdates.Open(-1, strQuery); //FAILS ON EXEC
(I have tested the EXEC statement in SSMS, and it works fine)
We do also use another SQL command strictly for executing statements, but I see no way of returning data with it. Maybe there is a similar command I don't know of?
GetDatabase()->ExecuteSQL(strQuery);
Note: for the record, I am a C# developer (since 1.0 beta). My only experience in C++ has been learning on the fly over the past two years, occasionally maintaining a few of our massive systems.
It would seem that CRecordset cannot handle an EXEC statement inside of it. So we converted the new stored procedure to a table-valued function so I can use a SELECT instead... which works properly (though I'd rather use the stored procedure).
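For reference, the calling side could then look something like this (just a sketch, assuming the function keeps the same name and parameters as the stored procedure above):

// A table-valued function can be opened with a plain SELECT, which CRecordset handles.
strQuery.Format("SELECT * FROM dbo.[GetUpdates_ComputerName]('%s', %d, %d)",
                GetWorkstationName(), m_bRetainUpdates, m_bScheduleUpdate);
cUpdates.Open(-1, strQuery);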
I want to bulk-import Doctrine entities from an XML file.
The XML file can be very large (up to 1 million entities), so I can't persist all my entities the traditional way:
$em->beginTransaction();
while ($entity = $xmlReader->readNextEntity()) {
    $em->persist($entity);
}
$em->flush();
$em->commit();
I would soon exceed my memory limit, and Doctrine is not really designed to handle that many managed entities.
I don't need to track changes to the persisted entities, just to persist them; therefore I don't want them to be managed by the EntityManager.
Is it possible to persist entities without getting them managed by the EntityManager?
The first option that comes to my mind is to detach it immediately after persisting it:
$em->beginTransaction();
while ($entity = $xmlReader->readNextEntity()) {
    $em->persist($entity);
    $em->flush($entity);
    $em->detach($entity);
}
$em->commit();
But this is quite expensive in Doctrine, and would slow down the import.
The other option would be to directly insert the data into the database using the Connection object and a prepared statement, but I like the abstraction of the entity and would ideally like to store the object directly.
Instead of using detach and flush after each insert, you can call clear (which detaches all entities from the manager) and flush in batches, which should be significantly faster:
Bulk inserts in Doctrine are best performed in batches, taking advantage of the transactional write-behind behavior of an EntityManager. The following code shows an example for inserting 10000 objects with a batch size of 20. You may need to experiment with the batch size to find the size that works best for you. Larger batch sizes mean more prepared statement reuse internally but also mean more work during flush.
https://doctrine-orm.readthedocs.org/projects/doctrine-orm/en/latest/reference/batch-processing.html
If possible, I recommend avoiding transactions for bulk operations as they tend to slow things down:
//$em->beginTransaction();
$i = 0;
while ($entity = $xmlReader->readNextEntity()) {
    $em->persist($entity);
    if (++$i % 20 == 0) {
        $em->flush();
        $em->clear(); // detaches all entities
    }
}
$em->flush(); // persist objects that did not make up an entire batch
$em->clear();
//$em->commit();
We have a data set that grows while the application is processing the data set. After a long discussion we have come to the decision that we do not want blocking or asynchronous APIs at this time, and we will periodically query our data store.
We thought of two options to design an API for querying our storage:
A query method returns a snapshot of the data and a flag indicating whether we might have more data. When we finish iterating over the last returned snapshot, we query again to get another snapshot for the rest of the data.
A query method returns a "live" iterator over the data, and when this iterator advances it returns one of the following options: Data is available, No more data, Might have more data.
We are using C++ and we borrowed the .NET style enumerator API for reasons which are out of scope for this question. Here is some code to demonstrate the two options. Which option would you prefer?
/* ======== FIRST OPTION ============== */
// similar to the familiar .NET enumerator.
class IFooEnumerator
{
public:
    // true --> A data element may be accessed using the Current() method
    // false --> End of sequence. Calling Current() is an invalid operation.
    virtual bool MoveNext() = 0;
    virtual Foo Current() const = 0;
    virtual ~IFooEnumerator() {}
};
enum class Availability
{
EndOfData,
MightHaveMoreData,
};
class IDataProvider
{
public:
    // Query params allow specifying the ID of the starting element. Here is the intended usage pattern:
    // 1. Call GetFoo() without specifying a starting point.
    // 2. Process all elements returned by IFooEnumerator until it ends.
    // 3. Check the availability.
    // 3.1 MightHaveMoreData --> Invoke GetFoo() again after some time by specifying the last processed element as the starting point
    //     and repeat steps (2) and (3)
    // 3.2 EndOfData --> The data set will not grow any more and we know that we have finished processing.
    virtual std::tuple<std::unique_ptr<IFooEnumerator>, Availability> GetFoo(query-params) = 0;
};
/* ====== SECOND OPTION ====== */
enum class Availability
{
HasData,
MightHaveMoreData,
EndOfData,
};
class IGrowingFooEnumerator
{
public:
    // HasData:
    //   We might access the current data element by invoking Current()
    // EndOfData:
    //   The data set has finished growing and no more data elements will arrive later
    // MightHaveMoreData:
    //   The data set will grow and we need to continue calling MoveNext() periodically (preferably after a short delay)
    //   until we get a "HasData" or "EndOfData" result.
    virtual Availability MoveNext() = 0;
    virtual Foo Current() const = 0;
    virtual ~IGrowingFooEnumerator() {}
};
class IDataProvider
{
public:
    virtual std::unique_ptr<IGrowingFooEnumerator> GetFoo(query-params) = 0;
};
Update
Given the current answers, I would like to add some clarification. The debate is mainly over the interface - its expressiveness and intuitiveness in representing queries for a growing data set that at some point in time will stop growing. The implementation of both interfaces is possible without race conditions (at least we believe so) because of the following properties:
The 1st option can be implemented correctly if the pair of the iterator + the flag represent a snapshot of the system at the time of querying. Getting snapshot semantics is a non-issue, as we use database transactions.
The 2nd option can be implemented given a correct implementation of the 1st option. The MoveNext() of the 2nd option will, internally, use something like the 1st option and re-issue the query if needed (a rough sketch of this follows below).
The data-set can change from "Might have more data" to "End of data", but not vice versa. So if we, wrongly, return "Might have more data" because of a race condition, we just get a small performance overhead because we need to query again, and the next time we will receive "End of data".
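To illustrate the second point, here is a rough sketch (an assumption on my part, not part of the original design) of how option 2's MoveNext() could be layered on top of option 1. It glosses over the fact that the two options declare different Availability enums, and it leaves the "start after the last processed element" bookkeeping as a placeholder:

class GrowingFooEnumerator : public IGrowingFooEnumerator
{
public:
    explicit GrowingFooEnumerator(IDataProvider &provider) : provider_(provider) {}

    Availability MoveNext() override
    {
        if (snapshot_ && snapshot_->MoveNext()) {
            return Availability::HasData;            // still consuming the current snapshot
        }
        if (noMoreDataWillArrive_) {
            return Availability::EndOfData;          // snapshot exhausted and the set stopped growing
        }
        // Snapshot exhausted: re-issue the query from the last processed element.
        auto result = provider_.GetFoo(/* start after the last processed element */);
        snapshot_ = std::move(std::get<0>(result));
        noMoreDataWillArrive_ = (std::get<1>(result) == Availability::EndOfData);
        if (snapshot_->MoveNext()) {
            return Availability::HasData;
        }
        return noMoreDataWillArrive_ ? Availability::EndOfData
                                     : Availability::MightHaveMoreData;
    }

    Foo Current() const override { return snapshot_->Current(); }

private:
    IDataProvider &provider_;                   // the option 1 provider
    std::unique_ptr<IFooEnumerator> snapshot_;  // the current option 1 snapshot
    bool noMoreDataWillArrive_ = false;
};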
"Invoke GetFoo() again after some time by specifying the last processed element as the starting point"
How are you planning to do that? If it's using the earlier-returned IFooEnumerator, then functionally the two options are equivalent. Otherwise, letting the caller destroy the "enumerator" and then, however long afterwards, call GetFoo() to continue iteration means you're losing your ability to monitor the client's ongoing interest in the query results. It might be that right now you have no need for that, but I think it's poor design to exclude the ability to track state throughout the overall result processing.
Whether the overall system will work at all really depends on many things (not going into details about your actual implementation):
No matter how you twist it, there will be a race condition between checking for "is there more data" and more data being added to the system, which means it may be pointless to try to capture the last few data items.
You probably need to limit the number of repeated runs for "is there more data", or you could end up in an endless loop of "new data came in while processing the last lot".
How easy it is to know if data has been updated: if all the updates are new items with new IDs that are sequentially higher, you can simply query "is there data above X", where X is your last ID. But if you are, for example, counting how many items in the data set have property Y set to value A, and data may be updated anywhere in the database at any time (e.g. a database of where taxis currently are, which gets updated via GPS every few seconds and has thousands of cars), it may be hard to determine which cars have had updates since the last time you read the database.
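As a sketch of the "data above X" polling, here is one possible shape (the table and column names, and the use of the SQLite C API, are purely illustrative and not your actual data store):

#include <sqlite3.h>
#include <cstdint>

// Returns the highest ID seen so far, so the next poll can continue from it.
int64_t pollNewItems(sqlite3 *db, int64_t lastSeenId)
{
    sqlite3_stmt *stmt = nullptr;
    sqlite3_prepare_v2(db,
        "SELECT id, payload FROM items WHERE id > ?1 ORDER BY id;",
        -1, &stmt, nullptr);
    sqlite3_bind_int64(stmt, 1, lastSeenId);

    while (sqlite3_step(stmt) == SQLITE_ROW) {
        lastSeenId = sqlite3_column_int64(stmt, 0);
        // process the row here...
    }
    sqlite3_finalize(stmt);
    return lastSeenId;
}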
As to your implementation, in option 2 I'm not sure what you mean by the MightHaveMoreData state - either it has more data, or it hasn't, right? Repeated polling for more data is a bad design in this case, given that you will never be able to say with 100% certainty that there hasn't been "new data" provided in the time it took from fetching the last data until it was processed and acted on (displayed, used to buy shares on the stock market, stopped the train, or whatever it is that you want to do once you have processed your new data).
A read-write lock could help: many readers have simultaneous access to the data set, and only one writer.
The idea is simple:
- when you need read-only access, the reader takes a read lock, which can be shared with other readers but is exclusive with writers;
- when you need write access, the writer takes a write lock, which is exclusive with both readers and writers.
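With C++17 this maps naturally onto std::shared_mutex; a minimal sketch, assuming the data set lives in a container guarded by the lock:

#include <mutex>
#include <shared_mutex>
#include <vector>

class FooStore {
public:
    // Writer: exclusive access while appending new data.
    void append(const Foo &item) {
        std::unique_lock<std::shared_mutex> lock(mutex_);
        data_.push_back(item);
    }

    // Readers: shared access, so many readers may hold the lock at once.
    std::size_t size() const {
        std::shared_lock<std::shared_mutex> lock(mutex_);
        return data_.size();
    }

    Foo at(std::size_t index) const {
        std::shared_lock<std::shared_mutex> lock(mutex_);
        return data_[index];
    }

private:
    mutable std::shared_mutex mutex_;
    std::vector<Foo> data_;
};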
I'm coding a long-running, multi-threaded server in C++. It receives requests on a socket, does database lookups and returns responses on a socket.
The server reads various run information from a configuration file, including database connectivity parameters. I have to use a database abstraction class from the company's code library. I don't want to wait until the DB search is attempted to lazily instantiate the DB connection (due to complexity not shown here, and the need to exit with an error at startup if the DB connection cannot be made).
My problem is how to get the database connection information down into the search class without doing any number of "ugly" or bad OOP things that would technically work. I want to learn how to do this right way.
Is there a good design pattern for doing this? Should I be using the "Parameterize from Above" pattern? Am I missing some simpler Composition pattern?
// Read config file.
// Open DB connection using config values.

int Server::process_request(const string &request, string &response) {
    try {
        Process process(request);
        if (process.do_parse(response)) {
            return REQ_OK;
        } else {
            // handle error
        }
    } catch (...) {
        // handle exceptions
    }
}

class Process : public GenericRequest {
public:
    Process(const string &input) : GenericRequest(input) {}
    bool do_parse(string &output);
};

bool Process::do_parse(string &output) {
    // Parse the input request.
    Search search; // database search object
    search.init( /* search parameters from the parsing above */ );
    output = format_response(search.get_results());
    return true;
}

class Search {
    // must use the Database library connection handle.
};
How do I get the DB connection from the Server class at top into the Search class instance at the bottom of the pseudo-code above?
It seems that the problem you are trying to solve is one of object dependencies, and it is well solved using dependency injection.
Your class Process requires an instance of Search, which must be configured somehow. Instead of having instances of Process allocate their own Search instance, it would be easier to have them receive a ready-made one at construction time. The Process class won't have to know about the Search configuration details, and thus an unnecessary dependency is avoided.
But then the problem cascades up to whichever object must create a Process, because now this one has to know that configuration detail! In your situation, it is not really a problem, since the Server class is the one creating Process instances, and it happens to know the configuration details for Search.
However, a better solution is to implement a specialized class - for instance DBService, which will encapsulate the DB details acquired from the configuration step, and provide a method to get ready made Search instances. With this setup, no other objects will depend on the Search class for its construction and configuration. As an added benefit, you can easily implement and inject a DBService mockup object which will help you build test cases.
class DBSearch {
    /* implements/extends the Search interface/class wrt the DB */
};

class DBService {
public:
    /* constructor reads up configuration details somehow: command line, file */
    Search *newSearch() {
        return new DBSearch(config); // search object specialized on the DB
    }
};
The code above somewhat illustrates the solution. Note that the newSearch method is not constrained to build only a Search instance, but may build any object specializing that class (for example the DBSearch class above). The dependency is thus almost entirely removed from Process, which now only needs to know about the interface of Search it really manipulates.
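A hedged sketch of how the wiring might then look; Config, the error return value and the exact constructor signatures are assumptions made purely for this illustration:

#include <memory>
#include <string>
using std::string;

// Sketch only: the Server builds one DBService from the configuration and
// injects a ready-made Search into each Process.
class Server {
public:
    explicit Server(const Config &config) : dbService_(config) {}

    int process_request(const string &request, string &response) {
        std::unique_ptr<Search> search(dbService_.newSearch());
        Process process(request, std::move(search));   // dependency injected here
        if (process.do_parse(response)) {
            return REQ_OK;
        }
        // handle error as in the original pseudo-code
        return -1;
    }

private:
    DBService dbService_;   // built once from the configuration at startup
};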
The central element of good OOP design highlighted here is reducing coupling between objects, in order to reduce the amount of work needed when modifying or enhancing parts of the application.
Please look up dependency injection on SO for more information on that OOP design pattern.
I have a C++ application that uses ADO to talk to an Oracle database. I'm updating the application to support offline documents. I've decided to implement SQLite for the local side.
I've implemented a wrapper around the ADO classes that will call the appropriate code. However, ADO's way of adding/editing/deleting rows is a bit difficult to implement for SQLite.
For ADO I'd write something like:
CADODatabase db;
CADORecordset rs( &db );

db.Open( "connection string" );
rs.Open( "select * from table1 where table1key=123" );
if (!rs.IsEOF())
{
    int value;
    rs.GetFieldValue( "field", value );
    if (value == 456)
    {
        rs.Edit();
        rs.SetFieldValue( "field", 456 );
        rs.Update();
    }
}
rs.Close();
db.Close();
For this simple example I realize that I could have just issued an update, but the real code is considerably more complex.
How would I get calls between the Edit() and Update() to actually update the data? My first thought is to have the Edit() construct a separate query and the Update() actually run it, but I'm not sure what fields will be changed nor what keys from the table to limit an update query to.
"...but I'm not sure what fields will be changed nor what keys from the table to limit an update query to."
How about just selecting ROWID along with the rest of the fields and then building an update based on that?
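A rough sketch of that idea with the SQLite C API (the table and column names follow the question's example, and error handling is omitted):

#include <sqlite3.h>

// First select the ROWID along with the data so the wrapper remembers which row it read:
//   SELECT rowid, * FROM table1 WHERE table1key = 123
// Then Edit()/SetFieldValue()/Update() can collect the changed columns and build
// an UPDATE keyed on that ROWID, for example:
void updateField(sqlite3 *db, sqlite3_int64 rowid, int newValue) {
    sqlite3_stmt *stmt = nullptr;
    sqlite3_prepare_v2(db,
        "UPDATE table1 SET field = ?1 WHERE rowid = ?2;",
        -1, &stmt, nullptr);
    sqlite3_bind_int(stmt, 1, newValue);
    sqlite3_bind_int64(stmt, 2, rowid);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);
}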