Designing a class for handling multiple communication protocols - C++

I am developing a C++ application which should handle multiple communication protocols (Ethernet, Serial, etc.). Each of the communication protocols is handled as a specific class.
In order to expose as little information as possible about the internal structure and organization of these classes and protocols, I would like to somehow wrap all of this functionality and provide a somewhat generic API for sending data over a selected protocol.
Basically, this is what the API should provide (the parameters are not restricted to these, but this is the general idea):
bool sendData(uint8_t* buffer, const size_t& bufferSize);
void receiveData(uint8_t* dataBuffer, size_t& bufferSize);
What is the best way to create a generic API for the said functionality, and if possible involve some design pattern?
Regards.

What is the best way to create a generic API for the said
functionality, and if possible involve some design pattern?
The Strategy Pattern looks suitable in this scenario.
First, define an interface for all your distinct communication strategies, Communication:
class Communication {
public:
virtual ~Communication() = default;
virtual bool sendData(uint8_t* buffer, const size_t& bufferSize) = 0;
virtual void receiveData(uint8_t* dataBuffer, size_t& bufferSize) = 0;
};
Then, your concrete implementations – i.e., strategies – should derive from this interface:
class EthernetCommunication: public Communication {
public:
// ...
bool sendData(uint8_t*, const size_t&) override;
void receiveData(uint8_t*, size_t&) override;
};
class SerialCommunication: public Communication {
public:
// ...
bool sendData(uint8_t*, const size_t&) override;
void receiveData(uint8_t*, size_t&) override;
};
class CarrierPigeon: public Communication {
public:
// ...
bool sendData(uint8_t*, const size_t&) override;
void receiveData(uint8_t*, size_t&) override;
};
The client code will work with a (pointer to) Communication – i.e., the interface – rather than directly with a particular implementation like EthernetCommunication, SerialCommunication, or CarrierPigeon. Thus, the code follows the "program to an interface, not to an implementation" advice. For example, you may have a factory function like:
std::unique_ptr<Communication> CreateCommunication();
This factory function returns one of the strategies above. Which strategy to return can be determined at run time.
std::unique_ptr<Communication> com = CreateCommunication();
// send data regardless of a particular communication strategy
com->sendData(buffer, bufferSize);
This way, the code above isn't coupled to any particular implementation, but only to the interface Communication which is common to all the different possible communication strategies.
If the different communication strategies don't need per-instance data, just having two callbacks instead of an object will do:
using data_sender_t = bool (*)(uint8_t*, const size_t&);
using data_receiver_t = void (*)(uint8_t*, size_t&);
// set these function pointers to the strategy to use
data_sender_t data_sender;
data_receiver_t data_receiver;

Related

which is better, dynamic binding or interface class when implementing a callback interface

In my case, I have to provide a callback interface; there are two solutions:
case 1, interface class
class interface {
public:
virtual void callback(param_t params);
};
case 2:
class dynamic_binding_interface {
public:
std::function<void(param_t params)> callback;
};
what I'm worried about case 2 is:
in my case, I don't need to change the implementation of the callback, so it's actually a one-time bind
it could be difficult to debug, since std::function cannot hold param names
the points I'm worried about in case 1 are:
complicated inheritance in the future
I must create a new class to implement interface each time
so, any suggestions? thanks in advance
Using a callback via an abstract class (interface) and using std::function each have advantages and disadvantages. It's also a matter of opinion and preference.
Having said that, I'll try to demonstrate two extreme cases where I believe one or the other is better suited.
Using an interface:
The main advantage of using an interface is that you can have all the callbacks in one place. If you have some server class with some methods (the incoming interface), it's convenient to think of the callbacks (where the server notifies of certain events) as an outgoing interface and treat them as one entity. It also allows, with the usage of pure virtual methods, forcing the client to implement them (and therefore be aware of the various notifications).
Typical example:
struct SomeServerCallback
{
virtual void NotifyX() = 0;
virtual void NotifyY() = 0;
virtual void NotifyZ() = 0;
// ...
};
class SomeServer
{
public:
SomeServer(SomeServerCallback * pCallback) : m_pCallback(pCallback) {}
void Do1() { /*...*/ }
void Do2() { /*...*/ }
// ...
protected:
SomeServerCallback * m_pCallback;
};
Using a std::function:
On the other hand, if you have another server class that does not need to notify the client of various events, but does need a callback for printing messages, it can be more convenient to use std::function rather than define an interface and force clients to derive from it. It allows the client to choose how to define the callback (using a lambda, std::bind with a class method, etc.).
Typical example for this case:
#include <functional>
#include <string>
class SomeOtherServer
{
public:
using MyPrintCallback = std::function<void(std::string const&)>;
SomeOtherServer(MyPrintCallback printCallback) : m_printCallback(printCallback) {}
void Do1() { /*...*/ }
void Do2() { /*...*/ }
// ...
protected:
MyPrintCallback m_printCallback;
};
Using these solutions involves different overheads (performance-wise and otherwise), but I believe neither is preferable to the other in principle.

Wrapper class design and dependency injection

I have a simple FTP class that take care of downloading and uploading through cURL libraries:
class FTPClient
{
public:
explicit FTPClient(const std::string& strIPAddress);
virtual ~FTPClient();
bool DownloadFile(const std::string& strRemoteFile, const std::string& strLocalFile);
bool UploadFile(const std::string& strLocalFile, const std::string& strRemoteFile);
private:
static size_t WriteToFileCallBack(void *ptr, size_t size, size_t nmemb, FILE *stream);
static size_t ReadFromFileCallback(void* ptr, size_t size, size_t nmemb, FILE *stream);
std::string m_strUser;
std::string m_strPass;
std::string m_strIPAddress;
std::string m_strPort;
mutable CURL* m_objCurlSession;
};
I've asked for advice on how it could be implemented and structured better, since it's the base and core of a project and is going to be used in many parts.
I've been told to use a cURLWrapper class to wrap all of the cURL calls (curl_easy_setopt(..)), but then I've been told to create an interface for the FTP class, a cURLWrapper that just calls the FTP methods, and then a concrete class. Still, it's too abstract for me, and I don't understand the best way to implement it or which path to follow.
How would you approach this small structure?
Define a simple interface for your FTP class:
class IFTPClient
{
public:
virtual ~IFTPClient() = default;
virtual bool DownloadFile(const std::string& strRemoteFile, const std::string& strLocalFile) = 0;
virtual bool UploadFile(const std::string& strLocalFile, const std::string& strRemoteFile) = 0;
};
I assume your static callback methods are calling back into some class instance rather than into a singleton? That's fine then. Derive your class from the interface:
class FTPClient : public IFTPClient
{
...
I notice that you have the IP address passed into the constructor and other parameters (user name, password, port, etc.) defined elsewhere. That does not appear to be quite consistent yet. You would need to refactor that so these parameters can be set through interface methods, or add them to the upload/download methods.
Construct your FTPClient object(s) before you need it elsewhere and then only pass ("inject") the interface into objects that want to use the FTPClient. For unit testing without the use of the actual FTPClient, construct a mock object derived from the same interface and inject that into other objects instead.
Other objects simply make use of the functionality exposed in the interface and don't need to know or worry about its internal implementation; if you decide to use curl or something else is then entirely up to FTPClient.
That's it in a nutshell; you may want to search for dependency injection and frameworks on the Internet, but you don't need a framework to follow dependency injection principles and, in my opinion, they can be overkill for simple projects.

Designing C++ classes with partly common implementations

I am designing a C++ module. This module can receive 3 different types of requests: Request-A, Request-B and Request-C.
For each type, I have a corresponding handler class: RequestHandler-A, RequestHandler-B and RequestHandler-C (all of these implement the IRequestHandler interface).
Each handler has to carry out certain actions to fulfill its request.
For example, RequestHandler-A needs to perform these in sequence:
Action-1
Action-2
Action-3
Action-4
Action-5
RequestHandler-B needs to perform these in sequence:
Action-1
Action-3
Action-5
RequestHandler-C needs to perform these in sequence:
Action-4
Action-5
The result of one action is used by the next one.
I am struggling to design these classes so that common action implementations are reused across handlers.
Are there any design patterns that can be applied here? Maybe Template method pattern could be a possibility but I am not sure.
Any suggestions would be greatly appreciated.
PS: to make things more interesting, there is also a requirement where, if Action-2 fails, we should retry it with different data. But maybe I am thinking too far ahead.
"Common implementations" means that your solution does not have anything to do with inheritance. Inheritance is for interface reuse, not implementation reuse.
If you find that you have common code, just use shared functions:
void action1();
void action2();
void action3();
void action4();
void action5();
struct RequestHandlerA : IRequestHandler {
virtual void handle( Request *r ) {
action1();
action2();
action3();
action4();
action5();
}
};
struct RequestHandlerB : IRequestHandler {
virtual void handle( Request *r ) {
action1();
action3();
action5();
}
};
struct RequestHandlerC : IRequestHandler {
virtual void handle( Request *r ) {
action4();
action5();
}
};
Assuming that the common functions are just internal helpers, you probably want to make them static (or use an anonymous namespace) to get internal linkage.
Are you looking for something like this?
#include <iostream>
using namespace std;
class Interface{
public:
void exec(){
//prepare things up
vExec();
//check everything is ok
};
virtual ~Interface(){}
protected:
virtual void vExec() = 0;
virtual void Action0() = 0;
virtual void Action1(){}
void Action2(){}
};
void Interface::Action0(){
}
void Action3(){}
class HandlerA : public Interface{
protected:
virtual void vExec(){
Action0();
Action1();
Action3();
}
virtual void Action0(){
}
};
class HandlerB : public Interface{
protected:
virtual void vExec(){
Action0();
Action1();
Action2();
Action3();
}
virtual void Action0(){
Interface::Action0();
}
};
int main()
{
Interface* handler = new HandlerA();
handler->exec();
HandlerB b;
b.exec();
delete handler;
}
As you can see the actions can be virtual members, non-virtual members, free functions, or whatever you might think of, depending on what you need.
The "additional" feature of feeding the actions with different data can be performed in exec() (if it is generic) or in vExec() (if it is handler specific). If you give us more details, I can modify the example accordingly.
Also, you can make vExec public and get rid of exec. The one in the example is just a practice I like most (making interface non-virtual and virtual functions non-public).
You can have one base class which implements the 5 actions and have the handlers derive from it.
If the actions are sufficiently isolated from each other, you can probably separate them out into individual functions or classes too and just have the handler call those.
Have you considered the Chain of Responsibility pattern, combined with the Command pattern?
http://en.wikipedia.org/wiki/Command_pattern
It is a time proven pattern that promotes loose coupling among handler objects and the requests(commands) they receive.
What you could do is translate the request objects to act as command objects. You then specify which types of commands each of your handlers can undertake. You can then pass the command to the handlers and have them pass the command forward if they cannot handle it. If a handler can handle the command, it is processed through each of its respective actions. You can then have each logical action reside within the handler as objects themselves, utilizing composition.

Best way to expose API from a library

I am designing a Win32 library to parse the contents of a file (columns and values) and store it internally in a data structure (a map). Now I need to expose APIs so that the consumer can call them to get the results.
The file may have different formats, e.g. FM1, FM2, etc. The consumer may query like:
FM1Provider.GetRecords("XYZ");
FM2Provider.GetRecords("XYZ");
What i am planning to do is to have a CParser class that does all the parsing and expose the class.
class CParser
{
bool LoadFile(string strFile);
Map<string,string> GetFM1Records(string key);
Map<string,string> GetFM2Records(string key);
};
or
class CResultProvider
{
virtual Map<string,string> GetRecords(string key) = 0;
};
class CFM1ResultProvider : public CResultProvider
{
Map<string,string> GetRecords(string key);
};
class CFM2ResultProvider : public CResultProvider
{
Map<string,string> GetRecords(string key);
};
class CParser
{
bool LoadFile(string strFile);
CResultProvider& GetFM1ResultProvider();
CResultProvider& GetFM2ResultProvider();
};
Please suggest which one of these approaches is correct and scalable, considering I am developing a library.
Your component seems to be dealing with two problems: parsing and storing. It is a good design practice to separate these into different components so that they can be used independently.
I would suggest you provide the parser only with callbacks for parsed data. This way the user can choose the most suitable container for her application, or may choose to apply and discard read data without storing it.
E.g.:
namespace my_lib {
struct ParserCb {
virtual void on_column(std::string const& column) = 0;
virtual void on_value(std::string const& value) = 0;
protected:
~ParserCb() {} // no ownership through this interface
};
void parse(char const* filename, ParserCb& cb);
} // my_lib
BTW, prefer using namespaces instead of prefixing your classes with C.
Assuming the client would only have to call GetRecords once and then work with the map, I prefer the first approach because it is simpler.
If the client has to reload the map in different places in his code, the second approach is preferable, because it enables the client to write his code against one interface (CResultProvider). Thus, he can easily switch the file format simply by selecting a different implementation (there should be exactly one place in his code where the implementation is chosen).

Inheritance hierarchy vs. multiple inheritance (C++)

Well, I was thinking about a design decision for the past few days and since I still cannot favor one over the other I thought maybe someone else has an idea.
The situation is the following: I have a couple of different interface classes abstracting several communication devices. Since those devices differ in their nature they also differ in the interface and thus are not really related. Lets call them IFooDevice and IBarDevice. More device types may be added over time. The language is C++.
Since other components (called clients from now on) might want to use one or more of those devices, I decided to provide a DeviceManager class to handle access to all available devices at runtime. Since the number of device types might increase, I would like to treat all devices equally (from the managers point of view). However, clients will request a certain device type (or devices based on some properties).
I thought of two possible solutions:
The first would be some kind of inheritance hierarchy. All devices would subclass a common interface IDevice which would provide the (virtual) methods necessary for management and device query (like getProperties(), hasProperties(), ...). The DeviceManager then has a collection of pointers to IDevice and at some point a cast from Base to Derived would be necessary - either with a template method in the manager or after the request on the client's side.
From a design point of view, I think it would be more elegant to separate the concerns of managing a device and the interface of the specific device itself. Thus it would lead to two unrelated interfaces: IManagedDevice and e.g. IFooDevice. A real device would need to inherit from both in order to "be" of a specific device type and to be manageable. The manager would only manage pointers to IManagedDevice. However, at some point there will be the need to cast between now unrelated classes (e.g. from IManagedDevice to IFooDevice) if a client wants to use a device provided by the manager.
Do I have to choose the lesser of two evils here? And if so which one would it be? Or do I miss something?
Edit:
About the "managing" part. The idea is to have a library providing a variety of communication devices that different (client) applications can use and share. Managing merely comes down to the storage of instances, methods for registering a new device, and looking up a certain device. The responsibility for choosing the "right" device for the task is up to the client side, because it knows best which requirements it puts on the communication. In order to reuse and thus share available devices (and by that I mean real instances, not just classes) I need a central access point to all available devices. I'm not too fond of the manager itself, but it's the only thing I could come up with in this case.
I think the visitor pattern is a better choice for this.
I think what Tom suggested might be altered a bit to suit your needs:
class IManagedDevice
{
IDevice* myDevice;
/* Functions for managing devices... */
};
In this case IDevice is an empty interface that all devices inherit from. It gives no real benefit; it just makes the class hierarchy handling slightly more bearable.
Then, you can ask for the specific device (IFooDevice or IBarDevice), probably via some sort of device type ID.
If all you need is to have a common code to manage the devices, and then pass each device to the appropriate place I think you can get away with something like this:
class IDevice
{
virtual void Handle() = 0;
};
class IFooDevice : public IDevice
{
virtual void Handle()
{
this->doFoo();
}
virtual void doFoo() = 0;
}
class IBarDevice : public IDevice
{
virtual void Handle()
{
this->doBar();
}
virtual void doBar() = 0;
}
With the manager calling the Handle function.
I think I'd go for a simple solution of having a base class for Device that takes care of registering the device in the global device list and then static methods for looking them up. Something like:
struct Device
{
static Device *first; // Pointer to first available device
Device *prev, *next; // Links for the doubly-linked list of devices
Device() : prev(0), next(first)
{
if (next) next->prev = this;
first = this;
}
virtual ~Device()
{
if (next) next->prev = prev;
if (prev) prev->next = next; else first = next;
}
private:
// Taboo - the following are not implemented
Device(const Device&);
Device& operator=(const Device&);
};
Then you can just derive all devices from Device, and they will automatically be placed in the global list on construction and removed from the global list on destruction.
All your clients will be able to visit the list of all devices by starting from Device::first and following device->next. By doing a dynamic_cast<NeededDeviceType*>(device) clients can check if the device is compatible with what they need.
Of course any method that is implemented in every device type (e.g. a description string, a locking method to ensure exclusive use by one client and the like) can be exported also at the Device level.
When communicating with devices, I separated the device and the communication manager completely.
I had a simple communication manager that was based on Boost.Asio. The interface was something like
/** An interface to basic communication with a decive.*/
class coms_manager
{
public:
virtual
~coms_manager();
/** Send a command. */
virtual
void
send(const std::string& cmd) = 0;
/** Receive a command.
* @param buffsize The number of bytes to receive.
* @param size_exactly True if exactly buffsize bytes are to be received. If false, then fewer bytes may be received.
*/
virtual
std::string
recv( const unsigned long& buffsize = 128,
const bool& size_exactly = false) = 0 ;
/** Timed receive command.
* @param buffsize The number of bytes to receive.
* @param seconds The number of seconds in the timeout.
* @param size_exactly True if exactly buffsize bytes are to be received. If false, then fewer bytes may be received.
*/
virtual
std::string
timed_recv( const unsigned long& buffsize = 128,
const double& seconds = 5,
const bool& size_exactly = false) = 0;
};
I then implemented this interface for TCP (Ethernet) and serial communications.
class serial_manager : public coms_manager {};
class ethernet_manager : public coms_manager {};
Each of the devices then contained (or pointed to), rather than inherited, a coms_manager object.
For example:
class Oscilloscope
{
void send(const std::string& cmd)
{
m_ComsPtr->send(cmd);
}
private:
coms_manager* m_ComsPtr;
};
You can then swap around the communication method by changing what the pointer points to.
For me, this didn't make much sense (the Oscilloscope was EITHER attached via serial OR via Ethernet), so I actually opted for:
template<class Manager>
class Oscilloscope
{
void send(const std::string& cmd)
{
m_Coms.send(cmd);
}
private:
Manager m_Coms;
};
and I now use
Oscilloscope<serial_manager> O1("/dev/tty1"); // the serial port
Oscilloscope<ethernet_manager> O2("10.0.0.10"); // the IP address
which makes more sense.
As for your suggestion to have a generic device interface: I started with that too, but then wasn't sure of its utility. I always wanted to know exactly what equipment I was sending a command to; I neither needed nor wanted to work through an abstract interface.
At first glance, the first approach seems fine to me if all devices need to be managed and nothing else can be done with an unknown device. The metadata for a general device (e.g. name, ...) is typically the data one needs for managing devices.
However, if you need to separate the interface between the management and the device functionality, you can use virtual inheritance.
IManagedDevice and IFooDevice are both interfaces of the same concrete device, so they both have a common virtual base IDevice.
Concretely:
#include <cassert>
class IDevice {
public:
// must be polymorphic, a virtual destructor is a good idea
virtual ~IDevice() {}
};
class IManagedDevice : public virtual IDevice {
// management stuff
};
class IFooDevice : public virtual IDevice {
// foo stuff
};
class ConcreteDevice : public IFooDevice, public IManagedDevice {
// implementation stuff
};
int main() {
ConcreteDevice device;
IManagedDevice* managed_device = &device;
IFooDevice* foo_device = dynamic_cast<IFooDevice*>(managed_device);
assert(foo_device);
return 0;
}