How to invoke different operations by reusing a single API function? - C++

My API function execute_api() shall perform specific operations for:
method name: view / create / update / delete / update_all / delete_all
method type: get / post
I want my code to reuse the same logic in execute_api() but tailor the execution to implement any of the operations listed above. Here is a quick code snippet:
void execute_api()
{
    Request req;           // Request is a .oml file
    fill_request_vo(req);
    calculate_url(req);    // calculate the URL for the server to hit, depending on the selected operation
    calculate_header(req); // calculate the headers for the server to hit, depending on the selected operation
    // execute the services based on some conditions
    Response res;          // ResponseVO will be filled in the success scenario
    parse_response(res);   // does some logic with the response
}
Question: In short, I need a better way to reuse this function for any method type/name listed above, with only the parameters in Request.oml changing.
Below is my solution to this problem, but I need better suggestions. Please skip the rest if you find it lengthy.
My Solution:
Fill the method name in Request (method_name as an enum: view/create/update/delete/update_all/delete_all).
Then, depending on the method name selected, I calculate the URL and header:
switch(req.get_method_name())
{
    case add:
        // do something - calculate url
        break;
    case view:
        // do something - calculate url
        break;
    .....
}
I wanted to repeat this same design for headers, but depending on method_type (get/post):
switch(req.get_method_type())
{
    case get:
        // prepare headers accordingly
        break;
    case post:
        // prepare headers accordingly
        break;
    ...
}
Question: Is there a way to avoid this? We would need to keep adding switch cases for every new operation, hence I'm looking for some other suggestion.
Sorry for such a long query. Let me know if anything is not clear.

I think you could make use of the Command pattern here. See http://en.wikipedia.org/wiki/Command_pattern for more details.
The idea is that you create a base class, Request (maybe), that exposes a method execute(), which you could call.
You could extend the Request base class and specialize on method type etc. All the complex code related to header formation and URL can be encapsulated within the specialized Request classes, so that your execute_api() would look something like:
void execute_api() {
    ....
    Response& resp = request.execute(); // generic for any request type
}
This pattern ensures that your API dispatch code doesn't change when you add a new method type. You can extend this idea to the Response class as well.
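A minimal sketch of that hierarchy might look as follows (the concrete class names and the Response type here are illustrative assumptions, not code from the question):

#include <string>

// Hypothetical response type, for illustration only.
struct Response {
    bool success = false;
    std::string body;
};

// Command-pattern base: each concrete request knows how to build its
// own URL and headers, and how to execute itself.
class Request {
public:
    virtual ~Request() = default;
    virtual Response execute() = 0;
};

class ViewRequest : public Request {
public:
    Response execute() override {
        // build the GET url + headers for "view", call the server,
        // parse the response (details elided)
        Response res;
        res.success = true;
        return res;
    }
};

class CreateRequest : public Request {
public:
    Response execute() override {
        // build the POST url + headers for "create", call the server,
        // parse the response (details elided)
        Response res;
        res.success = true;
        return res;
    }
};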
Hope it helps.
Edit:
For request object creation, we could make use of the Simple Factory idiom (treated as distinct from the Factory Method pattern).
The switch-cases would then shift into this SimpleRequestFactory, thereby localizing the code changes required for any future enhancements.
The above code might look something like below:
void execute_api() {
    ....
    Request& request = SimpleRequestFactory::getInstance()->createRequestObj(); // singleton SimpleRequestFactory
    Response& resp = request.execute(); // generic for any request type
}
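A rough sketch of such a factory, building on the hypothetical Request classes sketched above (the enum and its values are assumptions mirroring the question's method names):

#include <memory>

// Hypothetical operation enum, mirroring the method names from the question.
enum class MethodName { view, create, update, del, update_all, delete_all };

class SimpleRequestFactory {
public:
    static SimpleRequestFactory* getInstance() {
        static SimpleRequestFactory instance; // Meyers-style singleton
        return &instance;
    }

    // The only switch left in the code base: map the selected
    // operation onto a concrete Request subclass.
    std::unique_ptr<Request> createRequestObj(MethodName name) {
        switch (name) {
            case MethodName::view:   return std::make_unique<ViewRequest>();
            case MethodName::create: return std::make_unique<CreateRequest>();
            // ... one case per remaining operation ...
            default:                 return nullptr;
        }
    }
};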
You could further improve it by introducing another level of abstraction, say by creating an AbstractRequestFactory class, which can have subclasses like PostRequestFactory, GetRequestFactory etc. So execute_api might look something like below:
void execute_api( AbstractRequestFactory& factory ) {
    ....
    Request& request = factory.getRequestObject(/*pass req params*/);
    Response& res = request.execute(); // generic for any request type
}
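A sketch of that extra abstraction level might look like the following (again, the concrete factory names are illustrative and build on the sketches above):

#include <memory>

// Abstract factory: one concrete factory per method type (get/post).
class AbstractRequestFactory {
public:
    virtual ~AbstractRequestFactory() = default;
    virtual std::unique_ptr<Request> getRequestObject(MethodName name) = 0;
};

class GetRequestFactory : public AbstractRequestFactory {
public:
    std::unique_ptr<Request> getRequestObject(MethodName name) override {
        // builds the GET-style requests (view, ...) with GET headers
        (void)name; // selection elided in this sketch
        return std::make_unique<ViewRequest>();
    }
};

class PostRequestFactory : public AbstractRequestFactory {
public:
    std::unique_ptr<Request> getRequestObject(MethodName name) override {
        // builds the POST-style requests (create, update, ...) with POST headers
        (void)name; // selection elided in this sketch
        return std::make_unique<CreateRequest>();
    }
};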

Related

Akka: get response as CompletionStage from actor

I am referring to the API
Patterns.ask(actor, msg, duration);
Here is a sample:
class MyActor extends AbstractBehavior<MyActor.Command> {
    interface Command {}
    enum SomeMessage implements Command { INSTANCE }

    public Receive<Command> createReceive() {
        return newReceiveBuilder().onMessage(SomeMessage.class, this::someMessage).build();
    }

    private Behavior<Command> someMessage(SomeMessage msg) {
        System.out.println("Dru lalal");
        return this;
    }
}
ActorRef<MyActor.Command> myActor = ...;
Future<Object> future = Patterns.ask(myActor, SomeMessage.INSTANCE, Duration.ofMillis(10000));
What is the Object going to be?
Obviously this won't compile. Some part of the picture is missing, but the javadoc doesn't state what.
The call to Patterns.ask is supposed to return a future with an object whose value is provided by the actor as business logic. But there is no business logic in the actor. I assume there is supposed to be some kind of convention or mapping between a method that returns a value and what Patterns.ask triggers.
The same is true for the return path. I am not able to define the receiver, since it expects me to return a Receive, not SomeObject, and thus the API won't let me bind the result to some message. The only thing I can do is manually pass a CompletableFuture:
CompletableFuture<MyObject> future = new CompletableFuture<>();
myActor.tell(new Message(future));

private Behavior<Command> someMessage(Message message) {
    var result = compute();
    message.future.complete(result);
    return this;
}
And here we go: I have to manage everything manually, and I also get issues with passing non-serializable messages, object lifecycles and so on.
The wrong objects were used. Instead of AskPattern.ask from the new Java typed DSL, I used the classic Patterns.ask.
Most of the time the new API objects have the same name but live in a different package, so I had gotten used to checking only the package name: while playing in the IDE they always appear next to each other, since the name is the same, and I learned to ignore the classic "com.akka" objects.
And here I fell into the trap: the object name is different and is not placed next to the "classic" package object in the IDE.

Storing temporary variables for use in Loopback 4 application

I have an authentication token I'd like to use in multiple Loopback 4 controllers. This token expires. Once expired I run some login logic to fetch a new token.
My issue is I'm not sure how or where to store this token.
So that I can use this throughout my application, I'm thinking to either save the token as an environment variable, e.g.
process.env.AUTH_TOKEN = 'TEST';
or use Loopback 4's Application-level context
https://loopback.io/doc/en/lb4/Context.html
Are these suitable solutions for storing this token? If not what would be an alternative solution?
In the case of using Context, how would I go about doing this using best practices?
Taking all the comments above into account, I would recommend you create a separate module which encapsulates the logic related to your authentication token and how you use it. I.e. the new module will be responsible for:
Fetching a new token when it is empty
Storing of the token
Refreshing the token when it has expired
Execution of the API calls (or whatever you do with that token; sorry, it was not clear from your description) - this can be moved to a separate module, but that is a different story
I imagine your module in JavaScript may look something like:
let authToken = "";

function makeAPICall(some, params) {
    if (!authToken) {
        acquireNewToken();
    }
    if (expired()) {
        refreshToken();
    }
    return "some_data"; // TODO: here you do what you want with your auth token and return some data
}

function acquireNewToken() {
    authToken = "new_token"; // TODO: put the logic to acquire a new token here
}

function refreshToken() {
    authToken = "new_token"; // TODO: put the logic to refresh a token here
}

function expired() {
    return false; // TODO: put the logic to check if the token has expired here
}

module.exports = {
    makeAPICall: makeAPICall
};
Then you can require the authModule in all your controllers and use it like below:
let authModule = require('./modules/authModule');
authModule.makeAPICall("some", "params");
I believe you will never need to expose the auth token to your controllers, since you can implement all the token-related logic within the authModule and only pass some parameters to the makeAPICall function to tell it what to do and which data to get. But in case you really need to expose it, you can change the authModule a bit (add a getToken function and add it to module.exports):
function getToken() {
return authToken;
}
module.exports = {
makeAPICall: makeAPICall,
getToken: getToken
};
Now, let's get back to your questions:
Are these suitable solutions for storing this token? If not what would be an alternative solution?
As proposed above, the solution is to store the token as a local variable in the scope of the custom module. Note that since Node.js caches modules, your authToken variable will be the same across all controllers (every new require returns exactly the same object with the same token).
If you do not want to require the authModule every time you need to access the token, you can also simply declare it as a global variable: global.AUTH_TOKEN = "";. Note that global variables have their drawbacks, such as causing implicit coupling between files. Here is a good article about when you should and should not use global variables: https://stackabuse.com/using-global-variables-in-node-js/
In the case of using Context, how would I go about doing this using best practices?
You can use the Loopback 4 Context as well; it will be almost equivalent to the solution with the custom authModule proposed above. The only difference is that with the custom module you can put a bit more custom logic there and avoid copy-pasting some code in your controllers. With Loopback 4 Context you can use the server-level context and store your token there, but you will still need some place where you get a new token and refresh it when it expires. Again, you can implement this logic in the custom authModule; i.e. you can keep that custom module and store the token in the Loopback Context at the same time. This is absolutely OK, but it makes the code a bit more complex from my point of view.

How to respond to a request with a dependency on another actor?

This might be a stupid question, but I need to ask since I haven't found an answer to it yet. I have used akka-http with routing, with the typical routing pattern of a path with a complete with an HttpResponse.
For instance:
~ path("reactJS") {
complete(
HttpResponse(entity = HttpEntity(ContentTypes.`text/html(UTF-8)`, Source.fromFile(reactJS).mkString))
)
}
However, I would like to have a separate actor that handles the file system, and then, in my mind, the server would pass the request over to the file handler actor. So my question is: how would one naturally complete a request with a dependency on another actor? I guess the server would then have routing looking like:
~ path("patient" / IntNumber) { index =>
FileHandler ! index
}
class FileHandler extends Actor{
def receive = {
case msg:Int => sender() ! file handling
}
and the serving of the request would have to be a case in the receive method of the server, right?
I'm looking into: How to respond with the result of an actor call?
I think your best bet is to use the ask pattern (?) and then use the onComplete directive within your routing tree to handle the Future that comes back from the ask. Taking your example and modifying it a bit, here is how you could leverage ask:
path("patient" / IntNumber) { index =>
import akka.pattern.ask
implicit val timeout = akka.util.Timeout(10 seconds)
val fut = (fileHandlerActor ? index).mapTo[String]
onComplete(fut){
case util.Success(fileData) =>
complete(HttpResponse(entity = HttpEntity(
ContentTypes.`text/html(UTF-8)`, fileData))
case util.Failure(ex) =>
complete(HttpResponse(StatusCodes.InternalServerError))
}
}
The assumption here is that your actor is responding with a String that is to become the HTTP response entity. Also, that timeout is a requirement of using ask, but you could very easily define it elsewhere in your code, as long as it's in scope here.

OOP: proper class design for database connection in derived child class?

I'm coding a long-running, multi-threaded server in C++. It receives requests on a socket, does database lookups and returns responses on a socket.
The server reads various run information from a configuration file, including database connectivity parameters. I have to use a database abstraction class from the company's code library. I don't want to wait until attempting the DB search to lazily instantiate the DB connection (due to complexity not shown here, and the need to exit with an error at startup if the DB connection cannot be made).
My problem is how to get the database connection information down into the search class without doing any number of "ugly" or bad OOP things that would technically work. I want to learn the right way to do this.
Is there a good design pattern for doing this? Should I be using the "Parameterize from Above" pattern? Am I missing some simpler Composition pattern?
// Read config file.
// Open DB connection using config values.
int Server::process_request(const string &request, string &response) {
    try {
        Process process(request);
        if (process.do_parse(response)) {
            return REQ_OK;
        } else {
            // handle error
        }
    } catch (...) {
        // handle exceptions
    }
}

class Process : public GenericRequest {
public:
    Process(const string &input) : GenericRequest(input) {}
    bool do_parse(string &output);
};

bool Process::do_parse(string &output) {
    // Parse the input request.
    Search search; // database search object
    search.init( /* search parameters from the parsing above */ );
    output = format_response(search.get_results());
    return true;
}

class Search {
    // must use the Database library connection handle.
};
How do I get the DB connection from the Server class at top into the Search class instance at the bottom of the pseudo-code above?
It seems that the problem you are trying to solve is one of object dependencies, which is well solved using dependency injection.
Your class Process requires an instance of Search, which must be configured somehow. Instead of having instances of Process allocate their own Search instance, it is easier to have them receive a ready-made one at construction time. The Process class then won't have to know about the Search configuration details, and an unnecessary dependency is avoided.
But then the problem cascades up to whichever object must create a Process, because now that one has to know the configuration detail! In your situation it is not really a problem, since the Server class is the one creating Process instances, and it happens to know the configuration details for Search.
However, a better solution is to implement a specialized class - for instance DBService - which encapsulates the DB details acquired from the configuration step and provides a method to hand out ready-made Search instances. With this setup, no other object depends on the Search class for its construction and configuration. As an added benefit, you can easily implement and inject a DBService mock object to help you build test cases.
class DBSearch : public Search {
    /* implements/extends the Search interface/class wrt the DB */
};

class DBService {
    DBConfig config; // hypothetical configuration type
public:
    /* constructor reads up configuration details somehow: command line, file */
    Search *newSearch() {
        return new DBSearch(config); // search object specialized on the DB
    }
};
The code above illustrates the solution. Note that the newSearch method is not constrained to building plain Search instances; it may build any object specializing that class (for example the DBSearch class above). The dependency is thereby almost removed from Process, which now only needs to know about the interface of Search that it actually manipulates.
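To make the wiring concrete, here is a hedged sketch of how Process could receive its Search dependency at construction time (it assumes the DBService above plus the GenericRequest and format_response names from the question; the exact signatures are assumptions):

#include <memory>
#include <string>
using std::string;

// Sketch only: Process receives a ready-made Search instead of creating one.
class Process : public GenericRequest {
public:
    Process(const string &input, std::unique_ptr<Search> search)
        : GenericRequest(input), m_search(std::move(search)) {}

    bool do_parse(string &output) {
        // parse the request, then run the injected search
        output = format_response(m_search->get_results());
        return true;
    }

private:
    std::unique_ptr<Search> m_search; // note: Search needs a virtual destructor
};

// In Server::process_request, with a DBService member m_dbService:
//     Process process(request, std::unique_ptr<Search>(m_dbService.newSearch()));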
The central element of good OOP design highlighted here is reducing coupling between objects, in order to reduce the amount of work needed when modifying or enhancing parts of the application.
Please look up dependency injection on SO for more information on that OOP design pattern.

Better structure for request based protocol implementation

I am using a protocol, which is basically a request & response protocol over TCP, similar to other line-based protocols (SMTP, HTTP etc.).
The protocol has about 130 different request methods (e.g. login, user add, user update, log get, file info, files info, ...). These methods do not map so well onto the broad methods used in HTTP (GET, POST, PUT, ...); such broad methods would introduce some inconsistent twists on their actual meaning.
But the protocol methods can be grouped by type (e.g. user management, file management, session management, ...).
The current server-side implementation uses a Worker class with methods ReadRequest() (reads a request, consisting of a method plus parameter list), HandleRequest() (see below) and WriteResponse() (writes the response code and actual response data).
HandleRequest() calls a function for the actual request method, using a hash map from method name to a member function pointer to the actual handler.
Each handler is a plain member function, one per protocol method: each one validates its input parameters, does whatever it has to do, and sets the response code (success yes/no) and response data.
Example code:
class Worker {
    typedef bool (Worker::*CommandHandler)();
    typedef std::map<UTF8String, CommandHandler> CommandHandlerMap;

    // handlers will be initialized once,
    // e.g. m_CommandHandlers["login"] = &Worker::Handle_LOGIN;
    static CommandHandlerMap m_CommandHandlers;

    bool HandleRequest() {
        CommandHandlerMap::const_iterator ihandler;
        if( (ihandler = m_CommandHandlers.find(m_CurRequest.instruction)) != m_CommandHandlers.end() ) {
            // call the actual handler
            return (this->*(ihandler->second))();
        }
        // error case:
        m_CurResponse.success = false;
        m_CurResponse.info = "unknown or invalid instruction";
        return true;
    }

    //...

    bool Handle_LOGIN() {
        const UTF8String username = m_CurRequest.parameters["username"];
        const UTF8String password = m_CurRequest.parameters["password"];
        // ....
        if( success ) {
            // initialize some state...
            m_Session.Init(...);
            m_LogHandle.Init(...);
            m_AuthHandle.Init(...);
            // set response data
            m_CurResponse.success = true;
            m_CurResponse.Write( "last_login", ... );
            m_CurResponse.Write( "whatever", ... );
        } else {
            m_CurResponse.Write( "error", "failed, because ..." );
        }
        return true;
    }
};
So. The problem is: My worker class now has about 130 "command handler methods". And each one needs access to:
request parameters
response object (to write response data)
different other session-local objects (like a database handle, a handle for authorization/permission queries, logging, handles to various sub-systems of the server etc.)
What is a good strategy for structuring those command handler methods better?
One idea was to have one class per command handler, initialized with references to the request, response objects etc. - but the overhead is IMHO not acceptable (it would add an indirection for every single access to anything the handler needs: request, response, session objects, ...). It could be acceptable if it provided an actual advantage. However, that doesn't sound very reasonable:
class HandlerBase {
protected:
    Request &request;
    Response &response;
    Session &session;
    DBHandle &db;
    FooHandle &foo;
    // ...
public:
    HandlerBase( Request &req, Response &rsp, Session &s, ... )
        : request(req), response(rsp), session(s), ...
    {}
    //...
    virtual bool Handle() = 0;
};

class LoginHandler : public HandlerBase {
public:
    LoginHandler( Request &req, Response &rsp, Session &s, ... )
        : HandlerBase(req, rsp, s, ...)
    {}
    //...
    virtual bool Handle() {
        // actual code for handling the "login" request ...
    }
};
Okay, HandlerBase could just take a reference (or pointer) to the worker object itself, instead of references to request, response etc. But that would add yet another indirection (this->worker->session instead of this->session). That indirection would be OK if it bought some advantage after all.
Some info about the overall architecture
The worker object represents a single worker thread for an actual TCP connection to some client. Each thread (so, each worker) needs its own database handle, authorization handle etc. These "handles" are per-thread objects that allow access to some subsystem of the server.
This whole architecture is based on a kind of dependency injection: e.g. to create a session object, one has to provide a "database handle" to the session constructor. The session object then uses this database handle to access the database. It never calls global code or uses singletons, so each thread can run undisturbed on its own.
But the cost is that - instead of just calling out to singleton objects - the worker and its command handlers must access all data and other code of the system through such thread-specific handles. Those handles define their execution context.
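For illustration, a minimal self-contained sketch of that injection style (UTF8String and DBHandle are stand-ins for the types used above; the signatures are assumptions):

#include <string>

struct DBHandle { /* per-thread database connection, as described above */ };
using UTF8String = std::string; // assumption for this sketch

// Per-thread handles are passed in explicitly, never pulled from globals.
class Session {
public:
    explicit Session(DBHandle &db) : m_db(db) {} // database handle injected

    void Init(const UTF8String &user) {
        // uses m_db to load session state; no global or singleton access
    }

private:
    DBHandle &m_db; // owned by the worker thread, shared only within it
};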
Summary & Clarification: My actual question
I am searching for an elegant alternative to the current solution (a worker object with a huge list of handler methods): it should be maintainable, have low overhead and not require too much glue code. Additionally, it MUST still allow each single method control over very different aspects of its execution (that means: if a method "super flurry foo" wants to fail whenever there is a full moon, then it must be possible for that implementation to do so). It also means that I do not want any kind of entity abstraction (create/read/update/delete XFoo-type) at this architectural layer of my code (it exists at different layers in my code). This architectural layer is pure protocol, nothing else.
In the end, it will surely be a compromise, but I am interested in any ideas!
The AAA bonus: a solution with interchangeable protocol implementations (instead of just the current Worker class, which is responsible for both parsing requests and writing responses). There could perhaps be an interchangeable ProtocolSyntax class that handles those protocol syntax details but still uses our new shiny structured command handlers.
You've already got most of the right ideas; here's how I would proceed.
Let's start with your second question: interchangeable protocols. If you have generic request and response objects, you can have an interface that reads requests and writes responses:
class Protocol {
public:
    virtual ~Protocol() {}
    virtual Request *readRequest() = 0;
    virtual void writeResponse(Response *response) = 0;
};
and you could have an implementation called HttpProtocol for example.
As for your command handlers, "one class per command handler" is the right approach:
class Command {
public:
    virtual ~Command() {}
    virtual void execute(Request *request, Response *response, Session *session) = 0;
};
Note that I rolled up all the common session handles (DB, Foo etc.) into a single object instead of passing around a whole bunch of parameters. Also making these method parameters instead of constructor arguments means you only need one instance of each command.
Next, you would have a CommandFactory which contains the map of command names to command objects:
class CommandFactory {
    std::map<UTF8String, Command *> handlers;
public:
    Command *getCommand(const UTF8String &name) {
        return handlers[name]; // note: operator[] inserts a null entry for unknown names
    }
};
If you've done all this, the Worker becomes extremely thin and simply coordinates everything:
class Worker {
    Protocol *protocol;
    CommandFactory *commandFactory;
    Session *session;
public:
    void handleRequest() {
        Request *request = protocol->readRequest();
        Response response;
        Command *command = commandFactory->getCommand(request->getCommandName());
        command->execute(request, &response, session);
        protocol->writeResponse(&response);
    }
};
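To show the shape of the whole design in one place, here is a self-contained, simplified sketch of the dispatch nucleus described above (the concrete types are illustrative stand-ins, not the original code):

#include <map>
#include <memory>
#include <string>

struct Request  { std::string commandName; };
struct Response { bool success = false; std::string info; };
struct Session  { /* DB handle, auth handle, ... */ };

class Command {
public:
    virtual ~Command() = default;
    virtual void execute(Request &req, Response &res, Session &session) = 0;
};

class LoginCommand : public Command {
public:
    void execute(Request &req, Response &res, Session &session) override {
        // validate parameters, use the session handles, fill in the response
        res.success = true;
    }
};

class CommandFactory {
    std::map<std::string, std::unique_ptr<Command>> handlers;
public:
    void add(const std::string &name, std::unique_ptr<Command> cmd) {
        handlers[name] = std::move(cmd);
    }
    Command *get(const std::string &name) {
        auto it = handlers.find(name); // avoids operator[] inserting nulls
        return it != handlers.end() ? it->second.get() : nullptr;
    }
};

int main() {
    CommandFactory factory;
    factory.add("login", std::make_unique<LoginCommand>());

    Session session;
    Request req{"login"};
    Response res;
    if (Command *cmd = factory.get(req.commandName))
        cmd->execute(req, res, session);
    return res.success ? 0 : 1;
}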
If it were me, I would probably use a hybrid of the two solutions in your question.
Have a handler base class that can handle multiple related commands, and let your main "dispatch" class probe for supported commands. For the glue, you simply need to tell the dispatch class about each handler class.
class HandlerDispatch; // defined below

class HandlerBase
{
public:
    typedef bool (HandlerBase::*CommandHandler)(Request &, Response &);
    typedef std::map<UTF8String, CommandHandler> CommandHandlerMap;

    HandlerBase(HandlerDispatch & dispatch) : m_dispatch(dispatch) {}
    virtual ~HandlerBase();

    bool CommandSupported(const UTF8String & cmdName);
    virtual bool HandleCommand(const UTF8String & cmdName, Request & req, Response & res);
    // Called from the derived constructor, not from the base constructor:
    // virtual dispatch is disabled while the base class is being constructed.
    virtual void PopulateCommands() = 0;

protected:
    CommandHandlerMap m_CommandHandlers;
    HandlerDispatch & m_dispatch;
};

class AuthenticationHandler : public HandlerBase
{
public:
    AuthenticationHandler(HandlerDispatch & dispatch) : HandlerBase(dispatch) {
        PopulateCommands();
    }
    bool HandleCommand(const UTF8String & cmdName, Request & req, Response & res) {
        CommandHandlerMap::const_iterator ihandler;
        if( (ihandler = m_CommandHandlers.find(req.instruction)) != m_CommandHandlers.end() ) {
            // call actual handler
            return (this->*(ihandler->second))(req, res);
        }
        // error case:
        res.success = false;
        res.info = "unknown or invalid instruction";
        return true;
    }
    void PopulateCommands() {
        // pointers to derived members need an explicit cast to the base type
        m_CommandHandlers["login"]  = static_cast<CommandHandler>(&AuthenticationHandler::Handle_LOGIN);
        m_CommandHandlers["logout"] = static_cast<CommandHandler>(&AuthenticationHandler::Handle_LOGOUT);
    }
    bool Handle_LOGIN(Request & req, Response & res) {
        Session & session = m_dispatch.GetSessionForRequest(req);
        // ...
        return true;
    }
    bool Handle_LOGOUT(Request & req, Response & res); // analogous
};

class HandlerDispatch
{
public:
    HandlerDispatch();
    virtual ~HandlerDispatch() {
        // delete all handlers
    }
    void AddHandler(HandlerBase * pHandler);
    bool HandleRequest() {
        std::vector<HandlerBase *>::iterator i;
        for ( i = m_handlers.begin(); i != m_handlers.end(); ++i ) {
            if ((*i)->CommandSupported(m_CurRequest.instruction)) {
                return (*i)->HandleCommand(m_CurRequest.instruction, m_CurRequest, m_CurResponse);
            }
        }
        // error case:
        m_CurResponse.success = false;
        m_CurResponse.info = "unknown or invalid instruction";
        return true;
    }
protected:
    std::vector<HandlerBase*> m_handlers;
};
And then to glue it all together you would do something like this:
// Init
m_handlerDispatch.AddHandler(new AuthenticationHandler(m_handlerDispatch));
As for the transport-specific (TCP) part, did you have a look at the ZMQ library, which supports various distributed computing patterns via messaging sockets/queues? IMHO you should find an appropriate pattern serving your needs in their Guide document.
For the choice of protocol message implementation, I would personally favor Google Protocol Buffers, which works very well with C++; we have been using it for a couple of projects now.
At the least you'll boil things down to dispatcher and handler implementations for specific requests and their parameters, plus the necessary return parameters. Google protobuf message extensions allow you to do this in a generic way.
EDIT:
To get a bit more concrete: using protobuf messages, the main difference between this dispatcher model and yours is that you don't need to parse the complete message before dispatch. Instead, you register handlers that tell you themselves whether they can handle a particular message, based on the message's extensions. The (main) dispatcher class doesn't need to know about the concrete extensions to handle; it just asks the registered handler classes. You can easily extend this mechanism with sub-dispatchers to cover deeper message category hierarchies.
Because the protobuf compiler can already see your messaging data model completely, you don't need any kind of reflection or dynamic class polymorphism tests to figure out the concrete message content. Your C++ code can statically ask for possible extensions of a message and won't compile if such an extension doesn't exist.
I don't know how to explain this in a better way, or how to show a concrete example of improving your existing code with this approach. I'm afraid you have already spent some effort on the de-/serialization code for your message formats that could have been avoided by using Google protobuf messages (or what kind of classes are Request and Response?).
The ZMQ library might help to implement your Session context and dispatch requests through the infrastructure.
Certainly you shouldn't end up with a single interface that handles all kinds of possible requests, but rather with a number of interfaces that specialize on message categories (extension points).
I think this is an ideal case for a REST-like implementation. Another way could be grouping the handler methods by category (or any other criteria) into several worker classes.
If the protocol methods can only be grouped by type, and methods of the same group have nothing in common in their implementation, then possibly the only thing you can do to improve maintainability is to distribute the methods between different files, one file per group.
But it is very likely that methods of the same group have some of the following common features:
There may be some data fields in the Worker class that are used by only one group of methods, or by several (but not all) groups. For example, m_AuthHandle may be used only by the user management and session management methods.
There may be some groups of input parameters, used by every method of some group.
There may be some common data, written to the response by every method of some group.
There may be some common methods, called by several methods of some group.
If some of these facts are true, there is a good reason to group these features into different classes: not one class per command handler, but one class per method group. Or, if there are features common to several groups, a hierarchy of classes.
It may be convenient to group instances of all these group classes in one place:
class UserManagement : public IManagement { ... };
class FileManagement : public IManagement { ... };
class SessionManagement : public IManagement { ... };

struct Handlers {
    smartptr<IManagement> userManagement;
    smartptr<IManagement> fileManagement;
    smartptr<IManagement> sessionManagement;
    ...
    Handlers() :
        userManagement(new UserManagement),
        fileManagement(new FileManagement),
        sessionManagement(new SessionManagement),
        ...
    {}
};
Instead of new SomeClass, some template like make_unique may be used. Or, if "interchangeable protocol implementations" are needed, one possibility is to use factories instead of some (or all) of the new SomeClass expressions.
The m_CommandHandlers.find() lookup should be split into two map searches: one to find the appropriate group object in this structure, the other (in the appropriate implementation of IManagement) to find a member function pointer to the actual handler. A sketch of this two-level dispatch follows below.
In addition to finding a member function pointer, the HandleRequest method of an IManagement implementation may extract the parameters common to its method group and pass them to the handlers (one by one if there are just a few of them, or grouped in a structure if there are many).
An IManagement implementation may also contain a WriteCommonResponse method to simplify writing the response fields common to all handlers of the group.
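A brief self-contained sketch of that two-level lookup (with simplified stand-in types; only the IManagement name follows the snippet above):

#include <map>
#include <memory>
#include <string>

struct Request  { std::string instruction; };
struct Response { bool success = false; };

// First level: route to a group object; second level: route within the group.
class IManagement {
public:
    virtual ~IManagement() = default;
    virtual bool HandleRequest(const Request &req, Response &res) = 0;
};

class UserManagement : public IManagement {
    typedef bool (UserManagement::*Handler)(const Request &, Response &);
    std::map<std::string, Handler> m_handlers; // second map search
public:
    UserManagement() {
        m_handlers["user_add"] = &UserManagement::HandleUserAdd;
    }
    bool HandleRequest(const Request &req, Response &res) override {
        auto it = m_handlers.find(req.instruction);
        if (it == m_handlers.end()) return false;
        // extraction of the group's common parameters could happen here
        return (this->*(it->second))(req, res);
    }
private:
    bool HandleUserAdd(const Request &req, Response &res) {
        res.success = true; // actual user-add logic elided
        return true;
    }
};

int main() {
    // first map search: group name (or instruction prefix) -> group object
    std::map<std::string, std::unique_ptr<IManagement>> groups;
    groups["user"] = std::make_unique<UserManagement>();

    Request req{"user_add"};
    Response res;
    groups["user"]->HandleRequest(req, res);
    return res.success ? 0 : 1;
}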
The Command pattern is your solution to both aspects of this problem.
Use it to implement your protocol handler with a generalized IProtocol interface (and/or abstract base class), and different protocol handler implementations, with a different class specialized for each protocol.
Then implement your commands the same way, with an ICommand interface and each command method implemented in a separate class. You are nearly there with this; split your existing methods into new specialized classes.
Wrap your requests and responses as Memento objects.