C++ Interface for logger

I am working on a big legacy project and need to redo the common logger.
I tried to keep the same logger interface as before, to avoid changing a ton of logging call sites.
The reason I need to redo the logger is that the old one sent syslog over UDP using built-in library functions, while the new one sends GELF over UDP.
Suppose I have a log message built from two parts, with severity info. The old interface looks like this:
Log_INFO << "First part message" << "Second part message" << endl;
Log_INFO works like 'std::cout', but it has two functions:
Print the message on the command line.
Collect it in Graylog.
My new function looks like this:
// Severity = {debug, info, warning, error, critical}
Log(Severity, whole_message)
For the same example:
Log("info", first_part_message + second_part_message)
My question is: how can I make my function accept messages the way the old stream-style interface did?

One common way of doing this is to create a custom streambuf-derived class, say LogStreambuf, and an ostream-derived class, say LogStream, that uses LogStreambuf (but is otherwise a plain-jane ostream).
Then your log objects would be
LogStream Log_INFO("info");
LogStream Log_WARN("warn");
etc.
Your custom streambuf probably should call your Log function from its sync method.
See e.g. this for an example, and this for further guidance.
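For example, a minimal sketch of that approach (the Log() body here is a stand-in; the real one would send GELF over UDP as described in the question):

#include <iostream>
#include <sstream>
#include <string>

// Stand-in for the new function (the real one also sends GELF UDP)
void Log(const std::string& severity, const std::string& whole_message)
{
    std::cout << "[" << severity << "] " << whole_message;
}

// Buffers characters and forwards the collected message to Log()
// whenever the stream is flushed (std::endl does exactly that)
class LogStreambuf : public std::stringbuf
{
public:
    explicit LogStreambuf(std::string severity)
        : severity_(std::move(severity)) {}

    int sync() override
    {
        if (!str().empty()) {
            Log(severity_, str()); // hand the whole message over
            str("");               // reset for the next message
        }
        return 0;
    }

private:
    std::string severity_;
};

// Otherwise a plain-jane ostream that owns a LogStreambuf
class LogStream : public std::ostream
{
public:
    explicit LogStream(const std::string& severity)
        : std::ostream(nullptr), buf_(severity)
    {
        rdbuf(&buf_); // attach after buf_ is fully constructed
    }

private:
    LogStreambuf buf_;
};

LogStream Log_INFO("info");
LogStream Log_WARN("warning");

int main()
{
    // Unchanged legacy call sites keep working:
    Log_INFO << "First part message" << "Second part message" << std::endl;
    return 0;
}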

Related

boost::log - Using independent severity levels inside a library/plugin

This is a kind of follow-up to another question I asked (here), where I was made aware that using the same backend with multiple sinks is not a safe approach.
What I am trying to obtain is to "decouple" the severity levels inside a library/plugin from the applications using it, while being able to write the different logs to the same output (be it stdout or, more likely, a file or a remote logger), for the following reasons:
I wouldn't like to tie the severity levels inside my library/plugin to those of the applications using it, because a change (for whatever reason) to the severity-level list in one of those applications would force the library to be "updated" with the new severities and, as a waterfall, all the other applications which use the library/plugin.
I would like to be able to use library-specific severity levels (which, to be correctly displayed in the log messages, should be supplied to the sink formatter - thus my need of using different sinks).
What is the best way to obtain this?
Some afterthoughts: as per Andrey's reply to my previous question, the "problem" is that the backend is not synchronized to receive data from multiple sources (sinks); thus the solution might seem to be to create a synchronized version of the backends (e.g. wrapping the writes to the backend in a boost::asio post)...
Is this the only solution?
Edit/Update
I am updating the question after Andrey's awesome reply, mainly for the sake of completeness: the libraries/plugins are meant to be used with internally developed applications only, so it is assumed that there will be a common API we can shape to define the log structure and behaviour.
Plus, most applications are meant to run mainly "unmanned", i.e. with really minimal, if not null, user/runtime interaction, so the basic idea is to have the log level set in some plugin-specific configuration file, read at startup (or reloaded upon a specific application API command).
First, I'd like to address this premise:
which, to be correctly displayed in the log messages, should be supplied to the sink formatter - thus my need of using different sinks
You don't need different sinks to be able to filter or format different types of severity levels. Your filters and formatters have to deal with that, not the sink itself. Only create multiple sinks if you need multiple log targets. So to answer your question, you should focus on the protocol of setting up filters and formatters rather than sinks.
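For illustration, here is a minimal sketch of a single sink that both filters and formats on an assumed integer "Severity" attribute - no extra sinks needed:

#include <iostream>
#include <boost/core/null_deleter.hpp>
#include <boost/log/core.hpp>
#include <boost/log/expressions.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/log/sinks/text_ostream_backend.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>

namespace logging = boost::log;
namespace sinks = boost::log::sinks;
namespace expr = boost::log::expressions;

int main()
{
    // One sink, one target; severity handling lives in the filter
    // and formatter, not in extra sinks.
    auto backend = boost::make_shared< sinks::text_ostream_backend >();
    backend->add_stream(
        boost::shared_ptr< std::ostream >(&std::clog, boost::null_deleter()));

    typedef sinks::synchronous_sink< sinks::text_ostream_backend > sink_t;
    auto sink = boost::make_shared< sink_t >(backend);

    // Filtering by severity: no second sink required
    sink->set_filter(expr::attr< int >("Severity") >= 2);

    // Formatting the severity: again per-record, same sink
    sink->set_formatter(
        expr::stream << "[" << expr::attr< int >("Severity") << "] "
                     << expr::smessage);

    logging::core::get()->add_sink(sink);
    return 0;
}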
The exact way to do that is difficult to suggest because you didn't specify the design of your application/plugin system. What I mean is that there must be some common API shared by both the application and the libraries, and the way you set up logging will depend on where that API belongs. Severity levels, among other things, must be a part of that API. For example:
If you're writing plugins for a specific application (e.g. plugins for a media player), then the application is the one that defines the plugin API, including the severity levels and possibly even the attribute names the plugins must use. The application configures sinks, including filters and formatters, using the attributes mandated by the API, and plugins never do any configuration and only emit log records. Note that the API may include some attributes that allow distinguishing plugins from each other (e.g. a channel name), which would allow the application to process logs from different plugins differently (e.g. write them to different files).
If you're writing both plugins and application(s) to adhere to some common API, possibly defined by a third party, then the logging protocol must still be defined by that API. If it's not, then you cannot assume that any other application or plugin not written by you supports logging of any kind, or even that it uses Boost.Log at all. In this case every plugin and the application itself must deal with logging independently, which is the worst-case scenario, because the plugins and the application may affect each other in unpredictable ways. It is also difficult to manage a system like that, because every component has to be configured separately by the user.
If you're writing an application that must be compatible with multiple libraries, each having its own API, then it is the application that should be aware of the logging conventions used in each and every library it uses; there's no way around it. This may include setting up callbacks in the libraries, intercepting file output and translating between the libraries' log severity levels and the application's severity levels. If the libraries use Boost.Log to emit log records, then they should document the attributes they use, including the severity levels, so that the application is able to set up the logging properly.
So, in order to take one approach or the other, you should first decide how your application and plugins interface with each other, what API they share, and how that API defines logging. The best-case scenario is when you define the API, because then you can also set the logging conventions you want. In that case, although possible, it is not advisable or typical to allow arbitrary severity levels in the API, because doing so significantly complicates implementation and configuration of the system.
However, just in case, if for some reason you do need to support arbitrary severity levels and there's no way around that, you can define an API for the library to provide, which can help the application to set up filters and formatters. For example, each plugin can provide an API like this:
// Returns the filter that the plugin wishes to use for its records
boost::log::filter get_filter();

// The function extracts log severity from the log record
// and converts it to a string
typedef std::function<
    std::string(boost::log::record_view const&)
> severity_formatter;

// Returns the severity formatter, specific to the plugin
severity_formatter get_severity_formatter();
Then the application can use a special filter that will make use of this API.
struct plugin_filters
{
    std::shared_mutex mutex;
    // Plugin-specific filters
    std::vector< boost::log::filter > filters;
};

// Custom filter
bool check_plugin_filters(
    boost::log::attribute_value_set const& values,
    std::shared_ptr< plugin_filters > const& p)
{
    // Filters can be called in parallel, we need to synchronize
    std::shared_lock< std::shared_mutex > lock(p->mutex);
    for (auto const& f : p->filters)
    {
        // Call each of the plugin's filters and pass the record
        // if any of the filters passes
        if (f(values))
            return true;
    }
    // Suppress the record by default
    return false;
}

std::shared_ptr< plugin_filters > pf = std::make_shared< plugin_filters >();

// Set the filter
sink->set_filter(std::bind(&check_plugin_filters, std::placeholders::_1, pf));

// Add filters from plugins
std::unique_lock< std::shared_mutex > lock(pf->mutex);
pf->filters.push_back(plugin1->get_filter());
pf->filters.push_back(plugin2->get_filter());
...
And a similar formatter:
struct plugin_formatters
{
    std::shared_mutex mutex;
    // Plugin-specific severity formatters
    std::vector< severity_formatter > severity_formatters;
};

// Custom severity formatter
std::string plugin_severity_formatter(
    boost::log::record_view const& rec,
    std::shared_ptr< plugin_formatters > const& p)
{
    std::shared_lock< std::shared_mutex > lock(p->mutex);
    for (auto const& f : p->severity_formatters)
    {
        // Call each of the plugin's formatters and return the result
        // if any of the formatters is able to extract the severity
        std::string str = f(rec);
        if (!str.empty())
            return str;
    }
    // By default return an empty string
    return std::string();
}

std::shared_ptr< plugin_formatters > pf =
    std::make_shared< plugin_formatters >();

// Set the formatter
sink->set_formatter(
    boost::log::expressions::stream << "["
        << boost::phoenix::bind(&plugin_severity_formatter,
            boost::log::expressions::record, pf)
        << "] " << boost::log::expressions::message);

// Add formatters from plugins
std::unique_lock< std::shared_mutex > lock(pf->mutex);
pf->severity_formatters.push_back(plugin1->get_severity_formatter());
pf->severity_formatters.push_back(plugin2->get_severity_formatter());
...
Note, however, that at least with regard to filters, this approach is flawed, because you allow the plugins to define the filters. Normally, it should be the application that selects which records are being logged. And for that there must be a way to translate library-specific severity levels to some common levels, probably defined by the application.
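To illustrate that last point, a translation hook could look like this sketch (the common_severity scale and the integer "Severity" attribute are assumptions, not part of the question):

#include <boost/log/attributes/value_extraction.hpp>
#include <boost/log/core/record_view.hpp>

// Hypothetical common scale defined by the application
enum class common_severity { debug, info, warning, error, critical };

// Sketch of a hook each plugin could export: map the plugin's
// integer "Severity" attribute onto the application's scale
common_severity translate_severity(boost::log::record_view const& rec)
{
    auto sev = boost::log::extract< int >("Severity", rec);
    if (!sev)
        return common_severity::info; // sensible default when missing
    int v = sev.get();
    if (v >= 4) return common_severity::critical;
    if (v == 3) return common_severity::error;
    if (v == 2) return common_severity::warning;
    if (v == 1) return common_severity::info;
    return common_severity::debug;
}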

C++ - Logging statements according to the level

I have the following statement:
static Logging::Logger* common_logger = new Logging::Logger(Logging::Logger::LEVEL);
In Logger.h I have:
class Logger {
public:
    enum LEVEL {
        Debug,
        Warning,
        Notification,
        Error
    };
};
I have included the file Logger.h inside another class as:
Logging::log(CustomDialog::logger, Logging::Entry, CustomDialog::CLASSNAME, "CustomDialog");
I need to know if this is the right way to do it. The reason why I am doing this is to get logs based upon the level.
Regards,
Take a look at Log4cxx - it's easy to use and contains just about every feature you might want in a logging framework for C++. It's extensible, it can be configured through configuration files, and it even supports remote logging out of the box.
You can use ACE_DEBUG. It seems old-school (à la printf) but it is thread-safe, reliable and fully configurable (use log files, stdout etc.). You'll have to link against libACE (the Adaptive Communication Environment) of course, but its development packages are easily available by default in many Linux distros nowadays. I've been looking over the list from that C++ logging libraries post mentioned by Als, but it seems most people are running into memory leaks with many of the frameworks, and boost::Log is not out yet.
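Basic ACE_DEBUG usage looks something like this (a minimal sketch):

#include "ace/Log_Msg.h"

int ACE_TMAIN(int argc, ACE_TCHAR* argv[])
{
    // Each ACE_DEBUG call is emitted as one atomic record, so
    // concurrent threads cannot interleave within a single message.
    ACE_DEBUG((LM_DEBUG, ACE_TEXT("[thread %t] Debug message goes here\n")));
    ACE_DEBUG((LM_INFO,  ACE_TEXT("[thread %t] Info message goes here\n")));
    return 0;
}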
Another point is that most logging libraries use streams, for example like this:
// from thread 1
mlog(mlog::DEBUG) << "Debug message goes here" << mlog::endl;
// from thread 2
mlog(mlog::INFO) << "Info message goes here" << mlog::endl;
will not work as expected in a multithreaded environment, while ACE will perform correctly there.
The output of the above will look something like this:
[thread1 | 12:04.23] Debug me[thread2 | 12:04.24] Info message goesssage goes herehere

How to dynamically build a new protobuf from a set of already defined descriptors?

At my server, we receive Self Described Messages (as defined here... which, by the way, wasn't all that easy, as there aren't any 'good' examples of this in C++).
At this point I am having no issue creating messages from these self-described ones. I can take the FileDescriptorSet, go through each FileDescriptorProto, adding each to a DescriptorPool (using BuildFile, which also gives me every defined FileDescriptor).
From here I can create any of the messages which were defined in the FileDescriptorSet with a DynamicMessageFactory instanced with the DescriptorPool and a call to GetPrototype (which is very easy to do, as our SelfDescribedMessage carries the message's full_name(), so we can call the FindMessageTypeByName method of the DescriptorPool, giving us the properly encoded message prototype).
The question is how I can take each already defined Descriptor or message and dynamically BUILD a 'master' message that contains all of the defined messages as nested messages. This would primarily be used for saving the current state of the messages. Currently we're handling this by just instancing a copy of each message in the server (to keep a central state across different programs). But when we want to 'save off' the current state, we're forced to stream the messages to disk as defined here. They're streamed one message at a time (with a size prefix). We'd like to have ONE message (one to rule them all) instead of the steady stream of separate messages. This could be used for other things once it is worked out (network-based shared state with optimized and easy serialization).
Since we already have the cross-linked and defined Descriptors, one would think there would be an easy way to build 'new' messages from the already defined ones. So far the solution has eluded us. We've tried creating our own DescriptorProto and adding new fields of the types from our already defined Descriptors, but got lost (we haven't deep-dived into this one yet). We've also looked at possibly adding them as extensions (unknown at this time how to do so). Do we need to create our own DescriptorDatabase (also unknown at this time how to do so)?
Any insights?
Linked example source on BitBucket.
Hopefully this explanation will help.
I am attempting to dynamically build a Message from a set of already defined Messages. The set of already defined messages is created by using the "self-described" method explained (briefly) in the official C++ protobuf tutorial (i.e. these messages are not available in compiled form). The newly defined message will need to be created at runtime.
I have tried using the straight Descriptors for each message and attempted to build a FileDescriptorProto. I have tried looking at the DescriptorDatabase methods. Both with no luck. I am currently attempting to add these defined messages as an extension to another message (even though at compile time those defined messages, and their 'descriptor set', were not declared as extending anything), which is where the example code starts.
You need a protobuf::DynamicMessageFactory:
{
    using namespace google;
    protobuf::DynamicMessageFactory dmf;
    protobuf::Message* actual_msg = dmf.GetPrototype(some_desc)->New();
    const protobuf::Reflection* refl = actual_msg->GetReflection();
    const protobuf::FieldDescriptor* fd = some_desc->FindFieldByName("someField");
    refl->SetString(actual_msg, fd, "whee");
    ...
    cout << actual_msg->DebugString() << endl;
}
I was able to solve this problem by dynamically creating a .proto file and loading it with an Importer.
The only requirement is for each client to send across its proto file (only needed at init... not during full execution). The server then saves each proto file to a temp directory. An alternative, if possible, is to just point the server to a central location that holds all of the needed proto files.
This was done by first using a DiskSourceTree to map actual path locations to in-program virtual ones, then building a .proto file that imports every proto file that was sent across AND defines an optional field in a 'master message' for each.
After the master.proto has been saved to disk, I import it with the Importer. Now, using the Importer's DescriptorPool and a DynamicMessageFactory, I'm able to reliably generate the whole thing under one message. I will be putting an example of what I am describing up later on tonight or tomorrow.
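In the meantime, here is a rough sketch of the pieces involved (the /tmp/protos directory and the "MasterMessage" name are hypothetical; note that very recent protobuf versions renamed AddError to RecordError):

#include <iostream>
#include <memory>
#include <google/protobuf/compiler/importer.h>
#include <google/protobuf/dynamic_message.h>

using namespace google::protobuf;

// Minimal error collector required by the Importer
class StderrErrorCollector : public compiler::MultiFileErrorCollector {
    void AddError(const std::string& filename, int line, int column,
                  const std::string& message) override {
        std::cerr << filename << ":" << line << ":" << column
                  << ": " << message << std::endl;
    }
};

int main()
{
    // Map the virtual root onto the temp directory holding the
    // client .proto files plus the generated master.proto
    compiler::DiskSourceTree source_tree;
    source_tree.MapPath("", "/tmp/protos"); // hypothetical path

    StderrErrorCollector errors;
    compiler::Importer importer(&source_tree, &errors);

    // Parses master.proto and, transitively, everything it imports
    const FileDescriptor* file = importer.Import("master.proto");
    if (!file) return 1;

    // Build the 'one message to rule them all' dynamically
    DynamicMessageFactory factory(importer.pool());
    const Descriptor* desc =
        importer.pool()->FindMessageTypeByName("MasterMessage");
    if (!desc) return 1;
    std::unique_ptr<Message> master(factory.GetPrototype(desc)->New());

    std::cout << master->DebugString() << std::endl;
    return 0;
}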
If anyone has any suggestions on how to make this process better or how to do it different, please say so.
I will be leaving this question unanswered up until the bounty is about to expire just in case someone else has a better solution.
What about serializing all the messages into strings, and making the master message a sequence of (byte) strings, a la
message MessageSet
{
    required FileDescriptorSet proto_files = 1;
    repeated bytes serialized_sub_message = 2;
}
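As a sketch of how that could be used (assuming the MessageSet above is compiled with protoc; the generated header name, output file name and helper are hypothetical):

#include <fstream>
#include <vector>
#include <google/protobuf/descriptor.pb.h>
#include <google/protobuf/message.h>
#include "message_set.pb.h" // hypothetical: generated from MessageSet above

void save_state(const google::protobuf::FileDescriptorSet& schemas,
                const std::vector<const google::protobuf::Message*>& live_messages)
{
    MessageSet set;
    *set.mutable_proto_files() = schemas; // ship the schemas with the data
    for (const auto* m : live_messages)
        set.add_serialized_sub_message(m->SerializeAsString());

    // One message to rule them all, written in one shot
    std::ofstream out("state.bin", std::ios::binary);
    set.SerializeToOstream(&out);
}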

Instantiating a Qt File-Based Logger for Debugging in a C++ Library

The following page provides a nice, simple solution for file-based logging in Qt for debugging, without using a larger logging framework like the many that are suggested in other SO questions.
I'm writing a library and would like to instantiate a logger that the classes in the library can use (mostly for debugging purposes). There is no int main() function, since it's a library. So would the best approach be to add the instantiation to a file like logger.h, and have any class that would like to do qDebug() << PREFIX << "Bla" include logger.h, as the link above suggests?
I pretty much agree with OrcunC but I'd recommend making that ofstream a little more accessible and capable of handling the Qt value types.
Here's my recommended process:
Create a global QIODevice to which everything will be written. This will probably be a QFile.
Create a QTextStream wrapper around that QIODevice that you'll then use for all the logging.
If you want something slightly more complicated, create methods that do the filtering based on log level info.
For example:
// setup the global logger somewhere appropriate
QFile *file = new QFile("your.log");
file->open(QIODevice::WriteOnly | QIODevice::Append);
QTextStream *qlogger = new QTextStream(file);
And once the global logger is initialized, you could reference it as a global:
#include "qlogger.h"
//... and within some method
*qlogger << "your log" << aQtValueType;
But you might want some filtering:
#include "qlogger.h"
// lower number = higher priority
void setCurrentLogLevel(int level) {
globalLogLevel = level;
}
QTextStream* qLog(int level) {
if (level <= globalLogLevel) {
return qlogger;
}
return getNullLogger(); // implementation left to reader
}
And then you'd likely create an enum that represented the LogLevel and do something like this:
#include "qlogger.h"
//...
setCurrentLogLevel(LogLevel::Warning);
*qLog(LogLevel::Debug) << "this will be filtered" << yourQMap;
*qLog(LogLevel::Critical) << "not filtered" << yourQString;
As you'd be dealing with globals, carefully consider memory management issues.
If you follow the method in that link, ALL messages the application outputs with qCritical(), qDebug(), qFatal() and qWarning() will flow into your handler.
So be careful! You may get not only your library's trace messages but the entire Qt framework's messages. I guess this is not what you really want.
Instead, as a simple solution, define a global ofstream in your library and use it only within your library.
Whenever you write a library in C++ or C, it is best practice to declare all your methods in a .h file and define the methods/classes in a .cpp/.c file. This serves two purposes:
The .h file is used to compile a third-party application that uses your library, while the library itself is used at link time.
The developer who is using your library can use the .h file as a reference to your library, since it contains all the declarations.
So yes, you need to declare the methods in a .h file and have other classes include logger.h.

How to replace WinAPI function calls in an MS VC++ project with my own implementations (names and parameter sets are the same)?

I need to replace all WinAPI calls to
CreateFile,
ReadFile,
SetFilePointer,
CloseHandle
with my own implementations (which use low-level file reading via Bluetooth).
The code in which the functions will be replaced is a video file player, and it already works with regular HDD files.
It is also required that the video player can still play files from the HDD, if the file at the player's input is a regular HDD file.
What is the best practice for such a task?
I suggest that you follow these steps:
Write a set of wrapper functions, e.g. MyCreateFile, MyReadFile, etc., that initially just call the corresponding API and pass the same arguments along, unmodified.
Use your text editor to search for all calls to the original APIs, and replace these with calls to your new wrapper functions.
Test that the application still functions correctly.
Modify the wrapper functions to suit your own purposes.
Note that CreateFile is a macro which expands to either CreateFileW or CreateFileA, depending on whether UNICODE is defined. Consider using LPCTSTR and the TCHAR functions so that your application can be built as either ANSI or Unicode.
Please don't use #define, as suggested in other responses here, as this will just lead to maintenance problems, and as Maximilian correctly points out, it's not a best practice.
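For example, a sketch of step 1 for CreateFile (the parameter list follows the Win32 API; the routing comment marks where step 4's logic would go):

#include <windows.h>

// Hypothetical wrapper: initially a pure pass-through
HANDLE MyCreateFile(LPCTSTR lpFileName,
                    DWORD dwDesiredAccess,
                    DWORD dwShareMode,
                    LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                    DWORD dwCreationDisposition,
                    DWORD dwFlagsAndAttributes,
                    HANDLE hTemplateFile)
{
    // Step 4 would go here: inspect lpFileName and route Bluetooth
    // paths to the custom implementation, falling through to the
    // real API for regular HDD files.
    return ::CreateFile(lpFileName, dwDesiredAccess, dwShareMode,
                        lpSecurityAttributes, dwCreationDisposition,
                        dwFlagsAndAttributes, hTemplateFile);
}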
You could just write your new functions in a custom namespace. e.g.
namespace Bluetooth
{
    void CreateFile(/*params*/);
    // etc...
}
Then in your code, the only thing you would have to change is:
if (::CreateFile(...))
{
}
to
if (Bluetooth::CreateFile(...))
{
}
Easy! :)
If you're trying to intercept calls to these APIs from another application, consider Detours.
If you can edit the code, you should just re-write it to use a custom API that does what you want. Failing that, use Maximilian's technique, but be warned that it is a maintenance horror.
If you cannot edit the code, you can patch the import tables to redirect calls to your own code. A description of this technique can be found in this article - search for the section titled "Spying by altering of the Import Address Table".
This is dangerous, but if you're careful you can make it work. Also check out Microsoft Detours, which does the same sort of thing but doesn't require you to mess around with the actual patching.
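A minimal Detours sketch (the hook and install functions are illustrative; the Detour* calls are the library's actual API):

#include <windows.h>
#include <detours.h>

// Pointer to the real function; Detours redirects it to a trampoline
static HANDLE (WINAPI* TrueCreateFileW)(LPCWSTR, DWORD, DWORD,
    LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE) = CreateFileW;

// Hypothetical replacement: route special paths to Bluetooth here
HANDLE WINAPI HookedCreateFileW(LPCWSTR name, DWORD access, DWORD share,
    LPSECURITY_ATTRIBUTES sa, DWORD disposition, DWORD flags, HANDLE tmpl)
{
    return TrueCreateFileW(name, access, share, sa, disposition, flags, tmpl);
}

void InstallHooks()
{
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);
    DetourTransactionCommit();
}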
If you really want to hijack the API, look at syringe.dll (L-GPL).
I don't think this is best practice, but it should work if you put it in an include file that's included everywhere the function you want to change is called:
#define CreateFile MyCreateFile
HANDLE MyCreateFile(whatever the params are);
Implementation of MyCreateFile looks something like this:
#undef CreateFile
HANDLE MyCreateFile(NobodyCanRememberParamListsLikeThat params)
{
    if (InputIsNormalFile())
        return CreateFile(params);
    else
        // do your thing
}
You basically turn every CreateFile call into a MyCreateFile call, where you can decide whether you need to use your own implementation or the original one.
Disclaimer: I think doing this is ugly and I wouldn't do it. I'd rather search and replace all occurrences or something.