Both Java and .Net seem to have a wealth of object validation frameworks (e.g. Commons Validator, XWork, etc.), but I've been unable to find anything similar for C++. Has anyone come across something like this, or do people typically roll their own?
As of 2020, there is the cpp-validator library.
This is a C++14/C++17 header-only library that can be used to validate:
plain variables;
properties of objects, where a property can be accessed either as a member variable or via a getter method;
contents and properties of containers;
nested containers and objects.
Basic usage of the library involves two steps:
first, define a validator using a nearly declarative syntax;
then, apply the validator to the data that must be validated and check the result.
See example below.
// define validator
auto string_validator = validator(
    value(gte, "sample string"),
    size(lt, 15)
);

// validate variable
std::string var = "sample";
error_report err;
validate(var, string_validator, err);
if (err)
{
    std::cerr << err.message() << std::endl;
    /* prints:
       must be greater than or equal to "sample string"
    */
}
Some GUI frameworks have validators.
Check out wxWidgets Validators
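For example, a rough sketch of attaching one of wxWidgets' built-in validators (wxTextValidator) to a text control might look like the following; the helper function name is made up, and note that these validators check user input in GUI controls rather than arbitrary objects:
#include <wx/wx.h>
#include <wx/valtext.h>

// Attach a built-in numeric validator to a text control.
// 'parent' is whatever wxDialog/wxFrame the control lives in.
wxTextCtrl* CreateAgeField(wxWindow* parent)
{
    wxTextCtrl* ageCtrl = new wxTextCtrl(parent, wxID_ANY);
    ageCtrl->SetValidator(wxTextValidator(wxFILTER_NUMERIC));
    // The validator runs when the enclosing dialog calls
    // Validate() / TransferDataFromWindow(), e.g. when OK is pressed.
    return ageCtrl;
}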
I am working on a big legacy project and need to redo the common logger.
I am trying to keep the same logger interface as before, to avoid changing a ton of logging call sites.
The reason I need to redo the logger is that the old one sent syslog over UDP using built-in library functions, while the new one uses GELF over UDP.
Suppose I have a log message with two parts and severity info. The old interface looks like this:
Log_INFO<< "First part message" <<"Second part message"<< endl;
Log_INFO is like std::cout, but it does two things:
Print the message on the command line.
Collect it in Graylog.
My new function looks like this:
//Severity = {debug,info,warning, error, critical}
Log(Severity, whole_message)
For the same example,
Log("info",first_part_message+ second_part_message)
My question is: how can I make my new function accept log messages the way the old one did?
One common way of doing this is creating a custom streambuf-derived class, say LogStreambuf, and an ostream-derived class, say LogStream, that uses LogStreambuf (but is otherwise a plain jane ostream).
Then your log objects would be
LogStream Log_INFO("info");
LogStream Log_WARN("warn");
etc.
Your custom streambuf probably should call your Log function from its sync method.
See e.g. this for an example, and this for further guidance.
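As an illustration (not the exact code from the linked examples), a minimal sketch of such a streambuf/ostream pair might look like this, assuming a free Log(severity, whole_message) function as described in the question:
#include <iostream>
#include <sstream>
#include <string>

// Assumed to exist already: sends one whole message via GELF UDP.
void Log(const std::string& severity, const std::string& wholeMessage);

class LogStreambuf : public std::stringbuf
{
public:
    explicit LogStreambuf(std::string severity) : severity_(std::move(severity)) {}

protected:
    // Called when the stream is flushed (e.g. by std::endl):
    // forward the buffered text as one message, then clear the buffer.
    int sync() override
    {
        if (!str().empty())
        {
            Log(severity_, str());
            str("");
        }
        return 0;
    }

private:
    std::string severity_;
};

class LogStream : public std::ostream
{
public:
    explicit LogStream(const std::string& severity)
        : std::ostream(nullptr), buf_(severity)
    {
        rdbuf(&buf_);  // attach the buffer once it is constructed
    }

private:
    LogStreambuf buf_;
};

// Usage keeps the old call sites unchanged:
LogStream Log_INFO("info");
// Log_INFO << "First part message" << "Second part message" << std::endl;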
I'd like to understand how to transmit the contents of a C++ class between processes or across a network.
I'm reading the Google Protobuf tutorial:
https://developers.google.com/protocol-buffers/docs/cpptutorial
and it seems you must create an abstracted, non-C++ interface to represent your class:
syntax = "proto2";
package tutorial;
message Person {
optional string name = 1;
optional int32 id = 2;
optional string email = 3;
enum PhoneType {
MOBILE = 0;
HOME = 1;
WORK = 2;
}
}
However, I'd prefer to specify my class via C++ code (rather than the abstraction) and just add something like serialize() and deserialize() methods.
Is this possible with Google Protobuf? Or is this how Protobuf works and I'd need to use a different serialization technique?
UPDATE
The reason for this is I don't want to have to maintain two interfaces. I'd prefer to have one C++ class, update it and not have to worry about a second .proto interface/definition. Code maintainability.
That's how Protobuf works. You have to use something else if you want to serialize your manually-written C++ classes. However, I'm not sure you really want that, because you then will have to either restrict yourself to very simple fields with no invariants (just like in Protobuf) or write custom (de)serialization logic yourself.
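For a sense of what that alternative looks like, here is a rough, illustrative sketch of a hand-written class with its own serialize()/deserialize() methods (not related to Protobuf; the naive line-based format is only for demonstration):
#include <cstdint>
#include <istream>
#include <ostream>
#include <string>

// Illustrative only: a hand-maintained class that serializes itself.
// This naive format handles no versioning, endianness, escaping or
// invariants -- exactly the work Protobuf normally generates for you.
class Person {
public:
    void serialize(std::ostream& out) const {
        out << id_ << '\n' << name_ << '\n' << email_ << '\n';
    }

    bool deserialize(std::istream& in) {
        in >> id_;
        in.ignore();  // skip the newline after the id
        return std::getline(in, name_) && std::getline(in, email_);
    }

private:
    std::int32_t id_ = 0;
    std::string name_;
    std::string email_;
};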
You could make a simple protocol buffer to hold binary information, but that rather defeats the point of using protocol buffers.
You can sort of cheat the system by using SerializeToString() and ParseFromString() to simply serialize binary information into a string.
There is also SerializeToOstream() and ParseFromIstream().
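For reference, a rough sketch of round-tripping the generated tutorial::Person message through a string and a stream; it assumes the .proto above was compiled with protoc into a header (the file names here are assumptions):
#include <fstream>
#include <iostream>
#include <string>

#include "person.pb.h"  // assumed output of: protoc --cpp_out=. person.proto

int main()
{
    tutorial::Person person;
    person.set_name("Alice");
    person.set_id(42);
    person.set_email("alice@example.com");

    // Serialize to an in-memory string...
    std::string bytes;
    if (!person.SerializeToString(&bytes))
        return 1;

    // ...and parse it back into another message.
    tutorial::Person copy;
    if (!copy.ParseFromString(bytes))
        return 1;

    // The stream-based variants work the same way with files.
    std::ofstream out("person.bin", std::ios::binary);
    person.SerializeToOstream(&out);

    std::cout << copy.name() << " " << copy.id() << std::endl;
    return 0;
}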
The real value of protocol buffers is being able to use messages across programs, systems and languages while using a single definition. If you aren't making messages using the protocol they've defined, this is more work than simply using native C++ capabilities.
Let me elaborate on the title:
I want to implement a system that would allow me to enable/disable/modify the general behavior of my program. Here are some examples:
I could switch off and on logging
I could change if my graphing program should use floating or pixel coordinates
I could change if my calculations should be based upon some method or some other method
I could enable/disable certain aspects, like maybe an extension API
I could enable/disable some basic integrated profiler (if I had one)
These are some made-up examples.
Now I want to know what the most common solution for this sort of thing is.
I could imagine this working with some sort of singleton class that is instantiated globally or in some other globally available object. Another possibility would be just constexpr or other variables floating around in a namespace, again globally.
However, doing something like that globally feels like bad practice.
Second part of the question
This might sound like I can't decide what I want, but I want a way to modify all these switches/flags (or whatever they are actually called) in a single location, without tying any of my classes to it. I don't know if this is possible, however.
Why don't I want to do that? Well, I like to make my classes somewhat reusable, and I don't like tying classes together unless it's required by the DRY principle and/or inheritance. I basically couldn't get rid of the flags without modifying the possibly hundreds of classes that used them.
What I have tried in the past
Having it all as compiler defines. This worked reasonably well; however, I didn't like that I couldn't arrange for some sort of default settings to kick in if the flag file was gone, so that the classes themselves would still be operational and changeable (through those default values).
Having it as a class and instantiating it globally (a system class). This worked OK; however, I didn't like instantiating anything globally. Also, same problem as above.
Instantiating the system class locally and passing it to the classes on construction. This was kind of cool, since I could create multiple instruction sets. However, at the same time that rather defeated the point, since things that needed a flag set the same way could end up having it set differently and therefore fail to work together properly. Also, passing it on every construction was a pain.
A static class. This one worked OK for the longest time; however, there is still the problem of missing dependencies.
Summary
Basically I am looking for a single "place" where I can tweak some values (bools, floats, etc.) that will change the behaviour of all classes using them, where said values either override default values or are replaced by default values if said "place" isn't defined.
If a singleton class does not work for you, maybe using a DI container would fit your third approach? It may help with construction and make the code more testable.
There are some DI frameworks for C++, like https://github.com/google/fruit/wiki or https://github.com/boost-experimental/di, which you could use.
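For illustration, a rough sketch of what constructor injection through the second library ([Boost::ext].DI) could look like; the interface and class names are made up, and the exact header/API should be double-checked against that library's documentation:
#include <iostream>
#include <memory>

#include <boost/di.hpp>  // header from https://github.com/boost-experimental/di

namespace di = boost::di;

// Hypothetical settings interface the rest of the code depends on.
struct isettings {
    virtual ~isettings() = default;
    virtual bool logging_enabled() const = 0;
};

struct default_settings : isettings {
    bool logging_enabled() const override { return true; }
};

// A class that receives its settings via constructor injection
// instead of reaching for a global.
class grapher {
public:
    explicit grapher(std::shared_ptr<isettings> s) : settings_(std::move(s)) {}
    void draw() const {
        if (settings_->logging_enabled())
            std::cout << "drawing (logging on)\n";
    }
private:
    std::shared_ptr<isettings> settings_;
};

int main() {
    // The container decides, in one place, which implementation gets injected.
    auto injector = di::make_injector(
        di::bind<isettings>().to<default_settings>()
    );
    auto g = injector.create<grapher>();
    g.draw();
}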
If you decide to use switches/flags, pay attention to "cyclomatic complexity".
If you do not change the skeleton of your algorithm but only its behaviour according to the objects passed as parameters, have a look at the "template method" design pattern. It lets you define a generic algorithm and specialize particular steps for particular situations.
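For reference, a minimal sketch of that pattern in C++ (all names here are illustrative):
#include <iostream>

// The base class fixes the skeleton of the calculation;
// derived classes only override the step that varies.
class Calculation {
public:
    virtual ~Calculation() = default;

    // Template method: the invariant skeleton.
    double run(double input) const {
        double prepared = prepare(input);
        return compute(prepared);  // the varying step
    }

private:
    double prepare(double x) const { return x * 2.0; }
    virtual double compute(double x) const = 0;
};

class MethodA : public Calculation {
    double compute(double x) const override { return x + 1.0; }
};

class MethodB : public Calculation {
    double compute(double x) const override { return x * x; }
};

int main() {
    MethodA a;
    MethodB b;
    std::cout << a.run(3.0) << " " << b.run(3.0) << std::endl;  // prints 7 36
}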
Here's an approach I found useful; I don't know if it's what you're looking for, but maybe it will give you some ideas.
First, I created a BehaviorFlags.h file that declares the following function:
// Returns true iff the given feature/behavior flag was specified for us to use
bool IsBehaviorFlagEnabled(const char * flagName);
The idea being that any code in any of your classes could call this function to find out if a particular behavior should be enabled or not. For example, you might put this code at the top of your ExtensionsAPI.cpp file:
#include "BehaviorFlags.h"
static const bool enableExtensionsAPI = IsBehaviorFlagEnabled("enable_extensions_api");
[...]
void DoTheExtensionsAPIStuff()
{
if (enableExtensionsAPI == false) return;
[... otherwise do the extensions API stuff ...]
}
Note that the IsBehaviorFlagEnabled() call is only executed once at program startup, for best run-time efficiency; but you also have the option of calling IsBehaviorFlagEnabled() on every call to DoTheExtensionsAPIStuff(), if run-time efficiency is less important than being able to change your program's behavior without having to restart your program.
As far as how the IsBehaviorFlagEnabled() function itself is implemented, it looks something like this (simplified version for demonstration purposes):
bool IsBehaviorFlagEnabled(const char * flagName)
{
   // Note: a real implementation would find the user's home directory
   // using the proper API and not just rely on ~ to expand to the home-dir path
   std::string filePath = "~/MyProgram_Settings/";
   filePath += flagName;

   FILE * fpIn = fopen(filePath.c_str(), "r");  // i.e. does the file exist?
   const bool ret = (fpIn != NULL);
   if (fpIn) fclose(fpIn);
   return ret;
}
The idea being that if you want to change your program's behavior, you can do so by creating a file (or folder) in the ~/MyProgram_Settings directory with the appropriate name. E.g. if you want to enable your Extensions API, you could just do a
touch ~/MyProgram_Settings/enable_extensions_api
... and then re-start your program, and now IsBehaviorFlagEnabled("enable_extensions_api") returns true and so your Extensions API is enabled.
The benefits I see of doing it this way (as opposed to parsing a .ini file at startup or something like that) are:
There's no need to modify any "central header file" or "registry file" every time you add a new behavior-flag.
You don't have to put a ParseINIFile() function at the top of main() in order for your flags-functionality to work correctly.
You don't have to use a text editor or memorize a .ini syntax to change the program's behavior.
In a pinch (e.g. no shell access) you can create/remove settings simply using the "New Folder" and "Delete" functionality of the desktop's window manager.
The settings are persistent across runs of the program (i.e. no need to specify the same command line arguments every time)
The settings are persistent across reboots of the computer
The flags can be easily modified by a script (via e.g. touch ~/MyProgram_Settings/blah or rm -f ~/MyProgram_Settings/blah) -- much easier than getting a shell script to correctly modify a .ini file
If you have code in multiple different .cpp files that needs to be controlled by the same flag-file, you can just call IsBehaviorFlagEnabled("that_file") from each of them; no need to have every call site refer to the same global boolean variable if you don't want them to.
Extra credit: If you're using a bug-tracker and therefore have bug/feature ticket numbers assigned to various issues, you can creep the elegance a little bit further by also adding a class like this one:
/** This class encapsulates a feature that can be selectively disabled/enabled by putting an
  * "enable_behavior_xxxx" or "disable_behavior_xxxx" file into the ~/MyProgram_Settings folder.
  */
class ConditionalBehavior
{
public:
   /** Constructor.
     * @param bugNumber Bug-Tracker ID number associated with this bug/feature.
     * @param defaultState If true, this behavior will be enabled by default (i.e. if no corresponding
     *                     file exists in ~/MyProgram_Settings). If false, it will be disabled by default.
     * @param switchAtVersion If specified, this feature's default-enabled state will be inverted if
     *                        GetMyProgramVersion() returns any version number greater than this.
     */
   ConditionalBehavior(int bugNumber, bool defaultState, int switchAtVersion = -1)
      : _enabled(defaultState)   // start from the compiled-in default
   {
      if ((switchAtVersion >= 0)&&(GetMyProgramVersion() >= switchAtVersion)) _enabled = !_enabled;

      std::string fn = defaultState ? "disable" : "enable";
      fn += "_behavior_";
      fn += std::to_string(bugNumber);

      if ((IsBehaviorFlagEnabled(fn.c_str()))
        ||(IsBehaviorFlagEnabled("enable_everything")))
      {
         _enabled = !_enabled;
         printf("Note: %s Behavior #%i\n", _enabled?"Enabling":"Disabling", bugNumber);
      }
   }

   /** Returns true iff this feature should be enabled. */
   bool IsEnabled() const {return _enabled;}

private:
   bool _enabled;
};
Then, in your ExtensionsAPI.cpp file, you might have something like this:
// Extensions API feature is tracker #4321; disabled by default for now,
// but you can try it out via "touch ~/MyProgram_Settings/enable_behavior_4321"
static const ConditionalBehavior _feature4321(4321, false);

// Also tracker #4222 is now enabled-by-default, but you can disable
// it manually via "touch ~/MyProgram_Settings/disable_behavior_4222"
static const ConditionalBehavior _feature4222(4222, true);
[...]
void DoTheExtensionsAPIStuff()
{
if (_feature4321.IsEnabled() == false) return;
[... otherwise do the extensions API stuff ...]
}
... or if you know that you are planning to make your Extensions API enabled-by-default starting with version 4500 of your program, you can set it so that Extensions API will be enabled-by-default only if GetMyProgramVersion() returns 4500 or greater:
static ConditionalBehavior _feature4321(4321, false, 4500);
[...]
... also, if you wanted to get more elaborate, the API could be extended so that IsBehaviorFlagEnabled() can optionally return a string to the caller containing the contents of the file it found (if any), so that you could do shell commands like:
echo "opengl" > ~/MyProgram_Settings/graphics_renderer
... to tell your program to use OpenGL for its 3D graphics, or etc:
// In Renderer.cpp
std::string rendererType;
if (IsBehaviorFlagEnabled("graphics_renderer", &rendererType))
{
printf("The user wants me to use [%s] for rendering 3D graphics!\n", rendererType.c_str());
}
else printf("The user didn't specify what renderer to use.\n");
This is a kind of follow-up of another question I asked (here) where I was made aware that using the same backend with multiple sinks is not a safe approach.
What I am trying to achieve is to "decouple" the severity levels inside a library/plugin from the applications using it, while being able to write the different logs to the same output (be it stdout or, more likely, a file or a remote logger); this for the following reasons:
I wouldn't like to tie the severity levels inside my library/plugin to those of the applications using them, because a change (for whatever reason) of the severity-level list in one of the applications would force the library to be "updated" with the new severities and, in a waterfall, all the other applications which use the library/plugin as well
I would like to be able to use library-specific severity levels (which, to be correctly displayed in the log messages, must be supplied to the sink formatter, hence my need to use different sinks)
What is the best way to achieve this?
Some afterthoughts: as per Andrey's reply to my previous question, the "problem" is that the backend is not synchronized to receive data from multiple sources (sinks); thus the solution might seem to be to create a synchronized version of the backends (e.g. wrapping the writes to the backend in a boost::asio post)...
Is this the only solution?
Edit/Update
I am updating the question after Andrey's awesome reply, mainly for the sake of completeness: the libraries/plugins are meant to be used with internally developed applications only, so it is assumed that there will be a common API we can shape to define the log structure and behaviour.
Plus, most applications are meant to run mostly "unmanned", i.e. with really minimal, if not zero, user/runtime interaction, so the basic idea is to have the log level set in some plugin-specific configuration file, read at startup (or reloaded upon a specific application API command).
First, I'd like to address this premise:
which, to be correctly displayed in the log messages, must be supplied to the sink formatter, hence my need to use different sinks
You don't need different sinks to be able to filter or format different types of severity levels. Your filters and formatters have to deal with that, not the sink itself. Only create multiple sinks if you need multiple log targets. So to answer your question, you should focus on the protocol of setting up filters and formatters rather than sinks.
The exact way to do that is difficult to suggest because you didn't specify the design of your application/plugin system. What I mean by that is that there must be some common API shared by both the application and the libraries, and the way you set up logging will depend on where that API belongs. Severity levels, among other things, must be a part of that API. For example:
If you're writing plugins for a specific application (e.g. plugins for a media player) then the application is the one that defines the plugin API, including the severity levels and even possibly the attribute names the plugins must use. The application configures sinks, including filters and formatters, using the attributes mandated by the API, and plugins never do any configuration and only emit log records. Note that the API may include some attributes that allow to distinguish plugins from each other (e.g. a channel name), which would allow the application to process logs from different plugins differently (e.g. write to different files).
If you're writing both plugins and application(s) to adhere to some common API, possibly defined by a third party, then the logging protocol must still be defined by that API. If it's not, then you cannot assume that any other application or plugin not written by you supports logging of any kind, or even that it uses Boost.Log at all. In this case every plugin and the application itself must deal with logging independently, which is the worst-case scenario, because the plugins and the application may affect each other in unpredictable ways. It is also difficult to manage a system like that, because every component has to be configured separately by the user.
If you're writing an application that must be compatible with multiple libraries, each having its own API, then it is the application that should be aware of the logging convention used in each and every library it uses; there's no way around it. This may include setting up callbacks in the libraries, intercepting file output and translating between the libraries' log severity levels and the application's severity levels. If the libraries use Boost.Log to emit log records, then they should document the attributes they use, including the severity levels, so that the application is able to set up the logging properly.
So, in order to take one approach or the other, you should first decide how your application and plugins interface with each other, what API they share and how that API defines logging. The best-case scenario is when you define the API, so you can also set the logging conventions you want. In that case, although possible, it is not advisable or typical to allow arbitrary severity levels in the API, because it significantly complicates implementation and configuration of the system.
However, just in case, if for some reason you do need to support arbitrary severity levels and there's no way around that, you can define an API for the library to provide, which can help the application set up filters and formatters. For example, each plugin can provide an API like this:
// Returns the filter that the plugin wishes to use for its records
boost::log::filter get_filter();
// The function extracts log severity from the log record
// and converts it to a string
typedef std::function<
std::string(boost::log::record_view const&)
> severity_formatter;
// Returns the severity formatter, specific for the plugin
severity_formatter get_severity_formatter();
Then the application can use a special filter that will make use of this API.
struct plugin_filters
{
std::shared_mutex mutex;
// Plugin-specific filters
std::vector< boost::log::filter > filters;
};
// Custom filter
bool check_plugin_filters(
boost::log::attribute_value_set const& values,
std::shared_ptr< plugin_filters > const& p)
{
// Filters can be called in parallel, we need to synchronize
std::shared_lock< std::shared_mutex > lock(p->mutex);
for (auto const& f : p->filters)
{
// Call each of the plugin's filter and pass the record
// if any of the filters passes
if (f(values))
return true;
}
// Suppress the record by default
return false;
}
std::shared_ptr< plugin_filters > pf = std::make_shared< plugin_filters >();
// Set the filter
sink->set_filter(std::bind(&check_plugin_filters, std::placeholders::_1, pf));
// Add filters from plugins
std::unique_lock< std::shared_mutex > lock(pf->mutex);
pf->filters.push_back(plugin1->get_filter());
pf->filters.push_back(plugin2->get_filter());
...
And a similar formatter:
struct plugin_formatters
{
std::shared_mutex mutex;
// Plugin-specific severity formatters
std::vector< severity_formatter > severity_formatters;
};
// Custom severity formatter
std::string plugin_severity_formatter(
boost::log::record_view const& rec,
std::shared_ptr< plugin_formatters > const& p)
{
std::shared_lock< std::shared_mutex > lock(p->mutex);
for (auto const& f : p->severity_formatters)
{
// Call each of the plugin's formatter and return the result
// if any of the formatters is able to extract the severity
std::string str = f(rec);
if (!str.empty())
return str;
}
// By default return an empty string
return std::string();
}
std::shared_ptr< plugin_formatters > pf =
std::make_shared< plugin_formatters >();
// Set the formatter
sink->set_formatter(
boost::log::expressions::stream << "["
<< boost::phoenix::bind(&plugin_severity_formatter,
boost::log::expressions::record, pf)
<< "] " << boost::log::expressions::message);
// Add formatters from plugins
std::unique_lock< std::shared_mutex > lock(pf->mutex);
pf->severity_formatters.push_back(plugin1->get_severity_formatter());
pf->severity_formatters.push_back(plugin2->get_severity_formatter());
...
Note, however, that at least with regard to filters, this approach is flawed, because you allow the plugins to define the filters. Normally, it should be the application that selects which records are being logged. And for that there must be a way to translate library-specific severity levels to some common levels, probably defined by the application.
The following page provides a nice simple solution for file based logging in Qt for debugging without using a larger logging framework like the many that are suggested in other SO questions.
I'm writing a library and would like to instantiate a logger that the classes in the library can use (mostly for debugging purposes). There is no int main() function since it's a library. So would the best approach be to add the instantiation into a file like logger.h and have any classes include logger.h if it would like to do qDebug() << PREFIX << "Bla" as the link above suggests?
I pretty much agree with OrcunC but I'd recommend making that ofstream a little more accessible and capable of handling the Qt value types.
Here's my recommended process:
Create a global QIODevice to which everything will be written. This will probably be a QFile.
Create a QTextStream wrapper around that QIODevice that you'll then use for all the logging.
If you want something slightly more complicated, create methods that do the filtering based on log level info.
For example:
// setup the global logger somewhere appropriate
QFile *file = new QFile("your.log");
file->open(QIODevice::WriteOnly | QIODevice::Append | QIODevice::Text);
QTextStream *qlogger = new QTextStream(file);
And once the global logger is initialized, you could reference it as a global:
#include "qlogger.h"
//... and within some method
*qlogger << "your log" << aQtValueType;
But you might want some filtering:
#include "qlogger.h"
// lower number = higher priority
void setCurrentLogLevel(int level) {
globalLogLevel = level;
}
QTextStream* qLog(int level) {
if (level <= globalLogLevel) {
return qlogger;
}
return getNullLogger(); // implementation left to reader
}
And then you'd likely create an enum that represents the LogLevel and do something like this:
#include "qlogger.h"
//...
setCurrentLogLevel(LogLevel::Warning);
*qLog(LogLevel::Debug) << "this will be filtered" << yourQMap;
*qLog(LogLevel::Critical) << "not filtered" << yourQString;
As you'd be dealing with globals, carefully consider memory management issues.
If you follow the method in that link, ALL messages that the application outputs with qCritical(), qDebug(), qFatal() and qWarning() will flow into your handler.
So be careful! You may get not only your library's trace messages but the entire Qt framework's messages as well. I guess this is not what you really want.
Instead, as a simple solution, define a global std::ofstream in your library and use it only within your library.
Whenever you write a library in C++ or C, it is best practice to declare all your methods in a .h file and define the methods/classes in a .cpp/.c file. This serves two purposes:
The .h file is used to compile a third-party application that uses your library, and the library itself is linked in at link time.
The developer who is using your library can use the .h file as a reference to your library, since it contains all the declarations.
So, yes, you should declare the logger in a .h file and have the other classes include logger.h.
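Putting the last two answers together, a minimal sketch of a library-private logger, declared in logger.h and defined in logger.cpp (the file, namespace and variable names are illustrative):
// --- logger.h (shipped with the library, included by the library's classes) ---
#ifndef MYLIB_LOGGER_H
#define MYLIB_LOGGER_H

#include <fstream>

namespace mylib {
    // Declared here, defined once in logger.cpp; used only inside the
    // library, so application and Qt framework messages are unaffected.
    extern std::ofstream logStream;
}

#endif // MYLIB_LOGGER_H

// --- logger.cpp (compiled into the library) ---
#include "logger.h"

namespace mylib {
    std::ofstream logStream("mylib_debug.log", std::ios::app);
}

// --- any class in the library ---
#include "logger.h"

void SomeLibraryClass_doWork()
{
    mylib::logStream << "doWork() called" << std::endl;
}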