I have the following statement:
static Logging::Logger* common_logger = new Logging::Logger(Logging::Logger::LEVEL);
In Logger.h I have:
class Logger {
public:
enum LEVEL {
Debug,
Warning,
Notification,
Error
};
};
I have included Logger.h in another class and call it like this:
Logging::log(CustomDialog::logger, Logging::Entry, CustomDialog::CLASSNAME, "CustomDialog");
I need to know if this is the right way to do it. The reason I am doing this is to get logs based upon the level.
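For reference, here is a minimal sketch of what such level-based filtering might look like. Note that the constructor needs a concrete enum value (e.g. Logging::Logger::Debug); Logging::Logger::LEVEL in the snippet above names the enum type itself and would not compile. Everything beyond the enum here is an assumption for illustration:

#include <iostream>
#include <string>

namespace Logging {

class Logger {
public:
    enum LEVEL { Debug, Warning, Notification, Error };

    // Takes the minimum severity that should actually be emitted.
    explicit Logger(LEVEL threshold) : threshold_(threshold) {}

    // Print the message only if it is at or above the threshold.
    void log(LEVEL severity, const std::string& message) {
        if (severity >= threshold_)
            std::cerr << message << '\n';
    }

private:
    LEVEL threshold_;
};

} // namespace Logging

// Usage:
//   static Logging::Logger* common_logger =
//       new Logging::Logger(Logging::Logger::Warning);
//   common_logger->log(Logging::Logger::Error, "something failed");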
Take a look at Log4cxx - it's easy to use and contains just about every feature you might want in a logging framework for C++. It's extensible, it can be configured through configuration files, and it even supports remote logging out of the box.
You can use ACE_DEBUG. It seems old-school (à la printf) but it is thread-safe, reliable, and fully configurable (log files, stdout, etc.). You'll have to link against libACE (the Adaptive Communication Environment), of course, but its development packages are available by default in many Linux distros nowadays. I've been looking over the list from that C++ logging libraries post mentioned by Als, but it seems most people run into memory leaks with many of the frameworks, and Boost.Log is not out yet.
Another point is that most logging libraries use streams, so code like this:
// from thread 1
mlog(mlog::DEBUG) << "Debug message goes here" << mlog::endl;
// from thread 2
mlog(mlog::INFO) << "Info message goes here" << mlog::endl;
will not work as expected in a multithreaded environment, while ACE will perform correctly there.
The output of the above will look something like this:
[thread1 | 12:04.23] Debug me[thread2 | 12:04.24] Info message goesssage goes herehere
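A common workaround (a sketch, not ACE-specific; all names here are made up for illustration) is to format the whole message into a local buffer first and then perform a single locked write, so messages from different threads cannot interleave:

#include <iostream>
#include <mutex>
#include <sstream>
#include <string>

std::mutex log_mutex;

// Each chained << is a separate call, so two threads can interleave
// between them. Formatting into a local ostringstream first and then
// writing once under a mutex keeps every message intact.
void log_line(const std::string& level, const std::string& message)
{
    std::ostringstream line;
    line << "[" << level << "] " << message << "\n";

    std::lock_guard<std::mutex> lock(log_mutex);
    std::cout << line.str(); // one write per complete message
}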
I am working on a big legacy project and need to redo the common logger.
I am trying to keep the same logger interface as before, to avoid changing a ton of logging call sites.
The reason I need to redo the logger is that the old one sent syslog over UDP using built-in library functions, while the new one uses GELF over UDP.
Suppose I have a log message with two parts, with severity info. The old interface looks like this:
Log_INFO<< "First part message" <<"Second part message"<< endl;
Log_INFO works like std::cout, but it does two things:
Print the message on the command line.
Collect it in Graylog.
My new function looks like this:
//Severity = {debug,info,warning, error, critical}
Log(Severity, whole_message)
For the same example,
Log("info",first_part_message+ second_part_message)
My question is how can I make my function is able to read log like the old one.
One common way of doing this is creating a custom streambuf-derived class, say LogStreambuf, and an ostream-derived class, say LogStream, that uses LogStreambuf (but is otherwise a plain jane ostream).
Then your log objects would be
LogStream Log_INFO("info");
LogStream Log_WARN("warn");
etc.
Your custom streambuf probably should call your Log function from its sync method.
See e.g. this for an example, and this for further guidance.
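Here is a minimal sketch of that approach, assuming a backend function Log(severity, message) shaped like the one in the question (the stub body is an assumption). The sync() override fires on each flush, e.g. from endl:

#include <iostream>
#include <sstream>
#include <string>

// Hypothetical backend from the question (stub for illustration).
void Log(const std::string& severity, const std::string& message)
{
    std::cout << "[" << severity << "] " << message;
}

// Buffers characters and forwards the accumulated text to Log()
// whenever the stream is flushed.
class LogStreambuf : public std::stringbuf {
public:
    explicit LogStreambuf(std::string severity)
        : severity_(std::move(severity)) {}

protected:
    int sync() override {
        if (!str().empty()) {
            Log(severity_, str()); // hand the whole message to the backend
            str("");               // reset the buffer
        }
        return 0;
    }

private:
    std::string severity_;
};

// A plain ostream that owns a LogStreambuf.
class LogStream : public std::ostream {
public:
    explicit LogStream(const std::string& severity)
        : std::ostream(&buf_), buf_(severity) {}

private:
    LogStreambuf buf_;
};

LogStream Log_INFO("info");

int main()
{
    // Old-style call sites keep working unchanged:
    Log_INFO << "First part message" << "Second part message" << std::endl;
}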
This is a kind of follow-up to another question I asked (here), where I was made aware that using the same backend with multiple sinks is not a safe approach.
What I am trying to achieve is to "decouple" the severity levels inside a library/plugin from the applications using it, while being able to write the different logs to the same output (be it stdout or, more likely, a file or a remote logger). This is for the following reasons:
I wouldn't like to tie the severity levels inside my library/plugin to those of the applications using it, because a change (for whatever reason) to the severity-level list in one of the applications would force the library/plugin to be "updated" with the new severities and, in a cascade, all the other applications that use the library/plugin.
I would like to be able to use library-specific severity levels (which, to be displayed correctly in the log messages, must be supplied to the sink formatter; hence my need for different sinks).
What is the best way to achieve this?
Some afterthoughts: as per Andrey's reply to my previous question, the "problem" is that the backend is not synchronized to receive data from multiple sources (sinks); thus the solution might seem to be to create a synchronized version of the backends (e.g. wrapping the writes to the backend in a boost::asio post)...
Is this the only solution?
Edit/Update
I am updating the question after Andrey's awesome reply, mainly for the sake of completeness: the libraries/plugins are meant to be used with internally developed applications only, so it is assumed that there will be a common API we can shape to define the log structure and behaviour.
Also, most applications are meant to run mainly "unmanned", i.e. with really minimal, if not no, user/runtime interaction, so the basic idea is to have the log level set in some plugin-specific configuration file, read at startup (or reloaded upon a specific application API command).
First, I'd like to address this premise:
to be displayed correctly in the log messages, [the severity levels] must be supplied to the sink formatter; hence my need for different sinks
You don't need different sinks to be able to filter or format different types of severity levels. Your filters and formatters have to deal with that, not the sink itself. Only create multiple sinks if you need multiple log targets. So to answer your question, you should focus on the protocol of setting up filters and formatters rather than sinks.
The exact way to do that is difficult to suggest, because you didn't specify the design of your application/plugin system. What I mean is that there must be some common API shared by both the application and the libraries, and the way you set up logging will depend on where that API belongs. Severity levels, among other things, must be a part of that API. For example:
If you're writing plugins for a specific application (e.g. plugins for a media player), then the application is the one that defines the plugin API, including the severity levels and possibly even the attribute names the plugins must use. The application configures sinks, including filters and formatters, using the attributes mandated by the API, and plugins never do any configuration and only emit log records. Note that the API may include some attributes that allow the application to distinguish plugins from each other (e.g. a channel name), which would allow it to process logs from different plugins differently (e.g. write them to different files).
If you're writing both plugins and application(s) to adhere to some common API, possibly defined by a third party, then the logging protocol must still be defined by that API. If it's not, then you cannot assume that any other application or plugin not written by you supports logging of any kind, or even that it uses Boost.Log at all. In this case every plugin and the application itself must deal with logging independently, which is the worst-case scenario, because the plugins and the application may affect each other in unpredictable ways. It is also difficult to manage a system like that, because every component has to be configured separately by the user.
If you're writing an application that must be compatible with multiple libraries, each having its own API, then it is the application that has to be aware of the logging convention taken in each and every library it uses; there's no way around it. This may include setting up callbacks in the libraries, intercepting file output, and translating between the libraries' log severity levels and the application's severity levels. If the libraries use Boost.Log to emit log records, then they should document the attributes they use, including the severity levels, so that the application is able to set up the logging properly.
So, in order to take one approach or the other, you should first decide how your application and plugins interface with each other, what API they share, and how that API defines logging. The best-case scenario is when you define the API, so you can also set the logging conventions you want. In that case, although possible, it is not advisable or typical to allow arbitrary severity levels in the API, because it significantly complicates the implementation and configuration of the system.
However, if for some reason you do need to support arbitrary severity levels and there's no way around that, you can define an API for the library to provide which can help the application set up filters and formatters. For example, each plugin can provide an API like this:
// Returns the filter that the plugin wishes to use for its records
boost::log::filter get_filter();

// The function extracts log severity from the log record
// and converts it to a string
typedef std::function<
    std::string(boost::log::record_view const&)
> severity_formatter;

// Returns the severity formatter, specific to the plugin
severity_formatter get_severity_formatter();
Then the application can use a special filter that will make use of this API.
struct plugin_filters
{
    std::shared_mutex mutex;
    // Plugin-specific filters
    std::vector< boost::log::filter > filters;
};

// Custom filter
bool check_plugin_filters(
    boost::log::attribute_value_set const& values,
    std::shared_ptr< plugin_filters > const& p)
{
    // Filters can be called in parallel, so we need to synchronize
    std::shared_lock< std::shared_mutex > lock(p->mutex);
    for (auto const& f : p->filters)
    {
        // Call each of the plugin's filters and pass the record
        // if any of the filters passes
        if (f(values))
            return true;
    }
    // Suppress the record by default
    return false;
}

std::shared_ptr< plugin_filters > pf = std::make_shared< plugin_filters >();

// Set the filter
sink->set_filter(std::bind(&check_plugin_filters, std::placeholders::_1, pf));

// Add filters from plugins
std::unique_lock< std::shared_mutex > lock(pf->mutex);
pf->filters.push_back(plugin1->get_filter());
pf->filters.push_back(plugin2->get_filter());
...
And a similar formatter:
struct plugin_formatters
{
    std::shared_mutex mutex;
    // Plugin-specific severity formatters
    std::vector< severity_formatter > severity_formatters;
};

// Custom severity formatter
std::string plugin_severity_formatter(
    boost::log::record_view const& rec,
    std::shared_ptr< plugin_formatters > const& p)
{
    std::shared_lock< std::shared_mutex > lock(p->mutex);
    for (auto const& f : p->severity_formatters)
    {
        // Call each of the plugin's formatters and return the result
        // if any of the formatters is able to extract the severity
        std::string str = f(rec);
        if (!str.empty())
            return str;
    }
    // By default return an empty string
    return std::string();
}

std::shared_ptr< plugin_formatters > pf =
    std::make_shared< plugin_formatters >();

// Set the formatter
sink->set_formatter(
    boost::log::expressions::stream << "["
        << boost::phoenix::bind(&plugin_severity_formatter,
               boost::log::expressions::record, pf)
        << "] " << boost::log::expressions::message);

// Add formatters from plugins
std::unique_lock< std::shared_mutex > lock(pf->mutex);
pf->severity_formatters.push_back(plugin1->get_severity_formatter());
pf->severity_formatters.push_back(plugin2->get_severity_formatter());
...
Note, however, that at least with regard to filters this approach is flawed, because you allow the plugins to define the filters. Normally it should be the application that selects which records are being logged. And for that there must be a way to translate library-specific severity levels to some common levels, probably defined by the application.
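As a small illustration of that last point (a sketch; all names here are assumptions, not part of Boost.Log or the answer above), the translation could be a per-plugin mapping from library severities to application-defined severities:

#include <map>

// Application-wide severity levels.
enum class app_severity { debug, info, warning, error };

// Hypothetical severity type used inside one particular plugin.
enum class plugin_severity { trace, note, problem, failure };

// Per-plugin translation table, which could be supplied alongside
// get_filter() / get_severity_formatter() in the plugin API.
app_severity translate(plugin_severity s)
{
    static const std::map<plugin_severity, app_severity> table = {
        { plugin_severity::trace,   app_severity::debug   },
        { plugin_severity::note,    app_severity::info    },
        { plugin_severity::problem, app_severity::warning },
        { plugin_severity::failure, app_severity::error   },
    };
    return table.at(s);
}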
How do I get the list of open file handles by process id in C#?
I'm interested in digging down and getting the file names as well.
I'm looking for the programmatic equivalent of what Process Explorer does.
Most likely this will require interop.
I'm considering adding a bounty on this; the implementation is nastily complicated.
Ouch, this is going to be hard to do from managed code.
There is a sample on CodeProject.
Most of the stuff can be done via interop, but you need a driver to get the file name, because it lives in the kernel's address space. Process Explorer embeds the driver in its resources. Getting this all hooked up from C#, and supporting 64-bit as well as 32-bit, is going to be a major headache.
You can also run the command-line app Handle, by Mark Russinovich, and parse its output.
Have a look at this file:
http://vmccontroller.codeplex.com/SourceControl/changeset/view/47386#195318
And use:
DetectOpenFiles.GetOpenFilesEnumerator(processID);
Demo:
using System;
using System.Diagnostics;

namespace OpenFiles
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var openFiles = VmcController.Services.DetectOpenFiles.GetOpenFilesEnumerator(Process.GetCurrentProcess().Id))
            {
                while (openFiles.MoveNext())
                {
                    Console.WriteLine(openFiles.Current);
                }
            }
            Console.WriteLine();
            Console.ReadKey();
        }
    }
}
It has a dependency on the System.EnterpriseServices assembly.
You can P/Invoke the NtQuerySystemInformation function to query for all handles, and then go from there. This Google Groups discussion has the details.
Take a look at wj32's Process Hacker version 1, which can do what you asked, and more.
Handle is a great program, and the link to CodeProject is good.
@Brian
The reason for the code is that handle.exe is NOT redistributable. Nor do they release their source.
It looks as if .NET will not do this easily, since it appears that an embedded device driver is required to access the information. This cannot be done in .NET without an unmanaged DLL. It's relatively deep kernel code compared to typical .NET coding. I'm surprised that WMI does not expose this.
Perhaps use a command-line tool:
OpenedFilesView v1.50 - View opened/locked files in your system (sharing violation issues)
http://www.nirsoft.net/utils/opened_files_view.html
So... I have a kernel-mode component and a user-mode component I'm putting together using the turnkey build environment of the NT DDK 7.1.0. The kernel component is all .c/.h/.rc files. The user-mode component is .cpp/.c/.h/.rc files.
At first it seemed simplest to use build for both, as I saw I could modify the ./sources file of the user-mode component to say something like:
TARGETNAME = MyUserModeComponent
TARGETTYPE = PROGRAM
UMTYPE = windows
UMENTRY = winmain
USE_MSVCRT = 1
That didn't seem to cause a problem, and so I was pleased until I tried to #include <string> (or <memory>, or whatever). It doesn't find that stuff:
error C1083: Cannot open include file: 'string': No such file or directory
Still, it's compiling the user mode piece with C++ language semantics. But how do I get the standard includes to work?
I don't technically need to use the DDK build tool for the user-mode piece; I could make a Visual Studio solution instead. I'm a bit wary, as I have bumped into other annoyances, like the fact that the DDK uses __stdcall instead of __cdecl by default... and there isn't any pragma or compiler switch to override this. You literally have to go into each declaration you care about and change it, assuming you have the source to do so. :-/
I'm starting to wonder if this is just a fractal descent into "just because you CAN doesn't mean you SHOULD build user-mode apps with the DDK. Here be dragons." So my question isn't just about this particular technical hurdle, but rather whether I should abandon the idea of building a C++ user-mode component with the DDK tools... just because the kernel component is pure C.
To build a user-mode program with the WINDDK you need to add some variables to your SOURCES file:
386_STDCALL=0 to use the __cdecl calling convention by default
USE_STL=1 to use the STL
USE_NATIVE_EH=1 to add support for exception handling
Everything else you already have.
Here is my full SOURCES file for reference:
TARGETNAME    = MyUserModeComponent
TARGETTYPE    = PROGRAM
TARGETPATH    = obj
UMTYPE        = console
UMENTRY       = main
USE_MSVCRT    = 1
USE_NATIVE_EH = 1
USE_STL       = 1
386_STDCALL   = 0
SOURCES       = main.cpp
And main.cpp:
#include <iostream>
#include <string>

using namespace std;

int main()
{
    string s = "bla bla bla!";
    cout << s;
    return 0;
}
Have fun!
Quick Answer
Abandon the idea of building user-mode components with DDK tools (although I find the concept fascinating :-P)
Your kernel mode component should be built separately from the user mode components as a matter of good practice.
Vague thoughts
Off the top of my head, and really speaking from limited experience... there are a lot of subtle differences that can creep up if you try to mix the two together.
Take your own example of __cdecl vs. __stdcall: you have two different calling conventions. The kernel stuff is all __cdecl, while the C++ methods are wrapped in WINAPI (__stdcall) calling conventions; __stdcall does automatic stack cleanup and expects frame pointers inserted all over the place. And if you accidentally use compiler options that trigger __fastcall, it will be a pain to debug.
You can definitely hack something together, but do you really want to keep track of that in your user-space code and build environment? UGH I say.
Unless you have very specific engineering reasons to mix the two environments (and no, a unified build experience is not a valid reason, because you can get that from a batch file called buildall.bat), I say use the separate toolchains.
The following page provides a nice, simple solution for file-based logging in Qt for debugging, without using a larger logging framework like the many suggested in other SO questions.
I'm writing a library and would like to instantiate a logger that the classes in the library can use (mostly for debugging purposes). There is no int main() function, since it's a library. So would the best approach be to add the instantiation to a file like logger.h and have any class include logger.h if it would like to do qDebug() << PREFIX << "Bla", as the link above suggests?
I pretty much agree with OrcunC but I'd recommend making that ofstream a little more accessible and capable of handling the Qt value types.
Here's my recommended process:
Create a global QIODevice to which everything will be written. This will probably be a QFile.
Create a QTextStream wrapper around that QIODevice that you'll then use for all the logging.
If you want something slightly more complicated, create methods that do the filtering based on log level info.
For example:
// setup the global logger somewhere appropriate
QFile *file = new QFile("your.log");
file->open(QIODevice::WriteOnly | QIODevice::Append); // open for writing, not ReadOnly
QTextStream *qlogger = new QTextStream(file);
And once the global logger is initialized, you could reference it as a global:
#include "qlogger.h"
//... and within some method
*qlogger << "your log" << aQtValueType;
But you might want some filtering:
#include "qlogger.h"
// lower number = higher priority
int globalLogLevel = 0;

void setCurrentLogLevel(int level) {
    globalLogLevel = level;
}

QTextStream* qLog(int level) {
    if (level <= globalLogLevel) {
        return qlogger;
    }
    return getNullLogger(); // implementation left to reader
}
And then you'd likely create an enum representing the LogLevel and do something like this:
#include "qlogger.h"
//...
setCurrentLogLevel(LogLevel::Warning);
*qLog(LogLevel::Debug) << "this will be filtered" << yourQMap;
*qLog(LogLevel::Critical) << "not filtered" << yourQString;
As you'd be dealing with globals, carefully consider memory management issues.
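One way to reduce that burden (a sketch under the same assumed names; not from the original answer) is to replace the raw new-ed globals with a function-local static accessor, so construction happens on first use and destruction happens automatically at exit:

#include <QFile>
#include <QTextStream>
#include <QtGlobal>

// Returns the global logging stream. The statics are created on
// first call and torn down in reverse order at program exit.
QTextStream* qLogger()
{
    static QFile file("your.log");
    static bool opened =
        file.open(QIODevice::WriteOnly | QIODevice::Append);
    static QTextStream stream(&file);
    Q_UNUSED(opened);
    return &stream;
}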
If you follow the method in that link, ALL messages of the application output with qCritical(), qDebug(), qFatal(), and qWarning() will flow into your handler.
So be careful! You may get not only your library's trace messages but the entire Qt framework's messages. I guess this is not what you really want.
Instead, as a simple solution, define a global ofstream in your library and use it only within your library.
Whenever you write a library in C++ or C, it is best practice to declare all your methods in a .h file and define the methods/classes in a .cpp/.c file. This serves two purposes:
The .h file is needed to compile a third-party application that uses your library, while the library itself is used at link time.
The developer who is using your library can use the .h file as a reference to your library, since it contains all the declarations.
So, yes, you need to declare the logger in a .h file and have the other classes include logger.h.
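For example (a sketch using the global QTextStream* from the answer above; file names and identifiers are illustrative), the declaration/definition split could look like this:

// logger.h - declarations only; this is what library classes include.
#ifndef LOGGER_H
#define LOGGER_H

#include <QTextStream>

extern QTextStream* qlogger; // declared here...

#endif // LOGGER_H

// logger.cpp - compiled once into the library.
#include "logger.h"

QTextStream* qlogger = nullptr; // ...defined here, initialized at library setup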