boost::log setting "Channel" attribute in a channel logger - c++

I use several severity_channel_logger_mt instances throughout the project with different "Channel" attributes. However, for one specific log line, I'd like to set the "Channel" attribute directly in the call. Using the macro BOOST_LOG_CHANNEL_SEV(logger, channel, severity), this is actually not difficult to do. However, this changes the logger's "Channel" attribute for good: subsequent logging calls will not use the initial channel attribute, but the one set by the last logging call.
The only way I found to change the channel attribute back to the original value is to misuse the open_record() function of the logger object.
My Question: is there a more elegant way of doing this? Is there perhaps a specific function that allows setting attributes of a logger directly?
Code snippet to highlight the procedure:
auto & lg = global_logger::get();
BOOST_LOG_CHANNEL_SEV(lg, "test-subsystem", exec_severity) << "Message 1";
// misuse the open_record call to set the channel attribute
// reset channel name back to "global"
auto rc = lg.open_record(boost::log::keywords::channel = "global" );
rc.reset(); // attempt to clean-up a bit
BOOST_LOG_SEV(lg, exec_severity) << "Message 2";
In the above example, "Message 1" should come from "test-subsystem", but the other messages should come from the "global" channel. If the open_record() and rc.reset() lines are commented out, both messages come from "test-subsystem".
Update:
I ended up implementing a slightly different solution:
I created a dedicated logger for these log messages.
I use BOOST_LOG_CHANNEL_SEV() to log to this logger, since that macro takes an argument that sets the "Channel" name on each call.
The updated code snippet from above looks like this:
auto & stlog = global_logger::get();
auto & lg = special_logger::get();
BOOST_LOG_CHANNEL_SEV(lg, "test-subsystem", exec_severity) << "Message 1";
// this log line will be in channel "global" again
BOOST_LOG_SEV(stlog, exec_severity) << "Message 2";

is there a more elegant way of doing this?
As you can see in the channel feature reference section, there is a channel method, which can be used to set the channel name. This method is inherited by the logger.
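For the one-off case in the question, that could look roughly like this (a minimal sketch, reusing the lg and exec_severity names from the question's snippet):
auto & lg = global_logger::get();
lg.channel("test-subsystem");                    // switch the "Channel" attribute
BOOST_LOG_SEV(lg, exec_severity) << "Message 1";
lg.channel("global");                            // restore the original channel
BOOST_LOG_SEV(lg, exec_severity) << "Message 2"; // emitted in the "global" channel again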
However, it is generally advised to avoid modifying the channel name for performance reasons. When you have multiple distinct subsystems with corresponding channel names, it is better to dedicate a separate logger to each subsystem. Otherwise you are paying the overhead of setting the channel name on every log record, along with the necessary thread synchronization.

Related

Temporary disable console output for boost::log

I added a sink to a file via boost::log::add_file_log and console output via boost::log::add_console_log. I am calling a logger via BOOST_LOG_SEV and everything works perfectly. But there is a place where I want output only to the file.
How can I disable console output in a certain place?
You could achieve this with attributes and filters. For example, you could set up a filter in your console sink to suppress any log records that have (or don't have, depending on your preference) a particular attribute value attached.
boost::log::add_console_log
(
...
boost::log::keywords::filter = !boost::log::expressions::has_attr("NoConsole")
...
);
Then you could set this attribute in the code region that shouldn't output logs in the console. For example, you could use a scoped attribute:
BOOST_LOG_SCOPED_THREAD_ATTR("NoConsole", true);
BOOST_LOG(logger) << "No console output";
You can use whatever method of setting the attribute you prefer - a thread-local attribute or a logger-specific one, it doesn't matter.
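For example, a logger-specific attribute could be attached and detached like this (a minimal sketch; attrs is assumed to alias boost::log::attributes, and logger is the same logger as above):
namespace attrs = boost::log::attributes;
// attach the marker attribute to this particular logger only
auto res = logger.add_attribute("NoConsole", attrs::constant< bool >(true));
BOOST_LOG(logger) << "No console output";
// detach it again once console output should resume
logger.remove_attribute(res.first);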
The important difference from temporarily removing the sink is that the solution with attributes will not affect other threads that may be logging while you're suspending console output.
You can easily do it with the remove_sink() function.
console_sink = boost::log::add_console_log(std::cout);
boost::log::core::get()->remove_sink(console_sink);
After that you can call add_console_log() again to re-enable console output.

Use channel hiearchy of Boost.Log for severity and sink filtering

I have been studying Boost.Log for a while and I believe now is the time for me to transition my code base from log4cxx to Boost.Log. I believe the design and implementation of Boost.Log will significantly improve my code maintenance and usage. I know the Boost.Log FAQ has a page that says
As for hierarchical loggers, there is no need for this feature in the current library design. One of the main benefits it provides in log4j is determining the appenders (sinks, in terms of this library) in which a log record will
end up. This library achieves the same result by filtering.
I understand the conceptual equivalence and am not trying to make Boost.Log into log4j/log4cxx. Rather my question is: How do I use Boost.Log to get the same functionality that I currently use from log4cxx? In particular, I want to set severity thresholds and sinks for specific nodes in a log source or channel hierarchy. For example, I have logging sources organized as libA.moduleB.componentC.logD, with levels in the hierarchy separated by dots ('.'). Using log4cxx one can set the overall threshold of libA to INFO while the more specific logger, libA.moduleB, has a threshold of DEBUG.
libA.threshold=INFO
libA.moduleB.threshold=DEBUG
Similarly one can attach sinks to arbitrary nodes in the hierarchy.
I believe that a similar capability is possible with Boost.Log but I need help/guidance on how to actually implement this. Plus, I am sure others who would like to transition to Boost.Log from other frameworks will have the same question.
I sincerely appreciate your comments.
In Boost.Log sinks (the objects that write log files) and loggers (the objects through which your application emits log records) are not connected directly, and any sink may receive a log message from any logger. In order to make records from certain loggers appear only in particular sinks you will have to arrange filters in sinks so that the unnecessary records are suppressed for sinks that are not supposed to receive them and passed for others. To distinguish records from different loggers the loggers have to add distinct attributes to every record they make. Typically this is achieved with channels - loggers will attach a Channel attribute that can be used to identify the logger in the filters, formatters or sinks. Channels can be combined with other attributes, such as severity levels. It must be noted though that channels and severity levels are orthogonal, and any channel may have records of any level. Values of different attributes are analyzed separately in filters.
So, for example, if you want records from channel A to be written to file A.log, and from channel B - to B.log, you have to create two sinks - one for each file, and set their filters accordingly.
BOOST_LOG_ATTRIBUTE_KEYWORD(a_severity, "Severity", severity_level)
BOOST_LOG_ATTRIBUTE_KEYWORD(a_channel, "Channel", std::string)
logging::add_file_log(
keywords::file_name = "A.log",
keywords::filter = a_channel == "A");
logging::add_file_log(
keywords::file_name = "B.log",
keywords::filter = a_channel == "B");
See the docs about defining attribute keywords and convenience setup functions. Now you can create loggers for each channel and log records will be routed to sinks by filters.
typedef src::severity_channel_logger< severity_level, std::string > logger_type;
logger_type lg_a(keywords::channel = "A");
logger_type lg_b(keywords::channel = "B");
BOOST_LOG_SEV(lg_a, info) << "Hello, A.log!";
BOOST_LOG_SEV(lg_b, info) << "Hello, B.log!";
You can have as many loggers for a single channel as you like - messages from each of them will be directed to a single sink.
However, there are two problems here. First, the library has no knowledge of the channel nature and considers it just an opaque value. It has no knowledge of channel hierarchy, so "A" and "A.bb" are considered different and unrelated channels. Second, setting up filters like above can be difficult if you want multiple channels to be written to a single file (like, "A" and "A.bb"). Things will become yet more complicated if you want different severity levels for different channels.
If channel hierarchy is not crucial for you, you can make filter configuration easier with a severity threshold filter. With that filter you can set minimal severity level for each corresponding channel. If you want to inherit severity thresholds in sub-channels then your only way is to write your own filter; the library does not provide that out of the box.
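For reference, a minimal sketch of such a per-channel threshold filter using expressions::channel_severity_filter, reusing the a_channel and a_severity keywords defined above (expr is assumed to alias boost::log::expressions; the channel names and levels are placeholders):
typedef expr::channel_severity_filter_actor< std::string, severity_level > min_severity_filter;
min_severity_filter min_severity = expr::channel_severity_filter(a_channel, a_severity);
// minimal accepted severity per channel
min_severity["A"] = info;
min_severity["A.bb"] = debug;
logging::add_file_log(
keywords::file_name = "A.log",
keywords::filter = min_severity);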
There are multiple ways to create a filter, but it boils down to writing a function that accepts attribute values from log records and returns true if the record passes the filter and false otherwise. Perhaps the easiest way is shown in the Tutorial; see the example with phoenix::bind from Boost.Phoenix.
bool my_filter(
    logging::value_ref< severity_level, tag::a_severity > const& level,
    logging::value_ref< std::string, tag::a_channel > const& channel,
    channel_hierarchy const& thresholds)
{
    // See if the log record has the severity level and the channel attributes
    if (!level || !channel)
        return false;
    std::string const& chan = channel.get();
    // Parse the channel string, look for it in the hierarchy
    // and find out the severity threshold for this channel
    severity_level threshold = thresholds.find(chan);
    return level.get() >= threshold;
}
Now setting up sinks would change like this to make use of your new filter:
logging::add_file_log(
keywords::file_name = "A.log",
keywords::filter = phoenix::bind(&my_filter, a_severity.or_none(), a_channel.or_none(), hierarchy_A));
logging::add_file_log(
keywords::file_name = "B.log",
keywords::filter = phoenix::bind(&my_filter, a_severity.or_none(), a_channel.or_none(), hierarchy_B));
Here hierarchy_A and hierarchy_B are your data structures used to store severity thresholds for different channels for the two log files.
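The channel_hierarchy type is left unspecified above; here is one minimal sketch of such a structure, assuming a std::map keyed by channel name and a longest-prefix lookup so that sub-channels inherit their parent's threshold (all names are illustrative):
struct channel_hierarchy
{
    // e.g. { "libA" -> info, "libA.moduleB" -> debug }
    std::map< std::string, severity_level > thresholds;
    severity_level default_threshold;

    severity_level find(std::string chan) const
    {
        // walk up the dotted hierarchy: "libA.moduleB.componentC" -> "libA.moduleB" -> "libA"
        for (;;)
        {
            std::map< std::string, severity_level >::const_iterator it = thresholds.find(chan);
            if (it != thresholds.end())
                return it->second;
            std::string::size_type pos = chan.rfind('.');
            if (pos == std::string::npos)
                return default_threshold;
            chan.erase(pos);
        }
    }
};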

Wowza: Modifying a Stream as it is playing?

It seems like this must be happening in many different contexts, such as adding subtitles. What I want to do is grab a frame, change some feature within it, and then "put it back" so that the end user sees the change. I think I know how to grab and modify the frame, but I do not see how to re-insert it into the stream. I would appreciate a link or code.
On a live stream, there are a few things to consider depending on what the end goal might be. If it's true packet/frame-level manipulation, you would likely need to make the modification and send the output to a new stream (the source remains unscathed but the new stream has the modification). Modifying the stream inline will be very problematic.
Packet level modification using IMediaStreamLivePacketNotify
You can implement the IMediaStreamLivePacketNotify interface to handle new packets and modify them as necessary. Example implementation:
private class PacketListener implements IMediaStreamLivePacketNotify
{
    @Override
    public void onLivePacket(IMediaStream stream, AMFPacket packet)
    {
        // handle packet modifications
    }
}
Upon modifying the packet, you could send it out on a secondary stream published through the Publisher object.
Publisher.createInstance(vhost, appName, appInstName);
The publisher contains functionality to add A/V data to your new stream:
switch (packet.getType())
{
    case IVHost.CONTENTTYPE_AUDIO:
        publisher.addAudioData(packet.getData(), packet.getAbsTimecode());
        break;
    case IVHost.CONTENTTYPE_VIDEO:
        publisher.addVideoData(packet.getData(), packet.getAbsTimecode());
        break;
    case IVHost.CONTENTTYPE_DATA:
    case IVHost.CONTENTTYPE_DATA3:
        publisher.addDataData(packet.getData(), packet.getAbsTimecode());
}
The Duplicate Streams module contains similar functionality and offers a broader look at this kind of implementation.
Packet level modification using getPlayPackets()
You could also look at the IMediaStream object and leverage the IMediaStream.getPlayPackets() functionality. Then you can obtain the packets and modify as needed in a corresponding thread that continually processes the inbound stream. Thereafter, you could use the Publisher object to publish the new stream (similar to the above).
Metadata injection
However, if you are just looking to inject some metadata, the process becomes much simpler. You can modify the AMFDataList within the source stream to include the new meta information.
Adding onto the stream
If you are looking to add data onto the inline stream (versus modifying it), you could simply add it via the IMediaStream object:
IMediaStream.addAudioData(..)

c++ driver mongodb connection options

It seems that the C++ driver doesn't accept the MongoDB connection URI format.
There's no documentation on how I should create the connection string; any guesses?
I need to connect to a replica set with 3 servers and set readPreference options.
Create a connection to a replica set in MongoDB C++ client
Until the problems explained in acm's answer are resolved, I have found a workaround for the C++ driver's broken connection strings. You can create a DBClientReplicaSet using a vector of hosts and ports this way:
//First create a vector of hosts
//( you can ignore port numbers if yours are default)
vector<HostAndPort> hosts;
hosts.push_back(mongo::HostAndPort("YourHost1.com:portNumber1"));
hosts.push_back(mongo::HostAndPort("YourHost2.com:portNumber2"));
hosts.push_back(mongo::HostAndPort("YourHost3.com:portNumber3"));
//Then create a Replica Set DB Client:
mongo::DBClientReplicaSet connection("YourReplicaSetName", hosts, 0);
//Connect to it now:
connection.connect();
//Authenticate to the database(s) if needed
std::string errmsg;
connection.auth("DB1Name","UserForDB1","pass1",errmsg);
connection.auth("DB2Name","UserForDB2","pass2",errmsg);
Now you can use insert, update, etc. just as you did with DBClientConnection. For a quick fix, you can replace your references to DBClientConnection with DBClientBase (which is a parent of both DBClientConnection and DBClientReplicaSet).
Last pitfall: if you are using getLastError(), you must call it with the target database name, like this:
connection.getLastError(std::string("DBName"));
because otherwise it will always return "command failed: must log in", as described in this JIRA ticket.
Set the read preferences for every request
You have two ways to do that:
SlaveOK option
It lets your read queries be directed to secondary servers.
It is set in the query options, which come at the end of the parameters of DBClientReplicaSet::query(). The options are listed in Mongo's official documentation.
The one you would look for is mongo::QueryOption_SlaveOk, which will allow you to have reads made on secondary instances.
This is how you should call query():
connection.query("Database.Collection",
QUERY("_id" << id),
n,
m,
BSON("SomeField" << 1),
QueryOption_SlaveOk);
where n is the number of documents to return (0 if you don't want any limit), m is the number to skip (defaults to 0), the next argument is your projection, and the last is your query option.
To use several query options, combine them with bitwise OR (|) like this:
connection.query("Database.Collection",
QUERY("_id" << id),
n,
m,
BSON("SomeField" << 1),
QueryOption_SlaveOk | QueryOption_NoCursorTimeout | QueryOption_Exhaust);
Query::readPref option
The Query object has a readPref method which sets the read preference for that particular query. It should be called for each query.
You can pass different arguments for more control. They are listed here.
So here's what you should do (I have not tested this one because I can't right now, but it should work just fine):
/* you should pass an array for the tags. Not sure if this is required.
Anyway, let's create an empty array using the builder. */
BSONArrayBuilder bab;
/* if any, add your tags here */
connection.query("Database.Collection",
QUERY("_id" << id).readPref(ReadPreference_SecondaryPreferred, bab.arr()),
n,
m,
BSON("SomeField" << 1),
QueryOption_NoCursorTimeout | QueryOption_Exhaust);
Note: if any readPref option is used, it should override the slaveOk option.
Hope this helped.
Please see the connection string documentation for details on the connection string format.
(code links below are to 2.2.3 files)
To use a connection string with the C++ driver, you should use the ConnectionString class. You first call the ConnectionString::parse static method with a connection string to obtain a ConnectionString object. You then call ConnectionString::connect to obtain a DBClientBase object which you can then use to send queries.
As for read preference, at the moment I do not see a way to set the read preference in the connection string for the C++ driver, which would preclude a per-connection setting.
However, the implementation of DBClientBase returned by calling ConnectionString::parse with a string that identifies a replica set will return you an instance of DBClientReplicaSet. That class honors $readPreference in queries, so you can set your read preference on a per-query basis.
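For illustration, a minimal sketch of that parse/connect sequence (based on the legacy ~2.2.x driver linked above; the "setName/host1,host2,..." string format is an assumption about that driver's non-URI syntax, not the standard mongodb:// format):
std::string errmsg;
mongo::ConnectionString cs = mongo::ConnectionString::parse(
    "YourReplicaSetName/YourHost1.com:27017,YourHost2.com:27017,YourHost3.com:27017",
    errmsg);
if (cs.isValid())
{
    // connect() returns NULL and fills errmsg on failure
    boost::scoped_ptr<mongo::DBClientBase> conn(cs.connect(errmsg));
    if (conn)
    {
        // use conn->query(...), conn->insert(...), etc.
    }
}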
Since the current C++ driver still does not accept standard MongoDB connection URIs, I've opened a ticket:
https://jira.mongodb.org/browse/CXX-2
Please vote for it to help get this fixed.
It seems you can set the read preference before sending a read request by calling the readPref method of your Query object. I have not found a way to set the read preference on a mongo collection object yet.

understanding RProperty IPC communication

I'm studying this source base. Basically it is an Anim server client for Symbian 3rd edition, for the purpose of reliably grabbing input events without consuming them.
If you look at this line of the server, it is basically setting the RProperty value (apparently to an increasing counter); it seems no actual processing of the input is done.
Inside this client line, the client is supposed to be receiving the notification data, but it only calls Attach.
My understanding is that Attach only needs to be called once, but it is not clear in the client what event is triggered every time the server sets the RProperty.
How (and where) is the client supposed to access the RProperty value?
After attaching, the client will at some point Subscribe() to the property, passing a TRequestStatus reference. The server will signal the request status via the kernel when the asynchronous event has happened (in your case, when the property was changed). If your example source code is implemented the right way, you will find an active object (AO; a CActive-derived class) hanging around, and the iStatus of this AO will be passed to the RProperty API. In that case the RunL function of the AO will be called when the property has been changed.
It is essential in Symbian to understand the active object framework, and surprisingly few people actually do. Unfortunately I did not find a really good description online (it is explained quite well in the Symbian OS Internals book), but this page at least gives you a quick example.
Example
In the ConstructL of your CMyActive subclass of CActive:
CKeyEventsClient* iClient;
RProperty iProperty;
// ...
void CMyActive::ConstructL()
{
    RProcess myProcess;
    TSecureId propertyCategory = myProcess.SecureId();
    // avoid interference with other properties by defining the category
    // as a secure ID of your process (perhaps it's the only allowed value)
    TUint propertyKey = 1; // whatever you want
    iClient = CKeyEventsClient::NewL(propertyCategory, propertyKey, ...);
    iClient->OpenNotificationPropertyL(&iProperty);
    // ...
    CActiveScheduler::Add(this);
    iProperty.Subscribe(iStatus);
    SetActive();
}
Your RunL will be called when the property has been changed:
void CMyActive::RunL()
{
    if (iStatus.Int() != KErrCancel) User::LeaveIfError(iStatus.Int());
    // forward the error to RunError
    // "To ensure that the subscriber does not miss updates, it should
    // re-issue a subscription request before retrieving the current value
    // and acting on it." (from docs)
    iProperty.Subscribe(iStatus);
    TInt value; // this type is passed to RProperty::Define() in the client
    TInt err = iProperty.Get(value);
    if (err != KErrNotFound) User::LeaveIfError(err);
    SetActive();
}
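Since CMyActive derives from CActive, it also needs a DoCancel() that cancels the outstanding request; a minimal sketch, assuming the same iProperty member as above:
void CMyActive::DoCancel()
{
    // cancel the outstanding Subscribe() so the request completes and the object can be destroyed safely
    iProperty.Cancel();
}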
}