I am using Boost.Test within a home-grown GUI and want to access test results (e.g. the failure message and location when a test fails).
The unit_test::test_observer class provides the virtual method:
void assertion_result(boost::unit_test::assertion_result)
However, unit_test::assertion_result is just an enum indicating success or failure. From there, I cannot see how to access further information about the test result.
The framework also provides the class test_tools::assertion_result, which encapsulates an error message, but this only appears to be used for evaluating pre-conditions. (I would have expected this type to be the argument to unit_test::test_observer::assertion_result).
The log output classes appear to provide more information on test results. These are implemented as streams, which makes it non-trivial to extract test result data.
Does anyone know how I can access the information on test results - success/failure, the test code, the location, etc?
Adding an observer will not give you the level of detail you need.
Instead, add your own formatter with the add_formatter function on the log singleton (boost::unit_test::unit_test_log). The formatter receives the details of what is happening and where, depending on the formatter's log level.
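A minimal sketch of that approach, assuming a recent Boost version that provides unit_test_log.add_formatter: derive from the stock compiler_log_formatter so only the interesting hooks need overriding, and capture the file, line, and message of error entries. The class name gui_log_formatter and its members are placeholders of mine, not part of Boost.

#include <boost/test/unit_test.hpp>
#include <boost/test/output/compiler_log_formatter.hpp>
#include <cstddef>
#include <string>

// Capture the location and text of error log entries so a GUI can display them.
struct gui_log_formatter : boost::unit_test::output::compiler_log_formatter
{
    void log_entry_start(std::ostream& os,
                         boost::unit_test::log_entry_data const& entry_data,
                         log_entry_types let) override
    {
        capturing = (let == BOOST_UTL_ET_ERROR || let == BOOST_UTL_ET_FATAL_ERROR);
        if (capturing)
        {
            last_file = entry_data.m_file_name;  // assertion location
            last_line = entry_data.m_line_num;
            last_message.clear();
        }
        compiler_log_formatter::log_entry_start(os, entry_data, let);
    }

    void log_entry_value(std::ostream& os, boost::unit_test::const_string value) override
    {
        if (capturing)
            last_message.append(value.begin(), value.size());  // failure message text
        compiler_log_formatter::log_entry_value(os, value);
    }

    bool capturing = false;
    std::string last_file;
    std::size_t last_line = 0;
    std::string last_message;
};

// Registration, e.g. in a global fixture or a custom init function:
//   boost::unit_test::unit_test_log.add_formatter(new gui_log_formatter);

From there the GUI can read last_file, last_line, and last_message (or push them into whatever result model you use) whenever an assertion fails.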
I am trying to get the mib2c template code to work for a simple scalar, but I continue to get "MY-MIB::mibName = No such object available on this agent at this OID" on snmpget requests no matter what I do.
When generating the mib2c template code I chose the net-snmp -> scalar options. From there I have tried options 1 and 2.
My understanding of the option 2 template is that you shouldn't even have to change any of the code to get it to successfully return a zero value for scalars.
However, the debug messages show that the init_* functions are getting called but the handlers are not getting called at all.
I am wondering if anyone can point me to resources showing a successful implementation example of the mib2c generated code as I am fairly lost at this point.
Thanks!
I have a few tests for an API, and I would like to be able to express certain tests that reflect "aspirational" or "extra credit" requirements - in other words, it's great if they pass, but fine if they don't. For instance:
[Test]
public void RequiredTest()
{
    // our client is using positive numbers in DoThing();
    int result = DoThing(1);
    Assert.That( /* result is correct */ );
}

[Test]
public void OptionalTest()
{
    // we do want to handle negative numbers, but our client is not yet using them
    int result = DoThing(-1);
    Assert.That( /* result is correct */ );
}
I know about the Ignore attribute, but I would like to be able to mark OptionalTest in such a way that it still runs on the CI server, but is fine if it does not pass - as soon as it does, I would like to take notice and perhaps make it a requirement. Is there any major unit test framework that supports this?
I would use Warnings to achieve this. That way, your test will print a 'warning' output but will not be a failure, and will not fail your CI build.
See: https://github.com/nunit/docs/wiki/Warnings
"as soon as it does, I would like to take notice and perhaps make it a requirement."
This part's a slightly separate requirement! Depends a lot on how you want to 'take notice'! Consider looking at Custom Attributes - it may be possible to write an IWrapSetUpTearDown attribute, which sends an email when the relevant test passes. See the docs, here: https://github.com/nunit/docs/wiki/ICommandWrapper-Interface
The latter is a more unusual requirement - I would expect to have to do something custom to fit your needs there!
I need to turn off spdlog logging before a section of code and then restore the previous level afterwards.
How do I get the current level before turning it off?
To get the current level of a logger, use logger::level().
To set a new level, use logger::set_level().
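Putting the two together for the original question, a minimal sketch (the helper name run_quietly is mine):

#include <spdlog/spdlog.h>

// Temporarily silence a logger, then restore its previous level.
void run_quietly(spdlog::logger& my_logger)
{
    const auto previous = my_logger.level();   // remember the current level
    my_logger.set_level(spdlog::level::off);   // turn logging off

    // ... code that should not log ...

    my_logger.set_level(previous);             // restore the old level
}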
Scenario 1: User-constructed logger
If you have a spdlog::logger object you're using (say, my_logger), then:
You can obtain the level with: my_logger.level().
If you just want to know whether a certain-level message would be logged, then use my_logger.should_log(some_level) where some_level could be, for example spdlog::level::debug.
Scenario 2: The global logger
Now suppose you're using the global logger (e.g. you emit log messages using spdlog::info(), spdlog::error() and such).
spdlog version 1.8.0 and later
You can obtain the global log level with a call to spdlog::get_level() (which is a freestanding function, not a method).
spdlog versions before 1.8.0
You need to get your hands on the implicit logger object by calling spdlog::default_logger_raw() (it returns a raw pointer). Now just proceed as in Scenario 1 above.
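A small sketch covering both cases for the global logger, assuming spdlog's own SPDLOG_VERSION macro (1.8.0 encodes as 10800); note that spdlog::set_level() applies to every registered logger, not just the default one:

#include <spdlog/spdlog.h>

// Silence spdlog globally around a block of code, then restore the previous level.
void run_quietly_global()
{
#if SPDLOG_VERSION >= 10800
    const auto previous = spdlog::get_level();                    // 1.8.0 and later
#else
    const auto previous = spdlog::default_logger_raw()->level();  // older releases
#endif
    spdlog::set_level(spdlog::level::off);

    // ... code that should not log ...

    spdlog::set_level(previous);
}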
There now seems to be a function to get the global logging level:
spdlog::get_level();
It is possible to define a custom ErrorCollector class for handling google::protobuf parsing errors:
struct ErrorCollector : ::google::protobuf::io::ErrorCollector
{
    void AddError(int line, int column, const std::string& message) override
    {
        // log error
    }

    void AddWarning(int line, int column, const std::string& message) override
    {
        // log warning
    }
};
When parsing from a text file, you can use the protobuf TextFormat class and register your custom ErrorCollector:
::google::protobuf::io::IstreamInputStream input_stream(&file);
::google::protobuf::TextFormat::Parser parser;
ErrorCollector error_collector;
parser.RecordErrorsTo(&error_collector);
if (parser.Parse(&input_stream, &msg))
{
    // handle msg
}
For parsing the wire format, I currently use Message::ParseFromArray:
if (msg.ParseFromArray(data, data_len))
{
    // handle msg
}
This doesn't allow me to specify a custom ErrorCollector though.
I've searched through the source code, but as of yet have been unable to find if this is possible.
Is it possible to use an ErrorCollector when parsing wire format?
Is there another way to intercept parse errors and make them available to client code?
There are essentially two ways that parsing the wire format could fail:
1. The bytes are not a valid protobuf (e.g. they are corrupted, or in a totally different format).
2. A required field is missing.
For case 1, protobuf does not give you any more information than "it's invalid". This is partly for code simplicity (and speed), but it is also partly because any attempt to provide more information usually turns out more misleading than helpful. Detailed error reporting is useful for text format because text is often written by humans, but machines make very different kinds of errors.

In some languages, protobuf actually reports specific errors like "end-group tag does not match start-group tag". In the vast majority of cases, this error really just means "the bytes are corrupted", but inevitably people think the error is trying to tell them something deeper which they do not understand. They then post questions to Stack Overflow like "How do I make sure my start-group and end-group tags match?" when they really should be comparing bytes between their source and destination to narrow down where they got corrupted.

Even reporting the byte position where the parse error occurred is not very useful: protobuf is a dense encoding, so many random corrupt byte sequences will parse successfully, and the parser may only notice a problem somewhere later down the line rather than at the point where things actually went wrong.
The one case that clearly is useful to distinguish is case 2 (missing required fields) -- at least, if you use required fields (I personally recommend avoiding them). There are a couple of options here:
Normally, required field checks write errors to the console (on stderr). You can intercept these and record them your own way using SetLogHandler, but this doesn't give you structured information, only text messages.
To check required fields more programmatically, you can separate required-field checking from parsing. Use MessageLite::ParsePartialFromArray() or one of the other Partial parsing methods to parse a message while ignoring the absence of required fields. You can then use MessageLite::IsInitialized() to check whether all required fields are set. If it returns false, use Message::FindInitializationErrors() to get the paths of all required fields that are missing.
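A short sketch of that second option; the function name and signature are mine, while the protobuf calls are the ones named above:

#include <google/protobuf/message.h>
#include <string>
#include <vector>

// Parse leniently, then check required fields yourself and report which are missing.
// Msg can be any generated type deriving from google::protobuf::Message.
template <typename Msg>
bool ParseWithDiagnostics(Msg& msg, const void* data, int data_len,
                          std::vector<std::string>& missing_fields)
{
    // Ignore missing required fields during the parse itself.
    if (!msg.ParsePartialFromArray(data, data_len))
        return false;  // case 1: the bytes are not a valid protobuf

    // Case 2: collect the paths of any required fields that were not set.
    if (!msg.IsInitialized())
    {
        msg.FindInitializationErrors(&missing_fields);
        return false;
    }

    return true;
}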
I'm using the decorator pattern to implement caching for my Repositories as such:
IFooRepository()
IFooRepository FooRepository()
IFooRepository CachedFooRepository(IFooRepository fooRepository)
The Cached repository checks the cache for the requested object and if it doesn't exist, calls the FooRepository to retrieve and store it. I'm currently registering these types with StructureMap using the following method:
For<IFooRepository>().Use<CachedFooRepository>()
    .Ctor<IFooRepository>().Use<FooRepository>();
This works fine, but as the number of cached repositories grows, registering each one individually is becoming unwieldy and is error prone. Seeing as I have a common convention, I'm trying to scan my assembly using a custom IRegistrationConvention, but I can't seem to figure out how to pass the FooRepository to the constructor of CachedFooRepository in the void Process(Type type, Registry registry) function.
I've found examples to do something like:
Type interfaceType = type.GetInterface(type.Name.Replace("Cached", "I"));
registry.AddType(interfaceType, type);
or
Type interfaceType = type.GetInterface(type.Name.Replace("Cached", "I"));
registry.For(interfaceType).Use(type);
But neither method will allow me to chain the .Ctor. What am I missing? Any ideas?