DLLs, Message Boxes, and Unit Testing

OK... I am working on a DLL that manages some configured settings (I won't bore you with the details and reasoning here, as they don't apply). I have a class that referencing assemblies use to interface with this system. This class has a Load() method. When there are read or validation errors, I currently have it show a message box. I didn't feel it should be the responsibility of the referencing assembly to manage this. Or am I wrong? Currently this is wreaking havoc with my unit tests, so I'm considering adding a property to suppress the messages but still allow the exceptions to be thrown. I read another SO post where someone was advised to use IoC and a dialog-result helper class. The illustration used constructor injection... but that would again put the responsibility into the hands of the referencing assembly. What's best practice in this case?

Personally, I think you're wrong - sorry. The DLL's responsibility is to notify of the errors, the calling code's responsibility is to determine what to do with that notification. If it's a GUI, then it can show a dialog box. If it's a unit test, it can test appropriately. If it's a website, it can write the notification out in HTML to the user. If it's a service of some sort, it can log it. And so on.
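To make that concrete, here's a minimal sketch of the split (all type and member names below are hypothetical, not taken from your actual DLL): the library only throws, and each caller decides how to surface the error.
using System;
using System.Windows.Forms;
using NUnit.Framework;

// In the DLL: report problems by throwing; nothing UI-specific lives here.
public class SettingsLoader
{
    public void Load(string path)
    {
        if (string.IsNullOrEmpty(path))
            throw new ArgumentException("No settings path was supplied.");
        // ... read and validate, throwing on validation errors ...
    }
}

// In a GUI caller: catch and show a dialog.
public static class GuiCaller
{
    public static void LoadSettings(SettingsLoader loader, string path)
    {
        try
        {
            loader.Load(path);
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
    }
}

// In the unit tests: just assert on the exception; no dialog is involved.
[TestFixture]
public class SettingsLoaderTests
{
    [Test]
    public void Load_WithEmptyPath_Throws()
    {
        var loader = new SettingsLoader();
        Assert.Throws<ArgumentException>(() => loader.Load(""));
    }
}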

You can use a delegate to send messages to be handled elsewhere. I have made an example below using a unit test:
public delegate void ErrorHandlingDelegate(Exception exception); // The delegate

public class AssemblyRefClass // Your class doing the business logic
{
    public void DoStuff(ErrorHandlingDelegate errorHandling) // Passing the delegate as an argument
    {
        try
        {
            // Do your stuff
            throw new Exception();
        }
        catch (Exception ex)
        {
            errorHandling(ex); // Calling the delegate
        }
    }
}

// Another class that can handle the error through its method 'HandleErrorsFromOtherClass()'
public class ErrorHandlingClass
{
    public void HandleErrorsFromOtherClass(Exception exception)
    {
        MessageBox.Show(exception.Message);
    }
}

[Test]
public void testmethod() // The test that creates your class and a class for the error handling, and connects the two
{
    ErrorHandlingClass errorHandling = new ErrorHandlingClass();
    AssemblyRefClass assemblyRef = new AssemblyRefClass();
    assemblyRef.DoStuff(errorHandling.HandleErrorsFromOtherClass);
}
Any method that matches the delegate can be used. Thus, you can replace your production handler with something that doesn't show a message box when unit testing.
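In a test, for example, the delegate target can simply record the exception instead of showing a dialog; a quick sketch (the test name is hypothetical):
[Test]
public void DoStuff_OnError_PassesExceptionToHandler()
{
    Exception seen = null;
    AssemblyRefClass assemblyRef = new AssemblyRefClass();
    assemblyRef.DoStuff(ex => seen = ex); // lambda matches the delegate; no MessageBox involved
    Assert.IsNotNull(seen);
}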

Related

Service @Transactional exception translation

I have a web service with an operation that looks like this:
public Result checkout(String id) throws LockException;
implemented as:
@Transactional
public Result checkout(String id) throws LockException {
    someDao.acquireLock(id); // ConstraintViolationException might be thrown on commit
    Data data = otherDao.find(id);
    return convert(data);
}
My problem is that locking can only fail on transaction commit, which occurs outside of my service method, so I have no opportunity to translate the ConstraintViolationException into my custom LockException.
Option 1
One option that's been suggested is to make the service delegate to another method that's @Transactional, e.g.:
public Result checkout(String id) throws LockException {
    try {
        return someInternalService.checkout(id);
    }
    catch (ConstraintViolationException ex) {
        throw new LockException();
    }
}
...
public class SomeInternalService {
    @Transactional
    public Result checkout(String id) {
        someDao.acquireLock(id);
        Data data = otherDao.find(id);
        return convert(data);
    }
}
My issues with this are:
There is no reasonable name for the internal service that isn't already in use by the external service since they are essentially doing the same thing. This seems like an indicator of bad design.
If I want to reuse someInternalService.checkout in another place, the contract for that is wrong because whatever uses it can get a ConstraintViolationException.
Option 2
I thought of maybe using AOP to put advice around the service that translates the exception. This seems wrong to me, though, because checkout needs to declare that it throws LockException for clients to use it, yet the actual service will never throw it; it will instead be thrown by the advice. There's nothing to prevent someone in the future from removing throws LockException from the interface because it appears to be incorrect.
Also, this way is harder to test. I can't write a JUnit test that verifies an exception is thrown without creating a Spring context and using AOP during the tests.
Option 3
Use manual transaction management in checkout? I don't really like this because everything else in the application is using the declarative style.
Does anyone know the correct way to handle this situation?
There's no one correct way.
A couple more options for you:
Make the DAO transactional - that's not great, but can work
Create a wrapping service (call it a Facade) whose job is to do the exception handling/wrapping around the transactional services you've mentioned; this is a clear separation of concerns, and it can share method names with the real lower-level service.
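To make the Facade option concrete, here's a minimal sketch. The question is Java/Spring, but the shape is the same in any language; it's written in C# to match the rest of this page, and every type name below is a hypothetical stand-in.
using System;

public class Result { }
public class ConstraintViolationException : Exception { }
public class LockException : Exception { }

// The "real" transactional service; it knows nothing about LockException.
public interface ICheckoutService
{
    Result Checkout(string id);
}

// The facade shares the method name with the lower-level service and only adds
// exception translation, so the transactional service keeps a clean contract.
public class CheckoutFacade
{
    private readonly ICheckoutService inner;

    public CheckoutFacade(ICheckoutService inner)
    {
        this.inner = inner;
    }

    public Result Checkout(string id)
    {
        try
        {
            return inner.Checkout(id); // in the Spring case, the commit happens as this call returns
        }
        catch (ConstraintViolationException)
        {
            throw new LockException();
        }
    }
}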

Unit test all permutations, or start publicly exposing more? Where's the line?

I know that there have been a few posts about this already but I wanted to post one with a concrete example to focus on the gray areas you face when choosing between testing private/internal methods and refactoring into a public class.
Say I have a simple class I want to test that has some internal code refactored into a private or internal method.
Example:
public class Guy
{
    public void DoSomeWork()
    {
        try
        {
            //do some work
        }
        catch (Exception e)
        {
            LogError(e.Message);
        }
        try
        {
            //do some more work
        }
        catch (SomeSpecificException e)
        {
            LogError(e.Message);
        }
    }

    private void LogError(string message)
    {
        //if logging folder doesn't exist, create it
        //create a log file
        //log the message
    }
}
Most people would advise that I should take the error logging logic out and put it in a class marked public because it's starting to be complex enough to stand on its own, but the assembly's internal error logging logic shouldn't be exposed publicly--it's not a logging assembly.
If I don't do that, then every time I have code that could call LogError(), I need to have subsequent tests that retest all of the internal logic for LogError(). That quickly becomes oppressive.
I also understand I can mark the method as internal and make it visible to a testing assembly but it's not very "pure."
What sort of guidelines can I abide by in this type of situation?
but the assembly's internal error logging logic shouldn't be exposed publicly
That's true. It can be internal and tested as such. But this
private void LogError(string message)
{
    //if logging folder doesn't exist, create it
    //create a log file
    //log the message
}
Calls for a separate component. Create a folder? Create a file? Whatever DoSomeWork does, it should have nothing to do with log-file internals. All it should require is a simple "log this message" feature. Files, folders, formats: they're all specific to the logging component and shouldn't leak into any other.
Your sample code is a perfect example of a Single Responsibility Principle violation. Extracting the logging fixes that and solves your testing issues completely.
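A minimal sketch of that extraction (the interface and class names here are just examples): the folder/file details move behind a logging interface, and DoSomeWork's tests only need a fake implementation of it.
using System;

public interface IErrorLog
{
    void LogError(string message);
}

// All folder/file creation and formatting lives here and gets tested on its own.
public class FileErrorLog : IErrorLog
{
    public void LogError(string message)
    {
        // if the logging folder doesn't exist, create it; create a log file; log the message
    }
}

public class Guy
{
    private readonly IErrorLog log;

    public Guy(IErrorLog log)
    {
        this.log = log;
    }

    public void DoSomeWork()
    {
        try
        {
            // do some work
        }
        catch (Exception e)
        {
            log.LogError(e.Message); // a test for Guy only has to verify this call on a fake IErrorLog
        }
    }
}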
One possible way to handle this is, instead of logging the error in your DoSomeWork method, to simply raise the error and let it get handled upstream. Then you test that the error was raised, and for the purposes of testing the DoSomeWork method, you don't care what happens after that.
Whether or not this is a good idea depends a lot on how the rest of your program is structured, but having LogError sprinkled all over the place may indicate that you need the broader error-handling strategy this approach requires.
UPDATE:
public void DoSomeWork() {
    try {
        WorkPartA();
    }
    catch (Exception e) {
        LogError(e.Message);
    }
    try {
        WorkPartB();
    }
    catch (SomeSpecificException e) {
        LogError(e.Message);
    }
}
public void WorkPartA() {
    //do some work
}
public void WorkPartB() {
    //do some more work
}
As a response to your comment below, the problem might be that you need to break up your code into smaller pieces. This approach allows you to test raising the exception within WorkPartA() and WorkPartB() individually, while still allowing the process to run if WorkPartA fails. As far as testing DoSomeWork() in this case, I might not even bother. There's a high probability it would be a very low-value test anyway.
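For example, once the parts are public they can be exercised directly; something like this sketch (it assumes these methods live on the Guy class from the question and that the failure path can actually be triggered in a test):
[Test]
public void WorkPartB_WhenTheFailureConditionOccurs_ThrowsSomeSpecificException()
{
    var guy = new Guy();
    // assumes the test can arrange whatever condition makes WorkPartB fail
    Assert.Throws<SomeSpecificException>(() => guy.WorkPartB());
}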

Getting rid of 'new' operators for subcomponent objects

I've been reading Misko Hevery's classic articles about dependency injection and, basically, 'separating the object-graph creation code from the code logic'.
The main idea seems to be: "get rid of the 'new' operators, put them in dedicated objects ('factories'), and inject everything you depend on."
Now, I can't seem to wrap my head around how to make this work with objects that are composed of several other components, and whose job is to isolate those components from the outside world.
Lame example
A View class that represents a combination of a few fields and a button. All the components depend on a graphical UI context, but you want to hide that behind the interface of each sub-component.
So, something like this (in pseudo-code; the language doesn't really matter, I guess):
class CustomView() {
    public CustomView(UIContext ui) {
        this.ui = ui;
    }

    public void start() {
        this.field = new Field(this.ui);
        this.button = new Button(this.ui, "ClickMe");
        this.button.addEventListener(function () {
            if (field.getText().isEmpty()) {
                alert("Field should not be empty");
            } else {
                this.fireValueEnteredListener(this.field.getText());
            }
        });
    }

    // The interface of this component is that callers
    // subscribe to "addValueEnteredListener"
    public void addValueEnteredListener(Callback ...) {
    }

    public void fireValueEnteredListener(text) {
        // Would call each listener in turn
    }
}
The callers would do something like:
// Assuming a UIContext comes from somewhere...
ui = // Wherever you get the UIContext from?
v = new CustomView(ui);
v.addValueEnteredListener(function (text) {
    // whatever...
});
Now, this code has three 'new' operators, and I'm not sure which ones Misko (and other DI proponents) advocate getting rid of, or how.
Getting rid of new Field() and new Button()
Just Inject it
I don't think the idea here is to actually inject the instances of Field and Button, which could be done this way:
class CustomView() {
    public CustomView(Field field, Button button) {
        this.field = field;
        this.button = button;
    }

    public void start() {
        this.button.addEventListener(function () {
            if (field.getText().isEmpty()) {
                alert("Field should not be empty");
            } else {
                this.fireValueEnteredListener(this.field.getText());
            }
        });
    }

    // ... etc ...
This makes the code of the component lighter, surely, and it actually hides the notion of UI, so the CustomView component has clearly been improved in terms of readability and testability.
However, the burden is now on the client to create those things:
// Assuming a UIContext comes from somewhere...
ui = // wherever the UIContext gets injected from
field = new Field(ui);
button = new Button(ui, "ClickMe");
v = new CustomView(field, button);
v.addValueEnteredListener(function (text) {
    // whatever...
});
That sounds really troubling to me, especially since the client now has to know all the internals of the class, which sounds silly.
Mama knows, inject her instead
What the articles seem to advocate instead is injecting a factory to create the component's elements.
class CustomView() {
    public CustomView(Factory factory) {
        this.factory = factory;
    }

    public void start() {
        this.field = factory.createField();
        this.button = factory.createButton();
        this.button.addEventListener(function () {
            if (field.getText().isEmpty()) {
                alert("Field should not be empty");
            } else {
                this.fireValueEnteredListener(this.field.getText());
            }
        });
    }

    // ... etc ...
And then everything gets nice for the caller, because it just has to get the factory from somewhere (and this factory will be the only one to know about the UI context, so hooray for decoupling).
// Assuming a UIContext comes from somewhere...
factory = // wherever the factory gets injected from
v = new CustomView(factory);
v.addValueEnteredListener(function (text) {
    // whatever...
});
A possible drawback is that when testing the CustomView, you will typically have to use a 'mock' factory that... creates mock versions of the Field and Button classes. But obviously there is another drawback...
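(For what it's worth, the test side of that might look roughly like the C# sketch below; the code above is pseudo-code, so every type here is a hypothetical stand-in. It also happens to show how the "click with an empty field" scenario from the edit further down could be driven by hand through the fake button.)
using System;
using NUnit.Framework;

public interface IField { string GetText(); }
public interface IButton { void AddEventListener(Action handler); }
public interface IWidgetFactory
{
    IField CreateField();
    IButton CreateButton();
}

// A C# rendering of the pseudo-code above, just enough to show the test.
public class CustomView
{
    private readonly IWidgetFactory factory;
    private IField field;
    private IButton button;
    public string LastError; // stand-in for the alert(...) call

    public CustomView(IWidgetFactory factory) { this.factory = factory; }

    public void Start()
    {
        field = factory.CreateField();
        button = factory.CreateButton();
        button.AddEventListener(() =>
        {
            if (string.IsNullOrEmpty(field.GetText()))
                LastError = "Field should not be empty";
        });
    }
}

public class FakeField : IField
{
    public string Text = "";
    public string GetText() { return Text; }
}

public class FakeButton : IButton
{
    public Action LastHandler;
    public void AddEventListener(Action handler) { LastHandler = handler; } // lets a test fire the "click" by hand
}

public class FakeWidgetFactory : IWidgetFactory
{
    public FakeField Field = new FakeField();
    public FakeButton Button = new FakeButton();
    public IField CreateField() { return Field; }
    public IButton CreateButton() { return Button; }
}

[TestFixture]
public class CustomViewTests
{
    [Test]
    public void Click_WithEmptyField_ReportsAnError()
    {
        var factory = new FakeWidgetFactory();
        var view = new CustomView(factory);
        view.Start();

        factory.Button.LastHandler(); // fire the "click" captured by the fake button

        Assert.AreEqual("Field should not be empty", view.LastError);
    }
}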
Yo' Factory so fat!!
How big will the factory get? If you follow the pattern rigorously, then every single frigging component you ever want to create in your application at runtime (which is typically the case for UI, right?) will have to get its own createXXXXX method in at least one factory.
Because now you need:
Factory.createField to create the field
Factory.createButton to create the button
Factory.createCustomView to create the field, the button and the CustomView when a client (say, a MetaEditPage) wants to use one
And obviously a Factory.createMetaEditPage for the client...
... and it's turtles all the way down.
I can see some strategies to simplify this:
As much as possible, separate the parts of the graph that are created at "startup" time from the parts that are created at runtime (using a DI framework like Spring for the former, and factories for the latter)
Layer the factories, or colocate related objects in the same factories (a UIWidgetFactory would make sense for Field and Button, but where would you put the other ones? In a factory linked to the application? At some other logical level?)
I can almost hear all the jokes from the C guys that no Java app can do anything without calling an ApplicationProcessFactoryCreator.createFactory().createProcess().startApplication() chain of nonsense...
So my questions are:
Am I completely missing a point here?
If not, which strategy would you suggest to make things bearable?
Addendum: why I'm not sure dependency injection would help
Assume I decide to use dependency injection, with a Guice-like framework. I would end up writing code like this:
class CustomView
    @Inject
    private Field field;

    @Inject
    private Button button;

    public void start() {
        this.button.addEventListener(....
        // etc...
And then what?
How would my "composition root" make use of that? I can certainly not configure a "singleton" (with a lowercase 's', as in 'the single instance of a class') for the Field and the Button, since I want to create as many instances of them as there are instances of CustomView.
It would not make sense to use a Provider either, since my problem is not which instance of Button I want to create, but just that I want to create it later, with some configuration (for example, its text) that only makes sense for this form.
To me, DI is not going to help, because I am new-ing parts of my component rather than dependencies. I suppose I could turn any subcomponent into a dependency and let a framework inject them; it's just that injecting subcomponents looks really artificial and counter-intuitive to me in this case... so again, I must be missing something ;)
Edit
In particular, my issue is that I can't seem to understand how you would test the following scenario:
"when I click on the button, if the Field is empty, there should be an error".
This is doable if I inject the button, so that I can call its "fireClicked" event manually - but it feels a bit silly.
The alternative is to just do view.getButton().fireClicked(), but that looks a bit ugly...
Well, you can use a DI framework (Spring or Guice) and get rid of the factory methods completely. Just put an annotation on the field/constructor/setter method and the DI framework will do the work. In unit tests, use a mocking framework.
How about not being overly obsessed with the "no new" dogma?
My take is that DI works well for, well, you know, dependencies, or delegates. Your example is about composition, and IMHO it absolutely makes sense that the owning entity (your CustomView) explicitly creates its components. After all, the clients of the composite do not even have to know that these components exist, how they are initialized, or how you use them.

Unit Testing, using properties to pass in an interface

Im reading "The art of unit testing" atm and im having some issues with using properties to pass in an interface. The book states the following: "If you want parameters to be optional, use property getters/setters, which is a better way of defining optional parameters than adding different constructors to the class for each dependency."
The code for the property example is as follows:
public class LogAnalyzer
{
    private IExtensionManager manager;

    public LogAnalyzer()
    {
        manager = new FileExtensionManager();
    }

    public IExtensionManager ExtensionManager
    {
        get { return manager; }
        set { manager = value; }
    }

    public bool IsValidLogFileName(string fileName)
    {
        return manager.IsValid(fileName);
    }
}
[Test]
public void IsValidFileName_NameShorterThan6CharsButSupportedExtension_ReturnsFalse()
{
    //set up the stub to use, make sure it returns true
    ...
    //create analyzer and inject stub
    LogAnalyzer log = new LogAnalyzer();
    log.ExtensionManager = someFakeManagerCreatedEarlier;
    //Assert logic assuming extension is supported
    ...
}
When/how would I use this feature? The only scenario I can think of (this is probably wrong!) is if I had two methods in one class:
Method1() retrieves the database connection string from the config file and contains some form of check on the retrieved string.
Method2() then connects to the database and returns some data. The check here could be that the returned data is not null?
In this case, to test Method1() I could declare a stub that implements the IExtensionManager interface, where the stub has a string that should pass any error checks I have in Method1().
For Method2(), I'd declare a stub that implements the interface and declare a DataTable containing some data in the stub class. I'd then use the property to assign this to the private manager variable and then call Method2()?
The above may be complete BS; if it is, I'd appreciate it if someone would let me know and I'll remove it.
Thanks
Property injection is used to change an object's behavior after it has been created.
BTW, your code is tightly coupled to FileExtensionManager, which is a concrete implementation of IExtensionManager. How are you going to test LogAnalyzer with the default manager? Use constructor injection to provide dependencies to your objects; this will make them testable:
public LogAnalyzer(IExtensionManager manager)
{
    this.manager = manager;
}
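A test can then pass a stub straight in; a rough sketch (the stub class below is a hypothetical, hand-rolled implementation of IExtensionManager):
public class StubExtensionManager : IExtensionManager
{
    public bool WillBeValid;
    public bool IsValid(string fileName) { return WillBeValid; }
}

[Test]
public void IsValidLogFileName_SupportedExtension_ReturnsTrue()
{
    IExtensionManager stub = new StubExtensionManager { WillBeValid = true };
    LogAnalyzer log = new LogAnalyzer(stub);
    Assert.IsTrue(log.IsValidLogFileName("whatever.slf"));
}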

Where should I place this code in MVC?

My code works perfectly, BUT: what's the best practice in this case?
Here is the code that is important.
This is in the controller.
private IProductRepository repository;

[HttpPost]
public ActionResult Delete(int productId) {
    Product prod = repository.Products.FirstOrDefault(p => p.ProductID == productId);
    if (prod != null) {
        repository.DeleteProduct(prod);
        TempData["message"] = string.Format("{0} was deleted", prod.Name);
    }
    return RedirectToAction("Index");
}
This is the repository interface:
public interface IProductRepository {
    IQueryable<Product> Products { get; }
    void SaveProduct(Product product);
    void DeleteProduct(Product product);
}
And here comes the repository implementation (the part that is important). I want to point out, though, that this is not a fake class, as is pretty clear; the testing is done on fake classes.
private EFDbContext context = new EFDbContext();

public IQueryable<Product> Products {
    get { return context.Products; }
}

public void DeleteProduct(Product product) {
    context.Products.Remove(product);
    context.SaveChanges();
}
Well, first question:
When testing this, I will make two test methods on the controller in "ControllerTest": "Can_delete_valid_product" and "Cannot_delete_invalid_product". Is there any point in having a test class for the repository, like "RepositoryTest"? After all, the controller test checks whether the delete works, so there's no need to test it twice, right?
Second question:
Here I test in the controller whether the product exists before trying to delete it. If it exists, I call the delete method in the repository. This means that there should never be the possibility of an exception. BUT you could still cause an exception in the repository if you send down null (which can't happen here, but you could still do it if you forget the null check). The question is whether the check that the product exists should be done in the repository instead?
I prefer to keep logic out of the controller for the most part. A test of the controller action verifies if the repository is called, but the repository itself is mocked in that test. I would make the repository responsible for handling null checking.
Personally, I create separate tests for my repositories/data access to ensure they work properly. The controllers themselves would be tested with mocks.
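For example, with a mocking framework like Moq the delete action can be verified against a mocked repository; a rough sketch (it assumes the controller takes the IProductRepository through its constructor, which isn't shown in the snippet above, and that it's called AdminController here):
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestClass]
public class AdminControllerTests
{
    [TestMethod]
    public void Can_Delete_Valid_Product()
    {
        // Arrange: one known product behind a mocked repository
        Product prod = new Product { ProductID = 2, Name = "Test" };
        Mock<IProductRepository> mock = new Mock<IProductRepository>();
        mock.Setup(m => m.Products).Returns(new[] { prod }.AsQueryable());
        AdminController controller = new AdminController(mock.Object);

        // Act
        controller.Delete(2);

        // Assert: the repository was asked to delete exactly that product
        mock.Verify(m => m.DeleteProduct(prod), Times.Once());
    }
}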
Actually, it's entirely possible (just maybe not that likely) that someone could delete a product just as someone else is trying to delete it. In this case you probably don't care or need to know that someone did, so I would probably just swallow that exception in the repository (though I would log it first). In terms of null checking/defensive programming, that's entirely a personal choice. Some people leave checks like that to the entry points of the system, whereas others build a layered defense that has additional checks throughout the code. The problem is that these checks can get quite ugly, which is a big part of why I wish Code Contracts would gain more traction.
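If you do decide to swallow-and-log in the repository, the shape might be something like this sketch (DbUpdateConcurrencyException is what Entity Framework throws when the row was already deleted by someone else; the logging call is left as a comment since it depends on your logging framework):
using System.Data.Entity.Infrastructure; // EF's DbUpdateConcurrencyException

public void DeleteProduct(Product product)
{
    if (product == null) return; // or throw an ArgumentNullException, depending on your policy

    try
    {
        context.Products.Remove(product);
        context.SaveChanges();
    }
    catch (DbUpdateConcurrencyException)
    {
        // someone else deleted it first: log it with whatever logging framework you use, then move on
    }
}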
This means that there should never be the possibility of an exception. BUT you could still cause an exception in the repository if you send down null (which can't happen here, but you could still do it if you forget the null check).
Or if it's deleted after you check it exists but before you delete it. Or if you lose connection to the repository (or will the method never return in this case?). You can't avoid exceptions in this way.