Mocking/stubbing whether or not a debug log is enabled? - unit-testing

How do I write a mock test that lets me validate that an inaccessible property (debugLog) is set to true? Do I try to find a way to read the value of the property? Do I verify that console.debug is set? Does a spy make sense in this situation, or should I use a stub?
Class X
let showDebugLogs = false,
    debugLog = _.noop;

/**
 * Configures Class X instances to output or not output debug logs.
 * @param {Boolean} state The state.
 */
exports.showDebugLogs = function (state) {
    showDebugLogs = state;
    debugLog = showDebugLogs ? console.debug || console.log : _.noop;
};
Unit Test
describe('showDebugLogs(state)', function () {
    let spy;

    it('should configure RealtimeEvents instances to output or not output debug logs', function () {
        spy = sinon.spy(X, 'debugLog');
        X.showDebugLogs(true);
        assert.strictEqual(spy.calledOnce, true, 'Debug logging was not enabled as expected.');
        spy.restore();
    });
});

Mock testing is used for "isolating" a class under test from its environment, to reduce side effects and increase testability. For example, if you are testing a class which makes AJAX calls to a web server, you probably do not want to:
1) wait for AJAX calls to complete (waste of time)
2) observe your tests fall apart because of possible networking problems
3) cause data modifications on the server side
and so on.
So what you do is "MOCK" the part of your code which makes the AJAX call, and depending on your test you either:
1) return a success response, like the one accompanying a real successful request, or
2) return an error, reporting the nature and point of failure, to see how your code handles it (see the sketch after this list).
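For example, here is a minimal sketch of that idea with sinon and mocha. The userService module and its fetchUser/getDisplayName functions are hypothetical stand-ins for your own AJAX-calling code, and the sketch assumes getDisplayName calls fetchUser through the same exported object:
const sinon = require('sinon');
const assert = require('assert');

// Hypothetical module that fetches a user over AJAX and formats the name.
const userService = require('./user-service');

it('formats the user name from a successful AJAX response', async function () {
    // Stub the AJAX call so no real request is made; resolve with canned data.
    const stub = sinon.stub(userService, 'fetchUser')
        .resolves({ firstName: 'Ada', lastName: 'Lovelace' });

    const name = await userService.getDisplayName(42);

    assert.strictEqual(name, 'Ada Lovelace');
    assert.strictEqual(stub.calledOnceWith(42), true);
    stub.restore();
});

it('propagates an error when the AJAX call fails', async function () {
    const stub = sinon.stub(userService, 'fetchUser')
        .rejects(new Error('network down'));

    await assert.rejects(() => userService.getDisplayName(42), /network down/);
    stub.restore();
});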
For your case, what you need is just a simple unit test. You can use introspection techniques to assert the internal state of your object, if that is what you really want to do. However, this comes with a warning: it is discouraged. Please see the Notes at the bottom.
Unit testing should be done to test the behavior or public state of an object, so you should really NOT care about the internals of a class.
Therefore, I suggest you reconsider what you are trying to test and find a better way of testing it.
Suggestion: Instead of checking a flag in your class, you can mock up a logger for your test and write at least two test cases, as sketched below:
1) When showDebugLogs = true, make sure the log statement of your mock logger is fired.
2) When showDebugLogs = false, make sure the log statement of your mock logger is not called.
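A rough sketch of those two cases with sinon and mocha; the emit() call is a hypothetical stand-in for whatever method of X actually writes a debug line internally:
const sinon = require('sinon');
const assert = require('assert');
const X = require('./X');

describe('showDebugLogs(state)', function () {
    afterEach(function () {
        sinon.restore();
    });

    it('writes debug output when enabled', function () {
        // Stub console.debug before enabling, because debugLog captures it at that moment.
        const debugStub = sinon.stub(console, 'debug');
        X.showDebugLogs(true);
        X.emit('something'); // hypothetical call that logs internally
        assert.strictEqual(debugStub.called, true);
    });

    it('writes no debug output when disabled', function () {
        const debugStub = sinon.stub(console, 'debug');
        X.showDebugLogs(false);
        X.emit('something');
        assert.strictEqual(debugStub.called, false);
    });
});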
Notes: There has been a long debate between two schools of thought: a group advocating that private members/methods are implementation details and should NOT be tested directly, and another group which opposes this idea:
Excerpt from a Wikipedia article:
There is some debate among practitioners of TDD, documented in their blogs and other writings, as to whether it is wise to test private methods and data anyway. Some argue that private members are a mere implementation detail that may change, and should be allowed to do so without breaking numbers of tests. Thus it should be sufficient to test any class through its public interface or through its subclass interface, which some languages call the "protected" interface.[29] Others say that crucial aspects of functionality may be implemented in private methods and testing them directly offers advantage of smaller and more direct unit tests.

Related

Should I unit test classes which extend Sonata BaseEntityManager class?

Here is part of the code which extends BaseEntityManager:
namespace Vop\PolicyBundle\Entity;

use Doctrine\Common\Collections\ArrayCollection;
use Doctrine\Common\Persistence\ObjectRepository;
use Sonata\CoreBundle\Model\BaseEntityManager;

class AdditionalInsuredTypeManager extends BaseEntityManager
{
    /**
     * @param int $productId
     *
     * @return ArrayCollection
     */
    public function getProductInsuredTypes($productId = null)
    {
        $repository = $this->getRepository();
        $allActiveTypes = $repository->findAllActive();
        // other code
    }

    /**
     * @return AdditionalInsuredTypeRepository|ObjectRepository
     */
    protected function getRepository()
    {
        return parent::getRepository();
    }
}
And here I am trying to write a unit test:
public function testGetProductInsuredTypes()
{
    $managerRegistry = $this->getMockBuilder(\Doctrine\Common\Persistence\ManagerRegistry::class)
        ->getMock();

    $additionalInsuredTypeManager = new AdditionalInsuredTypeManager(
        AdditionalInsuredTypeManager::class,
        $managerRegistry
    );

    $additionalInsuredTypeManager->getProductInsuredTypes(null);
}
What are the problems:
I am mocking ManagerRegistry, but I have learned that I should not mock what I do not own. However, it is a required constructor parameter.
I am getting this error:
Unable to find the mapping information for the class Vop\PolicyBundle\Entity\AdditionalInsuredTypeManager. Please check the 'auto_mapping' option (http://symfony.com/doc/current/reference/configuration/doctrine.html#configuration-overview) or add the bundle to the 'mappings' section in the doctrine configuration.
/home/darius/PhpstormProjects/vop/vendor/sonata-project/core-bundle/Model/BaseManager.php:54
/home/darius/PhpstormProjects/vop/vendor/sonata-project/core-bundle/Model/BaseManager.php:153
/home/darius/PhpstormProjects/vop/src/Vop/PolicyBundle/Entity/AdditionalInsuredTypeManager.php:46
/home/darius/PhpstormProjects/vop/src/Vop/PolicyBundle/Entity/AdditionalInsuredTypeManager.php:21
/home/darius/PhpstormProjects/vop/src/Vop/PolicyBundle/Tests/Unit/Entity/AdditionalInsuredTypeManagerTest.php:22
I do not know how to fix this error, but it really has something to do with extending that BaseEntityManager, I assume.
I see the error is caused by this line:
$repository = $this->getRepository();
I cannot even inject the repository through the constructor, because the parent constructor has no such parameter.
There is very little information about testing:
https://sonata-project.org/bundles/core/master/doc/reference/testing.html
I cannot tell you whether it's useful to test your repositories, nor can I give you pointers for your error, other than that you should most likely not extend Doctrine's entity manager. If anything, use a custom EntityRepository or write a service into which you inject the EntityManager (or better, the ManagerRegistry):
class MyEntityManager
{
    private $entityManager;

    public function __construct(EntityManager $entityManager)
    {
        $this->entityManager = $entityManager;
    }

    public function getProductInsuredTypes($productId = null)
    {
        $repository = $this->entityManager->getRepository(Product::class);
        $allActiveTypes = $repository->findAllActive();
        // other code
    }
}
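With the EntityManager injected like this, a unit test can hand the service a mock and never touch the mapping configuration. A rough PHPUnit sketch (ProductRepository is a hypothetical repository class exposing findAllActive(); mocking EntityManagerInterface instead of the concrete EntityManager would be even cleaner):
public function testGetProductInsuredTypesQueriesTheRepository()
{
    $repository = $this->createMock(ProductRepository::class);
    $repository->expects($this->once())
        ->method('findAllActive')
        ->willReturn([]);

    $entityManager = $this->createMock(EntityManager::class);
    $entityManager->method('getRepository')
        ->with(Product::class)
        ->willReturn($repository);

    $manager = new MyEntityManager($entityManager);
    $manager->getProductInsuredTypes(1);
}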
What I can give you is an explanation of how I approach testing repositories:
I think unit tests, especially with mocks, are a bit wasteful here. They only tie you to the current implementation, and since anything of relevance is mocked out, you will most likely not test any real behavior.
What might be useful is doing functional tests where you provide a real database connection and then perform a query against the repository to see if it really returns the expected results. I usually do this for more complex queries or when using advanced Doctrine features like native queries with a custom ResultSetMapping. In this case you would not mock the EntityManager and instead would use Doctrine's TestHelper to create an in-memory SQLite database. This can look something like this:
protected $entityManager;

protected function setUp()
{
    parent::setUp();

    $config = DoctrineTestHelper::createTestConfiguration();
    $config->setNamingStrategy(new UnderscoreNamingStrategy());
    $config->setRepositoryFactory(new RepositoryFactory());

    $this->entityManager = DoctrineTestHelper::createTestEntityManager($config);
}
The downside is that you will have to register custom types and listeners manually, which means the behavior might differ from your production configuration. Additionally, you are still tasked with setting up the schema and providing fixtures for your tests. You will also most likely not use SQLite in production, so this is another deviation. The benefit is that you will not have to deal with clearing your database between runs and can easily parallelize running tests, plus it's usually faster and easier than setting up a complete test database.
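For the schema part, one common approach is to let Doctrine's SchemaTool build the tables for your mapped entities right after creating the entity manager, along these lines:
use Doctrine\ORM\Tools\SchemaTool;

// Build the schema for all mapped entities inside the in-memory database.
$schemaTool = new SchemaTool($this->entityManager);
$metadata = $this->entityManager->getMetadataFactory()->getAllMetadata();
$schemaTool->createSchema($metadata);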
The final option is somewhat close to the previous one. You can have a test database, which you define in your parameters, environment variables or a config_test.yml; you can then bootstrap the kernel and fetch the entity manager from your DI container. Symfony's WebTestCase can be used as a reference for this approach.
The downside is that you will have to boot your kernel and make sure to have separate database setups for development, production and testing, to ensure your test data does not mess up anything. You will also have to set up your schema and fixtures, and additionally you can easily run into issues where tests are not isolated and start being flaky, e.g. when run in a different order or when running only parts of your test suite. Obviously, since this is a full integration test through your bootstrapped application, the performance footprint compared to a unit test or smaller functional test is noticeably higher, and application caches might give you additional headaches.
As a rule of thumb:
I trust Doctrine when doing the most basic kind of queries and therefore do not test repositories for simple find-methods.
When I write critical queries I prefer testing them indirectly on higher layers, e.g. ensure in an acceptance test that the page displays the information.
When I encounter issues with repositories or need tests on lower layers I start with functional tests and then move on to integration tests for repositories.
tl;dr
Unit tests for repositories are mostly pointless (in my opinion)
Functional tests are good for testing simple queries in isolation with a reasonable amount of effort, should you need them.
Integration tests ensure the most production-like behavior, but they are more painful to set up and maintain.

Does "unit test only one thing" means one feature or one whole scenario of a unit?

When people say "test only one thing", does that mean to test one feature at a time or one scenario at a time?
method() {
    // setup data
    def data = new Data()

    // send external webservice call
    def success = service.webserviceCall(data)

    // persist
    if (success) {
        data.save()
    }
}
Based on the example, do we test by feature of the method:
testA() //test if service.webserviceCall is called properly, so assert if called once with the right parameter
testB() //test if service.webserviceCall succeeds, assert that it should save the data
testC() //test if service.webserviceCall fails, assert that it should not save the data
By scenario:
testA() //test if service.webserviceCall succeeds, so assert if service is called once with the right parameter, and assert that the data should be saved
testB() //test if service.webserviceCall fails, so again assert if service is called once with the right parameter, then assert that it should not save the data
I'm not sure if this is a subjective topic, but I'm trying to follow the by-feature approach. I got the idea from Roy Osherove's blogs, but I'm not sure if I understood it correctly.
It was mentioned there that it would be easier to isolate the errors, but I'm not sure if it's overkill. Complex methods will tend to have lots of tests.
(Please excuse my wording on "by feature" / "by scenario"; I'm not sure how else to word them.)
You are right in that this is a subjective topic.
Think about how you want this method to behave, not just how it's currently implemented. Otherwise your tests will just mirror the production code and will break every time the implementation changes.
Based on the limited context provided, I'd write the following (separate) tests:
Is the webservice command called with the expected data?
If the command returns successfully, is the data saved? Don't overspecify the arguments provided to your webservice call here, as the previous test covers this.
If it's important that the data is not saved when the command returns a failure, I'd write a third test for this. If it's not important, I wouldn't even bother.
You might have heard the adage "one assert per test". This is good advice in general, because a test stops executing as soon as a single assert fails, and all asserts further down are not executed. By splitting the asserts up into multiple tests you will receive more feedback when something goes wrong. When tests go red, you know exactly which asserts fail and don't have to run through the cycle of fixing one assertion failure, re-running the tests, fixing the next failure, and so on.
So in the terminology you propose, my approach would also be to write a test per feature of the method.
Sidenote: you construct your data object in the method itself and call the save method of that object. How do you sense that the data is saved in your tests?
I understand it like this:
"unit test one thing" == "unit test one behavior"
(After all, it is the behavior that the client wants!)
I would suggest that you approach your testing "one feature at a time". I agree with the point you quoted, that with this approach it is "easier to isolate the errors". Roy Osherove really does know what he is talking about, especially when it comes to TDD.
In my experience I like to focus on the behaviors that I am trying to test (and I am not particularly referring to BDD here). Essentially I would test each behavior that I am expecting from this code. You said that you are mocking out the dependencies (web service and data storage), so I would still class this as a unit test with the following expected behaviors:
a call to this method will result in a particular call to a web service
a successful web service call will result in the data being saved
an unsuccessful web service call will result in the data not being saved
Having tests for these three behaviors will help you isolate any issues with the code immediately.
Your tests should also have no dependency on the actual code written to achieve the behavior. For example, if my implementation called some decorator internal to my class which in turn called the webservice correctly, then that should be no concern of my test. My test should only be concerned with the external dependencies and the public interface of the class itself.
If I exposed internal methods of my class (or implementation details, such as the decorator mentioned above) for the purpose of testing its particular implementation, then I have created brittle tests that will fail when the implementation changes.
In summary, I would recommend that your tests should lock down the behavior of a class and isolate failures to identify the 'unit of behavior' that is failing.
A unit test, in general, is a test that runs without a call to a database or the file system and, to that effect, does not call a web service either. The idea of a unit test is that you should be able to run it even without an internet connection. That said, if a method calls a web service or a database, you are basically expected to mock the responses from the external system and test only that unit of work. As prgmtc mentioned above, asserting one thing per test is the way to go.
Second, if you are calling a real web service or database etc., then consider calling those tests integration tests, depending on what you are trying to test.
In my opinion, to get the most out of TDD you want to be doing test-first development. Have a look at Uncle Bob's 3 Rules of TDD.
If you follow these rules strictly, you end up writing tests that generally only have a single assert statement. In reality you will often find you end up with a number of assert statements that act as a single logical assert, as it often helps with the understanding of the unit test itself.
Here is an example
[Test]
public void ValidateBankAccount_GivenInvalidAccountType_ShouldReturnValidationFailure()
{
    //---------------Set up test pack-------------------
    const string validBankAccount = "99999999999";
    const string validBranchCode = "222222";
    const string invalidAccountType = "99";
    const string invalidAccountTypeResult = "3";

    var bankAccountValidation = Substitute.For<IBankAccountValidation>();
    bankAccountValidation.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType)
        .Returns(invalidAccountTypeResult);

    var service = new BankAccountCheckingService(bankAccountValidation);
    //---------------Assert Precondition----------------
    //---------------Execute Test ----------------------
    var result = service.ValidateBankAccount(validBankAccount, validBranchCode, invalidAccountType);
    //---------------Test Result -----------------------
    Assert.IsFalse(result.IsValid);
    Assert.AreEqual("Invalid account type", result.Message);
}
And the ValidationResult class that is returned from the service
public interface IValidationResult
{
    bool IsValid { get; }
    string Message { get; }
}

public class ValidationResult : IValidationResult
{
    public static IValidationResult Success()
    {
        return new ValidationResult(true, "");
    }

    public static IValidationResult Failure(string message)
    {
        return new ValidationResult(false, message);
    }

    public ValidationResult(bool isValid, string message)
    {
        Message = message;
        IsValid = isValid;
    }

    public bool IsValid { get; private set; }
    public string Message { get; private set; }
}
Note that I would have unit tested the ValidationResult class itself, but in the test above I feel it gives more clarity to include both asserts.

How to avoid unnecessary preconditions in unit testing?

Ideally, a test class is written for every class in the production code. In the test class, however, not all test methods may require the same preconditions. How do we solve this problem?
Do we create separate test classes for these?
I suggest creating separate methods wrapping the necessary precondition setup. Do not confuse this approach with a traditional test setup. As an example, assume you wrote tests for a receipt provider, which searches a repository and, depending on some validation steps, returns a receipt. We might end up with:
receipt doesn't exist in repository: return null
receipt exists, but doesn't match validator date: return null
receipt exists, matches validator date, but was not fully committed (i.e. was not processed by some external system): return null
We have several conditions here: the receipt exists/doesn't exist, the receipt has an invalid date, the receipt is not committed. Our happy path is the default setup (for example done via the traditional test setup). The happy-path test would then be as simple as this (some C# pseudo-code):
[Test]
public void GetReceipt_ReturnsReceipt()
{
    receiptProvider.GetReceipt("701").IsNotNull();
}
Now, for the special condition cases we simply write tiny, dedicated methods that arrange our test environment (e.g. set up dependencies) so that the conditions are met:
[Test]
public void GetReceipt_ReturnsNull_WhenReceiptDoesntExist()
{
    ReceiptDoesNotExistInRepository("701");
    receiptProvider.GetReceipt("701").IsNull();
}

[Test]
public void GetReceipt_ReturnsNull_WhenExistingReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    receiptProvider.GetReceipt("701").IsNull();
}
You'll end up with a couple of extra helper methods, but your tests will be much easier to read and understand. This is especially helpful when the logic is more complicated than a simple yes-no setup:
[Test]
public void GetReceipt_ThrowsException_WhenUncommittedReceiptHasInvalidDate()
{
    ReceiptHasInvalidDate("701");
    ReceiptIsUncommitted("701");
    receiptProvider.GetReceipt("701").Throws<Exception>();
}
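The helper methods themselves can be as small as a single arranged expectation. One possible shape, assuming the provider reads receipts from a mocked repository (NSubstitute-style; IReceiptRepository, Receipt and the _repository field are hypothetical names):
private void ReceiptDoesNotExistInRepository(string receiptId)
{
    // The provider's repository dependency returns nothing for this id.
    _repository.Find(receiptId).Returns((Receipt)null);
}

private void ReceiptHasInvalidDate(string receiptId)
{
    _repository.Find(receiptId).Returns(new Receipt
    {
        Id = receiptId,
        Date = DateTime.MinValue // outside the range the validator accepts
    });
}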
It's an option to group tests with the same preconditions in the same classes; this also helps avoid test classes of over a thousand lines. You can also group the creation of the preconditions in separate methods and let each test call the applicable method. You can do this when most of the methods have different preconditions; otherwise you can just use a setup method that is called before each test.
I like to use a Setup method that gets called before each test runs. In this method I instantiate the class I want to test, giving it any dependencies it needs to be created. Then I set the specific details for the individual tests inside the test methods themselves. It moves any common initialization of the class out to the setup method and allows each test to focus on what it needs to evaluate.
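A rough sketch of that shape in NUnit/NSubstitute style, reusing the receipt example from the answer above (ReceiptProvider and IReceiptRepository are hypothetical names):
private IReceiptRepository _repository;
private ReceiptProvider _receiptProvider;

[SetUp]
public void SetUp()
{
    // Common construction lives here; each test only arranges the details it cares about.
    _repository = Substitute.For<IReceiptRepository>();
    _receiptProvider = new ReceiptProvider(_repository);
}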
You may find this link valuable; it discusses an approach to test setups:
In Defense of Test Setup Methods, by Erik Dietrich

How are integration tests written for interacting with external API?

First up, where my knowledge is at:
Unit Tests are those which test a small piece of code (single methods, mostly).
Integration Tests are those which test the interaction between multiple areas of code (which hopefully already have their own Unit Tests). Sometimes, parts of the code under test require other code to act in a particular way. This is where Mocks & Stubs come in. So, we mock/stub out a part of the code to perform very specifically. This allows our Integration Test to run predictably without side effects.
All tests should be able to be run stand-alone without data sharing. If data sharing is necessary, this is a sign the system isn't decoupled enough.
Next up, the situation I am facing:
When interacting with an external API (specifically, a RESTful API that will modify live data with a POST request), I understand we can (should?) mock out the interaction with that API (more eloquently stated in this answer) for an Integration Test. I also understand we can Unit Test the individual components of interacting with that API (constructing the request, parsing the result, throwing errors, etc). What I don't get is how to actually go about this.
So, finally: My question(s).
How do I test my interaction with an external API that has side effects?
A perfect example is Google's Content API for shopping. To be able to perform the task at hand, it requires a decent amount of prep work, then performing the actual request, then analysing the return value. Some of this is without any 'sandbox' environment.
The code to do this generally has quite a few layers of abstraction, something like:
<?php

class Request
{
    public function setUrl(..){ /* ... */ }
    public function setData(..){ /* ... */ }
    public function setHeaders(..){ /* ... */ }

    public function execute(..){
        // Do some CURL request or some-such
    }

    public function wasSuccessful(){
        // some test to see if the CURL request was successful
    }
}

abstract class GoogleAPIRequest
{
    private $request;

    abstract protected function getUrl();
    abstract protected function getData();

    public function __construct() {
        $this->request = new Request();
        $this->request->setUrl($this->getUrl());
        $this->request->setData($this->getData());
        $this->request->setHeaders($this->getHeaders());
    }

    public function doRequest() {
        $this->request->execute();
    }

    public function wasSuccessful() {
        return ($this->request->wasSuccessful() && $this->parseResult());
    }

    private function parseResult() {
        // return false when result can't be parsed
    }

    protected function getHeaders() {
        // return some GoogleAPI specific headers
    }
}

class CreateSubAccountRequest extends GoogleAPIRequest
{
    private $dataObject;

    public function __construct($dataObject) {
        parent::__construct();
        $this->dataObject = $dataObject;
    }

    protected function getUrl() {
        return "http://...";
    }

    protected function getData() {
        return $this->dataObject->getSomeValue();
    }
}

class aTest
{
    public function testTheRequest() {
        $dataObject = getSomeDataObject(..);
        $request = new CreateSubAccountRequest($dataObject);
        $request->doRequest();
        $this->assertTrue($request->wasSuccessful());
    }
}

?>
Note: This is a PHP5 / PHPUnit example
Given that testTheRequest is the method called by the test suite, the example will execute a live request.
Now, this live request will (hopefully, provided everything went well) do a POST request that has the side effect of altering live data.
Is this acceptable? What alternatives do I have? I can't see a way to mock out the Request object for the test. And even if I did, it would mean setting up results / entry points for every possible code path that Google's API accepts (which in this case would have to be found by trial and error), but would allow me the use of fixtures.
A further extension is when certain requests rely on certain data being Live already. Using the Google Content API as an example again, to add a Data Feed to a Sub Account, the Sub Account must already exist.
One approach I can think of involves the following steps:
In testCreateAccount
Create a sub-account
Assert the sub-account was created
Delete the sub-account
Have testCreateDataFeed depend on testCreateAccount not having any errors
In testCreateDataFeed, create a new account
Create the data feed
Assert the data feed was created
Delete the data feed
Delete the sub-account
This then raises a further question: how do I test the deletion of accounts / data feeds? testCreateDataFeed feels dirty to me - what if creating the data feed fails? The test fails, and therefore the sub-account is never deleted... I can't test deletion without creation, so do I write another test (testDeleteAccount) that relies on testCreateAccount before creating and then deleting an account of its own (since data shouldn't be shared between tests)?
In Summary
How do I test interacting with an external API that affects live data?
How can I mock / stub objects in an Integration test when they're hidden behind layers of abstraction?
What do I do when a test fails and the live data is left in an inconsistent state?
How in code do I actually go about doing all this?
Related:
How can mocking external services improve unit tests?
Writing unit tests for a REST-ful API
This is more an additional answer to the one already given:
Looking through your code, the class GoogleAPIRequest has a hard-coded dependency on the Request class. This prevents you from testing it independently of the Request class, so you can't mock the request.
You need to make the Request injectable, so you can swap it for a mock while testing. Once that is done, no real API HTTP requests are sent, the live data is not changed, and your tests run much faster.
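A minimal sketch of what that injection could look like, based on the classes from the question (the exact constructor signature is up to you; subclasses like CreateSubAccountRequest would need to accept the Request and pass it through to the parent):
abstract class GoogleAPIRequest
{
    private $request;

    abstract protected function getUrl();
    abstract protected function getData();

    // The Request is passed in instead of being constructed here,
    // so a test can hand over a mock that never touches the network.
    public function __construct(Request $request) {
        $this->request = $request;
        $this->request->setUrl($this->getUrl());
        $this->request->setData($this->getData());
        $this->request->setHeaders($this->getHeaders());
    }

    // ... rest unchanged
}
In a PHPUnit test you could then build the mock with $this->getMockBuilder(Request::class)->getMock(), program wasSuccessful() to return true or false, and assert on your own class's behavior without a single live HTTP call.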
I recently had to update a library because the API it connects to was updated.
My knowledge isn't enough to explain this in detail, but I learnt a great deal from looking at the code: https://github.com/gridiron-guru/FantasyDataAPI
You can submit a request to the API as you normally would and then save that response as a JSON file; you can then use that as a mock.
Have a look at the tests in this library, which connects to an API using Guzzle. It mocks responses from the API, and there's a good deal of information in the docs on how the testing works; it might give you an idea of how to go about it.
Basically, you do a manual call to the API along with any parameters you need and save the response as a JSON file. When you write your test for the API call, send along the same parameters and have it load the mock rather than using the live API; you can then test that the data in the mock you created contains the expected values.
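If the library uses Guzzle, as the one above does, the captured JSON can be replayed through Guzzle's MockHandler. A sketch (the fixture path is only illustrative):
use GuzzleHttp\Client;
use GuzzleHttp\Handler\MockHandler;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Psr7\Response;

// A response captured once from the live API and saved to disk.
$body = file_get_contents(__DIR__ . '/fixtures/players_response.json');

$mock = new MockHandler([
    new Response(200, ['Content-Type' => 'application/json'], $body),
]);
$client = new Client(['handler' => HandlerStack::create($mock)]);

// Every request sent through $client now receives the canned fixture,
// so the test can assert on the parsed values without hitting the live API.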
My Updated version of the api in question can be found here.
Updated Repo
One of the ways to test external APIs is, as you mentioned, by creating a mock and working against that, with the behavior hard-coded as you have understood it.
Sometimes people refer to this type of testing as "contract-based" testing: you write tests against the API based on the behavior you have observed and coded against, and when those tests start failing, the "contract is broken". If they are simple REST-based tests using dummy data, you can also provide them to the external provider to run, so they can discover where and when they might be changing the API enough that it should be a new version or produce a warning about not being backwards compatible.
Ref: https://www.thoughtworks.com/radar/techniques/consumer-driven-contract-testing

Unit test 'structure' of method?

Sorry for the long post...
While being introduced to a brownfield project, I'm having doubts regarding certain sets of unit tests and what to think of them. Say you have a repository class wrapping a stored procedure, and in the developer guide book a certain set of guidelines (rules) describes how this class should be constructed. The class could look like the following:
public class PersonRepository
{
    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (new SomeProfiler("someKey"))
        {
            var sp = Ioc.Resolve<IPersonStoredProcedure>();

            sp.addNameArguement(personName);
            sp.addCityArguement(cityName);

            return sp.invoke();
        }
    }
}
Now, I would of course write some integration tests, testing that the SP can be invoked, and that the behavior is as expected. However, would I write unit tests that assert that:
Constructor for SomeProfiler with the input parameter "someKey" is called
The Constructor of PersonStoredProcedure is called
The addNameArgument method on the stored procedure is called with parameter personName
The addCityArgument method on the stored procedure is called with parameter cityName
The invoke method is called on the stored procedure
If so, I would potentially be testing the whole structure of a method, besides the behavior. My initial thought is that it is overkill. However, with regard to the coding practices enforced by the team, these tests ensure a uniform and 'correct' structure and that the next layer is called correctly (from DAL to DB, BLL to DAL, etc.).
In my case these types of tests are performed for each layer of the application.
Follow-up question: the use of the SomeProfiler class smells a little like a convention to me. Instead of creating explicit tests for this, could one create convention-style tests by using static code analysis or unit tests + reflection?
Thanks in advance.
I think that your initial thought was right - this is overkill. Although you can use reflection to make sure that the class has the methods you expect, I'm not sure you want to test it that way.
Perhaps instead of unit testing you should use a tool such as FxCop/StyleCop or NDepend to make sure all of the classes in a specific assembly/dll have these properties.
Having said that, I'm a believer in "only code what you need". Why test that a method exists? Either you use it somewhere in your code, in which case you can test that specific case, or you don't - and so it's irrelevant.
Unit tests should focus on behavior, not implementation. So writing a test to verify that certain arguments are set or passed in doesn't add much value to your testing strategy.
As the example provided appears to be communicating with your database, it can't truly be considered a "unit test", as it must communicate with physical dependencies that have additional setup and preconditions, such as availability of the environment, database schema, existing data, stored procedures, etc. Any test you write is actually verifying these preconditions as well.
In its present condition, your best bet for these types of tests is to test the behavior provided by the class: invoke a method on your repository and then validate that the results are what you expected. However, you'll suddenly realize that there's a hidden cost here: the database maintains state between test runs, and you'll need additional setup or tear-down logic to ensure that the database is in a well-known state.
While I realize the intent of the question was about testing a "black box", it seems obvious that there's some hidden magic here in your API. My preference for solving the well-known-state problem is to use an in-memory database that is scoped to the current test, which isolates me from environment considerations and enables me to parallelize my integration tests. I'd wager that under the current design there is no "seam" to programmatically introduce a database configuration, so you're "hemmed in". In my experience, magic hurts.
However, a slight change to the existing design solves this problem and the "magic" goes away:
public class PersonRepository : IPersonRepository
{
    private ConnectionManager _mgr;

    public PersonRepository(ConnectionManager mgr)
    {
        _mgr = mgr;
    }

    public PersonCollection FindPersonsByNameAndCity(string personName, string cityName)
    {
        using (var p = _mgr.CreateProfiler("somekey"))
        {
            var sp = new PersonStoredProcedure(p);

            sp.addArguement("name", personName);
            sp.addArguement("city", cityName);

            return sp.invoke();
        }
    }
}
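With that seam in place, a test can point the repository at a per-test database (or a test double) instead of a shared environment. A sketch of what such a test could look like; the ConnectionManager constructor argument and the SeedTestData helper are hypothetical:
[Test]
public void FindPersonsByNameAndCity_ReturnsPersonsMatchingBothArguments()
{
    // In-memory database scoped to this test; nothing leaks between runs.
    var mgr = new ConnectionManager("Data Source=:memory:");
    SeedTestData(mgr); // hypothetical helper: creates the schema and inserts fixture rows

    var repository = new PersonRepository(mgr);

    var persons = repository.FindPersonsByNameAndCity("Ada", "London");

    Assert.AreEqual(1, persons.Count);
}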