In the past I unit tested Flink jobs by writing the job with pluggable sources/sinks and then mocking them via simple Source-/SinkFunctions, like this:
public class Example {
private final SourceFunction<String> someSource;
private final SourceFunction<String> someOtherSource;
private final SinkFunction<String> someSink;
Example(
SourceFunction<String> someSource,
SourceFunction<String> someOtherSource,
SinkFunction<String> someSink
) {
this.someSource = someSource;
this.someOtherSource = someOtherSource;
this.someSink = someSink;
}
void build(StreamExecutionEnvironment env) {
/*
... build your logic here ...
*/
}
public static void main(String[] args) throws Exception {
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
new Example(
new FlinkKafkaConsumer<String>(...),
new FlinkKafkaConsumer<String>(...),
new FlinkKafkaProducer<String>(...)
).build(env);
env.execute();
}
}
This way I could easily test the whole job by just exchanging the real Kafka sources and sinks for custom Source-/SinkFunctions.
The new data sources are much more complex to implement just for test cases.
Even if I implemented one, making it injectable via the constructor would end in generics hell.
So I was wondering what the best approach is to unit test the whole job without bringing up e.g. a complete Kafka cluster.
Are there any ideas or solutions?
You could get part way there by building something based on the NumberSequenceSource, followed by a map.
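For example, a rough sketch (shown in Kotlin here; fromSequence is the convenience wrapper around NumberSequenceSource, and the map shapes the numbers into your real element type):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment()
// bounded stream of 1..3, mapped into String records for the job under test
val testInput = env.fromSequence(1L, 3L)
    .map { n -> "record-$n" }

Depending on how the lambda compiles, Flink's type extraction may ask for an explicit .returns(Types.STRING) hint.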
The DataGeneratorSource described in FLIP-238 is meant to fill this need, and it will be released as part of 1.16. (I believe it is self-contained, so you could copy it and start using it now.)
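Since it is self-contained, usage should look roughly like this (a sketch in Kotlin based on the FLIP-238 interfaces; the package names and constructor shape are assumptions until 1.16 ships):

import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.api.common.typeinfo.Types
import org.apache.flink.connector.datagen.source.DataGeneratorSource
import org.apache.flink.connector.datagen.source.GeneratorFunction
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

fun main() {
    val env = StreamExecutionEnvironment.getExecutionEnvironment()
    // bounded source that emits "element-0" .. "element-9" and then finishes
    val source = DataGeneratorSource(
        GeneratorFunction<Long, String> { index -> "element-$index" },
        10L,
        Types.STRING
    )
    env.fromSource(source, WatermarkStrategy.noWatermarks(), "test-source").print()
    env.execute()
}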
An alternative approach to using a pluggable Sink is to use DataStream#executeAndCollect():
DataStream<Integer> stream = env.fromElements(1, 2, 3);
try (CloseableIterator<Integer> results = stream.executeAndCollect()) {
assertThat(results).toIterable().containsExactlyInAnyOrder(1, 2, 3);
}
So let me start by saying I've seen all the threads about the wars between creating a wrapper vs. mocking the HttpMessageHandler. In the past, I've used the wrapper approach with great success, but this time I thought I'd go down the path of mocking the HttpMessageHandler.
For starters, here is an example of the debate: Mocking HttpClient in unit tests. I want to add that this question is not about that debate.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless AWS Lambdas, and the basic flow is like so:
//some pseudo code
public class Functions
{
private readonly HttpClient _httpClient;
public Functions(HttpClient client)
{
_httpClient = client;
}
public async Task<APIGatewayResponse> GetData(ApiGatewayRequest request, ILambdaContext context)
{
var result = await _httpClient.GetAsync("http://example.com");
return new APIGatewayResponse
{
StatusCode = result.StatusCode,
Body = await result.Content.ReadAsStringAsync()
};
}
}
...
[Fact]
public async Task ShouldDoCall()
{
var expectedUri = new Uri("http://example.com");
var expectedResponse = "expected response body"; // value is illustrative
var mockResponse = new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent(expectedResponse) };
var mockHandler = new Mock<HttpMessageHandler>();
mockHandler
.Protected()
.Setup<Task<HttpResponseMessage>>(
"SendAsync",
ItExpr.IsAny<HttpRequestMessage>(),
ItExpr.IsAny<CancellationToken>())
.ReturnsAsync(mockResponse);
var f = new Functions(new HttpClient(mockHandler.Object));
var result = await f.GetData(new ApiGatewayRequest(), null); // pseudo request/context
mockHandler.Protected().Verify(
"SendAsync",
Times.Exactly(1), // we expected a single external request
ItExpr.Is<HttpRequestMessage>(req =>
req.Method == HttpMethod.Get &&
req.RequestUri == expectedUri // to this uri
),
ItExpr.IsAny<CancellationToken>()
);
Assert.Equal(200, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch they pass, and pass fast!
When I run them all manually with Resharper 2018, they fail.
Equally, when they get run on the CI/CD platform, which is a Docker container with the .NET Core 2.1 SDK on a Linux distro, they also fail.
These tests should not be running in parallel (read: the tests default this way). I have about 30 tests around these methods combined, and each one randomly fails on the Moq Verify portion. Sometimes they pass, sometimes they fail. If I break the tests down per test class and run the groups that way, instead of all in one go, then they all pass in chunks. I'll also add that I have even gone through isolating the variables per test method to make sure there is no overlap.
So I'm really at a loss as to how to handle this and make sure it's testable.
Are there different ways to approach the HttpClient so that the tests consistently pass?
After lots of back and forth, I found two things.
I couldn't get parallel processing disabled within the Docker setup, which is where I thought the issue was (I even made it Thread.Sleep between tests to slow things down, which felt really icky to me).
I found that all the tests I ran locally through the test runners were telling me they passed, when about half of them failed on the Docker test runner. What ended up being the issue was magic strings used when setting and getting environment variables.
Small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our Docker image.
In my Kotlin JUnit tests, I want to start/stop embedded servers and use them within my tests.
I tried using the JUnit @Before annotation on a method in my test class and it works fine, but it isn't the right behaviour since it runs before every test case instead of just once.
Therefore I want to use the @BeforeClass annotation on a method, but adding it to a method results in an error saying it must be on a static method. Kotlin doesn't appear to have static methods, and the same applies to static variables, because I need to keep a reference to the embedded server around for use in the test cases.
So how do I create this embedded database just once for all of my test cases?
class MyTest {
@Before fun setup() {
// works in that it opens the database connection, but is wrong
// since this is per test case instead of being shared for all
}
@BeforeClass fun setupClass() {
// what I want to do instead, but results in error because
// this isn't a static method, and static keyword doesn't exist
}
var referenceToServer: ServerType // wrong because it is not static either
...
}
Note: this question is intentionally written and answered by the author (Self-Answered Questions), so that the answers to commonly asked Kotlin topics are present in SO.
Your unit test class usually needs a few things to manage a shared resource for a group of test methods. And in Kotlin you can use @BeforeClass and @AfterClass not in the test class, but rather within its companion object, along with the @JvmStatic annotation.
The structure of a test class would look like:
class MyTestClass {
companion object {
init {
// things that may need to be setup before companion class member variables are instantiated
}
// variables you initialize for the class just once:
val someClassVar = initializer()
// variables you initialize for the class later in the @BeforeClass method:
lateinit var someClassLateVar: SomeResource
@BeforeClass @JvmStatic fun setup() {
// things to execute once and keep around for the class
}
@AfterClass @JvmStatic fun teardown() {
// clean up after this class, leave nothing dirty behind
}
}
// variables you initialize per instance of the test class:
val someInstanceVar = initializer()
// variables you initialize per test case later in your @Before methods:
lateinit var someInstanceLateVar: MyType
@Before fun prepareTest() {
// things to do before each test
}
@After fun cleanupTest() {
// things to do after each test
}
@Test fun testSomething() {
// an actual test case
}
@Test fun testSomethingElse() {
// another test case
}
// ...more test cases
}
Given the above, you should read about:
companion objects - similar to the Class object in Java, but a singleton per class that is not static
@JvmStatic - an annotation that turns a companion object method into a static method on the outer class for Java interop
lateinit - allows a var property to be initialized later when you have a well defined lifecycle
Delegates.notNull() - can be used instead of lateinit for a property that should be set at least once before being read.
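As a quick illustration of that last point: Delegates.notNull() also works for primitive-typed properties, where lateinit is not allowed. A minimal sketch (the class and property names are made up):

import kotlin.properties.Delegates

class ServerFixture {
    // lateinit is not allowed on primitives such as Int; notNull() fills that gap.
    // Reading the property before it has been assigned throws IllegalStateException.
    var port: Int by Delegates.notNull()
}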
Here are fuller examples of test classes for Kotlin that manage embedded resources.
The first is copied and modified from the Solr-Undertow tests. Before the test cases are run, it configures and starts a Solr-Undertow server; after the tests run, it cleans up any temporary files created by the tests. It also ensures environment variables and system properties are correct before the tests are run, and between test cases it unloads any temporarily loaded Solr cores. The test:
class TestServerWithPlugin {
companion object {
val workingDir = Paths.get("test-data/solr-standalone").toAbsolutePath()
val coreWithPluginDir = workingDir.resolve("plugin-test/collection1")
lateinit var server: Server
@BeforeClass @JvmStatic fun setup() {
assertTrue(coreWithPluginDir.exists(), "test core w/plugin does not exist $coreWithPluginDir")
// make sure no system properties are set that could interfere with test
resetEnvProxy()
cleanSysProps()
routeJbossLoggingToSlf4j()
cleanFiles()
val config = mapOf(...)
val configLoader = ServerConfigFromOverridesAndReference(workingDir, config) verifiedBy { loader ->
...
}
assertNotNull(System.getProperty("solr.solr.home"))
server = Server(configLoader)
val (serverStarted, message) = server.run()
if (!serverStarted) {
fail("Server not started: '$message'")
}
}
@AfterClass @JvmStatic fun teardown() {
server.shutdown()
cleanFiles()
resetEnvProxy()
cleanSysProps()
}
private fun cleanSysProps() { ... }
private fun cleanFiles() {
// don't leave any test files behind
coreWithPluginDir.resolve("data").deleteRecursively()
Files.deleteIfExists(coreWithPluginDir.resolve("core.properties"))
Files.deleteIfExists(coreWithPluginDir.resolve("core.properties.unloaded"))
}
}
val adminClient: SolrClient = HttpSolrClient("http://localhost:8983/solr/")
@Before fun prepareTest() {
// anything before each test?
}
@After fun cleanupTest() {
// make sure test cores do not bleed over between test cases
unloadCoreIfExists("tempCollection1")
unloadCoreIfExists("tempCollection2")
unloadCoreIfExists("tempCollection3")
}
private fun unloadCoreIfExists(name: String) { ... }
@Test
fun testServerLoadsPlugin() {
println("Loading core 'withplugin' from dir ${coreWithPluginDir.toString()}")
val response = CoreAdminRequest.createCore("tempCollection1", coreWithPluginDir.toString(), adminClient)
assertEquals(0, response.status)
}
// ... other test cases
}
And here is another that starts AWS DynamoDB Local as an embedded database (copied and modified slightly from Running AWS DynamoDB-local embedded). This test must hack the java.library.path before anything else happens, or local DynamoDB (which uses sqlite with binary libraries) won't run. Then it starts a server to share for all test classes, and it cleans up temporary data between tests. The test:
class TestAccountManager {
companion object {
init {
// we need to control the "java.library.path" or sqlite cannot find its libraries
val dynLibPath = File("./src/test/dynlib/").absoluteFile
System.setProperty("java.library.path", dynLibPath.toString());
// TEST HACK: if we kill this value in the System classloader, it will be
// recreated on next access allowing java.library.path to be reset
val fieldSysPath = ClassLoader::class.java.getDeclaredField("sys_paths")
fieldSysPath.setAccessible(true)
fieldSysPath.set(null, null)
// ensure logging always goes through Slf4j
System.setProperty("org.eclipse.jetty.util.log.class", "org.eclipse.jetty.util.log.Slf4jLog")
}
private val localDbPort = 19444
private lateinit var localDb: DynamoDBProxyServer
private lateinit var dbClient: AmazonDynamoDBClient
private lateinit var dynamo: DynamoDB
@BeforeClass @JvmStatic fun setup() {
// do not use ServerRunner, it is evil and doesn't set the port correctly, also
// it resets logging to be off.
localDb = DynamoDBProxyServer(localDbPort, LocalDynamoDBServerHandler(
LocalDynamoDBRequestHandler(0, true, null, true, true), null)
)
localDb.start()
// fake credentials are required even though ignored
val auth = BasicAWSCredentials("fakeKey", "fakeSecret")
dbClient = AmazonDynamoDBClient(auth) initializedWith {
signerRegionOverride = "us-east-1"
setEndpoint("http://localhost:$localDbPort")
}
dynamo = DynamoDB(dbClient)
// create the tables once
AccountManagerSchema.createTables(dbClient)
// for debugging reference
dynamo.listTables().forEach { table ->
println(table.tableName)
}
}
@AfterClass @JvmStatic fun teardown() {
dbClient.shutdown()
localDb.stop()
}
}
val jsonMapper = jacksonObjectMapper()
val dynamoMapper: DynamoDBMapper = DynamoDBMapper(dbClient)
@Before fun prepareTest() {
// insert commonly used test data
setupStaticBillingData(dbClient)
}
@After fun cleanupTest() {
// delete anything that shouldn't survive any test case
deleteAllInTable<Account>()
deleteAllInTable<Organization>()
deleteAllInTable<Billing>()
}
private inline fun <reified T: Any> deleteAllInTable() { ... }
@Test fun testAccountJsonRoundTrip() {
val acct = Account("123", ...)
dynamoMapper.save(acct)
val item = dynamo.getTable("Accounts").getItem("id", "123")
val acctReadJson = jsonMapper.readValue<Account>(item.toJSON())
assertEquals(acct, acctReadJson)
}
// ...more test cases
}
NOTE: some parts of the examples are abbreviated with ...
Managing resources with before/after callbacks in tests obviously has its pros:
Tests are "atomic". A test executes as a whole, with all the callbacks. One won't forget to fire up a dependency service before the tests and shut it down after they're done. If done properly, the callbacks will work in any environment.
Tests are self-contained. There are no external data or setup phases; everything is contained within a few test classes.
It has some cons too. An important one is that it pollutes the code and makes it violate the single responsibility principle. Tests now not only test something, but also perform heavyweight initialization and resource management. That can be OK in some cases (like configuring an ObjectMapper), but modifying java.library.path or spawning other processes (or in-process embedded databases) is not so innocent.
Why not treat those services as dependencies of your tests, eligible for "injection", as described by 12factor.net?
This way you start and initialize dependency services somewhere outside of the test code.
Nowadays virtualization and containers are almost everywhere, and most developers' machines are able to run Docker. And most applications have a dockerized version: Elasticsearch, DynamoDB, PostgreSQL and so on. Docker is a perfect solution for external services that your tests need.
It can be a script that is run manually by a developer every time she wants to execute the tests.
It can be a task run by the build tool (e.g. Gradle has an awesome dependsOn and finalizedBy DSL for defining dependencies; see the sketch after this list). A task, of course, can execute the same script the developer runs manually, using shell-outs / process execs.
It can be a task run by the IDE before test execution. Again, it can use the same script.
Most CI / CD providers have a notion of "service": an external dependency (process) that runs in parallel to your build and can be accessed via its usual SDK / connector / API: GitLab, Travis, Bitbucket, AppVeyor, Semaphore, …
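For the build-tool option mentioned above, a sketch of what this could look like in a Gradle Kotlin DSL build script (the task names and the docker compose invocation are assumptions, not from the original post):

// build.gradle.kts; assumes the java (or kotlin) plugin is applied, which adds the test task
val startTestServices by tasks.registering(Exec::class) {
    // the same script a developer can run by hand
    commandLine("docker", "compose", "up", "-d")
}

val stopTestServices by tasks.registering(Exec::class) {
    commandLine("docker", "compose", "down")
}

tasks.test {
    dependsOn(startTestServices)   // bring services up before the tests
    finalizedBy(stopTestServices)  // tear them down even if tests fail
}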
This approach:
Frees your test code from initialization logic. Your tests will only test and do nothing more.
Decouples code and data. Adding a new test case can now be done by adding new data into the dependency services with their native toolsets. E.g. for SQL databases you'll use SQL; for Amazon DynamoDB you'll use the CLI to create tables and put items.
Is closer to production code, where you obviously do not start those services when your "main" application starts.
Of course, it has its flaws (basically, the statements I started from):
Tests are no longer "atomic". A dependency service must be started somehow prior to test execution, and the way it is started may differ between environments: a developer's machine vs. CI, the IDE vs. the build tool CLI.
Tests are not self-contained. Your seed data may even be packed inside an image, so changing it may require rebuilding a separate project.
Unit-test-related question:
I've encountered a problem testing Scala objects that extend another trait/class that has a DB connection (or any other "external" call).
Using a singleton with a DB connection anywhere in my project makes unit testing not an option, because I cannot override/mock the DB connection.
This results in changing my design purely for test purposes, in situations where it clearly should be an object.
Any suggestions?
Code snippet for non-testable code:
object How2TestThis extends SomeDBconnection {
val somethingUsingDB = {
getStuff.map(thing => ???) // some logic
}
val moreThings = {
//more things
}
}
trait SomeDBconnection {
import DBstuff._
val db = connection(someDB)
val getStuff = db.getThings
}
One of the options is to use the cake pattern to require some DB connection and mix in a specific implementation as desired. For example:
import java.sql.Connection
// Defines general DB connection interface for your application
trait DbConnection {
def getConnection: Connection
}
// Concrete implementation for production/dev environment for example
trait ProductionDbConnectionImpl extends DbConnection {
def getConnection: Connection = ???
}
// Common code that uses that DB connection and needs to be tested.
trait DbConsumer {
this: DbConnection =>
def runDb(sql: String): Unit = {
getConnection.prepareStatement(sql).execute()
}
}
...
// Somewhere in production code when you set everything up in init or main you
// pick concrete db provider
val prodDbConsumer = new DbConsumer with ProductionDbConnectionImpl
prodDbConsumer.runDb("select * from sometable")
...
// Somewhere in test code you mock or stub DB connection ...
val testDbConsumer = new DbConsumer with DbConnection { def getConnection = ??? }
testDbConsumer.runDb("select * from sometable")
If you have to use a singleton / Scala object, you can have a lazy val or some init(): Unit method that sets the connection up.
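For instance, the lazy-initialization variant (sketched in Kotlin to match the other sketches in this document; a Scala object with a lazy val has the same shape, and the JDBC URL is made up):

import java.sql.Connection
import java.sql.DriverManager

object Db {
    // nothing connects until the first read of this property, so code paths
    // (and tests) that never touch it never open a real connection
    val connection: Connection by lazy {
        DriverManager.getConnection("jdbc:h2:mem:test")
    }
}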
Another approach would be to use some sort of injector. For example, look at this Lift code:
package net.liftweb.http
/**
* A base trait for a Factory. A Factory is both an Injector and
* a collection of FactoryMaker instances. The FactoryMaker instances auto-register
* with the Injector. This provides both concrete Maker/Vendor functionality as
* well as Injector functionality.
*/
trait Factory extends SimpleInjector
Then somewhere in your code you use this vendor like this:
val identifier = new FactoryMaker[MongoIdentifier](DefaultMongoIdentifier) {}
And then in places where you actually have to get access to DB:
identifier.vend
You can supply alternative provider in tests by surrounding your code with:
identifier.doWith(mongoId) { <your test code> }
which can be conveniently used with specs2 Around context for example:
implicit val dbContext = new Around {
def around[T: AsResult](t: => T): Result = {
val mongoId = new MongoIdentifier {
def jndiName: String = dbName
}
identifier.doWith(mongoId) {
AsResult(t)
}
}
}
It's pretty cool because it's implemented in Scala without any special bytecode or JVM hacks.
If you think the first two options are too complicated and you have a small app, you can use a properties file / command-line args to indicate whether you are running in test or production mode. Again, the idea comes from Lift :). You can easily implement it yourself, but here's how you can do it with Lift Props:
// your generic DB code:
val jdbcUrl: String = Props.get("jdbc.url", "jdbc:postgresql:database")
You can have 2 props files:
production.default.props
jdbc.url=jdbc:postgresql:database
test.default.props
jdbc.url=jdbc:h2
Lift will automatically detect the run mode (Props.mode) and pick the right props file to read. You can set the run mode with JVM command-line args.
So in this case you can either connect to an in-memory DB, or just read the run mode and set your connection up in code accordingly (mock, stub, uninitialized, etc.).
Use the regular IoC pattern: pass dependencies via constructor arguments to a class, and don't use an object. This gets inconvenient quickly unless you use a dedicated dependency injection framework.
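For illustration, a minimal constructor-injection sketch (written in Kotlin for brevity; the same shape works in Scala, and all names here are made up):

interface Database {
    fun getThings(): List<String>
}

// the dependency is passed in rather than baked into a singleton object,
// so a test can hand in a stub instead of a real connection
class ThingService(private val db: Database) {
    fun shoutyThings(): List<String> = db.getThings().map { it.uppercase() }
}

fun main() {
    // test wiring: a stub Database, no real DB needed
    val service = ThingService(object : Database {
        override fun getThings() = listOf("a", "b")
    })
    println(service.shoutyThings()) // prints [A, B]
}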
Some suggestions:
Use object for something that can't have an alternative implementation, and only if this single implementation will work in all environments. Use object for constants and pure, side-effect-free FP code. Use singletons for wiring things up at the last moment (like a class with main), not somewhere deep in the code where many components depend on them, unless they have no side effects or they use something like stackable/injectable vendor providers (see Lift).
Conclusion:
You can't mock an object or override its implementation. You need to design your code to be testable, and some of the options for that are listed above. It's good practice to make your code flexible, with easily composable parts, not only for testing but also for reusability and maintainability.
I hope someone has already faced and solved this issue, and can point me in the correct direction.
So I have the rest of my unit tests working: Core.Tests has tests for the ViewModels to see that they work properly. Now I would like to set up a Phone.Tests project that would only test the bindings. So suppose that on the login page something gets entered into the username text box; that value should be updated in the ViewModel, and vice versa.
As a testing framework I am using the WP Toolkit test framework, not the MS one; the WP Toolkit framework runs on the phone itself, meaning it has access to the UI thread.
In theory, a test is supposed to look like the following:
[TestMethod]
[Asynchronous]
public void Username_Update_View_Should_Update_Model()
{
const string testUsername = "Testing";
var usernameTextBox = GetUiElement<PhoneTextBox>("UsernamePhoneTextBox");
// initial value
Assert.AreEqual(null, _viewModel.Authorization.Username, "Default value should be blank");
//
usernameTextBox.Text = testUsername;
//
Assert.AreEqual(testUsername, _viewModel.Authorization.Username, "Binding not set for {0}", "Username");
}
private T GetUiElement<T>(string name) where T : UIElement
{
return (T)_view.FindName(name);
}
Now I need to somehow create the view in the [TestInitialize] method, and this is what I think I have set up wrong.
I have tried creating the ViewModel manually; then I created the View manually and bound both the DataContext and the ViewModel (just to be on the safe side) to the created ViewModel.
At this point, I expect that changing a property on either one should update the other.
Of course, the problem is that my test fails. I can't figure out if I should be looking at a custom presenter (all the examples seem to be for iOS and Android). I also tried the following:
public class TestAppStart : MvxNavigatingObject, IMvxAppStart
{
public void Start(object hint = null)
{
ShowViewModel<UserLoginViewModel>();
}
}
and then in my TestInitialize I thought I could start it, but I guess I need to call RegisterAppStart and, once that's done, try to get the view back from the RootFrame.
There must be an easier way... anyone??
Thanks in advance.
Edited: I have the following as a base test:
public abstract class BaseTest
{
private IMvxIoCProvider _ioc;
protected IMvxIoCProvider Ioc
{
get
{
return _ioc;
}
}
public void Setup()
{
ClearAll();
}
protected virtual void ClearAll()
{
MvxSingleton.ClearAllSingletons();
_ioc = MvxSimpleIoCContainer.Initialize();
_ioc.RegisterSingleton(_ioc);
_ioc.RegisterSingleton((IMvxTrace)new DebugTrace());
InitialiseSingletonCache();
InitialiseMvxSettings();
MvxTrace.Initialize();
AdditionalSetup();
}
private static void InitialiseSingletonCache()
{
MvxSingletonCache.Initialize();
}
protected virtual void InitialiseMvxSettings()
{
_ioc.RegisterSingleton((IMvxSettings)new MvxSettings());
}
protected virtual void AdditionalSetup()
{
_ioc.RegisterSingleton(Mock.Of<ISettings>);
_ioc.RegisterSingleton<IApplicationData>(() => new ApplicationData());
_ioc.RegisterSingleton<IPlatformSpecific>(() => new PlatformSpecific());
_ioc.RegisterSingleton<IValidatorFactory>(() => new ValidatorFactory());
//
_ioc.RegisterType<IMvxMessenger, MvxMessengerHub>();
}
}
In my test class initialize I call base.Setup(), which does the setup except for the ViewDispatcher. Unfortunately, I can't figure out how to use that dispatcher.
I guess the question I am really asking is: how do I get a View through MvvmCross?
PS: I am actually surprised that most people don't test the bindings; isn't that where most mistakes are likely to happen? I am pretty sure the project compiles even if I have a bad binding :) Scary; it kind of reminds me of the early ASP days.
PS: I actually have another test project that tests the ViewModels; on that project I managed to hook things up following the guidelines at
http://blog.fire-development.com/2013/06/29/mvvmcross-unit-testing-with-autofixture/
which works beautifully, and also uses AutoFixture, NSubstitute and xUnit: none of which I can use in the Phone.Tests project.
From my experience, testing the bindings themselves is pretty unusual - most developers stop their testing at the ViewModel and ValueConverter level.
However, if you want to test the bindings, then this should be possible. I suspect the only problem in your current tests is that you haven't initialised any of the MvvmCross infrastructure and so MvxViewModel isn't able to propagate INotifyPropertyChanged events.
If you want to initialise this part of the MvvmCross infrastructure, then be sure to initialise at least:
the MvvmCross IoC container
the MvvmCross main thread dispatcher
This is similar to what is done in the unit tests in the N=29 video - see https://github.com/MvvmCross/NPlus1DaysOfMvvmCross/blob/master/N-29-TipCalcTest/TipCalcTest.Tests/FirstViewModelTests.cs#L57
For your app, you can do this using something like:
public static class MiniSetup
{
public static readonly MiniSetup Instance = new MiniSetup();
private MiniSetup()
{
}
public void EnsureInitialized()
{
if (MvxSimpleIoCContainer.Instance != null)
return;
var ioc = MvxSimpleIoCContainer.Initialize();
ioc.RegisterSingleton<IMvxTrace>(new MvxDebugOnlyTrace());
MvxTrace.Initialize();
var mockDispatcher = new SimpleDispatcher();
ioc.RegisterSingleton<IMvxMainThreadDispatcher>(mockDispatcher);
}
}
where SimpleDispatcher is something like:
public class SimpleDispatcher
: MvxMainThreadDispatcher
{
public readonly List<MvxViewModelRequest> Requests = new List<MvxViewModelRequest>();
public override bool RequestMainThreadAction(Action action)
{
action();
return true;
}
}
If you want further MvvmCross functionality available (e.g. ShowViewModel navigation), then you'll need to provide further services, e.g. things like IMvxViewDispatcher. As the number of these increases, you might be better off just running through a full MvxSetup process (like your main app's Setup does).
I'm a .NET developer and I'm writing tests for my data access layer. I have tests that use a fake repository; I have achieved that by using Moq and Ninject.
I'm getting my head around the Entity Framework 4.1 Code First model and I'd like to create a prototype for CRUD routines. It's an MVC app, so my entities won't be tracked by a context.
To me it feels wrong that I'm writing tests that will make changes to the database. I will then have to clear the database each time I want to run these tests. Is this the only way to test CRUD routines?
Thank you
How do you expect to test data access if you don't access the data? Yes, data access should be tested against a real database. There is a very simple workaround for your problem: make your tests transactional and roll back the changes at the end of each test. You can use a base class like this (NUnit):
[TestFixture]
public abstract class BaseTransactionalTest
{
private TransactionScope _scope = null;
[SetUp]
public void Initialize()
{
_scope = new TransactionScope(...);
}
[TearDown]
public void CleanUp()
{
if (_scope != null)
{
_scope.Dispose();
_scope = null;
}
}
}