Fluent NHibernate - PersistenceSpecification of HiLo scheme - unit-testing

Not sure if I'm asking the right question so please bear with me! Bit of an NHibernate noob.
We're using Fluent NH and have the following id generation scheme for all tables
public class IdGenerationConvention : IIdConvention
{
    public void Apply(IIdentityInstance instance)
    {
        var where = string.Format("TableKey = '{0}'", instance.EntityType.Name);
        instance.GeneratedBy.HiLo("HiloPrimaryKeys", "NextHighValue", "1000", x => x.AddParam("where", where));
    }
}
We have an SQL script that generates the HiloPrimaryKeys table and seeds it with data which gets run during deployment. This is working fine.
I'm now trying to write unit tests to verify our persistence layer, ideally using SQLite in memory configuration for speed. This is how I configure NH for the tests:
[SetUp]
public void SetupContext()
{
    config = new SQLiteConfiguration()
        .InMemory()
        .ShowSql()
        .Raw("hibernate.generate_statistics", "true");
    var nhConfig = Fluently.Configure()
        .Database(PersistenceConfigurer)
        .Mappings(mappings =>
            mappings.FluentMappings.AddFromAssemblyOf<DocumentMap>()
                .Conventions.AddFromAssemblyOf<IdGenerationConvention>());
    SessionSource = new SessionSource(nhConfig);
    Session = SessionSource.CreateSession();
    SessionSource.BuildSchema(Session);
}
The problem is I don't know how to tell NHibernate about our deployment script so that it generates the correct schema and seed data during tests.
The specific problem I get is when running the following PersistenceSpecification test:
[Test]
public void ShouldAddDocumentToDatabaseWithSimpleValues()
{
    new PersistenceSpecification<Document>(Session)
        .CheckProperty(x => x.CreatedBy, "anonymous")
        .CheckProperty(x => x.CreatedOn, new DateTime(1954, 12, 23))
        .CheckProperty(x => x.Reference, "anonymous")
        .CheckProperty(x => x.IsMigrated, true)
        .CheckReference(x => x.DocumentType, documentType)
        .VerifyTheMappings();
}
Which throws the following exception:
TestCase ... failed:
Execute
NHibernate.Exceptions.GenericADOException:
could not get or update next value[SQL: ]
---> System.Data.SQLite.SQLiteException: SQLite error
no such column: TableKey
So my deduction is that it hasn't run the deployment script when checking the persistence spec.
Is there an existing solution to this situation? My Google-fu seems to have deserted me on this one.

As Brian said, you can run the deployment script after the schema is built. This code works well for me:
var config = new SQLiteConfiguration()
    .InMemory()
    .ShowSql()
    .Raw("hibernate.generate_statistics", "true");
var nhConfig = Fluently.Configure()
    .Database(config)
    .Mappings(mappings =>
        mappings.FluentMappings.AddFromAssemblyOf<DocumentMap>()
            .Conventions.AddFromAssemblyOf<IdGenerationConvention>());
var SessionSource = new SessionSource(nhConfig);
var Session = SessionSource.CreateSession();
SessionSource.BuildSchema(Session);

// run the deployment script
var deploymentScriptQuery = Session.CreateSQLQuery(
    "ALTER TABLE HiloPrimaryKeys ADD COLUMN TableKey VARCHAR(255); " +
    "INSERT INTO HiloPrimaryKeys (TableKey, NextHighValue) values ('Document', 1);");
deploymentScriptQuery.ExecuteUpdate();
The deployment script could be loaded from file etc...
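As a sketch of the "loaded from file" variant: the same script used at deployment could be read from disk and executed statement by statement after `BuildSchema`. The file name and the naive semicolon split below are assumptions, not from the question:

```csharp
// Hypothetical: run the real deployment script against the in-memory database.
// Note: splitting on ';' is naive and would break if a string literal in the
// script contained a semicolon; it is fine for simple ALTER/INSERT scripts.
var scriptText = File.ReadAllText("DeploymentScript.sql"); // path is an assumption
foreach (var statement in scriptText.Split(';'))
{
    if (!string.IsNullOrWhiteSpace(statement))
        Session.CreateSQLQuery(statement).ExecuteUpdate();
}
```

This keeps the test schema in sync with the deployed one without duplicating SQL in the test code.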
Building the FNH configuration and the database schema is a time-consuming operation. Execution of the test suite will take an unacceptable amount of time as the number of tests that use the schema grows, if the schema and the configuration are rebuilt by each test class. Both the configuration and the schema should be shared between all tests. Here is how to achieve that without losing test isolation.
EDIT:
If more than one session instance is required in a test, then connection pooling should be turned on, or both sessions should be created via the same connection. Details here...

Disclaimer: I'm not an NHibernate user...
...But one possible workaround would be to run your deployment script (or some variation of it) in your test's Setup method (using a shell execute/Process.Start) or to run it in your build script just before you run these tests. You may need to add cleanup in this case if you want a fresh database each test.

We have an SQL script that generates the HiloPrimaryKeys table and seeds it with data which gets run during deployment. This is working fine.
Can you create an entity that gets mapped that represents this HiloPrimaryKeys table and fill this table before your tests start? You could put this in a base class that all your other tests inherit from so that you wouldn't have to add this to every testing class.
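A sketch of what such a mapped entity could look like (the class and property names here are assumptions; the table and column names come from the question):

```csharp
// Hypothetical entity and Fluent NHibernate map for the hi/lo table, so that
// BuildSchema creates it along with the rest of the schema.
public class HiloPrimaryKey
{
    public virtual string TableKey { get; set; }
    public virtual int NextHighValue { get; set; }
}

public class HiloPrimaryKeyMap : ClassMap<HiloPrimaryKey>
{
    public HiloPrimaryKeyMap()
    {
        Table("HiloPrimaryKeys");
        Id(x => x.TableKey).GeneratedBy.Assigned();
        Map(x => x.NextHighValue);
    }
}
```

A seed row per entity type (e.g. `new HiloPrimaryKey { TableKey = "Document", NextHighValue = 1 }`) would still need to be saved before the tests run.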
This is similar to Brian's solution but instead this table will be created when you do your automapping just like the rest of your tables.

Related

Test runners inconsistent with HttpClient and Mocking HttpMessageRequest XUnit

So let me start by saying I've seen all the threads about the wars between creating a wrapper vs mocking the HttpMessageHandler. In the past, I've done the wrapper method with great success, but this time I thought I'd go down the path of mocking the HttpMessageHandler.
For starters here is an example of the debate: Mocking HttpClient in unit tests. I want to add that's not what this is about.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless aws lambdas, and the basic flow is like so:
//some pseudo code
public class Functions
{
    private readonly HttpClient _httpClient;

    public Functions(HttpClient client)
    {
        _httpClient = client;
    }

    public async Task<APIGatewayResponse> GetData(APIGatewayRequest request, ILambdaContext context)
    {
        var result = await _httpClient.GetAsync("http://example.com");
        return new APIGatewayResponse
        {
            StatusCode = result.StatusCode,
            Body = await result.Content.ReadAsStringAsync()
        };
    }
}
...
[Fact]
public void ShouldDoCall()
{
    var requestUri = new Uri("http://example.com");
    var mockResponse = new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent(expectedResponse) };
    var mockHandler = new Mock<HttpClientHandler>();
    mockHandler
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>())
        .ReturnsAsync(mockResponse);
    var f = new Functions(new HttpClient(mockHandler.Object));
    var result = f.GetData(request, context).Result;
    mockHandler.Protected().Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Get &&
            req.RequestUri == requestUri // to this uri
        ),
        ItExpr.IsAny<CancellationToken>()
    );
    Assert.Equal(200, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch they pass, and pass fast!
When I run them all manually with Resharper 2018, they fail.
Equally, when they get run within the CI/CD platform, which is a docker container with the net core 2.1 SDK on a Linux distro, they too fail.
These tests should not be run in parallel (the tests default this way). I have about 30 tests around these methods combined, and each one randomly fails on the Moq Verify portion. Sometimes they pass, sometimes they fail. If I break the tests down per test class and run the groups that way, instead of all in one go, then they all pass in chunks. I'll also add that I have even gone through isolating the variables per test method to make sure there is no overlap.
So, I'm really lost with trying to handle this through here and make sure this is testable.
Are there different ways to approach the HttpClient where it can consistently pass?
After lots of back and forth, I found two things.
I couldn't get parallel processing disabled within the Docker setup, which is where I thought the issue was (I even made it thread-sleep between tests to slow things down, which felt really icky to me).
I found that the test runners were telling me all the tests passed locally, when about half of them failed on the Docker test runner. What ended up being the issue was a magic string in how environment variables were being set and read.
Small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our Docker image.
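For what it's worth, one way to avoid shared mock state (and Moq's string-based `Protected()` setup) altogether is a small hand-rolled handler; each test gets its own instance, so there is nothing to race on when tests run in parallel. A sketch, with a made-up type name:

```csharp
// Minimal fake handler: records the last request and returns a canned response.
public class StubHttpMessageHandler : HttpMessageHandler
{
    private readonly HttpResponseMessage _response;

    public int CallCount { get; private set; }
    public HttpRequestMessage LastRequest { get; private set; }

    public StubHttpMessageHandler(HttpResponseMessage response)
    {
        _response = response;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        CallCount++;
        LastRequest = request;
        return Task.FromResult(_response);
    }
}
```

A test can then do `new Functions(new HttpClient(stub))` and assert on `stub.CallCount` and `stub.LastRequest.RequestUri` directly, instead of calling `Protected().Verify`.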

How can I unit test a MassTransit consumer that builds and executes a routing slip?

In .NET Core 2.0 I have a fairly simple MassTransit routing slip that contains 2 activities. This is built and executed in a consumer, and it all ties back to an Automatonymous state machine. It all works great, albeit with a few final clean-up tweaks needed.
However, I can't quite figure out the best way to write unit tests for my consumer as it builds a routing slip. I have the following code in my consumer:
public async Task Consume(ConsumeContext<ProcessRequest> context)
{
    var builder = new RoutingSlipBuilder(NewId.NextGuid());
    SetupRoutingSlipActivities(builder, context);
    var routingSlip = builder.Build();
    await context.Execute(routingSlip).ConfigureAwait(false);
}
I created the SetupRoutingSlipActivities method as I thought it would help me write tests to make sure the right activities were being added and it simply looks like:
public void SetupRoutingSlipActivities(RoutingSlipBuilder builder, ConsumeContext<IProcessCreateLinkRequest> context)
{
    builder.AddActivity(
        nameof(ActivityOne),
        new Uri("execute_activity_one_example_address"),
        new ActivityOneArguments(
            context.Message.Id,
            context.Message.Name)
    );
    builder.AddActivity(
        nameof(ActivityTwo),
        new Uri("execute_activity_two_example_address"),
        new ActivityTwoArguments(
            context.Message.AnotherId,
            context.Message.FileName)
    );
}
I tried to just write tests for the SetupRoutingSlipActivities by using a Moq mock builder and a MassTransit InMemoryTestHarness but I found that the AddActivity method is not virtual so I can't verify it as such:
aRoutingSlipBuilder.Verify(x => x.AddActivity(
    nameof(ActivityOne),
    new Uri("execute_activity_one_example_address"),
    It.Is<ActivityOne>(y => y.Id == 1 && y.Name == "A test name")));
Please ignore some of the weird data in the code examples as I just put up a simplified version.
Does anyone have any recommendations on how to do this? I also wanted to test to make sure the RoutingSlipBuilder was created but as that instance is created in the Consume method I wasn't sure how to do it! I've searched a lot online and through the MassTransit repo but nothing stood out.
Look at how the Courier tests are written, there are a number of test fixtures available to test routing slip activities. While they aren't well documented, the unit tests are a working testament to how the testing is used.
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.Tests/Courier/TwoActivityEvent_Specs.cs
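Another Moq-free option worth noting: `RoutingSlipBuilder` is concrete and cheap to construct, and `Build()` returns the routing slip contract, so a test can assert on the built itinerary instead of trying to verify calls to the non-virtual `AddActivity`. A sketch (`consumer` and `contextStub` are illustrative stand-ins, not names from the question):

```csharp
// Sketch: pass a real builder to the method under test, then inspect the
// resulting routing slip rather than mocking the builder.
var builder = new RoutingSlipBuilder(NewId.NextGuid());
consumer.SetupRoutingSlipActivities(builder, contextStub); // contextStub: e.g. Mock<ConsumeContext<IProcessCreateLinkRequest>>.Object
var slip = builder.Build();

Assert.Equal(2, slip.Itinerary.Count);
Assert.Equal(nameof(ActivityOne), slip.Itinerary[0].Name);
Assert.Equal(nameof(ActivityTwo), slip.Itinerary[1].Name);
```

This tests exactly what the method is responsible for (which activities are added, in what order) without touching the transport at all.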

How to modify domain mapping on grails before test executions

Thanks in advance for your help!!
I have to change the id generation of a domain object. Depending on an environment variable, the PK will either be assigned by a sequence (its value will be over 100M), or, when working in another "scope", I will have to set the PK of the same domain manually (these come from a migration process, so the inserted PKs will be from 40M to 90M; it's an on-demand process):
As an example:
static mapping = {
    if (System.getenv("MIGRATOR")) {
        id generator: 'assigned'
    } else {
        id generator: 'sequence', params: [sequence: 'MY_SEQ']
    }
}
And I would like with my integration test do something like:
void "test ..."() {
    System.metaclass.'static'.getenv = { it.equals("MIGRATOR") }
    // ...test stuff about migration and things related to inserting ad hoc domain instances
}
But I realize that the environment is set up before the tests run, so I don't see another way.
Note: I use an integration test because the code is transactional (it uses withTransaction), so it doesn't work as a unit test. That's why I do it this way, but I'm open to other proposals that would change my approach to testing this.
If you just want to make sure that your mapping is correct with your env variables, you can do an integration test and inspect your domain class mapping through the org.codehaus.groovy.grails.orm.hibernate.cfg.Mapping instance:
Mapping mapping = new GrailsDomainBinder().getMapping(MyDomainClass)
println mapping.getIdentity() //id[generator:sequence, column:id, type:class java.lang.Long]
Another option is to set your variable in your cmd / console before running the test, take the advantage of running a single test in grails:
set MIGRATOR=true
grails test-app -integration package.TestSpec

How to create unit tests against non in-memory database such as MySQL in Play framework, with resetting to known state?

I want to create unit tests that cover code that use relational database in Play framework 2.1.0. There are many possibilities for this and all cause problems:
Testing on in-memory H2 database
Play framework documentation proposes to run unit tests on H2 in-memory database, even if main database used for development and production use other software (i.e. MySQL):
app = Helpers.fakeApplication(Helpers.inMemoryDatabase());
My application doesn't use complicated RDBMS features such as stored procedures, and most database access goes through Ebean calls, so it should be compatible with both MySQL and H2.
However, table creation statements in evolutions use MySQL-specific features, such as specifying ENGINE = InnoDB, DEFAULT CHARACTER SET = utf8, etc. I fear if I will remove these proprietary parts of CREATE TABLE, MySQL will use some default setting that I can't control and that depend on version, so to test and develop application main MySQL config must be modified.
Anybody used this approach (making evolutions compatible with both MySQL and H2)?
Other ideas how it can be handled:
Separate evolutions for MySQL and H2 (not a good idea)
Some way to make H2 ignore the additional MySQL stuff in CREATE TABLE (MySQL compatibility mode doesn't work; it still complains even about the default character set). I don't know how.
Testing on the same database driver as main database
The only advantage of the H2 in-memory database is that it is fast; testing on the same database driver as the dev/production database may be better, because it is closer to the real environment.
How it can be done right in Play framework?
Tried:
Map<String, String> settings = new HashMap<String, String>();
settings.put("db.default.url", "jdbc:mysql://localhost/sometestdatabase");
settings.put("db.default.jndiName", "DefaultDS");
app = Helpers.fakeApplication(settings);
Looks like evolutions work here, but how it's best to clean database before each test? By creating custom code that truncates each table? If it will drop tables, then will evolutions run again before next test, or they are applied once per play test command? Or once per Helpers.fakeApplication() invocation?
What are best practices here? Heard about dbunit, is it possible to integrate it without much pain and quirks?
First, I would recommend you to use the same RDBMS for testing and production as it could avoid some hard-to-find bugs.
Concerning the need to clean your database between each test, you can use the Ebean DdlGenerator to generate scripts that recreate a clean database, and JUnit's @Before annotation to automatically execute these scripts before every test.
Using the DdlGenerator can be done like this:
EbeanServer server = Ebean.getServer(serverName);
ServerConfig config = new ServerConfig();
DdlGenerator ddl = new DdlGenerator((SpiEbeanServer) server, new MySqlPlatform(), config);
This code can be placed in a base class that your tests inherit from (or inside a custom Runner that you can use with the @RunWith annotation).
It will also allow you to easily automate the FakeApplication creation, avoiding some boilerplate code.
Some links that can be helpful :
http://blog.matthieuguillermin.fr/2012/03/unit-testing-tricks-for-play-2-0-and-ebean/
https://gist.github.com/nboire/2819920
I used the same database engine as the main database, and dbunit for cleaning up before each test.
public class SomeTest {
    // ...

    @Before
    public void startApp() throws Exception {
        // Set up connection to test database, different from main database.
        // Reading these from Config would be better than hard-coding.
        Map<String, String> settings = new HashMap<String, String>();
        settings.put("db.default.url", "jdbc:mysql://localhost/somedatabase?characterEncoding=UTF-8&useOldAliasMetadataBehavior=true");
        settings.put("db.default.user", "root");
        settings.put("db.default.password", "root");
        settings.put("db.default.jndiName", "DefaultDS"); // make connection available to dbunit through JNDI
        app = Helpers.fakeApplication(settings);
        Helpers.start(app);
        databaseTester = new JndiDatabaseTester("DefaultDS");
        IDataSet initialDataSet = new FlatXmlDataSetBuilder().build(play.Play.application()
                .resourceAsStream("/resources/dataset.xml"));
        databaseTester.setDataSet(initialDataSet);
        databaseTester.onSetup();
    }

    @After
    public void stopApp() throws Exception {
        databaseTester.onTearDown();
        Helpers.stop(app);
    }
}
My dataset.xml just contains the table names, to tell dbunit to empty these tables before each test. It can also contain fixtures.
<?xml version="1.0" encoding="UTF-8"?>
<dataset>
    <name_of_my_first_table />
    <name_of_my_second_table />
</dataset>
Evolutions run automatically on test database when using this approach, so if you remove all tables from test database, they will be recreated.
It is overkill to use dbunit if you only need to clean tables; you can clean them by issuing queries directly or by using the Ebean DdlGenerator. But I also use dbunit for comparing data.
I don't use Helpers.running, because it takes a Runnable, and Runnable implementations cannot throw checked exceptions, which is very inconvenient for tests. But if you look at the code for running(), it just calls Helpers.start() and Helpers.stop(), so I call these methods directly in @Before and @After.
I decided not to use H2 for running tests: yes, it runs faster, but there are too many differences between it and MySQL.
Anybody used this approach (making evolutions compatible with both MySQL and H2)?
I have found an answer for the MySQL specific features: How can I unit test for MySQL database with Play 2.x?
When I wrote tests for my Postgres database, I simply created a HashMap to connect to the database, and then wrote test queries to make sure the correct number of records exists, and so on. Here is my code.
@Test
public void testDataBase() {
    final HashMap<String, String> postgres = new HashMap<String, String>();
    postgres.put("db.default.driver", "org.postgresql.Driver");
    postgres.put("db.default.url", "jdbc:postgresql://localhost/myDataBase");
    postgres.put("db.default.user", "postgres");
    postgres.put("db.default.password", "password");
    running(fakeApplication(postgres), new Runnable() {
        @Override
        public void run() {
            //Insert Assertions Here
        }
    });
}
You can also use a DB mock, if the goal is to validate your Slick/JPA/Anorm mappings and the functions based on them.
Where it fits, it has the advantage of being closer to unit testing than a test DB, and easier to manage (no setup/clean-up tasks, no synchronizing of tests to avoid access to the same test tables).
You can have a look at my framework Acolyte ( http://github.com/cchantep/acolyte ) which is used in specs of Anorm itself (e.g. https://github.com/playframework/playframework/blob/master/framework/src/anorm/src/test/scala/anorm/SqlResultSpec.scala ).

calling an Asp.net MVC 3 action method that updates a database in a unit testing method

Coders, I am in the process of writing test cases for an ASP.NET MVC 3 project, and I need to call an action method that inserts data into a database using Entity Framework. Here is the code for the action method:
//
// POST: /School/Create
[HttpPost]
public ActionResult Create(School school)
{
    if (ModelState.IsValid)
    {
        db.Schools.Add(school);
        db.SaveChanges();
        return RedirectToAction("Index");
    }
    return View(school);
}
And here is the code for my test method:
[TestMethod]
public void CreateNewSchool()
{
    var schoolController = new SchoolController();
    var viewResult = schoolController.Index();

    // creating a school object
    School school = new School();
    school.Name = "OOO";

    // passing the school object to the action method
    schoolController.Create(school);

    // making sure that the model is not null
    Assert.IsNotNull(viewResult.Model);
}
Notice, however, that I don’t check if the data were actually inserted in the database. I just check that the model of the view is not null. I do manually check the database using SQL server management studio.
The problem, though, is that when I call the action method in the test method to create/insert a record, nothing happens to the database. However, if I run the application, browse to the create page, and try to create a new record, then the record is added to the database. So it appears that insertion into the database happens only if I run the application, browse to the create page, and hit the create button; I cannot programmatically call the action method in the test method to insert a new record. I have also debugged the test case, and it did hit the db.SaveChanges(); line in the action method, but no changes were reflected in the database.
So, can someone explain to me why I am not able to insert a record by calling the action method in my test method?
Thanks in advance.
I would look into how your db context is getting instantiated. In many cases it is not desirable for unit tests to cause database round-trips, so people use strategies like mocking to prevent them. It might be something as simple as a different connection string being used when you run a unit test versus running the application under ASP.NET.
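For example, if the context were supplied through the constructor (a common refactoring; the names below are illustrative, not from the question), the test could point the controller at a known test database or a fake:

```csharp
// Sketch: constructor injection makes the EF context replaceable in tests.
public class SchoolController : Controller
{
    private readonly SchoolContext db;

    // MVC uses the default context; tests can pass a context configured
    // with a test connection string (or a fake implementation).
    public SchoolController() : this(new SchoolContext()) { }

    public SchoolController(SchoolContext context)
    {
        db = context;
    }

    // ... actions use db as before
}
```

With this in place, a test can construct the controller with a context it controls and then query that same context to assert the record was saved.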
Do you have any validation on the School class? I am guessing that when you run it in the test environment, the school object is not valid, so ModelState.IsValid returns false and it doesn't save.