Another datasource for service layer unit testing - unit-testing

After learning JUnit and experiencing its benefits for both the programmer and the project, I now want to unit test the service layer of each entity and check that each method works properly.
As of now, I have already created a unit test for all of my service classes, but the problem is that the datasource's data isn't suited for testing. So I have to create another database for service layer testing and configure that datasource for the unit tests of the service layer. The thing is, I don't know how to configure another datasource that only the code under src/test/java can access and that can't be touched in production. I'm still new to Spring Boot and Spring Data, so I'm asking how to configure such a setup here.
As of now I have this application.properties configuration.
spring.datasource.url=<DatabaseURL>
spring.datasource.username=<DatabaseUsername>
spring.datasource.password=<DatabasePassword>
spring.datasource.driver-class-name=<DatabaseDriver>
# another datasource configuration
And here's sample code for a service class, which uses the datasource configuration from application.properties.
@Service
public class FooService {

    @PersistenceContext
    private EntityManager entityManager;

    public List<Foo> findAllByFooForm(FooForm fooForm) {
        // JPA CriteriaBuilder query according to FooForm
        return entityManager.createQuery(query).getResultList();
    }
}
Finally, here's a sample unit test for a service class.
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class FooServiceTest {

    @Autowired
    private FooService fooService;

    @Test
    public void testFindAllByFooForm() {
        // Test statements
    }
}

There are a few approaches which can be combined to give you good control over this.
First of all, if you create src/test/resources/application.properties, then that file will only be available on the classpath during testing. It will override any properties that you have defined in src/main/resources/application.properties.
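For example, a test-only properties file could point the tests at an in-memory database instead of the real one (a minimal sketch, assuming H2 is on the test classpath; adjust the driver and URL to whatever test database you use):
spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1
spring.datasource.username=sa
spring.datasource.password=
spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.hibernate.ddl-auto=create-drop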
If you are using an in-memory database to support those tests, then you can ensure that different import.sql files are loaded, through the use of the following property:
spring.jpa.properties.hibernate.hbm2ddl.import_files=import-test1.sql
That property takes a comma-separated list of import scripts, so you can have a base set of data loaded by one script and additional (perhaps test-specific) data loaded by others.
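For example (the script names here are only placeholders for files on your test classpath):
spring.jpa.properties.hibernate.hbm2ddl.import_files=import-base.sql,import-test1.sql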
If you wish to connect to a different database in each test, or cause different import scripts to be used, then you can use profiles to trigger this. If you create a properties file application-test1.properties, then the test itself can cause it to be loaded using the annotation @ActiveProfiles({"test1"}), as in the sketch below.
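Putting the pieces together, the test class from the question only needs the extra annotation to pick up the profile-specific properties (a sketch reusing the annotations already shown above; the profile name is illustrative):
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@ActiveProfiles({"test1"})
public class FooServiceTest {

    @Autowired
    private FooService fooService;

    @Test
    public void testFindAllByFooForm() {
        // Runs against the datasource defined in application-test1.properties
    }
}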

Related

Override method not being called from warehouse mobile application ax

I have been working on a requirement for the advanced warehouse mobile application in AX. The requirement is to do something when an item is scanned. In order to do this, I register an override method for the leave event when the item text box is built. The build method is below:
// This method is updated in WhsWorkExecuteForm
protected void createTextBox(
    container _textBox,
    boolean   _password = false)
{
    FormBuildStringControl stringControl;

    stringControl = controlGroup.addControl(FormControlType::String, this.elementName(_textBox));

    if (this.elementHasError(_textBox))
    {
        stringControl.colorScheme(FormColorScheme::RGB);
        stringControl.backgroundColor(WHSWorkExecuteForm::errorBackgroundColor());
    }

    stringControl.text(this.elementData(_textBox));
    stringControl.label(this.elementLabel(_textBox));
    stringControl.passwordStyle(_password);
    stringControl.enabled(this.elementEnabled(_textBox));

    // Below code is added to register the override method
    if (this.elementName(_textBox) == #ItemId)
    {
        stringControl.registerOverrideMethod(methodStr(FormStringControl, Leave), methodStr(WHSWorkExecuteForm, DynamicButtonControl_modified), this);
    }
}
This method is called when I run the warehouse app from the AX AOT (i.e. the action menu item WHSWorkExecute), but it is not working from the browser. I have run the incremental CIL as well, but there is no change.
Any ideas? Do I need to make changes in DisplayIEOS.aspx as well?
The web browser part of the Warehouse Mobile Device Portal is driven by XML files that are exchanged between the AOS and the IIS website. You can read more about that in Warehouse Mobile Device Portal Architecture.
The WHSWorkExecute form in the AOT of the Dynamics AX desktop client is basically a quick-and-dirty "emulator" of the web client. It lets you test changes to the WHSWorkExecute framework logic that drives the mobile device functionality without having to set up the components that enable the web client. But changing this form at run time with FormBuild classes, as in your code, will have no effect on the web client, because it does not change the XML data sent to the website.
Instead, you should use the methods provided by the WHSWorkExecute framework to add controls. See Creating Custom Solutions with the Warehouse Mobile Device Portal; it has a section on the buildControl method of the framework.
How to handle a modified event of a control depends on what you want to do. The second link briefly describes how you could implement some client-side-only logic.
If you need to execute logic on the AOS, you will have to modify one of the specialized build methods or create your own. The second link also has some guidance on this. Registering override methods on FormControl objects will not work because, again, this does not change the XML data sent to the web client.

ef core migration can't use secret manager

When I create .NET Core web applications, I use the Secret Manager during testing. I am generally able to create a new web project (MVC or Web API), right-click on the project and select "Manage User Secrets". This opens a JSON file where I add the secrets. I then use them in my Startup.cs, something like this:
services.AddDbContext<ApplicationDbContext>(options =>
options.UseMySql(Configuration["connectionString"]));
The website works fine with this and connects to the database without problems. However, when I try using EF Core migration commands such as add-migration, they don't seem to be able to access the connection string from the Secret Manager. I get an error saying "connection string can't be null". The error goes away when I replace Configuration["connectionString"] with the hard-coded string. I have checked online and checked the .csproj file; it already contains the following lines:
<UserSecretsId>My app name</UserSecretsId>
And later:
<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.1" />
  <DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />
</ItemGroup>
Is there anything I need to add so the migrations can access the connection string?
Update
I only have one constructor in the context class:
public ApplicationDBContext(DbContextOptions<ApplicationDBContext> options) : base(options)
{
}
I am currently running into this exact problem as well. I have come up with a solution that works for now, but one that might be considered messy at best.
I have created a configuration class that builds an IConfiguration instance when requested:
public static class Configuration
{
    public static IConfiguration GetConfiguration()
    {
        return new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", true, true)
            .AddUserSecrets<Startup>()
            .AddEnvironmentVariables()
            .Build();
    }
}
In the Migration, you can then get the Configuration File and access its UserSecrets like this:
protected override void Up(MigrationBuilder migrationBuilder)
{
    var conf = Configuration.GetConfiguration();
    var secret = conf["Secret"];
}
I have tested creating a SQL Script with these User Secrets, and it works (you obviously wouldn't want to keep the Script laying around since it would expose the actual secret).
Update
The above configuration can also be set up in the Program.cs class, in the BuildWebHost method:
var config = new ConfigurationBuilder().AddUserSecrets<Startup>().Build();
return WebHost.CreateDefaultBuilder(args).UseConfiguration(config)...Build()
Or in the Startup constructor, if you use that convention.
Update 2 (explanation)
It turns out this issue arises because the migration commands run with the environment set to "Production". The Secret Manager is pre-set to work only in the "Development" environment (for a good reason). The .AddUserSecrets<Startup>() call simply adds the secrets for all environments.
To ensure that the secrets aren't picked up on your production server, there are two solutions I have noticed. One is suggested here: https://learn.microsoft.com/en-us/ef/core/miscellaneous/cli/powershell
Set env:ASPNETCORE_ENVIRONMENT before running to specify the ASP.NET Core environment.
This solution means there is no need to add .AddUserSecrets<Startup>() to every project created on the computer in the future. However, if you happen to share the project across other computers, the variable needs to be configured on each computer.
The second solution is to call .AddUserSecrets<Startup>() only in debug builds, like this:
return new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", true, true)
#if DEBUG
    .AddUserSecrets<Startup>()
#endif
    .AddEnvironmentVariables()
    .Build();
Additional Info
The IConfiguration interface can be injected into controllers through their constructor, e.g.:
private readonly IConfiguration _configuration;

public TestController(IConfiguration configuration)
{
    _configuration = configuration;
}
Thus, any secrets and application settings are accessible in that controller through _configuration["secret"], as in the sketch below.
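For example, an action method could read a secret the same way it reads any other configuration value (a sketch; the key name "secret" is just an illustration):
public IActionResult Index()
{
    // Comes from the Secret Manager in Development, or from
    // appsettings.json / environment variables elsewhere
    var secret = _configuration["secret"];
    return Ok(secret ?? "no value configured");
}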
However, if you want to access application secrets from, for example, a migration file, which lives outside of the web application itself, you need to stick to the original approach, because there's no easy way (that I know of) to access those secrets otherwise (one use case I can think of would be seeding the database with an admin user and a master password).
To use migrations in .NET Core with user secrets, we can also create a class (SqlContextFactory) that builds its own instance of the SqlContext using a specified configuration builder. This way we do not have to create some kind of workaround in our Program or Startup classes. In the example below, SqlContext is an implementation of DbContext/IdentityDbContext.
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;
using Microsoft.Extensions.Configuration;

public class SqlContextFactory : IDesignTimeDbContextFactory<SqlContext>
{
    public SqlContext CreateDbContext(string[] args)
    {
        var config = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: false)
            .AddUserSecrets<Startup>()
            .AddEnvironmentVariables()
            .Build();

        var builder = new DbContextOptionsBuilder<SqlContext>();
        builder.UseSqlServer(config.GetConnectionString("DefaultConnection"));

        return new SqlContext(builder.Options);
    }
}
Since I have noticed a lot of people running into this confusion, I am writing a simplified version of this resolution.
The Problem/Confusion
The Secret Manager in .NET Core is designed to work only in the Development environment. When running your app, your launchSettings.json file ensures that ASPNETCORE_ENVIRONMENT is set to "Development". However, when you run EF migrations, that file is not used. As a result, when you run migrations, your web app does not run in the Development environment and thus has no access to the Secret Manager. This often causes confusion as to why EF migrations can't use the Secret Manager.
The Resolution
Make sure the environment variable ASPNETCORE_ENVIRONMENT is set to "Development" on your computer, as in the sketch below.
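For example, in PowerShell you could set the variable for the current session right before running the tools (the migration name is just a placeholder):
$env:ASPNETCORE_ENVIRONMENT = "Development"
dotnet ef migrations add InitialCreate
The same $env:ASPNETCORE_ENVIRONMENT assignment works in the Visual Studio Package Manager Console before running Add-Migration.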
Using .AddUserSecrets<Startup>() will create a circular reference if the DbContext lives in a separate class library and you use a design-time factory there.
The clean way of doing it is:
public class DesignTimeDbContextFactory : IDesignTimeDbContextFactory<AppDbContext>
{
    public AppDbContext CreateDbContext(string[] args)
    {
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
#if DEBUG
            .AddJsonFile(Directory.GetCurrentDirectory() +
                "{project path}/appsettings.Development.json",
                optional: true, reloadOnChange: true)
#else
            .AddJsonFile(Directory.GetCurrentDirectory() +
                "{startup project path}/appsettings.json",
                optional: true, reloadOnChange: true)
#endif
            .AddEnvironmentVariables()
            .Build();

        var connectionString = configuration.GetConnectionString("DefaultConnection");

        var builder = new DbContextOptionsBuilder<AppDbContext>();
        Console.WriteLine(connectionString);
        builder.UseSqlServer(connectionString);

        return new AppDbContext(builder.Options);
    }
}
The Explanation:
Secret Manager is meant for development time only, so it will not affect the migration when it runs in a pipeline in the QA or Production stages; to handle that, we use the dev connection string, which lives in appsettings.Development.json, inside the #if DEBUG branch.
The benefit of this approach is that the class library acting as your data infrastructure does not need to reference the web project's Startup class.

Is there a way to map an object graph with @Query?

I am trying to migrate my SDN 3 embedded configuration to SDN 3.3.0 with a Neo4j instance in server mode (communicating via the REST API).
When the DB was embedded, making a lot of small hits to the DB was not a big deal, as Neo4j can handle that kind of query very fast.
However, now that I run Neo4j separately from my application (i.e. in server mode), making a lot of small queries is not advisable because of the network overhead.
User user = userRespository.findOne(123);
user.fetch(user.getFriends());
user.fetch(user.getManager());
user.fetch(user.getAgency());
This will trigger quite a few queries, especially if I want to get, not a single user, but a list of users.
Can I use the @Query annotation to fetch the user and the related entities and map them into a User object?
I was thinking of something like this:
@Query("MATCH (u:User)-[r:FRIEND]->(f) RETURN u,r,f")
Is such a thing possible with Spring Data Neo4j? Will it be possible with Spring Data Neo4j 4?
You can define a type for the query result using the @QueryResult annotation and let the query method return an object of that type, e.g.:
@QueryResult
public interface UserWithFriends {
    @ResultColumn("u")
    User getUser();

    @ResultColumn("f")
    List<User> friends();
}

@Query("MATCH (u:User)-[:FRIEND]->(f) WHERE u.name={name} RETURN u,f")
UserWithFriends getUserByName(@Param("name") String name);
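Calling that repository method then returns the user together with the friends in a single round trip (a sketch; the repository field name and the "john" value are illustrative):
UserWithFriends result = userRepository.getUserByName("john");
User user = result.getUser();
List<User> friends = result.friends();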

Generate Symfony2 fixtures from DB?

Is it possible to generate fixtures from an existing DB in Symfony2/Doctrine? How could I do that?
Example:
I have defined 15 entities and my Symfony2 application is working. Some people are now able to browse to the application, and by using it they have inserted about 5000 rows so far. I want that inserted data as fixtures, but I don't want to write them by hand. How can I generate them from the DB?
There's no direct way within Doctrine or Symfony2, but writing a code generator for it (either inside or outside of sf2) would be trivial. Just pull each property, generate a line of code to set it, and put the result in your fixture loading method. Example:
<?php
$i = 0;
$code = '';
$entities = $em->getRepository('MyApp:Entity')->findAll();
foreach ($entities as $entity) {
    $code .= "\$entity_{$i} = new MyApp\\Entity();\n";
    $code .= "\$entity_{$i}->setMyProperty('" . addslashes($entity->getMyProperty()) . "');\n";
    $code .= "\$manager->persist(\$entity_{$i});\n\$manager->flush();\n";
    ++$i;
}
// store code somewhere with file_put_contents
As I understand your question, you have two databases: the first is already in production and filled with 5000 rows, and the second one is a new database you want to use for new tests and development. Is that right?
If so, I suggest you create two entity managers in your test environment: the first will be the 'default' one, which will be used in your project (your controllers, etc.). The second one will be used to connect to your production database. You will find how to deal with multiple entity managers here: http://symfony.com/doc/current/cookbook/doctrine/multiple_entity_managers.html
Then, you should create a fixture class which has access to your container. There is a how-to here: http://symfony.com/doc/current/bundles/DoctrineFixturesBundle/index.html#using-the-container-in-the-fixtures.
Using the container, you will have access to both entity managers. And this is the 'magic': you will have to retrieve the objects from your production database and persist them with the default entity manager, which will insert them into your test database.
Let me point your attention to two things:
If there are relationships between objects, you will have to take care of those dependencies: owning side, inverse side, ...
If you have 5000 rows, watch the memory your script will use. Another solution may be to use native SQL to retrieve all the rows from your production database and insert them into your test database. Or an SQL script...
I do not have any code to suggest to you, but I hope this idea will help you.
I assume that you want to use fixtures (and not just dump the production or staging database into the development database) because (a) your schema changes, so the dumps would not work if you update your code, or (b) you don't want to dump the whole database but only want to extend some custom fixtures. An example I can think of: you have 206 countries in your staging database and users add cities to those countries; to keep the fixtures small you only have 5 countries in your development database, but you want to add the cities that users added to those 5 countries in the staging database to the development database.
The only solution I can think of is to use the mentioned DoctrineFixturesBundle and multiple entity managers.
First of all you should configure two database connections and two entity managers in your config.yml
doctrine:
    dbal:
        default_connection: default
        connections:
            default:
                driver:   %database_driver%
                host:     %database_host%
                port:     %database_port%
                dbname:   %database_name%
                user:     %database_user%
                password: %database_password%
                charset:  UTF8
            staging:
                ...
    orm:
        auto_generate_proxy_classes: %kernel.debug%
        default_entity_manager: default
        entity_managers:
            default:
                connection: default
                mappings:
                    AcmeDemoBundle: ~
            staging:
                connection: staging
                mappings:
                    AcmeDemoBundle: ~
As you can see, both entity managers map the AcmeDemoBundle (in this bundle I will put the code to load the fixtures). If the second database is not on your development machine, you could just dump the SQL from the other machine to the development machine. That should be possible since we are talking about 5000 rows and not about millions of rows.
What you can do next is to implement a fixture loader that uses the service container to retrieve the second entity manager and use Doctrine to query the data from the second database and save it to your development database (the default entity manager):
<?php

namespace Acme\DemoBundle\DataFixtures\ORM;

use Doctrine\Common\DataFixtures\FixtureInterface;
use Doctrine\Common\Persistence\ObjectManager;
use Symfony\Component\DependencyInjection\ContainerAwareInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Acme\DemoBundle\Entity\City;
use Acme\DemoBundle\Entity\Country;

class LoadData implements FixtureInterface, ContainerAwareInterface
{
    private $container;
    private $stagingManager;

    public function setContainer(ContainerInterface $container = null)
    {
        $this->container = $container;
        $this->stagingManager = $this->container->get('doctrine')->getManager('staging');
    }

    public function load(ObjectManager $manager)
    {
        $this->loadCountry($manager, 'Austria');
        $this->loadCountry($manager, 'Germany');
        $this->loadCountry($manager, 'France');
        $this->loadCountry($manager, 'Spain');
        $this->loadCountry($manager, 'Great Britain');

        $manager->flush();
    }

    protected function loadCountry(ObjectManager $manager, $countryName)
    {
        $country = new Country($countryName);

        $cities = $this->stagingManager->createQueryBuilder()
            ->select('c')
            ->from('AcmeDemoBundle:City', 'c')
            ->leftJoin('c.country', 'co')
            ->where('co.name = :country')
            ->setParameter('country', $countryName)
            ->getQuery()
            ->getResult();

        foreach ($cities as $city) {
            $city->setCountry($country);
            $manager->persist($city);
        }

        $manager->persist($country);
    }
}
What the loadCountry method does is load the cities from the staging entity manager, point them at the fixture country (the one that already exists in your current fixtures), and persist them using the default entity manager (your development database).
Sources:
DoctrineFixturesBundle
How to work with Multiple Entity Managers
You could use https://github.com/Webonaute/DoctrineFixturesGeneratorBundle
It adds the ability to generate fixtures for a single entity using commands like:
$ php bin/console doctrine:generate:fixture --entity=Blog:BlogPost --ids="12 534 124" --name="bug43" --order="1"
Or you can create a full snapshot:
php app/console doctrine:generate:fixture --snapshot --overwrite
The Doctrine fixtures are useful because they allow you to create objects and insert them into the database. This is especially useful when you need to create associations or, say, encode a password using one of the password encoders. If you already have the data in a database, you shouldn't really need to bring it out of that format and turn it into PHP code, only to have that PHP code insert the same data back into the database. You could probably just do an SQL dump and re-insert it into your database that way, as sketched below.
Using a fixture would make more sense if you were initializing your project but wanted to use user input to create it. If you had the default user in your config file, you could read that and insert the object.
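For example, with MySQL a plain dump and re-import could look like this (database names and credentials are placeholders):
mysqldump -u prod_user -p production_db > dump.sql
mysql -u dev_user -p development_db < dump.sql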
The AliceBundle can help you do this. It allows you to load fixtures from YAML (or PHP array) files.
For instance you can define your fixtures with:
Nelmio\Entity\Group:
    group1:
        name: Admins
        owner: '@user1->id'
Or with the same structure in a PHP array. It's WAY easier than generating working PHP code.
It also supports references:
Nelmio\Entity\User:
    # ...

Nelmio\Entity\Group:
    group1:
        name: Admins
        owner: '@user1'
In the doctrine_fixture cookbook, the last example shows how to get the service container in your fixture class.
With this service container, you can retrieve the Doctrine service and then the entity manager. With the entity manager, you will be able to get all the data you need from your database.
Hope this will help you!

Grails Integration testing: problems with get

I'm trying to write a simple integration test, but I'm having some trouble with domain objects. I've read up on unit testing but can't figure it out.
This is my simple test:
User user = User.get(1)
controller.params.userid = "1"
controller.session.user = user
controller.save();
The error message is:
groovy.lang.MissingMethodException: No signature of method: static com.baufest.insside.user.User.get() is applicable for argument types: (java.lang.Integer) values: 1
My guess is that I should mock the user object, but don't know how.
You say that you're integration testing, but it looks like you're unit testing. Is the test under test/integration or test/unit? Unit tests need mocking, but integration tests have an initialized Spring application context and Hibernate, and run against an in-memory database.
This is described in the user guide, which is at http://grails.org/doc/latest/ (you reference an older 1.1 version).
To mock the User class, just call mockDomain with one or more test instances either in setUp or in the test method:
def users = [new User(...), new User(...), ...]
mockDomain User, users
...
User user = User.get(1)