SpecFlow equivalent to parameterized test fixture - unit-testing

I’m using SpecFlow to write a set of tests, and I’d like to run each test multiple times, with different input data. I could do this with scenario outlines, but I want to run every scenario in the feature file with the same test cases.
I know I can use the Background to share the setup for one case, but I’m looking for something like a cross between Background and Scenario Outline, where I can supply a table of data to the Background and run the entire feature file once per row.
In NUnit, I’d use a parameterized test fixture to achieve this. Is there any equivalent in SpecFlow?

You can use the SpecFlow Assist helpers to create a data table object and consume it in a Background:
Background:
    Given I’ve Entered The Following Information
        | FirstName | LastName | Email         |
        | Abcd1     | Xyz1     | abc1@xyz1.com |
        | Abcd2     | Xyz2     | abc2@xyz2.com |
class Person
{
    // properties must be public and match the table's column headers
    // for CreateSet<Person>() to populate them
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}
Usage:
[Binding]
public class PersonSteps
{
    [Given(@"I’ve Entered The Following Information")]
    public void UseData(TechTalk.SpecFlow.Table table)
    {
        // CreateSet maps each table row onto a Person instance
        var persons = table.CreateSet<Person>();
        foreach (Person p in persons)
        {
            log.Info(p.FirstName + " " + p.LastName);
        }
    }
}
You might have to use properties or the SpecFlow ScenarioContext to share data between bindings. When the Background runs, it creates the data object for each scenario, but passing it between bindings is the user's responsibility.
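For example, you could stash the created set in ScenarioContext inside that step and read it back from any later binding. A minimal sketch; the "persons" key and the follow-up step are illustrative, not part of the original answer:
using System.Collections.Generic;
using System.Linq;
using TechTalk.SpecFlow;

[Binding]
public class SharedDataSteps
{
    [Given(@"I’ve Entered The Following Information")]
    public void GivenData(TechTalk.SpecFlow.Table table)
    {
        // stash the deserialized rows so every later binding can read them
        ScenarioContext.Current["persons"] = table.CreateSet<Person>().ToList();
    }

    [Then(@"every person is registered")] // hypothetical follow-up step
    public void ThenEveryPersonIsRegistered()
    {
        var persons = (List<Person>)ScenarioContext.Current["persons"];
        foreach (var p in persons)
        {
            // assert against each row here
        }
    }
}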

Related

Extending SimpleNeo4jRepository in SDN 6

In SDN+OGM I used the following method to extend the base repository with additional functionality; specifically, I want a way to find or create entities of different types (labels):
import java.util.HashMap;
import java.util.UUID;

import org.neo4j.ogm.session.Session;
import org.springframework.data.neo4j.repository.support.SimpleNeo4jRepository;
import org.springframework.data.repository.NoRepositoryBean;
import org.springframework.transaction.annotation.Transactional;

@NoRepositoryBean
public class MyBaseRepository<T> extends SimpleNeo4jRepository<T, String> {

    private final Class<T> domainClass;
    private final Session session;

    public MyBaseRepository(Class<T> domainClass, Session session) {
        super(domainClass, session);
        this.domainClass = domainClass;
        this.session = session;
    }

    @Transactional
    public T findOrCreateByName(String name) {
        HashMap<String, String> params = new HashMap<>();
        params.put("name", name);
        params.put("uuid", UUID.randomUUID().toString());
        // we do not use queryForObject in case of broken data with non-unique names
        return this.session.query(
            domainClass,
            String.format("MERGE (x:%s {name:$name}) " +
                "ON CREATE SET x.creationDate = timestamp(), x.uuid = $uuid " +
                "RETURN x", domainClass.getSimpleName()),
            params
        ).iterator().next();
    }
}
This makes it so that I can simply add findOrCreateByName to any of my repository interfaces without the need to duplicate a query annotation.
I know that SDN 6 supports the automatic creation of a UUID very nicely through @GeneratedValue(UUIDStringGenerator.class), but I also want to add the creation date in a generic way. The method above allows me to do that in OGM, but in SDN 6 the API changed and I am a bit lost.
Well, sometimes it helps to write things down. I figured out that the API did not change that much: basically, the Session is replaced with Neo4jOperations and the Class is replaced with Neo4jEntityInformation.
But even more importantly, SDN 6 has @CreatedDate, which makes my entire custom code redundant.
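A minimal sketch of what that looks like in SDN 6 (entity and field names are illustrative; auditing has to be enabled, here via @EnableNeo4jAuditing):
import java.time.Instant;

import org.springframework.context.annotation.Configuration;
import org.springframework.data.annotation.CreatedDate;
import org.springframework.data.neo4j.config.EnableNeo4jAuditing;
import org.springframework.data.neo4j.core.schema.GeneratedValue;
import org.springframework.data.neo4j.core.schema.Id;
import org.springframework.data.neo4j.core.schema.Node;
import org.springframework.data.neo4j.core.support.UUIDStringGenerator;

@Node
public class Thing {

    @Id
    @GeneratedValue(UUIDStringGenerator.class)
    private String uuid;

    private String name;

    // populated automatically on first save once auditing is enabled
    @CreatedDate
    private Instant creationDate;
}

@Configuration
@EnableNeo4jAuditing
class AuditingConfig {
}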

How do you properly test for loop queries within test classes?

I have a Schedulable class which will get called once per night. I have run the code anonymously and everything works as it should. The problem I am having is that I cannot get proper test coverage on it! I have written a test class that I believe should work, but for some reason any lines within my for-loops are not being covered.
I assume that it is because no data is being returned from these queries; however, there are thousands of records that should be returned. I have run the queries on the production environment without any issues.
Is there a separate process for running queries in a schedulable class?
Here's part of my class:
global class UpdateUnitPrice implements Schedulable {

    global void execute(SchedulableContext sc) {
        // OwnerId -> List of Strings with Row Contents
        Map<Id,Map<Id,Map<String,String>>> updateContainer = new Map<Id,Map<Id,Map<String,String>>>{}; // Covered
        List<Id> ownerContainer = new List<Id>{}; // Covered
        String EmailMessage; // Covered
        String EmailLine; // Covered
        String EmailAddedLines; // Covered
        String CurrentEmailLine; // Covered
        String NewEmailLine; // Covered
        List<Id> opportunityList = new List<Id>{}; // Covered

        for (Opportunity thisOpp : [SELECT Id, Name FROM Opportunity WHERE Order_Proposed__c = null]) {
            // Thousands of records should be returned
            opportunityList.add(thisOpp.Id); // NOT COVERED!!
        }

        List<OpportunityLineItem> OppLineItemList = new List<OpportunityLineItem>{}; // Covered

        for (OpportunityLineItem thisOppProd : [SELECT Id, OpportunityId, Opportunity.OwnerId, Opportunity.Name,
                                                       Product_Name__c, UnitPrice, ListPrice
                                                FROM OpportunityLineItem
                                                WHERE OpportunityId IN :opportunityList
                                                AND UnitPrice_lt_ListPrice__c = 'True'
                                                ORDER BY OpportunityId ASC]) {
            . . . // NO LINES COVERED WITHIN THIS LOOP
        }
        . . .
    }
}
Here's my test class:
@isTest
private class UpdateUnitPriceTest {

    static testMethod void myUnitTest() {
        Test.startTest();

        // Schedule the test job
        String jobId = System.schedule('UpdateUnitPrice', '0 0 0 3 9 ? 2022', new UpdateUnitPrice());

        // Get the information from the CronTrigger API object
        CronTrigger ct = [SELECT Id, CronExpression, TimesTriggered, NextFireTime FROM CronTrigger WHERE Id = :jobId];

        // Verify the expressions are the same
        System.assertEquals(ct.CronExpression, '0 0 0 3 9 ? 2022');

        // Verify the job has not run
        System.assertEquals(0, ct.TimesTriggered);

        // Verify the next time the job will run
        System.assertEquals('2022-09-03 00:00:00', String.valueOf(ct.NextFireTime));

        Test.stopTest();
    }
}
Am I supposed to specifically reference something within these for-loops for them to fire? This class should be able to run everything on its own without inserting records for testing. What am I missing?
Thanks in advance for any help given!
There was a lovely feature added in API version 24.0 that requires test classes to include a small (new) annotation in order to see existing org data from their queries. I have no idea why this was implemented, but it sure did trip me up.
For your test classes to run properly against live data, they now must have the following at the top:
@isTest(SeeAllData=true)
Previously, all that was needed was:
@isTest
You can read more on test classes and this new "feature" here: Apex Test Class Annotations
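Applied to the test class above, that's a one-line change. A sketch, assuming you really do want the test to run against live org data rather than records created in the test itself:
// SeeAllData=true lets the scheduled job's SOQL queries see existing
// org records again (the pre-API-24.0 behaviour)
@isTest(SeeAllData=true)
private class UpdateUnitPriceTest {

    static testMethod void myUnitTest() {
        Test.startTest();
        String jobId = System.schedule('UpdateUnitPrice', '0 0 0 3 9 ? 2022', new UpdateUnitPrice());
        Test.stopTest();
        // After stopTest() the scheduled job has executed synchronously,
        // so the for-loops in execute() are now exercised by the test run
    }
}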

Is it possible to create an email-attachment on a Silverlight email?

I need to be able to send an email from a silverlight client-side application.
I've got this working by implementing a webservice which is consumed by the application.
The problem is that now I need to be able to add an attachment to the emails that are being sent.
I have read various posts and tried a dozen times to figure it out by myself, but to no avail.
So now I find myself wondering if this is even possible?
The main issue is that the collection of attachments needs to be serializable. An ObservableCollection<FileInfo> doesn't serialize, and neither does ObservableCollection<object>. I've tried a List<Stream>, which does serialize, but then I don't know how to create the file on the webservice side, because a Stream object has no name (the name being the first thing I tried to assign to the Attachment object that gets added to message.Attachments). I'm kind of stuck in a rut here.
Can anybody maybe shed some light on this please?
I figured out how to do this, and it wasn't really as difficult as it appeared.
Create the following in your webservice namespace:
// [DataContract] is what makes the [DataMember] markers take effect
[DataContract]
public class MyAttachment
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public byte[] Bytes { get; set; }
}
Then add the following to your web-method parameters:
MyAttachment[] attachment
Add the following in the execution block of your web-method:
foreach (var item in attachment)
{
    Stream attachmentStream = new MemoryStream(item.Bytes);
    Attachment at = new Attachment(attachmentStream, item.Name);
    msg.Attachments.Add(at);
}
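For context, the surrounding web method might look roughly like this; the service shape, SMTP setup, and all names other than MyAttachment are assumptions rather than part of the original answer:
using System.IO;
using System.Net.Mail;
using System.ServiceModel;

[ServiceContract]
public class MailService
{
    [OperationContract]
    public void SendMail(string fromAddress, string toAddress, string subject,
                         string messageBody, MyAttachment[] attachment)
    {
        var msg = new MailMessage(fromAddress, toAddress, subject, messageBody);

        // rebuild each uploaded byte[] as a System.Net.Mail attachment
        foreach (var item in attachment)
        {
            Stream attachmentStream = new MemoryStream(item.Bytes);
            msg.Attachments.Add(new Attachment(attachmentStream, item.Name));
        }

        // assumed SMTP relay; adjust for your environment
        new SmtpClient("localhost").Send(msg);
    }
}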
Create the following property (or something similar) at your client side:
private ObservableCollection<ServiceProxy.MyAttachment> _attachmentCollection;
public ObservableCollection<ServiceProxy.MyAttachment> AttachmentCollection
{
    get { return _attachmentCollection; }
    set { _attachmentCollection = value; NotifyOfPropertyChange(() => AttachmentCollection); }
}
New up the public property (AttachmentCollection) in the constructor.
Add the following where your OpenFileDialog is supposed to return files:
if (openFileDialog.File != null)
{
    foreach (FileInfo fi in openFileDialog.Files)
    {
        var tempItem = new ServiceProxy.MyAttachment();
        tempItem.Name = fi.Name;
        using (var source = fi.OpenRead())
        {
            // read the file contents into a byte array for the service call
            byte[] byteArray = new byte[source.Length];
            source.Read(byteArray, 0, (int)source.Length);
            tempItem.Bytes = byteArray;
        }
        AttachmentCollection.Add(tempItem);
    }
}
Then finally where you call your web-method to send the email, add the following (or something similar):
MailSvr.SendMailAsync(FromAddress, ToAddress, Subject, MessageBody, AttachmentCollection);
This works for me, the attachment is sent with the mail, with all of its data exactly like the original file.

How can I avoid duplicating data in a document database like RavenDB?

Given that document databases, such as RavenDB, are non-relational, how do you avoid duplicating data that multiple documents have in common? How do you maintain that data if it's okay to duplicate it?
With a document database you have to duplicate your data to some degree. What that degree is will depend on your system and use cases.
For example, if we have simple blog and user aggregates, we could set them up as:
public class User
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
}

public class Blog
{
    public string Id { get; set; }
    public string Title { get; set; }
    public BlogUser BlogUser { get; set; } // denormalized slice of the User aggregate

    public class BlogUser
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }
}
In this example I have nested a BlogUser class inside the Blog class, holding the Id and Name properties of the User aggregate associated with the Blog. I have included only these fields because they are the only ones the Blog class is interested in; it doesn't need to know the user's username or password when the blog is being displayed.
How you shape these nested classes depends on your system's use cases, so you have to design them carefully, but the general idea is to design aggregates that can be loaded from the database with a single read and that contain all the data required to display or manipulate them.
This then leads to the question of what happens when the User.Name gets updated.
With most document databases you would have to load all the instances of Blog which belong to the updated User, update the Blog.BlogUser.Name field, and save them all back to the database.
Raven is slightly different, as it supports set-based operations for updates, so you can run a single update against RavenDB which will update the BlogUser.Name property of the user's blogs without you having to load and update them all individually.
The code for doing the update within RavenDB (the manual way) for all the blogs would be:
public void UpdateBlogUser(User user)
{
    var blogs = session.Query<Blog>("blogsByUserId")
        .Where(b => b.BlogUser.Id == user.Id)
        .ToList();

    foreach (var blog in blogs)
        blog.BlogUser.Name = user.Name;

    session.SaveChanges();
}
I've added in the SaveChanges just as an example. The RavenDB Client uses the Unit of Work pattern and so this should really happen somewhere outside of this method.
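For completeness, here's a rough sketch of the set-based variant using the (older) DatabaseCommands patching API. The Lucene field name depends on how the blogsByUserId index maps BlogUser.Id, so treat the details as assumptions:
using Raven.Abstractions.Data;
using Raven.Json.Linq;

// patches every matching Blog document on the server in one operation,
// without loading any of them into the session
documentStore.DatabaseCommands.UpdateByIndex(
    "blogsByUserId",
    new IndexQuery { Query = "BlogUser_Id:" + user.Id },
    new[]
    {
        new PatchRequest
        {
            Type = PatchCommandType.Modify,
            Name = "BlogUser",
            Nested = new[]
            {
                new PatchRequest
                {
                    Type = PatchCommandType.Set,
                    Name = "Name",
                    Value = new RavenJValue(user.Name)
                }
            }
        }
    });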
There's no one "right" answer to your question IMHO. It truly depends on how mutable the data you're duplicating is.
Take a look at the RavenDB documentation for lots of answers about document DB design vs. relational, but specifically check out the "Associations Management" section of the Document Structure Design Considerations document. In short, document DBs use the concept of reference by ID when they don't want to embed shared data in a document. These IDs are not like FKs; it is entirely up to the application to resolve them and to ensure their integrity.
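In code, a reference by ID is nothing more than storing the other document's ID and resolving it yourself. A minimal sketch (the type and IDs are illustrative):
public class BlogSummary
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string UserId { get; set; } // reference by ID, not a foreign key
}

// resolving the reference is the application's job; the database won't join for you
var blog = session.Load<BlogSummary>("blogs/1");
var author = session.Load<User>(blog.UserId);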

How do I update with a newly-created detached entity using NHibernate?

Explanation:
Let's say I have an object graph that's nested several levels deep and each entity has a bi-directional relationship with each other.
A -> B -> C -> D -> E
Or in other words, A has a collection of B and B has a reference back to A, and B has a collection of C and C has a reference back to B, etc...
Now let's say I want to edit some data for an instance of C. In WinForms, I would use something like this:
C instanceOfC;
using (var session = SessionFactory.OpenSession())
{
    // get the instance of C with Id = 3
    instanceOfC = session.Linq<C>().Where(x => x.Id == 3).First();
}

SendToUIAndLetUserUpdateData(instanceOfC);

using (var session = SessionFactory.OpenSession())
{
    // re-attach the detached entity and update it
    session.Update(instanceOfC);
}
In plain English, we grab a persistent instance out of the database, detach it, give it to the UI layer for editing, then re-attach it and save it back to the database.
Problem:
This works fine for WinForms applications because we're using the same entity throughout; the only difference is that it goes from persistent to detached to persistent again.
The problem is that now I'm using a web service and a browser, sending over JSON data. The entity gets serialized into a string, and de-serialized into a new entity. It's no longer a detached entity, but rather a transient one that just happens to have the same ID as the persistent one (and updated fields). If I use this entity to update, it will wipe out the relationship to B and D because they don't exist in this new transient entity.
Question:
My question is, how do I serialize detached entities over the web to a client, receive them back, and save them, while preserving any relationships that I didn't explicitly change? I know about ISession.SaveOrUpdateCopy and ISession.Merge() (they seem to do the same thing?), but this will still wipe out the relationships if I don't explicitly set them. I could copy the fields from the transient entity to the persistent entity one by one, but this doesn't work too well when it comes to relationships and I'd have to handle version comparisons manually.
I solved this problem by using an intermediate class to hold data coming in from the web service, then copying its properties to the database entity. For example, let's say I have two entities like so:
Entity Classes
public class Album
{
    public virtual int Id { get; set; }
    public virtual ICollection<Photo> Photos { get; set; }
}

public class Photo
{
    public virtual int Id { get; set; }
    public virtual Album Album { get; set; }
    public virtual string Name { get; set; }
    public virtual string PathToFile { get; set; }
}
Album contains a collection of Photo objects, and Photo has a reference back to the Album it's in, so it's a bidirectional relationship. I then create a PhotoDTO class:
DTO Class
public class PhotoDTO
{
    public virtual int Id { get; set; }
    public virtual int AlbumId { get; set; }
    public virtual string Name { get; set; }

    // note that the DTO does not have a PathToFile property
}
Now let's say I have the following Photo stored in the database:
Server Data
new Photo
{
    Id = 15,
    Name = "Fluffy Kittens",
    Album = Session.Load<Album>(3)
};
The client now wants to update the photo's name. They send over the following JSON to the server:
Client Data
PUT http://server/photos/15
{
    "id": 15,
    "albumid": 3,
    "name": "Angry Kittens"
}
The server then deserializes the JSON into a PhotoDTO object. On the server side, we update the Photo like this:
Server Code
var photoDTO = DeserializeJson();
var photoDB = Session.Load<Photo>(photoDTO.Id); // or use the ID in the URL

// copy the properties from photoDTO to photoDB
photoDB.Name = photoDTO.Name;
photoDB.Album = Session.Load<Album>(photoDTO.AlbumId);

Session.Flush(); // save the changes to the DB
Explanation
This was the best solution I've found because:
You can choose which properties the client is allowed to modify. For example, PhotoDTO doesn't have a PathToFile property, so the client can never modify it.
You can also choose whether to update a property or not. For example, if the client didn't send over an AlbumId, it will be 0. You can check for that and not change the Album if the ID is 0. Likewise, if the user doesn't send over a Name, you can choose not to update that property.
You don't have to worry about the lifecycle of an entity because it will always be retrieved and updated within the scope of a single session.
AutoMapper
I recommend using AutoMapper to automatically copy the properties from the DTO to the entity, especially if your entities have a lot of properties. It saves you the trouble of having to write out every property by hand, and it is very configurable.
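A minimal sketch with AutoMapper's classic static API (newer versions use a MapperConfiguration instance instead; Album is ignored in the map because it needs the session lookup shown earlier):
using AutoMapper;

// one-time configuration, e.g. at application startup
Mapper.CreateMap<PhotoDTO, Photo>()
    .ForMember(dest => dest.Album, opt => opt.Ignore());

// per request: copy the simple properties onto the persistent entity
var photoDB = Session.Load<Photo>(photoDTO.Id);
Mapper.Map(photoDTO, photoDB);
photoDB.Album = Session.Load<Album>(photoDTO.AlbumId);
Session.Flush();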