Realm Migration: Move an Object from one Object to another Object - Swift 3

I have three objects:
class Customer: Object {
    dynamic var solution: Solution!
    ...
}
class Solution: Object {
    dynamic var data: Data!
    ...
}
class Data: Object {
    ...
}
Now I need to move the Data object from Solution to Customer so that it becomes:
class Customer: Object {
    dynamic var solution: Solution!
    dynamic var data: Data!
    ...
}
I have no idea how to implement my Realm migration method so that everything works and I won't lose data.

I did some experiments with the Realm migrations sample app and came up with this potential solution:
In a migration block, you can only interact with your Realm file via the migration object. Any attempts to directly access the Realm file mid-migration will result in an exception.
That being said, it's possible to have nested calls to migration.enumerateObjects referencing different Realm model object classes. As such, it should simply be a matter of initially enumerating through the Customer objects and, in each iteration, enumerating through the Solution objects to find the corresponding one with the right data value. Once found, it should be possible to set the Customer object with the data from the Solution object.
Realm.Configuration.defaultConfiguration = Realm.Configuration(
    schemaVersion: 1,
    migrationBlock: { migration, oldSchemaVersion in
        if oldSchemaVersion < 1 {
            migration.enumerateObjects(ofType: Customer.className()) { oldCustomerObject, newCustomerObject in
                migration.enumerateObjects(ofType: Solution.className()) { oldSolutionObject, newSolutionObject in
                    // Check that the solution object is the one referenced by the customer
                    guard let oldSolutionObject = oldSolutionObject,
                          let customerSolution = oldCustomerObject?["solution"] as? DynamicObject,
                          customerSolution.isEqual(oldSolutionObject) else { return }
                    // Copy the data
                    newCustomerObject?["data"] = oldSolutionObject["data"]
                }
            }
        }
    })
I feel I need to stress that this code is by no means tested, and is not guaranteed to work in its present state. So I recommend you thoroughly test it beforehand on some dummy data you wouldn't miss. :)

Swift 4, Realm 3
I had to migrate a Realm object that linked to another object. I wanted to remove the explicit link and replace it with an object ID. TiM's solution got me most of the way there, and just needed a little refinement.
var config = Realm.Configuration()
config.migrationBlock = { migration, oldSchemaVersion in
    if oldSchemaVersion < CURRENT_SCHEMA_VERSION {
        // enumerate the first object type
        migration.enumerateObjects(ofType: Message.className()) { (oldMsg, newMsg) in
            // extract the linked object and cast from Any to DynamicObject
            if let msgAcct = oldMsg?["account"] as? DynamicObject {
                // enumerate the 2nd object type
                migration.enumerateObjects(ofType: Account.className()) { (oldAcct, newAcct) in
                    if let oldAcct = oldAcct {
                        // compare the extracted object to the enumerated object
                        if msgAcct.isEqual(oldAcct) {
                            // success!
                            newMsg?["accountId"] = oldAcct["accountId"]
                        }
                    }
                }
            }
        }
    }
}
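For completeness, here is a hypothetical sketch of how this configuration might then be applied (this part isn't shown above; CURRENT_SCHEMA_VERSION is the author's placeholder constant):
// Assumption: the schema version must also be bumped for the migration block to run.
config.schemaVersion = CURRENT_SCHEMA_VERSION
Realm.Configuration.defaultConfiguration = config
let realm = try! Realm() // the migration runs the first time the Realm is opened with the new version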

Related

Spock - How to work with repeated interactions

For a few test cases I'm trying to follow the DRY principle, where only the interactions differ while the test case conditions stay the same. I'm not able to find a way to invoke multiple methods in the interaction { } block.
As mentioned in http://spockframework.org/spock/docs/1.3/interaction_based_testing.html#_explicit_interaction_blocks, I'm using interaction { } in the then: block like below:
Java Code:
// legacy code (still running on EJB 1.0 framework, and no dependency injection involved)
// can't alter the Java code base
public void getData() {
    DataService ds = new DataService();
    ds = ds.findByOffset(5);
    Long len = ds.getOffset(); // happy path scenario; missing a null check
    // other code
}

// other varieties of the same code:
public void getData2() {
    ItemEJB tmpItem = new ItemEJB();
    ItemEJB item = tmpItem.findByOffset(5);
    if (null != item) {
        Long len = item.getOffset();
        // other code
    }
}

public void getData3() {
    ItemEJB item = new ItemEJB().findByOffset(5);
    if (null != item) {
        Long len = item.getOffset();
        // other code
    }
}
Spock Test:
def "test scene1"() {
given: "a task"
// other code ommitted
DataService mockObj = Mock(DataService)
when: "take action"
// code omitted
then: "action response"
interaction {
verifyNoDataScenario() // How to add verifyErrorScenario() interaction to the list?
}
}
private verifyDataScenario() {
    1 * mockObj.findByOffset(5) >> mockObj // findByOffset() returns an object, so it's mapped to the same mock instance
    1 * mockObj.getOffset() >> 200
}

private verifyErrorScenario() {
    1 * mockObj.findByOffset(5) >> null // findByOffset() returns null
    0 * mockObj.getOffset() >> 200 // this won't be executed, and is expected to throw an NPE
}
The interaction closure doesn't accept more than one method call. I'm not sure if it's a design limitation. I believe more can be done in the closure than just mentioning a method name. I also tried interpolating mockObj as a variable and using a data pipe / data table, but since it refers to the same mock instance, it doesn't work. I'll post that as a separate question.
I ended up duplicating the test case just to invoke the different interaction methods. Down the line I see more scenarios coming, and I want to avoid a copy-and-paste approach. I'd appreciate any pointers on how to achieve this.
Update:
Modified the shared Java code, as the earlier DataService name was confusing.
As there's no DI involved and I didn't find a way to mock method-local variables, I mock them using PowerMockito, e.g. PowerMockito.whenNew(DataService.class).withNoArguments().thenReturn(mockObj).
Your application code looks very strange. Is the programming style in your legacy application really that bad? First a DataService object is created with a no-arguments constructor, just to be overwritten in the next step by calling a method on that instance which again returns a DataService object. What kind of programmer creates code like that? Or did you just make up some pseudo code which does not have much in common with your real application? Please explain.
As for your test code, it also does not make sense, because you instantiate DataService mockObj as a local variable in your feature method (test method), which means that mockObj cannot be accessed in your helper methods. So either you need to pass the object as a parameter to the helper methods or you need to make it a field in your test class.
Last, but not least, your local mock object is never injected into the class under test because, as I said in the first paragraph, the DataService object in getData() is also a local variable. Unless your application code is completely fake, there is no way to inject the mock, because getData() does not have any method parameter and the DataService object is not a field which could be set via a setter method or constructor. Thus, you can create as many mocks as you want; the application will never have any knowledge of them. So your stubbing of findByOffset(long offset) (why don't you show the code of that method?) has no effect whatsoever.
Bottom line: Please provide an example reflecting the structure of your real code, both application and test code. The snippets you provide do not make any sense, unfortunately. I am trying to help, but like this I cannot.
Update:
In my comments I mentioned refactoring your legacy code for testability by adding a constructor, setter method or an overloaded getData method with an additional parameter. Here is an example of what I mean:
Dummy helper class:
package de.scrum_master.stackoverflow.q58470315;

public class DataService {
    private long offset;

    public DataService(long offset) {
        this.offset = offset;
    }

    public DataService() {}

    public DataService findByOffset(long offset) {
        return new DataService(offset);
    }

    public long getOffset() {
        return offset;
    }

    @Override
    public String toString() {
        return "DataService{" +
            "offset=" + offset +
            '}';
    }
}
Subject under test:
Let me add a private DataService member with a setter in order to make the object injectable. I am also adding a check if the ds member has been injected or not. If not, the code will behave like before in production and create a new object by itself.
package de.scrum_master.stackoverflow.q58470315;

public class ToBeTestedWithInteractions {
    private DataService ds;

    public void setDataService(DataService ds) {
        this.ds = ds;
    }

    // legacy code; can't alter
    public void getData() {
        if (ds == null)
            ds = new DataService();
        ds = ds.findByOffset(5);
        Long len = ds.getOffset();
    }
}
Spock test:
Now let us test both the normal and the error scenario. Actually I think you should break it down into two smaller feature methods, but as you seem to wish to test everything (IMO too much) in one method, you can also do that via two distinct pairs of when-then blocks. You do not need to explicitly declare any interaction blocks in order to do so.
package de.scrum_master.stackoverflow.q58470315

import spock.lang.Specification

class RepeatedInteractionsTest extends Specification {
    def "test scene1"() {
        given: "subject under test with injected mock"
        ToBeTestedWithInteractions subjectUnderTest = new ToBeTestedWithInteractions()
        DataService dataService = Mock()
        subjectUnderTest.dataService = dataService

        when: "getting data"
        subjectUnderTest.getData()

        then: "no error, normal return values"
        noExceptionThrown()
        1 * dataService.findByOffset(5) >> dataService
        1 * dataService.getOffset() >> 200

        when: "getting data"
        subjectUnderTest.getData()

        then: "NPE, only first method called"
        thrown NullPointerException
        1 * dataService.findByOffset(5) >> null
        0 * dataService.getOffset()
    }
}
Please also note that testing for exceptions thrown or not thrown adds value to the test, while the interaction testing just checks internal legacy code behaviour, which has little to no value.

react-apollo: How to deal with immutability in update and updateQuery?

I am using react-apollo in my React application. Now I have come to pagination for fetching more data. Independent of the pagination, how do you update the state in updateQuery or update? Treating the deeply nested data structure as immutable makes the update verbose, but I wouldn't want to add a helper library for it.
fetchMore({
  variables: {
    cursor: cursor,
  },
  updateQuery: (previousResult, { fetchMoreResult }) => {
    return {
      ...previousResult,
      author: {
        ...previousResult.author,
        articles: {
          ...previousResult.author.articles,
          ...fetchMoreResult.author.articles,
          edges: [
            ...previousResult.author.articles.edges,
            ...fetchMoreResult.author.articles.edges,
          ],
        },
      },
    };
  },
});
Is it okay to mutate the previousResult instead or does it go against the philosophy of Apollo?
const { pageInfo, edges } = fetchMoreResult.author.articles;
previousResult.author.articles.edges.concat(edges);
previousResult.author.articles.pageInfo = pageInfo;
return previousResult;
Or is there another way to update the state?
From the docs:
Note that the function must not alter the prev object (because prev is compared with the new object returned to see what changes the function made and hence what prop updates are needed).
I would just bite the bullet and use immutability-helper like the docs recommend. Barring that, you could make a copy of the object first (Object.assign({}, fetchMoreResult)) and then you can do what you want to the copy.
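For example, the same update that the mutating snippet above attempts could look roughly like this with immutability-helper (a sketch inside the same fetchMore call, not tested against your schema):
import update from 'immutability-helper';

updateQuery: (previousResult, { fetchMoreResult }) => {
  const { pageInfo, edges } = fetchMoreResult.author.articles;
  // update() returns a new object and leaves previousResult untouched
  return update(previousResult, {
    author: {
      articles: {
        pageInfo: { $set: pageInfo },
        edges: { $push: edges },
      },
    },
  });
},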

Protocol Property Allowed With Realm?

I'm trying to model my data.
I have a class that contains an optional property of type ExcerciseContent.
import RealmSwift

class Excercise: Object {
    var content: ExcerciseContent?
}
The idea is that an Excercise contains content, which has a duration and is one of two kinds: audio or text.
protocol ExcerciseContent {
    var duration: Int { get }
}

protocol AudioExcerciseContent: ExcerciseContent {
    var audio: String { get }
}

protocol TextExcerciseContent: ExcerciseContent {
    var text: String { get }
}
I found a similar question; however, I would like to know whether this still applies, and what the response means by "Realm needs to know what the concrete object type that will be linked to is at initialization time."
I've declared the protocol, so shouldn't Realm know the object type? Or is it that the object type could be different every time, and that's why it can't be done?
Realm needs to know what the concrete object type that will be linked to is at initialization time.
Your content property should either be another Realm Object or one of the supported property types.
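To illustrate that suggestion, here is a minimal sketch (my own, not from the answer; it reuses the Swift 3-style dynamic var syntax from the first question and folds the audio/text variants into optional fields of one concrete class):
import RealmSwift

// Sketch only: one concrete Object stands in for the protocol hierarchy;
// per instance, only one of audio/text is expected to be set.
class ExcerciseContent: Object {
    dynamic var duration = 0
    dynamic var audio: String? = nil
    dynamic var text: String? = nil
}

class Excercise: Object {
    // to-one link to a concrete Object type, which Realm can resolve at initialization time
    dynamic var content: ExcerciseContent?
}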

Looking for testable design in described case

I have a system which gets lists of objects from an external system in some ABC format, converts them to an internal representation, and passes them to an external service:
class ABCService {
    private ExtService extService;

    public ABCService(ExtService extService) {
        this.extService = extService;
    }

    public void do(ABCData[] abcObjs) throws NoDataException {
        if (abcObjs.length == 0) {
            throw new NoDataException();
        } else {
            List<Data> objs = new ArrayList<>();
            for (ABCData abcObj : abcObjs) {
                Data obj = Parser.parse(abcObj); // static call
                objs.add(obj);
            }
            extService.do(objs);
        }
    }
}
When it comes to testing ABCService, we can test two things:
If no data is passed to "do", the service throws an exception;
If some data is passed to "do", the service should call extService and pass exactly the same number of objects it received from the test caller.
But, though the Parser factory is also tested, there is no guarantee that the output "objs" list is actually connected to the input abcObjs (e.g. the method could create a list of the predefined length but "forget" to populate it).
In my opinion those two test cases don't fully cover the method's workflow, leaving some of it dangerously untested.
How can I modify the ABCService design to increase its testability?
The major testing difficulty in this code is that you have two collaborators and one of them is static.
If you can convert your Parser to a non-static (or perhaps wrap it in a non-static) and inject that as you do the extService, you could test that the parser is called the right number of times with the right arguments. Stubbing in the return values from the parser, you could also verify that your extService is called with the appropriately transformed objects instead of just the correct number of objects.
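For illustration, here is a rough sketch of that idea. ParserWrapper and the constructor injection are assumptions, not part of the original code, and the question's "do" methods are renamed because "do" is a reserved word in Java:
import java.util.ArrayList;
import java.util.List;

// Sketch only: ParserWrapper is a made-up seam around the static Parser call.
interface ParserWrapper {
    Data parse(ABCData abcObj);
}

class ABCService {
    private final ExtService extService;
    private final ParserWrapper parser;

    public ABCService(ExtService extService, ParserWrapper parser) {
        this.extService = extService;
        this.parser = parser;
    }

    public void process(ABCData[] abcObjs) throws NoDataException { // the question's "do"
        if (abcObjs.length == 0) {
            throw new NoDataException();
        }
        List<Data> objs = new ArrayList<>();
        for (ABCData abcObj : abcObjs) {
            objs.add(parser.parse(abcObj)); // instance call: a mock can count and verify these
        }
        extService.send(objs); // the question's extService.do(...)
    }
}
A test can then stub parser.parse(...) per input and assert that extService receives exactly the transformed objects, not just a list of the right size.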
The problem you encountered comes from handling two tasks in one function. The function do can be logically separated into two different member functions, so that you can unit test each of them.
By refactoring, you can extract the parsing and populating logic into a separate member function.
class ABCService {
    public void do(ABCData[] abcObjs) throws NoDataException {
        extService.do(populateList(abcObjs));
    }

    List<Data> populateList(ABCData[] abcObjs) throws NoDataException {
        if (abcObjs.length == 0) {
            throw new NoDataException();
        } else {
            List<Data> objs = new ArrayList<>();
            for (ABCData abcObj : abcObjs) {
                Data obj = Parser.parse(abcObj); // static call
                objs.add(obj);
            }
            return objs;
        }
    }
}
Your current unit tests can still remain for the "do" function; additionally, you can add a unit test case for the "populateList" function to ensure it generates the correct data list.

Why is my IQueryable LINQ-to-Objects query being treated as LINQ-to-SQL and throwing "no supported translation to SQL"?

I have a LINQ DBML class that I am wrapping in a POCO. I have built overloaded constructors that take the DBML class and initialize the wrapper object's properties based on the DBML object passed in.
For example
public class MyPerson
{
    // property types are assumed here; the original snippet omits the declarations
    public int ID { get; set; }
    public string Name { get; set; }

    public MyPerson(DBMLPerson p)
    {
        this.ID = p.ID;
        this.Name = p.Name;
    }
}
If I then do something like this, where I return an IQueryable:
{
    return from p in datacontext.DBMLPerson
           select new MyPerson(p) { };
}
When I try to do further queries on that IQueryable, I get "System.NotSupportedException: The member 'MyPerson.ID' has no supported translation to SQL."
However if I do this
{
    return from p in datacontext.DBMLPerson
           select new MyPerson()
           {
               ID = p.ID,
               Name = p.Name
           };
}
I don't get an error at all and everything works perfectly. Basically I want my class to handle the conversion from LINQ object to POCO itself.
Basically, I have to use the object initializer or I am unable to match on that field.
OK, not sure this will actually help anyone but myself, but my whole problem is that I shouldn't be using IQueryable after a certain point (outside of my repository):
iqueryable-can-kill-your-dog-steal-your-wife-kill-your-will-to-live-etc
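To make that concrete, here is a minimal sketch (the repository method and its name are made up for illustration) of materializing the query inside the repository so that everything afterwards runs as LINQ-to-Objects:
// Sketch only: ToList() executes the SQL here, so callers receive
// in-memory MyPerson objects instead of a LINQ-to-SQL IQueryable.
public IList<MyPerson> GetPeople()
{
    return (from p in datacontext.DBMLPerson
            select new MyPerson
            {
                ID = p.ID,
                Name = p.Name
            }).ToList();
}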