"Cache data may be lost" warning when merging non-normalized data in Apollo Client 3 - apollo

I'm upgrading my application with Apollo Client from v2 to v3 and I can't find the correct solution to the following problem.
I have a schema with a product and, inside this product, a price. This price is not a simple number, as it contains the duty-free value, the all-taxes-included value, and the VAT.
type Product {
  id: ID
  price: Price
}

type Price {
  dutyFree: Float
  allTaxesIncluded: Float
  VAT: Float
}
In Apollo Client 2, whenever there was no explicit id or _id property, the InMemoryCache created a fallback fake identifier to normalize data, based on the path to the object.
In Apollo Client 3, this fallback fake identifier is no longer generated. Instead, you have two options for handling non-normalized data. The first is to use the new TypePolicy option and state explicitly that the data you receive should not be normalized. In that case, the data will be embedded within the parent normalized data.
The docs:
Objects that are not normalized are instead embedded within their parent object in the cache. You can't access these objects directly, but you can access them via their parent.
new InMemoryCache({
  typePolicies: {
    Price: {
      keyFields: false
    }
  }
})
All happy, I thought my problem was solved. Well, wrong... I can create a product in my app and add a price, but whenever I change an existing price, I get the following warning:
Cache data may be lost when replacing the price field of a Product object.
Because, when I fetch my Product after an update, the InMemoryCache does not know how to merge the price field, since no id is defined, which is precisely the point of non-normalized data.
I know there is a second option: explicitly defining a merge function for my Product.price field. But this example is a simpler version of reality. I have a large number of fields across multiple objects that are typed Price, and manually defining a merge function for each and every one of them (even by externalizing the common logic into a function) strikes me as inefficient and error-prone.
So my question is: what did I misunderstand about the keyFields: false option, and what can I do to solve this problem without having to define a merge function for 50+ fields in my app?
Thanks for the help :)

I'm not sure you've misunderstood keyFields: false. My understanding is that when the Product is updated in the cache, InMemoryCache must handle any differences in the Price objects embedded in the price field of the old Product and the new Product. If there isn't a TypePolicy to define how that should be done, the cache logs a warning.
Starting in Apollo Client 3.3, merge functions can be defined for types in addition to fields. Here's an example from their docs:
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        // No longer necessary!
        // author: {
        //   merge: true,
        // },
      },
    },
    Author: {
      merge: true,
    },
  },
});
Since you don't want to define a merge function on a field-by-field basis, you might try defining the merge function for the Price type instead.
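In plain terms, a merge function for a non-normalized type combines the cached object with the incoming one so that fields missing from an update aren't silently dropped. Here is a library-free sketch of roughly what `merge: true` amounts to (a shallow merge; the helper name `mergePrice` is made up for illustration):

```javascript
// Roughly what a type-level merge does: combine the cached (existing)
// object with the incoming one so no fields are silently dropped.
// On the very first write there is no existing object yet.
function mergePrice(existing, incoming) {
  return { ...(existing || {}), ...incoming };
}

// An update that only returns the VAT field no longer discards
// the other price components already in the cache.
const cached = { dutyFree: 100, allTaxesIncluded: 120, VAT: 20 };
const update = { VAT: 21 };
const merged = mergePrice(cached, update);
// merged → { dutyFree: 100, allTaxesIncluded: 120, VAT: 21 }
```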

Related

apollo-server - Conditionally exclude fields from selection set

I have a situation where I would like to conditionally exclude a field from a query selection before I hit that query's resolver.
The use case is that my underlying API only exposes certain 'fields' based on the user's locale, and calls made to this API will throw errors if fields are requested that are not available for that locale.
I have tried an approach with directives,
type Person {
  id: Int!
  name: String!
  medicare: String @locale(locales: ["AU"])
}

type Query {
  person(id: Int!): Person
}
And using the SchemaDirectiveVisitor.visitFieldDefinition, I override field.resolve for the medicare field to return null when the user locale doesn't match any of the locales defined on the directive.
However, when a client with a non "AU" locale executes the following
query {
person(id: 111) {
name
medicareNumber
}
}
}
the field resolver for medicare is never called and the query resolver makes a request to the underlying API, appending the fields in the selection set (including the invalid medicareNumber) as query parameters. The API call returns an error object at this point.
I believe this makes sense as it seems that the directive resolver is on the FieldDefinition and would only be called when the person resolver returns a valid result.
Is there a way to achieve this sort of functionality, with or without directives?
In general, I would caution against this kind of schema design. As a client, if I include a field in the selection set, I expect to see that field in the response -- removing the field from the selection set server-side goes against the spec and can cause unnecessary confusion (especially on a larger team or with a public API).
If you are examining the requested fields in order to determine the parameters to pass to your API call, then forcing a certain field to resolve to null won't do anything -- that field will still be included in the selection set. In fact, there's really no way to create a schema directive that will impact the selection set of a request.
The best approach here would be to 1) ensure any potentially-null fields are nullable in the schema and 2) explicitly filter the selection set wherever your selection-set-to-parameters logic is.
EDIT:
Schema directives won't show up as part of the schema object returned in the info, so they can't be used as flags. My suggestion would be to maintain a separate in-memory map. For example:
const fieldsByLocale = {
  US: {
    Person: ['name', 'medicareNumber'],
  },
  AU: {
    Person: ['name'],
  },
}
Then you could just access the appropriate list to filter with fieldsByLocale[context.locale][info.returnType]. This filtering logic is specific to your data source (in this case, the external API), so this is a bit cleaner than "polluting" the schema with information that pertains to the storage layer. If the APIs change, or you switch to a different source for this information altogether (like a database), you can update the resolvers without touching your type definitions. In fact, this way, the filtering logic can easily live inside a domain/service layer instead of your resolvers.
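To make the suggestion concrete, here is a small library-free sketch of the filtering step; the helper name `allowedFields` and its parameters are hypothetical:

```javascript
// In-memory map of which fields each locale may request, per type.
const fieldsByLocale = {
  US: { Person: ['name', 'medicareNumber'] },
  AU: { Person: ['name'] },
};

// Filter the requested fields down to the ones the locale allows,
// before turning them into query parameters for the upstream API.
function allowedFields(locale, typeName, requestedFields) {
  const allowed = (fieldsByLocale[locale] || {})[typeName] || [];
  return requestedFields.filter((field) => allowed.includes(field));
}

const params = allowedFields('AU', 'Person', ['name', 'medicareNumber']);
// params → ['name'] — medicareNumber never reaches the upstream API
```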

Ember.js: Summarize model records into one record

I thought I had this figured out using reduce(), but the twist is that I need to roll up multiple properties on each record, so I am returning an object on every iteration. The problem I'm having is that previousValue starts out as an Ember object while I'm returning a plain object, so it works fine on the first loop; the second time through, a is no longer an Ember object, and I get an error saying a.get is not a function. Sample code:
/*
  Filter the model to get only one food category, which is determined
  by the user selecting a choice that sets the property: theCategory
*/
var foodByCategory = get(this, 'model').filter(function(rec) {
  return get(rec, 'category') === theCategory;
});

/*
  Now, roll up all the food records to get a total
  of all cost, salePrice, and weight
*/
summary = foodByCategory.reduce(function(a, b) {
  return {
    cost: a.get('cost') + b.get('cost'),
    salePrice: a.get('salePrice') + b.get('salePrice'),
    weight: a.get('weight') + b.get('weight')
  };
});
Am I going about this all wrong? Is there a better way to roll up multiple records from the model into one record, or do I just need to either flatten out the model records into plain objects first, or alternatively, return an Ember object in the reduce()?
Edit: doing return Ember.Object.create({...}) does work, but I would still like an opinion on whether this is the best way to achieve the goal, or whether Ember provides functions that will do this, and if so, whether they're any better than reduce.
Assuming this.get('model') returns an Ember.Enumerable, you can use filterBy instead of filter:
var foodByCategory = get(this, 'model').filterBy('category', theCategory);
As for your reduce, I don't know of any Ember built-ins that would improve it. The best I can think of is using multiple, separate mapBy and reduce calls:
summary = {
cost: foodByCategory.mapBy('cost').reduce(...),
salePrice: foodByCategory.mapBy('salePrice').reduce(...),
...
};
But that's probably less performant. I wouldn't worry too much about using Ember built-ins to do standard data manipulation... most Ember projects I know of still use a utility library (like Lodash) alongside Ember itself, which usually ends up being more effective for writing this sort of data transformation.
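For the original roll-up, the simplest fix is usually to give reduce() a plain-object initial value, so the accumulator has the same shape on every iteration. A plain-JavaScript sketch (Ember's get() replaced with ordinary property access):

```javascript
// Without an initial value, reduce() uses the first record (an Ember
// object) as the first accumulator, while later iterations receive the
// plain object returned from the callback — hence "a.get is not a
// function". Seeding with a plain object keeps the type consistent.
const foodByCategory = [
  { cost: 2, salePrice: 3, weight: 1 },
  { cost: 4, salePrice: 6, weight: 2 },
];

const summary = foodByCategory.reduce(
  (acc, rec) => ({
    cost: acc.cost + rec.cost,
    salePrice: acc.salePrice + rec.salePrice,
    weight: acc.weight + rec.weight,
  }),
  { cost: 0, salePrice: 0, weight: 0 } // seed: accumulator is always plain
);
// summary → { cost: 6, salePrice: 9, weight: 3 }
```

In the Ember version, `rec.get('cost')` would replace `rec.cost`, but the key point is the seed object: only `rec` is ever an Ember object, never the accumulator.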

Ember Data: (best practice) Dynamic Parameter for find()

Maybe it's just a brain bug on my side, but I've been really confused for many days now.
I have a search formula with many configurable changing parameters like this:
ID, name, lastname, date1,
There is no hierarchical order of these parameters, the user can configure them in and out of the form.
The Ember way for query parameters is: { ID: ..., lastname: ..., date1: ... }, but what can I do if I don't know which parameters will show up? Across the different modules in our application, anywhere from 10 to 40 parameters are configurable...
I need help finding the best practice to solve this problem.
I would be delighted if someone could give me a pointer on how to solve this!
Best regards, Jan
If I understood you correctly, you want a reusable solution rather than a huge list of ifs for the different query params passed to #store.find.
To make it reusable, you can stay with single find as follows:
this.store.find('myModel', queryHash)
And build the queryHash before the call. You can for example have a set of checkboxes and a computed property that is based on all the checkboxes values. This computed property will build your hash, e.g.:
queryHash: Ember.computed("lastName", "date1", function() {
  var query = {};
  if (this.get("lastName")) { query.lastName = this.get("lastName"); }
  if (this.get("date1")) { query.date1 = this.get("date1"); }
  return query;
})
The disadvantage is that you need to know all the possible options that are available (though they do not all need to be checked) in the current route for the user (e.g. via some kind of inputs or form).
On the other hand, if you cannot say what the names are (or how many of them there are), you can at least hold all user-provided data in some kind of array and enumerate over it, adding the proper hash keys and values to the query object.
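That last idea might look like the following plain-JavaScript sketch; the names (`userFilters`, `buildQueryHash`) are made up for illustration:

```javascript
// User-provided filters held as an array of key/value pairs, so the
// route does not need to know the parameter names up front.
const userFilters = [
  { key: 'lastname', value: 'Smith' },
  { key: 'date1', value: '2015-01-01' },
  { key: 'name', value: '' }, // empty values are skipped
];

// Fold the filters into a query hash, keeping only non-empty values.
function buildQueryHash(filters) {
  return filters.reduce(function(query, f) {
    if (f.value) { query[f.key] = f.value; }
    return query;
  }, {});
}

const queryHash = buildQueryHash(userFilters);
// queryHash → { lastname: 'Smith', date1: '2015-01-01' }
// then: this.store.find('myModel', queryHash);
```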

Clean store in between find operations

Let's say I do the following request:
App.Phones.find({'country' : 'DE'});
My backend replies with some telephone numbers. Now I do:
App.Phones.find({'country' : 'ES'});
Now I get other telephone numbers. But:
App.Phones.all();
Has accumulated the "old" numbers and the new ones. Is it possible to clean the store between calls to find? How?
I have tried with App.Phones.clean();, without success (has no method 'clean')
EDIT
This is quite strange but: calling record.destroy(); (as suggested by intuitivepixel) on an object does not remove it from the store, it just marks it as destroyed=true. That means, the drop-down is still showing that option. Actually, walking the records (all()) shows that the records are still there after being destroyed. Maybe Ember will remove them from the store eventually, but that does not help me at all: since my select is bound to all(), I need them to be removed right now.
Even worse: since the object is there, but destroyed, the select shows it, but does not allow selecting it!
I can think of a very ugly hack where I create an Ember.A with filtered records (removing the destroyed records), like this:
Destroy all records (the old ones)
Request new records from the backend
When the records are received (.then), walk the records in the store (.all()), that is, the destroyed and the new ones.
Add the records in the array which are not destroyed
Bind the select to this filtered array.
This looks extremely ugly, and I am really surprised that Ember is not able to just fully and reliably clean the store for a certain record type.
I guess you could do the following to clean the Phones records saved in the store:
App.Phones.find({}); //this will invalidate your cache
But obviously this will make a new request retrieving all the phone numbers.
Depending on what you want to achieve, you could use find() in the application route, then either all() or a filter() in other routes to retrieve just DE, ES etc. In other words there is no such method available to do something like: App.Phones.clean().
Update
Another (manual) way I can think of to remove the records of one type from the cache would be to delete them one by one between your find() operations; for example, create a simple utility function containing the code below and call it between your calls to find():
App.Phones.all().forEach(function(record) {
  record.destroy();
});
Hope it helps.
So, this is the (unsatisfying, ugly, hacky, non-intuitive) code that I have come up with. It is doing the job, but I have a very bad feeling about this:
getNewPhones: function(countryCode, subtype, city) {
  // Mark old records as destroyed
  App.Availablephone.all().forEach(function(phone, index) {
    phone.destroy();
    console.log('Destroyed phone %o', phone);
  });

  // Not possible to set the availablePhones to .all(), because old records are still there
  //App.Availablephone.find({country : countryCode, subtype : subtype, city : city});
  //this.set('availablePhones', App.Availablephone.all());

  // So, a hack is in order:
  // 1. request new data
  // 2. filter all records to just get the newly received ones (!!!)
  var _this = this;
  App.Availablephone.find({country : countryCode, subtype : subtype, city : city}).then(function(recordArray) {
    var availablePhones = Ember.A();
    App.Availablephone.all().forEach(function(phone, index) {
      if (!phone.isDestroyed) {
        console.log('Adding phone=%o', phone);
        availablePhones.push(phone);
      }
    });
    _this.set('availablePhones', availablePhones);
  });
},
Comments / critiques / improvements suggestions are very much welcome!
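One small improvement: pull the "skip destroyed records" step into a named helper so the intent is explicit and reusable across record types. A plain-JavaScript sketch (the helper name `liveRecords` is made up; in real Ember code you would read the flag with `phone.get('isDestroyed')`):

```javascript
// Keep only records that have not been marked destroyed. Extracting
// this into a helper keeps the .then() callback focused on the data flow.
function liveRecords(records) {
  return records.filter(function(rec) {
    return !rec.isDestroyed;
  });
}

const all = [
  { id: 1, isDestroyed: true },  // old record, marked destroyed
  { id: 2, isDestroyed: false }, // freshly loaded record
];
const availablePhones = liveRecords(all);
// availablePhones → [{ id: 2, isDestroyed: false }]
```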

Presenting missing values as null or not at all in JSON

I am building a web service API, using JSON as the data language. Designing the structure of the data returned from the service, I am having some trouble deciding how to deal with missing values.
Consider this example: I have a product in my web store for which the price is yet unknown, maybe because the product has not yet been released. Do I include price: null (as shown below) or do I simply omit the price property on this item?
{
  name: 'OSX 10.6.10',
  brand: 'Apple',
  price: null
}
My main concern is making the API as easy to consume as possible. The explicit null value makes it clear that a price can be expected on a product, but at the other hand it seems like wasted bytes. There could be a whole bunch of properties that are completely irrelevant to this particular product, while relevant for other products – should I show these as explicitly null as well?
{
  name: 'OSX 10.6.10',
  price: 29.95,
  color: null,
  size: null
}
Are there any "best practices" on web service design, favoring explicit or implicit null values? Any de-facto standard? Or does it depend entirely on the use case?
FWIW, my personal opinion:
Do I include price: null (as shown below) or do I simply omit the price property on this item?
I would set the values of "standard" fields to null. Although JSON is often used with JavaScript, where missing properties can be handled much like properties set to null, this need not be the case for other languages (e.g. Java). Having to test first whether a field is present seems inconvenient. Setting the values to null but keeping the fields present would be more consistent.
There could be a whole bunch of properties that are completely irrelevant to this particular product, while relevant for other products – should I show these as explicitly null as well?
I would only include those fields that are relevant for a product (e.g. not pages for a CD). It's the client's task to deal with these "optional" fields properly. If you have no value for a certain field which is relevant to a product, set it to null too.
As already said, the most important thing is to be consistent and to clearly specify which fields can be expected. You can reduce the data size using gzip compression.
I don't know which is "best practice", but I usually don't send fields that I don't need.
When I read the response, I check whether the value exists:
if (res.size) {
  // response has size
}
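One caveat with that truthiness check: it also treats legitimate falsy values (0, '', false) as missing. If a field can hold such values, test for presence explicitly; a small sketch (the response object is hypothetical):

```javascript
// A response where size is legitimately 0, price is explicitly null,
// and color was omitted entirely by the server.
const res = { size: 0, price: null };

const sizeTruthy = Boolean(res.size);   // false — 0 is falsy, misleading!
const sizePresent = 'size' in res;      // true  — the key exists
const priceSet = res.price != null;     // false — null or undefined
const colorPresent = 'color' in res;    // false — key omitted entirely
```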