The question is about the interaction of a mutation, optimistic response, and a watchQuery.
I have a mutation "myMutation" which has an "optimisticResponse" and an implemented "update" function.
Every time I run the mutation, the "update" function is called twice: the first time with the optimistic response data and the second time with the real data. All of this is fine and works as described in the documentation.
Inside my "update" function I modify the "myQuery" cache data using the readQuery/writeQuery methods.
Every time I modify the "myQuery" cache data, a watchQuery subscription (based on "myQuery") fires. This is also fine and as described in the documentation.
The problem is that inside my watchQuery I cannot distinguish whether I have received optimistic response data or real response data. This is crucial for me because the reaction must differ: a valuable part of the data can only be provided by the server.
I need to show a GUI element with a special style when I receive an optimistic response, and I need to prohibit any interaction with it until I receive the real response.
Unfortunately, I can't solve this. At a glance, there is no difference between optimistic and real responses. I've googled a lot and haven't found a solution. The only idea I have is to add a special field to my GraphQL data indicating whether the response came from the server or not, but that looks ugly and smells bad. I am sure there must be a simple, correct way to overcome this problem.
Maybe there is an easier way, or there will be one in the future, but here is what I know.
The data in optimisticResponse is only provided during the first call to update. That is where you can flag to your update function that it is dealing with optimistic data. You can put any data you want there; I put isOptimistic: true.
To deal with the watchQuery issue, I recommend you use apollo-link-state to add a client-only field (or fields) to the areas of your data model where the display needs to know about optimistic upserts. Don't include isOptimistic in your mutation query; that way you know the real result came from the server rather than from the optimistic response, and you can force the flag to false when it isn't explicitly true. See this example:
const SUBMIT_COMMENT_MUTATION = gql`
  mutation submitComment($repoFullName: String!, $commentContent: String!) {
    submitComment(repoFullName: $repoFullName, commentContent: $commentContent) {
      postedBy {
        login
        html_url
      }
      createdAt
      content
    }
  }
`;
const CommentsPageWithMutations = ({ currentUser }) => (
  <Mutation mutation={SUBMIT_COMMENT_MUTATION}>
    {(mutate) => (
      <CommentsPage
        submit={(repoFullName, commentContent) =>
          mutate({
            variables: { repoFullName, commentContent },
            optimisticResponse: {
              __typename: 'Mutation',
              submitComment: {
                __typename: 'Comment',
                postedBy: currentUser,
                createdAt: new Date(),
                content: commentContent,
                isOptimistic: true, // Only provided to update on the optimistic call
              },
            },
            update: (proxy, { data: { submitComment } }) => {
              // Make sure CommentAppQuery includes isOptimistic for each comment added by apollo-link-state.
              // submitComment.isOptimistic will be undefined here if the data came from the server.
              const newComment = {
                ...submitComment,
                isOptimistic: submitComment.isOptimistic ? true : false,
              };
              // Read the data from our cache for this query.
              const data = proxy.readQuery({ query: CommentAppQuery });
              // Add our comment from the mutation to the end.
              data.comments.push(newComment);
              // Write our data back to the cache.
              proxy.writeQuery({ query: CommentAppQuery, data });
            },
          })
        }
      />
    )}
  </Mutation>
);
See https://www.apollographql.com/docs/link/links/state.html.
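For reference, here is a minimal wiring sketch for apollo-link-state (assuming apollo-link-state ~0.4 with Apollo Client 2.x; the exact link composition is illustrative). The isOptimistic field itself is written into the cache by the update function above and requested with the @client directive in CommentAppQuery.
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { withClientState } from 'apollo-link-state';
import { HttpLink } from 'apollo-link-http';
import { ApolloLink } from 'apollo-link';

const cache = new InMemoryCache();

// No defaults or resolvers are strictly needed here; the client-only
// isOptimistic flag is written directly by the mutation's update function.
const stateLink = withClientState({ cache, defaults: {}, resolvers: {} });

const client = new ApolloClient({
  cache,
  link: ApolloLink.from([stateLink, new HttpLink({ uri: '/graphql' })]),
});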
I couldn't get this to work on Apollo 3.x by only adding a property to the optimistic response; the property was getting stripped away. To get it to work, I had to add a local-only field to the query.
fragment object on Object {
  id
  isOptimistic @client
  ...
Once that is done, I was able to add the local-only flag to my optimistic response.
const optimisticResponse = {
  object: {
    id: "temp-id",
    isOptimistic: true,
    ...
  }
}
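To round this out, here is a rough sketch of how the flag might then drive the UI. The query, component, and class names below are made up for illustration, and it assumes Apollo Client 3's useQuery with the field requested via @client:
// Hypothetical consumer: GET_OBJECT and ObjectView are illustrative names.
const GET_OBJECT = gql`
  query GetObject($id: ID!) {
    object(id: $id) {
      id
      isOptimistic @client
      # ...the server-provided fields you need...
    }
  }
`;

function ObjectView({ id }) {
  const { data } = useQuery(GET_OBJECT, { variables: { id } });
  const pending = data && data.object && data.object.isOptimistic;
  // Style the element differently and block interaction while the data is optimistic.
  return (
    <button disabled={pending} className={pending ? 'pending' : ''}>
      {id}
    </button>
  );
}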
How is it possible to access the LoopBack context (or simply the Express req object) from within a model's logic?
It is critical to be able to know more about the request itself (current user identity more than anything else) inside the model's logic. When I override a built-in method (via custom script or from the model.js file) or develop a custom remote method, I would like to access the Express req object.
As loopback.getCurrentContext() is declared to be buggy, I cannot use it.
Ideas?
PS:
I find this page confusing: http://loopback.io/doc/en/lb2/Using-current-context.html
First it says (marked in red as important!) that using LoopBackContext.getCurrentContext() is not recommended, and then it is used in every example!?
What's the point of giving examples that do not work? Should we simply ignore the whole page? If so, what about the context? :)
Any clarification on this topic is much appreciated.
You can get access to the Express req object by using remote hooks:
var loopback = require('loopback');

module.exports = function (MyModel) {
  MyModel.beforeRemote('findOne', function (ctx, model, next) {
    // access to ctx.req
    console.log(ctx.req.headers);
    next();
  });

  MyModel.beforeRemote('my-custom-remote-method', function (ctx, model, next) {
    console.log(ctx.req.headers);
    next();
  });
};
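If you need the request in every remote method rather than naming each one, LoopBack's remote hooks also accept wildcards; as far as I know something like the following works, but treat it as a sketch:
module.exports = function (MyModel) {
  // '**' matches every remote method on the model, including prototype methods.
  MyModel.beforeRemote('**', function (ctx, unused, next) {
    console.log(ctx.req.headers);
    next();
  });
};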
Sure, you can use a beforeRemote hook to modify the ctx.args property. This property is the input of the remote method (custom or built-in). This way, you can copy a part of the request into this property, and it will be passed on to the built-in method.
Example 1 with the built-in method findOne.
MyModel.beforeRemote('findOne', function (ctx, model, next) {
  ctx.args.filter.extrafield = ctx.req.headers['some-header'];
  next();
});
Then override the findOne method, since that is what you want to do:
MyModel.on('dataSourceAttached', function (obj) {
  var findOne = MyModel.findOne;
  MyModel.findOne = function (filter, cb) {
    console.log(filter.extrafield); // Print what was in the header
    return findOne.apply(this, arguments);
  };
});
And finally call the method with curl
curl -H "some-header: 'hello, world!'" localhost:3000/api/MyModel/findOne
Example 2 with a custom remote printToken, to help you understand further
MyModel.beforeRemote('printToken', function (ctx, model, next) {
  ctx.args.token = ctx.req.headers['some-header'];
  next();
});

MyModel.printToken = function (token, cb) {
  console.log(token);
  cb();
};

MyModel.remoteMethod(
  'printToken',
  {
    accepts: { arg: 'token', type: 'string', optional: true }
  }
);
Then call the remote with curl, and pass the expected header
curl -H "some-header: 'hello, world!'" localhost:3000/api/MyModel/printToken
EDIT: There is a simpler solution that only works for custom remote methods.
When defining your remote method, it is possible to tell LoopBack to pass parts of the HTTP request directly to your remote method as input arguments:
MyModel.remoteMethod(
  'printToken',
  {
    accepts: [
      { arg: 'req', type: 'object', http: { source: 'req' } },
      { arg: 'res', type: 'object', http: { source: 'res' } }
    ]
  }
);
This way, your remote method can access the req and res objects. This is documented here.
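For completeness, here is a sketch of what the matching implementation might look like with that accepts configuration; the accessToken check is only an assumption about how you might pull the current user off the request:
MyModel.printToken = function (req, res, cb) {
  // req and res are the raw Express objects injected by the accepts config above.
  console.log(req.headers['some-header']);
  // If LoopBack's token middleware is enabled, req.accessToken identifies the caller (assumption).
  if (req.accessToken) {
    console.log('current user id:', req.accessToken.userId);
  }
  cb(null);
};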
I have written a (very) simple RESTful web service to retrieve data from MongoDB using Node, Express and Mongoose.
On the server side, I have this code:
router.route('/products').post(function(req, res){
  var product = new Product(req.body);
  product.save(function(err){
    if (err)
      res.send(err);
    res.send({message: 'Product Added'});
  });
When I submit a request from my Ember client, the req.body contains something like the following:
{ attributes:
   { category: 1,
     name: 'y',
     price: 1,
     active: false,
     notes: null } }
The attribute names are exactly the same as in my Mongoose schema. I get no error, but the document created in MongoDB is empty (it only has the _id and __v fields).
What am I doing wrong? Should I convert req.body into something else first?
A couple of things that will help debug:
1) From a quick glance (I haven't used Mongoose before), it looks like the callback function passed to save takes two arguments.
2) I don't know if your code got cut off, but the sample above was missing a matching });
3) I made the function short-circuit on error, so you will not see 'Product Added' unless that is truly the case.
Try these fixes.
router.route('/products').post(function(req, res){
  var product = new Product(req.body);
  product.save(function(err, product){
    if (err) {
      return res.send(err);
    }
    return res.send({message: 'Product Added'});
  });
});
The issue was related to my lack of familiarity with Ember and Node+Express. The data received on the server is slightly different from what I first indicated (the first line was missing):
{ product:
   { attributes:
      { category: ... } } }
On the server side I can access my data using req.body.product.attributes (instead of req.body):
router.route('/products').post(function(req, res){
  var product = new Product(req.body.product.attributes);
  product.save(function(err){
    if (err)
      return res.send(err);
    res.send({message: 'Product Added'});
  });
});
UPDATE
Note that this question applies to Ember Data pre-1.0 beta, the mechanism for loading relationships via URL has changed significantly post-1.0 beta!
I asked a much longer question a while back, but since the library has changed since then, I'll ask a much simpler version:
How do you use DS.Adapter.findHasMany? I am building an adapter and I want to be able to load the contents of a relationship on get of the relationship property, and this looks like the way to do it. However, looking at the Ember Data code, I don't see how this function can ever be called (I can explain in comments if needed).
There's not an easy way with my backend to include an array of ids in the property key in the JSON I send--the serializer I'm using doesn't allow me to hook in anywhere good to change that, and it would also be computationally expensive.
Once upon a time, the Ember Data front page showed an example of doing this "lazy loading"... Is this possible, or is this the "Handle partially-loaded records" item listed on the roadmap, which can't yet be done?
I'm on API revision 11, master branch as of Jan 15.
Update
Okay, the following mostly works. First, I made the following findHasMany method in my adapter, based on the test case's implementation:
findHasMany: function(store, record, relationship, details) {
  var type = relationship.type;
  var root = this.rootForType(type);
  var url = (typeof(details) == 'string' || details instanceof String) ? details : this.buildURL(root);

  this.ajax(url, "GET", {
    success: function(json) {
      var serializer = this.get('serializer');
      var pluralRoot = serializer.pluralize(root);
      var hashes = json[pluralRoot]; // FIXME: Should call some serializer method to get this?
      store.loadMany(type, hashes);

      // add ids to record...
      var ids = [];
      var len = hashes.length;
      for (var i = 0; i < len; i++) {
        ids.push(serializer.extractId(type, hashes[i]));
      }
      store.loadHasMany(record, relationship.key, ids);
    }
  });
}
A prerequisite for the above is a well-working extractId method in your serializer, but the built-in one from the RESTAdapter will probably do in most cases.
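As a rough sketch (not the actual ember-data implementation), an extractId for a payload whose hashes carry a plain id property might look something like this:
// Sketch only: assumes each hash has a plain `id` property.
extractId: function(type, hash) {
  // Coerce to a string so ids compare consistently in the identity map.
  return hash.id + '';
}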
This works, but has one significant problem that I haven't yet really gotten around in any attempt at this lazy-loading approach: if the original record is reloaded from the server, everything goes to pot. The simplest use case that shows this is if you load a single record, then retrieve the hasMany, then later load all the parent records. For example:
var p = App.Post.find(1);
var comments = p.get('comments');
// ...later...
App.Post.find();
In the case of only the code above, what happens is that when Ember Data re-materializes the record it recognizes that there was already a value on the record (posts/1), tries to re-populate it, and follows a different code path which treats the URL string in the JSON hash as an array of single-character IDs. Specifically, it passes the value from the JSON to Ember.EnumerableUtils.map, which understandably enumerates the string's characters as array members.
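To illustrate that failure mode with plain JavaScript (not Ember Data code): a generic map over a string enumerates its characters, which is exactly what happens when the URL string is treated as an array of ids.
var result = Array.prototype.map.call('/posts/1/comments', function(ch) {
  return ch;
});
console.log(result); // ['/', 'p', 'o', 's', 't', 's', '/', '1', ...]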
Therefore, I tried to work around this by "patching" DS.Model.hasManyDidChange, where this occurs, like so:
// Need this function for transplanted hasManyDidChange function...
var map = Ember.EnumerableUtils.map;
DS.Model.reopen({
});
(^ Never mind, this was a really bad idea.)
Update 2
I found I had to do (at least) one more thing to solve the problem mentioned above, when a parent model is re-loaded from the server. The code path where the URL was getting split into single characters was in DS.Model.reloadHasManys, so I overrode this method with the following code:
DS.Model.reopen({
  reloadHasManys: function() {
    var relationships = get(this.constructor, 'relationshipsByName');
    this.updateRecordArraysLater();
    relationships.forEach(function(name, relationship) {
      if (relationship.kind === 'hasMany') {
        // BEGIN FIX FOR OPAQUE HASMANY DATA
        var cachedValue = this.cacheFor(relationship.key);
        var idsOrReferencesOrOpaque = this._data.hasMany[relationship.key] || [];
        if (cachedValue && !Ember.isArray(idsOrReferencesOrOpaque)) {
          var adapter = this.store.adapterForType(relationship.type);
          var reloadBehavior = relationship.options.reloadBehavior;
          relationship.name = relationship.name || relationship.key; // workaround bug in DS.Model.clearHasMany()?
          if (adapter && adapter.findHasMany) {
            switch (reloadBehavior) {
              case 'ignore':
                // FIXME: Should probably replace this._data with references/ids, currently has a string!
                break;
              case 'force':
              case 'reset':
              default:
                this.clearHasMany(relationship);
                cachedValue.set('isLoaded', false);
                if (reloadBehavior == 'force' || Ember.meta(this).watching[relationship.key]) {
                  // reload the data now...
                  adapter.findHasMany(this.store, this, relationship, idsOrReferencesOrOpaque);
                } else {
                  // force getter code to rerun next time the property is accessed...
                  delete Ember.meta(this).cache[relationship.key];
                }
                break;
            }
          } else if (idsOrReferencesOrOpaque !== undefined) {
            Ember.assert("You tried to load many records but you have no adapter (for " + type + ")", adapter);
            Ember.assert("You tried to load many records but your adapter does not implement `findHasMany`", adapter.findHasMany);
          }
        } else {
          this.hasManyDidChange(relationship.key);
        }
        //- this.hasManyDidChange(relationship.key);
        // END FIX FOR OPAQUE HASMANY DATA
      }
    }, this);
  }
});
With that addition, using URL-based hasManys is almost usable, with two main remaining problems:
First, inverse belongsTo relationships don't work correctly--you'll have to remove them all. This appears to be a problem with the way RecordArrays are done using ArrayProxies, but it's complicated. When the parent record gets reloaded, both relationships get processed for "removal", so while a loop is iterating over the array, the belongsTo disassociation code removes items from the array at the same time and then the loop freaks out because it tries to access an index that is no longer there. I haven't figured this one out yet, and it's tough.
Second, it's often inefficient--I end up reloading the hasMany from the server too often...but at least maybe I can work around this by sending a few cache headers on the server side.
Anyone trying to use the solutions in this question, I suggest you add the code above to your app, it may get you somewhere finally. But this really needs to get fixed in Ember Data for it to work right, I think.
I'm hoping this gets better supported eventually. On the one hand, the JSONAPI direction they're going explicitly says that this kind of thing is part of the spec. But on the other hand, Ember Data 0.13 (or rev 12?) changed the default serialized format so that if you want to do this, your URL has to be in a JSON property called *_ids... e.g. child_object_ids ... when it's not even IDs you're sending in this case! This seems to suggest that not using an array of IDs is not high on their list of use-cases. Any Ember Data devs reading this: PLEASE SUPPORT THIS FEATURE!
Welcome further thoughts on this!
Instead of an array of ids, the payload needs to contain something other than an array.
In the case of the RESTAdapter, the returned JSON looks like this:
{blog: {id: 1, comments: [1, 2, 3]}}
If you want to handle the association manually/differently, you can return JSON like this instead:
{blog: {id: 1, comments: "/posts/1/comments"}}
It's up to your adapter then to fetch the data from the specified URL.
See the associated test: https://github.com/emberjs/data/blob/master/packages/ember-data/tests/integration/has_many_test.js#L112
I was glad to find this post; it helped me. Here is my version, based on the current ember-data and your code.
findHasMany: function(store, record, relationship, details) {
  var adapter = this;
  var serializer = this.get('serializer');
  var type = relationship.type;
  var root = this.rootForType(type);
  var url = (typeof(details) == 'string' || details instanceof String) ? details : this.buildURL(root);

  return this.ajax(url, "GET", {}).then(function(json) {
    adapter.didFindMany(store, type, json);
    var list = $.map(json[relationship.key], function(o) { return serializer.extractId(type, o); });
    store.loadHasMany(record, relationship.key, list);
  }).then(null, $.rejectionHandler);
},
For the reload issue, I did this inside the serializer I overrode, based on code I found in another spot:
materializeHasMany: function(name, record, hash, relationship) {
  var type = record.constructor,
      key = this._keyForHasMany(type, relationship.key),
      cache = record.cacheFor('data');

  if (cache) {
    var hasMany = cache.hasMany[relationship.key];
    if (typeof(hasMany) == 'object' || hasMany instanceof Object) {
      record.materializeHasMany(name, hasMany);
      return;
    }
  }

  var value = this.extractHasMany(type, hash, key);
  record.materializeHasMany(name, value);
}
I'm still working on figuring out paging, since some of the collections I'm working with need it.
I got a small step closer to getting it working with revision 13, basing myself on sfossen's findHasMany implementation. For an Ember model 'Author' with a hasMany relationship 'blogPosts', my REST API looks like '/api/authors/:author_id/blog_posts'. When querying the REST API for an author with id 11, the blog_posts field reads '/authors/11/blog_posts'.
I now see the related blog posts being returned by the server, but Ember still throws an obscure error that it cannot read 'id' from an undefined model object when rendering the page. So I'm not quite there yet, but at least the related data is correctly requested from the REST service.
My complete adapter:
App.Adapter = DS.RESTAdapter.extend({
  url: 'http://localhost:3000',
  namespace: 'api',

  serializer: DS.RESTSerializer.extend({
    keyForHasMany: function(type, name) {
      return Ember.String.underscore(name);
    },
    extractHasMany: function(record, json, relationship) {
      var relationShip = relationship + '_path';
      return { url: json[relationShip] };
    }
  }),

  findHasMany: function(store, record, relationship, details) {
    var type = relationship.type;
    var root = this.rootForType(type);
    var url = this.url + '/' + this.namespace + details.url;
    var serializer = this.get('serializer');

    return this.ajax(url, "GET", {}).then(
      function(json) {
        var relationship_key = Ember.String.underscore(relationship.key);
        store.loadMany(type, json[relationship_key]);
        var list = $.map(json[relationship_key], function(o) {
          return serializer.extractId(type, o);
        });
        store.loadHasMany(record, relationship.key, list);
      }).then(null, $.rejectionHandler);
  }
});
Here is my solution, but it is on Ember Data 0.14, so the world has moved on, even if we are still on this code base:
findHasMany: function(store, record, relationship, details) {
  if (relationship.key !== 'activities') {
    return;
  }

  var type = relationship.type,
      root = this.rootForType(type),
      url = this.url + details.url,
      self = this;

  this.ajax(url, "GET", {
    data: { page: 1 }
  }).then(function(json) {
    var data = record.get('data'),
        ids = [],
        references = json[relationship.key];

    ids = references.map(function(ref) {
      return ref.id;
    });

    data[relationship.key] = ids;
    record.set('data', data);

    self.didFindMany(store, type, json);
    record.suspendRelationshipObservers(function() {
      record.hasManyDidChange(relationship.key);
    });
  }).then(null, DS.rejectionHandler);
},
I found replacing the data with the ids worked for me.
ember-data.js: https://github.com/emberjs/data/tree/0396411e39df96c8506de3182c81414c1d0eb981
In short, when there is an error, I want to display error messages in the view, and then the user can either 1) cancel, which will roll back the transaction, or 2) correct the input errors and successfully commit the transaction, passing the validations on the server.
Below is a code snippet from the source. It doesn't include an error callback.
updateRecord: function(store, type, record) {
  var id = get(record, 'id');
  var root = this.rootForType(type);

  var data = {};
  data[root] = this.toJSON(record);

  this.ajax(this.buildURL(root, id), "PUT", {
    data: data,
    context: this,
    success: function(json) {
      this.didUpdateRecord(store, type, record, json);
    }
  });
},
Overall, what is the flow of receiving an error from the server and updating the view? It seems that an error callback should put the model in an isError state, and then the view can display the appropriate messages. Also, the transaction should stay dirty. That way, the transaction can use rollback.
It seems that using store.recordWasInvalid is going in the right direction, though.
This weekend I was trying to figure the same thing out. Going off what Luke said, I took a closer look at the ember-data source for the latest commit (Dec 11).
TL;DR: to handle ember-data update/create errors, simply define becameError() and becameInvalid(errors) on your DS.Model subclass. The cascade triggered by the RESTAdapter's AJAX error callback will eventually call the functions you define.
Example:
App.Post = DS.Model.extend
  title: DS.attr "string"
  body: DS.attr "string"

  becameError: ->
    # handle error case here
    alert 'there was an error!'

  becameInvalid: (errors) ->
    # record was invalid
    alert "Record was invalid because: #{errors}"
Here's the full walk through the source:
In the REST adapter, the AJAX callback error function is given here:
this.ajax(this.buildURL(root, id), "PUT", {
  data: data,
  context: this,
  success: function(json) {
    Ember.run(this, function(){
      this.didUpdateRecord(store, type, record, json);
    });
  },
  error: function(xhr) {
    this.didError(store, type, record, xhr);
  }
});
didError is defined here and it in turn calls the store's recordWasInvalid or recordWasError depending on the response:
didError: function(store, type, record, xhr) {
  if (xhr.status === 422) {
    var data = JSON.parse(xhr.responseText);
    store.recordWasInvalid(record, data['errors']);
  } else {
    store.recordWasError(record);
  }
},
In turn, store.recordWasInvalid and store.recordWasError (defined here) call the record (a DS.Model)'s handlers. In the invalid case, it passes along error messages from the adapter as an argument.
recordWasInvalid: function(record, errors) {
  record.adapterDidInvalidate(errors);
},

recordWasError: function(record) {
  record.adapterDidError();
},
DS.Model.adapterDidInvalidate and adapterDidError (defined here) simply send('becameInvalid', errors) or send('becameError') which finally leads us to the handlers here:
didLoad: Ember.K,
didUpdate: Ember.K,
didCreate: Ember.K,
didDelete: Ember.K,
becameInvalid: Ember.K,
becameError: Ember.K,
(Ember.K is just a dummy function that returns this. See here.)
So, the conclusion is, you simply need to define functions for becameInvalid and becameError on your model to handle these cases.
Hope this helps someone else; the docs certainly don't reflect this right now.
DS.RESTAdapter just got a bit more error handling in this commit but we are still not yet at a point where we have a great recommendation for error handling.
If you are ambitious/crazy enough to put apps in production today with ember-data (as I have been!), it is best to make sure that the likelihood of failures in your API is extremely low. i.e. validate your data client-side.
Hopefully, we can update this question with a much better answer in the coming months.
I just ran into such a situation, not sure if this is already explained anywhere.
I am using:
Em.VERSION : 1.0.0
DS.VERSION : "1.0.0-beta.6"
Ember Validations (dockyard) : Version: 1.0.0.beta.1
Ember I18n
The model was initially mixed in with the Validations mixin.
App.Order = DS.Model.extend(Ember.Validations.Mixin, {
  // .....
  someAttribute: DS.attr('string'),

  /* Client-side input validation with ember-validations */
  validations: {
    someAttribute: {
      presence: {
        message: Ember.I18n.t('translations.someAttributeInputError')
      }
    }
  }
});
In the template, the corresponding Handlebars is added. (Note that ember-validations will automatically add errors to model.errors.<attribute> for input validations; I will use the same approach for server-side validations as well.)
<p>{{t 'translations.myString'}}<br>
  {{view Ember.TextField valueBinding="attributeName"}}
  {{#if model.errors.attributeName.length}}<small class="error">{{model.errors.attributeName}}</small>{{/if}}
</p>
Now, we will be saving the Order
App.get('order').save().then(function () {
  // move to next state?
}, function(xhr) {
  var errors = xhr.responseJSON.errors;
  for (var error in errors) { // this loop is for I18n
    errors[error] = Ember.I18n.t(errors[error]);
  }
  controller.get('model').set('errors', errors); // this will overwrite current errors, if any
});
Now, if some validation error is thrown from the server, the returned packet looks like this:
{"errors":{"attributeName1":"translations.attributeNameError",
           "another":"translations.anotherError"}}
status: 422
It is important to use status 422.
So this way, your attribute(s) can be validated client side and again on server side.
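For completeness, here is a rough sketch of what the server could do to produce that packet; Express is used here purely as an example (the original answer does not say what backend is in play), and validateOrder is a hypothetical helper.
// Hypothetical Express handler returning the 422 + errors shape shown above.
app.put('/api/orders/:id', function (req, res) {
  var errors = validateOrder(req.body.order); // e.g. { someAttribute: 'translations.someAttributeInputError' }
  if (Object.keys(errors).length > 0) {
    return res.status(422).json({ errors: errors });
  }
  // ...persist the order, then respond normally...
  res.json({ order: req.body.order });
});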
Disclaimer : I am not sure if this is the best way!
Since there's currently no good solution in stock Ember Data, I made my own: I added an apiErrors property to DS.Model, and then in my RESTAdapter subclass (I already needed my own) I added error callbacks to the Ajax calls for createRecord and updateRecord that save the errors and put the model in the "invalid" state, which is supposed to mean client-side or server-side validations failed.
Here's the code snippets:
This can go in application.js or some other top-level file:
DS.Model.reopen({
  // Added for better error handling on create/update
  apiErrors: null
});
This goes in the error callbacks for createRecord and updateRecord in a RestAdapter subclass:
error: function(xhr, textStatus, err) {
  console.log(xhr.responseText);
  var errors = null;
  try {
    errors = JSON.parse(xhr.responseText).errors;
  } catch (e) {} // ignore parse error

  if (errors) {
    record.set('apiErrors', errors);
  }
  record.send('becameInvalid');
}
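As a follow-up, since apiErrors is an ordinary property, it can be observed or bound like any other; here is a rough sketch of reacting to it (the record variable and error shape are illustrative):
// Hypothetical: react when the adapter stores errors on the record.
post.addObserver('apiErrors', function() {
  var errors = post.get('apiErrors'); // e.g. { title: ["can't be blank"] }
  // Surface them to the view however you like.
  console.log('validation errors from the API:', errors);
});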