Apollo iOS: how to handle partial decoding failure

I'm trying to see if there is a way to do more robust handling of partial decoding failures of Apollo generated Swift classes. Currently, if even one field of one object in an array fails to parse from the network response, the entire collection of objects fails to parse and our iOS client gets no data.
Our graphql is defined something like:
query mobile_getCollections {
  getCollections {
    # ... other fields
    items {
      activeRange {
        expires # Int!
        starts  # Int!
      }
    }
  }
}
So the Apollo generated Swift code is expecting non-nil Ints when decoding these values. However, due to a backend error (that we would like to make the mobile clients more resilient to), the API will occasionally send us a malformed date String instead of a unix timestamp Int. This causes parsing of the entire mobile_getCollections result to fail, because the Apollo generated query class typing can't be perfectly satisfied.
Ideally, I'd like to just throw out the one item in the collection that failed to be parsed correctly and leave the remaining items intact. Is it possible to do something like that using Apollo?
(Yes, I know the real answer here is to fix the backend, but is there anything I can do in the meantime to more gracefully handle similar partial parsing failure issues?)
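For what it's worth, the "throw out the one bad item, keep the rest" behaviour is easy to express with plain Foundation Codable, which may be useful if you fall back to fetching and parsing the raw JSON yourself as a stopgap. The sketch below is only an illustration under that assumption: the ActiveRange/Item types and the sample payload are hypothetical stand-ins mirroring the query above, not Apollo-generated models, and this is not how Apollo's generated code decodes responses.

import Foundation

// Hypothetical stand-ins mirroring the query above; NOT Apollo-generated types.
struct ActiveRange: Decodable {
    let expires: Int
    let starts: Int
}

struct Item: Decodable {
    let activeRange: ActiveRange
}

// Decodes (and discards) any JSON element; used only to step past a bad element.
private struct IgnoredValue: Decodable {
    init(from decoder: Decoder) throws {}
}

// An array wrapper that drops elements that fail to decode
// instead of failing the whole collection.
struct LossyArray<Element: Decodable>: Decodable {
    let elements: [Element]

    init(from decoder: Decoder) throws {
        var container = try decoder.unkeyedContainer()
        var result: [Element] = []
        while !container.isAtEnd {
            do {
                result.append(try container.decode(Element.self))
            } catch {
                // The container's index only advances on a successful decode,
                // so consume the malformed element with a throwaway type.
                _ = try? container.decode(IgnoredValue.self)
            }
        }
        elements = result
    }
}

// Hypothetical payload: the second item has the malformed date String.
let json = Data("""
[
  { "activeRange": { "expires": 1700000000, "starts": 1690000000 } },
  { "activeRange": { "expires": "2023-11-14", "starts": 1690000000 } }
]
""".utf8)

let items = (try? JSONDecoder().decode(LossyArray<Item>.self, from: json))?.elements ?? []
print(items.count) // 1 -- the malformed item is dropped, the valid one is kept

This still leaves the question of wiring such tolerance into Apollo itself, but it shows the per-item behaviour being asked about.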

Related

Ember.js: a backend request inside a computed getter fires multiple times, causing an infinite loop

I have a table, and in every row I have a getter that makes a backend request via the store service. Somehow, when there is one row it works as expected, but when there are multiple rows the getter keeps recalculating, which sends infinite requests to the backend. I am using a Glimmer component.
I cannot use a model relation on the Ember side at this point; there is a deep chain on the backend side. That's why I am making the backend request directly.
get <function_name>() {
return this.store.query('<desired_model_name>', { <dependent1_id>: <dependent1_id_from_args>, <dependent2_id>: <dependent2_id_from_args> });
}
I fixed this problem by using the constructor instead. But do you have any idea why this getter recalculates all the time? The dependent ids are constant.
The weird thing is that when the result is an empty array [] it does not recalculate every time. Even when the query results are the same, it still recalculates every time, making infinite requests to the backend.
But do you have any idea why this getter recalculates all the time?
When something like this happens, it's because you're reading @tracked data that is changed later (maybe when the query finishes).
Because getters are re-run on every access, you'll want to throw @cached on top of it:
// cached is available in ember-source 4.1+
// or as early as 3.13 via polyfill:
// https://github.com/ember-polyfills/ember-cached-decorator-polyfill
import { cached } from '@glimmer/tracking';

// ...

@cached
get <function_name>() {
  return this.store.query(/* ... */);
}
This gives the getter a stable object reference: the body of the getter only re-evaluates if tracked data accessed within the getter has changed.
The weird thing is that when the result is an empty array [] it does not recalculate every time. Even when the query results are the same, it still recalculates every time, making infinite requests to the backend.
Given this observation, it's possible that when the query finishes it changes tracked data that it itself consumed during the initial render, in which case you'd still have an infinite loop, even with @cached (because tracked data that was accessed during render is changing).
Getting around that is fairly hard in a getter.
Using the constructor is an OK solution for getting your initial data, but it means you opt out of reactive updates for your query (if you need those, for example if the query arguments change).
If you're using ember-source 3.25+ and you want something a little easier to work with, maybe ember-data-resources suits your needs.
The above code would be:
import { query } from 'ember-data-resources';
// ...
// in the class body
data = query(this, 'model name', () => ({ query stuff }));
docs here
This builds off some primitives from ember-resources which implement the Resource pattern, which will be making a strong appearance in the next edition of Ember.

How to return the transaction ID and timestamp from an invoke function in chaincode?

I need guidance on returning the transaction ID and timestamp to the client interface after each invoke function call.
I have found that stub.GetTxID() is used for getting the transaction ID, but the peer response only carries a single payload, so I am not able to return the TxID to the client interface.
You can create a response object to capture the relevant information, marshal it into JSON and return it, something like this:
type ChaincodeResponse struct {
  // Fields must be exported (capitalized) and tagged,
  // otherwise encoding/json skips them when marshaling.
  TxID string                `json:"txID"`
  Time *timestamp.Timestamp  `json:"time"`
}
and then
// rest of the invoke code skipped, here is
// the relevant part:
// GetTxTimestamp returns (*timestamp.Timestamp, error), so check the error first.
ts, err := stub.GetTxTimestamp()
if err != nil {
  return shim.Error(err.Error())
}
resp, err := json.Marshal(ChaincodeResponse{
  TxID: stub.GetTxID(),
  Time: ts,
})
if err != nil {
  return shim.Error(err.Error())
}
// return the JSON representation of the relevant information
// in the response
return shim.Success(resp)
I'm working on something at the moment that requires all of our transactions to be timestamped. I tried some things based on your code above, but I think the API has moved on considerably since 2017.
Currently, I'm adding a created: stub.GetTxTimestamp() field to everything we put on the ledger and then reading it back in any queries. Though I'm wondering if the timestamps are already generated and stored, making this unnecessary - do you know if a timestamp is still automatically stored on each item put on the ledger?

Intercepting error handling with LoopBack

Is there a complete, consistent and well-documented source of information on error handling in LoopBack?
Things like error codes and their meanings, and their relation to HTTP statuses. I've already read the docs and have not found anything like this.
I would like to translate all the messages to add multi-language support to my app. I would also like to add my own custom messages, with their own codes, and use them consistently alongside the other LoopBack errors.
In order to achieve this, I need to intercept all the errors (I've done this already) and to know all the possible codes, so I can translate them.
For example, if there is an error with code 555, I have to know what it means and treat it accordingly.
Any ideas?
I need to "catch" all the messages and translate them
This is the beginning of an answer. You can write an error-handling middleware that will intercept any error returned by the server. You will then need to implement the translation logic yourself.
module.exports = function() {
  return function logError(err, req, res, next) {
    if (err) {
      console.log('ERR', req.url, err);
    }
    next();
  };
};
This middleware must be configured to be called in the final phase. Save the code above in log-error.js for instance, then modify server/middleware.json
{ "final": { "./middleware/log-error": {} } }
I need a full list of loopback codes/messages
I'm pretty sure there is no such thing. Errors are built and returned all over the place in the code, not centralized anywhere.

Dynamic messages with gettext (AngularJS)

I have an application with a Django backend and an AngularJS front-end.
I use the angular-gettext plugin along with Grunt to handle translations.
The thing is, I sometimes receive dynamic strings from my backend through the API, for instance a MySQL error about a foreign key constraint or a duplicate key entry.
How can I add these strings to the .pot file, or handle non-hardcoded strings in general?
I've tried the following, but of course it cannot work:
(function () {
  angular.module('app').factory('HttpInterceptor', ['$q', '$injector', '$rootScope', '$cookieStore', 'gettext',
    function ($q, $injector, $rootScope, $cookieStore, gettext) {
      return {
        responseError: function (rejection) {
          gettext('static string');      // it works
          gettext(rejection.data.error); // does not work
          $rootScope.$emit('errorModal', rejection.data);
          // Return the promise rejection.
          return $q.reject(rejection);
        }
      };
    }]);
})();
One solution I could think of would be to write every dynamic string into a JSON object, send this JSON to the server and, from there, write a static file containing these strings so gettext can extract them.
What do you suggest?
I also use angular-gettext and have strings returned from the server that need to be translated. We did not like the idea of having a separate translation system for those messages so we send them over in the default language like normal.
To allow this to work we did two things. We created a function in our backend which we can call to retrieve all the possible strings to translate. In our case it's mainly static data that only changes once in a while. Ideally this would be automated but it's fine for now.
That list is formatted properly, through code, into HTML with the translate tag. This file is not deployed; it is just there to allow the extraction task to find the strings.
Secondly, we created a filter to do the translation on the interpolated value: instead of translating {{foo}}, it will translate the word bar if that was the value of foo. We called it postTranslate and it's simply:
angular
  .module('app')
  .filter('postTranslate', ['gettextCatalog', function (gettextCatalog) {
    return function (s) {
      return gettextCatalog.getString(s);
    };
  }]);
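In a template the filter is then applied to the interpolated value, e.g. something like {{ foo | postTranslate }}, so the server-provided value of foo is looked up in the translation catalog at render time.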
As for things that are not in the database, we have another file where we manually put them in. So your error messages could go there.
If errors are all you are worried about, though, you may rather consider not showing the raw error messages directly and instead determining which user-friendly error message to show. That user-friendly error message lives in the front-end and therefore circumvents all of this other headache :)

Retrieve service information from WFS GetCapabilities request with GeoExt

This is probably a very simple question but I just can't seem to figure it out.
I am writing a Javascript app to retrieve layer information from a WFS server using a GetCapabilities request using GeoExt. GetCapabilities returns information about the WFS server -- the server's name, who runs it, etc., in addition to information on the data layers it has on offer.
My basic code looks like this:
var store = new GeoExt.data.WFSCapabilitiesStore({ url: serverURL });
store.on('load', successFunction);
store.on('exception', failureFunction);
store.load();
This works as expected, and when the loading completes, successFunction is called.
successFunction looks like this:
successFunction = function(dataProxy, records, options) {
  doSomeStuff();
}
dataProxy is an Ext.data.DataProxy object, records is a list of records, one for each layer on the WFS server, and options is empty.
And here is where I'm stuck: In this function, I can get access to all the layer information regarding data offered by the server. But I also want to extract the server information that is contained in the XML fetched during the store.load() (see below). But I can't figure out how to get it out of the dataProxy object, where I'm sure it must be squirreled away.
Any ideas?
The fields I want are contained in this snippet:
<ows:ServiceIdentification>
  <ows:Title>G_WIS_testIvago</ows:Title>
  <ows:Abstract/>
  <ows:Keywords>
    <ows:Keyword/>
  </ows:Keywords>
  <ows:ServiceType>WFS</ows:ServiceType>
  <ows:ServiceTypeVersion>1.1.0</ows:ServiceTypeVersion>
  <ows:Fees/>
  <ows:AccessConstraints/>
</ows:ServiceIdentification>
Apparently, GeoExt currently discards the server information, undermining the entire premise of my question.
Here is a code snippet that can be used to tell GeoExt to grab it. I did not write this code, but I have tested it and found it works well for me:
https://github.com/opengeo/gxp/blob/master/src/script/plugins/WMSSource.js#L37