Apollo duplicates the first result to every node in the array of edges

I am working on a React app with react-apollo, fetching data through GraphQL. When I check the response in the browser's network tab, all elements of the array are different, but when I console.log() them in my app, every element of the array is the same as the first element. I don't know how to fix this, please help.

The reason this happens is that the items in your array get "normalized" to the same values in the Apollo cache. In other words, they look the same to Apollo. This usually happens because they share the same Symbol(id).
If you print out your Apollo response object, you'll notice that each of the objects has a Symbol(id), which is used by the Apollo cache. Your array items probably have the same Symbol(id), which causes them to repeat. Why does this happen?
By default, Apollo cache runs this function for normalization.
export function defaultDataIdFromObject(result: any): string | null {
  if (result.__typename) {
    if (result.id !== undefined) {
      return `${result.__typename}:${result.id}`;
    }
    if (result._id !== undefined) {
      return `${result.__typename}:${result._id}`;
    }
  }
  return null;
}
Your array items' properties cause multiple items to return the same data id. In my case, multiple items had _id = null, which caused all of those items to be repeated (the default function only checks _id !== undefined, so null still produces an id like Type:null). When this function returns null, the docs say:
InMemoryCache will fall back to the path to the object in the query,
such as ROOT_QUERY.allPeople.0 for the first record returned on the
allPeople root query.
This is the behavior we actually want when our array items don't work well with defaultDataIdFromObject.
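To see why _id = null collides in the first place, here is a quick sketch (Item is a hypothetical typename):
// Both distinct rows normalize to the identical cache id "Item:null",
// because defaultDataIdFromObject only checks `_id !== undefined`,
// and null passes that check.
defaultDataIdFromObject({ __typename: 'Item', _id: null }); // "Item:null"
defaultDataIdFromObject({ __typename: 'Item', _id: null }); // "Item:null" again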
Therefore the solution is to manually configure these unique identifiers with the dataIdFromObject option passed to the InMemoryCache constructor within your ApolloClient. The following worked for me, as all my objects use _id and have a __typename.
const client = new ApolloClient({
  link: authLink.concat(httpLink),
  cache: new InMemoryCache({
    dataIdFromObject: o => (o._id ? `${o.__typename}:${o._id}` : null),
  }),
});
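A slightly more general variant (a sketch, assuming some of your types key on id instead of _id) guards against null explicitly, so items missing both fields fall back to path-based ids:
const client = new ApolloClient({
  link: authLink.concat(httpLink),
  cache: new InMemoryCache({
    // Prefer _id, then id; return null so Apollo falls back to the
    // query-path id (e.g. ROOT_QUERY.allPeople.0) when both are missing.
    dataIdFromObject: o =>
      o._id != null ? `${o.__typename}:${o._id}`
      : o.id != null ? `${o.__typename}:${o.id}`
      : null,
  }),
});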

Put this in your App.js
cache: new InMemoryCache({
  dataIdFromObject: o =>
    o.id ? `${o.__typename}-${o.id}` : `${o.__typename}-${o.cursor}`,
})

I believe the approach in the other two answers should be avoided in favor of the following approach:
Actually it is quite simple. To understand how it works, simply log obj as follows:
dataIdFromObject: (obj) => {
  let id = defaultDataIdFromObject(obj);
  console.log('defaultDataIdFromObject OBJ ID', obj, id);
  return id; // keep the default behavior while logging
}
You will see that id is null in your logs if you have this problem.
Pay attention to the logged obj. It will be printed for every object returned.
These objects are the ones from which Apollo tries to get a unique id: you have to tell Apollo which field in your object is unique for each object in your array of items returned from GraphQL, the same way you pass a unique value for key in React when you use map or other iterations to render DOM elements.
From the Apollo docs:
By default, InMemoryCache will attempt to use the commonly found
primary keys of id and _id for the unique identifier if they exist
along with __typename on an object.
So look at the logged obj used by defaultDataIdFromObject: if you don't see id or _id, then you should provide the field in your object that is unique for each object.
I changed the example from the Apollo docs to cover three cases where you may have provided incorrect identifiers, for when you have more than one GraphQL type:
dataIdFromObject: (obj) => {
  let id = defaultDataIdFromObject(obj);
  console.log('defaultDataIdFromObject OBJ ID', obj, id);
  if (!id) {
    const { __typename: typename } = obj;
    switch (typename) {
      case 'Blog': {
        // if you are using something other than 'id' and '_id' - 'blogId' in this case
        const undef = `${typename}:${obj.id}`;
        const defined = `${typename}:${obj.blogId}`;
        console.log('in Blogs -', undef, defined);
        // return 'blogId' as it is the unique identifier; using any other
        // identifier will lead to the problem described above.
        return `${typename}:${obj.blogId}`;
      }
      case 'Post': {
        // If you are using a hash key and sort key, the hash key alone is not unique:
        // if you query the DB it will always be the same, and if you scan,
        // quite often it will be the same value.
        // So use both hash key and sort key instead to avoid the problem;
        // using both ensures the ID used by Apollo is always unique.
        // Here Post uses a hashKey of blogId and a sortKey of postId.
        const notUniq = `${typename}:${obj.blogId}`;
        const notUniq2 = `${typename}:${obj.postId}`;
        const uniq = `${typename}:${obj.blogId}${obj.postId}`;
        console.log('in Post -', notUniq, notUniq2, uniq);
        return `${typename}:${obj.blogId}${obj.postId}`;
      }
      case 'Comment': {
        // Let's assume Comment's identifier is 'id', but you don't use it
        // in your app and don't fetch it from GraphQL, i.e. you omitted
        // 'id' in your GraphQL query definition.
        const undefnd = `${typename}:${obj.id}`;
        console.log('in Comment -', undefnd); // logs "Comment:undefined"
        // To fix it, simply add 'id' to your GraphQL query definition.
        return `${typename}:${obj.id}`;
      }
      default: {
        console.log('one falling to default - not good, define this in a separate case:', typename);
        return id;
      }
    }
  }
  return id;
}
I hope you now see that the approach in the other two answers is risky.
YOU ALWAYS HAVE A UNIQUE IDENTIFIER. SIMPLY HELP APOLLO BY LETTING IT KNOW WHICH FIELD IN THE OBJECT IT IS. If it is not fetched, add it to your query definition.
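For example, if the Comment case above logs undefined because id was omitted from the query, just request it (a minimal illustrative query; field names are hypothetical):
const GET_COMMENTS = gql`
  query GetComments {
    comments {
      id    # include the unique identifier so Apollo can normalize each item
      text
    }
  }
`;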

An alternative to the accepted answer: instead of dataIdFromObject, which applies to everything in the query, I was able to provide a keyFields function per type that required it.
const client = new ApolloClient({
  cache: new InMemoryCache({
    typePolicies: {
      ItemType: {
        keyFields: (obj) => obj.id + "-" + obj.language.id,
      },
    },
  }),
});
In the above example, ItemType can be whichever type is specified in your schema. I happened to be joining a non-unique ID with a language to make a unique key, but you can do it however you wish.
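As an aside, keyFields also accepts a declarative array form in Apollo Client 3, including nested fields; assuming the same object shape, this should produce equally unique cache keys:
import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    ItemType: {
      // Key on id plus the nested language.id, mirroring the function above
      keyFields: ["id", "language", ["id"]],
    },
  },
});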

Related

Best attribute to use from AWS CognitoUser class for primary key in DynamoDB

I am trying to make the primary key of my DynamoDB table something like user_uuid. The user is being created in AWS Cognito, and I can't seem to find a uuid-like field as part of the CognitoUser class. I am trying to avoid using the username as the pk.
Can someone guide me to the right solution? I can't seem to find anything on the internet regarding a user_uuid field, and for some reason I can't even find the documentation for the CognitoUser class that is imported from "amazon-cognito-identity-js".
It depends on whether you plan to use email or phone as a 'username'. In that case, I would use the sub because it never changes. But the sub is not k-sortable, so that requires an extra DB item and index/join to make users sortable by date added. If you plan to generate your own GUID/KSUID and only use email/phone as an alias, then I would use the 'username' as a common id between your DB and user pool.
Good luck with your project!
FWIW - the KSUID generators found in the wild are massively overbuilt: 3000+ lines of code and 80+ dependencies. I made my own k-sortable, prefixed, pseudo-random ID generator for Cognito users. Here's the code.
export function idGen(prefix: any) {
  const validPrefix = [
    'prefix1',
    'prefix2'
  ];
  // check if prefix argument is supplied
  if (!prefix) {
    return 'error! must supply prefix';
  }
  // check if value is a valid type
  else if (validPrefix.indexOf(prefix) == -1) {
    return 'error! prefix value supplied must be: ' + validPrefix;
  } else {
    // generate epoch time in seconds
    const epoch = Math.round(Date.now() / 1000);
    // convert epoch time to 6-character base36 string
    const time = epoch.toString(36);
    // generate 20-character base36 pseudo-random string
    const random =
      Math.random().toString(36).substring(2, 12) +
      Math.random().toString(36).substring(2, 12);
    // combine prefix, time and random strings with a ':' divider and return the id
    return prefix + ':' + time + random;
  }
}
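Usage is then a one-liner; the exact output varies per call, but the shape is fixed:
const id = idGen('prefix1');
// => "prefix1:" + 6-char base36 timestamp + 20-char random suffix,
// so ids sort roughly by creation time when compared as strings.
console.log(id);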
Cognito user unique identifiers can be saved to a database using a combination of the "sub" value and the username; please refer to this question for a lengthier discussion.
In the description of amazon-cognito-identity-js (found here, use case 5), they show how to get the userAttributes of a CognitoUser. One of the attributes is the sub value, which you can access, for example, like this:
user.getUserAttributes(function(err, attributes) {
  if (err) {
    // Handle error
  } else {
    // Do something with attributes
    const sub = attributes.find(obj => obj.Name === 'sub').Value;
  }
});
I couldn't find any documentation on the available user attributes either, I recommend using the debugger to look at the user attributes returned from the function.
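As an aside, if you already have a valid session, I believe the sub is also available on the decoded ID token payload with amazon-cognito-identity-js (a sketch; verify against your version of the library):
user.getSession(function(err, session) {
  if (err) {
    console.error(err);
    return;
  }
  // The decoded ID token carries the standard OIDC 'sub' claim
  const sub = session.getIdToken().payload.sub;
  console.log(sub);
});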

Apollo Optimistic UI update with different ID field name

I am trying to make use of the optimistic update functionality of Apollo described in https://www.apollographql.com/docs/react/features/optimistic-ui.html (in a React Native app). Unfortunately, as far as I can gather, this only works if you are updating records that use a field named "id" as their primary key, and I have many cases where that field has a different name. Is there any way I can tell Apollo to work with a different id field name?
Assuming you use Apollo's InMemoryCache, you can pass a dataIdFromObject function that returns a value for an id when initializing the cache. The default, which always uses id, would look like this:
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  dataIdFromObject: o => o.id
});
To change this, you can create a function that checks the __typename key of the incoming object and returns the correct field according to the GraphQL type:
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  dataIdFromObject: ({ __typename, id, ...rest }) => {
    switch (__typename) {
      case 'Foo': return rest.foo;
      case 'Bar': return rest.bar;
      default: return id;
    }
  }
});

Apollo Link State Default Resolver Not Working (@client query parameter variables)

Example here: https://codesandbox.io/s/j4mo8qpmrw
Docs here: https://www.apollographql.com/docs/link/links/state.html#default
TLDR: This is a todo list; the @client query parameter variables don't filter the list.
This is the query, taking in $id as a parameter:
const GET_TODOS = gql`
  query todos($id: Int!) {
    todos(id: $id) @client {
      id
      text
    }
  }
`;
The component passes the variable in:
<Query query={GET_TODOS} variables={{ id: 1 }}>
  {/* Code */}
</Query>
But the default resolver doesn't use the parameter; you can see this in the codesandbox.io example above.
The docs say it should work, but I can't seem to figure what I'm missing. Thanks in advance!
For simple use cases, you can often rely on the default resolver to fetch the data you need. However, to implement something like filtering the data in the cache or manipulating it (like you do with mutations), you'll need to write your own resolver. To accomplish what you're trying to do, you could do something like this:
export const resolvers = {
  Query: {
    todos: (obj, args, ctx) => {
      const query = gql`
        query GetTodos {
          todos @client {
            id
            text
          }
        }
      `
      const { todos } = ctx.cache.readQuery({ query })
      return todos.filter(todo => todo.id === args.id)
    },
  },
  Mutation: {},
}
EDIT: Every Type we define has a set of fields. When we return a particular Type (or List of Types), each field on that type will utilize the default resolver to try to resolve its own value (assuming that field was requested). The way the default resolver works is simple -- it looks at the parent (or "root") object's value and if it finds a property matching the field name, it returns the value of that property. If the property isn't found (or can't be coerced into whatever Scalar or Type the field is expecting) it returns null.
That means we can, for example, return an object representing a single Todo and we don't have to define a resolver for its id or text fields, as long as the object has id and text properties on it. Looking at it another way, if we wanted to create an arbitrary field on Todo called textWithFoo, we could leave the cache defaults as is, and create a resolver like
(obj, args, ctx) => obj.text + ' and FOO!'
In this case, a default resolver would do us no good because the objects stored in the cache don't have a textWithFoo property, so we write our own resolver.
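Wired into an apollo-link-state resolver map, that might look like this (a sketch; the Todo type name is assumed from the example):
export const resolvers = {
  Todo: {
    // Derived field: the cache has no textWithFoo property,
    // so the default resolver would return null without this.
    textWithFoo: (obj, args, ctx) => obj.text + ' and FOO!',
  },
};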
What's important to keep in mind is that a query like todos is just a field too (in this case, it's a field on the Query Type). It behaves pretty much the same way any other field does (including the default resolver behavior). With apollo-link-state, though, the data structure you define under defaults becomes the parent or "root" value for your queries.
In your sample code, your defaults include a single property (todos). Because that's a property on the root object, we can fetch it with a query called todos and still get back the data even without a resolver. The default resolver for the todos field will look in the root object (in this case your cache), see a property called todos and return that.
On the flip side, a query like todo (singular) doesn't have a matching property in the root (cache). You need to write a resolver for it to have it return data. Similarly, if you want to manipulate the data before returning it in the query (with or without arguments), you need to include a resolver.
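For example, a sketch of what a todo (singular) resolver might look like, using the getCacheKey helper apollo-link-state puts on the context (fragment and field names assumed from the example above):
import gql from 'graphql-tag';

export const resolvers = {
  Query: {
    todo: (_, { id }, { cache, getCacheKey }) => {
      // Resolve the cache id Apollo assigned to this Todo, then read it back
      const fragmentId = getCacheKey({ __typename: 'Todo', id });
      const fragment = gql`
        fragment todoDetails on Todo {
          id
          text
        }
      `;
      return cache.readFragment({ fragment, id: fragmentId });
    },
  },
};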

Apollo-client's cacheRedirect vs dataIdFromObject

I'm trying to prevent re-fetch of previously cached data. But the documentation provides a couple of ways of achieving this through cacheRedirects and dataIdFromObject. I'm trying to understand when one technique is used over the other.
Here's an example flow using dataIdFromObject: would this provide enough context for Apollo to fetch the detail-view data from the cache, or do I additionally need a cacheRedirect to link the uuid query?
List view query:
query ListView {
  books {
    uuid
    title
    abstract
  }
}
Detail view query:
query DetailView {
  book(uuid: $uuid) {
    uuid
    title
    abstract
  }
}
cache constructor args with dataIdFromObject:
new InMemoryCache({
  dataIdFromObject: object => {
    switch (object.__typename) {
      case 'book': return `book:${object.uuid}`;
      default: return defaultDataIdFromObject(object); // default handling
    }
  }
});
I believe you are incorrect when you say
But the documentation provides a couple of ways of achieving this
through cacheRedirects and dataIdFromObject.
I believe only cacheRedirects achieves what you want.
dataIdFromObject allows you to customize how ApolloClient uniquely identifies your objects. By default, ApolloClient assumes your objects have either an id or an _id property, and it combines the object's __typename with that property to create a unique identifier.
By providing a dataIdFromObject function, you can customize this unique identifier. For example, if all of your objects have an id which is a uuid, you could supply a dataIdFromObject function that instructs ApolloClient to use the object's id property alone, without the __typename prefix.
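Applied to the list/detail flow in the question, the two pieces would work together something like this (a sketch; getCacheKey runs your custom dataIdFromObject under the hood):
import { InMemoryCache, defaultDataIdFromObject } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  dataIdFromObject: object =>
    object.__typename === 'book'
      ? `book:${object.uuid}`
      : defaultDataIdFromObject(object),
  cacheRedirects: {
    Query: {
      // Redirect the detail query to the book the list query already cached
      book: (_, args, { getCacheKey }) =>
        getCacheKey({ __typename: 'book', uuid: args.uuid }),
    },
  },
});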

Adding item to filtered result from ember-data

I have a DS.Store which uses the DS.RESTAdapter and a ChatMessage object defined as such:
App.ChatMessage = DS.Model.extend({
  contents: DS.attr('string'),
  roomId: DS.attr('string')
});
Note that a chat message exists in a room (not shown for simplicity), so in my chat messages controller (which extends Ember.ArrayController) I only want to load messages for the room the user is currently in:
loadMessages: function() {
  var room_id = App.getPath("current_room.id");
  this.set("content", App.store.find(App.ChatMessage, {room_id: room_id}));
}
This sets the content to a DS.AdapterPopulatedModelArray and my view happily displays all the returned chat messages in an {{#each}} block.
Now it comes to adding a new message, I have the following in the same controller:
postMessage: function(contents) {
  var room_id = App.getPath("current_room.id");
  App.store.createRecord(App.ChatMessage, {
    contents: contents,
    room_id: room_id
  });
  App.store.commit();
}
This initiates an ajax request to save the message on the server. All good so far, but it doesn't update the view. This pretty much makes sense, as it's a filtered result; if I remove the room_id filter on App.store.find, then it updates as expected.
Trying this.pushObject(message) with the message record returned from App.store.createRecord raises an error.
How do I manually add the item to the results? There doesn't seem to be a way as far as I can tell, since both DS.AdapterPopulatedModelArray and DS.FilteredModelArray are immutable.
So, a couple of thoughts:
(reference: https://github.com/emberjs/data/issues/190)
How to listen for new records in the datastore:
A normal Model.find()/findQuery() will return you an AdapterPopulatedModelArray, but that array stands on its own... it won't know that anything new has been loaded into the database.
A Model.find() with no params (or store.findAll()) will return you ALL records in a FilteredModelArray, and ember-data will "register" it into a list, so any new records loaded into the database will be added to this array.
Calling Model.filter(func) will give you back a FilteredModelArray, which is also registered with the store... and any new records in the store will cause ember-data to "updateModelArrays", meaning it will call your filter function with the new record, and if you return true, it will stick it into your existing array.
SO WHAT I ENDED UP DOING: immediately after creating the store, I call store.findAll(), which gives me back an array of all models of a type... and I attach that to the store... then anywhere else in the code, I can add array observers to those lists... something like:
App.MyModel = DS.Model.extend();
App.store = DS.Store.create();
App.store.allMyModels = App.store.findAll(App.MyModel);

// some other place in the app... a list controller perhaps
App.store.allMyModels.addArrayObserver({
  arrayWillChange: function(arr, start, removeCount, addCount) {},
  arrayDidChange: function(arr, start, removeCount, addCount) {}
});
How to push a model into one of those "immutable" arrays:
First, note that all ember-data Model instances (records) have a clientId property... a unique integer that identifies the model in the datastore cache, whether or not it has a real server-id yet (for example, right after a Model.createRecord).
So the AdapterPopulatedModelArray itself has a "content" property... an array of these clientIds... and when you iterate over the AdapterPopulatedModelArray, the iterator loops over the clientIds and hands you back the full model instances (records) that map to each clientId.
SO WHAT I HAVE DONE (this doesn't mean it's "right"!) is to watch those findAll arrays and push new clientIds into the content property of the AdapterPopulatedModelArray... SOMETHING LIKE:
arrayDidChange: function(arr, start, removeCount, addCount) {
  if (addCount == 0) { return; } // only care about adds right now... not removes
  arr.slice(start, start + addCount).forEach(function(item) {
    // push the clientId of this item into the AdapterPopulatedModelArray's content list
    self.getPath('list.content').pushObject(item.get('clientId'));
  });
}
What I can say is: "it's working for me" :) Will it break on the next ember-data update? Totally possible.
For those still struggling with this, you can get yourself a dynamic DS.FilteredArray instead of a static DS.AdapterPopulatedRecordArray by using the store.filter method. It takes three parameters: type, query, and finally a filter callback.
loadMessages: function() {
  var self = this,
      room_id = App.getPath('current_room.id');

  this.store.filter(App.ChatMessage, {room_id: room_id}, function(msg) {
    return msg.get('roomId') === room_id;
  })
  // set content only after the promise has resolved
  .then(function(messages) {
    self.set('content', messages);
  });
}
You could also do this in the model hook without the extra clutter, because the model hook will accept a promise directly:
model: function() {
  var room_id = App.getPath("current_room.id");
  return this.store.filter(App.ChatMessage, {room_id: room_id}, function(msg) {
    return msg.get('roomId') === room_id;
  });
}
My reading of the source (DS.Store.find) shows that what you'd actually receive in this instance is an AdapterPopulatedModelArray. A FilteredModelArray would auto-update as you create records. There are passing tests for this behaviour.
As of ember-data 1.13, store.filter was marked for removal; see the following ember blog post.
The feature was made available as a mixin. The GitHub page contains the following note
We recommend that you refactor away from using this addon. Below is a short guide for the three filter use scenarios and how to best refactor each.
Why? Simply put, it's far more performant (and not a memory leak) for you to manage filtering yourself via a specialized computed property tailored specifically to your needs.
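A hedged sketch of that refactor (Ember CLI style; currentRoomId and the model name are assumptions carried over from the examples above), with the route's model hook returning a live record array via this.store.peekAll('chat-message') so the computed property stays up to date:
// app/controllers/chat-messages.js
import Ember from 'ember';

export default Ember.Controller.extend({
  // Recomputes whenever records are added/removed or any message's roomId changes
  messages: Ember.computed('model.@each.roomId', 'currentRoomId', function() {
    var roomId = this.get('currentRoomId');
    return this.get('model').filterBy('roomId', roomId);
  })
});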