Given a simple GraphQL schema that looks something like:
type Contact {
  id: ID!
  name: String
}

type Query {
  RecentContacts: [Contact]
  Contact(id: ID!): Contact
}
If I query RecentContacts:
const GET_RECENT_CONTACTS = gql`
  query RecentContacts {
    RecentContacts {
      id
      name
    }
  }
`;

<Query client={client} query={GET_RECENT_CONTACTS}>
  {({ loading, error, data }) => { /* etc... */ }}
</Query>
And receive data for e.g. contacts with ids 1 and 2, which is cached like:
ROOT_QUERY
  RecentContacts: [Contact]
    0: Contact:1
      id: 1
      name: Jack
    1: Contact:2
      id: 2
      name: Jill
Is there a way to let Apollo know that it can use the already-cached entries for the queries Contact(id: 1) and Contact(id: 2) without needing to make another network request just to bring back data that already exists in the cache?
Specifically, I would like this query to skip the network request after RecentContacts has been queried, since the data it needs is already in the cache (albeit returned by a different query):
const GET_CONTACT = gql`
  query Contact($id: ID!) {
    Contact(id: $id) {
      id
      name
    }
  }
`;

<Query client={client} query={GET_CONTACT} variables={{ id: 1 }}>
  {({ loading, error, data }) => { /* etc... */ }}
</Query>
You can use cache redirects to do just that. Here's the example from the docs modified to work with your schema:
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      Contact: (_, args, { getCacheKey }) =>
        getCacheKey({ __typename: 'Contact', id: args.id }),
    },
  },
});
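If you are on Apollo Client 3, note that cacheRedirects has been removed in favor of field policies; a roughly equivalent sketch (assuming the same Contact type) uses a read function with toReference:

import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // Redirect Query.Contact(id: ...) to the normalized Contact:<id>
        // entry already written to the cache by the RecentContacts query.
        Contact(_, { args, toReference }) {
          return toReference({ __typename: 'Contact', id: args.id });
        },
      },
    },
  },
});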
I am using AWS AppSync GraphQL and am trying to filter a list by a nested object's value.
My schema looks like this:
type Post @model {
  id: ID
  title: String
  content: String
  hidden: Boolean
}

type PinnedPost @model {
  id: ID!
  userID: ID @index(name: "byUser", sortKeyFields: ["postID"])
  user: User @hasOne(fields: ["userID"])
  postID: ID
  post: Post @hasOne(fields: ["postID"])
}
I would like to run a query to list the PinnedPost for a user, but filter out the hidden ones, like so:
const pinnedData = await API.graphql(graphqlOperation(
  listPinnedPosts, {
    filter: {
      userID: {
        eq: userInfo.attributes.sub
      },
      post: {
        hidden: {
          eq: false
        },
      }
    }
  }
))
I have updated the filter input in my schema through the AppSync console to:
input ModelPinnedPostFilterInput {
  id: ModelIDInput
  userID: ModelIDInput
  postID: ModelIDInput
  post: ModelPostFilterInput
  and: [ModelPinnedPostFilterInput]
  or: [ModelPinnedPostFilterInput]
  not: ModelPinnedPostFilterInput
}
There are no errors associated with it, but the nested filter is not being applied: the query returns items with both true and false values for hidden.
This question was sort of answered before:
Appsync & GraphQL: how to filter a list by nested value
but it is not clear to me where I am supposed to edit the mapping template to allow this. How can I achieve this result?
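In the meantime, the fallback I'm using is to filter client-side (a sketch, assuming the generated listPinnedPosts selection set includes post.hidden):

// Fetch the user's pinned posts by userID only, then drop hidden
// posts client-side rather than in the DynamoDB filter.
const pinnedData = await API.graphql(graphqlOperation(listPinnedPosts, {
  filter: { userID: { eq: userInfo.attributes.sub } },
}));

const visiblePinned = pinnedData.data.listPinnedPosts.items.filter(
  (item) => item.post && item.post.hidden === false
);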
I'm running an Angular 11 application that is integrated with AWS Amplify and AppSync, using GraphQL and DynamoDB for the backend.
This is my GraphQL schema:
type School
  @model
  @auth(
    rules: [{ allow: owner, ownerField: "admins", operations: [update, read] }]
  ) {
  id: ID!
  name: String!
  admins: [Member]
  classes: [Class] @connection(name: "SchoolClasses")
  members: [Member] @connection(name: "SchoolMembers")
}

type Class
  @model
  @auth(
    rules: [{ allow: owner, ownerField: "admins", operations: [update, read] }]
  ) {
  id: ID!
  name: String!
  school: School @connection(name: "SchoolClasses")
  admins: [Member]
  members: [Member] @connection(name: "ClassMembers")
}

type Member @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String!
  school: School @connection(name: "SchoolMembers")
  class: Class @connection(name: "ClassMembers")
}
This is my client definition:
const client = new AWSAppSyncClient({
  url: awsconfig.aws_appsync_graphqlEndpoint,
  region: awsconfig.aws_appsync_region,
  auth: {
    type: awsconfig.aws_appsync_authenticationType,
    jwtToken: async () =>
      (await Auth.currentSession()).getAccessToken().getJwtToken(),
  },
  complexObjectsCredentials: () => Auth.currentCredentials(),
  cacheOptions: {
    dataIdFromObject: (obj: any) => `${obj.__typename}:${obj.myKey}`,
  },
});
This is my query method:
client
  .query({
    query: ListSchools,
  })
  .then((data: any) => {
    console.log('data from listSchools ', data);
    console.log(data.data.listSchools.items);
  });
This is my query definition:
import gql from 'graphql-tag';

export default gql`
  query ListSchools(
    $filter: ModelSchoolFilterInput
    $limit: Int
    $nextToken: String
  ) {
    listSchools(filter: $filter, limit: $limit, nextToken: $nextToken) {
      items {
        id
        name
        admins {
          id
          name
          createdAt
          updatedAt
          owner
        }
        classes {
          nextToken
        }
        members {
          nextToken
        }
        createdAt
        updatedAt
      }
      nextToken
    }
  }
`;
The output for data in the console looks like this:
{
  "data": {
    "listSchools": {
      "items": [],
      "nextToken": null,
      "__typename": "ModelSchoolConnection"
    }
  },
  "loading": false,
  "networkStatus": 7,
  "stale": false
}
As you can see, items is an empty array, but I currently have 3 items in my DynamoDB table.
What am I doing wrong?
I have checked whether it is querying a different region, but it is using the correct one, so I should be seeing the results. Also, wouldn't it throw an error if we were querying the wrong table?
I figured it out. The issue was in the GraphQL schema definition, where I had set the @auth rule to only allow certain admins to access the list; that's why I was getting back an empty array. I removed the @auth directive and it now returns the proper list of items.
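Note that removing @auth leaves the model readable to any caller the API's default auth mode admits. If owner-based access is still wanted, one likely culprit (an assumption, not verified against this project) is that ownerField: "admins" points at a list of Member objects, while owner auth matches the signed-in username against the field's value, so it must be a String or [String] of usernames. A sketch:

type School
  @model
  @auth(
    # ownerField must hold usernames (String or [String]), not related objects;
    # "adminUsernames" is a hypothetical field added for illustration.
    rules: [{ allow: owner, ownerField: "adminUsernames", operations: [read, update] }]
  ) {
  id: ID!
  name: String!
  adminUsernames: [String]
}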
The Problem
Looking at this GraphQL query,
query {
  asset {
    name
    interfaces {
      created
      ip_addresses {
        value
        network {
          name
        }
      }
    }
  }
}
How do I define a resolver for just the network field on ip_addresses?
My First Thought
Reading the docs, they give examples of single-level nested resolvers, e.g.:
const resolverMap = {
  Query: {
    author(obj, args, context, info) {
      return find(authors, { id: args.id });
    },
  },
  Author: {
    posts(author) {
      return filter(posts, { authorId: author.id });
    },
  },
};
So I thought: why not just apply this pattern to nested properties?
const resolverMap = {
  Query: {
    asset,
  },
  Asset: {
    interfaces: {
      ip_addresses: {
        network: () => console.log('network resolver called'),
      },
    },
  },
};
But this does not work; when I run the query, I do not see the console log.
Further Testing
I wanted to make sure that a resolver will always be called if it's on the root level of the query's return type.
My hypothesis:
Asset: {
  properties: () => console.log('properties - will be called'), // This will get called
  interfaces: {
    created: () => console.log('created - wont be called'),
    ip_addresses: {
      network_id: () => console.log('network - wont be called'),
    },
  },
},
And sure enough, my console showed:
properties - will be called
The confusing part
But somehow Apollo is still using default resolvers for created and ip_addresses, as I can see the returned data in Playground.
Workaround
I can implement "monolith" resolvers as follows:
Asset: {
  interfaces,
},
Where the interfaces resolver does something like this:
export const interfaces = ({ interfaces }) =>
  interfaces.map(interfaceObj => ({
    ...interfaceObj,
    ip_addresses: ip_addresses(interfaceObj),
  }));

export const ip_addresses = ({ ip_addresses }) =>
  ip_addresses.map(ipAddressObj => ({
    ...ipAddressObj,
    network: network(null, { id: ipAddressObj.network_id }),
  }));
But I feel that this should be handled by default resolvers, as these custom resolvers aren't actually doing anything but passing data down to another resolver.
The resolver map passed to the ApolloServer constructor is an object where each property is the name of a type in your schema. The value of this property is another object, wherein each property is a field for that type. Each of those properties then maps to a resolver function for that specified field.
You posted a query without posting your actual schema, so we don't know what any of your types are actually named, but assuming the type of the network field is, for example, Network, your resolver map would need to look something like:
const resolvers = {
  // ... other types like Query, IPAddress, etc. as needed
  Network: {
    name: () => 'My network name',
  },
};
You can, of course, introduce a resolver for any field in the schema. If the field returns an object type, you return a JavaScript Object and can let the default resolver logic handle resolving "deeper" fields:
const resolvers = {
  IPAddress: {
    network: () => {
      return {
        name: 'My network name',
      };
    },
  },
};
Or...
const resolvers = {
  Interface: {
    ip_addresses: () => {
      return [
        {
          value: 'Some value',
          network: {
            name: 'My network name',
          },
        },
      ];
    },
  },
};
Where you override the default resolver just depends on the point at which the data returned from your root-level field no longer matches your schema. For a more detailed explanation of the default resolver behavior, see this answer.
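To make that concrete, here is a minimal sketch, assuming type names Asset and IPAddress and a hypothetical lookupNetwork helper: the root resolver returns plain objects whose shape already matches the schema, and only the field whose data does not match (network, which arrives as a network_id) gets a custom resolver.

const resolvers = {
  Query: {
    // Plain objects: default resolvers handle name, interfaces, created,
    // and value, because the shapes already match the schema.
    asset: () => ({
      name: 'asset-1',
      interfaces: [
        {
          created: '2021-01-01',
          ip_addresses: [{ value: '10.0.0.1', network_id: 'n1' }],
        },
      ],
    }),
  },
  IPAddress: {
    // Only this field needs custom logic: the parent carries a network_id,
    // not a network object, so the default resolver would return null here.
    network: (ipAddress) => lookupNetwork(ipAddress.network_id),
  },
};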
After making a mutation, the UI does not update with the newly added item until the page is refreshed. I suspect the problem is in the update section of the mutation, but I'm not sure how to troubleshoot further. Any advice is much appreciated.
Query (separate file)
// List.js
export const AllItemsQuery = gql`
  query AllItemsQuery {
    allItems {
      id
      name
      type
      room
    }
  }
`;
Mutation
import { AllItemsQuery } from './List';

const AddItemWithMutation = graphql(createItemMutation, {
  props: ({ ownProps, mutate }) => ({
    createItem: ({ name, type, room }) =>
      mutate({
        variables: { name, type, room },
        optimisticResponse: {
          __typename: 'Mutation',
          createItem: {
            __typename: 'Item',
            name,
            type,
            room,
          },
        },
        update: (store, { data: { submitItem } }) => {
          // Read the data from the cache for this query.
          const data = store.readQuery({ query: AllItemsQuery });
          // Add the item from the mutation to the end.
          data.allItems.push(submitItem);
          // Write the data back to the cache.
          store.writeQuery({ query: AllItemsQuery, data });
        },
      }),
  }),
})(AddItem);
Looks promising. One thing that is wrong is the name of the mutation result, data: { submitItem }, because in the optimisticResponse you declare it as createItem. Did you console.log the result, and what does the mutation definition look like?
update: (store, {
  data: {
    submitItem // should be createItem
  }
}) => {
  // Read the data from our cache for this query.
  const data = store.readQuery({
    query: AllItemsQuery
  });
  // Add our item from the mutation to the end.
  data.allItems.push(submitItem); // also here
  // Write our data back to the cache.
  store.writeQuery({
    query: AllItemsQuery,
    data
  });
}
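Putting that together, the corrected update callback would read (assuming the mutation field is named createItem, matching the optimisticResponse):

update: (store, { data: { createItem } }) => {
  // Read the cached result for the list query.
  const data = store.readQuery({ query: AllItemsQuery });
  // Append the newly created item.
  data.allItems.push(createItem);
  // Write the updated list back to the cache.
  store.writeQuery({ query: AllItemsQuery, data });
}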
I'm not entirely sure that the problem is with the optimisticResponse function you have above (that is the right approach), but I would guess that you're using the wrong return value. For example, here is a response that we're using:
optimisticResponse: {
  __typename: 'Mutation',
  updateThing: {
    __typename: 'Thing',
    thing: result,
  },
},
So if I had to take a wild guess, I would say that you might want to try using the type within your return value:
optimisticResponse: {
  __typename: 'Mutation',
  createItem: {
    __typename: 'Item',
    item: { // This was updated
      name,
      type,
      room
    }
  },
},
As an alternative, you can just refetch. There have been a few times in our codebase where things just don't update the way we want them to and we can't figure out why, so we punt and just refetch after the mutation resolves (mutations return a promise!). For example:
this.props.createItem({
  // ... variables go here
}).then(() => this.props.data.refetch());
The second approach should work every time. It's not exactly optimistic, but it will cause your data to update.
I ran into a problem when using Ember Data to save a model. The JSON structure for my model looks like:
{
  post: {
    id: 1,
    name: 'post-1',
    trigger: ['trigger-1', 'trigger-2'],
    data: ['data-1', 'data-2']
  }
}
Because 'data' and 'trigger' are reserved keywords for DS.Model, I created a mapping and renamed them to sc_data and sc_trigger, as suggested by Jurre, using:
Application.SERIALIZATION_KEY_MAPPINGS = {
  'sc_data': 'data',
  'sc_trigger': 'trigger'
};

Application.ApplicationSerializer = DS.ActiveModelSerializer.extend({
  keyForAttribute: function (attr) {
    if (Application.SERIALIZATION_KEY_MAPPINGS.hasOwnProperty(attr)) {
      return Application.SERIALIZATION_KEY_MAPPINGS[attr];
    } else {
      return this._super(attr);
    }
  }
});
So my model for post looks like:
Application.Post = DS.Model.extend({
  name: DS.attr('string'),
  sc_trigger: DS.attr(),
  sc_data: DS.attr()
});
Here sc_trigger and sc_data are the renamed mappings for trigger and data.
It all worked fine when using this.store.find('post') and this.store.find('post', 1), i.e. GET calls. When I create a record using this.store.createRecord('post'), it creates a record with the correct attribute names sc_data and sc_trigger.
var newPost = this.store.createRecord('post', {
  name: 'test post',
  sc_data: [],
  sc_trigger: []
});
And the serialize function interprets the mapping correctly as well; newPost.serialize() returns:
{
  name: 'test post',
  data: [],
  trigger: []
}
But when I call newPost.save(), the data and trigger fields are missing from the HTTP request body of the POST call. It only has:
{
  name: 'test post'
}
I have no idea why newPost.save() doesn't generate the correct request body when serialize() is working just fine.
Update
I managed to get around this by removing the keyForAttribute mapping and using
Application.ApplicationSerializer = DS.ActiveModelSerializer.extend({
  attrs: {
    sc_data: { key: 'data' },
    sc_trigger: { key: 'trigger' }
  }
});
This seems to be the suggested way to handle data with reserved keywords.
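With the attrs mapping in place, save() serializes the renamed attributes back to their original keys, so the POST body now matches the expected payload (a sketch of the assumed output):

// Expected POST body with the attrs mapping applied:
{
  post: {
    name: 'test post',
    data: [],
    trigger: []
  }
}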
Which Ember Data and Ember.js versions are you using?
Try saving with an id, like:
var newPost = this.store.createRecord('post', {
  id: 1,
  name: 'test post',
  sc_data: [],
  sc_trigger: []
});
Save and create always expect an id, so it's better to save/create the record with an id.