Nested resolvers with depth greater than 1 - apollo

The Problem
Looking at this GraphQL query,
query {
  asset {
    name
    interfaces {
      created
      ip_addresses {
        value
        network {
          name
        }
      }
    }
  }
}
How do I define a resolver for just the network field on ip_addresses?
My First Thought
Reading the docs, they give examples of singly nested queries, e.g.
const resolverMap = {
  Query: {
    author(obj, args, context, info) {
      return find(authors, { id: args.id });
    },
  },
  Author: {
    posts(author) {
      return filter(posts, { authorId: author.id });
    },
  },
};
So I thought - why not just apply this pattern to nested properties?
const resolverMap = {
  Query: {
    asset,
  },
  Asset: {
    interfaces: {
      ip_addresses: {
        network: () => console.log('network resolver called'),
      },
    },
  },
};
But this does not work; when I run the query, I do not see the console log.
Further Testing
I wanted to make sure that a resolver will always be called if it's on the root level of the query's return type.
My hypothesis:
Asset: {
  properties: () => console.log('properties - will be called'), // This will get called
  interfaces: {
    created: () => console.log('created - wont be called'),
    ip_addresses: {
      network_id: () => console.log('network - wont be called'),
    },
  },
},
And sure enough my console showed
properties - will be called
The confusing part
But somehow Apollo is still using default resolvers for created and ip_addresses, as I can see the returned data in the playground.
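As far as I understand it, the default resolver simply reads the field off the parent object, roughly like this (my own sketch, not the actual graphql-js source):

function defaultFieldResolver(source, args, context, info) {
  if (source == null) return undefined;
  const property = source[info.fieldName];
  // If the parent exposes the field as a function, call it with the field arguments
  return typeof property === 'function' ? property(args, context, info) : property;
}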
Workaround
I can implement "monolith" resolvers as follows:
Asset: {
  interfaces,
},
Where the interfaces resolver does something like this:
export const interfaces = ({ interfaces }) =>
  interfaces.map(interfaceObj => ({ ...interfaceObj, ip_addresses: ip_addresses(interfaceObj) }));

export const ip_addresses = ({ ip_addresses }) =>
  ip_addresses.map(ipAddressObj => ({
    ...ipAddressObj,
    network: network(null, { id: ipAddressObj.network_id }),
  }));
But I feel that this should be handled by the default resolvers, as these custom resolvers aren't actually doing anything but passing data down to another resolver.

The resolver map passed to the ApolloServer constructor is an object where each property is the name of a type in your schema. The value of this property is another object, wherein each property is a field for that type. Each of those properties then maps to a resolver function for that specified field.
You posted a query without posting your actual schema, so we don't know what any of your types are actually named, but assuming the type returned by the network field is named, for example, Network, your resolver map would need to look something like this:
const resolver = {
  // ... other types like Query, IPAddress, etc. as needed
  Network: {
    name: () => 'My network name'
  }
}
You can, of course, introduce a resolver for any field in the schema. If the field returns an object type, you return a JavaScript Object and can let the default resolver logic handle resolving "deeper" fields:
const resolvers = {
  IPAddress: {
    network: () => {
      return {
        name: 'My network name',
      }
    }
  }
}
Or...
const resolvers = {
  Interface: {
    ip_addresses: () => {
      return [
        {
          value: 'Some value',
          network: {
            name: 'My network name',
          },
        },
      ]
    }
  }
}
Where you override the default resolver just depends on the point at which the data returned from your root-level field no longer matches your schema. For a more detailed explanation of the default resolver behavior, see this answer.
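Putting that together for the query in the question, and assuming type names such as Asset and IPAddress (the real names come from your schema; getAsset and getNetwork below are hypothetical data-fetching helpers), a minimal resolver map might look like this:

const resolvers = {
  Query: {
    // Resolve the root field; the fields below it can fall back to default resolvers
    asset: (obj, args, context, info) => getAsset(args.id),
  },
  IPAddress: {
    // Only this field gets custom logic; created, value, etc. still use the default resolver
    network: (ipAddress, args, context, info) => getNetwork(ipAddress.network_id),
  },
};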

Related

Apollo: Using executor function server side removes operation names

I currently have the following code in a codebase using "@apollo/client": "^3.4.17":
const getFrontEndApiSchema = async (authToken: string, hostname: string) => {
  const executor = async ({
    document,
    variables,
  }: Parameters<Parameters<typeof introspectSchema>[0]>[0]) => {
    const fetchResult = await crossFetch(`${resolveApiUri(hostname)}/graphql`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authentication-Token': authToken,
      },
      body: JSON.stringify({ query: print(document), variables }),
    })
    return fetchResult.json()
  }

  return makeExecutableSchema({
    typeDefs: wrapSchema({
      schema: buildClientSchema(await unzipSchema()),
      executor,
    }),
  })
}

export const getSchema = async () => {
  const frontEndSchema = await getFrontEndApiSchema()
  return stitchSchemas({
    subschemas: frontEndSchema ? [frontEndSchema, schema] : [schema],
    mergeDirectives: true,
  })
}

const apolloClient = createApolloClient(
  {
    schema,
    rootValue: { request: req },
  },
  getAuthenticationToken(req),
  false,
)
This works and fires off requests. However, we noticed during a telemetry exercise (whereby we are trying to track traces through individual operations in DataDog / NewRelic) that a single operation is effectively being split up into its constituent queries and sent without its parent operation name.
It's not clear to me from reading the docs why I would need this executor function for GraphQL queries rather than the standard Apollo link chain (similar to what I'm using for the client-side Apollo client).
So I removed the seemingly unneeded executor function, leaving the following:
makeExecutableSchema({
  typeDefs: wrapSchema({
    schema: buildClientSchema(await unzipSchema()),
  }),
})
This worked insofar as the operations were being made and returned a result; however, it was returning results matching those that would be returned if unauthenticated (i.e. no authentication token set in the header).
I've checked my error link and logged the context headers, and it appears to have the token.
I've also tried swapping the SchemaLink for a normal link, with no success.
export default function createApolloClient(
  schema: SchemaLink.Options,
  token: string,
  isTest?: boolean,
) {
  const link = from([
    authLink(token),
    serverErrorLink(),
    ...(__DEV__ ? [logLink(true)] : []),
    new SchemaLink(schema),
  ])

  return new ApolloClient({
    link,
    cache: createCache(),
    ssrMode: true,
    queryDeduplication: true,
    ...(!isTest && {
      defaultOptions: {
        watchQuery: {
          fetchPolicy: 'cache-and-network',
        },
        query: { fetchPolicy: 'cache-first' },
      },
    }),
  })
}
A typical GraphQL operation I'm sending:
query myOperationName {
  user {
    id
    firstName
  }
  query2 {
    id
  }
  query3 {
    id
  }
}
When I do print(document) in the body of my original executor function I am getting
query2{
id
}
etc
So my question is: how, server side, do I construct the correct Apollo client / link chain combo such that operations are not stripped of their operation names? Any additional clarity on whether it's necessary to use the SchemaLink at all, given that my Express server is on a different box from the API it talks to, would also be helpful.
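One thing worth trying (a sketch only, not verified against this exact stitching setup) is to forward whatever operation name the delegated document still carries, so that downstream tracing sees it whenever it survives delegation; getOperationAST comes from graphql. Note that the print output above suggests the name may already be stripped during delegation, in which case this alone won't restore it:

import { getOperationAST, print } from 'graphql'

const executor = async ({ document, variables }) => {
  // Only defined if the delegated document still contains a single named operation
  const operationName = getOperationAST(document, undefined)?.name?.value

  const fetchResult = await crossFetch(`${resolveApiUri(hostname)}/graphql`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authentication-Token': authToken,
    },
    // Forward operationName alongside the query and variables
    body: JSON.stringify({ query: print(document), variables, operationName }),
  })
  return fetchResult.json()
}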

What is the best way to mock ember services that use ember-ajax in ember-cli-storybook to post and fetch data?

I'm using Ember CLI Storybook to create a story of a component that internally relies upon services that communicate with the internet to fetch and post information to the backend. The way I'm doing that is using ember-ajax.
I see how to mock an Ember model from this section, but I'm wondering if there is a workaround for the ember-ajax service.
I like to use mswjs.io for mocking remote requests. It uses a service worker, so you can still use your network log as if you were talking to your real API.
I have an example repo here showing how to set it up: https://github.com/NullVoxPopuli/ember-data-resources/
But I'll copy the code, in case I change something.
Now, in tests, you'd want something like this: https://github.com/NullVoxPopuli/ember-data-resources/blob/main/tests/unit/find-record-test.ts#L17
module('findRecord', function (hooks) {
  setupMockData(hooks);
But since you're using Storybook, you'd instead want the contents of that function (and without the setup/teardown hooks unique to tests).
https://github.com/NullVoxPopuli/ember-data-resources/blob/main/tests/unit/-mock-data.ts#L22
import { rest, setupWorker } from 'msw';

let worker;

export async function setupMockData() {
  if (!worker) {
    worker = setupWorker();
    await worker.start();

    // artificial timeout "just in case" worker takes a bit to boot
    await new Promise((resolve) => setTimeout(resolve, 1000));

    worker.printHandlers();
  }

  let data = [
    { id: '1', type: 'blogs', attributes: { name: `name:1` } },
    { id: '2', type: 'blogs', attributes: { name: `name:2` } },
    { id: '3', type: 'blogs', attributes: { name: `name:3` } },
  ];

  worker.use(
    rest.get('/blogs', (req, res, ctx) => {
      let id = req.url.searchParams.get('q[id]');

      if (id) {
        let record = data.find((datum) => datum.id === id);
        return res(ctx.json({ data: record }));
      }

      return res(ctx.json({ data }));
    }),
    rest.get('/blogs/:id', (req, res, ctx) => {
      let { id } = req.params;
      let record = data.find((datum) => datum.id === id);

      if (record) {
        return res(ctx.json({ data: record }));
      }

      return res(
        ctx.status(404),
        ctx.json({ errors: [{ status: '404', detail: 'Blog not found' }] })
      );
    })
  );
}
Docs for msw: https://mswjs.io/
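To wire this into Storybook itself, one option (a sketch, assuming a standard .storybook/preview.js and that the setup function above lives in an importable ./mock-data module) is to run it from a global loader so the worker is ready before each story renders:

// .storybook/preview.js
import { setupMockData } from './mock-data'; // hypothetical path to the file above

// Loaders run before each story renders, so the msw worker and handlers are
// in place before the component fires its ember-ajax requests.
export const loaders = [
  async () => {
    await setupMockData();
    return {};
  },
];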

Avoid replacing/mixing user contexts in apollo-server datasources during the execution of asynchronous code

This question might be somewhat stupid, but I'd like to make sure not to mess this up:
If you use this.context in your datasource, it's critical to create a new instance in the dataSources function and to not share a single instance. Otherwise, initialize may be called during the execution of asynchronous code for a specific user, and replace the this.context by the context of another user.
Source
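A minimal sketch of the difference the quoted docs are describing, reusing FooAPI, typeDefs, and resolvers from the question below (nothing here beyond what the docs state):

// Recommended: create the instances inside the dataSources function,
// so every request gets its own FooAPI and therefore its own this.context.
const serverPerRequest = new ApolloServer({
  typeDefs,
  resolvers,
  dataSources: () => ({ foo: new FooAPI() }), // fresh instance per request
})

// Warned against: a single shared instance, whose this.context can be
// overwritten while another request's async work is still running.
const sharedFoo = new FooAPI()
const serverShared = new ApolloServer({
  typeDefs,
  resolvers,
  dataSources: () => ({ foo: sharedFoo }), // same instance reused across requests
})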
Question: Is the following appropriate to avoid mixing/replacing user context during execution of async code?
main.js
const server = new ApolloServer({
  typeDefs,
  resolvers,
  dataSources: () => {
    return {
      foo: new FooAPI() // <- is this considered 'create a new instance in the dataSources function'?
    }
  },
  context: (event) => {
    return {
      user_id: event.user_id // <- (provided by auth middleware)
    }
  }
})
MyCustomDatasource.js
class MyCustomDatasource extends DataSource {
  constructor ({ type, tableName, region, apiVersion }) {
    super()
    // db specific stuff
  }

  initialize (config) {
    this.context = config.context // <- safe due to 'new instance in the dataSources function'?
    this.cache = config.cache
    // ...
  }
}
FooAPI.js
class FooAPI extends MyCustomDatasource {
  constructor () {
    super({
      // foo db params
    })
  }

  // Query
  async allByUser () {
    return await super.query({
      KCE: `
        #user_id = :user_id
        AND begins_with(#id, :id)
      `,
      EAN: {
        '#user_id': 'user_id',
        '#id': 'id'
      },
      EAV: {
        ':user_id': this.context.user_id, // <- safe due to 'new instance in the dataSources function'?
        ':id': 'foo'
      },
      Limit: 10
    })
  }
}
foo resolver
{
  Query: {
    foos (parent, args, context, info) {
      const { foo } = context.dataSources
      return foo.allByUser()
    }
  },
}

How can I work with GraphQL mutations?

How can I work with resolvers for mutations after I create the Mutation type in graphql-yoga?
I've tried to create resolvers for mutations, but when I run them in the GraphQL Playground, the code returns an error.
Here's my code:
const { GraphQLServer } = require('graphql-yoga')

// 1
const typeDefs = `
  type Query {
    users: [User!]!
    user(id: ID!): User
  }

  type Mutation {
    createUser(name: String!): User!
  }

  type User {
    id: ID!
    name: String!
  }
`

// 2
const resolvers = {
  Query: {
    users: () => User,
  },
  Mutation: {
    // WHAT SHOULD I WRITE IN HERE?
  },
}

// 3
const server = new GraphQLServer({
  typeDefs,
  resolvers,
})

server.start(() => console.log(`Server is running on http://localhost:4000`))
If someone knows how I can do this for mutation resolvers, can you share it with me?
Thanks
A resolver for createUser can be defined as follows:
const resolvers = {
  Query: {
    // Query resolvers
  },
  Mutation: {
    createUser: (parent, args) => {
      // Business logic. Maybe save the record in a database.
      // Return the created user. I am returning dummy data for now, so that you can test it in the playground.
      return { id: 1, name: "John" }
    }
  }
}
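With that in place, you can exercise the mutation in the playground with something like this (the selection set matches the User type from the question):

mutation {
  createUser(name: "John") {
    id
    name
  }
}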
Finally, it works for me. I used this:
const resolvers = {
  Query: {
    users: () => User,
  },
  Mutation: {
    // The args match the schema above: createUser(name: String!)
    createUser: (source, { name }) => {
      const newUser = {
        id: String(User.length + 1), // simple id generation for this in-memory example
        name,
      };
      User.push(newUser);
      return newUser;
    },
  },
}

Optimistic UI Not Updating - Apollo

After making a mutation the UI does not update with a newly added item until the page is refreshed. I suspect the problem is in the update section of the mutation but I'm not sure how to troubleshoot further. Any advice is much appreciated.
Query (separate file)
//List.js
export const AllItemsQuery = gql`
  query AllItemsQuery {
    allItems {
      id
      name
      type
      room
    }
  }
`;
Mutation
import {AllItemsQuery} from './List'
const AddItemWithMutation = graphql(createItemMutation, {
  props: ({ ownProps, mutate }) => ({
    createItem: ({ name, type, room }) =>
      mutate({
        variables: { name, type, room },
        optimisticResponse: {
          __typename: 'Mutation',
          createItem: {
            __typename: 'Item',
            name,
            type,
            room
          },
        },
        update: (store, { data: { submitItem } }) => {
          // Read the data from the cache for this query.
          const data = store.readQuery({ query: AllItemsQuery });
          // Add the item from the mutation to the end.
          data.allItems.push(submitItem);
          // Write the data back to the cache.
          store.writeQuery({ query: AllItemsQuery, data });
        }
      }),
  }),
})(AddItem);
Looks promising; one thing that is wrong is the name of the mutation result, data: { submitItem }, because in the optimisticResponse you declare it as createItem. Did you console.log it, and what does the mutation look like?
update: (store, {
  data: {
    submitItem // should be createItem
  }
}) => {
  // Read the data from our cache for this query.
  const data = store.readQuery({
    query: AllItemsQuery
  });
  // Add our comment from the mutation to the end.
  data.allItems.push(submitItem); // also here
  // Write our data back to the cache.
  store.writeQuery({
    query: AllItemsQuery,
    data
  });
}
I'm not entirely sure that the problem is with the optimisticResponse function you have above (that is the right approach), but I would guess that you're using the wrong return value. For example, here is a response that we're using:
optimisticResponse: {
  __typename: 'Mutation',
  updateThing: {
    __typename: 'Thing',
    thing: result,
  },
},
So if I had to take a wild guess, I would say that you might want to try using the type within your return value:
optimisticResponse: {
  __typename: 'Mutation',
  createItem: {
    __typename: 'Item',
    item: { // This was updated
      name,
      type,
      room
    }
  },
},
As an alternative, you can just refetch. There have been a few times in our codebase where things just don't update the way we want them to and we can't figure out why, so we punt and just refetch after the mutation resolves (mutations return a promise!). For example:
this.props.createItem({
  ... // variables go here
}).then(() => this.props.data.refetch())
The second approach should work every time. It's not exactly optimistic, but it will cause your data to update.
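A closely related variant of the refetch approach, if you'd rather not chain .then yourself, is to pass refetchQueries to the mutate call (a sketch reusing AllItemsQuery from above):

mutate({
  variables: { name, type, room },
  // Re-run the list query once the mutation has completed
  refetchQueries: [{ query: AllItemsQuery }],
})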