What is the best way to mock ember services that use ember-ajax in ember-cli-storybook to post and fetch data?

I'm using Ember CLI Storybook to create a story for a component that internally relies on services that communicate with a backend to fetch and post information. The way I'm doing that is using ember-ajax.
I see how to mock an Ember model from this section, but I'm wondering if there is a workaround for the ember-ajax service.

I like to use mswjs.io for mocking remote requests. It uses a service worker, so you can still inspect the network log as if you were talking to your real API.
I have an example repo here showing how to set it up: https://github.com/NullVoxPopuli/ember-data-resources/
But I'll copy the code, in case I change something.
Now, in tests, you'd want something like this: https://github.com/NullVoxPopuli/ember-data-resources/blob/main/tests/unit/find-record-test.ts#L17
module('findRecord', function (hooks) {
  setupMockData(hooks);

  // ... your tests ...
});
But since you're using Storybook, you'd instead want the contents of that function (and without the setup/teardown hooks unique to tests).
https://github.com/NullVoxPopuli/ember-data-resources/blob/main/tests/unit/-mock-data.ts#L22
import { rest, setupWorker } from 'msw';

let worker;

export async function setupMockData() {
  if (!worker) {
    worker = setupWorker();

    await worker.start();

    // artificial timeout "just in case" worker takes a bit to boot
    await new Promise((resolve) => setTimeout(resolve, 1000));

    worker.printHandlers();
  }

  let data = [
    { id: '1', type: 'blogs', attributes: { name: `name:1` } },
    { id: '2', type: 'blogs', attributes: { name: `name:2` } },
    { id: '3', type: 'blogs', attributes: { name: `name:3` } },
  ];

  worker.use(
    rest.get('/blogs', (req, res, ctx) => {
      let id = req.url.searchParams.get('q[id]');

      if (id) {
        let record = data.find((datum) => datum.id === id);

        return res(ctx.json({ data: record }));
      }

      return res(ctx.json({ data }));
    }),
    rest.get('/blogs/:id', (req, res, ctx) => {
      let { id } = req.params;
      let record = data.find((datum) => datum.id === id);

      if (record) {
        return res(ctx.json({ data: record }));
      }

      return res(
        ctx.status(404),
        ctx.json({ errors: [{ status: '404', detail: 'Blog not found' }] })
      );
    })
  );
}
Docs for msw: https://mswjs.io/
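For Storybook specifically, a minimal sketch (my assumption, not part of the original answer) would be to start the worker before any story renders, e.g. from your preview file; the ./mock-data module path here is hypothetical:

// .storybook/preview.js (hypothetical path; adjust to wherever you put setupMockData)
import { setupMockData } from './mock-data';

// Start the service worker and register the handlers before stories render.
// A loader or decorator that awaits setupMockData() would also work.
setupMockData();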

Apollo: Using executor function server side removes operation names

I currently have the following code in a codebase using "@apollo/client": "^3.4.17":
const getFrontEndApiSchema = async (authToken: string, hostname: string) => {
  const executor = async ({
    document,
    variables,
  }: Parameters<Parameters<typeof introspectSchema>[0]>[0]) => {
    const fetchResult = await crossFetch(`${resolveApiUri(hostname)}/graphql`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authentication-Token': authToken,
      },
      body: JSON.stringify({ query: print(document), variables }),
    })

    return fetchResult.json()
  }

  return makeExecutableSchema({
    typeDefs: wrapSchema({
      schema: buildClientSchema(await unzipSchema()),
      executor,
    }),
  })
}
export const getSchema = async () => {
  const frontEndSchema = await getFrontEndApiSchema()

  return stitchSchemas({
    subschemas: frontEndSchema ? [frontEndSchema, schema] : [schema],
    mergeDirectives: true,
  })
}

const apolloClient = createApolloClient(
  {
    schema,
    rootValue: { request: req },
  },
  getAuthenticationToken(req),
  false,
)
This works and fires off requests. However, we noticed during a telemetry exercise (where we are trying to track traces through individual operations in DataDog / New Relic) that a single operation is effectively being split up into its constituent queries and sent without its parent operation name.
It's not so clear to me from reading the docs why I would need this executor function for GraphQL queries rather than the standard Apollo link chain (similar to what I'm using for the client-side Apollo client).
So I removed the seemingly unneeded executor function, leaving the following:
makeExecutableSchema({
  typeDefs: wrapSchema({
    schema: buildClientSchema(await unzipSchema()),
  }),
})
This worked insofar as the operations were being made and returned a result; however, it was returning results matching what would be returned if unauthenticated (i.e. no authentication token set in the header).
I've checked my error link and logged the context headers, and it appears to have the token.
I've also tried swapping the SchemaLink for a normal link, with no success.
export default function createApolloClient(
  schema: SchemaLink.Options,
  token: string,
  isTest?: boolean,
) {
  const link = from([
    authLink(token),
    serverErrorLink(),
    ...(__DEV__ ? [logLink(true)] : []),
    new SchemaLink(schema),
  ])

  return new ApolloClient({
    link,
    cache: createCache(),
    ssrMode: true,
    queryDeduplication: true,
    ...(!isTest && {
      defaultOptions: {
        watchQuery: {
          fetchPolicy: 'cache-and-network',
        },
        query: { fetchPolicy: 'cache-first' },
      },
    }),
  })
}
A typical GraphQL operation I'm sending:
query myOperationName {
  user {
    id
    firstName
  }
  query2 {
    id
  }
  query3 {
    id
  }
}
When I do print(document) in the body of my original executor function, I am getting:
query2 {
  id
}
etc.
So my question is: how, server side, do I construct the correct Apollo client / link chain combination such that operations are not stripped of their operation names? Any additional clarity on whether it's necessary to use the SchemaLink at all, given that my Express server is on a different box from the API it talks to, would also be helpful.

Nested resolvers with depth greater than 1

The Problem
Looking at this GraphQL query,
query {
  asset {
    name
    interfaces {
      created
      ip_addresses {
        value
        network {
          name
        }
      }
    }
  }
}
How do I define a resolver for just the network field on ip_addresses?
My First Thought
Reading the docs, they give examples of singly nested queries, e.g.
const resolverMap = {
  Query: {
    author(obj, args, context, info) {
      return find(authors, { id: args.id });
    },
  },
  Author: {
    posts(author) {
      return filter(posts, { authorId: author.id });
    },
  },
};
So I thought - why not just apply this pattern to nested properties?
const resolverMap = {
  Query: {
    asset,
  },
  Asset: {
    interfaces: {
      ip_addresses: {
        network: () => console.log('network resolver called'),
      },
    },
  },
};
But this does not work: when I run the query, I do not see the console log.
Further Testing
I wanted to make sure that a resolver will always be called if it's on the root level of the query's return type.
My hypothesis:
Asset: {
  properties: () => console.log('properties - will be called'), // This will get called
  interfaces: {
    created: () => console.log('created - wont be called'),
    ip_addresses: {
      network_id: () => console.log('network - wont be called'),
    },
  },
},
And sure enough my console showed
properties - will be called
The confusing part
But somehow Apollo is still using default resolvers for created and ip_addresses, as I can see the returned data in Playground.
Workaround
I can implement "monolith" resolvers as follows:
Asset: {
  interfaces,
},
Where the interfaces resolver does something like this:
export const interfaces = ({ interfaces }) =>
  interfaces.map(interfaceObj => ({ ...interfaceObj, ip_addresses: ip_addresses(interfaceObj) }));

export const ip_addresses = ({ ip_addresses }) =>
  ip_addresses.map(ipAddressObj => ({
    ...ipAddressObj,
    network: network(null, { id: ipAddressObj.network_id }),
  }));
But I feel that this should be handled by default resolvers, as these custom resolvers aren't actually doing anything but passing data down to another resolver.
The resolver map passed to the ApolloServer constructor is an object where each property is the name of a type in your schema. The value of this property is another object, wherein each property is a field for that type. Each of those properties then maps to a resolver function for that specified field.
You posted a query without posting your actual schema, so we don't know what any of your types are actually named, but assuming the type returned by the network field is named, for example, Network, your resolver map would need to look something like:
const resolver = {
  // ... other types like Query, IPAddress, etc. as needed
  Network: {
    name: () => 'My network name'
  }
}
You can, of course, introduce a resolver for any field in the schema. If the field returns an object type, you return a JavaScript Object and can let the default resolver logic handle resolving "deeper" fields:
const resolvers = {
  IPAddress: {
    network: () => {
      return {
        name: 'My network name',
      }
    }
  }
}
Or...
const resolvers = {
  Interface: {
    ip_addresses: () => {
      return [
        {
          value: 'Some value',
          network: {
            name: 'My network name',
          },
        },
      ]
    }
  }
}
Where you override the default resolver just depends on the point at which the data returned from your root-level field no longer matches your schema. For a more detailed explanation of the default resolver behavior, see this answer.
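For reference, the default resolver is conceptually just a property lookup on the parent value. A simplified sketch of graphql-js's defaultFieldResolver (paraphrased, not the exact source):

// Roughly what runs for any field without an explicit resolver:
// look the field up on the parent object, calling it if it's a function.
function defaultFieldResolver(source, args, context, info) {
  const value = source[info.fieldName];
  return typeof value === 'function' ? value(args, context, info) : value;
}

This also explains the confusing part above: the nested objects under interfaces never register as resolvers for deeper fields, so created and ip_addresses fall through to this default property lookup, which is why their data still shows up in Playground.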

I'm trying to mock a function from a service but Jest keeps calling the actual function instead of the mock function

I'm using Jest to test a function from a service that uses axios to make some API calls. The problem is that Jest keeps calling the actual service's function instead of the mocked service function. Here is all of the code:
The tests:
// __tests__/NotificationService.spec.js
const mockService = require('../NotificationService').default;

beforeEach(() => {
  jest.mock('../NotificationService');
});

describe('NotificationService.js', () => {
  it('returns the bell property', async () => {
    expect.assertions(1);
    const data = await mockService.fetchNotifications();
    console.log(data);
    expect(data).toHaveProperty('data.bell');
  });
});
The mock:
// __mocks__/NotificationService.js
const notifData = {
  bell: false,
  rollups: [
    {
      id: 'hidden',
      modifiedAt: 123,
      read: true,
      type: 'PLAYLIST_SUBSCRIBED',
      visited: false,
      muted: false,
      count: 3,
      user: {
        id: 'hidden',
        name: 'hidden'
      },
      reference: {
        id: 'hidden',
        title: 'hidden',
        url: ''
      }
    }
  ],
  system: [],
  total: 1
};

export default function fetchNotifications(isResolved) {
  return new Promise((resolve, reject) => {
    process.nextTick(() =>
      isResolved ? resolve(notifData) : reject({ error: 'It threw an error' })
    );
  });
}
The service:
import axios from 'axios';

// hardcoded user guid
export const userId = 'hidden';

// axios instance with hardcoded url and auth header
export const instance = axios.create({
  baseURL: 'hidden',
  headers: {
    Authorization: 'JWT ey'
  }
});

/**
 * Notification Service
 * Call these methods from the Notification Vuex Module
 */
export default class NotificationService {
  /**
   * @GET Gets a list of Notifications for a User
   * @returns {AxiosPromise<any>}
   * @param query
   */
  static async fetchNotifications(query) {
    try {
      const res = await instance.get(`/rollups/user/${userId}`, {
        query: query
      });
      return res;
    } catch (error) {
      console.error(error);
    }
  }
}
I've tried a couple of variations of using require instead of importing the NotificationService, but it gave some other cryptic errors...
I feel like I'm missing something simple.
Help me please :)
The problem is that Jest keeps calling the actual service's function instead of the mocked service function.
babel-jest hoists jest.mock calls so that they run before everything else (even import calls), but the hoisting is local to the code block as described in issue 2582.
I feel like I'm missing something simple.
Move your jest.mock call outside the beforeEach and it will be hoisted to the top of your entire test file, so your mock is returned by require:
const mockService = require('../NotificationService').default; // mockService is your mock...

jest.mock('../NotificationService'); // ...because this runs first

describe('NotificationService.js', () => {
  it('returns the bell property', async () => {
    ...
  });
});
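To illustrate the hoisting (my sketch, not part of the original answer), the corrected file effectively executes in this order after babel-jest's transform:

// What effectively runs once babel-jest hoists the jest.mock call:
jest.mock('../NotificationService');                           // runs first...
const mockService = require('../NotificationService').default; // ...so this resolves to __mocks__/NotificationService.js

With jest.mock inside beforeEach, it was only hoisted to the top of that callback, which runs long after the require at the top of the file has already loaded the real module.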

Mocha tests, clean disk database before every file runs

I am using Sails 1.x.
Is it possible to reset the Sails.js database before each test file runs? I want it to be in the state it's in right after sails.lift() completes, before each run. I followed the docs here - https://sailsjs.com/documentation/concepts/testing - but did not end up with any solution like this.
The only solution I have right now is to change the before and after in lifecycle.test.js to beforeEach and afterEach, so Sails lifts before every test. But it takes a long time to lift.
This is very simple to do. You just need to add a test connection in your connections or datastores config (depending on your Sails.js version), set it as active during the test, and provide the migration strategy 'drop', which rebuilds your DB every time on startup:
models: {
  connection: 'test',
  migrate: 'drop'
},
My connections config (Sails.js 0.12.14):
module.exports.connections = {
  prod: {
    adapter: 'sails-mongo',
    host: 'localhost',
    port: 27017,
    database: 'some-db'
  },
  test: {
    adapter: 'sails-memory'
  },
};
My simplified lifecycle.test.js:
let app;

// Before running any tests...
before(function(done) {
  // Lift Sails and start the server
  const Sails = require('sails').constructor;
  const sailsApp = new Sails();

  sailsApp.lift({
    models: {
      connection: 'test',
      migrate: 'drop'
    },
  }, function(err, sails) {
    app = sails;
    return done(err, sails);
  });
});

// After all tests have finished...
after(async function() {
  // here you can clear fixtures, etc.
  // (e.g. you might want to destroy the records you created above)
  try {
    await app.lower()
  } catch (err) {
    await app.lower()
  }
});
In Sails 1 it's even simpler:
const sails = require('sails');

before((done) => {
  sails.lift({
    datastores: {
      default: {
        adapter: 'sails-memory'
      },
    },
    hooks: { grunt: false },
    models: {
      migrate: 'drop'
    },
  }, (err) => {
    if (err) { return done(err); }
    return done();
  });
});

after(async () => {
  await sails.lower();
});
I'm using this in my bootstrap test file to keep the database clean:
const sails = require('sails');

before((done) => {
  sails.lift({
    log: {
      level: 'silent'
    },
    datastores: {
      default: {
        adapter: 'sails-disk',
        inMemoryOnly: true
      }
    },
    models: {
      migrate: 'drop',
      archiveModelIdentity: false
    }
  }, function (err, sails) {
    if (err) return done(err);
    done(err, sails);
  });
});

after(async () => {
  await sails.lower();
});

beforeEach((done) => {
  // Wait for the ORM to finish reloading before running the next test
  sails.once('hook:orm:reloaded', done);
  // Ask the ORM hook to re-initialize the models
  sails.emit('hook:orm:reload');
});

Unit testing Sails/Waterline models with mocha/supertest: toJSON() issue

I'm setting up unit tests on my Sails application's models, controllers and services.
I stumbled upon a confusing issue while testing my User model. Excerpt of User.js:
module.exports = {
  attributes: {
    username: {
      type: 'string',
      required: true
    },

    [... other attributes...],

    isAdmin: {
      type: 'boolean',
      defaultsTo: false
    },

    toJSON: function() {
      var obj = this.toObject();
      // Don't send back the isAdmin attribute
      delete obj.isAdmin;
      delete obj.updatedAt;
      return obj;
    }
  }
}
Following is my test.js, meant to be run with mocha. Note that I turned on the pluralize flag in the blueprints config. Also, I use sails-ember-blueprints in order to have Ember Data-compliant blueprints, so my request has to look like {user: {...}}.
// Require app factory
var Sails = require('sails/lib/app');
var assert = require('assert');
var request = require('supertest');

// Instantiate the Sails app instance we'll be using
var app = Sails();
var User;

before(function(done) {
  // Lift Sails and store the app reference
  app.lift({
    globals: true,
    // load almost everything but policies
    loadHooks: ['moduleloader', 'userconfig', 'orm', 'http', 'controllers', 'services', 'request', 'responses', 'blueprints'],
  }, function() {
    User = app.models.user;
    console.log('Sails lifted!');
    done();
  });
});

// After Function
after(function(done) {
  app.lower(done);
});

describe.only('User', function() {
  describe('.update()', function() {
    it('should modify isAdmin attribute', function (done) {
      User.findOneByUsername('skippy').exec(function(err, user) {
        if (err) throw new Error('User not found');

        user.isAdmin = false;

        request(app.hooks.http.app)
          .put('/users/' + user.id)
          .send({user: user})
          .expect(200)
          .expect('Content-Type', /json/)
          .end(function() {
            User.findOneByUsername('skippy').exec(function(err, user) {
              assert.equal(user.isAdmin, false);
              done();
            });
          });
      });
    });
  });
});
Before I set up a policy that will prevent write access on User.isAdmin, I expect my user.isAdmin attribute to be updated by this request.
Before running the test, my user's isAdmin flag is set to true. Running the test shows the flag isn't updated:
1) User .update() should modify isAdmin attribute:
Uncaught AssertionError: true == false
This is even more puzzling since the following QUnit test, run client-side, does update the isAdmin attribute, though it cannot tell whether it was updated, since I remove isAdmin from the payload in User.toJSON().
var user;

module( "user", {
  setup: function( assert ) {
    stop(2000);
    // Authenticate with user skippy
    $.post('/auth/local', {identifier: 'skippy', password: 'Guru-Meditation!!'}, function (data) {
      user = data.user;
    }).always(QUnit.start);
  },
  teardown: function( assert ) {
    $.get('/logout', function(data) {
    });
  }
});

asyncTest("PUT /users with isAdmin attribute should modify it in the db and return the user", function () {
  stop(1000);

  user.isAdmin = true;

  $.ajax({
    url: '/users/' + user.id,
    type: 'put',
    data: {user: user},
    success: function (data) {
      console.log(data);
      // I can not test isAdmin value here
      equal(data.user.firstName, user.firstName, "first name should not be modified");
      start();
    },
    error: function (reason) {
      equal(typeof reason, 'object', 'reason for failure should be an object');
      start();
    }
  });
});
In the MongoDB console:
> db.user.find({username: 'skippy'});
{ "_id" : ObjectId("541d9b451043c7f1d1fd565a"), "isAdmin" : false, ..., "username" : "skippy" }
Yet even more puzzling is that commenting out delete obj.isAdmin in User.toJSON() makes the mocha test pass!
So, I wonder:
Is the toJSON() method on Waterline models only used for output filtering, or does it have an effect on write operations such as update()?
Might this issue be related to supertest? Since the jQuery.ajax() in my QUnit test does modify the isAdmin flag, it is quite strange that the supertest request does not.
Any suggestion really appreciated.
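One detail worth knowing for question 2 (a generic JavaScript illustration, not Sails-specific): JSON.stringify calls an object's toJSON() method before serializing, and supertest's .send() serializes its payload as JSON, whereas jQuery's $.ajax with a plain data object form-encodes it via $.param, which does not consult toJSON. So a custom toJSON() can affect outgoing request bodies, not just responses:

// Plain-JS sketch: JSON.stringify serializes through toJSON(), even when nested
var record = {
  username: 'skippy',
  isAdmin: true,
  toJSON: function () {
    return { username: this.username }; // isAdmin stripped, as in User.toJSON()
  }
};

console.log(JSON.stringify(record));           // {"username":"skippy"}
console.log(JSON.stringify({ user: record })); // {"user":{"username":"skippy"}}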