I am new to Jest and unit testing. I have an Express API deployed to AWS Lambda via Serverless, and the API uses DynamoDB for CRUD operations.
Note: my API is built on Express, not plain Node, whereas the Jest website only shows approaches for plain Node.js.
I am able to unit test the Express routes that don't use DynamoDB. However, the tests fail for the routes that do use DynamoDB; as far as I understand, this is because DynamoDB is remote: the code in app.js talks to a DynamoDB table hosted on AWS and accessed through Lambda.
How do I go about it?
Here is my route code:
const isUrl = require('is-url');
const AWS = require('aws-sdk');
const { nanoid } = require('nanoid/async');
const express = require('express');
const router = express.Router();
const dynamoDb = new AWS.DynamoDB.DocumentClient();
// URL from users
router.post('/', async (req, res, next) => {
// urlId contains converted short url characters generated by nanoid
const urlId = await nanoid(8);
const { longUrl } = req.body;
// Verifying URL format using isUrl, which returns a boolean
const checkUrl = isUrl(longUrl);
if (checkUrl === false) {
// return here so we don't keep executing after responding with 400
return res.status(400).json({ error: 'Invalid URL, please try again!!!' });
}
const originalUrl = longUrl;
const userType = 'anonymous'; // user type for anonymous users
const tableName = 'xxxxxxxxxxxxx'; // table name for storing url's
const anonymousUrlCheckParams = {
TableName: tableName,
Key: {
userId: userType,
originalUrl,
},
};
dynamoDb.get(anonymousUrlCheckParams, (err, data) => {
const paramsForTransaction = {
TransactItems: [
{
Put: {
TableName: tableName,
Item: {
userId: userType,
originalUrl,
convertedUrl: `https://xxxxxxxxxxxxxxxx/${urlId}`,
},
},
},
{
Put: {
TableName: tableName,
Item: {
userId: urlId,
originalUrl,
},
ConditionExpression: 'attribute_not_exists(userId)',
},
},
],
};
if (err) {
console.log(err);
res
.status(500)
.json({ error: 'Unknown Server Error, Please Trimify Again!' });
} else if (Object.keys(data).length === 0 && data.constructor === Object) {
dynamoDb.transactWrite(paramsForTransaction, async (error) => {
if (error) {
// an error here means the generated urlId already exists as a userId (the ConditionExpression failed)
console.log(error);
res
.status(500)
.json({ error: 'Unknown Server Error, Please trimify again. ' });
} else {
res.status(201).json({
convertedUrl: `https://xxxxxxxxxxxx/${urlId}`,
});
}
});
} else {
res.status(201).json({
convertedUrl: data.Item.convertedUrl,
});
}
});
});
module.exports = router;
My test.js:
const request = require('supertest');
const app = require('../app');
test('Should convert url from anonymous user ', async () => {
await request(app)
.post('/anon-ops/convert')
.send({
longUrl: 'https://google.com',
})
.expect(201);
});
First off, if you want to do unit testing, it doesn't really matter much whether you're using Express or not, so the examples and information on the Jest website are perfectly valid to get you on your way.
How easy unit testing is depends mostly on how you have structured your code. For example, you could keep all your Express-specific code in separate files and only load the files holding your actual business logic (what some might call a service layer) in your unit tests. That's at least one way to make it easier on yourself. Using a functional approach also makes your code easier to test, or at the very least use dependency injection, so you can swap out dependencies during testing and exercise a piece of functionality in isolation.
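As a rough illustration of that idea (the file and function names here are invented, not from your code), the business logic can live in a small factory that takes the DynamoDB client as a parameter, so a test can hand it a fake:
// urlService.js — business logic only; the DynamoDB client is injected so a test can pass a fake
const isUrl = require('is-url');
function createUrlService(dynamoDb, tableName) {
  return {
    isValidUrl(longUrl) {
      return isUrl(longUrl);
    },
    lookupAnonymousUrl(originalUrl) {
      return dynamoDb
        .get({ TableName: tableName, Key: { userId: 'anonymous', originalUrl } })
        .promise();
    },
  };
}
module.exports = { createUrlService };
A unit test can then call createUrlService({ get: () => ({ promise: async () => ({}) }) }, 'test-table') without ever touching AWS, and the Express router shrinks to a thin layer that translates HTTP into service calls.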
When it comes to DynamoDB, you've got two options: either mocking it or running a local version.
You can mock the specific functions you're calling, either with Jest's own mocks or with a mocking library such as Sinon; which one you choose is mostly personal preference.
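For example, with Jest alone you could mock the whole aws-sdk module. This is only a sketch; the empty get() result is an assumption that sends your route down the transactWrite branch:
// test.js — replace aws-sdk so that when ../app is loaded the router constructs the fake client
jest.mock('aws-sdk', () => ({
  DynamoDB: {
    DocumentClient: jest.fn(() => ({
      // pretend the url has not been stored yet
      get: jest.fn((params, cb) => cb(null, {})),
      // pretend the transactional write succeeds
      transactWrite: jest.fn((params, cb) => cb(null, {})),
    })),
  },
}));
With that in place, the supertest test from the question should reach the 201 branch without any network access.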
The second option is running a local version of DynamoDB in a Docker container. This has the upside of also verifying your actual calls to the DynamoDB service (which you could do by verifying the mocks, but it's easy to make a mistake in that verification). However, it is more cumbersome to set up and your tests will be slower, so this might skew your tests towards integration tests rather than unit tests (though that distinction is an evening's worth of arguing in itself).
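If you go the local route, the official amazon/dynamodb-local Docker image can be started with docker run -p 8000:8000 amazon/dynamodb-local, and the client pointed at it during tests. A sketch, where the environment variable name is just an example:
const AWS = require('aws-sdk');
// point the client at DynamoDB Local when the test environment asks for it
const dynamoDb = new AWS.DynamoDB.DocumentClient(
  process.env.DYNAMODB_LOCAL === 'true'
    ? { region: 'localhost', endpoint: 'http://localhost:8000' }
    : {}
);
You will also need to create the table in the local instance before the tests run.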
If you want to go towards end-to-end testing of the entire API, you can have a look at the SuperTest NPM package.
(Edit) Added a small example using sinon:
const AWS = require('aws-sdk');
const sinon = require('sinon');
const ddb = new AWS.DynamoDB.DocumentClient();
// stub get() on the prototype so every DocumentClient instance uses the fake
const getStub = sinon.stub(AWS.DynamoDB.DocumentClient.prototype, "get");
getStub.callsFake((params, cb) => {
cb(null, {result: []});
});
ddb.get({foo: 'bar'}, (err, val) => {
console.log(val); // => { "result": [] }
})
Related
I'm building an AWS Lambda function and trying to write some integration tests for it. The Lambda function runs locally via the serverless-offline plugin and simply receives a GET request with some query parameters. I'm using Jest and Supertest to write my integration tests as follows:
import request from 'supertest';
describe('User position handler', () => {
it('should return history', () => {
const server = request('http://0.0.0.0:4000');
return server
.get(`/local/position?type=month&period=3`)
.expect(200)
.expect((response) => {
console.log('RESPONSE', response);
expect(response.body).toHaveLength(3);
});
});
});
The problem is that when I run Jest with the collect-coverage option, the code reached by the requests sent with Supertest is not counted in the metrics. Running jest --collectCoverage, the files that are only exercised through those requests show 0% coverage in the report.
I know that, for example, infra/handlers/user-position.ts is being reached and should be covered for more than 0% of its statements, but the coverage metrics don't show that. I also know that user-monthly-position.service.impl.ts is reached at some point in the flow, since this service is responsible for returning data from an external service and the Supertest response does contain data. The green lines are from files covered by unit tests that use only Jest (and not Supertest, obviously).
I know that when using Supertest with the Express framework I can pass an instance of the Express app to the request function. That way, I think, Jest can "inspect" or "instrument" the function call stack to measure coverage (code sample below). But how can I do the same when all I have is the URL of a running serverless-offline Lambda?
const request = require('supertest');
const assert = require('assert');
const express = require('express');
const app = express();
app.get('/user', function(req, res) {
res.status(200).json({ name: 'john' });
});
request(app)
.get('/user')
.expect('Content-Type', /json/)
.expect('Content-Length', '15')
.expect(200)
.end(function(err, res) {
if (err) throw err;
});
Here the code of my handler function:
export default async (event: APIGatewayEvent): Promise<APIGatewayProxyResult> => {
await cacheService.bootstrapCache();
const userMonthlyPositionService = new UserMonthlyPositionServiceImpl(
cacheService.connectionPool,
);
const getUserMonthlyPositionHistory = new GetUserMonthlyPositionHistory(
userMonthlyPositionService,
);
const result = await getUserMonthlyPositionHistory.execute({
cblc: 999999,
period: 3,
type: 'month',
});
return buildResponse(200, result);
};
My question is: how can I collect correct code coverage metrics from Jest when using Supertest and the Serverless Framework? Am I forgetting a detail? Thanks!
I'm working with Next.js server-side rendering and AWS Amplify to fetch data. However, I've hit a roadblock: I'm getting an error saying that there is no current user.
My question is: why does the app need a user at all if the data is supposed to be publicly readable?
What I'm trying to do is show data to the public when they visit a user's profile page, without requiring them to be signed in to the app.
My current folder structure is:
/pages/[user]/index.js with getStaticProps and getStaticPaths:
export async function getStaticPaths() {
const SSR = withSSRContext();
const { data } = await SSR.API.graphql({ query: listUsers });
const paths = data.listUsers.items.map((user) => ({
params: { user: user.username },
}));
return {
fallback: true,
paths,
};
}
export async function getStaticProps({ params }) {
const SSR = withSSRContext();
const { data } = await SSR.API.graphql({
query: postsByUsername,
variables: {
username: params.username,
},
});
return {
props: {
posts: data.postsByUsername.items,
},
};
}
Finally figured it out. A lot of tutorials use the authMode: 'AMAZON_COGNITO_USER_POOLS' (or 'AWS_IAM') parameter in their GraphQL queries, for example in https://docs.amplify.aws/lib/graphqlapi/authz/q/platform/js/:
// Creating a post is restricted to IAM
const createdTodo = await API.graphql({
query: queries.createTodo,
variables: {input: todoDetails},
authMode: 'AWS_IAM'
});
But you rarely come across examples that use authMode: 'API_KEY'.
So it seems that if you want the public to read without authentication, you just need to set authMode: 'API_KEY'.
Make sure you configure your Amplify API to have a public API key as well.
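Applied to the getStaticProps call from the question, it would look something like this (not verified against your schema):
export async function getStaticProps({ params }) {
  const SSR = withSSRContext();
  const { data } = await SSR.API.graphql({
    query: postsByUsername,
    variables: { username: params.username },
    authMode: 'API_KEY', // let unauthenticated visitors read the public data
  });
  return { props: { posts: data.postsByUsername.items } };
}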
I am trying to write unit tests for a Fastify application that also has a custom Fastify plugin.
Is there a way to mock a Fastify plugin? I tried mocking it with Jest and Sinon without much success.
Giorgios' link to the file is broken; the mocks folder is now absent from the master branch. I dug through the commit history to around the time of his answer and found a commit with the folder still there. I leave it here for those who come along in the future!
This is what works for me:
Set up your plugin according to the Fastify docs: https://www.fastify.io/docs/latest/Reference/Plugins/
// establishDbConnection.ts
import fp from 'fastify-plugin';
import {FastifyInstance, FastifyPluginAsync} from 'fastify';
import { initDbConnection } from './myDbImpl';
const establishDbConnection: FastifyPluginAsync = async (fastify: FastifyInstance, opts) => {
fastify.addHook('onReady', async () => {
await initDbConnection()
});
};
export default fp(establishDbConnection);
Mock the plugin with Jest, making sure you wrap the mock function in fp() so that Fastify recognizes it as a plugin:
// myTest.ts
import fp from 'fastify-plugin';
const mockPlugin = fp(async () => jest.fn());
jest.mock('../../../fastifyPlugin/establishDbConnection', () => mockPlugin);
Your question is a bit generic, but if you are using Jest it should be enough for mocking a Fastify plugin. You can take a look at this repo, and more specifically this file. It is a mock file for fastify to which you add the registered plugins (in that specific example addCustomHealthCheck), and then in your test files you can just call jest.mock('fastify').
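Since that file is no longer in the repo, this is roughly what such a manual mock can look like; the method list below is a guess, so mirror whatever your app and plugins actually call:
// __mocks__/fastify.js — picked up when a test calls jest.mock('fastify')
module.exports = jest.fn(() => ({
  register: jest.fn(),
  decorate: jest.fn(),
  addHook: jest.fn(),
  get: jest.fn(),
  post: jest.fn(),
  listen: jest.fn(),
  ready: jest.fn(),
  // e.g. addCustomHealthCheck: jest.fn(),
}));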
You do not give a specific use case, and there are a lot of reasons you might want to mock a plugin; the nature of the plugin to be mocked matters for giving a good answer. Since I don't know that specific information, I will show how to mock a plugin that creates a decorator storing data that can be retrieved via fastify.<decorator-name>. This is a common use case for plugins that connect to databases or hold other widely needed values.
In the below case, the goal is to test a query function that queries a db; a plugin stores the connection information via a fastify decorator. So, in order to unit test the query we specifically need to mock the client data for the connection.
First create an instance of fastify. Next, set up a mock to return the desired fake response. Then, instead of registering the component with fastify (which you could also do), simply decorate the required variables directly with mock information.
Here is the function to be tested. We need to mock a plugin for a database which creates a fastify decorator called db. Specifically, in the below case the function to be tested uses db.client:
const fastify = require("fastify")({ //this is here to gather logs
logger: {
level: "debug",
file: "./logs/combined.log"
}
});
const HOURS_FROM_LOADDATE = "12";
const allDataQuery = `
SELECT *
FROM todo_items
WHERE a."LOAD_DATE" > current_date - interval $1 hour
`;
const queryAll = async (db) => {
return await sendQuery(db, allDataQuery, [HOURS_FROM_LOADDATE]);
};
//send query to db and receive data
const sendQuery = async (db, query, queryParams) => {
var res = {};
try {
const todo_items = await db.client.any(query, queryParams);
res = todo_items;
} catch (e) {
fastify.log.error(e);
}
return res;
};
module.exports = {
queryAll
};
Following is the test case. We will mock db.client from the db plugin:
const { queryAll } = require("../src/query");
// fake db client: db.client.any() returns a canned response
const mockClient = {
any: jest.fn(() => {
return "mock response";
})
};
describe("should return db query", () => {
let fastify_test;
beforeAll(async () => {
// set up a fastify instance for the test
fastify_test = require("fastify")({
logger: {
level: "debug",
file: "./logs/combined.log",
prettyPrint: true
}
});
});
test("test Query All", async () => {
// mock client
const clientPromise = {
client: mockClient
};
//
fastify_test.decorate("db", clientPromise);
const qAll = await queryAll(fastify_test.db);
expect(qAll).toEqual("mock response");
});
});
I'm using firebase-admin for authentication on my express backend. I have a middleware that checks if requests are authenticated.
public resolve(): (req, res, next) => void {
return async (req, res, next) => {
const header = req.header('Authorization');
if (!header || !header.split(' ')[1]) {
throw new HttpException('Unauthorized', UNAUTHORIZED);
}
const token = header.split(' ')[1];
await admin.auth().verifyIdToken(token).then((decodedToken: any) => {
req.user = decodedToken;
next();
}).catch((error: any) => {
throw new HttpException(error, UNAUTHORIZED);
});
};
}
So far, I can only unit test my routes to make sure that they respond UNAUTHORIZED instead of NOT_FOUND.
it('GET /api/menu should return 401 ', done => {
const NOT_FOUND = 404;
const UNAUTHORIZED = 401;
supertest(instance)
.get('/api/menu')
.end((error, response: superagent.Response) => {
expect(response.status).not.toEqual(NOT_FOUND);
expect(response.status).toEqual(UNAUTHORIZED);
done();
});
});
But, I want to write more unit tests than this! I want to mock users so I can make AUTHORIZED requests! I want to use the type property I have on users to verify that users of a certain type can/cannot use certain routes. Does anyone have an idea of how I could do this with firebase-admin-node?
It looks like the firebase-admin-node repo generates tokens for unit tests here, but I'm not sure how I would apply that to my specific problem.
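If you are on Jest, one possible direction (a sketch only; the decoded-token fields such as type, and the initializeApp stub, are assumptions about your setup) is to mock firebase-admin so that verifyIdToken resolves to whatever user you want:
// replace firebase-admin before the app is loaded so the middleware sees a fake decoded token
jest.mock('firebase-admin', () => ({
  initializeApp: jest.fn(), // add whatever else your app calls at startup
  auth: () => ({
    verifyIdToken: jest.fn().mockResolvedValue({ uid: 'test-user', type: 'admin' }),
  }),
}));
it('GET /api/menu passes the auth middleware with a mocked user', done => {
  supertest(instance)
    .get('/api/menu')
    .set('Authorization', 'Bearer fake-token')
    .end((error, response) => {
      expect(response.status).not.toEqual(401);
      done();
    });
});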
I have application code that restricts document access in the following manner:
Docs.allow({
insert: function(userId, doc){
return !!userId
},
update: function(userId, doc){
return userId && doc.owner == userId;
}
})
Currently, I can only run an integration test that makes actual HTTP calls. I am not able to stub the components outside the system under test (the allow/deny rules), such as the current Meteor user.
it("should succeed if user is authenticated", function(done) {
Meteor.loginWithPassword('shawn#abc.com', 'hahaha', function(err){
expect(err).toBe(undefined);
Doc = Docs.insert({title: 'abc',
category: 'Finance'},
function(err, id){
expect(err).toBeUndefined();
expect(id).not.toBeUndefined();
done();
});
});
});
it("should fail if user is not authenticated", function(done) {
Meteor.logout(function(){
doc = Docs.insert({title: 'abc',
category: 'Finance',
owner: '1232131'},
function(err, id){
expect(err).not.toBeUndefined();
done();
});
});
});
This makes my tests incredibly slow, especially when there are many paths I want to test. Is there a way to move these tests down to a lower-level unit test instead?
Building on The Meteor Test Manual's answer: the Stories.allow mock there was defined AFTER the app code had loaded, so it has no effect.
As documented in https://github.com/Sanjo/meteor-jasmine#stubs,
Files in tests/jasmine folder (or a subfolder of it) that end with -stubs.js or -stub.js are treated as stubs and are loaded before the app code.
So to make The Meteor Test Manual's answer work, we have to define the stub/mock in a -stubs.js file. This is what I've done in z-security-stubs.js.
Note that I prefixed the file name with 'z' because Meteor loads files at the same level of a subdirectory in alphabetical order, and we have to ensure that our self-defined stubs load after the automatically generated package-stubs.js and packageMocksSpec.js produced by Velocity.
With that in mind, z-security-stubs.js can contain something like this:
Mongo.Collection.prototype.allow = function(rules){
this._velocityAllow = rules;
}
Mongo.Collection.prototype.deny = function(rules){
this._velocityDeny = rules;
}
This keeps a reference to our allow/deny security rules in a property on the collection instance (e.g. Docs, Files, or whatever your collection is named).
Afterwards, we can refer to the security functions in this property and make assertions:
describe("Docs security rules", function() {
var allow;
beforeEach(function(){
allow = Docs._velocityAllow;
});
it("insert deny access to non-logged in users", function() {
var response = allow.insert(null, {});
expect(response).toBe(false);
});
it("insert allow access to logged in users", function() {
var response = allow.insert(true, {});
expect(response).toBe(true);
});
it("update allow access to logged in users who are owners", function() {
var response = allow.update(2, {owner: 2});
expect(response).toBe(true);
});
it("update deny access to non-logged in users", function() {
var response = allow.update(null, {owner: 2});
// the update rule returns null (not false) when userId is null, so assert falsiness
expect(response).toBeFalsy();
});
it("update deny access to logged in users who are not owners", function() {
var response = allow.update(1, {owner: 2});
expect(response).toBe(false);
});
});
This test exercises the execution paths in the server code, which is where the allow/deny rules live.
The test here does not exercise the integration between the client and the server; that's what your client integration test was already doing quite well.
You can use unit tests to cover all the execution paths in the code and then have fewer integration tests, which gives you the speed you want.
You should push as much as possible down into the lower levels, but not everything: you still want to make sure the integration also works.
describe('Doc security rules', function () {
var _oldAllowRules;
var _allowRules = null;
beforeAll(function () {
// keep hold of the old Docs.allow method so we can restore in the after step
_oldAllowRules = Docs.allow;
// override the Docs.allow method so we can isolate the allow rules code
Docs.allow = function (allowRules) {
_allowRules = allowRules;
};
});
afterAll(function () {
// restore the Docs.allow method
Docs.allow = _oldAllowRules;
});
it('insert deny access to non-logged in users', function () {
// Execute
// you can now exercise the allowRules directly
var response = _allowRules.insert(null, {});
// Verify
expect(response).toBe(false);
});
it('update allows for doc owner', function () {
// you can do it all with one step
expect(_allowRules.update(1234, {owner: 1234})).toBe(true);
});
it('update denies for logged out user', function () {
// the rule returns null for a null userId, so check falsiness rather than strict false
expect(_allowRules.update(null, {})).toBeFalsy();
});
it('update denies for non document owner', function () {
expect(_allowRules.update(1234, {owner: 5678})).toBe(false);
});
});