Use Cloud SDK in unit tests

I followed the guide here - https://sailsjs.com/documentation/concepts/testing
So my lifecycle.test.js looks like this:
var sails = require('sails');

// Before running any tests...
before(function(done) {
  // Increase the Mocha timeout so that Sails has enough time to lift, even if you have a bunch of assets.
  this.timeout(5000);

  sails.lift({
    // Your sails app's configuration files will be loaded automatically,
    // but you can also specify any other special overrides here for testing purposes.
    // For example, we might want to skip the Grunt hook,
    // and disable all logs except errors and warnings:
    hooks: { grunt: false },
    log: { level: 'warn' },
  }, function(err) {
    if (err) { return done(err); }
    // here you can load fixtures, etc.
    // (for example, you might want to create some records in the database)
    return done();
  });
});

// After all tests have finished...
after(function(done) {
  // here you can clear fixtures, etc.
  // (e.g. you might want to destroy the records you created above)
  sails.lower(done);
});
However, in my tests I am not able to use Cloud. I thought it would be available to me, because the test suites here use Cloud - https://github.com/mikermcneil/ration/tree/master/test/suites
Do I have to modify my lifecycle.test.js to use Cloud?
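For reference, my guess is that the modification would look something like the sketch below, assuming cloud.js exposes Cloud.setup() the way parasails' Cloud SDK does (the require path and the method definition are guesses on my part, not actual code from ration):

// In lifecycle.test.js, inside the sails.lift() callback, before done():
var CloudSDK = require('../assets/dependencies/cloud'); // hypothetical path to cloud.js
CloudSDK.setup({
  apiBaseUrl: 'http://localhost:1337', // the locally-lifted test server
  methods: {
    // One entry per cloud action the tests call -- hypothetical example:
    doSomething: { verb: 'POST', url: '/api/v1/do-something', args: ['foo'] }
  }
});
global.Cloud = CloudSDK; // so every suite can call Cloud.doSomething(...)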

Related

Clean up test fixtures

I have a CloudFormation stack that defines a GraphQL API powered by DynamoDB. I would like to run a test script that:
Creates a standard set of fixtures.
Runs various tests, including creating, modifying, and deleting data.
Deletes all of the fixtures and any other objects created during the test.
The “clean way” to do this would be to create a new stage for the tests, but this is extremely time-consuming (in terms of wall-clock time spent waiting for the result).
The “hard way” would be to keep precise track of every DynamoDB record created during the testing process and then delete them afterward one by one (and/or using many batch updates). This would be a huge pain to code, and the likelihood of error is very high.
An intermediate approach would be to use a dedicated pre-existing stage for integration tests, wipe it clean at the end of the tests, and make sure that only one set of tests is running at a time. This would require writing a script to manually clear out the tables, which sounds moderately less tedious and error-prone than the “hard way”.
Is there an established best practice for this? Are there other approaches I haven't considered?
How long does it take to deploy the stack?
If it is only a small fraction of the time it takes to run the tests, use the "clean way"; otherwise use the intermediate approach of keeping a dedicated test stack already deployed.
You don't have to write the scripts yourself. I actually wrote a testing library for exactly this purpose:
https://github.com/erezrokah/aws-testing-library/blob/master/src/jest/README.md#tohaveitem
Usage example (TypeScript):
import { clearAllItems } from 'aws-testing-library/lib/utils/dynamoDb';
import { invoke } from 'aws-testing-library/lib/utils/lambda';

describe('db service e2e tests', () => {
  const region = 'us-east-1';
  const table = 'db-service-dev';

  beforeEach(async () => {
    await clearAllItems(region, table);
  });

  afterEach(async () => {
    await clearAllItems(region, table);
  });

  test('should create db entry on lambda invoke', async () => {
    const result = await invoke(region, 'db-service-dev-create', {
      body: JSON.stringify({ text: 'from e2e test' }),
    });
    const lambdaItem = JSON.parse(result.body);

    expect.assertions(1);
    await expect({ region, table, timeout: 0 }).toHaveItem(
      { id: lambdaItem.id },
      lambdaItem,
    );
  });
});
If you do write the scripts yourself, you might need to account for eventual consistency and retry, since the data might not be available immediately after a write.
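For instance, a small polling helper along these lines can absorb that delay (a sketch using the AWS SDK for JavaScript v2; the retry counts are arbitrary):

const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

// Poll until the item shows up, to cope with DynamoDB's eventual consistency.
async function waitForItem(tableName, key, retries = 5, delayMs = 500) {
  for (let attempt = 0; attempt < retries; attempt++) {
    const { Item } = await db.get({ TableName: tableName, Key: key }).promise();
    if (Item) { return Item; }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Item ' + JSON.stringify(key) + ' not found after ' + retries + ' attempts');
}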
I'd simply delete this "test stack" at the end of the test and let CloudFormation clean up DynamoDB for you -- check the DeletionPolicy documentation.
You might want to trigger/hook stack deletion from your CI environment, whatever it is. As an example, I found this CodePipeline walkthrough: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-basic-walkthrough.html
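As a rough illustration, the CI teardown step could drive the deletion through the AWS SDK (a sketch; the stack name is hypothetical):

const AWS = require('aws-sdk');
const cf = new AWS.CloudFormation({ region: 'us-east-1' });
const StackName = 'my-integration-test-stack'; // hypothetical

// Kick off deletion, then block until CloudFormation reports it complete.
async function tearDownStack() {
  await cf.deleteStack({ StackName }).promise();
  await cf.waitFor('stackDeleteComplete', { StackName }).promise();
}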

Disable parse server logger during unit test of cloud code

Hi all. I'm trying to implement an app that uses Parse Server as its backend, and I'm trying to use mocha/chai to unit test the cloud code functions, like the code below.
const { expect } = require('chai');
const { server } = require('../index.js');
const Parse = require('parse/node');

let loggedUser;
let loggedUserSessionToken;

describe('SMS APIs', function() {
  before('Initialize parse server.', function(done) {
    Parse.initialize("appId");
    Parse.serverURL = 'http://localhost:1337/parse';
    done();
  });

  after('Close server', function(done) {
    server.close(done);
  });

  it('Pass', function(done) {
    expect(1).to.equal(1);
    done();
  });
});
After I run yarn mocha, the command line shows lots of log messages, which makes it hard to read the mocha test results (see the picture below). Is there any method to turn off the Parse logger?
[Image: command-line logger output]
Take a look at how the parse-server repo does it: helper.js
The key is to set silent: true in the parse-server configuration.
I do this by using the wonderful config package, creating a test.js config that sets silent to true, and then setting NODE_ENV=test when running my unit tests. It sounds like a lot to do, but this pattern is commonly reused for many things. Good luck!
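For reference, a minimal sketch of where that flag lives, following the classic parse-server Express example (the keys and databaseURI are placeholders):

const express = require('express');
const { ParseServer } = require('parse-server');

const api = new ParseServer({
  appId: 'appId',
  masterKey: 'masterKey', // placeholder
  serverURL: 'http://localhost:1337/parse',
  databaseURI: 'mongodb://localhost:27017/test', // placeholder
  silent: process.env.NODE_ENV === 'test', // quiet the logger under test
});

const app = express();
app.use('/parse', api);
const server = app.listen(1337);
module.exports = { server };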

Ember - using api/server data

I am new to Ember and I like what I see so far. I have done the tutorial and found it to be pretty easy to get something up and running. My question has to do with using Mirage vs. real data. I used Mirage to stub in some data, but now I would like to switch to real data. I would think this should not be too hard, since I have the models etc. set up; I just need to call an API instead of Mirage. I have not seen a clean example of how best to do this.
Thanks
You can turn Mirage on/off for all requests per environment in config/environment.js, e.g. off for development, on for testing:
// config/environment.js
if (environment === 'development') {
  ENV['ember-cli-mirage'] = {
    enabled: false
  };
}

if (environment === 'test') {
  ENV['ember-cli-mirage'] = {
    enabled: true
  };
}
Or, if you leave Mirage on for everything, allow specific endpoints through with passthrough:
http://www.ember-cli-mirage.com/docs/v0.3.x/configuration/#passthrough
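A short sketch of what that looks like (the endpoint paths are hypothetical):

// mirage/config.js
export default function() {
  // Requests to these endpoints bypass Mirage and hit the real server:
  this.passthrough('/api/posts');
  this.passthrough('/api/comments');
  // Or let every request on the current domain through:
  // this.passthrough();
}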

Faking a module in angular 2 test

I have a function in an Angular 2 service which I would like to test.
service.ts
upload() {
  let file = new Transfer();
  file.upload(myfile).then(/* my callback */);
}
I would like to mock Transfer in my test using Jasmine. I tried this in my
service.spec.ts
import { TransferMock as Transfer } from '../mocks/mocks' to mock it, but it is not working. This is how my test is instantiated:
describe('authentication service', () => {
  beforeEach(() => {
    auth = new Service(<any>new HttpMock(), <any>new StorageMock());
  });

  it('initialize authentication', () => {
    expect(auth).not.toEqual(null);
    auth.upload('file'); // it fails here
  });
});
Edit
Transfer is not injected into the service; only one function uses Transfer, so not injecting it can reduce the initial loading time of the app, I guess (I would be happy to hear other opinions). So I would like to know if there is any way to mock it when it's constructed this way.
Edit
Although I had accepted Martin's answer, since it is the best practice, it has one issue which can come up when you use ionic-native plugins: if the plugin doesn't have browser support, it can fail. In this case it happened when I injected it, with the error FileTransfer is not defined. So I am back again, looking for suggestions.
In order to provide a mock for a class in a test, you need to inject the class into your implementation.
In your NgModule, add Transfer to your providers, then simply inject it into your service.
Then in your test you can use { provide: Transfer, useClass: TransferMock } in your TestBed providers.
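A hedged sketch of that setup, reusing the HttpMock and StorageMock names from the question (Service, Http, and Storage stand in for your actual classes):

import { TestBed } from '@angular/core/testing';

describe('authentication service', () => {
  let auth: Service;

  beforeEach(() => {
    TestBed.configureTestingModule({
      providers: [
        Service,
        { provide: Transfer, useClass: TransferMock },
        { provide: Http, useClass: HttpMock },
        { provide: Storage, useClass: StorageMock },
      ],
    });
    auth = TestBed.get(Service);
  });

  it('initialize authentication', () => {
    expect(auth).not.toEqual(null);
    auth.upload('file'); // now runs against TransferMock
  });
});

This assumes the service's constructor takes Transfer as a parameter (constructor(private transfer: Transfer) {}) rather than calling new Transfer() itself.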
Update
The primary purpose of Dependency Injection is to make code testable and to allow mocking - faking - stubbing of services.
Update
With Dependency Injection you can configure a different set of providers for different environments.
For example, if you are running your application in the browser versus in a native mobile environment, you can swap out your configuration.
In your module you could have something like this:
let TRANSFER_PROVIDER: any;

if (environment.browser) {
  TRANSFER_PROVIDER = Transfer;
} else {
  TRANSFER_PROVIDER = { provide: Transfer, useClass: NativeTransfer };
}

...

providers: [ TRANSFER_PROVIDER ]
NativeTransfer could be a simple stub that does nothing but prevent errors, or it could let the user know that this feature is not supported in their browser.
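Such a stub could be as small as this (a sketch; the upload signature simply mirrors how the service uses Transfer):

class NativeTransfer {
  upload(file: any): Promise<any> {
    // Do nothing, but resolve so callers' .then() chains still run.
    console.warn('File transfer is not supported in this environment.');
    return Promise.resolve(null);
  }
}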

Unit test a polymer web component that uses firebase

I have been trying to configure offline unit tests for polymer web components that use the latest release of Firebase distributed database. Some of my tests are passing, but others—that look nigh identical to passing ones—are not running properly.
I have set up a project on github that demonstrates my configuration, and I'll provide some more commentary below.
Sample:
https://github.com/doctor-g/wct-firebase-demo
In that project, there are two suites of tests that work fine. The simplest is offline-test, which doesn't use web components at all. It simply shows that it's possible to use the firebase database's offline mode to run some unit tests. The heart of this trick is in the suiteSetup method shown below—a trick I picked up from nfarina's work on firebase-server.
suiteSetup(function() {
  app = firebase.initializeApp({
    apiKey: 'fake',
    authDomain: 'fake',
    databaseURL: 'https://fakeserver.firebaseio.com',
    storageBucket: 'fake'
  });
  db = app.database();
  db.goOffline();
});
All the tests in offline-test pass.
The next suite is wct-firebase-demo-app_test.html, which tests the eponymous web component. This suite contains a series of unit tests that are set up like offline-test and that pass. Following the idea of dependency injection, the wct-firebase-demo-app component has a database attribute into which the firebase database reference is passed, and this is used to make all the firebase calls. Here's an example from the suite:
test('offline set string from web component attribute', function(done) {
  element.database = db;
  element.database.ref('foo').set('bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val(), 'bar');
    done();
  });
});
I have some very simple methods in the component as well, in my attempt to triangulate toward the broken pieces I'll talk about in a moment. Suffice it to say that this test passes:
test('offline push string from web component function', function(done) {
  element.database = db;
  let resultRef = element.pushIt('foo', 'bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val()[resultRef.key], 'bar');
    done();
  });
});
and is backed by this implementation in wct-firebase-demo-app:
pushIt: function(at, value) {
  return this.database.ref(at).push(value);
},
Once again, these all pass. Now we get to the real quandary. There's a suite of tests for another element, x-element, which has a method pushData:
pushData: function(at, data) {
  this.database.ref(at).push(data);
}
The test for this method is the only test in its suite:
test('pushData has an effect', function(done) {
  element.database = db;
  element.pushData('foo', 'xyz');
  db.ref('foo').once('value', function(snapshot) {
    expect(snapshot.val()).not.to.be.empty;
    done();
  });
});
This test does not pass. While this test is running, the console comes up with an error message:
Your API key is invalid, please check you have copied it correctly.
By setting some breakpoints and walking through the execution, it seems to me that this error comes up after the call to once but before the callback is triggered. Note, again, this doesn't happen with the same test structure described above that's in wct-firebase-demo-app.
That's where I'm stuck. Why do offline-test and wct-firebase-demo-app_test suites work fine, but I get this API key error in x-element_test? The only other clue I have is that if I copy in a valid API key into my initializeApp configuration, then I get a test timeout instead.
UPDATE:
Here is a (patched-together) image of my console log when running the tests:
[Image: console log of the full test run]
To illustrate the issue brought up by tony19 below, here's the console log with just pushData has an effect in x-element_test commented out:
[Image: console log with that test commented out]
The offline-test results are apparently false positives. If you check the Chrome console, offline-test actually throws the same error.
The error doesn't affect the test results, most likely because the API key validation occurs asynchronously after the test has already completed. If you could somehow hook into that validation, you'd be able to catch the error in your tests.
Commenting out all tests except for offline firebase is ok shows the error still occurring, which points to suiteSetup(). Narrowing the problem down further by commenting out 2 of the 3 function calls in the setup, we see that the error is caused by the call to firebase.initializeApp() (and not necessarily related to once() as you had suspected).
One workaround to consider is wrapping the Firebase library in a class/interface, and mocking that for unit tests.
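A minimal sketch of that idea, assuming the components only need push and a one-shot read (the names here are illustrative, not part of the demo repo):

// Thin wrapper around the Firebase calls the components actually use.
function DatabaseService(db) {
  this._db = db; // a real firebase database instance in production
}
DatabaseService.prototype.push = function(at, value) {
  return this._db.ref(at).push(value);
};
DatabaseService.prototype.once = function(at, callback) {
  return this._db.ref(at).once('value', callback);
};

// In tests, a hand-rolled fake with the same surface replaces it,
// so firebase.initializeApp() is never called at all.
function FakeDatabaseService() {
  this._store = {};
  this._nextKey = 0;
}
FakeDatabaseService.prototype.push = function(at, value) {
  var key = 'key' + (this._nextKey++);
  this._store[at] = this._store[at] || {};
  this._store[at][key] = value;
  return { key: key };
};
FakeDatabaseService.prototype.once = function(at, callback) {
  var data = this._store[at] || null;
  callback({ val: function() { return data; } });
};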