I'm interested in using Azure's DocumentDB, but I can't see how to sensibly develop against it, run unit tests / integration tests, or have our continuous integration server run against it.
AFAICS there's no way to run a local version of the docdb server; you can only run against a provisioned instance of docdb in Azure.
This means that:
each developer must develop against their own provisioned instance of docdb
each time the developer runs integration tests it's against (their own) remote docdb
continuous integration: I have to assume there's a way to programmatically provision another docdb instance for the build? Even then, the CI server is running against the remote docdb
Any advice on how people are approaching this with docdb would be much appreciated.
You are correct that there is no version of DocumentDB that you can run on your own computers. So, I write unit tests for all stored procedures (sprocs) using documentdb-mock (which runs client-side on node.js). I do test-first design (TDD) with this client-side testing, which has no requirement for connecting to Azure, but it only tests sprocs.
I run a number of other tests on the live Azure platform. In addition to the client-side tests, I test sprocs live with a real documentdb collection. I also test all client-side SDK code (only used for reads as I do all writes in sprocs) on the live system.
I used to have a single collection per developer for live testing, but the fact that each test couldn't guarantee the state of the database meant that some tests failed intermittently, so I switched to creating and deleting a database and collection for each test. It's slightly slower, but not as slow as you would expect. I use nodeunit, and below is my setup and tear-down code. Some points about this code:
I preload all sprocs every time since I use sprocs for all writes. I only use the client-side SDK for reads. You could skip this if you don't use sprocs.
I am using the documentdb-utils WrappedClient because it provides some added functionality (429 retry, better async API, etc.). It's a drop-in replacement for the standard library (although it does not yet support partitioned collections), but you don't need to use it for the example code below to work.
The delay in the tear down was added to fix some intermittent failures that occurred when the collection was removed but some operations were still pending.
Each test file looks like this:
path = require('path')
{DocumentClient} = require('documentdb')
async = require('async')
{WrappedClient, loadSprocs, getLinkArray, getLink} = require('documentdb-utils')

client = null
wrappedClient = null
collectionLinks = null

exports.underscoreTest =

  setUp: (setUpCallback) ->
    urlConnection = process.env.DOCUMENT_DB_URL
    masterKey = process.env.DOCUMENT_DB_KEY
    auth = {masterKey}
    client = new DocumentClient(urlConnection, auth)
    wrappedClient = new WrappedClient(client)
    client.deleteDatabase('dbs/dev-test-database', () ->
      client.createDatabase({id: 'dev-test-database'}, (err, response, headers) ->
        databaseLink = response._self
        client.createCollection(databaseLink, {id: '1'}, {offerType: 'S2'}, (err, response, headers) ->
          collectionLinks = getLinkArray(['dev-test-database'], [1])
          scriptsDirectory = path.join(__dirname, '..', 'sprocs')
          spec = {scriptsDirectory, client, collectionLinks}
          loadSprocs(spec, (err, result) ->
            sprocLink = getLink(collectionLinks[0], 'createVariedDocuments')
            console.log("sprocs loaded for test")
            setUpCallback(err, result)
          )
        )
      )
    )

  test1: (test) ->
    ...
    test.done()

  test2: (test) ->
    ...
    test.done()

  ...

  tearDown: (callback) ->
    f = () ->
      client.deleteDatabase('dbs/dev-test-database', () ->
        callback()
      )
    setTimeout(f, 500)
A local version of DocumentDB is now available: https://learn.microsoft.com/en-us/azure/documentdb/documentdb-nosql-local-emulator
Related
I want to automate and integrate the Selenium WebDriver tests that we developed for a website into an AWS CodeBuild environment. We want to run these tests automatically in AWS CodeBuild and then release (AWS CodeDeploy) if all is good.
For example, we wrote all of our test cases using Node. Let's assume that I have a basic test case like the one below.
After running npm install selenium-webdriver, in a file called google_test.js:
const webdriver = require('selenium-webdriver'),
      By = webdriver.By,
      until = webdriver.until;

const driver = new webdriver.Builder()
    .forBrowser('firefox')
    .build();

driver.get('http://www.google.com');
driver.findElement(By.name('q')).sendKeys('webdriver');

driver.sleep(1000).then(function() {
  driver.findElement(By.name('q')).sendKeys(webdriver.Key.TAB);
});

driver.findElement(By.name('btnK')).click();

driver.sleep(2000).then(function() {
  driver.getTitle().then(function(title) {
    if (title === 'webdriver - Google Search') {
      console.log('Test passed');
    } else {
      console.log('Test failed');
    }
    driver.quit();
  });
});
Then, as you would expect, we run this test on the command line:
node google_test
This works fine in a manual environment;
however, our challenge is to automate this and deploy automatically if the tests were successful.
I wonder how we can achieve this in the AWS CodeBuild setup. Even after doing all the research I'm still confused about the best way to achieve this; many people suggest many different approaches, and they all look very hacky and unreliable.
Problems/Questions
Since we don't have browser access in an automated AWS CodeBuild environment, how can we actually see the output to tell whether the tests were successful?
How can we detect that the tests ran correctly so we can proceed to the next step of CodeDeploy? What signals can be generated, and how?
If this is not possible, what is the recommended way of doing this?
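On the question of signals: AWS CodeBuild treats a build phase as failed when any command in the buildspec exits with a non-zero status, so the usual approach is to make the test script's exit code reflect the result. Below is a minimal sketch (not from the original post, assuming the same google_test.js shown above, which currently only logs the outcome):
// ... same setup and interactions as in google_test.js above ...
driver.sleep(2000).then(function() {
  driver.getTitle().then(function(title) {
    if (title === 'webdriver - Google Search') {
      console.log('Test passed');
    } else {
      console.log('Test failed');
      process.exitCode = 1; // non-zero exit marks the CodeBuild command (and build phase) as failed
    }
    driver.quit();
  });
});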
I have an Azure Function which is triggered by an Azure Service Bus Queue.
The function is below.
How can this Run method be unit tested?
And how can an integration test be done, starting with an AddContact trigger and checking the logic in the method and the data being sent to a blob using the output binding?
public static class AddContactFunction
{
[FunctionName("AddContactFunction")]
public static void Run([ServiceBusTrigger("AddContact", Connection = "AddContactFunctionConnectionString")]string myQueueItem, ILogger log)
{
log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
}
I had the exact same doubts.
Adding unit tests is not too complicated; at the end of the day, it's a function, so all we have to do is call the Azure Function with the correct string for the parameter string myQueueItem.
Adding integration tests needs some additional groundwork. In the GitHub project, the author uses the TestFunctionHost class from the Azure/azure-functions-host project.
I tried following this strategy, but the amount of code needed to set all of this up is uncomfortably high for my liking. Not a lot of it is well documented, and some of it requires developers to use the Azure App Service MyGet feed.
I wanted a simpler approach, and thankfully I found one.
Azure Functions is built on top of the Azure WebJobs SDK package and leverages its JobHost class to run. So in our integration tests, all we need to do is set up this host and tell it where to look for the Azure Functions to load and run.
IHost host = new HostBuilder()
    .ConfigureWebJobs()
    .ConfigureDefaultTestHost<CLASS_CONTAINING_THE_AZURE_FUNCTIONS>(webjobsBuilder => {
        webjobsBuilder.AddAzureStorage();
        webjobsBuilder.AddServiceBus();
    })
    .ConfigureServices(services => {
        services.AddSingleton<INameResolver>(resolver);
    })
    .Build();

using (host) {
    await host.StartAsync();
    // ..
}
...
Once this is done, we can send messages to Service Bus and our Azure Functions will get triggered. One can even set breakpoints in the functions being tested and debug issues!
I have blogged about the whole process here, and I have also created a GitHub repository at this link to showcase test-driven development with Azure Functions.
How can this Run method be unit tested?
The method is a public static method. You can unit test it by invoking the static method AddContactFunction.Run(/* parameters */); you will not need a Service Bus namespace or a message for that matter, as your function expects to receive a string from the SDK, which you can provide to verify the logic works as expected.
And how can an integration test be done, starting with an AddContact trigger and checking the logic in the method and the data being sent to a blob using the output binding?
This would be a much more sophisticated scenario. It would require running the Functions runtime and generating a real Service Bus message to trigger the function, as well as validating that the blob was written. There's no integration/end-to-end testing framework shipped with Functions, so you'd need to come up with something custom. Azure Functions Core Tools could be helpful to achieve that.
I'm currently trying to write a unit test that verifies a certain number of documents exist.
This is what I have so far
test('Login with no account', () async {
  Firestore _firestore = Firestore.instance;
  final QuerySnapshot result = await _firestore
      .collection(UserFirestoreField.Collection)
      .where(UserFirestoreField.EmailAddress, isEqualTo: 'email#example.com')
      .where(UserFirestoreField.Password, isEqualTo: 'wrongpassword')
      .getDocuments();
  final List<DocumentSnapshot> docs = result.documents;
  print(docs);
});
The error I'm getting is
package:flutter/src/services/platform_channel.dart 314:7
MethodChannel.invokeMethod
MissingPluginException(No implementation found for method
Query#getDocuments on channel plugins.flutter.io/cloud_firestore)
I have the android emulator running with my app started.
Every guide I've seen talks about mocking a database, but I want to actually check the real database.
Any way to do this in dart/flutter?
Thanks!
In Flutter, unit and widget tests run on your host machine, which does not have the native part of your Firebase plugin. This is why you are getting this error.
You really should mock the database in tests, but if you really want to test your app as closely as possible to how it is run by a user, you would run an integration test on an emulator.
You can also use a dart based Firebase plugin or use the Firebase REST API.
You can find more about this here: https://flutter.dev/docs/testing
You could jsonDecode into a local map and test the map.
I have a REST API server that uses Express, Mongoose, and config. I want to unit test my API. Basically: bring up a temporary web server on port X and an empty Mongo database on port Y, do some API calls (both GETs and PUTs), validate what I get back, and then shut down the temporary server and drop the test database once my tests finish. What is the best way to do this? I have been looking at mocha/rewire but am not sure how to set up the temporary server and database, and not sure what the best practices are.
I use Jenkins (a continuous integration server) and Mocha to test my app, and I found myself having the same problem as you. I configured Jenkins so it would execute these shell commands:
npm install
NODE_ENV=testing node app.js &
npm test   # assumes a "test" script in package.json that runs mocha
pkill node
This runs the server for executing the tests, and then kills it. This also sets the NODE_ENV environment variable so I can run the server on a different port when testing, since Jenkins already uses port 8080.
Here is the code:
app.js:
...
var port = 8080
if (process.env.NODE_ENV === "testing")
  port = 3000;
...
test.js:
var request = require('request'),
    assert = require('assert');

describe('Blabla', function() {
  describe('GET /', function() {
    it("should respond with status 200", function(done) {
      request('http://127.0.0.1:3000/', function(err, resp, body) {
        assert.equal(resp.statusCode, 200);
        done();
      });
    });
  });
});
I found exactly what I was looking for: testrest. I didn't like its .txt-file-based syntax, so I adapted it to use a .json file instead for my specs.
I'd recommend giving Buster.JS a try. You can do Asynchronous tests, mocks/stubs, and fire up a server.
There is also api-easy, built on top of vows. The first seems easier to use, but the second is much more flexible and powerful.
There is no single right way, but I did create a seed application for my preferred directory structure, and it includes the vows tests suggested by @norman784.
You can clone it: git clone https://github.com/hboylan/express-mongoose-api-seed.git
Or with npm: npm install express-mongoose-api-seed
I'm trying to figure out how to shut down an instance of Express. Basically, I want the inverse of the .listen(port) call - how do I get an Express server to STOP listening, release the port, and shutdown cleanly?
I know this seems like it might be a strange query, so here's the context; maybe there's another way to approach this and I'm thinking about it the wrong way. I'm trying to setup a testing framework for my socket.io/nodejs app. It's a single-page app, so in my testing scripts (I'm using Mocha, but that doesn't really matter) I want to be able to start up the server, run tests against it, and then shut the server down. I can get around this by assuming that either the server is turned on before the test starts or by having one of the tests start the server and having every subsequent test assume it's up, but that's really messy. I would much prefer to have each test file start a server instance with the appropriate settings and then shut that instance down when the tests are over. That means there's no weird dependencies to running the test and everything is clean. It also means I can do startup/shutdown testing.
So, any advice about how to do this? I've thought about manually triggering exceptions to bring it down, but that seems messy. I've dug through Express docs and source, but can't seem to find any method that will shut down the server. There might also be something in socket.io for this, but since the socket server is just attached to the Express server, I think this needs to happen at the express layer.
Things have changed because the express server no longer inherits from the node http server. Fortunately, app.listen returns the server instance.
var server = app.listen(3000);
// listen for an event
var handler = function() {
  server.close();
};
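For the testing use case in the question, a common pattern (a sketch only, assuming Mocha and an app.js module that exports the Express app without calling listen itself) is to start the server in a before hook and close it in an after hook:
var app = require('./app'); // hypothetical module that exports the Express app
var server;

before(function(done) {
  // start listening before any test runs; done() fires once the port is bound
  server = app.listen(3000, done);
});

after(function(done) {
  // stop accepting connections and release the port when the suite finishes
  server.close(done);
});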
Use app.close(). Full example:
var app = require('express').createServer();

app.get('/', function(req, res) {
  res.send('hello world');
});

app.get('/quit', function(req, res) {
  res.send('closing..');
  app.close();
});

app.listen(3000);
Call app.close() inside the callback when the tests have ended. But remember that the process is still running (though it is not listening anymore).
If after this, you need to end the process, then call process.exit(0).
Links:
app.close: http://nodejs.org/docs/latest/api/http.html#server.close (the same applies for Express)
process.exit:
http://nodejs.org/docs/latest/api/process.html#process.exit
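Put together, the teardown described above might look like this (a sketch, assuming the old Express 2.x API where the app itself is the server, run inside whatever callback fires when the tests have ended):
app.close();       // stop listening; the process itself keeps running
process.exit(0);   // only needed if something still keeps the process alive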
//... some stuff
var server = app.listen(3000);
server.close();
I have answered a variation of "how to terminate an HTTP server" many times on different node.js support channels. Unfortunately, I couldn't recommend any of the existing libraries because they are lacking in one way or another. I have since put together a package that (I believe) handles all the cases expected of graceful HTTP server termination.
https://github.com/gajus/http-terminator
The main benefit of http-terminator is that:
it does not monkey-patch Node.js API
it immediately destroys all sockets without an attached HTTP request
it allows graceful timeout to sockets with ongoing HTTP requests
it properly handles HTTPS connections
it informs connections using keep-alive that the server is shutting down by setting a connection: close header
it does not terminate the Node.js process
Usage with Express.js:
import express from 'express';
import {
  createHttpTerminator,
} from 'http-terminator';

const app = express();
const server = app.listen();

const httpTerminator = createHttpTerminator({
  server,
});

await httpTerminator.terminate();
More recent versions of Express support this solution:
const server = app.listen(port);
const shutdown = () => {
  server.close();
};
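As a usage sketch (an assumption, not part of the original answer), the shutdown function can then be wired to process signals or to a test framework's teardown hook:
// invoke the shutdown() defined above when the process is asked to stop
process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);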
You can easily do this by writing a bash script to start the server, run the tests, and stop the server. This has the advantage of letting you alias that script to run all your tests quickly and easily.
I use such scripts for my entire continuous deployment process. You should look at Jon Rohan's Dead Simple Git Workflow for some insight on this.