I'm currently trying to write a unit test that verifies a certain number of documents exist.
This is what I have so far
test('Login with no account', () async {
  Firestore _firestore = Firestore.instance;
  final QuerySnapshot result = await _firestore
      .collection(UserFirestoreField.Collection)
      .where(UserFirestoreField.EmailAddress, isEqualTo: 'email@example.com')
      .where(UserFirestoreField.Password, isEqualTo: 'wrongpassword')
      .getDocuments();
  final List<DocumentSnapshot> docs = result.documents;
  print(docs);
});
The error I'm getting is
package:flutter/src/services/platform_channel.dart 314:7
MethodChannel.invokeMethod
MissingPluginException(No implementation found for method
Query#getDocuments on channel plugins.flutter.io/cloud_firestore)
I have the Android emulator running with my app started.
Every guide I've seen talks about mocking a database, but I want to actually check the real database.
Any way to do this in dart/flutter?
Thanks!
In Flutter, unit and widget tests run on your host machine, which does not have the native part of the Firebase plugin. This is why you are getting this error.
You really should mock the database in tests, but if you want to test your app as close as possible to how a user runs it, you should run an integration test on an emulator or real device.
You can also use a Dart-based Firebase plugin or the Firebase REST API.
You can find more about this here: https://flutter.dev/docs/testing
You could jsonDecode the data into a local map and test the map.
Related
I want to automate and integrate the Selenium WebDriver tests that we developed for a website into an AWS CodeBuild environment. We want to run these tests automatically in AWS CodeBuild and then release (AWS CodeDeploy) if all is good.
For example, we wrote all of our test cases using Node. Let's assume I have a basic test case like the one below.
npm install selenium-webdriver
In a file called google_test.js:
const webdriver = require('selenium-webdriver'),
    By = webdriver.By,
    until = webdriver.until;

const driver = new webdriver.Builder()
    .forBrowser('firefox')
    .build();

driver.get('http://www.google.com');
driver.findElement(By.name('q')).sendKeys('webdriver');

driver.sleep(1000).then(function() {
    driver.findElement(By.name('q')).sendKeys(webdriver.Key.TAB);
});

driver.findElement(By.name('btnK')).click();

driver.sleep(2000).then(function() {
    driver.getTitle().then(function(title) {
        if (title === 'webdriver - Google Search') {
            console.log('Test passed');
        } else {
            console.log('Test failed');
        }
        driver.quit();
    });
});
Then, as you would expect, we run this test from the command line:
node google_test
This works fine in a manual environment. However, our challenge is to automate this and deploy automatically if the tests were successful, and I wonder how we can achieve this in the AWS CodeBuild setup. Even after doing all the research I'm still confused about the best way to achieve this; many people suggest many different approaches, and they all look very hacky and unreliable.
Problems/Questions
Since we don't have browser access in an automated AWS CodeBuild environment, how can we actually see the output to check whether the tests were successful?
How can we detect that the tests ran correctly in order to proceed to the next step of CodeDeploy? What signals can be generated, and how?
If this is not possible, what is the recommended way of doing this?
I have an Azure Function which is triggered by an Azure Service Bus Queue.
The function is below.
How can this Run method be unit tested?
And how can an integration test be done, starting with the AddContact trigger, checking the logic in the method and the data being sent to a blob via the output binding?
public static class AddContactFunction
{
    [FunctionName("AddContactFunction")]
    public static void Run([ServiceBusTrigger("AddContact", Connection = "AddContactFunctionConnectionString")] string myQueueItem, ILogger log)
    {
        log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}
I had the exact same doubts.
Adding unit tests is not too complicated; at the end of the day it's just a function, so all we have to do is call the Azure Function with the correct string for the string myQueueItem parameter.
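For example, here is a minimal sketch of such a test, assuming xUnit and the Microsoft.Extensions.Logging.Abstractions package (the test name and payload are made up, not from the original post):

using Microsoft.Extensions.Logging.Abstractions;
using Xunit;

public class AddContactFunctionTests
{
    [Fact]
    public void Run_ProcessesAQueueMessage()
    {
        // The trigger payload is just a string, so we can pass one in directly.
        var myQueueItem = "{ \"email\": \"user@example.com\" }";

        // NullLogger satisfies the ILogger parameter without any mocking framework.
        AddContactFunction.Run(myQueueItem, NullLogger.Instance);
    }
}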
Adding integration tests needs some additional groundwork. In the GitHub project, the author uses the TestFunctionHost class from the Azure/azure-functions-host project.
I tried following this strategy, but the amount of code needed to set all of this up is uncomfortably high for my liking. Not a lot of it is well documented, and some of it requires developers to use the Azure App Service MyGet feed.
I wanted a simpler approach, and thankfully I found one.
Azure Functions is built on top of the Azure WebJobs SDK package and leverages its JobHost class to run. So in our integration tests, all we need to do is set up this host and tell it where to look for the Azure Functions to load and run.
IHost host = new HostBuilder()
    .ConfigureWebJobs()
    .ConfigureDefaultTestHost<CLASS_CONTAINING_THE_AZURE_FUNCTIONS>(webjobsBuilder => {
        webjobsBuilder.AddAzureStorage();
        webjobsBuilder.AddServiceBus();
    })
    .ConfigureServices(services => {
        services.AddSingleton<INameResolver>(resolver);
    })
    .Build();

using (host) {
    await host.StartAsync();
    // ..
}
...
Once this is done, we can send messages to Service Bus and our Azure Functions will get triggered. One can even set breakpoints in the functions being tested and debug issues!
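To trigger the function while the host is running, the test just needs to drop a message onto the queue. Here is a rough sketch, assuming the Azure.Messaging.ServiceBus client package and a real AddContact queue (the connection string variable and payload are placeholders):

using Azure.Messaging.ServiceBus;  // at the top of the test file

// Inside the test, after host.StartAsync():
await using (var client = new ServiceBusClient(serviceBusConnectionString))
{
    ServiceBusSender sender = client.CreateSender("AddContact");
    await sender.SendMessageAsync(new ServiceBusMessage("{ \"name\": \"test contact\" }"));
}
// ...then assert on whatever side effect the function is expected to produce.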
I have blogged about the whole process here, and I have also created a GitHub repository at this link to showcase test-driven development with Azure Functions.
How can this Run method be unit tested?
The method is a public static method, so you can unit test it by invoking the static method AddContactFunction.Run(/* parameters */). You will not need a Service Bus namespace or a message for that matter, as your function expects to receive a string from the SDK, which you can provide yourself and then verify that the logic works as expected.
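One way to do that verification is with a tiny fake logger that records what the function writes; the ListLogger class and the assertion below are illustrative additions, not part of the original answer:

using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

// Captures every log message so the test can assert on it.
class ListLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();

    public IDisposable BeginScope<TState>(TState state) => null;
    public bool IsEnabled(LogLevel logLevel) => true;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
                            Exception exception, Func<TState, Exception, string> formatter)
        => Messages.Add(formatter(state, exception));
}

// In a test:
// var logger = new ListLogger();
// AddContactFunction.Run("hello queue", logger);
// Assert.Contains("hello queue", logger.Messages[0]);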
And how can an integration test be done, starting with the AddContact trigger, checking the logic in the method and the data being sent to a blob via the output binding?
This would be a much more sophisticated scenario. It would require running the Functions runtime and generating a real Service Bus message to trigger the function, as well as validating that the blob was written. There is no integration/end-to-end testing framework shipped with Functions, so you'd need to come up with something custom. Azure Functions Core Tools could be helpful for achieving that.
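For the "validate that the blob was written" part, a custom test could poll the storage account directly. A rough sketch, assuming the Azure.Storage.Blobs package and made-up container and blob names:

using Azure.Storage.Blobs;

// After the message has been sent and the function has had time to run:
var container = new BlobContainerClient(storageConnectionString, "contacts");
BlobClient blob = container.GetBlobClient("new-contact.json");
bool written = (await blob.ExistsAsync()).Value;
// Fail the test if the function never produced the blob.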
I have a Portal application running on one port (http://localhost:10039). I am trying to unit test individual Ember.js applications, which are loaded into the Portal app via portlets.
What I'd like to be able to do is have those QUnit tests run against the full application, which is running on that other port I mentioned. However, Karma seems to not be fond of running the test suite on a port that isn't the same one on which the application is running.
For example:
test('Page loads in browser', function() {
    visit('/login').then(function() {
        ok(exists('#login-form'), 'Page loaded successfully');
    });
});
... launches Karma successfully on port 9876, but yields...
Page loaded successfully# 42 ms
Expected: true
Result: false
Diff: true false
Source:
at http://localhost:9876/absolute/Users/me/Sites/app/node_modules/qunitjs/qunit/qunit.js:1933:13
at http://localhost:9876/base/tests/unit-tests.js:9:8
at isolate (http://localhost:9876/base/bower_components/ember/ember.js:36720:15)
at http://localhost:9876/base/bower_components/ember/ember.js:36703:16
at tryCatch (http://localhost:9876/base/bower_components/ember/ember.js:45817:16)
at invokeCallback (http://localhost:9876/base/bower_components/ember/ember.js:45829:17)
Is it possible to run my test suite on, say, http://localhost:9876, and have it run its tests against another website/port http://localhost:10039?
The closest I could come to an answer was Karma proxies, though the proxy seems to have no effect. Karma is still running its tests against links relative to its own port 9876.
I would like to add that I am open to other testing frameworks (Jasmine, Mocha, etc.) if this can only be done elsewhere.
Thanks!
Karma is intended for running unit tests, so the code will be loaded in the Karma client (localhost:9876) and the test cases executed there.
If you are planning to run end-to-end tests against your portal application, you could look into alternatives like Selenium. In fact, your test above (checking for a successful page load) is a good fit for Selenium.
I have a REST API server that uses Express, Mongoose and config. I want to unit test my API: basically, bring up a temporary web server on port x and an empty Mongo database on port y, make some API calls (both GETs and PUTs), validate what I get back, and then shut down the temporary server and drop the test database once my tests finish. What is the best way to do this? I have been looking at mocha/rewire, but I am not sure how to set up the temporary server and database, or what the best practices are.
I use Jenkins (continuous integration server) and Mocha to test my app, and I found myself having the same problem as you. I configured Jenkins so it would execute this shell command:
npm install
NODE_ENV=testing node app.js &
npx mocha
pkill node
This runs the server for executing the tests, and then kills it. This also sets the NODE_ENV environment variable so I can run the server on a different port when testing, since Jenkins already uses port 8080.
Here is the code:
app.js:
...
var port = 8080;
if (process.env.NODE_ENV === "testing")
    port = 3000;
...
test.js:
var request = require('request'),
    assert = require('assert');

describe('Blabla', function() {
    describe('GET /', function() {
        it("should respond with status 200", function(done) {
            request('http://127.0.0.1:3000/', function(err, resp, body) {
                assert.equal(resp.statusCode, 200);
                done();
            });
        });
    });
});
I found exactly what I was looking for: testrest. I don't like its .txt-file-based syntax, so I adapted it to use a .json file for my specs instead.
I'd recommend giving Buster.JS a try. You can do Asynchronous tests, mocks/stubs, and fire up a server.
There is also api-easy, built on top of vows. The former seems easier to use, but the latter is much more flexible and powerful.
There is no right way, but I did create a seed application with my personal directory structure, which includes the vows tests suggested by @norman784.
You can clone it: git clone https://github.com/hboylan/express-mongoose-api-seed.git
Or with npm: npm install express-mongoose-api-seed
So, I have a GWT client, which interacts with a Python Google App Engine server. The client makes request to server resources, the server responds in JSON. It is simple, no RPC or anything like that. I am using Eclipse to develop my GWT code.
I have a GWTTestCase test that I would like to run. Unfortunately, I have no idea how to actually get the Google App Engine server running for each test. I had the bright idea below of trying to start the App Engine server from the command line, but of course this does not work, as Process and ProcessBuilder are not classes that the GWT dev kit actually contains.
package com.google.gwt.sample.quizzer.client;

import java.io.IOException;
import java.lang.ProcessBuilder;
import java.lang.Process;

import com.google.gwt.junit.client.GWTTestCase;

public class QuizzerTest extends GWTTestCase {

    private Process p;

    public String getModuleName() {
        return "com.google.gwt.sample.quizzer.Quizzer";
    }

    public void gwtSetUp() {
        ProcessBuilder pb = new ProcessBuilder("dev_appserver.py",
                                               "--clear_datastore",
                                               "--port=9000",
                                               "server_python");
        try {
            p = pb.start();
        } catch (IOException e) {
            System.out.println("Something happened when starting the app server!");
        }
    }

    public void gwtTearDown() { p.destroy(); }

    public void testSimple() {
        // NOTE: do some actual network testing from the GWT client to GAE here
        assertTrue(true);
    }
}
I get the following errors when compiling this file:
[ERROR] Line 21: No source code is available for type java.lang.Process; did you forget to inherit a required module?
[ERROR] Line 30: No source code is available for type java.lang.ProcessBuilder; did you forget to inherit a required module?
As you can see below, I basically want the following to happen for each test:
Start a datastore-empty instance of my GAE server.
Run the test across the network against this server instance.
Stop the server.
And, of course, report the result of the test back to me.
Does anyone have a good way of doing this? Partial solutions are welcome! Hacks are fine as well. Maybe some progress could be made by editing the ".launch" config file? The only important criterion is that I would like to "unit test" portions of my GWT code against my actual GAE Python server.
Thank you.
I would recommend creating an Ant target for this; take a look at this page for the full Ant build file for GWT.
Then, as the first line of the testing target, add an execution task to start the server. Look here for exec docs.
Then set up that ant task in your IDE. This way you get the server running before your tests irrespective of where you run the tests from, and it can be integrated into your build process if you want.