How do you get the Python Google App Engine development server (dev_appserver.py) running for unit tests in GWT?

So, I have a GWT client which interacts with a Python Google App Engine server. The client makes requests to server resources, and the server responds in JSON. It is simple; no RPC or anything like that. I am using Eclipse to develop my GWT code.
I have a GWTTestCase test that I would like to run. Unfortunately, I have no idea how to actually get the App Engine server running per test. I had the bright idea below of trying to start the App Engine server from the command line, but of course this does not work, as Process and ProcessBuilder are not classes that GWT's JRE emulation actually contains.
package com.google.gwt.sample.quizzer.client;

import java.io.IOException;

import com.google.gwt.junit.client.GWTTestCase;

public class QuizzerTest extends GWTTestCase {
    private Process p;

    public String getModuleName() {
        return "com.google.gwt.sample.quizzer.Quizzer";
    }

    public void gwtSetUp() {
        // Start a fresh dev server (with an empty datastore) before each test.
        ProcessBuilder pb = new ProcessBuilder("dev_appserver.py",
                                               "--clear_datastore",
                                               "--port=9000",
                                               "server_python");
        try {
            p = pb.start();
        } catch (IOException e) {
            System.out.println("Something happened when starting the app server!");
        }
    }

    public void gwtTearDown() { p.destroy(); }

    public void testSimple() {
        // NOTE: do some actual network testing from the GWT client to GAE here
        assertTrue(true);
    }
}
I get the following errors when compiling this file:
[ERROR] Line 21: No source code is available for type java.lang.Process; did you forget to inherit a required module?
[ERROR] Line 30: No source code is available for type java.lang.ProcessBuilder; did you forget to inherit a required module?
As outlined below, I basically want each test to:
Start a datastore-empty instance of my GAE server.
Run the test across the network against this server instance.
Stop the server.
And, of course, report the result of the test back to me.
Does anyone have a good way of doing this? Partial solutions are welcome! Hacks are fine as well. Maybe some progress on this problem could be made by editing the ".launch" config file? The only important criterion is that I would like to "unit test" portions of my GWT code against my actual GAE Python server.
Thank you.

I would recommend creating an Ant target for this - take a look at this page for the full Ant build file for GWT.
Then, as the first line of the testing target, add an execution task to start the server. Look here for the exec docs.
Then set up that Ant task in your IDE. This way the server is running before your tests regardless of where you run them from, and it can be integrated into your build process if you want.
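For instance, a rough sketch of such a target (the target names here are hypothetical; the executable, flags, and server_python directory are taken from the question, and dev_appserver.py is assumed to be on your PATH):

<target name="start-server">
    <!-- spawn="true" detaches the server so the build can move on to the tests -->
    <exec executable="dev_appserver.py" spawn="true">
        <arg value="--clear_datastore"/>
        <arg value="--port=9000"/>
        <arg value="server_python"/>
    </exec>
</target>

<target name="test" depends="start-server">
    <!-- run the GWTTestCase suite here, e.g. via the junit task -->
</target>

Note that because spawn="true" detaches the process, you would still need your own way of stopping the server once the tests finish.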

Related

Browser Automation Testing with Selenium WebDriver in AWS CodeBuild

I want to automate and integrate the Selenium WebDriver tests that we developed for a website into the AWS CodeBuild environment. We want to run these tests automatically in AWS CodeBuild and then release (AWS CodeDeploy) if all is good.
For example, we wrote all of our test cases using Node. Let's assume that I have a basic test case like the one below, after running npm install selenium-webdriver, in a file called google_test.js:
const webdriver = require('selenium-webdriver'),
By = webdriver.By,
until = webdriver.until;
const driver = new webdriver.Builder()
.forBrowser('firefox')
.build();
driver.get('http://www.google.com');
driver.findElement(By.name('q')).sendKeys('webdriver');
driver.sleep(1000).then(function() {
driver.findElement(By.name('q')).sendKeys(webdriver.Key.TAB);
});
driver.findElement(By.name('btnK')).click();
driver.sleep(2000).then(function() {
driver.getTitle().then(function(title) {
if(title === 'webdriver - Google Search') {
console.log('Test passed');
} else {
console.log('Test failed');
}
driver.quit();
});
});
Then, as you would expect, we run this test on the command line:
node google_test
This works fine in a manual environment; however, our challenge is to automate this and deploy automatically if the tests were successful. I wonder how we can achieve this in the AWS CodeBuild setup. Even after doing all the research, I'm still confused about the best way to achieve this; many people suggest many different approaches, and they all look very hacky and unreliable.
Problems/Questions
Since we don't have browser access in an automated AWS CodeBuild environment, how can we actually see the output to tell whether the tests were successful?
How can we detect that the tests ran correctly in order to proceed to the next step of CodeDeploy? What signals can be generated, and how?
If this is not possible, what is the recommended way of doing this?
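One detail that bears on both questions: CodeBuild decides whether a build phase succeeded from the exit code of each command, and everything printed to stdout/stderr ends up in the build logs, so no browser access is needed to read results. As a hypothetical tweak (not from the original post), the script above could set a non-zero exit code on failure so the build, and therefore the pipeline, stops:

driver.getTitle().then(function(title) {
  if (title === 'webdriver - Google Search') {
    console.log('Test passed');
  } else {
    console.log('Test failed');
    process.exitCode = 1; // non-zero exit fails the CodeBuild command
  }
  driver.quit();
});

A test runner such as Mocha would set the exit code automatically, which is one reason runners are usually preferred over bare scripts in CI.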

Unit and Integration Test for Azure Function with ServiceBusTrigger

I have an Azure Function which is triggered by an Azure Service Bus Queue.
The function is below.
How can this Run method be unit tested?
And how can an integration test be done, starting with the AddContact trigger, checking the logic in the method and the data being sent to a blob via the output binding?
public static class AddContactFunction
{
[FunctionName("AddContactFunction")]
public static void Run([ServiceBusTrigger("AddContact", Connection = "AddContactFunctionConnectionString")]string myQueueItem, ILogger log)
{
log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
}
I had the exact same doubts.
Adding unit tests is not too complicated; at the end of the day it's a function, so all we have to do is call the Azure Function with the correct string for the parameter string myQueueItem.
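As a minimal sketch of that idea (xUnit and the sample message body are my own choices, not from the original answer):

using Microsoft.Extensions.Logging.Abstractions;
using Xunit;

public class AddContactFunctionTests
{
    [Fact]
    public void Run_ProcessesAQueueMessage()
    {
        // NullLogger.Instance satisfies the ILogger parameter without real logging;
        // substitute a fake ILogger to assert on what gets logged.
        AddContactFunction.Run("{ \"name\": \"Jane Doe\" }", NullLogger.Instance);
    }
}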
Adding integration tests needs some additional groundwork. In the GitHub project, the author uses the TestFunctionHost class from the Azure/azure-functions-host project.
I tried following this strategy, but the amount of code needed to set all this up is uncomfortably high for my liking. Not a lot of it is well documented, and some of it requires developers to use the Azure App Service MyGet feed.
I wanted a simpler approach, and thankfully I found one.
Azure Functions is built on top of the Azure WebJobs SDK package and leverages its JobHost class to run. So in our integration tests, all we need to do is set up this host and tell it where to look for the Azure Functions to load and run.
IHost host = new HostBuilder()
    .ConfigureWebJobs()
    // CLASS_CONTAINING_THE_AZURE_FUNCTIONS is a placeholder for your own functions class
    .ConfigureDefaultTestHost<CLASS_CONTAINING_THE_AZURE_FUNCTIONS>(webjobsBuilder => {
        webjobsBuilder.AddAzureStorage();
        webjobsBuilder.AddServiceBus();
    })
    .ConfigureServices(services => {
        // 'resolver' is an INameResolver you construct beforehand, used to resolve
        // %placeholder% tokens (e.g. queue names) in the trigger attributes
        services.AddSingleton<INameResolver>(resolver);
    })
    .Build();

using (host) {
    await host.StartAsync();
    // ..
}
...
Once this is done, we can send messages to Service Bus and our Azure Functions will get triggered. One can even set breakpoints in the functions under test and debug issues!
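For instance, a message could be produced from the test with any Service Bus client; here is a sketch using the Azure.Messaging.ServiceBus package (the connection string variable is hypothetical):

using Azure.Messaging.ServiceBus;

// Inside an async test method: send a message to the queue the function listens on.
await using var client = new ServiceBusClient(serviceBusConnectionString);
ServiceBusSender sender = client.CreateSender("AddContact");
await sender.SendMessageAsync(new ServiceBusMessage("{ \"name\": \"Jane Doe\" }"));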
I have blogged about the whole process here, and I have also created a GitHub repository at this link to showcase test-driven development with Azure Functions.
How can this Run method be unit tested?
The method is a public static method. You can unit test it by invoking the static method AddContactFunction.Run(/* parameters */). You will not need a Service Bus namespace, or a message for that matter, as your function expects to receive a string from the SDK, which you can provide directly and verify that the logic works as expected.
And how can an integration test be done, starting with the AddContact trigger, checking the logic in the method and the data being sent to a blob via the output binding?
This would be a much more sophisticated scenario. It would require running the Functions runtime and generating a real Service Bus message to trigger the function, as well as validating that the blob was written. There's no integration/end-to-end testing framework shipped with Functions, so you'd need to come up with something custom. Azure Functions Core Tools could be helpful for achieving that.

Couldn't get the ApplicationContext

I am writing Vaadin TestBench (5.0.2) UI test cases for a Spring Boot application, and I am using @RunWith(JUnit4.class), because @RunWith(SpringJUnit4ClassRunner.class) loads things behind the scenes and requires the Spring application to be running beforehand. But I want to test against an environment that is already up, so that one can just run the test case and get the ApplicationContext from the configuration without running the project. Is that possible?
I have tried many things like @DirtiesContext, @SpringRunner, etc. But with @RunWith(JUnit4.class) no annotations work at the class level, so I wasn't able to get the ApplicationContext.
@WebAppConfiguration
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = UIConfiguration.class)
@TestPropertySource(locations = "classpath:testData/testdata.properties")
public abstract class BaseTestCase extends TestBenchTestCase {
    // some basic configuration for loading drivers
}
I need a configuration for a Vaadin-based Spring Boot project which can provide me the ApplicationContext without running anything behind the scenes, as @RunWith(SpringJUnit4ClassRunner.class) loads the entire thing.
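One possible direction (a sketch of mine, not a confirmed solution, assuming UIConfiguration is an ordinary @Configuration class that does not depend on Spring Boot auto-configuration): build the context by hand, so no test runner is involved at all:

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public abstract class BaseTestCase extends TestBenchTestCase {
    protected static AnnotationConfigApplicationContext context;

    @BeforeClass
    public static void initContext() {
        // Instantiates only the beans declared in UIConfiguration;
        // nothing else is started behind the scenes.
        context = new AnnotationConfigApplicationContext(UIConfiguration.class);
    }

    @AfterClass
    public static void closeContext() {
        context.close();
    }
}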

Tell Travis to skip a test, but continue to include it in my main test suite?

I am using Django 1.8 and I have a management command that geocodes some items in my database, which requires an internet connection.
I have written a test for this management command. However, the test runs the script, so it also requires an internet connection.
After pushing the test to GitHub, my CI is broken, because Travis doesn't have an outside internet connection so it fails on this test.
I want to keep this test, and I'd like to continue to include it in python manage.py test when run locally.
However, is there a way I can explicitly tell Travis not to bother with this particular test?
Alternatively, is there some other clean way that I can keep this test as part of my main test suite, but stop it breaking Travis?
Maybe you could decorate your test with @unittest.skipIf(condition, reason), testing for the presence of a Travis-CI-specific environment variable to decide whether to skip it. For example:
import os
...
@unittest.skipIf("TRAVIS" in os.environ and os.environ["TRAVIS"] == "true", "Skipping this test on Travis CI.")
def test_example(self):
...
If the external resource is an HTTP endpoint, you should consider using vcrpy to record and replay the HTTP requests/responses.
This way you can continue running the same test suite in different environments. It'll also speed this test up.
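A small sketch of that approach (the cassette path and the geocode_items command name are invented for illustration):

import vcr
from django.core.management import call_command
from django.test import TestCase

class GeocodeCommandTest(TestCase):
    @vcr.use_cassette('fixtures/cassettes/geocode.yaml')
    def test_geocode_command(self):
        # The first local run records real HTTP traffic into the cassette;
        # later runs (including on Travis) replay it with no network access.
        call_command('geocode_items')

Committing the recorded cassette alongside the test lets CI replay it offline.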

Embedded ZooKeeper for unit/integration tests

Is there an embedded ZooKeeper that we could use in unit testing? It should ship with the test and run out of the box. Maybe we could mock some service and register it with the embedded ZooKeeper.
The Curator framework has TestingServer and TestingCluster classes (see https://github.com/Netflix/curator/wiki/Utilities) that are in a separate maven artifact (curator-test - see the Maven/Artifacts section of https://github.com/Netflix/curator/wiki).
They're pretty self-explanatory, or you can download the Curator code base and see how they're used internally in its own test cases.
We've used both successfully within unit tests at $DAY_JOB.
You could use the in-process ZooKeeper server TestingServer provided by Apache Curator's testing utilities.
With Maven you can add the dependency as follows:
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-test</artifactId>
<version>3.2.1</version>
</dependency>
And you can create an in-process ZooKeeper server as follows:
TestingServer zkServer;

@Before
public void setUp() throws Exception {
    zkServer = new TestingServer(2181, true);
}

@After
public void tearDown() throws Exception {
    zkServer.stop();
}
For testing a cluster, you can use TestingCluster, which creates an internally running ensemble of ZooKeeper servers.
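A brief sketch (the ensemble size is arbitrary):

// Spin up a three-node in-process ensemble for the test
TestingCluster cluster = new TestingCluster(3);
cluster.start();

// Clients connect using the ensemble's connection string
String connectString = cluster.getConnectString();

// ... run assertions against the ensemble ...

// Shut down all instances when done
cluster.close();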
You could use the zookeeper-maven-plugin, which is documented here.
The ZooKeeper project produces a "fat jar" that it uses itself for system tests.
There is a written-up README showing how easy it is to launch, but unfortunately it is not published as an artifact, so it cannot be pulled in as a Maven dependency.