Play Framework: IntegrationSpec ignoring configuration provided to FakeApplication when running play test

I am using Play 2.2 with Specs2 and have the following test:
import org.specs2.mutable.Specification
import org.specs2.runner.JUnitRunner
import play.api.test.Helpers.running
import play.api.test.{FakeApplication, TestBrowser, TestServer}
import java.util.concurrent.TimeUnit
import org.openqa.selenium.firefox.FirefoxDriver
import org.fluentlenium.core.domain.{FluentList, FluentWebElement}
import org.openqa.selenium.NoSuchElementException
"Application" should {
"work from within a browser" in {
running(TestServer(port, application = FakeApplication(additionalConfiguration = Map("configParam.value" -> 2)), classOf[FirefoxDriver]) {
.....
}
}
}
configParam.value is accessed in the application as follows:
import scala.concurrent.Future
import play.api.libs.json._
import play.api.Play._
import play.api.libs.ws.Response
import play.api.libs.json.JsObject
object Configuration {
  val configParamValue = current.configuration.getString("configParam.value").get
}
When running play test, the configParam.value that gets used is the one from application.conf instead of the one passed to FakeApplication.
What am I doing wrong here?

The problem is probably with the Map passed to additionalConfiguration.
You're passing an Int but trying to read it back as a String with getString.
Try changing it to this:
running(TestServer(port, application = FakeApplication(additionalConfiguration = Map("configParam.value" -> "2"))), classOf[FirefoxDriver]) {
Notice the quotes around the 2.

Related

About GenericOptionsParser getRemainingArgs method

package com.ibm.dw61;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import com.ibm.dw61.MaxTempReducer;
import com.ibm.dw61.MaxTempMapper;
public class MaxMonthlyTemp {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] programArgs = new GenericOptionsParser(conf, args)
                .getRemainingArgs();
        if (programArgs.length != 2) {
            System.err.println("Usage: MaxTemp <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "Monthly Max Temp");
        job.setJarByClass(MaxMonthlyTemp.class);
        job.setMapperClass(MaxTempMapper.class);
        job.setCombinerClass(MaxTempReducer.class);
        job.setReducerClass(MaxTempReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(programArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(programArgs[1]));
        // Submit the job and wait for it to finish.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Questions:
1) This is MapReduce code to extract the max temperature for each month. The coder gets the non-generic options using the getRemainingArgs method, and the next lines say that if the number of non-generic options is not 2, there is an error and the program immediately aborts. I can't figure out the coder's logic here. Would anyone be kind enough to explain?
2) In another example, WordCount, the coder didn't perform this step of getting the non-generic options. So under what circumstances do we have to perform this step and test whether the non-generic options number exactly 2?
As you can see in the Hadoop API documentation, the purpose of getRemainingArgs is to extract the application-specific arguments, i.e. those that are not related to the Hadoop framework. In this code you must pass exactly two such arguments, first the input path and then the output path, as shown in the Usage message.
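To make that concrete, here is a small self-contained sketch (the jar name, the -D option, and the paths are made up) showing how GenericOptionsParser splits the command line into generic and application-specific arguments:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class ParserDemo {

    public static void main(String[] args) throws Exception {
        // Suppose the job is launched as:
        //   hadoop jar mia.jar ParserDemo -D mapreduce.job.reduces=2 /data/in /data/out
        Configuration conf = new Configuration();
        String[] remaining = new GenericOptionsParser(conf, args).getRemainingArgs();
        // The generic "-D mapreduce.job.reduces=2" option has been applied to conf,
        // so remaining now holds only {"/data/in", "/data/out"} -- length 2,
        // which is exactly what the length check in MaxMonthlyTemp guards against.
        for (String arg : remaining) {
            System.out.println(arg);
        }
    }
}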

How to write Elastic unit tests to test query building

I want to write unit tests that test the Elastic query building. I want to test that certain param values produce certain queries.
I started looking into ESTestCase, and I see that you can mock a client with it. I don't really need to mock the ES node; I just need to reproduce the query-building part, but that requires a client.
Has anybody dealt with such an issue?
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.DistanceUnit;
import org.elasticsearch.test.ESIntegTestCase;
import org.elasticsearch.test.ESTestCase;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Ignore;
import org.junit.Test;
import com.google.common.collect.Lists;
public class SearchRequestBuilderTests extends ESTestCase {

    private static Client client;

    @BeforeClass
    public static void initClient() {
        // this client will not be hit by any request, but it needs to be a non-null proper client;
        // that is why we create it but we don't add any transport address to it
        Settings settings = Settings.builder()
                .put("", createTempDir().toString())
                .build();
        client = TransportClient.builder().settings(settings).build();
    }

    @AfterClass
    public static void closeClient() {
        client.close();
        client = null;
    }

    public static Map<String, String> createSampleSearchParams() {
        Map<String, String> searchParams = new HashMap<>();
        searchParams.put(SenseneConstants.ADC_PARAM, "US");
        searchParams.put(SenseneConstants.FETCH_SIZE_QUERY_PARAM, "10");
        searchParams.put(SenseneConstants.QUERY_PARAM, "some query");
        searchParams.put(SenseneConstants.LOCATION_QUERY_PARAM, "");
        searchParams.put(SenseneConstants.RADIUS_QUERY_PARAM, "20");
        searchParams.put(SenseneConstants.DISTANCE_UNIT_PARAM, DistanceUnit.MILES.name());
        searchParams.put(SenseneConstants.GEO_DISTANCE_PARAM, "true");
        return searchParams;
    }

    @Test
    public void test() {
        BasicSearcher searcher = new BasicSearcher(client); // this is my application's searcher
        Map<String, String> searchParams = createSampleSearchParams();
        ArrayList<String> filterQueries = Lists.newArrayList();
        SearchRequest searchRequest = SearchRequest.create(searchParams, filterQueries);
        MySearchRequestBuilder medleyReqBuilder = new MySearchRequestBuilder.Builder(client, "my_index", searchRequest).build();
        SearchRequestBuilder searchRequestBuilder = medleyReqBuilder.constructSearchRequestBuilder();
        System.out.print(searchRequestBuilder.toString());
        // Here I want to assert that the search request builder output is what it should be for the above client params
    }
}
I get this, and nothing in the code runs:
Assertions mismatch: -ea was not specified but -Dtests.asserts=true
REPRODUCE WITH: mvn test -Pdev -Dtests.seed=5F09BEDD71BBD14E -Dtests.class=SearchRequestBuilderTests -Dtests.locale=en_US -Dtests.timezone=America/Los_Angeles
NOTE: test params are: codec=null, sim=null, locale=null, timezone=(null)
NOTE: Mac OS X 10.10.5 x86_64/Oracle Corporation 1.7.0_80 (64-bit)/cpus=4,threads=1,free=122894936,total=128974848
NOTE: All tests run in this JVM: [SearchRequestBuilderTests]
Obviously a bit late but...
This actually has nothing to do with the ES testing framework but rather with your run settings. Assuming you are running this in Eclipse, it is a duplicate of "Assertions mismatch: -ea was not specified but -Dtests.asserts=true".
Either enable assertions globally: Eclipse Preferences -> JUnit -> enable the "Add -ea" checkbox,
or per launch: right-click the Eclipse project -> Run As -> Run Configurations -> Arguments tab -> add the -ea option to the VM arguments.
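As for the assertion the question is really after ("assert that the search request builder output is what it should be"): once the test runs, one low-tech option is to assert on the JSON that searchRequestBuilder.toString() already prints. A rough sketch, with purely hypothetical expected fragments (what MySearchRequestBuilder actually emits for these params is not known here):

import static org.junit.Assert.assertTrue;

// ...at the end of the test method, instead of only printing the builder:
String source = searchRequestBuilder.toString();  // the search source rendered as JSON
assertTrue(source.contains("\"geo_distance\""));  // hypothetical: GEO_DISTANCE_PARAM=true should add a geo clause
assertTrue(source.contains("some query"));        // hypothetical: the query text should appear in the source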

How to force an Apache Mahout application to read directly from HDFS

I have implemented an Apache Mahout application (attached below) which does some basic computations. To do so it needs to load the dataset from my local machine. The application comes in the form of a jar file, which is then executed within a Hadoop pseudo-distributed cluster. The terminal command for that is: $ hadoop jar /home/eualin/ApacheMahout/tdunning-MiA-5b8956f/target/mia-0.1-jar-with-dependencies.jar mia.recommender.ch03.IREvaluatorBooleanPrefIntro2 "/home/eualin/Desktop/links-final"
Now, my question is how to do the same, but this time reading the dataset from HDFS (we suppose, of course, that the dataset is already stored in HDFS, e.g. in /user/eualin/output/links-final). What should change in that case? This might help: hdfs://localhost:50010/user/eualin/output/links-final
package mia.recommender.ch03;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.DataModelBuilder;
import org.apache.mahout.cf.taste.eval.IRStatistics;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.common.FastByIDMap;
import org.apache.mahout.cf.taste.impl.eval.GenericRecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.model.GenericBooleanPrefDataModel;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericBooleanPrefUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.model.PreferenceArray;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;
import java.io.File;
public class IREvaluatorBooleanPrefIntro2 {

    private IREvaluatorBooleanPrefIntro2() {
    }

    public static void main(String[] args) throws Exception {
        if (args.length != 1) {
            System.out.println("give file's HDFS path");
            System.exit(1);
        }
        DataModel model = new GenericBooleanPrefDataModel(
                GenericBooleanPrefDataModel.toDataMap(
                        new GenericBooleanPrefDataModel(new FileDataModel(new File(args[0])))));
        RecommenderIRStatsEvaluator evaluator =
                new GenericRecommenderIRStatsEvaluator();
        RecommenderBuilder recommenderBuilder = new RecommenderBuilder() {
            @Override
            public Recommender buildRecommender(DataModel model) throws TasteException {
                UserSimilarity similarity = new LogLikelihoodSimilarity(model);
                UserNeighborhood neighborhood =
                        new NearestNUserNeighborhood(10, similarity, model);
                return new GenericBooleanPrefUserBasedRecommender(model, neighborhood, similarity);
            }
        };
        DataModelBuilder modelBuilder = new DataModelBuilder() {
            @Override
            public DataModel buildDataModel(FastByIDMap<PreferenceArray> trainingData) {
                return new GenericBooleanPrefDataModel(
                        GenericBooleanPrefDataModel.toDataMap(trainingData));
            }
        };
        IRStatistics stats = evaluator.evaluate(
                recommenderBuilder, modelBuilder, model, null, 10,
                GenericRecommenderIRStatsEvaluator.CHOOSE_THRESHOLD,
                1.0);
        System.out.println(stats.getPrecision());
        System.out.println(stats.getRecall());
    }
}
You can't, directly, since the non-distributed code has no knowledge of HDFS. Instead, copy the file to a local location in setup() and then read it from a local file.
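A minimal sketch of that approach, assuming the HDFS URI is passed as args[0] and that the default Hadoop configuration can reach the NameNode (the temporary-file name is arbitrary):

import java.io.File;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// ...inside main(), before building the DataModel:
String hdfsUri = args[0];  // e.g. hdfs://<namenode>/user/eualin/output/links-final
File localCopy = File.createTempFile("links-final", ".csv");
localCopy.delete();        // keep the unique name, let Hadoop create the file itself
localCopy.deleteOnExit();

// Pull the dataset out of HDFS onto the local filesystem.
FileSystem fs = FileSystem.get(URI.create(hdfsUri), new Configuration());
fs.copyToLocalFile(new Path(hdfsUri), new Path(localCopy.getAbsolutePath()));

// The rest of the code stays the same, except that it reads the local copy:
DataModel model = new GenericBooleanPrefDataModel(
        GenericBooleanPrefDataModel.toDataMap(
                new GenericBooleanPrefDataModel(new FileDataModel(localCopy))));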

How to write isolated unit tests for routes in Play 2.0 framework?

My Play Framework application is in Scala (not Java). I found a page describing how to use the utility class play.test.Helpers for unit testing routes. The example was in Java, not Scala. I wrote the test in Scala, but I get the error "Message: This is not a JavaAction and can't be invoked this way."
Here is the page I found describing how to unit test routes in play framework 2.0: http://digitalsanctum.com/2012/05/28/play-framework-2-tutorial-testing/
...and here is the code I tried to write to test my app:
package conf
import org.scalatest._
import play.mvc.Result
import play.test.Helpers._
class routeTest extends FunSpec with ShouldMatchers {
  describe("route tests") {
    it("") {
      // routeAndCall() fails. Message: This is not a JavaAction and can't be invoked this way.
      val result = routeAndCall(fakeRequest(GET, "/"))
      result should not be (null)
    }
  }
}
Is the problem because my action is Scala and not Java? Can I unit test my routes over Scala controllers?
You should use the play.api.* imports from Scala code; play.* is the Java API. So your code should look like this:
package conf
import org.scalatest._
import org.scalatest.matchers._
import play.api._
import play.api.mvc._
import play.api.test.Helpers._
import play.api.test._
class routeTest extends FunSpec with ShouldMatchers {
  describe("route tests") {
    it("GET / should return result") {
      val result = routeAndCall(FakeRequest(GET, "/"))
      result should be ('defined)
    }
  }
}
Or, even better, using FlatSpec:
package conf
import org.scalatest._
import org.scalatest.matchers._
import play.api._
import play.api.mvc._
import play.api.test.Helpers._
import play.api.test._
class routeTest extends FlatSpec with ShouldMatchers {

  "GET /" should "return result" in {
    val result = routeAndCall(FakeRequest(GET, "/"))
    result should be ('defined)
  }

  it should "return OK" in {
    val Some(result) = routeAndCall(FakeRequest(GET, "/"))
    status(result) should be (OK)
  }
}
Also, routeAndCall doesn't return null: it returns an Option[Result], i.e. Some[Result] or None, so a null check doesn't work in this case.

Akka scheduled job questions

I have been experimenting with Play 2.0 and using Akka for a recurring scheduled job. I would like the job to run every 5 minutes. I have this really basic test and it works for the most part. Based on this test, it should create a PDF file every 5 minutes. What happens instead is that I get 4 files written every 5 minutes, and sometimes more; I am not exactly sure why. Below is my code.
package models;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.*;
import javax.persistence.*;
import play.libs.*;
import play.db.ebean.*;
import akka.util.*;
import static java.util.concurrent.TimeUnit.*;
import com.itextpdf.text.Paragraph;
import com.itextpdf.text.pdf.PdfWriter;
@Entity
public class EmailService extends Model {

    public EmailService() {
        // Run the Service every 5 minutes
        Akka.system().scheduler().schedule(
                Duration.create(0, MILLISECONDS),
                Duration.create(5, MINUTES),
                new Runnable() {
                    public void run() {
                        try {
                            // TEST
                            com.itextpdf.text.Document document = new com.itextpdf.text.Document();
                            PdfWriter.getInstance(document, new FileOutputStream(UUID.randomUUID().toString() + ".pdf"));
                            document.open();
                            document.add(new Paragraph("Hello World!"));
                            document.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
        );
    }
}
Any ideas why it runs multiple times?